A method combining pseudo-labeling and model compression to optimize self-training in object identification tasks
Abstract
This paper examines a combined approach to optimizing the self-training of object identification models, integrating a pseudo-labeling mechanism with model compression through knowledge distillation. The main objective of the proposed method is to improve object classification accuracy when labeled data is limited while simultaneously reducing the computational cost of training deep neural networks. The methodology automatically adds unlabeled samples to the training set when the teacher model assigns them high confidence, and then uses these samples to train a more compact student model. Training employs soft probabilistic labels, which improves the transfer of generalized knowledge from teacher to student. Experimental results on the CIFAR-100 and ImageNet datasets show that the proposed method outperforms traditional self-training techniques on key metrics, namely accuracy, precision, recall, and F1-score, and achieves greater training stability and efficiency. The findings confirm the potential of the combined approach for scalable deployment in real-world computer vision tasks, especially when labeled data is scarce or when models must run on resource-constrained devices. An additional advantage is a reduced rate of incorrect pseudo-labels, achieved by applying a confidence threshold. The proposed training framework also improves generalization to unseen samples. These results can inform future research on training methods that use partially or entirely unlabeled data.
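To make the described pipeline concrete, the sketch below illustrates one self-training step of the kind the abstract outlines: the teacher scores an unlabeled batch, only predictions above a confidence threshold are accepted as pseudo-labels, and the student is trained on both the hard pseudo-labels and the teacher's temperature-softened distribution. This is a minimal PyTorch illustration; the threshold, temperature, loss weighting, and function names are illustrative assumptions, not the settings or code reported in the paper.

    # Minimal sketch: confidence-thresholded pseudo-labeling combined with
    # knowledge distillation. CONF_THRESHOLD, TEMPERATURE, and ALPHA are
    # assumed values for illustration, not the paper's reported settings.
    import torch
    import torch.nn.functional as F

    CONF_THRESHOLD = 0.9   # assumed cut-off for accepting a pseudo-label
    TEMPERATURE = 4.0      # assumed softening temperature for soft labels
    ALPHA = 0.5            # assumed weight between hard and soft loss terms

    def self_training_step(teacher, student, unlabeled_batch, optimizer):
        """One self-training step of the student on an unlabeled batch."""
        teacher.eval()
        with torch.no_grad():
            teacher_logits = teacher(unlabeled_batch)
            probs = F.softmax(teacher_logits, dim=1)
            confidence, pseudo_labels = probs.max(dim=1)

        # Keep only samples the teacher is confident about; this filtering
        # step is what reduces the rate of incorrect pseudo-labels.
        mask = confidence >= CONF_THRESHOLD
        if not mask.any():
            return None  # nothing in this batch passed the threshold

        student_logits = student(unlabeled_batch[mask])

        # Hard-label term on the accepted pseudo-labels.
        hard_loss = F.cross_entropy(student_logits, pseudo_labels[mask])

        # Soft-label term: match the teacher's tempered distribution so the
        # compact student inherits the teacher's generalized knowledge.
        soft_loss = F.kl_div(
            F.log_softmax(student_logits / TEMPERATURE, dim=1),
            F.softmax(teacher_logits[mask] / TEMPERATURE, dim=1),
            reduction="batchmean",
        ) * TEMPERATURE ** 2

        loss = ALPHA * hard_loss + (1 - ALPHA) * soft_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

In a full training loop, this step would run over the unlabeled pool after the teacher has been fit on the labeled set; scaling the KL term by TEMPERATURE squared is the standard correction that keeps the soft-label gradients comparable in magnitude to the hard-label gradients.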