Method for constructing neural network tools for recognizing the emotional tone of texts on universal computing devices
Abstract
This paper proposes a method for constructing neural-network tools that recognize the emotional tone of Ukrainian-language texts on universal computing platforms (CPUs, mobile devices, GPUs). The method standardizes the stages of input encoding and of decoding neural-network outputs into probabilistic estimates, and formalizes an algorithm for selecting an architecture from resource constraints and accuracy requirements. Three branches are provided: lightweight models for tight computational budgets, sequential recurrent models for moderate resources, and Transformer-based models for quality-first scenarios. A unified pipeline for processing text fragments is presented: tokenization/vectorization, neural transformation, normalization, and selection of the dominant emotion. Experimental evaluation of two prototypes confirms the quality–efficiency trade-off: RoBERTa outperforms FastText in accuracy but requires more parameters and exhibits higher inference latency (milliseconds versus microseconds). The method targets Ukrainian and mixed social-media content and supports optimizations for resource-constrained platforms. Its practical value lies in a guided architecture choice and a reproducible development process that together balance accuracy, speed, and energy consumption. Future directions include ensembles and hybrids, handling sarcasm, and expanding the set of recognized emotions. The results show that a systematized approach enables adaptive solutions for real-time mood monitoring and integration into mobile and embedded systems. The proposed methodology is compatible with existing corpora and metrics, and with standards of reproducibility and open science.
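To make the architecture-selection step concrete, the sketch below shows one way the three-branch decision could be expressed in code. The Requirements fields and the numeric thresholds are illustrative assumptions, not values taken from the paper; the method itself only prescribes that the choice follow from resource constraints and accuracy requirements.

```python
from dataclasses import dataclass

@dataclass
class Requirements:
    memory_mb: float      # available memory budget for the model (assumed unit)
    latency_ms: float     # maximum tolerable per-fragment inference latency
    min_accuracy: float   # required classification accuracy, in [0, 1]

def select_architecture(req: Requirements) -> str:
    """Map resource constraints and accuracy needs to one of the three branches.

    The thresholds below are placeholders chosen for illustration only.
    """
    if req.memory_mb < 50 or req.latency_ms < 1.0:
        return "lightweight (FastText-style)"
    if req.min_accuracy >= 0.90 and req.memory_mb >= 500:
        return "Transformer-based (RoBERTa-style)"
    return "sequential recurrent (LSTM/GRU)"

# Example: a tight budget on a mobile device falls into the lightweight branch.
print(select_architecture(Requirements(memory_mb=32, latency_ms=0.5, min_accuracy=0.80)))
```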
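The unified processing pipeline (tokenization/vectorization, neural transformation, normalization into probabilistic estimates, selection of the dominant emotion) can likewise be illustrated with a minimal, self-contained sketch. The emotion label set, the toy vocabulary, and the random linear layer standing in for a trained network are all placeholder assumptions, not the authors' implementation.

```python
import numpy as np

EMOTIONS = ["joy", "sadness", "anger", "fear", "neutral"]  # illustrative label set
VOCAB = {"добрий": 0, "день": 1, "дуже": 2, "сумно": 3}    # toy vocabulary

def vectorize(text: str) -> np.ndarray:
    """Tokenization/vectorization: a bag-of-words stand-in for the encoder."""
    vec = np.zeros(len(VOCAB))
    for token in text.lower().split():
        if token in VOCAB:
            vec[VOCAB[token]] += 1.0
    return vec

def softmax(z: np.ndarray) -> np.ndarray:
    """Normalization of raw scores into probabilistic estimates."""
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
W = rng.normal(size=(len(EMOTIONS), len(VOCAB)))  # stand-in for trained weights

def classify(text: str) -> str:
    x = vectorize(text)                   # 1. tokenization/vectorization
    logits = W @ x                        # 2. neural transformation (placeholder)
    probs = softmax(logits)               # 3. normalization
    return EMOTIONS[int(probs.argmax())]  # 4. dominant-emotion selection

print(classify("дуже сумно"))  # prints one of the placeholder labels
```

In a real deployment, vectorize and the linear layer would be replaced by the encoder and classifier of whichever branch the selection step chose, while the softmax normalization and argmax decoding stay the same across branches.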