EfficientNet B0 Feature Extraction with L2-SVM Classification for Robust Facial Expression Recognition
Abstract
Facial expression recognition (FER) remains a challenging task due to the subtle visual variations between emotional categories and the constraints of small, controlled datasets. Traditional deep learning approaches often require extensive training, large-scale datasets, and data augmentation to achieve robust generalization. To overcome these limitations, this paper proposes a hybrid FER framework that combines EfficientNet B0 as a deep feature extractor with an L2-regularized Support Vector Machine (L2-SVM) classifier. The model is designed to operate effectively on limited data without end-to-end fine-tuning or augmentation, offering a lightweight and efficient solution for resource-constrained environments. Experimental results on the JAFFE and CK+ benchmark datasets demonstrate the proposed method’s strong performance, achieving up to 100% accuracy across various hold-out splits (90:10, 80:20, and 70:30) and 99.8% accuracy under 5-fold cross-validation. Evaluation metrics including precision, recall, and F1-score consistently exceeded 95% across all emotion classes. Confusion matrix analysis revealed perfect classification of high-intensity emotions such as Happiness and Surprise, while minor misclassifications occurred in more ambiguous expressions such as Fear and Sadness. These results validate the model’s generalization ability, efficiency, and suitability for real-time FER tasks. Future work will extend the framework to in-the-wild datasets and incorporate model explainability techniques to improve interpretability in practical deployment.
Keywords: Facial Expression Recognition, EfficientNet, SVM, Deep Features, Emotion Classification
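The abstract's pipeline — a frozen deep feature extractor feeding an L2-regularized SVM evaluated with hold-out splits and 5-fold cross-validation — can be sketched as below. This is a minimal illustration, not the paper's implementation: the EfficientNet B0 stage is replaced with synthetic 1280-dimensional vectors (the dimensionality of EfficientNet B0's global-average-pooled output), and scikit-learn's `LinearSVC` stands in for the L2-SVM, since its defaults (squared-hinge loss, L2 penalty) match that formulation. The class counts and noise scale are arbitrary choices for the sketch.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Stand-in for EfficientNet B0 global-average-pooled features (1280-D).
# In the real pipeline these would come from the frozen backbone applied
# to face images; here each emotion class is a Gaussian cluster.
n_per_class, n_classes, dim = 30, 7, 1280
centers = rng.normal(0.0, 1.0, (n_classes, dim))
X = np.vstack([c + 0.3 * rng.normal(0.0, 1.0, (n_per_class, dim))
               for c in centers])
y = np.repeat(np.arange(n_classes), n_per_class)

# L2-SVM stage: LinearSVC defaults to squared-hinge loss with an L2
# penalty, i.e. the L2-regularized linear SVM described in the abstract.
clf = make_pipeline(
    StandardScaler(),
    LinearSVC(C=1.0, loss="squared_hinge", penalty="l2"),
)

# 5-fold cross-validation, mirroring the evaluation protocol.
scores = cross_val_score(clf, X, y, cv=5)
print(f"5-fold accuracy: {scores.mean():.3f}")
```

Because the classifier is linear and the backbone is frozen, training reduces to fitting one convex model on fixed feature vectors, which is why the approach needs neither augmentation nor end-to-end fine-tuning.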


Copyright (c) 2025 Journal of Information Systems and Informatics

This work is licensed under a Creative Commons Attribution 4.0 International License.