Predictions of Criminal Tendency Through Facial Expression Using Convolutional Neural Network

  • Gabriel Gregory James, Topfaith University, Nigeria
  • Peace Chiamaka Okafor, Bayelsa State Command, Nigeria
  • Emenike Gabriel Chukwu, Federal University of Technology, Nigeria
  • Nseobong Archibong Michael, Ritman University, Nigeria
  • Oscar Aloysius Ebong, Uyo, Akwa Ibom State, Nigeria
Keywords: Criminal Tendency, Facial Expression, Convolutional Neural Network, Machine Learning.

Abstract

Criminal intent is a critical aspect of human interaction in the 21st-century digital age, where insecurity has become a major global threat. Kidnapping, killings, molestation, gender-based violence, terrorism, and banditry are the prevailing forms of criminality in our nation, so there is a need to explore innovative means of identifying and curbing this menace in our society. A person's facial configuration can betray hostile intent even when the person feigns a smile. Under normal circumstances it is very difficult to read a person's mind, but with current information technology such as image processing, the state of a human face can serve as a cue to such tendencies. This paper proposes a deep learning model, built on the FER2013 dataset, that implements a convolutional neural network (CNN) to predict criminal tendency from facial expressions. With this goal in mind, we apply image processing to infer criminal tendency from facial images through a CNN deep learning algorithm in order to discriminate between criminal and non-criminal facial images. The CNN learned most consistently with 8 convolutional layers, reaching its best test accuracy of 90.6%. To increase the accuracy of this model, several configurations were explored using Random Search from the Keras Tuner library, testing various numbers of convolutional layers with the Adam optimizer. Dissection and visualization of the CNN's convolutional layers also revealed that the shape of the face, eyebrows, eyeballs, pupils, nostrils, and lips are the features the network exploits to classify the images.
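To make the tuning procedure described in the abstract concrete, the sketch below shows how a CNN hypermodel over FER2013-style 48x48 grayscale face images could be tuned with Random Search from the Keras Tuner library, varying the number of convolutional layers and compiling with the Adam optimizer. This is a minimal illustration rather than the authors' code: the filter counts, search ranges, dense head, and the binary criminal/non-criminal output layer are assumptions made for the example.

```python
# Minimal sketch, not the authors' implementation: a tunable CNN over
# FER2013-style 48x48 grayscale faces, searched with Keras Tuner RandomSearch.
# Filter counts, search ranges, and the binary criminal/non-criminal head are
# illustrative assumptions.
import keras_tuner as kt
import tensorflow as tf
from tensorflow.keras import layers, models


def build_model(hp):
    model = models.Sequential()
    model.add(layers.Input(shape=(48, 48, 1)))          # FER2013 image size
    # Tune the depth; the abstract reports the best model used 8 conv layers.
    for i in range(hp.Int("conv_layers", min_value=4, max_value=8)):
        model.add(layers.Conv2D(hp.Choice(f"filters_{i}", [32, 64, 128]),
                                kernel_size=3, padding="same",
                                activation="relu"))
        if i % 2 == 1:                                   # downsample every 2nd block
            model.add(layers.MaxPooling2D())
    model.add(layers.Flatten())
    model.add(layers.Dense(128, activation="relu"))
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(1, activation="sigmoid"))     # criminal vs non-criminal
    model.compile(
        optimizer=tf.keras.optimizers.Adam(
            learning_rate=hp.Choice("learning_rate", [1e-2, 1e-3, 1e-4])),
        loss="binary_crossentropy",
        metrics=["accuracy"])
    return model


tuner = kt.RandomSearch(build_model, objective="val_accuracy",
                        max_trials=10, directory="tuning",
                        project_name="criminal_tendency_cnn")
# tuner.search(x_train, y_train, validation_data=(x_val, y_val), epochs=30)
```

After the search, `tuner.get_best_models(num_models=1)[0]` returns the highest-scoring configuration, whose intermediate activations could then be plotted layer by layer to reproduce the kind of feature visualization (face shape, eyebrows, eyes, nostrils, lips) mentioned in the abstract.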

Published
2024-03-23
How to Cite
James, G., Okafor, P., Chukwu, E., Michael, N., & Ebong, O. (2024). Predictions of Criminal Tendency Through Facial Expression Using Convolutional Neural Network. Journal of Information Systems and Informatics, 6(1), 13-29. https://doi.org/10.51519/journalisi.v6i1.635