Autonomous Face Classification Online Self-Training System Using Pretrained ResNet50 and Multinomial Naïve Bayes

cris.lastimport.scopus2024-09-19T01:31:13Z
dc.abstract.enThis paper presents a novel, autonomous learning system for real-time face recognition. Multiple convolutional neural networks for face recognition tasks are available; however, these networks need training data and a relatively long training process, as the training speed depends on hardware characteristics. Pretrained convolutional neural networks can be useful for encoding face images once their classifier layers are removed. The presented system uses a pretrained ResNet50 model to encode face images from a camera and a Multinomial Naïve Bayes classifier for autonomous training in the real-time classification of persons. The faces of several persons visible in the camera are tracked by dedicated cognitive tracking agents, each of which manages its own machine learning models. When a face appears in a new position in the frame (in a place where there was no face in the previous frames), the system checks whether it is novel using a novelty detection algorithm based on an SVM classifier; if the face is unknown, the system automatically starts training. The conducted experiments show that, under good conditions, the system correctly learns the face of a new person who appears in the frame. Our research indicates that the critical element of this system is the novelty detection algorithm: if novelty detection fails, the system can assign two or more different identities to the same person or classify a new person into one of the existing groups.
dc.affiliationTransportu i Informatyki
dc.contributor.authorŁukasz Maciura
dc.contributor.authorTomasz Cieplak
dc.contributor.authorDamian Pliszczuk
dc.contributor.authorMichał Maj
dc.contributor.authorTomasz Rymarczyk
dc.date.accessioned2024-04-12T08:24:15Z
dc.date.available2024-04-12T08:24:15Z
dc.date.issued2023
dc.description.abstract<jats:p>This paper presents a novel, autonomous learning system for real-time face recognition. Multiple convolutional neural networks for face recognition tasks are available; however, these networks need training data and a relatively long training process, as the training speed depends on hardware characteristics. Pretrained convolutional neural networks can be useful for encoding face images once their classifier layers are removed. The presented system uses a pretrained ResNet50 model to encode face images from a camera and a Multinomial Naïve Bayes classifier for autonomous training in the real-time classification of persons. The faces of several persons visible in the camera are tracked by dedicated cognitive tracking agents, each of which manages its own machine learning models. When a face appears in a new position in the frame (in a place where there was no face in the previous frames), the system checks whether it is novel using a novelty detection algorithm based on an SVM classifier; if the face is unknown, the system automatically starts training. The conducted experiments show that, under good conditions, the system correctly learns the face of a new person who appears in the frame. Our research indicates that the critical element of this system is the novelty detection algorithm: if novelty detection fails, the system can assign two or more different identities to the same person or classify a new person into one of the existing groups.</jats:p>
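The pipeline the abstract describes — pretrained-CNN embeddings, an incrementally trained Multinomial Naïve Bayes classifier, and an SVM-based novelty check gating self-training — can be sketched with scikit-learn. This is a minimal illustration, not the paper's implementation: random non-negative vectors stand in for ResNet50 embeddings, and a one-class SVM stands in for the paper's SVM-based novelty detector, whose exact formulation is not given here.

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Stand-in for ResNet50 face embeddings (non-negative, as MultinomialNB
# requires): each "person" is a tight cluster of 2048-d feature vectors.
def embeddings(center, n=20, dim=2048):
    return np.clip(center + rng.normal(0.0, 0.05, (n, dim)), 0.0, None)

person_a = embeddings(rng.uniform(0, 1, 2048))
person_b = embeddings(rng.uniform(0, 1, 2048))

# Online classifier: partial_fit lets identities be added incrementally,
# which is what enables real-time self-training.
clf = MultinomialNB()
clf.partial_fit(person_a, np.zeros(len(person_a)), classes=[0, 1])
clf.partial_fit(person_b, np.ones(len(person_b)))

# Novelty detector fitted on all known faces; a face scored as an outlier
# (-1) would trigger training of a new identity instead of classification.
novelty = OneClassSVM(gamma="scale", nu=0.1).fit(np.vstack([person_a, person_b]))

# A face from an unseen person: route it by the novelty decision.
new_face = embeddings(rng.uniform(0, 1, 2048), n=1)
is_novel = novelty.predict(new_face)[0] == -1
label = None if is_novel else int(clf.predict(new_face)[0])
```

As the abstract notes, the novelty decision is the critical step: if `is_novel` fires falsely, the same person collects multiple labels, and if it fails to fire, a new person is absorbed into an existing class.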
dc.identifier.doi10.3390/s23125554
dc.identifier.issn1424-8220
dc.identifier.urihttps://repo.akademiawsei.eu/handle/item/186
dc.pbn.affiliationinformation and communication technology
dc.relation.ispartofSensors
dc.rightsCC-BY
dc.subject.enface recognition
dc.subject.enautonomous systems
dc.subject.enonline learning
dc.subject.enMultinomial Naïve Bayes classifier
dc.subject.enconvolutional neural networks
dc.titleAutonomous Face Classification Online Self-Training System Using Pretrained ResNet50 and Multinomial Naïve Bayes
dc.typeReviewArticle
dspace.entity.typePublication
oaire.citation.issue12
oaire.citation.volume23