Edge Implementation of Unsupervised Self-evolving Vision Classifier

Aghasanli, Agil and Angelov, Plamen (2024) Edge Implementation of Unsupervised Self-evolving Vision Classifier. In: IEEE International Conference on Evolving and Adaptive Intelligent Systems (EAIS). IEEE Conference on Evolving and Adaptive Intelligent Systems. Institute of Electrical and Electronics Engineers Inc., ESP. ISBN 9798350366235

Full text not available from this repository.

Abstract

This paper details the implementation of a recently introduced method (called IDEAL) for an unsupervised self-evolving vision classifier within the latent feature space defined by a large Vision Transformer (ViT-L/14) based DinoV2 model pre-trained on the large-scale LVD-142M data set. Within the IDEAL concept, the pre-trained DinoV2 (PT-DinoV2) is frozen (its parameters are not changed further) and thus reduces to a very large but purely arithmetic transformation. IDEAL leverages the 1024-dimensional final fully connected layer of the PT-DinoV2 as a feature extractor, defining the latent feature space referred to further as the foundation feature (FF) space. In its initialization phase, IDEAL applies mini-batch k-means clustering within the FF space to images taken by a micro-camera mounted on an NVIDIA Jetson Nano development board. It then identifies prototypes, which play a critical role in interpreting the classifier's decisions through evaluation of the similarity between a query image and the prototypes. In its self-evolving phase, the proposed implementation also demonstrates the ability to adapt to new data/images by creating new prototypes through a simple 'greedy' clustering. Furthermore, it demonstrates the ability to detect data/images that are significantly different from those already presented (open-set classification or anomaly detection), resulting in an unknown or 'I do not know' type of output (an inability to associate such new data/images with any of the existing prototypes). The proposed implementation also demonstrates the ability to apply the IDEAL method in a federated learning scenario, in which aggregated data (prototypes and statistics of the data associated with the prototypes) are passed to another edge device (Jetson Nano), which is then able to continue to classify images correctly. By demonstrating the method's feasibility in practical situations with constrained resources, this implementation substantially decreases the computational and communication overhead, thus providing a scalable and resource-efficient solution for distributed machine learning applications.
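The following is a minimal, illustrative sketch (not the paper's code) of the two ingredients named in the abstract for the initialization phase: a frozen PT-DinoV2 ViT-L/14 backbone used purely as a feature extractor into the 1024-dimensional FF space, and mini-batch k-means over those features. The torch.hub entry point, preprocessing values, and cluster count are assumptions.

```python
# Sketch, assuming the public DinoV2 hub release and standard ImageNet preprocessing;
# the paper's exact pipeline and cluster count are not specified here.
import torch
import torchvision.transforms as T
from sklearn.cluster import MiniBatchKMeans

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen pre-trained DinoV2 (PT-DinoV2): parameters are never updated.
backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vitl14").to(device).eval()
for p in backbone.parameters():
    p.requires_grad_(False)

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def foundation_features(images):
    """Map a list of PIL images into the 1024-D foundation feature (FF) space."""
    x = torch.stack([preprocess(img) for img in images]).to(device)
    return backbone(x).cpu().numpy()           # shape: (batch, 1024)

# Initialization phase: mini-batch k-means over FF vectors of the initial images.
# n_clusters = 10 is a hypothetical choice for illustration only.
kmeans = MiniBatchKMeans(n_clusters=10, batch_size=64)
# for images in camera_batches:                # images captured on the Jetson Nano
#     kmeans.partial_fit(foundation_features(images))
# prototypes = kmeans.cluster_centers_         # initial prototypes in FF space
```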
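The self-evolving phase described in the abstract (nearest-prototype decisions, an 'I do not know' outcome for open-set inputs, and greedy creation of new prototypes) could be sketched as below. The cosine-similarity rule, the fixed acceptance threshold, and the running-mean prototype update are illustrative assumptions rather than the paper's exact formulation.

```python
# Sketch, assuming cosine similarity in the FF space and a fixed threshold;
# IDEAL's actual similarity measure and evolution rule may differ.
import numpy as np

class EvolvingPrototypeClassifier:
    def __init__(self, prototypes, labels, threshold=0.7):
        self.prototypes = [p / np.linalg.norm(p) for p in prototypes]
        self.labels = list(labels)            # identifiers of the prototypes
        self.counts = [1] * len(self.labels)  # per-prototype support statistics
        self.threshold = threshold            # minimum similarity to accept a match

    def classify(self, feature):
        """Return (prototype index, similarity); index is None for 'I do not know'."""
        f = feature / np.linalg.norm(feature)
        sims = np.array([f @ p for p in self.prototypes])
        best = int(np.argmax(sims))
        if sims[best] < self.threshold:
            return None, float(sims[best])    # open-set / anomaly outcome
        return best, float(sims[best])

    def evolve(self, feature, label=None):
        """Greedy clustering: absorb the sample or spawn a new prototype."""
        idx, sim = self.classify(feature)
        f = feature / np.linalg.norm(feature)
        if idx is None:
            # Data significantly different from existing prototypes: new prototype.
            self.prototypes.append(f)
            self.labels.append(label if label is not None else f"proto_{len(self.labels)}")
            self.counts.append(1)
        else:
            # Update the matched prototype as a running mean, then re-normalize.
            self.counts[idx] += 1
            self.prototypes[idx] += (f - self.prototypes[idx]) / self.counts[idx]
            self.prototypes[idx] /= np.linalg.norm(self.prototypes[idx])
        return idx, sim
```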
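For the federated learning scenario in the abstract, only the aggregated data (prototypes and the statistics associated with them) need to be transferred between edge devices, not the raw images. A minimal sketch of such an exchange, continuing the hypothetical classifier above, is shown below; the file-based transfer format is an assumption, not the paper's protocol.

```python
# Sketch of prototype/statistics exchange between two Jetson Nano devices,
# assuming the EvolvingPrototypeClassifier from the previous sketch.
import numpy as np

def export_model(classifier, path="ideal_prototypes.npz"):
    """Serialize prototypes, labels, and support counts for transfer."""
    np.savez(path,
             prototypes=np.stack(classifier.prototypes),
             labels=np.array([str(l) for l in classifier.labels]),
             counts=np.array(classifier.counts))

def import_model(path="ideal_prototypes.npz"):
    """Rebuild the classifier on the receiving device from the aggregated data."""
    data = np.load(path)
    clf = EvolvingPrototypeClassifier(list(data["prototypes"]), list(data["labels"]))
    clf.counts = list(data["counts"])
    return clf
```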

Item Type:
Contribution in Book/Report/Proceedings
Additional Information:
Publisher Copyright: © 2024 IEEE.
Uncontrolled Keywords:
/dk/atira/pure/subjectarea/asjc/1700/1702
Subjects:
federated learning, fully unsupervised learning, interpretability, NVIDIA Jetson Nano, self-evolving, artificial intelligence, computer science applications
ID Code:
233320
Deposited By:
Deposited On:
27 Oct 2025 14:45
Refereed?:
Yes
Published?:
Published
Last Modified:
27 Oct 2025 23:20