Mehboob, Fozia and Rauf, Abdul and Jiang, Richard and Saudagar, A.K.J. and Malik, Khalid Mahmood and Khan, Muhammad Badruddin and Hasnat, Mozaherul Hoque Abdul and AlTameem, Abdullah and AlKhathami, Mohammed (2022) Towards Robust Diagnosis of COVID-19 using Vision Self-attention Transformer. Scientific Reports, 12: 8922. ISSN 2045-2322
Available under License Creative Commons Attribution.
Abstract
The outbreak of COVID-19 has, since its appearance, affected about 200 countries and endangered millions of lives. COVID-19 is an extremely contagious disease, and it can quickly overwhelm healthcare systems if infected cases are not handled in a timely manner. Several Convolutional Neural Network (CNN) based techniques have been developed to diagnose COVID-19. These techniques require a large labelled dataset to train the algorithm fully, but few such labelled datasets are available. To mitigate this problem and facilitate the diagnosis of COVID-19, we developed a transformer-based approach with a self-attention mechanism that operates on CT slices. The transformer architecture can exploit ample unlabelled datasets through pre-training. The paper aims to compare the performance of the self-attention transformer-based approach with CNN and ensemble classifiers for the diagnosis of COVID-19, using the binary Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) infection dataset and the multi-class Hybrid-learning for UnbiaSed predicTion of COVID-19 (HUST-19) CT scan dataset. To perform this comparison, we tested deep-learning-based classifiers and ensemble classifiers against the proposed approach on CT scan images. The proposed approach is more effective in the detection of COVID-19, with an accuracy of 99.7% on the multi-class HUST-19 dataset and 98% on the binary-class SARS-CoV-2 dataset. Cross-corpus evaluation achieves an accuracy of 93% when the model is trained on the HUST-19 dataset and tested on the Brazilian COVID dataset.
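The core operation of the vision transformer described above, scaled dot-product self-attention over image-patch embeddings, can be sketched roughly as follows. This is a minimal NumPy toy, not the authors' implementation; the patch count, embedding size, and weight matrices are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, Wq, Wk, Wv):
    # x: (n_patches, d_model) patch embeddings from a CT slice (hypothetical sizes)
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)      # patch-to-patch similarity
    weights = softmax(scores, axis=-1)   # each row is a distribution over patches
    return weights @ v                   # each patch becomes a weighted mix of values

rng = np.random.default_rng(0)
n, d = 4, 8                              # e.g. 4 patches, toy embedding size 8
x = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(x, Wq, Wk, Wv)
print(out.shape)                         # (4, 8): one updated embedding per patch
```

Because every patch attends to every other patch, the mechanism captures global context across the CT slice in a single layer, which is the property the abstract contrasts with local CNN receptive fields.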