Branch Mathematics and Statistics Faculty and Staff Publications

Document Type

Article

Publication Date

2025

Abstract

The COVID-19 pandemic has underscored the need for accurate and rapid diagnostic tools to assist clinical decision-making. Conventional deep learning models for COVID-19 detection in chest X-ray (CXR) images struggle with poor generalization across imaging conditions and high computational demands. To address these issues, this study proposes CviTLNN, a novel hybrid model that combines Vision Transformers (ViTs) and Liquid Neural Networks (LNNs) to improve feature extraction and classification. Specifically, CviTLNN employs a ViT with 24 transformer encoder blocks to extract spatial features efficiently; the self-attention mechanism captures both global and local dependencies in CXR images. A four-layer LNN then dynamically refines these features for decision-making. Experimental results on a COVID-19 dataset of 5,228 CXRs demonstrate a test accuracy of 94%, a precision of 95%, and a recall of 94%, minimizing false negatives and ensuring high sensitivity. The proposed model offers an efficient, scalable AI-driven diagnostic solution, making it well suited to real-world clinical applications, especially in resource-constrained settings.
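The abstract's two building blocks can be illustrated with a toy NumPy sketch: scaled dot-product self-attention over patch embeddings (the core of a ViT encoder block), followed by a stack of four liquid-time-constant updates refining the pooled feature vector. This is not the authors' implementation; all weights are random, the dimensions are illustrative, and `liquid_step` is a hypothetical minimal form of an LNN cell (dx/dt = -x/τ + tanh(Wx + u), integrated with one Euler step).

```python
import numpy as np

rng = np.random.default_rng(0)

def self_attention(X, Wq, Wk, Wv):
    # Scaled dot-product self-attention over patch embeddings (rows of X),
    # the mechanism that lets a ViT relate distant image regions.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V

def liquid_step(x, u, W, tau=1.0, dt=0.1):
    # One Euler step of a liquid-time-constant-style neuron layer:
    #   dx/dt = -x / tau + tanh(W @ x + u)
    # The state x relaxes toward a nonlinear function of its input u.
    return x + dt * (-x / tau + np.tanh(W @ x + u))

d = 16                               # toy embedding dimension
patches = rng.normal(size=(9, d))    # 9 stand-in "patches" from a CXR image
Wq, Wk, Wv = (0.1 * rng.normal(size=(d, d)) for _ in range(3))
attended = self_attention(patches, Wq, Wk, Wv)

# Pool attended patch features into one vector, then refine it through
# four liquid layers, mirroring the four-layer LNN in the abstract.
feat = attended.mean(axis=0)
x = np.zeros(d)
for _ in range(4):
    W = 0.1 * rng.normal(size=(d, d))
    x = liquid_step(x, feat, W)

print(x.shape)  # (16,)
```

In the real model, a classification head on the refined state would produce the COVID/non-COVID prediction; here the sketch stops at the feature vector.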

Language (ISO)

English

Keywords

COVID-19 detection, Vision Transformers (ViTs), Liquid Neural Network (LNN), chest X-ray analysis, medical imaging

Creative Commons License

Creative Commons Attribution 4.0 International License
This work is licensed under a Creative Commons Attribution 4.0 International License.
