Abstract
Objective
The outbreak of the coronavirus has caused major problems in more than 151 countries around the world. An important step in the fight against coronavirus is the identification of infected people. The goal of this article is to identify patients infected with COVID-19.
Methods
We implemented DenseNet201, available on a cloud platform, as the learning network. DenseNet201 is a 201-layer network that is pre-trained on ImageNet. The input size of the pre-trained DenseNet201 is 224 × 224 × 3 images.
Results
To measure the computational efficiency of the proposed model, we collected more than 6,000 noise-free chest X-rays of tuberculosis-infected, COVID-19-infected, and uninfected healthy chests. DenseNet201 was implemented with 80 % of the X-rays used for training and 20 % for testing, and it shows a good experimental result with an accuracy of 99.24 % in 7.47 min.
Conclusions
DenseNet201, available on the cloud platform, has been used for the classification of COVID-19-infected patients. This article demonstrates how faster results can be achieved.
Introduction
At present, deep CNN models available on cloud platforms can achieve high accuracy in image classification and analysis (Brunetti et al. 2019; Litjens et al. 2017; Liu et al. 2018). They have been applied to computer-aided diagnosis (Asiri et al. 2019; Zhou et al. 2019), investigation of health-related data through electronic tools (Shickel et al. 2017), drug administration and treatment development (Chouhan et al. 2020), atmospheric determination (Malūkas et al. 2018), and human brain-computer interfaces (Zhang et al. 2019), with the aim of providing an evaluation of human disease. The accuracy of an online CNN depends on its different stages of abstraction (Bakator and Radosav 2018). Professionals can use deep learning to learn about diseases like COVID-19 and other chest diseases. Figure 1 (below) shows images of chests infected with COVID-19, tuberculosis, and a normal chest.

Infected and non-infected chests.
Infection in the chest
Chest disease can range from normal to significant. We collected an image dataset for COVID-19, normal chest, and tuberculosis (Roosa et al. 2020). Figure 1 shows these chest images.
COVID-19: The coronavirus epidemic appeared in China in December 2019 and turned into a global pathological emergency (Paules, Marston, and Fauci 2020; Yan et al. 2020). Coronavirus is a dangerous virus (Chen, Liu, and Guo 2020; Cohen and Normile 2020; Hemdan, Shouman, and Karar 2020; https://www.who.int/emergencies/diseases/novel-coronavirus-2019/media-resources/press-briefings 2019; Li et al. 2020; Sohrabi et al. 2020; Zhu et al. 2020).
Tuberculosis (TB): Mycobacterium tuberculosis (MTB) is the bacterium responsible for tuberculosis. Most TB patients do not show any symptoms of the disease. The micro-organism that causes TB spreads when an infected patient sneezes or coughs. The infected person needs treatment that includes several antibiotics (https://www.kaggle.com/saife245/tuberculosis-image-datasets).
A World Health Organization report noted that about 75,000 research articles related to COVID-19 had been published by November 2020. However, these research articles did not make full use of computer intelligence in the fight against COVID-19. It is therefore time to classify and methodically examine the different AI techniques. The main objective of this research paper is to recap and focus on AI techniques that are useful in the fight against coronavirus.
The major focus of this research article is to recognize people infected with coronavirus by using DenseNet-201. This research article suggests two key reasons for the classification of chest diseases:
I. Improving the recognition of COVID-19 patients
DenseNet201 shows better performance for image classification in comparison to other available technologies.
II. Recognizing signs of COVID-19
Examining the infected chest helps in predicting the disease for patients. Table 1 presents time and accuracy on ImageNet.
Time & accuracy on ImageNet.
Sr.No | Authors | Processor | DL library | Time | Accuracy |
---|---|---|---|---|---|
I | This work | Tesla P100 × 1,024 | DenseNet201 | 7.47 min | 99.24 % |
II | Goyal et al. (2017) | Tesla P100 × 256 | Caffe2 | 1 h | 76.3 % |
III | Smith et al. (2017) | Full TPU Pod | TensorFlow | 30 min | 76.1 % |
IV | Jia et al. (2018) | Tesla P40 × 2,048 | TensorFlow | 6.6 min | 75.8 % |
V | He et al. (2016) | Tesla P100 × 8 | Caffe | 29 h | 75.3 % |
VI | Mikami et al. (2018) | Tesla V100 × 3,456 | NNL | 2 min | 75.29 % |
VII | Ying et al. (2018) | TPU V3 × 1,024 | TensorFlow | 1.8 min | 75.2 % |
VIII | Akiba, Suzuki, and Fukuda (2017) | Tesla P100 × 1,024 | Chainer | 15 min | 74.9 % |
In this paper, we show how, after collecting the required images, we used cloud-based DenseNet201 to recognize patients infected with COVID-19. The model presented here will help professionals to identify chest infection in a timely manner.
This paper is organized as follows: the literature review surveys related work; the proposed methodology section describes DenseNet201 available on cloud platforms; the data set collection and preparation section discusses the chest pathology dataset and the preparation of the chest disease image dataset; the proposed model section explains model training on DenseNet201 available on the cloud platform and estimates the efficiency of online DenseNet201; the result analysis and discussion section presents the results obtained from DenseNet201 on the cloud platform; and the final section concludes the research work and outlines future work.
Literature review
At present, the online DenseNet201 model is widely implemented for image classification and other machine learning tasks. We propose that DenseNet201 is an efficient way to predict COVID-19 infections using X-ray images (Narin, Kaya, and Pamuk 2020).
Abbas et al. used a convolutional neural network, the DeTraC architecture, for classification of chest X-ray images. They applied three steps: first, features were extracted using a pre-trained deep CNN model; then, an optimization method was applied for model training; finally, error-correction criteria were applied to the softmax layer for image classification using DenseNet201. They reported an accuracy of 95.119 % on chest X-ray images (Abbas, Abdelsamea, and Gaber 2020).
Zhang et al. presented a CNN approach for rapid and reliable identification of corona disease. According to the article, their model consists of three parts: a backbone network, a classification head, and a detection head; the detection and classification heads have a similar architecture (Zhang et al. 2020).
There are several fields where deep learning approaches have been implemented (Chen et al. 2018; Douarre et al. 2018; Sun et al. 2018; Wang, Wang, and Zhang 2018; Zhang et al. 2018). M. I. Razzak et al. discussed the problems encountered during the processing of medical images (Razzak, Naz, and Zaib 2018). Dinggang Shen et al. explained different infections using neural network techniques (Shen, Wu, and Suk 2017). Andre Esteva et al. implemented deep learning methodologies for classification in dermatology (Esteva et al. 2017). F. Milletari et al. presented a deep learning model for prostate imaging (Milletari, Navab, and Ahmadi 2016).
Grewal et al. presented a deep learning model for the identification of brain hemorrhage (Grewal et al. 2018). Gulshan et al. used retinal images for the identification of diabetic retinopathy (Gulshan et al. 2016). Y. Bar et al. and others used CNN approaches for the classification of chest diseases (Avni et al. 2010; Bar et al. 2015; Jaeger et al. 2013; Melendez et al. 2014).
Rehman, N. U. et al. proposed a research article, “A Self-Activated CNN Approach for Multi-Class Chest-Related COVID-19 Detection.” In this paper, the authors classified COVID-19 among other chest diseases using a deep convolutional neural network, showing that anomalies in X-ray images can be found with deep learning methods and that convolutional neural networks learn better image representations (Rehman et al. 2021).
Allioui, H. et al. proposed an article, “A Multi-Agent Deep Reinforcement Learning Approach for Enhancement of COVID-19 CT Image Segmentation.” They offered an efficient method, a multi-agent reinforcement learning framework, for automatic extraction of COVID-19 masks. This method improves semantic segmentation by altering the conventional Deep Q-Network to learn better mask extraction (Allioui et al. 2022).
Zitar, R. et al. proposed a “Review on COVID-19 diagnosis models based on machine learning and deep learning approaches.” This paper surveyed machine learning and deep learning concepts for the diagnosis of COVID-19 (Alyasseri et al. 2022).
Proposed methodology
DenseNet201, available on the cloud platform, is used for the classification of COVID-19 disease. The stages to perform DenseNet201 are presented below, followed by a minimal code sketch of these steps. Figure 2 describes the steps for recognizing human chest disease.

Steps for chest disease recognition.
Step i: Upload the RGB image of the affected chest.
Step ii: The image is resized to 224 × 224 × 3 dimensions.
Step iii: Extract the features of the affected chest.
Step iv: The performance of DenseNet201 is evaluated in this step.
Step v: The COVID-19 classification is labeled in this step.
Step vi: The name of the chest disease is returned as the output.
Step vii: Grad-CAM is implemented using Keras and TensorFlow.
Step viii: The heat map visualization is computed by Grad-CAM.
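For concreteness, the sketch below implements steps i–vi in Python with TensorFlow/Keras (the framework named in steps vii–viii). It is a minimal illustration only: the model file name covid_densenet201.h5 and the use of a saved fine-tuned model are assumptions, not part of the original workflow.

```python
# Minimal sketch of steps i-vi: load an X-ray, resize it to 224 x 224 x 3,
# and classify it with a (hypothetical) fine-tuned DenseNet201 model.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.densenet import preprocess_input

CLASS_NAMES = ["COVID-19", "Normal chest", "Tuberculosis"]  # classes from Table 3

def classify_chest_xray(image_path, model_path="covid_densenet201.h5"):
    # Steps i-ii: upload the RGB image and resize it to 224 x 224 x 3
    img = tf.keras.utils.load_img(image_path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(tf.keras.utils.img_to_array(img), axis=0))

    # Steps iii-v: the DenseNet201-based model extracts features and scores each class
    model = tf.keras.models.load_model(model_path)
    probs = model.predict(x)[0]

    # Step vi: return the predicted chest disease and its probability
    return CLASS_NAMES[int(np.argmax(probs))], float(np.max(probs))
```

A Grad-CAM sketch corresponding to steps vii–viii is given in the Grad-CAM subsection below.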
Convolutional neural network (CNN)
The general architecture of a convolutional neural network is shown in Figure 3 below:

Architecture of convolutional neural network.
CNN layers
A convolutional neural network consists of a number of layers.
Input Layer: It accepts 224 × 224 × 3 images as input.
Convolution Layer: It is used for feature extraction.
Rectified Linear Activation Unit: Table 2 represents the output of the rectified linear unit.
Rectified linear unit.
y | −10 | −8 | −6 | −4 | −2 | 0 | 1 | 2 | 3 |
---|---|---|---|---|---|---|---|---|---|
f(y) = max(0, y) | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 2 | 3 |
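As a quick check of Table 2, the rectified linear unit simply maps each input y to max(0, y); a short Python illustration is shown below.

```python
# Illustrative check of the ReLU values in Table 2: f(y) = max(0, y).
import numpy as np

y = np.array([-10, -8, -6, -4, -2, 0, 1, 2, 3])
print(np.maximum(0, y))  # -> [0 0 0 0 0 0 1 2 3]
```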
Max pooling layer
It keeps only the maximum-valued element from each region of the feature map covered by the filter. We have used the max pooling layer. Figure 4 illustrates the max pooling layer.

Max pooling layer.
Fully connected layer
It is used to convert a multidimensional array into a one-dimensional array.
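To illustrate the max pooling and fully connected layers described above, the short Keras sketch below applies 2 × 2 max pooling to a feature map and then flattens it into a one-dimensional vector feeding a dense layer. The tensor sizes are illustrative assumptions, not values taken from the proposed network.

```python
# Illustrative sketch: max pooling followed by flattening into a fully connected layer.
import tensorflow as tf

x = tf.keras.Input(shape=(56, 56, 64))                    # feature maps from a conv layer (illustrative size)
p = tf.keras.layers.MaxPooling2D(pool_size=2)(x)          # keep only the maximum value in each 2x2 region
f = tf.keras.layers.Flatten()(p)                          # multidimensional array -> one-dimensional array
out = tf.keras.layers.Dense(3, activation="softmax")(f)   # three chest classes
tf.keras.Model(x, out).summary()
```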
Prediction by using CNN methodology
Gradient-weighted class activation mapping (Grad-CAM) is implemented to visualize the computations of our proposed model. The generated heat map is overlaid on the input image and focuses on the infected area, which is the most significant part for our proposed model.
Explanation of Grad-CAM
We have calculated gradient weights for the model by using the Grad-CAM technique. The approach has been updated to Grad-CAM++ (Chattopadhay et al. 2018), and we have computed the weights by applying the second derivative. Figure 5 represents the feature extraction of COVID-19 using Grad-CAM. Figure 6 shows the detection of COVID-19 using DenseNet201.
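A minimal Grad-CAM sketch in Keras/TensorFlow is given below, corresponding to steps vii–viii of the methodology. It computes the basic first-order Grad-CAM weights rather than the second-derivative Grad-CAM++ weights; the layer name conv5_block32_concat (the last dense-block output in Keras' DenseNet201) and the assumption of a functional Keras model are ours, not the authors'.

```python
# Minimal Grad-CAM sketch for a Keras DenseNet201-based classifier.
import numpy as np
import tensorflow as tf

def grad_cam(model, image, layer_name="conv5_block32_concat", class_index=None):
    """Return a normalized heat map for the predicted (or given) class."""
    # Model mapping the input image to the chosen feature map and the prediction
    grad_model = tf.keras.Model(model.inputs,
                                [model.get_layer(layer_name).output, model.output])
    with tf.GradientTape() as tape:
        feature_map, preds = grad_model(image[np.newaxis, ...])
        if class_index is None:
            class_index = int(tf.argmax(preds[0]))
        class_score = preds[:, class_index]

    # Gradient of the class score w.r.t. the feature map, averaged into one weight per channel
    grads = tape.gradient(class_score, feature_map)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))

    # ReLU of the weighted sum of feature maps, normalized to [0, 1]
    cam = tf.nn.relu(tf.reduce_sum(feature_map[0] * weights, axis=-1))
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()
```

The returned heat map can then be resized to 224 × 224 and overlaid on the input X-ray, as in Figures 5 and 6.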

Feature extraction of COVID-19 using Grad-CAM.

COVID-19 disease detection using DenseNet 201.
Limitations of the proposed methodology
The Inception V3 model is more economical in terms of the computational cost incurred and the number of parameters generated by the network. The limitation of our proposed model is that we cannot change its network, because doing so would sacrifice its computational as well as economic advantages (https://blog.paperspace.com/popular-deep-learning-architectures-resnet-inceptionv3-squeezenet/).
Data set collection and preparation
In this section, we discuss the image dataset for human chest diseases and prepare the dataset for the classification of human chest diseases. Figure 7 shows the implementation of DenseNet201 on the image datasets.

Implementation on image datasets.
Collection of datasets
Images of COVID-19, tuberculosis, and normal chests were collected from Kaggle, which hosts 54,323 images covering thirty-eight different kinds of chest disease. We applied DenseNet201 with different training and testing splits to the 6,018 collected images. Table 3, shown below, lists the classes in the dataset.
Different types of infections.
Sr.No. | Classes | Datasets |
---|---|---|
1 | A | COVID-19 |
2 | B | Normal chest |
3 | C | Tuberculosis |
Preparation of datasets
Before training with DenseNet201, we converted all RGB images to grayscale and resized them to 224 × 224 × 3. Figure 8 shows a resized grayscale image, and a short preprocessing sketch follows the figure.

Resized chest images.
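A minimal preprocessing sketch consistent with the description above is shown below. Because DenseNet201 expects a 224 × 224 × 3 input, the grayscale image is replicated across three channels; this replication, and the example file path, are our assumptions.

```python
# Sketch of the dataset preparation step: grayscale conversion and resizing to 224 x 224 x 3.
import tensorflow as tf

def prepare_image(path):
    raw = tf.io.read_file(path)
    img = tf.image.decode_image(raw, channels=3, expand_animations=False)
    gray = tf.image.rgb_to_grayscale(img)        # RGB -> single-channel grayscale
    gray3 = tf.image.grayscale_to_rgb(gray)      # replicate to 3 channels for DenseNet201
    return tf.image.resize(gray3, [224, 224])    # final size: 224 x 224 x 3

# Example (hypothetical path):
# x = prepare_image("dataset/COVID-19/img_0001.png")
```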
Technique for data visualization
Multiple visualization techniques and gradient-based methods can be implemented with deep learning (Zeiler and Fergus 2014). Grad-CAM highlights the different patterns present in an image and classifies the image on the basis of these patterns using the deep convolutional neural network of the presented model. The steps involved in the Grad-CAM model are shown in Figure 2. Grad-CAM uses both backward and forward passes and creates a suitable visualization by screening the infected image with respect to the output class; the backward pass is the one used by deconvolution-based techniques. Grad-CAM is suitable for classification with localization and produces visual pictures with high-intensity definitions (Chattopadhay et al. 2018; https://blog.paperspace.com/popular-deep-learning-architectures-resnet-inceptionv3-squeezenet/; Kumar et al. 2023a; Zeiler and Fergus 2014). Equation (1) represents the gradient of the score with respect to the feature map FM computed by Grad-CAM, where I stands for the category of the produced image.
Equation (2) represents the average gradient score, where k is the channel index and p and q stand for the length and width of the input feature map.
Equation (3) represents the computation of Grad-CAM, where the number of feature maps of the final convolutional layer is denoted by n_z.
The weight of each feature map is calculated, and the weighted feature maps are summed to generate the Grad-CAM heat map.
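For reference, the three equations described above can be written in the standard Grad-CAM form (Chattopadhay et al. 2018); the notation below (score y^I for class I, feature map FM^k, spatial indices p, q, and n_z feature maps) follows the description in the text and is our reconstruction rather than a verbatim copy of the original equations.

```latex
% (1) Gradient of the class-I score with respect to feature map FM^k at location (p, q)
\frac{\partial y^{I}}{\partial FM^{k}_{pq}}

% (2) Channel weight: global average of the gradients over the p x q spatial locations
\alpha^{I}_{k} \;=\; \frac{1}{p\,q} \sum_{i=1}^{p} \sum_{j=1}^{q} \frac{\partial y^{I}}{\partial FM^{k}_{ij}}

% (3) Grad-CAM map: ReLU of the weighted sum over the n_z feature maps
L^{I}_{\text{Grad-CAM}} \;=\; \mathrm{ReLU}\!\left( \sum_{k=1}^{n_z} \alpha^{I}_{k}\, FM^{k} \right)
```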
Proposed model
We applied DenseNet201, available on the cloud platform. Our focus is on COVID-19 disease classification using the pre-trained deep convolutional neural network DenseNet201.
Training of the model
We have chosen DenseNet201 for training purposes.
DenseNet201 performance evaluation
DenseNet201 was developed to counter the accuracy degradation caused by vanishing gradients in deep networks. It consists of dense blocks in which layers are densely connected: each layer takes the output feature maps of all previous layers as input. This structure allows every layer to receive additional information from the loss function through shorter connections. A transition layer decreases the spatial size of the input (Huang et al. 2017). Figure 9 represents the DenseNet201 architecture, generated using a MATLAB function; a conceptual code sketch of the dense connectivity follows the figure.

Architecture of DenseNet201.
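The dense connectivity described above can be illustrated in a few lines of Keras: within a block, every layer receives the concatenation of all preceding feature maps, and a transition layer then halves the spatial size. The block depth and growth rate below are illustrative values, not the DenseNet201 configuration.

```python
# Conceptual sketch of a dense block and transition layer (not the full DenseNet201).
import tensorflow as tf

def tiny_dense_block(x, num_layers=4, growth_rate=32):
    for _ in range(num_layers):
        y = tf.keras.layers.BatchNormalization()(x)
        y = tf.keras.layers.Activation("relu")(y)
        y = tf.keras.layers.Conv2D(growth_rate, 3, padding="same")(y)
        x = tf.keras.layers.Concatenate()([x, y])      # each layer sees all previous feature maps
    return x

def transition_layer(x):
    x = tf.keras.layers.Conv2D(x.shape[-1] // 2, 1)(x)  # compress channels
    return tf.keras.layers.AveragePooling2D(2)(x)       # halve the spatial size

inp = tf.keras.Input(shape=(224, 224, 3))
stem = tf.keras.layers.Conv2D(64, 7, strides=2, padding="same")(inp)
out = transition_layer(tiny_dense_block(stem))
print(tf.keras.Model(inp, out).output_shape)            # (None, 56, 56, 96)
```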
Loss type
The function “crossentropyex” is used to compute the cross-entropy loss. The function “splitEachLabel()” splits the same dataset into training and testing data, and a random selection function is used for choosing the infected human chest images (Kumar et al. 2023b). A Keras analogue of this setup is sketched after the listing below.
The Outcome of Loss function:
Classification Output Layer with properties:
Name: ‘Output’
Classes: [1000 × 1 categorical]
Output Size: 1000
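For readers working outside MATLAB, the sketch below shows an analogous setup in Keras: the 1000-class ImageNet output layer reported above is replaced by a three-class softmax head, and the network is compiled with categorical cross-entropy (the counterpart of “crossentropyex”). This is an analogue we provide for illustration, not the authors' MATLAB code.

```python
# Keras analogue of the MATLAB setup: DenseNet201 backbone with a 3-class head
# trained with categorical cross-entropy loss.
import tensorflow as tf

base = tf.keras.applications.DenseNet201(weights="imagenet", include_top=False,
                                          input_shape=(224, 224, 3), pooling="avg")
outputs = tf.keras.layers.Dense(3, activation="softmax")(base.output)  # COVID-19 / Normal / TB
model = tf.keras.Model(base.input, outputs)
model.compile(optimizer="adam",
              loss="categorical_crossentropy",   # cross-entropy loss, as in the MATLAB workflow
              metrics=["accuracy"])

# A stratified train/test split per label (the counterpart of "splitEachLabel") can be
# obtained with sklearn.model_selection.train_test_split(..., stratify=labels, test_size=0.20).
```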
Hyperparameters
Evaluation of DenseNet201 performance
The performance of DenseNet201 is examined using the net.Layers(1) function to portray the weights of the first convolutional layer, which can be observed in Figure 10; an analogous Keras inspection is sketched after the figure.

First convolutional layer weight.
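An analogous inspection can be done in Keras by retrieving the weights of the first convolutional layer directly; the snippet below is an illustrative counterpart of the MATLAB call, not the code used to produce Figure 10.

```python
# Illustrative counterpart of inspecting the first convolutional layer's weights.
import tensorflow as tf

net = tf.keras.applications.DenseNet201(weights="imagenet")
first_conv = next(l for l in net.layers if isinstance(l, tf.keras.layers.Conv2D))
kernels, = first_conv.get_weights()      # Keras' DenseNet201 first conv carries no bias term
print(first_conv.name, kernels.shape)    # 7 x 7 x 3 kernels, 64 filters
```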
Result analysis and discussion
In this section, we discuss the accuracy and speed obtained by using MATLAB available on the cloud platform. Table 4 presents the confusion matrices of DenseNet201.
Confusion matrices of DenseNet201 for different training and testing percentages.
For 60 % training and 40 % testing | For 65 % training and 35 % testing | For 70 % training and 30 % testing | ||||||
---|---|---|---|---|---|---|---|---|
COVID-19 | Normal chest | Tuberculosis | COVID-19 | Normal chest | Tuberculosis | COVID-19 | Normal chest | Tuberculosis |
84 | 2 | 2 | 74 | 2 | 1 | 63 | 2 | 1 |
2 | 86 | 0 | 2 | 74 | 1 | 5 | 61 | 0 |
0 | 0 | 88 | 0 | 0 | 77 | 0 | 0 | 66 |
For 75 % training and 25 % testing | For 80 % training and 20 % testing | For 85 % training and 15 % testing | ||||||
---|---|---|---|---|---|---|---|---|
COVID-19 | Normal chest | Tuberculosis | COVID-19 | Normal chest | Tuberculosis | COVID-19 | Normal chest | Tuberculosis |
49 | 3 | 3 | 44 | 0 | 0 | 32 | 0 | 1 |
1 | 54 | 0 | 1 | 43 | 0 | 1 | 32 | 0 |
0 | 0 | 55 | 0 | 0 | 44 | 0 | 0 | 33 |
Performance of DenseNet201: In this research paper, we have implemented DenseNet201, available on a cloud platform, for various training and testing percentages, as shown in Table 5.
Time elapsed and accuracy of DenseNet201 for different training and testing percentages.
Sr. no | Testing % | Training % | Time elapsed (in min) | Accuracy % |
---|---|---|---|---|
1 | 40 | 60 | 4.80 | 97.73 |
2 | 35 | 65 | 6.49 | 97.40 |
3 | 30 | 70 | 7.36 | 95.96 |
4 | 25 | 75 | 7.03 | 95.76 |
5 | 20 | 80 | 7.47 | 99.24 |
6 | 15 | 85 | 7.82 | 97.98 |
Accuracy & Speed: We obtained an accuracy of 99.24 % in just 7.47 min with 80 % training and 20 % testing. If some accuracy is sacrificed, the fastest computation is achieved with 60 % training and 40 % testing. Table 5 shows the accuracy and total time consumed for the different training and testing percentages. Figure 11, shown below, plots accuracy and time consumption for the various training and testing scenarios, and Figure 12 presents the mean and standard deviation graph.

Time elapsed (in minutes) & accuracy.

Mean & standard deviation graph.
Statistical Analysis:
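As a reproducible sketch of the statistics summarized in Figure 12, the mean and standard deviation of the accuracy and elapsed-time columns of Table 5 can be computed as follows (the values are copied from Table 5; the computation itself is ours).

```python
# Mean and standard deviation of the Table 5 results.
import numpy as np

accuracy = np.array([97.73, 97.40, 95.96, 95.76, 99.24, 97.98])   # accuracy (%)
time_min = np.array([4.80, 6.49, 7.36, 7.03, 7.47, 7.82])         # elapsed time (min)

print(f"accuracy: mean = {accuracy.mean():.2f} %, sample std = {accuracy.std(ddof=1):.2f}")
print(f"time:     mean = {time_min.mean():.2f} min, sample std = {time_min.std(ddof=1):.2f}")
```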
Conclusion and future work
We presented a prospective study on the classification of COVID-19 using a convolutional neural network. The results show that deep learning significantly outperforms earlier technologies. Because COVID-19 is very hazardous to human health, infection needs to be detected at an early stage so that infected patients can be treated. The main goal of this article is to detect early-stage COVID-19 patients. To accomplish this task, we implemented an efficient DenseNet201 methodology for the prediction of COVID-19 infection and achieved 99.24 % accuracy within 7.47 min.
Implementing deep learning technology is an efficient way to detect disease from medical images. Deep learning technologies can bring greater benefit to the healthcare and biomedical fields when implemented on suitable medical devices. Beyond the work done in this research article, there are many other biomedical fields where deep learning models can be applied, such as image analysis, body-, brain-, and machine interfaces, gene expression analysis, genomic sequencing, and medical and public health management systems.
Acknowledgments
All authors are acknowledged for this research paper.
-
Research funding: None declared.
-
Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.
-
Competing interests: No conflict of interest.
-
Informed consent: Not applicable.
-
Ethical approval: Not applicable.
References
Abbas, A., M. M. Abdelsamea, and M. M. Gaber. 2020. Classification of COVID-19 in Chest X-Ray Images Using DeTraC Deep Convolutional Neural Network. New York: arXiv preprint arXiv:2003.13815. https://doi.org/10.1101/2020.03.30.20047456.
Akiba, T., S. Suzuki, and K. Fukuda. 2017. Extremely Large Minibatch Sgd: Training Resnet-50 on Imagenet in 15 Minutes. New York: arXiv preprint arXiv:1711.04325.
Allioui, H., M. A. Mohammed, N. Benameur, B. Al-Khateeb, K. H. Abdulkareem, B. Garcia-Zapirain, R. Damaševičius, and R. Maskeliūnas. 2022. “A Multi-Agent Deep Reinforcement Learning Approach for Enhancement of COVID-19 CT Image Segmentation.” Journal of Personalized Medicine 12 (2): 309. https://doi.org/10.3390/jpm12020309.
Alyasseri, Z. A. A., M. A. Al-Betar, I. A. Doush, M. A. Awadallah, A. K. Abasi, S. N. Makhadmeh, O. A. Alomari, K. H. Abdulkareem, A. Adam, R. Damasevicius, M. A. Mohammed, and R. A. Zitar. 2022. “Review on COVID-19 Diagnosis Models Based on Machine Learning and Deep Learning Approaches.” Expert Systems 39 (3): e12759. https://doi.org/10.1111/exsy.12759.
Asiri, N., M. Hussain, F. Al Adel, and N. Alzaidi. 2019. “Deep Learning Based Computer-Aided Diagnosis Systems for Diabetic Retinopathy: A Survey.” Artificial Intelligence in Medicine 99: 101701. https://doi.org/10.1016/j.artmed.2019.07.009.
Avni, U., H. Greenspan, E. Konen, M. Sharon, and J. Goldberger. 2010. “X-Ray Categorization and Retrieval on the Organ and Pathology Level, Using Patch-Based Visual Words.” IEEE Transactions on Medical Imaging 30 (3): 733–46. https://doi.org/10.1109/tmi.2010.2095026.
Bakator, M., and D. Radosav. 2018. “Deep Learning and Medical Diagnosis: A Review of Literature.” Multimodal Technologies and Interaction 2 (3): 47. https://doi.org/10.3390/mti2030047.
Bar, Y., I. Diamant, L. Wolf, S. Lieberman, E. Konen, and H. Greenspan. 2015. “Chest Pathology Detection Using Deep Learning with Non-medical Training.” In 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI), 294–7. IEEE. https://doi.org/10.1109/ISBI.2015.7163871.
Brunetti, A., L. Carnimeo, G. F. Trotta, and V. Bevilacqua. 2019. “Computer-assisted Frameworks for Classification of Liver, Breast and Blood Neoplasias via Neural Networks: A Survey Based on Medical Images.” Neurocomputing 335: 274–98. https://doi.org/10.1016/j.neucom.2018.06.080.
Chattopadhay, A., A. Sarkar, P. Howlader, and V. N. Balasubramanian. 2018. “Grad-cam++: Generalized Gradient-Based Visual Explanations for Deep Convolutional Networks.” In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), 839–47. IEEE. https://doi.org/10.1109/WACV.2018.00097.
Chen, Y., Q. Liu, and D. Guo. 2020. “Emerging Coronaviruses: Genome Structure, Replication, and Pathogenesis.” Journal of Medical Virology 92 (4): 418–23. https://doi.org/10.1002/jmv.25681.
Chen, Z., Y. Zhang, C. Ouyang, F. Zhang, and J. Ma. 2018. “Automated Landslides Detection for Mountain Cities Using Multi-Temporal Remote Sensing Imagery.” Sensors 18 (3): 821. https://doi.org/10.3390/s18030821.
Chouhan, V., S. K. Singh, A. Khamparia, D. Gupta, P. Tiwari, C. Moreira, R. Damaševičius, and V. H. C. De Albuquerque. 2020. “A Novel Transfer Learning Based Approach for Pneumonia Detection in Chest X-Ray Images.” Applied Sciences 10 (2): 559. https://doi.org/10.3390/app10020559.
Cohen, J., and D. Normile. 2020. New SARS-like Virus in China Triggers Alarm. USA: American Association for the Advancement of Science. https://doi.org/10.1126/science.367.6475.234.
Douarre, C., R. Schielein, C. Frindel, S. Gerth, and D. Rousseau. 2018. “Transfer Learning from Synthetic Data Applied to Soil–Root Segmentation in X-Ray Tomography Images.” Journal of Imaging 4 (5): 65. https://doi.org/10.3390/jimaging4050065.
Esteva, A., B. Kuprel, R. A. Novoa, J. Ko, S. M. Swetter, H. M. Blau, and S. Thrun. 2017. “Dermatologist-level Classification of Skin Cancer with Deep Neural Networks.” Nature 542 (7639): 115–8. https://doi.org/10.1038/nature21056.
Grewal, M., M. M. Srivastava, P. Kumar, and S. Varadarajan. 2018. “Radnet: Radiologist Level Accuracy Using Deep Learning for Hemorrhage Detection in Ct Scans.” In 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), 281–4. IEEE. https://doi.org/10.1109/ISBI.2018.8363574.
Goyal, P., P. Dollár, R. Girshick, P. Noordhuis, L. Wesolowski, A. Kyrola, and K. He. 2017. Accurate, Large Minibatch Sgd: Training Imagenet in 1 Hour. New York: arXiv preprint arXiv:1706.02677.
Gulshan, V., L. Peng, M. Coram, M. C. Stumpe, D. Wu, A. Narayanaswamy, S. Venugopalan, K. Widner, T. Madams, J. Cuadros, R. Raman, P. C. Nelson, J. L. Mega, D. R. Webster, and R. Kim. 2016. “Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs.” JAMA 316 (22): 2402–10. https://doi.org/10.1001/jama.2016.17216.
He, K., X. Zhang, S. Ren, and J. Sun. 2016. “Deep Residual Learning for Image Recognition.” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 770–8. https://doi.org/10.1109/CVPR.2016.90.
https://www.who.int/emergencies/diseases/novel-coronavirus-2019/media-resources/press-briefings.
https://www.kaggle.com/saife245/tuberculosis-image-datasets.
https://blog.paperspace.com/popular-deep-learning-architectures-resnet-inceptionv3-squeezenet/.
Hemdan, E. E. D., M. A. Shouman, and M. E. Karar. 2020. Covidx-net: A Framework of Deep Learning Classifiers to Diagnose Covid-19 in X-Ray Images. New York: arXiv preprint arXiv:2003.11055.
Huang, G., Z. Liu, L. Van Der Maaten, and K. Q. Weinberger. 2017. “Densely Connected Convolutional Networks.” In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2261–9. Honolulu. https://doi.org/10.1109/CVPR.2017.243.
Jaeger, S., A. Karargyris, S. Candemir, L. Folio, J. Siegelman, F. Callaghan, Z. Xue, K. Palaniappan, R. K. Singh, S. Antani, Y.-X. Wang, P.-X. Lu, C. J. McDonald, and G. Thoma. 2013. “Automatic Tuberculosis Screening Using Chest Radiographs.” IEEE Transactions on Medical Imaging 33 (2): 233–45. https://doi.org/10.1109/tmi.2013.2284099.
Jia, X., S. Song, W. He, Y. Wang, H. Rong, F. Zhou, and T. Chen. 2018. Highly Scalable Deep Learning Training System with Mixed-Precision: Training Imagenet in Four Minutes. New York: arXiv preprint arXiv:1807.11205.
Kumar, S., S. Pal, V. P. Singh, and P. Jaiswal. 2023a. “Energy-efficient Model “Inception V3 Based on Deep Convolutional Neural Network” Using Cloud Platform for Detection of COVID-19 Infected Patients.” Epidemiologic Methods 12 (1). https://doi.org/10.1515/em-2021-0046.
Kumar, S., S. Pal, V. P. Singh, and P. Jaiswal. 2023b. “Performance Evaluation of ResNet Model for Classification of Tomato Plant Disease.” Epidemiologic Methods 12 (1). https://doi.org/10.1515/em-2021-0044.
Li, Q., X. Guan, P. Wu, X. Wang, L. Zhou, Y. Tong, R. Ren, K. S. Leung, E. H. Lau, J. Y. Wong, N. Xiang, Y. Wu, C. Li, Q. Chen, D. Li, T. Liu, J. Zhao, M. Liu, W. Tu, C. Chen, L. Jin, R. Yang, Q. Wang, S. Zhou, R. Wang, H. Liu, Y. Luo, Y. Liu, G. Shao, H. Li, Z. Tao, Y. Yang, Z. Deng, B. Liu, Z. Ma, Y. Zhang, G. Shi, T. T. Lam, J. T. Wu, G. F. Gao, B. J. Cowling, B. Yang, G. M. Leung, Z. Feng, and X. Xing. 2020. “Early Transmission Dynamics in Wuhan, China, of Novel Coronavirus–Infected Pneumonia.” New England Journal of Medicine 382 (13): 1199–207. https://doi.org/10.1056/nejmoa2001316.
Litjens, G., T. Kooi, B. E. Bejnordi, A. A. A. Setio, F. Ciompi, M. Ghafoorian, J. A. van der Laak, B. van Ginneken, and C. I. Sánchez. 2017. “A Survey on Deep Learning in Medical Image Analysis.” Medical Image Analysis 42: 60–88. https://doi.org/10.1016/j.media.2017.07.005.
Liu, N., L. Wan, Y. Zhang, T. Zhou, H. Huo, and T. Fang. 2018. “Exploiting Convolutional Neural Networks with Deeply Local Description for Remote Sensing Image Classification.” IEEE Access 6: 11215–28. https://doi.org/10.1109/access.2018.2798799.
Malūkas, U., R. Maskeliūnas, R. Damaševičius, and M. Woźniak. 2018. “Real Time Path Finding for Assisted Living Using Deep Learning.” Journal of Universal Computer Science 24: 475–87.
Melendez, J., B. van Ginneken, P. Maduskar, R. H. Philipsen, K. Reither, M. Breuninger, I. M. O. Adetifa, R. Maane, H. Ayles, and C. I. Sánchez. 2014. “A Novel Multiple-Instance Learning-Based Approach to Computer-Aided Detection of Tuberculosis on Chest X-Rays.” IEEE Transactions on Medical Imaging 34 (1): 179–92. https://doi.org/10.1109/tmi.2014.2350539.
Mikami, H., H. Suganuma, Y. Tanaka, and Y. Kageyama. 2018. Massively Distributed SGD: ImageNet/ResNet-50 Training in a Flash. New York: arXiv preprint arXiv:1811.05233.
Milletari, F., N. Navab, and S. A. Ahmadi. 2016. “V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation.” In 2016 Fourth International Conference on 3D Vision (3DV), 565–71. IEEE. https://doi.org/10.1109/3DV.2016.79.
Narin, A., C. Kaya, and Z. Pamuk. 2020. Automatic Detection of Coronavirus Disease (COVID-19) Using X-Ray Images and Deep Convolutional Neural Networks. New York: arXiv preprint arXiv:2003.10849. https://doi.org/10.1007/s10044-021-00984-y.
Paules, C. I., H. D. Marston, and A. S. Fauci. 2020. “Coronavirus Infections—More Than Just the Common Cold.” JAMA 323 (8): 707–8. https://doi.org/10.1001/jama.2020.0757.
Razzak, M. I., S. Naz, and A. Zaib. 2018. “Deep Learning for Medical Image Processing: Overview, Challenges and the Future.” In Classification in BioApps, 323–50. Cham: Springer. https://doi.org/10.1007/978-3-319-65981-7_12.
Rehman, N. U., M. S. Zia, T. Meraj, H. T. Rauf, R. Damaševičius, A. M. El-Sherbeeny, and M. A. El-Meligy. 2021. “A Self-Activated CNN Approach for Multi-Class Chest-Related COVID-19 Detection.” Applied Sciences 11 (19): 9023. https://doi.org/10.3390/app11199023.
Roosa, K., Y. Lee, R. Luo, A. Kirpich, R. Rothenberg, J. M. Hyman, P. Yan, and G. Chowell. 2020. “Real-time Forecasts of the COVID-19 Epidemic in China from February 5th to February 24th, 2020.” Infectious Disease Modelling 5: 256–63. https://doi.org/10.1016/j.idm.2020.02.002.
Shen, D., G. Wu, and H. I. Suk. 2017. “Deep Learning in Medical Image Analysis.” Annual Review of Biomedical Engineering 19: 221–48. https://doi.org/10.1146/annurev-bioeng-071516-044442.
Shickel, B., P. J. Tighe, A. Bihorac, and P. Rashidi. 2017. “Deep EHR: A Survey of Recent Advances in Deep Learning Techniques for Electronic Health Record (EHR) Analysis.” IEEE Journal of Biomedical and Health Informatics 22 (5): 1589–604. https://doi.org/10.1109/jbhi.2017.2767063.
Smith, S. L., P. J. Kindermans, C. Ying, and Q. V. Le. 2017. Don’t Decay the Learning Rate, Increase the Batch Size. New York: arXiv preprint arXiv:1711.00489.
Sohrabi, C., Z. Alsafi, N. O’Neill, M. Khan, A. Kerwan, A. Al-Jabir, C. Iosifidis, and R. Agha. 2020. “World Health Organization Declares Global Emergency: A Review of the 2019 Novel Coronavirus (COVID-19).” International Journal of Surgery 76: 71–76. https://doi.org/10.1016/j.ijsu.2020.02.034.
Sun, C., Y. Yang, C. Wen, K. Xie, and F. Wen. 2018. “Voiceprint Identification for Limited Dataset Using the Deep Migration Hybrid Model Based on Transfer Learning.” Sensors 18 (7): 2399. https://doi.org/10.3390/s18072399.
Wang, Y., C. Wang, and H. Zhang. 2018. “Ship Classification in High-Resolution SAR Images Using Deep Learning of Small Datasets.” Sensors 18 (9): 2929. https://doi.org/10.3390/s18092929.
Yan, L., H. T. Zhang, J. Goncalves, Y. Xiao, M. Wang, Y. Guo, C. Sun, X. Tang, L. Jing, M. Zhang, Y. Xiao, H. Cao, Y. Chen, T. Ren, F. Wang, S. Huang, X. Tan, N. Huang, B. Jiao, C. Cheng, Y. Zhang, A. Luo, L. Mombaerts, J. Jin, Z. Cao, S. Li, H. Xu, Y. Yuan, and X. Huang. 2020. “An Interpretable Mortality Prediction Model for COVID-19 Patients.” Nature Machine Intelligence 2: 1–6. https://doi.org/10.1038/s42256-020-0180-7.
Ying, C., S. Kumar, D. Chen, T. Wang, and Y. Cheng. 2018. Image Classification at Supercomputer Scale. New York: arXiv preprint arXiv:1811.06992.
Zhang, Y., G. Wang, M. Li, and S. Han. 2018. “Automated Classification Analysis of Geological Structures Based on Images Data and Deep Learning Model.” Applied Sciences 8 (12): 2493. https://doi.org/10.3390/app8122493.
Zhang, X., L. Yao, X. Wang, J. Monaghan, D. Mcalpine, and Y. Zhang. 2019. A Survey on Deep Learning Based Brain Computer Interface: Recent Advances and New Frontiers. New York: arXiv preprint arXiv:1905.04149.
Zhang, J., Y. Xie, Y. Li, C. Shen, and Y. Xia. 2020. Covid-19 Screening on Chest X-Ray Images Using Deep Learning Based Anomaly Detection. New York: arXiv preprint arXiv:2003.12338.
Zeiler, M. D., and R. Fergus. 2014. “Visualizing and Understanding Convolutional Networks.” In European Conference on Computer Vision, 818–33. Cham: Springer. https://doi.org/10.1007/978-3-319-10590-1_53.
Zhou, T., K. H. Thung, X. Zhu, and D. Shen. 2019. “Effective Feature Learning and Fusion of Multimodality Data Using Stage-wise Deep Neural Network for Dementia Diagnosis.” Human Brain Mapping 40 (3): 1001–16. https://doi.org/10.1002/hbm.24428.
Zhu, N., D. Zhang, W. Wang, X. Li, B. Yang, J. Song, X. Zhao, B. Huang, W. Shi, R. Lu, F. Zhan, X. Ma, W. Xu, G. Wu, G. F. Gao, W. Tan, and P. Niu. 2020. “A Novel Coronavirus from Patients with Pneumonia in China, 2019.” New England Journal of Medicine 382: 727–33. https://doi.org/10.1056/nejmoa2001017.