Article Open Access

Nuclear radiation detection based on the convolutional neural network under public surveillance scenarios

  • Zhangfa Yan, Zhaohui Zhang, Shuyu Xu, Juxiang Ma, Yansong Hou, Yingcai Ji, Lifeng Sun, Tiantian Dai and Qingyang Wei
Published/Copyright: February 8, 2022

Abstract

Nuclear energy is a clean and widely used form of energy, but leakage and loss of nuclear material pose a threat to public safety. Radiation detection in public spaces is a key part of nuclear security. Common security cameras equipped with complementary metal oxide semiconductor (CMOS) sensors can help with radiation detection. Previous work with these cameras, however, required slow, complex frame-by-frame processing. Building on that work, we propose a nuclear radiation detection method based on convolutional neural networks (CNNs). This method detects nuclear radiation in changing images with much less computational complexity. Using actual video images captured in the presence of a common Tc-99m radioactive source, we constructed training and testing sets. After training the CNN and processing the test set, the experimental results demonstrate the high performance and effectiveness of our method.

1 Introduction

Nuclear energy has been rapidly developed and promoted worldwide as an important energy source and now plays an important role in modern industry. However, nuclear leakage [1], loss of nuclear materials and equipment [2], and the use of radioactive minerals as commemorative gifts [3] remind us that nuclear radiation detection and monitoring remain necessary for public safety.

Complementary metal oxide semiconductor (CMOS) image sensors respond directly to X-rays and gamma rays. Because of their decreasing prices and ready availability, they have generated tremendous research interest for detecting radiation. CMOS sensors can measure radiation doses [4,5,6,7], detect radiation events [8,9,10,11], detect cosmic rays, and support space exploration [12,13]. CMOS cameras have been used directly for nuclear radiation detection with the lens covered [14,15,16,17,18,19]. In our previous work [20,21], we used uncovered CMOS cameras to detect nuclear radiation via image processing methods. However, processing every image frame is computationally expensive and time-consuming.

To speed up detection, computer vision and machine learning methods are increasingly being used. When the camera’s lens is not shielded, the electrons excited by radiation particles and by visible light in the depletion layer of the p–n junction of the CMOS sensor are superimposed on each other, so the grayscale values of bright blotches in the video image are larger than those of the surrounding pixels [21]. Detecting nuclear radiation with a CMOS camera produces a large amount of image data, which inspired us to apply a deep learning algorithm well suited to processing large image datasets. While convolutional neural networks (CNNs) are widely used in target detection for nuclear medical images and nuclear radiation imaging [22,23,24,25], direct detection of nuclear radiation events still mostly relies on manual judgment with dedicated instruments. To reduce computational complexity and improve detection efficiency, we propose a nuclear radiation detection method that combines a CNN model with an uncovered CMOS camera in public surveillance environments. We obtained surveillance videos from a CMOS camera in the presence of a common medical Technetium-99m (Tc-99m) radiation source, implemented an image fusion method to construct training and testing sets, and finally trained and tested the CNN to verify the feasibility and effectiveness of the proposed method.

2 Materials and methods

2.1 Data acquisition

Surveillance videos were recorded using a TTQ-JW-02 camera with a 1/2.7-inch OV2710-1E CMOS sensor [26], a 3 μm pixel size, a frame rate of 25 fps, and a 1,920 × 1,080-pixel image size. Table 1 provides the specifications of the OV2710-1E sensor, which is widely used in machine vision [27] and internet of things (IoT) [28] applications.

Table 1

Product specifications of the CMOS sensor

Model: OV2710-1E | Shutter: Rolling
Active array size: 1,920 × 1,080 | Maximum exposure interval: 1,096 t_line
Lens size: 1/2.7 in | Image area: 5,856 μm × 3,276 μm
Scan mode: Progressive | Package dimensions: 7,465 μm × 5,865 μm

To obtain bright-blotch images without radiation, we covered the camera lens and recorded 48 h of video, from which we selected 100,000 images with bright blotches, labeled as Dataset A. We then mounted the camera on a tripod to record videos without radiation, obtaining 200,000 frames of noncontinuous images with people but without radiation blotches, designated as Dataset B. Next, we placed a 7 × 10⁸ Bq Tc-99m radioactive source above the camera, as shown in Figure 1; the half-life of Tc-99m is 6.02 h and its γ-ray energy is 140 keV. With the lens covered, we obtained 100,000 frames of radiation images, designated as Dataset C. In addition, we used Dataset 1 and Dataset 2 from our previous work [21], comprising 20,000 monitoring images without radiation and 20,000 images with radiation, to test the effectiveness of the proposed method.
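Because the Tc-99m half-life is only 6.02 h, the source activity falls appreciably over a multi-hour recording session. The standard exponential decay law makes this easy to check; the following is a quick illustrative calculation (the activity and half-life values come from the text above, but the script itself is ours):

```python
# Activity of the Tc-99m source over time (illustrative sketch only).
A0 = 7e8          # initial activity, Bq (from the text)
T_HALF = 6.02     # Tc-99m half-life, hours (from the text)

def activity(t_hours):
    """Activity in Bq after t_hours: A(t) = A0 * 2**(-t / T_half)."""
    return A0 * 2 ** (-t_hours / T_HALF)

print(activity(T_HALF))      # one half-life: 3.5e8 Bq
print(activity(2 * T_HALF))  # two half-lives: 1.75e8 Bq
```

After roughly one working day the activity has dropped to about a quarter of its initial value, which is worth keeping in mind when comparing images recorded at different times.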

Figure 1

TTQ-JW-02 CMOS camera and Tc-99m radioactive source.

When the CMOS camera lens is not shielded, the electrons excited by radiation particles and visible light are superimposed on each other in the p–n junction depletion layer of the CMOS sensor, so the grayscale values of bright blotches are larger than the surrounding grayscale values in the video image. However, it is difficult to determine whether radiation bright blotches are present in surveillance images, especially when there are moving objects in the image backgrounds. Because nuclear radiation experiments cannot be carried out in public, monitoring images containing radiation cannot be obtained directly. Therefore, we use an image fusion algorithm to combine the images containing only bright blotches (noise blotches and radiation blotches) with the monitoring images without radiation blotches to obtain the training and testing sets for further experiments.

Sample frame images with blotches in Datasets A and C are shown in Figure 2. The corresponding original image without radiation bright blotches in Dataset B is shown in Figure 3. Table 2 shows the image category of each dataset.

Figure 2

Two images with bright blotches: (a) unirradiated; (b) irradiated.

Figure 3

A monitoring image without radiation.

Table 2

The image category of each dataset

Dataset Image category
A Images with noise bright blotches
B Monitoring images without radiation
C Images with radiation bright blotches

2.2 Acquisition of training set and testing set

2.2.1 Weighted averaging image fusion

Image fusion combines multiple images from one or more sensors into a new image that contains more useful information than any single input image [29]. Current image fusion methods work at three levels [30]. The first and lowest level is pixel-level fusion, which retains the original information as much as possible. The second is feature-level fusion, which may lose important information and distort the image. The third is decision-making-level fusion, which is difficult to process and implement. We use pixel-level fusion to construct images with and without radiation from the observed images, retaining both the bright-blotch and the monitoring information. These fused images form the training and testing sets.

The weighted average (WA) method is the simplest pixel-level image fusion method: it is easy to implement, fast, and can improve the signal-to-noise ratio of the fused image. Improved versions of the algorithm are widely used in infrared imaging, medical imaging, and other fields [31,32,33]. The principle of the WA method is as follows. Given images F1 and F2, the pixel value of the fused image F at pixel (x, y) is

(1) F(x, y) = w1 × F1(x, y) + w2 × F2(x, y),

where w1 and w2 are the weights of F1(x, y) and F2(x, y), respectively, and w1 + w2 = 1. In this work, w1 is determined by comparing F1(x, y) and F2(x, y):

(2) w1 = 1 if F1(x, y) ≥ F2(x, y), and w1 = 0 otherwise.
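Note that with binary weights chosen this way (and w2 = 1 − w1), the WA rule reduces to a per-pixel maximum of the two images, which is exactly what preserves the bright blotches against the monitoring background. A minimal NumPy sketch of this fusion step (array names are ours, for illustration only):

```python
import numpy as np

def wa_fuse(f1: np.ndarray, f2: np.ndarray) -> np.ndarray:
    """Weighted-average fusion with binary weights:
    w1 = 1 where F1(x, y) >= F2(x, y), else 0, and w2 = 1 - w1,
    so the rule reduces to a per-pixel maximum of the two images."""
    w1 = (f1 >= f2).astype(f1.dtype)
    w2 = 1 - w1
    return w1 * f1 + w2 * f2

# Toy 2x2 example: a mostly dark blotch image fused with a scene image
blotches = np.array([[0, 0], [250, 0]], dtype=np.uint8)
scene    = np.array([[90, 120], [80, 100]], dtype=np.uint8)
print(wa_fuse(blotches, scene))  # [[ 90 120] [250 100]]
```

The bright blotch (250) survives the fusion while every other pixel keeps the monitoring-scene value.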

2.2.2 Training set and testing set

We randomly paired the 100,000 images in Dataset A with 100,000 images from Dataset B and fused each pair using the WA method; the resulting 100,000 fused images form Dataset N (negative). The remaining 100,000 images in Dataset B were randomly fused with the images in Dataset C to obtain Dataset P (positive). Datasets N and P together serve as the training set.

In our previous work [21], we obtained Data1 and Data2, which included 10,000 unirradiated and 10,000 irradiated monitoring images. In this work, we expanded both to 20,000 images each and used them as the testing set.

2.3 Convolutional neural network

Deep learning is a class of machine learning algorithms that employs deep neural network structures. A CNN is a multilayer feed-forward neural network designed to process large amounts of image or sensor data in the form of multiple arrays by exploiting local and global stationarity properties [34]. CNNs are popular because of their strong performance on object recognition tasks such as gesture recognition, face recognition, object classification, and scene description generation [35,36,37,38,39].

A CNN consists of three kinds of primary hidden layers: convolution layers, pooling layers, and fully connected layers. The neurons of the convolution and pooling layers are arranged in three dimensions: width, height, and depth. Readers interested in the detailed structure and principles of each layer are directed to Szegedy et al. [38]. Here we briefly review these layers and the rectified linear unit (ReLU):

  • Convolution layer: A convolution layer consists of filters that are convolved over the input image to extract features. This layer discovers features in the image.

  • Pooling layer: The pooling layer receives feature maps from the convolution layer and shrinks them while preserving the most important information: only the maximum value within each window is passed on to the next layer. This layer thus preserves the best fit of each feature within its window.

  • Rectified linear unit: The ReLU replaces every negative value in its input with 0, which addresses the problems of vanishing gradients and convergence fluctuation and helps keep the network numerically stable, preventing learned values from getting stuck near 0 or blowing up toward infinity.

  • Fully connected layer: Each node of the fully connected layer is connected to all nodes of the previous layer; it synthesizes the features extracted earlier and translates the high-level filtered maps into labeled categories.

Figure 4 shows the framework of the CNN used in our research. Our CNN consists of five convolution layers and three fully connected layers. Three max pooling operations (pooling layers) follow the first, second, and fifth convolution layers; they are not shown in Figure 4. The main structure of the model is: input; convolution, max pooling, ReLU activation; convolution, max pooling, ReLU activation; convolution, ReLU activation; convolution, ReLU activation; convolution, max pooling, ReLU activation; fully connected, ReLU activation, dropout; fully connected, ReLU activation, dropout; output.

Figure 4

The base architecture of CNN used in this work.

In this CNN, the input is a 3-channel image of 224 × 224 pixels. The first convolution layer uses 96 convolution kernels of size 11 × 11 × 3, divided into two groups of 48. The input is convolved with a stride of 4 pixels to produce two groups of 55 × 55 × 48 feature maps, to which the ReLU activation function is applied. Overlapping max pooling with a 3 × 3 window and a stride of 2 pixels then yields 27 × 27 × 48 pooling results, and a local response normalization operation gives normalized results of the same size. The later convolution layers perform similar operations with different window and stride sizes; the window size of each layer is shown in Figure 4.
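The layer stack described above can be sketched in Keras as follows. The filter counts after the first layer follow the standard AlexNet configuration (the text gives them only via Figure 4), and the two-group split and local response normalization of the original AlexNet are omitted for brevity, so this is an approximation for illustration rather than the authors' exact model:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(num_classes: int = 2) -> tf.keras.Model:
    """AlexNet-style CNN: 5 convolution layers, 3 max-pool operations
    (after conv 1, 2, and 5), and 3 fully connected layers."""
    return models.Sequential([
        layers.Input(shape=(224, 224, 3)),
        layers.Conv2D(96, 11, strides=4, activation="relu"),
        layers.MaxPooling2D(pool_size=3, strides=2),
        layers.Conv2D(256, 5, padding="same", activation="relu"),
        layers.MaxPooling2D(pool_size=3, strides=2),
        layers.Conv2D(384, 3, padding="same", activation="relu"),
        layers.Conv2D(384, 3, padding="same", activation="relu"),
        layers.Conv2D(256, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(pool_size=3, strides=2),
        layers.Flatten(),
        layers.Dense(4096, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(4096, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
```

Note that with a 224 × 224 input and no padding, the first convolution produces 54 × 54 maps rather than the 55 × 55 quoted above; this is a well-known bookkeeping quirk of the published AlexNet description and does not change the overall design.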

To evaluate the validity of the method, we used the following evaluation indicators commonly used in classification tasks:

(3) Precision = TP/(TP + FP),

(4) Recall = TP/(TP + FN),

(5) F1 = 2 × Precision × Recall/(Precision + Recall),

(6) Accuracy = (TP + TN)/Total.

In these equations, T indicates that the classifier's prediction is correct and F that it is incorrect, while P denotes positive samples and N negative samples. TP (true positive) is the number of actual positive samples correctly determined to be positive; TN (true negative) is the number of actual negative samples correctly determined to be negative; FP (false positive) is the number of negative samples incorrectly determined to be positive; FN (false negative) is the number of positive samples incorrectly determined to be negative; and Total is the total number of test samples.
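Equations (3)–(6) follow directly from the four confusion-matrix counts; a small helper of our own (not the authors' code, and with toy counts rather than the paper's data) makes the relationship concrete:

```python
def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Precision, recall, F1, and accuracy from confusion-matrix counts,
    following equations (3)-(6)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return {"precision": precision, "recall": recall,
            "f1": f1, "accuracy": accuracy}

# Toy example: 97 of 100 positives found, 10 false alarms among 100 negatives
m = classification_metrics(tp=97, tn=90, fp=10, fn=3)
print({k: round(v, 4) for k, v in m.items()})
# {'precision': 0.9065, 'recall': 0.97, 'f1': 0.9372, 'accuracy': 0.935}
```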

3 Results

In this experiment, most areas of the surveillance images were unchanged, so we cropped the areas where people appear and the corresponding areas with bright blotches from the datasets. The cropped images were 224 × 224 pixels, as shown in Figure 5. The unirradiated images in Dataset A and the irradiated images in Dataset C were randomly fused with the monitoring images in Dataset B (using the method described previously) to obtain the training set. Training images were labeled 0 or 1 according to the presence of radiation. Figure 6 illustrates this process.

Figure 5

Cropped image set before fusion: (a) monitoring image; (b) bright blotches.

Figure 6

Image fusion generates training set samples.

Thus, we obtained the training set, including the validation set. In our previous work, we had obtained monitoring images containing radiation; we used these earlier data as the test set for evaluating the CNN model. Table 3 summarizes the data. The CNN was then trained, validated, and tested on our deep learning server, configured as shown in Table 4. The training loss and accuracy are shown in Figure 7.

Table 3

Data used in this research

Data split | Unirradiated images | Irradiated images
Training set | 75,000 | 75,000
Validation set | 25,000 | 25,000
Testing set | 20,000 | 20,000
Image size | 3 × 224 × 224 (RGB, pixels) | 3 × 224 × 224 (RGB, pixels)
Table 4

Experimental setup in this research

CPU version Intel® Xeon® E5-4655 CPU @ 3.20 GHz
GPU version NVIDIA GeForce GTX 1080Ti × 2
Framework TensorFlow
Operation system Ubuntu 16.04
Network used AlexNet
Figure 7

Loss and accuracy of CNN training phase.

Finally, we used the testing data to test the effectiveness of the CNN model, with results given in Table 5. The precision, recall, F1, and accuracy of the test results indicate that the proposed method effectively detects radiation. To further verify the performance of CNNs on our research object, we also trained and tested two other widely used CNN models; their final test results are also listed in Table 5.

Table 5

Precision, recall, F1, and accuracy on testing data

Model Precision Recall F1 Accuracy
AlexNet 0.8981 0.9736 0.9343 0.9314
GoogLeNet 0.9174 0.9293 0.9233 0.9228
ResNet 0.9186 0.9443 0.9443 0.9303

4 Discussion

In this article, a new method based on a CNN for detecting nuclear radiation with CMOS surveillance cameras is proposed. Because radiation images cannot be obtained in actual populated places, we used image fusion to construct the training and validation sets, and used the data collected in our previous work as the testing set.

From the results shown in Figure 7 and summarized in Table 5, our method achieves a recall of 0.9736 for radiation detection, indicating that most radiation events are identified successfully. Precision needs further improvement, because a small number of unirradiated events were identified as radiation events.

5 Conclusion

In this article, a new method for detecting nuclear radiation under public surveillance scenarios is proposed, using an uncovered CMOS camera, an image fusion method, and a CNN model. Surveillance videos were obtained from a CMOS camera in the presence of a common medical Technetium-99m (Tc-99m) radiation source. An image fusion method was then implemented to construct the training and testing sets. Finally, we trained the CNN and tested our method's performance for nuclear radiation detection. The experimental results show that performance on the testing set was not as high as on the training set, but the recall, F1, and accuracy scores still demonstrate significant effectiveness. The proposed method offers significant promise for real-time detection of nuclear radiation using common monitoring cameras.

Acknowledgments

The authors thank LetPub (www.letpub.com) for its linguistic assistance during the preparation of this manuscript.

  1. Funding information: This work was supported in part by the National Natural Science Foundation of China (Grant No. 11975044) and the Fundamental Research Funds for the Central Universities (No. FRF-TP-19-019A3).

  2. Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.

  3. Conflict of interest: The authors state no conflict of interest.

References

[1] Hirose K. 2011 Fukushima Dai-ichi nuclear power plant accident: summary of regional radioactive deposition monitoring results. J Environ Radioactivity. 2012;111:13–7. 10.1016/j.jenvrad.2011.09.003.

[2] Sohu. Available at: http://www.sohu.com/a/114982338_116897 (accessed Jan. 10, 2021).

[3] Sohu. Available at: https://www.sohu.com/a/362028083_115354 (accessed Jan. 10, 2021).

[4] Han G, Jjs B, Kl C, Kcn D, Sjh E, Hck E. An investigation of medical radiation detection using CMOS image sensors in smartphones. Nucl Instrum Methods Phys Res A. 2016;823:126–34. 10.1016/j.nima.2016.04.007.

[5] Wang X, Zhang SL, Song GX, Guo DF, Ma CW, Wang F. Remote measurement of low-energy radiation based on ARM board and ZigBee wireless communication. Nucl Sci Tech. 2018;29(1):31–6. 10.1007/s41365-017-0344-2.

[6] Shoulong X, Shuliang Z, Youjun H. γ-ray detection using commercial off-the-shelf CMOS and CCD image sensors. IEEE Sens J. 2017;17(20):6599–604. 10.1109/JSEN.2017.2732499.

[7] Wang C, Hu S, Gao C, Feng C. Nuclear radiation degradation study on HD camera based on CMOS image sensor at different dose rates. Sensors. 2018;18(2):514. 10.3390/s18020514.

[8] Pérez M, Lipovetzky J, Haro MS, Sidelnik I, Blostein JJ, Bessia FA, et al. Particle detection and classification using commercial off the shelf CMOS image sensors. Nucl Instrum Methods Phys Res A. 2016;827:171–80. 10.1016/j.nima.2016.04.072.

[9] Cheng QQ, Yuan YZ, Ma CW, Wang F. Gamma measurement based on CMOS sensor and ARM microcontroller. Nucl Sci Tech. 2017;28(9):1–5. 10.1007/s41365-017-0276-x.

[10] Peng ZY, Gu YT, Xie YG, Yan WQ, Zhao H, Li GL, et al. Studies of an X-ray imaging detector based on THGEM and CCD camera. Radiat Detect Technol Methods. 2018;2(1):1–8. 10.1007/s41605-018-0058-y.

[11] Cheng QQ, Ma CW, Yuan YZ, Wang F, Jin F, Liu XF. X-ray detection based on complementary metal-oxide-semiconductor sensors. Nucl Sci Tech. 2019;30(1):1–6. 10.1007/s41365-018-0528-4.

[12] Zheng R, Liu C, Wei X, Wang J, Hu Y. Dark-current estimation method for CMOS APS sensors in mixed radiation environment. Nucl Instrum Methods Phys Res A. 2019;924:230–5. 10.1016/j.nima.2018.09.146.

[13] Virmontois C, Belloir JM, Beaumel M, Vriet A, Perrot N, Sellier C, et al. Dose and single-event effects on a color CMOS camera for space exploration. IEEE Trans Nucl Sci. 2018;66(1):104–10. 10.1109/TNS.2018.2885822.

[14] Van Hoey O, Salavrakos A, Marques A, Nagao A, Willems R, Vanhavere F, et al. Radiation dosimetry properties of smartphone CMOS sensors. Radiat Prot Dosimetry. 2016;168(3):314–21. 10.1093/rpd/ncv352.

[15] Carrel F, Abou Khalil R, Colas S, De Toro D, Ferrand G, Gaillard-Lecanu E, et al. GAMPIX: a new gamma imaging system for radiological safety and homeland security purposes. 2011 IEEE Nuclear Science Symposium Conference Record. 2011 Oct 23–29; Valencia, Spain: IEEE; 2011. p. 4739–44. 10.1109/NSSMIC.2011.6154706.

[16] Wagner E, Sorom R, Wiles L. Radiation monitoring for the masses. Health Phys. 2016;110(1):37–44. 10.1097/HP.0000000000000407.

[17] Wei QY, Bai R, Wang Z, Yao RT, Gu Y, Dai TT. Surveying ionizing radiations in real time using a smartphone. Nucl Sci Tech. 2017;28(5):1–5. 10.1007/s41365-017-0215-x.

[18] Wei QY, Wang Z, Dai TT, Gu Y. Nuclear radiation detection based on un-covered CMOS camera under static scene. At Energy Sci Technol. 2017;51(1):175–9. 10.7538/yzk.2017.51.01.0175. In Chinese.

[19] Huang G, Yan Z, Dai T, Lee R, Wei Q. Simultaneous measurement of ionizing radiation and heart rate using a smartphone camera. Open Phys. 2020;18(1):566–73. 10.1515/phys-2020-0181.

[20] Yan Z, Hu Y, Huang G, Dai T, Zhang Z, Wei Q. Detecting nuclear radiation with an uncovered CMOS camera and a long-wavelength pass filter. IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC). 2019 Oct 26–Nov 2; Manchester, United Kingdom: IEEE; 2019. p. 1–3. 10.1109/NSS/MIC42101.2019.9059807.

[21] Yan Z, Wei Q, Huang G, Hu Y, Zhang Z, Dai T. Nuclear radiation detection based on uncovered CMOS camera under dynamic scene. Nucl Instrum Methods Phys Res A. 2020;956:163383. 10.1016/j.nima.2019.163383.

[22] Jiao L, Zhang F, Liu F, Yang S, Li L, Feng Z, et al. Survey of deep learning-based object detection. IEEE Access. 2019;7:128837–68. 10.1109/ACCESS.2019.2939201.

[23] Tajbakhsh N, Shin JY, Gurudu SR, Hurst RT, Kendall CB, Gotway MB, et al. Convolutional neural networks for medical image analysis: full training or fine tuning? IEEE Trans Med Imaging. 2016;35(5):1299–312. 10.1109/TMI.2016.2535302.

[24] Song T-A, Chowdhury SR, Yang F, Dutta J. Super-resolution PET imaging using convolutional neural networks. IEEE Trans Comput Imaging. 2020;6:518–28. 10.1109/TCI.2020.2964229.

[25] Kromp F, Fischer L, Bozsaky E, Ambros IM, Taschner-Mandl S. Evaluation of deep learning architectures for complex immunofluorescence nuclear image segmentation. IEEE Trans Med Imaging. 2021;99:1. 10.1109/TMI.2021.3069558.

[26] OV2710-1E. 1080p/720p HD color CMOS image sensor with OmniPixel®3-HS technology. Available from: https://www.ovt.com/sensors/OV2710-1E (accessed Nov. 9, 2019).

[27] Mao J, Guo Z, Geng H, Zhang B, Cao Z, Niu W. Design of visual navigation system of farmland tracked robot based on raspberry pie. 2019 14th IEEE Conference on Industrial Electronics and Applications. 2019 Jun 19–21; Xi’an, China: IEEE; 2019. p. 573–7. 10.1109/ICIEA.2019.8834077.

[28] Xiao K, Du Z, Yang L. An embedded wireless sensor system for multi-service agricultural information acquisition. Sens Lett. 2017;15(11):907–14. 10.1166/sl.2017.3897.

[29] Piella G. A general framework for multiresolution image fusion: from pixels to regions. Inf Fusion. 2003;4(4):259–80. 10.1016/S1566-2535(03)00046-0.

[30] Zhang Z, Blum RS. A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application. Proc IEEE. 1999;87(8):1315–26. 10.1109/5.775414.

[31] Wei C, Blum RS. Theoretical analysis of correlation-based quality measures for weighted averaging image fusion. Inf Fusion. 2010;11(4):301–10. 10.1016/j.inffus.2009.10.006.

[32] Yang G, Tong T, Lu SY, Li ZY, Zheng Y. Fusion of infrared and visible images based on multi-features. Opt Precis Eng. 2014;22(2):489–96. 10.3788/OPE.20142202.0489. In Chinese.

[33] Azis NA, Jeong YS, Choi HJ, Iraqi Y. Weighted averaging fusion for multi-view skeletal data and its application in action recognition. IET Computer Vis. 2016;10(2):134–42. 10.1049/iet-cvi.2015.0146.

[34] LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436–44. 10.1038/nature14539.

[35] Chen YN, Han CC, Wang CT, Jeng BS, Fan KC. The application of a convolution neural network on face and license plate detection. 18th International Conference on Pattern Recognition (ICPR'06). 2006 Aug 20–24; Hong Kong, China: IEEE; 2006. p. 552–5. 10.1109/ICPR.2006.1115.

[36] Bobić V, Tadić P, Kvaščev G. Hand gesture recognition using neural network based techniques. 13th IEEE Symposium on Neural Networks and Applications (NEUREL). 2016 Nov 22–24; Belgrade, Serbia: IEEE; 2016. p. 1–4. 10.1109/NEUREL.2016.7800104.

[37] Jiang Y, Chen L, Zhang H, Xiao X. Breast cancer histopathological image classification using convolutional neural networks with small SE-ResNet module. PLoS One. 2019;14(3):e0214587. 10.1371/journal.pone.0214587.

[38] Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, et al. Going deeper with convolutions. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2015 Jun 7–12; Boston, MA, USA: IEEE; 2015. p. 1–9. 10.1109/CVPR.2015.7298594.

[39] Sharma N, Jain V, Mishra A. An analysis of convolutional neural networks for image classification. Proc Computer Sci. 2018;132:377–84. 10.1016/j.procs.2018.05.198.

Received: 2021-10-17
Revised: 2021-12-22
Accepted: 2022-01-13
Published Online: 2022-02-08

© 2022 Zhangfa Yan et al., published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.

  64. Laser cutting tobacco slice experiment: Effects of cutting power and cutting speed
  65. Performance evaluation of common-aperture visible and long-wave infrared imaging system based on a comprehensive resolution
  66. Diesel engine small-sample transfer learning fault diagnosis algorithm based on STFT time–frequency image and hyperparameter autonomous optimization deep convolutional network improved by PSO–GWO–BPNN surrogate model
  67. Analyses of electrokinetic energy conversion for periodic electromagnetohydrodynamic (EMHD) nanofluid through the rectangular microchannel under the Hall effects
  68. Propagation properties of cosh-Airy beams in an inhomogeneous medium with Gaussian PT-symmetric potentials
  69. Dynamics investigation on a Kadomtsev–Petviashvili equation with variable coefficients
  70. Study on fine characterization and reconstruction modeling of porous media based on spatially-resolved nuclear magnetic resonance technology
  71. Optimal block replacement policy for two-dimensional products considering imperfect maintenance with improved Salp swarm algorithm
  72. A hybrid forecasting model based on the group method of data handling and wavelet decomposition for monthly rivers streamflow data sets
  73. Hybrid pencil beam model based on photon characteristic line algorithm for lung radiotherapy in small fields
  74. Surface waves on a coated incompressible elastic half-space
  75. Radiation dose measurement on bone scintigraphy and planning clinical management
  76. Lie symmetry analysis for generalized short pulse equation
  77. Spectroscopic characteristics and dissociation of nitrogen trifluoride under external electric fields: Theoretical study
  78. Cross electromagnetic nanofluid flow examination with infinite shear rate viscosity and melting heat through Skan-Falkner wedge
  79. Convection heat–mass transfer of generalized Maxwell fluid with radiation effect, exponential heating, and chemical reaction using fractional Caputo–Fabrizio derivatives
  80. Weak nonlinear analysis of nanofluid convection with g-jitter using the Ginzburg--Landau model
  81. Strip waveguides in Yb3+-doped silicate glass formed by combination of He+ ion implantation and precise ultrashort pulse laser ablation
  82. Best selected forecasting models for COVID-19 pandemic
  83. Research on attenuation motion test at oblique incidence based on double-N six-light-screen system
  84. Review Articles
  85. Progress in epitaxial growth of stanene
  86. Review and validation of photovoltaic solar simulation tools/software based on case study
  87. Brief Report
  88. The Debye–Scherrer technique – rapid detection for applications
  89. Rapid Communication
  90. Radial oscillations of an electron in a Coulomb attracting field
  91. Special Issue on Novel Numerical and Analytical Techniques for Fractional Nonlinear Schrodinger Type - Part II
  92. The exact solutions of the stochastic fractional-space Allen–Cahn equation
  93. Propagation of some new traveling wave patterns of the double dispersive equation
  94. A new modified technique to study the dynamics of fractional hyperbolic-telegraph equations
  95. An orthotropic thermo-viscoelastic infinite medium with a cylindrical cavity of temperature dependent properties via MGT thermoelasticity
  96. Modeling of hepatitis B epidemic model with fractional operator
  97. Special Issue on Transport phenomena and thermal analysis in micro/nano-scale structure surfaces - Part III
  98. Investigation of effective thermal conductivity of SiC foam ceramics with various pore densities
  99. Nonlocal magneto-thermoelastic infinite half-space due to a periodically varying heat flow under Caputo–Fabrizio fractional derivative heat equation
  100. The flow and heat transfer characteristics of DPF porous media with different structures based on LBM
  101. Homotopy analysis method with application to thin-film flow of couple stress fluid through a vertical cylinder
  102. Special Issue on Advanced Topics on the Modelling and Assessment of Complicated Physical Phenomena - Part II
  103. Asymptotic analysis of hepatitis B epidemic model using Caputo Fabrizio fractional operator
  104. Influence of chemical reaction on MHD Newtonian fluid flow on vertical plate in porous medium in conjunction with thermal radiation
  105. Structure of analytical ion-acoustic solitary wave solutions for the dynamical system of nonlinear wave propagation
  106. Evaluation of ESBL resistance dynamics in Escherichia coli isolates by mathematical modeling
  107. On theoretical analysis of nonlinear fractional order partial Benney equations under nonsingular kernel
  108. The solutions of nonlinear fractional partial differential equations by using a novel technique
  109. Modelling and graphing the Wi-Fi wave field using the shape function
  110. Generalized invexity and duality in multiobjective variational problems involving non-singular fractional derivative
  111. Impact of the convergent geometric profile on boundary layer separation in the supersonic over-expanded nozzle
  112. Variable stepsize construction of a two-step optimized hybrid block method with relative stability
  113. Thermal transport with nanoparticles of fractional Oldroyd-B fluid under the effects of magnetic field, radiations, and viscous dissipation: Entropy generation; via finite difference method
  114. Special Issue on Advanced Energy Materials - Part I
  115. Voltage regulation and power-saving method of asynchronous motor based on fuzzy control theory
  116. The structure design of mobile charging piles
  117. Analysis and modeling of pitaya slices in a heat pump drying system
  118. Design of pulse laser high-precision ranging algorithm under low signal-to-noise ratio
  119. Special Issue on Geological Modeling and Geospatial Data Analysis
  120. Determination of luminescent characteristics of organometallic complex in land and coal mining
  121. InSAR terrain mapping error sources based on satellite interferometry
Downloaded on 13.10.2025 from https://www.degruyterbrill.com/document/doi/10.1515/phys-2022-0006/html?lang=en&srsltid=AfmBOooYjdXLaI6Cws_ue6c4Be5RVNEiKWXZRhh08Nb7IzHUU1JrYv-Z