Evaluation and analysis of passive optical network in investigating real-time cell phone detection in restricted zones

Neeraj Kumar, Sharvan Kumar Garg, Sanjive Tyagi and Vikas Sharma

Abstract

This research addresses the escalating problem of unauthorized mobile phone use in restricted areas such as examination halls, defense installations, and secure business environments. To mitigate risks ranging from academic dishonesty to data breaches, the study proposes a real-time mobile phone detection system that combines computer vision, machine learning, and IoT. Implemented in Python with an object detection model pre-trained on the COCO (Common Objects in Context) dataset, the system identifies mobile devices within a webcam’s field of view. Upon detection, it captures an image and sends an immediate email alert to designated authorities, strengthening security enforcement. The pre-trained model’s robustness allows effective detection under diverse conditions, including low light, partial occlusion, and varied phone types, and eliminates the need to collect large custom training datasets, making the system easier and quicker to deploy. Python’s integration capabilities and rich libraries ensure seamless operation and high computational efficiency. Experimental results demonstrate high accuracy and minimal latency, supporting timely responses in sensitive settings. Beyond its immediate applications, this work exemplifies the potential of integrating AI, computer vision, and IoT to tackle real-world challenges. The system’s adaptability and interdisciplinary foundation offer a forward-looking solution for maintaining security in dynamically changing environments, marking a significant advancement in digital surveillance and control mechanisms.
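
To give a concrete sense of the pipeline the abstract describes, the following Python sketch approximates one possible implementation: it loads an object detector pre-trained on COCO through OpenCV’s DNN module, watches a webcam feed, saves a snapshot when a cell phone is detected, and emails the snapshot to a designated address. This is not the authors’ code; the model files (frozen_inference_graph.pb, ssd_mobilenet_v3_large_coco.pbtxt), the label file, the SMTP host, and all addresses are illustrative assumptions.

    # Illustrative sketch only (not the authors' implementation). Assumes an
    # SSD-MobileNetV3 network pre-trained on COCO, its OpenCV .pbtxt config, and a
    # label file with one COCO class name per line; SMTP details are placeholders.
    import os
    import smtplib
    import time
    from email.message import EmailMessage

    import cv2

    CONF_THRESHOLD = 0.5  # minimum detection confidence

    def load_detector(weights="frozen_inference_graph.pb",
                      config="ssd_mobilenet_v3_large_coco.pbtxt"):
        # OpenCV's DetectionModel wraps preprocessing, the forward pass, and box decoding.
        model = cv2.dnn_DetectionModel(weights, config)
        model.setInputSize(320, 320)
        model.setInputScale(1.0 / 127.5)
        model.setInputMean((127.5, 127.5, 127.5))
        model.setInputSwapRB(True)
        return model

    def load_labels(path="coco_labels.txt"):
        # One class name per line; this model's class ids index the list 1-based.
        with open(path) as f:
            return [line.strip() for line in f if line.strip()]

    def send_alert(image_path, recipient="security@example.org"):
        # Email the captured frame to the designated authority.
        msg = EmailMessage()
        msg["Subject"] = "Alert: mobile phone detected in restricted zone"
        msg["From"] = os.environ.get("ALERT_SENDER", "monitor@example.org")
        msg["To"] = recipient
        msg.set_content("A mobile phone was detected; the captured frame is attached.")
        with open(image_path, "rb") as f:
            msg.add_attachment(f.read(), maintype="image", subtype="jpeg",
                               filename=os.path.basename(image_path))
        with smtplib.SMTP_SSL("smtp.example.org", 465) as smtp:
            smtp.login(msg["From"], os.environ["ALERT_PASSWORD"])
            smtp.send_message(msg)

    def monitor(camera_index=0):
        model, labels = load_detector(), load_labels()
        cap = cv2.VideoCapture(camera_index)
        try:
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                class_ids, _confs, boxes = model.detect(frame, confThreshold=CONF_THRESHOLD)
                if len(class_ids) == 0:
                    continue
                for cid, box in zip(class_ids.flatten(), boxes):
                    if labels[int(cid) - 1] == "cell phone":
                        x, y, w, h = box
                        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
                        snapshot = f"detection_{int(time.time())}.jpg"
                        cv2.imwrite(snapshot, frame)  # capture the evidence frame
                        send_alert(snapshot)          # immediate email notification
                        break
        finally:
            cap.release()

    if __name__ == "__main__":
        monitor()

A production deployment would additionally need alert rate-limiting and secure credential handling; the sketch omits both for brevity.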


Corresponding author: Vikas Sharma, Department of Electronics and Communication Engineering, Swami Vivekanand Subharti University, Meerut, Uttar Pradesh, India, E-mail:

Acknowledgments

Thanks to all my co-authors for their support.

  1. Research ethics: Not applicable.

  2. Informed consent: We are all fully responsible for this paper.

  3. Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.

  4. Use of Large Language Models, AI and Machine Learning Tools: None declared.

  5. Conflict of interest: The authors state no conflict of interest.

  6. Research funding: None declared.

  7. Data availability: Not applicable.

References

1. Chen, LC, Papandreou, G, Kokkinos, I, Murphy, K, Yuille, AL. DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Trans Pattern Anal Mach Intell 2018;40:834–48. https://doi.org/10.1109/TPAMI.2017.2699184.

2. Deng, J, Dong, W, Socher, R, Li, LJ, Li, K, Fei-Fei, L. ImageNet: a large-scale hierarchical image database. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR); 2009:248–55 pp. https://doi.org/10.1109/CVPR.2009.5206848.

3. Everingham, M, Van Gool, L, Williams, CKI, Winn, J, Zisserman, A. The PASCAL visual object classes (VOC) challenge. Int J Comput Vis 2010;88:303–38. https://doi.org/10.1007/s11263-009-0275-4.

4. Girshick, R. Fast R-CNN. In: Proceedings of the IEEE international conference on computer vision (ICCV); 2015:1440–8 pp. https://doi.org/10.1109/ICCV.2015.169.

5. Huang, G, Liu, Z, Van Der Maaten, L, Weinberger, KQ. Densely connected convolutional networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR); 2017:4700–8 pp. https://doi.org/10.1109/CVPR.2017.243.

6. Liu, W, Anguelov, D, Erhan, D, Szegedy, C, Reed, S, Fu, CY, et al. SSD: single shot multibox detector. In: European conference on computer vision (ECCV). Springer; 2016:21–37 pp. https://doi.org/10.1007/978-3-319-46448-0_2.

7. Long, J, Shelhamer, E, Darrell, T. Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR); 2015:3431–40 pp. https://doi.org/10.1109/CVPR.2015.7298965.

8. Russakovsky, O, Deng, J, Su, H, Krause, J, Satheesh, S, Ma, S, et al. ImageNet large scale visual recognition challenge. Int J Comput Vis 2015;115:211–52. https://doi.org/10.1007/s11263-015-0816-y.

9. Szegedy, C, Liu, W, Jia, Y, Sermanet, P, Reed, S, Anguelov, D, et al. Going deeper with convolutions. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR); 2015:1–9 pp. https://doi.org/10.1109/CVPR.2015.7298594.

10. Tan, M, Le, QV. EfficientNet: rethinking model scaling for convolutional neural networks. In: Proceedings of the international conference on machine learning (ICML); 2019:6105–14 pp.

11. Uijlings, JRR, van de Sande, KEA, Gevers, T, Smeulders, AWM. Selective search for object recognition. Int J Comput Vis 2013;104:154–71. https://doi.org/10.1007/s11263-013-0620-5.

12. Vaswani, A, Shazeer, N, Parmar, N, Uszkoreit, J, Jones, L, Gomez, AN, et al. Attention is all you need. In: Advances in neural information processing systems (NeurIPS); 2017:5998–6008 pp.

13. Goodfellow, I, Bengio, Y, Courville, A. Deep learning. Cambridge, MA: MIT Press; 2016.

14. He, K, Zhang, X, Ren, S, Sun, J. Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR); 2016:770–8 pp. https://doi.org/10.1109/CVPR.2016.90.

15. Krizhevsky, A, Sutskever, I, Hinton, GE. ImageNet classification with deep convolutional neural networks. In: Advances in neural information processing systems (NeurIPS); 2012:1097–105 pp.

16. Lin, TY, Maire, M, Belongie, S, Hays, J, Perona, P, Ramanan, D, et al. Microsoft COCO: common objects in context. In: European conference on computer vision (ECCV). Springer; 2014:740–55 pp. https://doi.org/10.1007/978-3-319-10602-1_48.

17. Redmon, J, Divvala, S, Girshick, R, Farhadi, A. You only look once: unified, real-time object detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR); 2016:779–88 pp. https://doi.org/10.1109/CVPR.2016.91.

18. Ren, S, He, K, Girshick, R, Sun, J. Faster R-CNN: towards real-time object detection with region proposal networks. In: Advances in neural information processing systems (NeurIPS); 2015:91–9 pp.

Received: 2025-05-25
Accepted: 2025-06-12
Published Online: 2025-07-14

© 2025 Walter de Gruyter GmbH, Berlin/Boston
