A distributed cognitive approach in cybernetic modelling of human vision in a robotic swarm

  • Michal Podpora, Aleksandra Kawala-Sterniuk, Viktoria Kovalchuk, Grzegorz Bialic and Pawel Piekielny
Published/Copyright: July 21, 2020

Abstract

Objectives

This paper proposes a novel approach to image analysis in Machine Vision applications.

Methods

The presented concept comprises two elements: (1) shifting some of the complex image processing and understanding algorithms from a mobile robot to a distributed computer, and (2) designing the cognitive system (in the distributed computer) in such a way that it is shared by numerous robots. The authors focus on image processing and propose to accelerate vision understanding by means of Cooperative Vision (CoV), i.e., by gathering video input from cooperating robots and processing it in a centralized system, as sketched below.
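
The abstract does not give implementation details, so the following Python sketch only illustrates the described split between lightweight robots and a shared processing node, under the assumption that frames are handed over through a queue-like transport. The Frame container, the detect_objects() stub and the queue itself are illustrative assumptions, not elements of the authors' system.

    # A minimal sketch, not the authors' implementation: each robot only
    # captures frames and forwards them, while a shared "distributed
    # computer" process runs the expensive recognition step for the
    # whole swarm. The queue transport and detect_objects() are assumptions.
    from dataclasses import dataclass
    from queue import Queue

    import numpy as np


    @dataclass
    class Frame:
        robot_id: str        # which robot captured the image
        image: np.ndarray    # raw camera frame (H x W x 3)


    def detect_objects(image: np.ndarray) -> list:
        """Stub for the heavyweight, centralized recognition pipeline."""
        return []


    def robot_loop(robot_id: str, camera_frames, outbox: Queue) -> None:
        """Runs on the robot: grab frames and hand them off, nothing more."""
        for image in camera_frames:
            outbox.put(Frame(robot_id, image))


    def cov_server_loop(inbox: Queue, results: dict) -> None:
        """Runs on the distributed computer: process frames from all robots."""
        while True:
            frame = inbox.get()
            if frame is None:            # sentinel value stops the server
                break
            results[frame.robot_id] = detect_objects(frame.image)

In this arrangement the robots stay thin clients that only capture and forward frames, while a single cov_server_loop concentrates the expensive recognition work for the whole swarm, which reflects the resource-sharing idea behind CoV.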

Results

To verify the purposefulness of such an approach, a comparative study is currently being conducted, involving a classical single-camera Computer Vision (CV) mobile robot and two (or more) single-camera CV robots cooperating in CoV mode.

Conclusions

The CoV system is being designed and implemented so that the algorithm is able to utilize multiple video sources for the recognition of objects in the very same scene (see the sketch below).
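
How the individual sources are combined is not specified in the abstract; one simple, hypothetical fusion rule, included here only as an assumption, is majority-style voting on the object labels reported by the cooperating robots, with an arbitrary vote threshold:

    # Hypothetical fusion rule (assumption, not from the paper): keep the
    # object labels confirmed by at least `min_votes` cooperating robots.
    from collections import Counter


    def fuse_detections(per_source_labels, min_votes=2):
        """per_source_labels: one set of object labels per video source."""
        votes = Counter(label
                        for labels in per_source_labels
                        for label in labels)
        return {label for label, count in votes.items() if count >= min_votes}


    # Two robots observe the same scene from different viewpoints:
    print(fuse_detections([{"chair", "table"}, {"table", "lamp"}]))  # {'table'}

With only two robots and min_votes=2 the fused set contains just the labels both robots agree on; adding more cooperating robots makes the consensus correspondingly stronger.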


Corresponding author: Michal Podpora, Opole University of Technology, Faculty of Electrical Engineering, Automatic Control and Informatics, Opole, Poland, E-mail:

  1. Research funding: None declared.

  2. Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.

  3. Competing interests: Authors state no conflict of interest.

  4. Ethical approval: The conducted research is not related to either human or animal use.


Received: 2020-05-07
Accepted: 2020-06-08
Published Online: 2020-07-21

© 2020 Walter de Gruyter GmbH, Berlin/Boston
