Subtleties of extrinsic calibration of cameras with non-overlapping fields of view

Article

  • Zaijuan Li and Volker Willert
Published/Copyright: June 18, 2019

Abstract

The calibration of the relative pose between rigidly connected cameras with non-overlapping fields of view (FOV) is a prerequisite for many applications. In this paper, the subtleties of the experimental realization of such calibration optimization methods as those in (Z. Liu, et al., Measurement Science and Technology, 2011; Z. Li, V. Willert, Intelligent Transportation Systems (ITSC), 2018) are presented. Two strategies that can be adapted to certain optimization processes to find better local minima are evaluated. The first strategy is a careful measurement acquisition of pose pairs for solving the calibration problem, which improves the accuracy of the initial value for the subsequent non-linear refinement. The second strategy is the introduction of a quality measure for the image data used for the calibration, based on the projection size of the known planar calibration patterns in the image. We show that introducing an additional weighting to the optimization objective, chosen as a function of that quality measure, improves calibration accuracy and increases robustness against noise. The above strategies are integrated into different setups and their improvement is demonstrated both in simulation and in real-world experiments.
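The second strategy can be sketched in a few lines: weight each calibration image by how large the planar pattern projects into it, on the assumption that larger projections yield more reliable corner measurements. The helper names and the area-ratio normalization below are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def polygon_area(pts):
    # Shoelace formula for the area of a simple polygon given as (N, 2)
    # vertices in consecutive order.
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def projection_weight(outer_corners_px, image_size):
    # Quality weight in (0, 1]: projected pattern area relative to image area.
    # outer_corners_px: pixel coordinates of the pattern's outer corners.
    w, h = image_size
    return polygon_area(outer_corners_px) / float(w * h)

# Example: two detections of the same planar target, one near, one far.
img = (640, 480)
near = np.array([[100, 100], [500, 110], [490, 400], [110, 390]], float)
far = np.array([[280, 220], [360, 222], [358, 280], [282, 278]], float)

w_near = projection_weight(near, img)
w_far = projection_weight(far, img)

# A weighted objective would then scale each image's residuals by its weight,
# e.g. total_cost = sum(w_i * ||residuals_i||**2) over all calibration images,
# so that small, noise-prone projections contribute less to the refinement.
```

In this sketch the near detection receives a much larger weight than the far one, which is the intended effect: images in which the pattern covers only a few pixels are down-weighted in the non-linear refinement.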

Zusammenfassung

The calibration of the relative pose between rigidly connected cameras that do not have overlapping fields of view is a necessary prerequisite for many computer-vision applications. This article discusses the technical details that must be observed in the experimental realization of the calibration methods of (Z. Liu, et al., Measurement Science and Technology, 2011; Z. Li, V. Willert, Intelligent Transportation Systems (ITSC), 2018) in order to obtain accurate calibration results. Two strategies are presented that enable the optimization process to find better local minima of the non-convex objective functions used for calibration. The first strategy concerns the acquisition and selection of measurements from suitable image pairs, which yields better initial values for solving the non-convex optimization problem. The second strategy introduces a quality measure based on the size of the reprojected area of the calibration target in the images used for calibration. This measure can be applied as an additional weighting in the objective function and produces more accurate calibration results that are more robust against errors in the image-coordinate measurements. Both strategies are evaluated for different camera configurations, both in simulation and on real measurement data.

About the authors

Zaijuan Li

Zaijuan Li received the M.Sc. degree in Electromechanical Engineering with major in Robotics from Harbin University of Science and Technology, Harbin, China, in 2012. She is currently working toward the Dr.-Ing. degree in the area of computer vision with the Control Methods and Robotics Laboratory, TU Darmstadt, Darmstadt, Germany. Her main research interests are in the field of multi-camera calibration and cooperative mobile vision systems as well as multi-robot localization.

Volker Willert

Volker Willert received the Dipl.-Ing. degree in electrical engineering and information technology and the Dr.-Ing. degree in control theory and robotics, with a focus on dynamical computer vision, from TU Darmstadt, Darmstadt, Germany, in 2002 and 2006, respectively. From 2005 to 2009, he was a Senior Scientist at Honda Research Institute Europe GmbH. Since July 2009, he has been with the Chair of the Control Methods and Robotics Laboratory, TU Darmstadt, and heads the research group Machine Vision and Autonomous Systems. His main research interests are in the fields of machine intelligence, computer vision, distributed controls, and machine learning for mobile robotics, multiagent systems, and driver assistance systems.

References

1. Z. Liu, G. Zhang, Z. Wei, and J. Sun, “A global calibration method for multiple vision sensors based on multiple targets,” Measurement Science and Technology, vol. 22, no. 12, p. 125102, 2011. doi:10.1088/0957-0233/22/12/125102

2. Z. Li and V. Willert, “Eye-to-eye calibration for cameras with disjoint fields of view,” in Intelligent Transportation Systems (ITSC). IEEE, 2018. doi:10.1109/ITSC.2018.8569457

3. M. Kaess and F. Dellaert, “Probabilistic structure matching for visual SLAM with a multi-camera rig,” Computer Vision and Image Understanding, vol. 114, no. 2, pp. 286–296, 2010. doi:10.1016/j.cviu.2009.07.006

4. E. Altuğ, J. P. Ostrowski, and C. J. Taylor, “Control of a quadrotor helicopter using dual camera visual feedback,” The International Journal of Robotics Research, vol. 24, no. 5, pp. 329–341, 2005. doi:10.1177/0278364905053804

5. G. H. Lee, F. Fraundorfer, and M. Pollefeys, “Structureless pose-graph loop-closure with a multi-camera system on a self-driving car,” in 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2013, pp. 564–571. doi:10.1109/IROS.2013.6696407

6. Y. Suzuki, M. Koyamaishi, T. Yendo, T. Fujii, and M. Tanimoto, “Parking assistance using multi-camera infrastructure,” in IEEE Intelligent Vehicles Symposium. IEEE, 2005, pp. 106–111. doi:10.1109/IVS.2005.1505086

7. B. Petit, J.-D. Lesage, C. Menier, J. Allard, J.-S. Franco, B. Raffin, E. Boyer, and F. Faure, “Multicamera real-time 3D modeling for telepresence and remote collaboration,” International Journal of Digital Multimedia Broadcasting, vol. 2010, 2010. doi:10.1155/2010/247108

8. S. Nair, G. Panin, M. Wojtczyk, C. Lenz, T. Friedelhuber, and A. Knoll, “A multi-camera person tracking system for robotic applications in virtual reality TV studio,” in Proceedings of the 17th IEEE/RSJ International Conference on Intelligent Robots and Systems, 2008. doi:10.1109/IROS.2008.4650727

9. T. Strauß, J. Ziegler, and J. Beck, “Calibrating multiple cameras with non-overlapping views using coded checkerboard targets,” in 2014 IEEE 17th International Conference on Intelligent Transportation Systems (ITSC). IEEE, 2014, pp. 2623–2628. doi:10.1109/ITSC.2014.6958110

10. R. Xia, M. Hu, J. Zhao, S. Chen, Y. Chen, and S. Fu, “Global calibration of non-overlapping cameras: state of the art,” Optik – International Journal for Light and Electron Optics, vol. 158, pp. 951–961, 2018. doi:10.1016/j.ijleo.2017.12.159

11. J. Wang, L. Wu, M. Q.-H. Meng, and H. Ren, “Towards simultaneous coordinate calibrations for cooperative multiple robots,” in 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2014, pp. 410–415. doi:10.1109/IROS.2014.6942592

12. P. Wunsch, S. Winkler, and G. Hirzinger, “Real-time pose estimation of 3D objects from camera images using neural networks,” in Proceedings of the 1997 IEEE International Conference on Robotics and Automation, vol. 4. IEEE, 1997, pp. 3232–3237.

Received: 2019-03-14
Accepted: 2019-05-06
Published Online: 2019-06-18
Published in Print: 2019-07-26

© 2019 Walter de Gruyter GmbH, Berlin/Boston
