Framework for 2D-3D image fusion of infrared thermography with preoperative MRI
Nico Hoffmann, Florian Weidner
Abstract
Multimodal medical image fusion combines the information of two or more images in order to improve their diagnostic value. While previous applications mainly focus on merging images from computed tomography, magnetic resonance imaging (MRI), ultrasound and single-photon emission computed tomography, we propose a novel approach for the registration and fusion of preoperative 3D MRI with intraoperative 2D infrared thermography. Image-guided neurosurgeries are based on neuronavigation systems, which further allow us to track the position and orientation of arbitrary cameras. Hereby, we are able to relate the 2D coordinate system of the infrared camera to the 3D MRI coordinate system. The registered image data are then combined by calibration-based image fusion in order to map our intraoperative 2D thermographic images onto the respective brain surface recovered from preoperative MRI. In extensive accuracy measurements, we found that the proposed framework achieves a mean accuracy of 2.46 mm.
Introduction
Infrared thermography measures the emitted infrared radiation of the exposed cerebral cortex during neurosurgical interventions. According to the Stefan-Boltzmann law, this radiation correlates with the temperature of the object. In neurosurgery, the temperature distribution of the brain’s surface may help to differentiate between healthy and tumor tissue [6]. One such application was demonstrated by Steiner et al. [22], who unveiled cortical perfusion by intraoperative infrared thermography. However, thermographic images are difficult to analyze as the infrared camera operates on a different spectrum than human visual perception. This introduces additional challenges when transferring information from infrared thermography to the surgical staff. As intraoperative infrared thermography is used as a medical decision support system, it is important for non-technical personnel to be able to interpret thermographic images as well as the results originating from data analysis workflows. One approach to this challenge is to fuse the novel infrared thermography with prevailing imaging modalities.
Image fusion denotes the process of joining multiple image sources into one image. The images show the same scene, yet they may have been recorded by different modalities, contain overlapping parts or both. Image fusion requires the images to be aligned such that structures visible in all of them appear at the same spatial position. This alignment is called image registration and requires establishing a coordinate transform [16] by feature matching [2, 3], by optimizing a similarity measure [12, 15] or by calibration-based approaches [13]. The latter rely on external tracking systems, pre-calibrated instruments or manual measurements. An exhaustive taxonomy of image registration methods can be found in [11].
Feature-based image registration is done by extracting features in all images and solving the correspondence problem. Intensity-based features [7] rely on the detected intensities of single pixels or groups of pixels. In contrast, shape-based features often rely on spatial relationships. Extracting features and solving the correspondence problem might introduce considerable computational complexity. If the application requires real-time or near-real-time performance, it is necessary to use features that can be computed and matched very fast. Furthermore, feature extraction will only work if all images represent similar structures or processes. As thermographic images are mainly influenced by cerebral perfusion while MRI images represent anatomical structures, the identification of corresponding features is cumbersome [4, 20].
Calibration-based image registration approaches are based on the offline estimation of camera parameters. These parameters can be enriched by camera tracking information to establish an efficient coordinate transformation [13]. Contrary to feature-based approaches, there is no need to solve potentially expensive correspondence problems or to minimize similarity measures. Sergeeva et al. [21] demonstrated guided intraoperative resection during neurosurgery by calibration-based image registration. The probe of an ultrasound device is tracked by a neuronavigation system. Given the calibration parameters and the tracking data, the tip of the probe and the acquired ultrasound data can then be correlated with the 3D imaging data of the neuronavigation system. The absence of feature extraction and matching makes calibration-based image registration more robust when registering multimodal images. For this reason, we employ calibration-based image registration to join intraoperative 2D thermographic images with preoperative volumetric MRI data.
After successful image registration, the images are aligned and overlaid. In the affine case covered in this study, projecting the 2D thermographic image onto the MRI dataset’s surface requires a translation, a rotation and a scaling operation. The position of the virtual 2D thermographic image relative to the 3D model resembles the position of the infrared camera’s sensor relative to the subject. In order to fuse both datasets, a projection step is necessary. Projection finds, for each pixel of the 2D thermographic image, the corresponding voxel of the 3D MRI dataset, and fuses the thermographic information with the anatomical information seen in the 3D MRI. Assuming a pin-hole camera model, a projective mapping of every pixel to its corresponding voxel depends on depth information of the imaged 2D scene [23]. If the scene is sufficiently planar, depth information can be recovered from the camera’s focal distance. Other approaches require stereo camera systems [20] or light field cameras [24]. Another efficient [17] approach is texture mapping [4], which was used in [1] for the image fusion of thermographic images with 3D models. Yet, contrary to ray casting [9] or projective mappings, occluded pixels are also assigned a color. Therefore, the 3D model is enriched by interpolated information that has not been measured by the infrared camera. In this work, we use texture mapping as it is fast and promises good results.
Kaczmarek et al. [10] perform image fusion in order to analyze burnt skin during cardiac surgery and for thermal mammography. For this purpose, they combine information of a 3D object model, acquired via 3D scanning and triangulation, with 2D thermographic images from an infrared camera. Contrary to the proposed calibration-based method, their approach relies on the manual selection of corresponding points in the 3D mesh and the thermographic image. Additionally, their approach relies on a fixed camera position and requires recalibration once the camera’s orientation is changed. Sanches et al. [19] also propose to improve the diagnostic usefulness of thermal imaging data by fusing it with CT and MRI.
The contribution of this study is an efficient framework for 2D-3D image registration and image fusion that enhances 2D thermographic images by anatomical structures of preoperative MRI recordings and vice versa. The presented method relies on calibration-based 2D-3D image registration of infrared thermography and MRI. By this, the calibration parameters have to be estimated only once, and image registration and image fusion are primarily reduced to robust and computationally efficient coordinate transformations.
Materials and methods
Image-guided neurosurgery requires referencing the patient undergoing brain surgery with respect to his or her preoperative MRI dataset. This procedure is specific to the employed neuronavigation system. It is achieved by placing fiducial markers on the patient’s head prior to acquiring the MRI dataset. Afterwards, the surgeon has to manually define or segment the positions of these fiducial markers in the recorded MRI dataset. During surgery, the surgeon touches each fiducial marker with the neuronavigation system’s pointing device. Hereby, a coordinate transform between the intraoperative scene and the preoperative MRI dataset is established. By attaching an instrument adapter (IA) to the infrared camera, the neuronavigation system continuously provides us with the position and orientation of the IA relative to the referenced patient.
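The referencing procedure itself is internal to the neuronavigation system; conceptually, it solves a point-based rigid registration between the touched fiducial positions and their counterparts defined in the MRI dataset. A minimal sketch of such a registration (Kabsch algorithm, Python/NumPy; all coordinates are illustrative placeholders, not measured data):

```python
import numpy as np

def rigid_registration(src, dst):
    """Least-squares rigid transform (R, t) with dst ≈ R @ src + t,
    estimated from corresponding points via the Kabsch algorithm.
    src, dst: (N, 3) arrays of corresponding fiducial positions."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # -1 would indicate a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T    # enforce a proper rotation
    t = dst_c - R @ src_c
    return R, t

# Pointer-tip positions of the fiducials in the tracker frame vs. their
# manually defined positions in the MRI frame (placeholder values).
tracker_pts = np.array([[10.0, 0.0, 5.0], [0.0, 12.0, 4.0],
                        [-8.0, 3.0, 6.0], [2.0, -9.0, 7.0]])
mri_pts = np.array([[55.1, 20.2, 30.3], [45.0, 32.1, 29.5],
                    [37.2, 23.0, 31.4], [47.3, 11.2, 32.2]])
R, t = rigid_registration(tracker_pts, mri_pts)
```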
In the discussed application domain, image registration denotes the process of estimating and applying a transformation function that maps 2D points from the infrared camera’s coordinate system to the MRI’s 3D coordinate system. This means that the spatial position and orientation of the virtual 2D thermographic image plane with respect to the surface of the MRI dataset resembles the relative position and orientation of the infrared camera to the exposed brain during neurosurgery. Subsequently, the image is projected onto the surface by texture mapping. The whole workflow is sketched in Figure 1. As affine functions are linear in homogeneous coordinates, we represent all 3D Euclidean coordinates p=(x, y, z) by 4D homogeneous coordinates p′=(x′=λx, y′=λy, z′=λz, λ) (see [8] for details) to simplify notation and computations. Note that p can be recovered from p′ by

p = (x′/λ, y′/λ, z′/λ)
The parameters for image registration have to be estimated once in a calibration step.
Intraoperatively, each 2D image is then annotated by the 3D position of the 2D imaging device in the referenced 3D MRI coordinate system. This allows us to fuse the results of intraoperative data analysis workflows (purple) with preoperative MRI data. Lastly, the results are visualized using Amira.
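To make the homogeneous-coordinate convention concrete, a minimal Python/NumPy sketch of the round trip between Euclidean and homogeneous coordinates (the function names are ours, not part of the framework):

```python
import numpy as np

def to_homogeneous(p, lam=1.0):
    """Lift p = (x, y, z) to p' = (λx, λy, λz, λ)."""
    return np.append(lam * np.asarray(p, float), lam)

def to_euclidean(ph):
    """Recover p = (x'/λ, y'/λ, z'/λ) from homogeneous p'."""
    return ph[:3] / ph[3]

p = np.array([12.0, -3.5, 40.0])
assert np.allclose(to_euclidean(to_homogeneous(p, lam=2.0)), p)
```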
Infrared camera tracking
Neuronavigation systems provide several IAs for tracking arbitrary devices of unknown dimension. One such adapter is attached to the infrared camera and is recognized by the neuronavigation system. In order to prevent the usage of inaccurate calibration parameters, the chosen IA has to be re-calibrated when first used in each image-guided session. This calibration is done with Brainlab’s ICM4 calibration tool and an additional calibration device. The latter had to be developed to enforce a specific orientation of the IA when attached to the calibration device.
Image registration
Tracking the IA allows us to estimate the spatial position and orientation of the infrared camera’s sensor array. In order to project the acquired 2D thermographic image onto the position and orientation of the camera’s sensor array in the MRI coordinate system, each homogeneous pixel coordinate p′ of the thermographic image is transformed by

p′MRI = Mt ⋅ Mc ⋅ Ma ⋅ p′

Mt∈ℝ4×4 denotes the tracking matrix provided by the neuronavigation system, Mc∈ℝ4×4 describes the calibration matrix while Ma∈ℝ4×4 handles additional transformations. Vector p′ denotes the homogeneous coordinate of a pixel of the 2D thermographic image. Three adjustments make up this transformation:

- Tracking position adjustment
- Thermographic image plane adjustment
- Pixel size correction
Tracking position adjustment:
The calibration matrix Mc consists of translational and rotational components MTCalib∈ℝ4×4 and MRCalib∈ℝ4×4, respectively:

Mc = MTCalib ⋅ MRCalib

and describes the orientation adjustment of the image plane with respect to the orientation of the infrared camera (see Figure 2A). The rotational parameters compensate the directional difference between the axis of the IA and the surface normal of the infrared camera’s sensor:

MRCalib = Rx(α) ⋅ Ry(β) ⋅ Rz(γ)

The tracked position of the attached instrument adapter is projected onto the position of the camera’s sensor.
The translation MTCalib and the rotation MRCalib map the instrument adapter’s tracked position and orientation onto the position and orientation of the infrared camera’s sensor array.
Rotation matrices Rx∈ℝ4×4, Ry∈ℝ4×4 and Rz∈ℝ4×4 realize rotations about the x-, y- and z-axes given 4D homogeneous coordinates. Parameter estimation is achieved in two steps: first, the infrared camera is oriented such that it points vertically downwards (as sketched in Figure 2B). The orientation is validated by comparing the respective dimension of all three non-coplanar point pairs P1, P2 and P3 (see Figure 2B). Physically, these points are represented by fiducial markers attached to the infrared camera. Once the camera is oriented correctly, the two points of the pairs P1, P2 and P3 yield the same y, z or x coordinate, respectively. Second, a configuration of rotational parameters (α∈ℝ, β∈ℝ and γ∈ℝ) is estimated manually, such that the normal vector of the infrared camera’s image plane points in the same direction as the physical device (downwards). We chose to realize this process by a graphical user interface that provides feedback about the actual orientation of P1, P2 and P3 as well as the orientation of the normal vector and allows altering the rotational parameters.
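For illustration, the elementary rotations and the resulting calibration matrix can be written as follows (Python/NumPy sketch; the angles and offsets are placeholder values, not the calibration result of this study):

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1]], float)

def rot_y(b):
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1]], float)

def rot_z(g):
    c, s = np.cos(g), np.sin(g)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]], float)

def translation(t1, t2, t3):
    M = np.eye(4)
    M[:3, 3] = (t1, t2, t3)
    return M

# Placeholder parameters standing in for the manually estimated values.
alpha, beta, gamma = np.radians([1.5, -0.8, 92.0])
M_RCalib = rot_x(alpha) @ rot_y(beta) @ rot_z(gamma)  # orientation adjustment
M_TCalib = translation(10.0, -25.0, 4.0)              # IA-to-sensor offset (mm)
M_c = M_TCalib @ M_RCalib                             # calibration matrix Mc
```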
The required translation parameters are estimated using spatially referenced MRI data of an imaging phantom, which is imaged by the infrared camera (see Figures 3 and 4). The parameters (t1, t2 and t3) are determined manually such that the distance between the spatial position of the imaging phantom in the MRI coordinate system and its imaged surface in the 2D thermographic image is minimized.

Setup for 3D-2D image fusion.
(A) Neuronavigation system. (B) Imaging phantom with fiducial markers. (C) Instrument adapter. (D) Infrared camera. (E) Laptop running Amira to process tracking and image data.

Thermographic images are joined with volumetric MRI data by parallel projection.
Texture mapping allows us to directly fuse the thermographic image onto the spatially referenced imaging phantom (A). Gray voxels of (B) originate from an MRI dataset of the imaging phantom, while yellow-to-red colored pixels encode its surface temperature distribution as detected by the infrared camera.
Thermographic image plane adjustment:
The framework uses two additional transformations to ease handling of the 2D thermographic image, which are collapsed into Ma∈ℝ4×4. First, MTranslateCenter∈ℝ4×4 translates the virtual origin of the image from its corner to its center. After this translation, the center of scaling and rotation is at the center of the image. Second, the image is rotated by 180° around the global x- and y-axes by the rotation matrix RXY180∈ℝ4×4, as the infrared camera provides us with a mirrored image.
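A sketch of these two transformations (Python/NumPy; the composition order, translate first, then flip, is one plausible reading of the text):

```python
import numpy as np

W, H = 640, 480  # detector size of the infrared camera in pixels

# Move the virtual image origin from the corner to the image center.
M_translate_center = np.eye(4)
M_translate_center[:3, 3] = (-W / 2.0, -H / 2.0, 0.0)

# Undo the mirroring: Rx(180°) @ Ry(180°) collapses to diag(-1, -1, 1, 1).
M_rot_xy_180 = np.diag([-1.0, -1.0, 1.0, 1.0])

# Applied right-to-left: a pixel is first centered, then flipped.
M_a = M_rot_xy_180 @ M_translate_center
```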
Pixel size correction:
Image fusion using orthogonal projection requires us to rescale the virtual 2D image such that the pixel resolutions of the MRI dataset and the thermographic image match. This process depends on estimating the object distance between the infrared camera and the imaged cortical surface. The correlation between the focus value pF∈ℝ and the object distance f(pF):ℝ→ℝ is modeled by the hyperbola

f(pF) = FAC/(INF − pF) + HH

where INF∈ℝ represents the maximum observable distance, HH∈ℝ approximates the lens’ front principal point and FAC∈ℝ denotes a scaling factor. By creating a training set of focus-value-to-object-distance pairs, the parameters {HH, FAC, INF} can be estimated by least-squares.
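Such a fit can be realized, e.g., with SciPy; the sketch below assumes the hyperbolic model form given above, and the focus-value/distance pairs are fabricated placeholders, not the training set of this study:

```python
import numpy as np
from scipy.optimize import curve_fit

def object_distance(pf, inf, hh, fac):
    """Assumed model f(pF) = FAC / (INF - pF) + HH (see text)."""
    return fac / (inf - pf) + hh

# Placeholder training pairs: digital focus value -> measured distance (cm).
focus_vals = np.array([5200.0, 5900.0, 6300.0, 6573.0, 6800.0])
distances = np.array([15.0, 22.0, 26.0, 29.0, 34.0])

(inf, hh, fac), _ = curve_fit(object_distance, focus_vals, distances,
                              p0=(8000.0, 5.0, 5.0e4))  # rough initial guess
```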
As we know the horizontal and vertical fields of view of the infrared camera’s lens, FOVh∈ℝ and FOVv∈ℝ, we are able to calculate the actual horizontal image size dh∈ℝ and vertical image size dv∈ℝ at an arbitrary focus value F∈ℝ:

dh = 2 ⋅ f(F) ⋅ tan(FOVh/2),  dv = 2 ⋅ f(F) ⋅ tan(FOVv/2)

Both estimates contribute to the image scaling matrix Mscale∈ℝ4×4:

Mscale = diag(dh/640, dv/480, 1, 1)
This matrix allows us to scale the image’s width and height to appropriate size. The constants 640 and 480 originate from the infrared camera’s focal plane array detector size of 640×480 elements (pixels). This transformation, therefore, restores true image dimensions and enables subsequent image fusion.
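A sketch of the pixel size correction and of one plausible composition of the full pixel-to-MRI chain (Python/NumPy; the field-of-view values are assumed placeholders, and the matrices Mt, Mc and Ma stand in as identities here):

```python
import numpy as np

FOV_H, FOV_V = np.radians(30.0), np.radians(23.0)  # assumed lens field of view
W, H = 640, 480                                    # detector elements

def scale_matrix(distance_mm):
    """Mscale = diag(dh/640, dv/480, 1, 1) with dh = 2 d tan(FOVh/2)
    and dv = 2 d tan(FOVv/2) at object distance d."""
    d_h = 2.0 * distance_mm * np.tan(FOV_H / 2.0)
    d_v = 2.0 * distance_mm * np.tan(FOV_V / 2.0)
    return np.diag([d_h / W, d_v / H, 1.0, 1.0])

M_scale = scale_matrix(300.0)               # e.g. 30 cm object distance
M_t = M_c = M_a = np.eye(4)                 # placeholders for tracking/calibration
p_pix = np.array([320.0, 240.0, 0.0, 1.0])  # homogeneous pixel coordinate
p_mri = M_t @ M_c @ M_scale @ M_a @ p_pix   # pixel mapped into MRI space
```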
Image fusion
At this point, the image is transformed such that it is located on the infrared camera’s sensor array, with its size corresponding to the extent of the infrared camera frustum at the focal plane. The final task is the projection of the 2D thermographic image onto the surface of the 3D MRI dataset. Hereby, spatial information of the 3D MRI dataset is merged with temperature information extracted from the thermographic image. In the discussed neurosurgical application, an isosurface of the cerebral cortex has to be computed from preoperative MRI data. Brain segmentation algorithms (see [3]) fulfill this task by removing voxels not representing brain tissue. After successful brain segmentation, an isosurface of the cerebral cortex can be computed. As the registered 2D thermographic image is at the correct spatial position, orientation and scale, we are now able to map it onto this isosurface. As discussed, we realize this projection by texture mapping. Hereby, a 2D image patch is projected onto the 3D surface by mapping image coordinates to surface coordinates. In our case, an orthogonal projection can be used to project the registered 2D thermographic image coordinates along the image normal vector onto the isosurface [1] (see Figure 5).
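A minimal sketch of how orthogonal-projection texture coordinates can be assigned to the isosurface vertices (Python/NumPy; the center-origin image frame and all names are our assumptions, not Amira’s API):

```python
import numpy as np

def orthographic_uv(vertices, M_img_to_mri, width_mm, height_mm):
    """Texture coordinates by orthogonal projection: each surface vertex is
    mapped into the registered image plane frame, its depth along the image
    normal is dropped, and the in-plane position is normalized to [0, 1].
    vertices: (N, 3) isosurface points in MRI coordinates.
    M_img_to_mri: 4x4 transform placing the image plane in MRI space."""
    M_inv = np.linalg.inv(M_img_to_mri)
    vh = np.c_[vertices, np.ones(len(vertices))]           # homogeneous coords
    in_plane = (M_inv @ vh.T).T[:, :2]                     # drop the depth axis
    uv = in_plane / np.array([width_mm, height_mm]) + 0.5  # center origin -> [0, 1]
    inside = np.all((uv >= 0.0) & (uv <= 1.0), axis=1)     # vertices hit by image
    return uv, inside
```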

The proposed framework allows the combination of intraoperative thermographic images with the extracted isosurface of the human brain.
The thermographic image was acquired during the resection of a hypothermic renal cell carcinoma metastasis (blue color).
Results
The validation MRI dataset was acquired by a Siemens Magnetom Verio MRI scanner imaging a novel phantom [26] (see Figure 4). The phantom consists of three plastic parts, an inner balloon, a tube set-up and two syringes as well as the necessary connectors. The casing of the phantom simulates the head of a patient. The lid with a circular hole simulates a trepanation. Fiducial markers are attached to three sides of the imaging phantom. Thermographic images were recorded with an InfraTec VarioCAM HD head 680 S infrared camera. We further employed the neuronavigation system BrainLab VectorVision 2.1.1 cranial, BrainLab iPlanNet 3.0 and the respective IAs. Image registration and image fusion were done in Amira 5.5 [5].
MRI
The 3D MRI dataset of the imaging phantom has a resolution of 160×512×512 voxels. The employed Siemens Magnetom Verio 3T MRI scanner (Siemens Healthcare GmbH, Erlangen, Germany) achieves a voxel resolution of 1 mm×0.48828 mm×0.48828 mm. Therefore, quantization errors during image acquisition lead to a maximum error of twice the resolution. This error also defines a lower bound on the error of the whole image registration and fusion framework, as fine-grained structures are smoothed due to the loss of spatial information during MRI data acquisition. State-of-the-art 11.7 T MRI scanners [25] achieve spatial resolutions up to 0.1 mm and could, therefore, significantly reduce this error term.
Tracking beam accuracy
In this test, the imaging phantom was registered three times by Brainlab’s data registration procedure. This procedure consists of two steps. First, the locations of the fiducial markers had to be defined in VectorVision iPlanNet 3.0. Second, these markers were touched with BrainLab’s pre-calibrated pointing device. Then, the distance between the position of the pointer tip at each fiducial marker and the respective position in the 3D MRI dataset was computed (see Figure 2A). For this purpose, the center of each fiducial marker was touched by the pointing device three times in order to quantify the axial error of the pointer tip coordinates with respect to the real position in the MRI dataset (see Table 1). We identified two main factors contributing to the tracking beam accuracy. First, the definition of the fiducial markers’ virtual positions in VectorVision iPlanNet: this step is typically done manually by the medical personnel and requires great care in order to ensure that the defined center of each fiducial marker is exactly at its MRI counterpart.
Factors independent of the imaged object also contribute to the overall accuracy of the presented data.
| | X (mm) | Y (mm) | Z (mm) |
|---|---|---|---|
| MRI maximum error | 2 | 0.97656 | 0.97656 |
| Tracking beam accuracy | 0.37±0.31 | 0.43±0.31 | 0.99±0.79 |
Second, we found a maximum axial error of 0.99 mm in the z-direction, which is roughly three times the error in the x- and y-directions. As the z-direction of the coordinate system was nearly parallel to the viewing direction of BrainLab VectorVision2’s tracking camera, we conclude that recovering depth information is less accurate. Therefore, the tracking camera should be oriented such that its viewing direction is not parallel to any axis of the fiducial markers’ coordinate system.
IA
To track the spatial position and orientation of the infrared camera, it is necessary to attach an IA to the camera. The employed IA has to be calibrated in every image-guided session in order to prevent the usage of incorrectly calibrated devices. Brainlab VectorVision Cranial 2.1.1 provides information about the angular and axial error resulting from calibrating the IA. This procedure was performed 10 times and the respective results are shown in Table 2. An average angular error of 0.2° influences the image fusion process. The orientation of the IA affects the determination of the calibration parameters and, therefore, decreases the accuracy of the image registration process.
Calibrating the instrument adapter and mounting it to the infrared camera introduces errors to the image registration process due to manufacturing tolerances.
| | Angle (°) | Tip (mm) |
|---|---|---|
| Calibration | 0.2±0.1 | 0.1±0.1 |
| IA mounting | 96±0.5 | 479.4±0.4 |
Following this calibration step, the IA was attached to a pre-existing mounting of the infrared camera (see Figure 6). The camera was kept at a fixed position while the orientation of the IA was evaluated for 10 mountings. While repeatedly mounting the IA, we found a standard deviation of 0.5° of the IA’s orientation. This translates to a variable axial offset depending on the distance between the IA and the subject.

Manufacturing tolerances affect the overall accuracy.
Image (A) shows the mounted instrument adapter, while the actual mounting can be seen in (B). Due to manufacturing tolerances, the instrument adapter can be tilted slightly to the left and the right, decreasing the overall accuracy. This issue can be circumvented by using specialized instrument adapters and mountings that were not available for this study.
Object distance estimation
The focus value of the IR camera has to be set manually by the medical personnel and is used for object distance estimation. Due to the periodically occurring non-uniformity correction of the infrared camera, the focus value deviates even for a fixed object distance. In order to quantify this effect, the camera was placed at a typical intraoperative object distance of 30 cm and was manually focused 10 times. The digital focus value was 6573±112. This results in an estimated object distance between 27.84 and 29.16 cm, causing image size variations ranging from 14.36 mm×10.91 mm to 15.05 mm×11.43 mm. This gives a horizontal inaccuracy of 0.69 mm and a vertical inaccuracy of 0.52 mm.
Discussion
Calibration-based image registration requires a fixed alignment of our 2D camera and the IA. Furthermore, tracking information has to be provided by a neuronavigation system. Both requirements are commonly fulfilled in image-guided surgery. The whole framework was implemented as a C++ plugin for the visualization software Amira 5.5 [5]. Limitations of Amira forced us to approximate the physically accurate pinhole camera model by an orthogonal projection for image fusion. The missing perspective distortion is compensated by the previously discussed scaling operation Mscale. The proposed procedure works well under the assumption that the mapped surface is close to the focal plane. The actual projection is realized by texture mapping, meaning that the 2D thermographic image can be efficiently projected onto any surface. The main drawback of texture mapping is that occluded objects also get texturized by interpolated information. Yet, as we are imaging the approximately convex surface of the human brain, this drawback can be neglected.
We found that the calibration and the attachment of the IA to the imaging device is the most critical factor for the overall accuracy. To quantify this observation, the IA was placed at two extreme positions, denoted by −φ and +φ. +φ refers to the extreme position where the adapter is tilted to the right, whilst −φ represents the opposite direction. This kind of attachment is possible as the mechanism attaching the IA to the mounting has a small manufacturing tolerance. In the worst case, this effect degrades the overall accuracy to 10.06 mm (see Table 3).
Manufacturing inaccuracies of the instrument adapter’s mounting significantly contribute to the overall error.
| | −φ error (mm) | +φ error (mm) |
|---|---|---|
| Overall maximum error | 7.94±0.52 | 10.06±0.68 |
These inaccuracies allow slight variations of the instrument adapter’s position to the right (+φ) and to the left (−φ).
Further, the calibration strongly depends on the quality of the tracking data. Inaccurate spatial referencing of the patient or target as well as inaccurately defined fiducial marker positions degrade the overall accuracy. For best accuracy, it is necessary to refine the calibration parameters of the image registration transform each time the IA is recalibrated to the neuronavigation system and re-mounted to the camera. Otherwise, the average error increases to 9 mm. By using pre-calibrated IAs and special mountings, an accuracy of 2.46 mm is achievable without refinement of the calibration parameters.
Finally, we determined the overall error at varying orientations of the IR camera with optimal mounting of the IA. The camera was rotated such that it points orthogonally (0°), at 30° and at 60° to the imaged object. The distance of the fiducial markers visible in the fused thermographic image plane to their respective positions in the 3D MRI dataset was computed. The results at various orientations of the IR camera indicate a cumulative error of 2.46 mm (approximately 2.5–5 voxels) including all preceding influential factors (see Tables 4 and 5), while the accuracy is maximized if the camera points orthogonally to the imaged scene. An orthogonal orientation of the infrared camera is also a common intraoperative configuration, as the signal-to-noise ratio of infrared cameras strongly depends on the angle of incidence of the measured infrared radiation. In this case, no perspective distortions are prevalent either.
The incidence angle of the infrared camera with respect to the imaged surface as well as the object distance influence overall performance.
| Orientation | Error at 48.48 cm object distance (mm) | Error at 29.30 cm (mm) | Error at 15.02 cm (mm) |
|---|---|---|---|
| 0° | 0.62±0.16 | 0.52±0.26 | 0.64±0.08 |
| 30° | 2.70±0.82 | 2.90±1.27 | 3.07±0.76 |
| 60° | 3.58±0.90 | 3.80±0.50 | 4.30±1.54 |
Increased incidence angles also degrade the amount of detected thermal radiation; thus, orthogonal orientation of the infrared camera (0°) with respect to the imaged object is to be preferred.
The mean error of 0.59 mm at orthogonal orientation increases with increasing incidence angle, caused by the calibration parameters being optimized for orthogonal orientations as well as by texture mapping.
| Camera orientation | 0° | 30° | 60° | Average |
|---|---|---|---|---|
| Overall mean error (mm) | 0.59 | 2.89 | 3.89 | 2.46 |
The whole framework is optimized for computational efficiency in order to enable its application in time-critical settings. It further allows subsequent data analysis methods to incorporate anatomical and functional information from preoperative MRI measurements (for example, the tumor position or the localization of eloquent areas of the cerebral cortex). Furthermore, medical personnel gain a tool to validate intraoperative thermographic imaging data against their expectations from prevailing volumetric imaging.
Clinically, most of the required steps for registration and fusion can be done preoperatively. The infrared camera with attached IA has to be calibrated once in order to establish the mapping between the IA’s position and orientation and the camera’s sensor position and orientation. Prior to the actual surgery, the MRI dataset has to be imported into Amira, followed by brain segmentation. Intraoperatively, the patient has to be referenced to the neuronavigation system. Attaching a sterile IA to the draped 2D infrared camera then enables the actual image registration and image fusion. All the required software is executed on a laptop. The communication between Amira and the neuronavigation system is handled by the developed Amira plugin via the standardized OpenIGTLink interface. This means that the software is not limited to neuronavigation systems by BrainLab and can be used with any neuronavigation system that implements OpenIGTLink. It might be favorable to either display the fused data on a laptop screen or to output the Amira window to the neuronavigation system via its video connector.
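For illustration, the body of an OpenIGTLink TRANSFORM message carries twelve big-endian float32 values (the 3×3 rotation in column-major order followed by the translation); a minimal parsing sketch under this reading of the specification:

```python
import struct
import numpy as np

def parse_igtl_transform(body: bytes) -> np.ndarray:
    """Unpack the 48-byte body of an OpenIGTLink TRANSFORM message into a
    4x4 homogeneous matrix (rotation columns first, then translation)."""
    vals = struct.unpack(">12f", body)  # big-endian float32 per spec
    M = np.eye(4)
    M[:3, 0] = vals[0:3]                # first rotation column
    M[:3, 1] = vals[3:6]                # second rotation column
    M[:3, 2] = vals[6:9]                # third rotation column
    M[:3, 3] = vals[9:12]               # translation tx, ty, tz
    return M
```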
Summary
The proposed calibration-based image registration and image fusion framework allows the combination of intraoperative 2D imaging and preoperative 3D MRI data. In neurosurgery, joining intraoperative perfusion information with neuroanatomy provides valuable information to the surgeon. For this purpose, we propose a framework for image registration and image fusion of 3D MRI with 2D thermographic imaging data. The employed calibration-based image registration algorithm transforms the intraoperatively tracked position and orientation of our 2D infrared camera into the 3D coordinate system originating from preoperative volumetric imaging. By application of an orthogonal projection, the 2D image is projected onto the respective surface of the 3D dataset. In order to quantify the projection accuracy and unveil further potential improvements of the framework, we applied an extensive evaluation scheme. The results indicate a mean accuracy of 2.46 mm given an appropriate setup. We further estimated an upper bound of the error at 10.06 mm. Further work will focus on minimizing this upper bound in order to achieve reasonable accuracy, especially when using surgical microscopes with sub-millimeter resolutions.
Acknowledgments
The authors would like to thank all organizations and individuals, especially the surgical and nursing staff, who supported this research project.
Funding: This work was supported by the European Social Fund (grant 100087783) and the Free State of Saxony.
References
[1] Akhloufi M, Verney B. Multimodal fusion system for NDT and metrology. 12th Int Conf Quant InfraRed Thermogr 2014. doi:10.21611/qirt.2014.173
[2] Berkels B, Cabrilo I, Haller S, Rumpf M, Schaller K. Co-registration of intra-operative photographs and pre-operative MR images. Int J Comput Assist Radiol Surg 2014; 9: 387–400. doi:10.1007/s11548-014-0979-y
[3] Bichinho GL, Gariba MA, Sanches IJ, Gamba HR, Cruz FPF, Nohama P. A computer tool for the fusion and visualization of thermal and magnetic resonance images. J Digit Imaging 2009; 22: 527–534. doi:10.1007/s10278-007-9046-3
[4] Catmull EE. A subdivision algorithm for computer display of curved surfaces. PhD thesis, University of Utah 1974.
[5] FEI Visualization Sciences Group. Amira 5.5, 2013.
[6] Gorbach AM, Heiss JD, Kopylev L, Oldfield EH. Intraoperative infrared imaging of brain tumors. J Neurosurg 2004; 101: 960–969. doi:10.3171/jns.2004.101.6.0960
[7] Goshtasby AA. Image registration – principles, tools and methods. London: Springer-Verlag 2012. doi:10.1007/978-1-4471-2458-0
[8] Hartley RI, Zisserman A. Multiple view geometry in computer vision. 2nd ed. Cambridge: Cambridge University Press 2004. doi:10.1017/CBO9780511811685
[9] Jenkinson M, Pechaud M, Smith S. BET2: MR-based estimation of brain, skull and scalp surfaces. 11th Annu Meeting Organ Hum Brain Mapp 2005.
[10] Kaczmarek M. Integration of thermographic data with the 3D object model. 12th Int Conf Quant InfraRed Thermogr 2014. doi:10.21611/qirt.2014.154
[11] Maintz JBA, Viergever MA. A survey of medical image registration. Med Image Anal 1998; 2: 1–36. doi:10.1016/S1361-8415(01)80026-8
[12] Mani VRS, Arivazhagan S. Survey of medical image registration. J Biomed Eng Tech 2013; 1: 8–25.
[13] Markelj P, Tomaževič D, Likar B, Pernuš F. A review of 3D/2D registration methods for image-guided interventions. Med Image Anal 2012; 16: 642–661. doi:10.1016/j.media.2010.03.005
[14] Mitrović U, Markelj P, Likar B, Miloševič Z, Pernuš F. Gradient-based 3D-2D registration of cerebral angiograms. Proc SPIE 2011; 7962. doi:10.1117/12.877541
[15] Mitrović U, Špiclin Ž, Likar B, Pernuš F. 3D-2D registration of cerebral angiograms: a method and evaluation on clinical images. IEEE Trans Med Imaging 2013; 32: 1550–1563. doi:10.1109/TMI.2013.2259844
[16] Modersitzki J. Numerical methods for image registration. 1st ed. New York: Oxford University Press 2003. doi:10.1093/acprof:oso/9780198528418.001.0001
[17] Moons T, Gool LV, Vergauwen M. 3D reconstruction from multiple images part 1: principles. Found Trends Comput Graph Vision 2010; 4: 287–404. doi:10.1561/9781601982858
[18] Oswald-Tranta B, O’Leary P. Fusion of geometric and thermographic data for automated defect detection. J Electron Imaging 2012; 21: 021108-1–021108-8. doi:10.1117/1.JEI.21.2.021108
[19] Sanches IJ, Gamba HR, De Souza MA, Neves EB, Nohama P. Fusão 3D de imagens de MRI/CT e termografia. Rev Bras Eng Biomed 2013; 29: 298–308. doi:10.4322/rbeb.2013.031
[20] Saxena A, Schulte J, Ng A. Depth estimation using monocular and stereo cues. Proc 20th Int Joint Conf Artif Intell 2007; 2197–2203.
[21] Sergeeva O, Uhlemann F, Schackert G, Hergeth C, Morgenstern U, Steinmeier R. Integration of intraoperative 3D-ultrasound in a commercial navigation system. Zentralbl Neurochir 2006; 67: 197–203. doi:10.1055/s-2006-942186
[22] Steiner G, Sobottka SB, Koch E, Schackert G, Kirsch M. Intraoperative imaging of cortical cerebral perfusion by time-resolved thermography and multivariate data analysis. J Biomed Opt 2011; 16: 016001. doi:10.1117/1.3528011
[23] Tan S, Dale J, Anderson A, Johnston A. Inverse perspective mapping and optic flow: a calibration method and a quantitative analysis. Image Vision Comput 2006; 24: 153–163. doi:10.1016/j.imavis.2005.09.023
[24] Tao MW, Hadap S, Malik J, Ramamoorthi R. Depth from combining defocus and correspondence using light-field cameras. Proc 2013 IEEE Int Conf Comput Vision 2013; 673–680. doi:10.1109/ICCV.2013.89
[25] Vedrine P, Gilgrass G, Aubert G, et al. Iseult/INUMAC whole body 11.7 T MRI magnet. IEEE Trans Appl Supercond 2015; 25. doi:10.1109/TASC.2014.2369233
[26] Weidner F, Hoffmann N, Radev Y, et al. Entwicklung eines Gehirn-Phantoms zur Perfusions- und Brain-Shift-Simulation. Reports on Biomed Eng – Band 2: 5. Dresdner Medizintechnik-Symposium 2014; 111–113.