Neural network enabled wide field-of-view imaging with hyperbolic metalenses
Joel Yeo, Deepak K. Sharma, Saurabh Srivastava, Aihong Huang, Emmanuel Lassalle, Egor Khaidarov, Keng Heng Lai, N. Duane Loh, Arseniy I. Kuznetsov and Ramon Paniagua-Dominguez
Abstract
The ultrathin form factor of metalenses makes them highly appealing for novel sensing and imaging applications. Amongst the various phase profiles, the hyperbolic metalens stands out for being free from spherical aberrations and having one of the highest focusing efficiencies to date. For imaging, however, hyperbolic metalenses present significant off-axis aberrations, severely restricting the achievable field-of-view (FOV). Extending the FOV of hyperbolic metalenses is thus feasible only if these aberrations can be corrected. Here, we demonstrate that a Restormer neural network can be used to correct these severe off-axis aberrations, enabling wide FOV imaging with a hyperbolic metalens camera. Importantly, we demonstrate the feasibility of training the Restormer network purely on simulated datasets of spatially-varying blurred images generated by the eigen-point-spread function (eigenPSF) method, eliminating the need for time-intensive experimental data collection. This reference-free training ensures that Restormer learns solely to correct optical aberrations, resulting in reconstructions that are faithful to the original scene. Using this method, we show that a hyperbolic metalens camera can be used to obtain high-quality imaging over a wide FOV of 54° in experimentally captured scenes under diverse lighting conditions.
1 Introduction
Metasurfaces have emerged as a transformative technology in optics due to their potential to replace, or even outperform, traditional optical components with ultra-thin, multi-functional ones. Within the field, the metasurface counterparts of traditional lenses (so-called metalenses) are particularly attractive, as lenses are the most ubiquitous elements in optical systems, usually taking up the vast majority of their space and weight. Unlike traditional bulky lenses, metalenses utilize nanoscale structures to manipulate the fundamental properties of light (typically the phase) locally and abruptly, making them invaluable for applications in imaging, sensing, and optical metrology [1], [2]. The capability of metalenses to replicate complex phase profiles while remaining ultrathin also offers significant advantages over their bulky counterparts, for which freeform optics are usually expensive and difficult to manufacture [3], [4], [5].
Within the different metalens designs explored by the community, the one that imparts a hyperbolic phase profile on the incident beam is particularly attractive, as it is free from spherical (and any other spatial) aberrations when illuminated on-axis [5]. In addition, the focusing efficiency of these metalenses remains the highest demonstrated to date, making them commonly used for light-focusing applications, including those requiring high numerical apertures (NA) [6], [7], [8], [9], [10], [11]. However, while the hyperbolic lens is theoretically diffraction-limited along the optical axis, it presents strong off-axis aberrations, translating into a point-spread function (PSF) that rapidly deteriorates as the angle of incident light departs from normal [12]. In an imaging experiment, this causes the resultant image to be aberrated with a spatially varying blur, which traditional deblurring methods such as the Wiener filter [13] cannot remove. As a consequence, these off-axis aberrations severely limit the usable field-of-view (FOV) of hyperbolic metalenses and, therefore, their use in imaging applications.
To circumvent this issue and expand the FOV of metalenses, the community has explored alternative phase profiles, such as the quadratic one [14], [15], [16], [17], [18], [19], [20], or multi-element configurations (doublets, triplets, or other lens arrays) [21], [22], [23], [24], [25]. While these are indeed able to provide a wide FOV (up to even 180° in some cases), they come at the cost of spherical aberrations and poor efficiencies (in the case of quadratic phase profiles) or fabrication complexity and overall system size (in the case of doublets or systems with an aperture).
As a result, part of the community is now turning its attention to the possibility of correcting this issue on the software side rather than the hardware one. In this regard, iterative deconvolution algorithms have recently been introduced to correct for such spatially varying aberrations [26]. These, however, are typically slow and prone to reconstruction artifacts [17]. They are also sensitive to noise and require precise calibration of the spatially-varying PSFs, which is challenging in practical applications. Over recent years, deep-learning algorithms have been increasingly applied to remove aberrations from metalens images [27], [28], [29], [30], [31], [32], [33], [34], [35], [36]. Their fast inference speed, combined with robustness against noise and experimental errors, makes them highly appealing and successful for metalens imaging postprocessing. However, many demonstrations of deep-learning deblurring are reference-based, requiring tedious curation of experimental datasets of measurement and ground-truth pairs. This can also result in overfitting to specific imaging conditions, such as the lighting, magnification, alignment, and other experimental parameters under which the dataset was collected. As such, these trained networks cannot readily be extended to deblur images under different imaging conditions.
Here, we present a neural network-enabled, reference-free hyperbolic metalens camera for wide FOV imaging. In particular, we employ a Restormer neural network to correct the severe off-axis aberrations of this type of lens, ultimately enabling aberration-free imaging over a 54° FOV. By reference-free training, we mean that while the network is trained in a supervised manner, it does not require any experimentally acquired reference datasets, whether from external imaging systems or curated target images (e.g., pictures displayed on a screen). Instead, all training data are generated synthetically by simulating spatially varying blurred images using the eigenPSF method [26], which automatically provides the ground truth based on the physics of the imaging system itself. This eliminates the need for time-consuming curation of experimental datasets and also ensures that the trained network only removes optical aberrations without overfitting to specific imaging conditions. We demonstrate that our hyperbolic metalens camera delivers robust imaging performance in low-light conditions, during close-up photography, and under diverse lighting directions and occlusions.
2 Results
2.1 Design, fabrication and optical characterization of the metalens
The hyperbolic metalens used in this work has a phase profile given by the expression
$$\phi(r) = \frac{2\pi}{\lambda}\left(f - \sqrt{r^2 + f^2}\right),$$
where r is the radial distance from the center of the lens, and λ and f are the design wavelength and focal length, respectively. The fabricated metalens has a diameter of D = 5 mm and f = 1.813 mm, designed at a working wavelength of λ = 850 nm with a numerical aperture of NA = 0.81. The (wrapped) hyperbolic phase profile was mapped using amorphous silicon (a-Si) nanopillars with a circular cross-section (to maintain a polarization-insensitive response) on a fused silica substrate. These pillars are arranged in a hexagonal lattice (lattice constant of 350 nm) and have a fixed height of 500 nm and diameters in the range of 140–264 nm (Figure 1a–d). The simulated transmittance and phase as a function of the pillar diameter are also plotted in Figure 1e.
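As a quick numerical sanity check (a minimal sketch under the stated design parameters; the script and variable names are ours, not part of the published code), the quoted NA follows directly from D and f:

```python
import numpy as np

# Design parameters as stated above
lam = 850e-9   # design wavelength [m]
f = 1.813e-3   # focal length [m]
D = 5e-3       # lens diameter [m]

# Ideal hyperbolic phase profile (radians), wrapped to [0, 2*pi) as encoded
# by the nanopillars
r = np.linspace(0.0, D / 2, 1001)
phi = 2 * np.pi / lam * (f - np.sqrt(r**2 + f**2))
phi_wrapped = np.mod(phi, 2 * np.pi)

# Numerical aperture from the marginal-ray angle
NA = np.sin(np.arctan(D / (2 * f)))
print(f"NA = {NA:.2f}")  # -> NA = 0.81, matching the stated design value
```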

Fabrication of the hyperbolic metalens and the imaging setup. (a) Schematic of the hexagonal unit cell of the metalens with a lattice constant of a = 350 nm, consisting of cylindrical nanopillars with a height of H = 500 nm and diameters, D, ranging from 140 nm to 264 nm. (b) Optical image of a wafer with an array of metalenses patterned using deep UV immersion photolithography. (c, d) Scanning electron micrographs of the metalens depicting the patterned a-Si nanopillars on a glass substrate. (e) The simulated phase and transmittance of uniform a-Si nanopillar arrays with height of 500 nm and diameters ranging from 140 to 264 nm with a step size of 2 nm. (f) Optical image of the fabricated 5 mm diameter metalens. (g) Optical image of the hyperbolic metalens camera used in imaging, where the (h) metalens (white circle) is mounted directly in front of the CMOS detector (red square).
The samples are fabricated using a 12-inch, deep ultraviolet (UV) immersion photolithography scanner (see details in Methods) and optically characterized using a goniometric optical setup (Figure 2a). This characterization setup, which has a calculated magnification of 83.3× (resulting in an effective detector pixel size of 41.4 nm), allows imaging of the PSFs of the fabricated hyperbolic metalens at various angles of incidence (AOI). Figure 2b compares these measurements against theoretical PSFs calculated with Fourier optics simulations (details in Supplementary information). As can be seen, there is a close match between the measured and simulated PSFs, with minor discrepancies likely attributable to fabrication errors.
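The quoted magnification is consistent with a 100× objective (specified by Olympus for a 180 mm tube lens) being used with the 150 mm tube lens of the setup; a two-line check (the 3.45 µm sensor pixel pitch is our assumption, not a value stated here):

```python
# Objective magnification rescaled by the tube-lens ratio (Olympus objectives
# are specified for a 180 mm tube lens; a 150 mm tube lens is used here)
M = 100 * 150 / 180                 # = 83.3, the quoted magnification
pixel_eff = 3.45e-6 / M             # assumed 3.45 um sensor pixel pitch
print(f"M = {M:.1f}, effective pixel = {pixel_eff * 1e9:.1f} nm")  # ~41.4 nm
```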

Characterization of the hyperbolic metalens. (a) Optical setup for angle-of-incidence-dependent characterization of the hyperbolic metalens PSFs. The laser (850 nm wavelength) output is expanded using two lenses and an aperture to distribute the intensity uniformly across the metalens. These are mounted on a rotating arm of a goniometer that enables illumination of the metalens at different AOIs. The metalens focuses the collimated laser at the focal plane, which is then imaged onto the CMOS detector using an objective lens (Olympus, MPLAPON100X) and a tube lens (150 mm focal length). (b) (top) Experimentally measured PSFs at different angles of incidence compared to (middle) simulated PSFs for the hyperbolic metalens. We use a log-normalized colormap for better visualization of these PSFs. The scalebars (white) have sizes of 10 µm and 2 µm for the main image and its inset, respectively. (bottom) The horizontal line profiles of the measured (red) and simulated (blue) PSFs.
2.2 Hyperbolic metalens camera
The hyperbolic metalens camera comprises only two components (Figure 1g and h): the metalens and a complementary metal oxide semiconductor (CMOS) detector (Thorlabs, Zelux-CS165MU), resulting in an ultra-compact design. The scene is illuminated with a light-emitting diode (LED) with a dominant wavelength of 850 nm (Thorlabs M850L3) and a bandwidth of 30 nm (setup figure in Supplementary information). The hyperbolic metalens, mounted at a distance f = 1.813 mm from the detector, focuses the illuminated scene onto the sensor to form the image. In this work, we used only the detector's central 512 × 512 pixels, corresponding to an angular FOV of 54°, due to memory constraints during network training. Beyond this computational limitation, the practical FOV of our hyperbolic metalens camera is also restricted by physical factors: at larger angles, the PSFs eventually spread beyond the detector and exhibit a weaker signal, making them difficult to capture and to model accurately within the eigenPSF image-formation framework.
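For intuition, a back-of-the-envelope estimate of this FOV under an ideal f·tan θ image mapping (an assumption; the exact value depends on the metalens's actual distortion mapping) and an assumed 3.45 µm pixel pitch lands close to the quoted figure:

```python
import numpy as np

# Central 512 x 512 crop, assumed 3.45 um pixel pitch, stated focal length
n_pix, pitch, f = 512, 3.45e-6, 1.813e-3
half_height = (n_pix / 2) * pitch            # image height at the crop edge
fov = 2 * np.degrees(np.arctan(half_height / f))
print(f"FOV ~ {fov:.0f} deg")                # ~52 deg, close to the quoted 54 deg
```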
2.3 Restormer deblurring
Figure 3 shows the schematic of our computational deblurring approach for hyperbolic metalens imaging. Using our imaging setup, we first measured the PSFs at different AOIs ranging from 0° to 40° (Figure 3a), with denser sampling at smaller angles to capture the rapidly varying PSF shapes and sparser sampling at larger angles, where the PSFs primarily grow in size (see Supplementary information). Note that these imaging PSFs are different from the measured PSFs depicted in Figure 2b, as there is no external magnification in the imaging setup. These imaging PSFs are computationally rotated to populate a PSF map that covers the full extent of an image corresponding to an angular FOV of 54°, as shown in Figure 3a. These spatially-varying PSFs are then eigendecomposed into spatially-invariant eigenPSF bases weighted by the corresponding eigencoefficients [26] (see Supplementary information) to simulate spatially-varying blur applied to ground truth images from Google's Open Images dataset [37], [38]. This enables efficient and accurate generation of large training datasets with spatially varying PSFs, a task that would otherwise be prohibitively slow or experimentally impractical. Each simulated blurred image is corrupted with noise generated by augmenting a measured flatfield through random rotations and flips (Figure 3b). The total time taken to simulate 3,500 noisy and blurred images on a single NVIDIA L40 GPU was approximately 10 min.
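To make the simulation pipeline concrete, the sketch below illustrates the eigenPSF idea on synthetic data: a grid of spatially varying PSFs is factorized by SVD into a few spatially invariant eigenPSF bases plus coefficient maps, so spatially varying blur reduces to a handful of FFT convolutions. All sizes, names, and the Gaussian stand-in PSFs are illustrative assumptions, not the authors' pipeline (see [26] for the actual method):

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)

# --- Stand-in for the rotated PSF map of Figure 3a: one PSF per field position.
# Synthetic anisotropic Gaussians whose blur grows toward the field edge.
G, P = 8, 17                                  # 8x8 field grid, 17x17-pixel PSFs
yy, xx = np.mgrid[-(P // 2):P // 2 + 1, -(P // 2):P // 2 + 1]
psfs = np.empty((G * G, P, P))
for i in range(G):
    for j in range(G):
        sx, sy = 1 + 0.4 * j, 1 + 0.4 * i      # wider blur away from the center
        p = np.exp(-(xx / sx) ** 2 - (yy / sy) ** 2)
        psfs[i * G + j] = p / p.sum()

# --- Eigendecomposition: SVD of the stacked PSFs yields K spatially-invariant
# eigenPSF bases and per-position eigencoefficients.
K = 6
U, S, Vt = np.linalg.svd(psfs.reshape(G * G, -1), full_matrices=False)
eigenpsfs = Vt[:K].reshape(K, P, P)
coeffs = (U[:, :K] * S[:K]).reshape(G, G, K)

# --- Spatially-varying blur: one FFT convolution per eigenPSF, weighted by an
# upsampled coefficient map (nearest-neighbour upsampling for brevity).
N = 256
img = rng.random((N, N))                       # stands in for an Open Images sample
blurred = np.zeros_like(img)
for k in range(K):
    cmap = np.kron(coeffs[:, :, k], np.ones((N // G, N // G)))
    blurred += fftconvolve(img * cmap, eigenpsfs[k], mode="same")

# --- Noise augmentation (Figure 3b): add a randomly rotated/flipped flatfield
# (faked here with random values; the real pipeline uses a measured flatfield).
flatfield = 0.01 * rng.random((N, N))
aug = np.rot90(flatfield, k=int(rng.integers(4)))
if rng.integers(2):
    aug = np.fliplr(aug)
noisy = blurred + aug
```

Because each eigenPSF convolution is shift-invariant, the cost scales with the number of retained bases K rather than with the number of field positions, which is what makes generating thousands of training images fast.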
Schematic for computational deblurring using the Restormer architecture trained on eigenPSF-simulated images. (a) The hyperbolic metalens is characterized using the measured PSFs, and the flatfield records the noise profile of the imaging system. (b) Simulated images are obtained using the eigenPSF method to apply spatially varying blur on the ground truth dataset, and further corrupted by noise created by augmenting the flatfield measurement. (c) The Restormer architecture is used to deblur and denoise the images. Illustration created using PlotNeuralNet [39].
A Restormer network [40] is trained using these 3,500 simulated images as input and their corresponding ground truth images as the desired output (Figure 3c). We use the default parameters and loss functions described in the original paper [40], except that the number of channels is reduced to [36, 72, 144, 288] for layers L1 to L4, respectively, due to GPU memory constraints. This amounts to a total of 14.8 million trainable parameters in our Restormer network. Using 4 NVIDIA L40 GPUs, training for 200 epochs with a batch size of 1 took approximately 48 h.
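A minimal training sketch, assuming the reference PyTorch implementation from the official Restormer repository (a base width of `dim=36` reproduces the [36, 72, 144, 288] channel progression, since each encoder level doubles the width; the single-channel I/O, optimizer settings, and data-loader plumbing are our assumptions, not stated details):

```python
import torch
from basicsr.models.archs.restormer_arch import Restormer  # official Restormer repo

# dim=36 gives per-level channels [36, 72, 144, 288]; inp/out channels set to 1
# for a monochrome sensor (our assumption)
model = Restormer(inp_channels=1, out_channels=1, dim=36).cuda()

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4, weight_decay=1e-4)
criterion = torch.nn.L1Loss()   # the restoration loss used in the Restormer paper

# `loader` stands in for a DataLoader over the 3,500 (blurred, ground-truth)
# pairs simulated with the eigenPSF pipeline, batch size 1
for epoch in range(200):
    for blurred, sharp in loader:
        pred = model(blurred.cuda())
        loss = criterion(pred, sharp.cuda())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```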
Figure 4 shows the raw measurements from our hyperbolic metalens camera and the corresponding results of Restormer deblurring (see Supplementary information for more results). The characteristic aberrations of the hyperbolic phase profile are evident: features remain sharp at the center of the image, while coma grows at larger incidence angles. The images here are not diffraction-limited, both because of the broadband LED used to illuminate the scenes and photos and because the physical size of the detector pixels limits the resolution of the measurements.

Deblurring images from the hyperbolic metalens camera using the trained, reference-free Restormer network. The (a) measured and (b) deblurred images of scenes taken around a lab. The (c) measured and (d) deblurred images of printed photos placed before the camera. All images have the same angular FOV of 54°.
Despite the spatially-varying aberrations in the measurements, the trained Restormer network is able to deblur the full FOV of the images in real time (∼50 ms per image), recovering features even toward the edges of the images. By using a reference-free dataset, we avoid overfitting to specific imaging conditions during training. This is further demonstrated in Figure 5, where, under varying illumination directions and obstructions, our trained Restormer still recovers features even in dimly lit regions of both the scene and the printed USAF card.

Deblurring images from the hyperbolic metalens camera with varying illumination direction and obstructions. The (a) measured and (b) deblurred images of the same lab scene under different lighting. The (c) measured and (d) deblurred images of a printed USAF card, where the card was tilted in the last column. All images have the same angular FOV of 54°.
Figure 6 further demonstrates the improved quality of deblurring from our trained Restormer network over other existing state-of-the-art approaches. The first is an Autograd implementation of the eigenCWD algorithm [26], which utilizes PyTorch's [41] built-in automatic differentiation engine to perform the optimization instead of using analytical gradients (details in Supplementary information). The reconstruction from this iterative approach in Figure 6 is contaminated with noisy artifacts, as it only accounts for spatially-varying blur and not the noise characteristics of the sensor. In addition, as an iterative algorithm, the Autograd implementation of eigenCWD is incapable of real-time deblurring (∼1 min per image on a single NVIDIA L40 GPU). Using the same dataset, we also trained a Multiscale neural network architecture [42], which has recently been used in image reconstruction applications for metalenses [31], [32] (details in Supplementary information). However, we observe residual smeared artifacts in the output of the Multiscale network, likely attributable to noise in the training dataset that the network is unable to fully remove. This suggests that the Restormer network remains robust against noise and demonstrates improved performance in spatially-varying deconvolution over existing state-of-the-art methods.
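For contrast with the learned approach, this kind of autograd-driven baseline can be sketched as follows: the latent sharp image is the optimization variable, the eigenPSF expansion serves as a differentiable forward model, and PyTorch's autodiff supplies the gradients. This is our simplified illustration (placeholder tensors, a basic total-variation regularizer), not the authors' eigenCWD code:

```python
import torch
import torch.nn.functional as F

def forward_blur(x, eigenpsfs, coeff_maps):
    """EigenPSF forward model: sum_k conv2d(x * c_k, psf_k).
    x: (1, 1, H, W); eigenpsfs: (K, 1, h, w) with h odd; coeff_maps: (K, H, W)."""
    y = torch.zeros_like(x)
    for k in range(eigenpsfs.shape[0]):
        y = y + F.conv2d(x * coeff_maps[k], eigenpsfs[k:k + 1],
                         padding=eigenpsfs.shape[-1] // 2)
    return y

# `measured`, `eigenpsfs`, and `coeff_maps` are placeholders for the calibrated
# quantities; x is the latent sharp image being recovered
x = torch.zeros_like(measured, requires_grad=True)
opt = torch.optim.Adam([x], lr=1e-2)
for it in range(500):
    resid = forward_blur(x, eigenpsfs, coeff_maps) - measured
    tv = x.diff(dim=-1).abs().mean() + x.diff(dim=-2).abs().mean()
    loss = resid.pow(2).mean() + 1e-3 * tv   # data fidelity + total variation
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The hundreds of gradient steps per image in this loop are exactly why the iterative baseline cannot run in real time, whereas the trained Restormer needs only a single forward pass.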

Comparing deblurring performance using various algorithms. The Restormer network surpasses both the Autograd and Multiscale methods in spatially-varying deblurring as well as in noise suppression.
3 Conclusions
In this work, we have demonstrated wide FOV imaging with a hyperbolic metalens camera. By using the eigenPSF method as an efficient forward model to computationally simulate the metalens’ spatially-varying blur, we circumvent the need to curate experimental datasets for training a deblurring neural network. In addition, the Restormer network used for postprocessing the images enables real-time aberration correction (after training), unlike time-consuming iterative algorithms, while remaining robust against noise and experimental errors.
Our findings suggest that the FOV in hyperbolic metalens imaging could be further extended by leveraging advances in computational power to train on larger image sizes. Additionally, the diffraction-limited resolution of the hyperbolic lens along the optical axis remains underutilized due to current limitations in detector pixel sizes and the large bandwidth of the illumination source. With future improvements in hardware, this work has the potential to open new pathways toward achieving high-resolution, wide-FOV imaging with hyperbolic metalenses.
4 Methods
4.1 Fabrication of metalens
A 193 nm argon fluoride (ArF) deep-ultraviolet (DUV) immersion photolithography process combined with dry etching is used to fabricate the hyperbolic metalenses. The metalens comprises millions of amorphous silicon (a-Si) nanopillars, patterned in a 500 nm-thick a-Si film deposited using plasma-enhanced chemical vapor deposition (PECVD) on a 12-inch fused silica wafer. The wafer was then diced into small coupons, and the individual metalenses were subsequently dry-etched, transferring the pattern from the photoresist to the a-Si film and forming the a-Si nanopillars. The 500 nm-tall a-Si pillars were etched in multiple short steps rather than one continuous step, a practice recommended for deep etching of high-aspect-ratio pillars; after every etching step, the chamber undergoes a 5-min cooling period before the next one. This not only produces smoother sidewalls but also protects the pillars from undercutting. A residual 30 nm SiO2 layer, part of the etching hard mask, remains on top of the pillars after a-Si etching, but it poses no hindrance to the optical performance and is therefore not removed.
Funding source: Faculty of Science, National University of Singapore
Award Identifier / Grant number: Early Career Research Award
Funding source: AME Programmatic Grant, Singapore
Award Identifier / Grant number: A18A7b0058
Funding source: Agency for Science, Technology and Research
Award Identifier / Grant number: A*STAR Graduate Scholarship
Acknowledgements
The authors would like to acknowledge the computational resources provided by the NUS Centre for Bio-Imaging Sciences.
Research funding: This work was supported by the A*STAR Graduate Scholarship, the AME Programmatic Grant, Singapore, under Grant A18A7b0058, and the Early Career Research Award from the National University of Singapore (NUS). We also acknowledge funding support from NUS Centre for Bioimaging Sciences (E-154-00-0020-01).
Author contributions: RPD and AIK conceived the work. JY developed the theory and numerical simulations. EL designed the metalens. EK designed the nanopillars. SS, AH, KHL, and FYH fabricated the samples. DS and JY performed the experiments. JY and RPD wrote the manuscript with inputs from all authors. RPD, AIK, and NDL supervised the research. All authors have accepted responsibility for the entire content of this manuscript and consented to its submission to the journal, reviewed all the results and approved the final version of the manuscript.
Conflict of interest: Authors state no conflict of interest.
Data availability: The code and dataset used in this work are available at https://doi.org/10.5281/zenodo.14746073.
References
[1] A. I. Kuznetsov, et al., “Roadmap for optical metasurfaces,” ACS Photonics, vol. 11, no. 3, pp. 816–865, 2024, https://doi.org/10.1021/acsphotonics.3c00457.
[2] T. H. Son, Q. Li, J. K. W. Yang, H. V. Demir, M. L. Brongersma, and A. I. Kuznetsov, “Optoelectronic metadevices,” Science, vol. 386, no. 6725, p. eadm7442, 2024, https://doi.org/10.1126/science.adm7442.
[3] A. I. Kuznetsov, A. E. Miroshnichenko, M. L. Brongersma, Y. S. Kivshar, and B. Luk’yanchuk, “Optically resonant dielectric nanostructures,” Science, vol. 354, no. 6314, Art. no. aag2472, 2016, https://doi.org/10.1126/science.aag2472.
[4] P. Genevet, F. Capasso, F. Aieta, M. Khorasaninejad, and R. Devlin, “Recent advances in planar optics: From plasmonic to dielectric metasurfaces,” Optica, vol. 4, no. 1, pp. 139–152, 2017, https://doi.org/10.1364/optica.4.000139.
[5] M. Pan, et al., “Dielectric metalens for miniaturized imaging systems: Progress and challenges,” Light Sci. Appl., vol. 11, no. 1, p. 195, 2022, https://doi.org/10.1038/s41377-022-00885-7.
[6] F. Aieta, et al., “Aberration-free ultrathin flat lenses and axicons at telecom wavelengths based on plasmonic metasurfaces,” Nano Lett., vol. 12, no. 9, pp. 4932–4936, 2012, https://doi.org/10.1021/nl302516v.
[7] R. Paniagua-Dominguez, et al., “A metalens with near-unity numerical aperture,” Nano Lett., vol. 18, no. 3, pp. 2124–2132, 2018, https://doi.org/10.1021/acs.nanolett.8b00368.
[8] M. Khorasaninejad, W. T. Chen, R. C. Devlin, J. Oh, A. Y. Zhu, and F. Capasso, “Metalenses at visible wavelengths: Diffraction-limited focusing and subwavelength resolution imaging,” Science, vol. 352, no. 6290, pp. 1190–1194, 2016, https://doi.org/10.1126/science.aaf6644.
[9] Z.-B. Fan, et al., “Silicon nitride metalenses for close-to-one numerical aperture and wide-angle visible imaging,” Phys. Rev. Appl., vol. 10, no. 1, Art. no. 014005, 2018, https://doi.org/10.1103/physrevapplied.10.014005.
[10] H. Liang, et al., “Ultrahigh numerical aperture metalens at visible wavelengths,” Nano Lett., vol. 18, no. 7, pp. 4460–4466, 2018, https://doi.org/10.1021/acs.nanolett.8b01570.
[11] T.-Y. Huang, et al., “A monolithic immersion metalens for imaging solid-state quantum emitters,” Nat. Commun., vol. 10, no. 1, p. 2392, 2019, https://doi.org/10.1038/s41467-019-10238-5.
[12] H. Liang, et al., “High performance metalenses: Numerical aperture, aberrations, chromaticity, and trade-offs,” Optica, vol. 6, no. 12, p. 1461, 2019, https://doi.org/10.1364/optica.6.001461.
[13] N. Wiener, Extrapolation, Interpolation, and Smoothing of Stationary Time Series: With Engineering Applications, London, England, MIT Press, 2019.
[14] M. Pu, X. Li, Y. Guo, X. Ma, and X. Luo, “Nanoapertures with ordered rotations: Symmetry transformation and wide-angle flat lensing,” Opt. Express, vol. 25, no. 25, pp. 31471–31477, 2017, https://doi.org/10.1364/oe.25.031471.
[15] A. Martins, et al., “On metalenses with arbitrarily wide field of view,” ACS Photonics, vol. 7, no. 8, pp. 2073–2079, 2020, https://doi.org/10.1021/acsphotonics.0c00479.
[16] D. K. Sharma, et al., “Stereo imaging with a hemispherical field-of-view metalens camera,” ACS Photonics, vol. 11, no. 5, pp. 2016–2021, 2024, https://doi.org/10.1021/acsphotonics.4c00087.
[17] A. V. Baranikov, et al., “Large field-of-view and multi-color imaging with GaP quadratic metalenses,” Laser Photon. Rev., vol. 18, no. 1, p. 2300553, 2024, https://doi.org/10.1002/lpor.202300553.
[18] E. Lassalle, et al., “Imaging properties of large field-of-view quadratic metalenses and their applications to fingerprint detection,” ACS Photonics, vol. 8, no. 5, pp. 1457–1468, 2021, https://doi.org/10.1021/acsphotonics.1c00237.
[19] X. Luo, F. Zhang, M. Pu, Y. Guo, X. Li, and X. Ma, “Recent advances of wide-angle metalenses: Principle, design, and applications,” Nanophotonics, vol. 11, no. 1, pp. 1–20, 2022, https://doi.org/10.1515/nanoph-2021-0583.
[20] F. Yang, et al., “Wide field-of-view metalens: A tutorial,” Adv. Photonics, vol. 5, no. 3, p. 033001, 2023, https://doi.org/10.1117/1.ap.5.3.033001.
[21] A. Martins, J. Li, B.-H. V. Borges, T. F. Krauss, and E. R. Martins, “Fundamental limits and design principles of doublet metalenses,” Nanophotonics, vol. 11, no. 6, pp. 1187–1194, 2022, https://doi.org/10.1515/nanoph-2021-0770.
[22] A. Arbabi, E. Arbabi, S. M. Kamali, Y. Horie, S. Han, and A. Faraon, “Miniature optical planar camera based on a wide-angle metasurface doublet corrected for monochromatic aberrations,” Nat. Commun., vol. 7, no. 1, p. 13682, 2016, https://doi.org/10.1038/ncomms13682.
[23] A. Wirth-Singh, et al., “Wide field of view large aperture meta-doublet eyepiece,” Light Sci. Appl., vol. 14, no. 1, p. 17, 2025, https://doi.org/10.1038/s41377-024-01674-0.
[24] M. Y. Shalaginov, et al., “Single-element diffraction-limited fisheye metalens,” Nano Lett., vol. 20, no. 10, pp. 7429–7437, 2020, https://doi.org/10.1021/acs.nanolett.0c02783.
[25] B. Groever, W. T. Chen, and F. Capasso, “Meta-lens doublet in the visible region,” Nano Lett., vol. 17, no. 8, pp. 4902–4907, 2017, https://doi.org/10.1021/acs.nanolett.7b01888.
[26] J. Yeo, D. Loh, R. Paniagua-Domínguez, and A. Kuznetsov, “EigenCWD: A spatially-varying deconvolution algorithm for single metalens imaging,” Opt. Express, vol. 33, no. 13, pp. 28481–28492, 2025, https://doi.org/10.1364/OE.540831.
[27] X. Dun, H. Ikoma, G. Wetzstein, Z. Wang, X. Cheng, and Y. Peng, “Learned rotationally symmetric diffractive achromat for full-spectrum computational imaging,” Optica, vol. 7, no. 8, pp. 913–922, 2020, https://doi.org/10.1364/optica.394413.
[28] E. Tseng, et al., “Neural nano-optics for high-quality thin lens imaging,” Nat. Commun., vol. 12, no. 1, p. 6493, 2021, https://doi.org/10.1038/s41467-021-26443-0.
[29] S. Tan, F. Yang, V. Boominathan, A. Veeraraghavan, and G. V. Naik, “3D imaging using extreme dispersion in optical metasurfaces,” ACS Photonics, vol. 8, no. 5, pp. 1421–1429, 2021, https://doi.org/10.1021/acsphotonics.1c00110.
[30] Q. Fan, et al., “Trilobite-inspired neural nanophotonic light-field camera with extreme depth-of-field,” Nat. Commun., vol. 13, no. 1, p. 2130, 2022, https://doi.org/10.1038/s41467-022-29568-y.
[31] S. Hu, et al., “Deep learning enhanced achromatic imaging with a singlet flat lens,” Opt. Express, vol. 31, no. 21, pp. 33873–33882, 2023, https://doi.org/10.1364/oe.501872.
[32] Y. Zhang, et al., “Deep-learning enhanced high-quality imaging in metalens-integrated camera,” Opt. Lett., vol. 49, no. 10, pp. 2853–2856, 2024, https://doi.org/10.1364/ol.521393.
[33] S. Pinilla, et al., “Miniature color camera via flat hybrid meta-optics,” Sci. Adv., vol. 9, no. 21, p. eadg7297, 2023, https://doi.org/10.1126/sciadv.adg7297.
[34] R. Maman, E. Mualem, N. Mazurski, J. Engelberg, and U. Levy, “Achromatic imaging systems with flat lenses enabled by deep learning,” ACS Photonics, vol. 10, no. 12, pp. 4494–4500, 2023, https://doi.org/10.1021/acsphotonics.3c01349.
[35] Y. Liu, et al., “Ultra-wide FOV meta-camera with transformer-neural-network color imaging methodology,” Adv. Photonics, vol. 6, no. 5, p. 056001, 2024, https://doi.org/10.1117/1.ap.6.5.056001.
[36] W. Cheng, et al., “Broadband achromatic imaging of a metalens with optoelectronic computing fusion,” Nano Lett., vol. 24, no. 1, pp. 254–260, 2024, https://doi.org/10.1021/acs.nanolett.3c03891.
[37] A. Kuznetsova, et al., “The Open Images Dataset V4: Unified image classification, object detection, and visual relationship detection at scale,” Int. J. Comput. Vis., vol. 128, pp. 1956–1981, 2020, https://doi.org/10.1007/s11263-020-01316-z.
[38] I. Krasin, et al., “OpenImages: A public dataset for large-scale multi-label and multi-class image classification,” 2017. Dataset available at: https://storage.googleapis.com/openimages/web/index.html.
[39] H. Iqbal, HarisIqbal88/PlotNeuralNet v1.0.0, 2018.
[40] S. W. Zamir, A. Arora, S. Khan, M. Hayat, F. S. Khan, and M.-H. Yang, “Restormer: Efficient transformer for high-resolution image restoration,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 5728–5739, https://doi.org/10.1109/CVPR52688.2022.00564.
[41] A. Paszke, et al., “PyTorch: An imperative style, high-performance deep learning library,” in Advances in Neural Information Processing Systems, vol. 32, 2019.
[42] S. Nah, T. H. Kim, and K. M. Lee, “Deep multi-scale convolutional neural network for dynamic scene deblurring,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 3883–3891, https://doi.org/10.1109/CVPR.2017.35.
Supplementary Material
This article contains supplementary material (https://doi.org/10.1515/nanoph-2025-0354).
© 2025 the author(s), published by De Gruyter, Berlin/Boston
This work is licensed under the Creative Commons Attribution 4.0 International License.