Article Open Access

Data-efficient prediction of OLED optical properties enabled by transfer learning

  • Jeong Min Shin, Sanmun Kim, Sergey G. Menabde, Sehong Park, In-Goo Lee, Injue Kim and Min Seok Jang
Published/Copyright: February 10, 2025

Abstract

It has long been desired to enable global structural optimization of organic light-emitting diodes (OLEDs) for maximal light extraction. The most critical obstacles to achieving this goal are time-consuming optical simulations and discrepancies between simulation and experiment. In this work, by leveraging transfer learning, we demonstrate that fast and reliable prediction of OLED optical properties is possible with several times higher data efficiency compared to previously demonstrated surrogate solvers based on artificial neural networks. Once a neural network is trained for a base OLED structure, it can be transferred to predict the properties of modified structures with additional layers using a relatively small number of additional training samples. Moreover, we demonstrate that, with only a few tens of experimental data sets, a neural network can be trained to accurately predict experimental measurements of OLEDs, which often differ from simulation results due to fabrication and measurement errors. This is enabled by transferring a pre-trained network, built with a large amount of simulated data, to a new network capable of correcting systematic errors in experiment. Our work proposes a practical approach to designing and optimizing OLED structures with a large number of design parameters to achieve high optical efficiency.

1 Introduction

Organic light-emitting diodes (OLEDs) are a mainstream display technology adopted in a wide range of devices, spanning from wearable and mobile devices to large-screen televisions and signage. Both the electrical and optical properties of OLEDs have been constantly improved [1], with a special focus on the external quantum efficiency (EQE) [2], [3], [4], [5], which is the ratio between the number of photons emitted by the device and the number of electrons injected into it. The EQE is given by the product of the light extraction efficiency (LEE) and the internal quantum efficiency (IQE). With the development of light-emitting organic molecules, the IQE has reached near unity [6], [7], so the LEE has become the limiting factor dictating the overall device quantum efficiency [8]. Consequently, improving the LEE by structure optimization has been the subject of many previous studies [9], [10], [11].

Various electromagnetic simulation techniques have been developed and adopted to numerically predict and optimize the LEE of OLEDs. Conventional full-wave simulations based on the finite-difference time-domain (FDTD) and finite element methods (FEM) provide general and convenient ways to predict the optical properties of OLEDs, but they are computationally expensive. The computational complexity can be greatly reduced when the device structure possesses a strong symmetry. For example, Chance, Prock, and Silbey developed a model (the CPS model) that analyzes the interaction between fluorescent molecules and a nearby metal surface and enables the analysis of the near-field radiation of the emissive layer in planar OLEDs to calculate the light extraction efficiency of the device [12]. This method was later extended to periodically corrugated devices by Park et al. [13]. Although these two approaches have alleviated the computational burden to some degree and thus enabled optimization of simple devices with a small number of design variables, they are still not fast enough for global optimization of more general structures with a large parametric space and multiple constraints, such as color coordinates and viewing angle, which are crucial for meeting industrial demands.

To address this issue, machine learning has been used to boost the computation speed by means of artificial neural networks that produce approximate results rather than exact solutions. Once trained, surrogate solvers based on artificial neural networks predict the optical properties of devices a few orders of magnitude faster than rigorous simulations and can therefore tackle high-complexity optimization problems [14]. However, there exist critical challenges to overcome. First, a large amount of training data, which needs to be generated by rigorous simulations, is required to train a network with sufficient accuracy [15]. Second, even a slight change in the device configuration (e.g., when new layers are added) requires the entire network to be retrained with a new set of training samples, which, again, involves time-consuming sample generation via rigorous electromagnetic simulations. Finally, since the network is trained on simulated data, discrepancies between theory and experiment are inevitable due to fabrication, material property, or measurement errors, negatively affecting the performance of fabricated devices designed by a neural network.

In this study, we suggest a pathway to overcome the aforementioned problems by using transfer learning – a deep learning method based on the migration of certain deep layers from a pre-trained network [16], [17]. This approach allows for efficient network learning even when the training situation changes, as reported several times for different optical devices [18], [19], [20], [21]. We apply transfer learning to predict the light-emitting properties of OLEDs and demonstrate a two-fold enhancement of data efficiency compared to the case of direct learning. Moreover, we show that transfer learning can also be leveraged to alleviate the simulation-experiment mismatch problem with only a few dozen experimental measurement data sets.

2 Transfer learning for light extraction efficiency prediction in OLED

The considered OLED structure is a multilayer stack illustrated in Figure 1(a). The organic diode layer is N,N′-Di(1-naphthyl)-N,N′-diphenyl-(1,1′-biphenyl)-4,4′-diamine (NPB) [22], which is located above the aluminum (Al) anode. To maintain the electrical properties of the cathode and anode layers, their thicknesses are fixed at 12 nm and 100 nm, respectively.

Figure 1: 
Schematic of the LEE prediction network with transfer learning. (a) and (d) Show the 6-design-variable OLED and the 8-design-variable OLED, respectively. (b) and (e) Illustrate the LEE prediction networks for the 6- and 8-variable OLEDs: the BaseNet and the TransferNet, respectively. The BaseNet is transferred to the TransferNet in two parts: frozen (cyan) and tunable (blue) weight parameters. (c) and (f) Show the LEE spectra predicted by the BaseNet and the TransferNet, respectively, including the CPS-calculated LEE spectra (black) used as a ground truth. The blue spectrum in (f) shows the LEE spectrum of the 8-parameter OLED calculated by the BaseNet with 6 parameters, ignoring the 2 additional parameters for the added layer, as shown in (b).

The training samples for the neural networks in our study are generated by rigorous simulations using the CPS model under the assumptions of a non-absorbing emitting medium, an isotropic dipole transition moment, a low excitation level, and an excitation zone that is small compared to the cavity length. These CPS-model approximations have been shown to provide good agreement with experimental results for OLEDs [23], [24], [25], [26], [27]; hence, we apply the same model to calculate the LEE in this work.

The refractive indices of the passivation (n 1) and capping (n 2) layers range from 1.6 to 1.9 and from 1.4 to 2.0, respectively, while the layer thicknesses can vary between 500 nm and 1,500 nm for the passivation layer (h 1) and between 10 nm and 250 nm for the capping layer (h 2) [14]. The distances from the light-emitting layer to the top (h 3) and bottom (h 4) boundaries of the NPB layer both lie in the range of 10 nm–250 nm. These six parameters can change the optical properties without significantly affecting the electrical characteristics of the device. The dipolar emission from the emission layer is assumed to be randomly oriented. For every given structure, the LEE is calculated at 81 wavelengths between 380 nm and 780 nm with a step of 5 nm. Hence, each training sample has 6 structure parameters (h 1, h 2, h 3, h 4, n 1, n 2) as input and the LEE spectrum at 81 wavelengths (380, 385, 390, … , 780 nm) as output.
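For concreteness, generating the neural-network training inputs from these ranges could look like the following minimal NumPy sketch. The variable names, the dictionary layout, and the uniform sampling distribution are our assumptions; the paper does not state how the training structures were drawn.

```python
import numpy as np

rng = np.random.default_rng(0)

# Design-variable ranges from the text (names are ours): thicknesses in nm,
# refractive indices dimensionless.
PARAM_RANGES = {
    "h1": (500.0, 1500.0),  # passivation layer thickness
    "h2": (10.0, 250.0),    # capping layer thickness
    "h3": (10.0, 250.0),    # emitter to top NPB boundary
    "h4": (10.0, 250.0),    # emitter to bottom NPB boundary
    "n1": (1.6, 1.9),       # passivation refractive index
    "n2": (1.4, 2.0),       # capping refractive index
}

def sample_structures(n):
    """Draw n structures uniformly at random from the 6-dimensional design space."""
    lo = np.array([r[0] for r in PARAM_RANGES.values()])
    hi = np.array([r[1] for r in PARAM_RANGES.values()])
    return lo + (hi - lo) * rng.random((n, len(PARAM_RANGES)))

# Output grid: LEE evaluated at 81 wavelengths, 380-780 nm in 5 nm steps.
wavelengths = np.arange(380, 781, 5)

structures = sample_structures(2000)  # inputs for 2,000 CPS simulations
```

Each sampled row would then be passed to the CPS solver to obtain its 81-point LEE spectrum, forming one input-output training pair.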

As a first step, we construct and train a network called the BaseNet to predict the LEE spectrum of the OLED device with the 6 design parameters discussed above (Figure 1(b)). The BaseNet consists of 10 fully connected layers with 300 neurons in each layer and is trained with the Adam optimizer on 2,000 training samples (batch size of 500, 2,000 iterations) to minimize the root mean squared error (RMSE) between the predicted LEE spectra and the ground truth spectra. This training sample size is significantly smaller than in similar studies, where 260,000 and 750,000 samples were used [14], [28]. The RMSE of the LEE spectrum predicted by the BaseNet is 0.0168 (calculated from 1,000 test samples). Once trained, the neural network performs feedforward operations on a given input to produce a predicted LEE spectrum; this process is called inference. A comparison of a rigorously simulated LEE spectrum with the BaseNet prediction for the same device is shown in Figure 1(c). We note that the CPS model run on a single CPU core requires 23 s to compute the LEE of a given OLED structure, while the networks in this paper take only 0.53 ms on the same CPU, a four-orders-of-magnitude speedup. Moreover, the BaseNet calculation time is dramatically reduced to 0.08 μs per structure on an NVIDIA GeForce RTX 3080 GPU.
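The BaseNet's feedforward inference and RMSE metric can be sketched as follows. This is a hedged illustration, not the authors' code: the ReLU activation, the weight initialization, and whether the 81-output readout counts as one of the 10 layers are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# 10 fully connected weight layers between the 6 inputs and the 81-point LEE
# spectrum, with 300 neurons in the hidden layers. Counting the 81-unit
# readout among the 10 layers, and using ReLU, are our assumptions.
widths = [6] + [300] * 9 + [81]
weights = [rng.normal(0.0, np.sqrt(2.0 / m), (m, n))
           for m, n in zip(widths[:-1], widths[1:])]
biases = [np.zeros(n) for n in widths[1:]]

def forward(x, weights, biases):
    """Feedforward (inference): ReLU hidden layers, linear output layer."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(h @ W + b, 0.0)
    return h @ weights[-1] + biases[-1]

def rmse(pred, truth):
    """Root mean squared error between predicted and ground-truth spectra."""
    return np.sqrt(np.mean((pred - truth) ** 2))

x = rng.random((500, 6))               # one batch of 500 normalized inputs
spectra = forward(x, weights, biases)  # predicted 81-point spectra
```

In training, `rmse(spectra, ground_truth)` would be the quantity minimized by Adam; here the weights are random, so the output only illustrates shapes and data flow.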

We then create another network, the TransferNet, which predicts the LEE spectrum of OLED structures with one additional layer that adds two new design parameters to the system: thickness (h add) and refractive index (n add). Thus, the TransferNet has 8 design parameters as inputs – the original 6 plus n add and h add. The additional layer is located between the cathode and the capping layer, as shown in Figure 1(d), and is expected to have a large impact on the LEE spectrum while not affecting the electrical properties of the structure. To cover various materials, n add is allowed to vary from 1.2 to 2.0, and h add ranges from 10 nm to 1,000 nm.

The TransferNet consists of three parts. The first part is transferred from the first M layers of the BaseNet with frozen weight parameters (indicated as the cyan box in Figure 1(e), drawn for M = 6). The second part is a new network that takes the additional parameters as input, and the third part connects the two preceding networks; it is transferred from the last (10 − M) layers of the BaseNet but with unfrozen weight parameters (indicated as the purple box in Figure 1(e)). The TransferNet is then trained on training sets of different sizes N, between 200 and 2,000 in increments of 200.
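The three-part assembly described above might be sketched as follows. This is illustrative only: the function names are ours, and how the new-parameter branch merges into the main path (here, elementwise addition into a hidden activation) is not specified in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in BaseNet weights; in practice these come from the pre-trained network.
widths = [6] + [300] * 9 + [81]
base_W = [rng.normal(0.0, 0.05, (m, n)) for m, n in zip(widths[:-1], widths[1:])]
base_b = [np.zeros(n) for n in widths[1:]]

M = 6  # number of frozen layers transferred from the BaseNet

def build_transfernet(base_W, base_b, M, n_extra=2, hidden=300):
    """Assemble the three parts: frozen head, new branch, tunable tail."""
    frozen = [(W.copy(), b.copy()) for W, b in zip(base_W[:M], base_b[:M])]
    branch = (rng.normal(0.0, 0.05, (n_extra, hidden)), np.zeros(hidden))
    tunable = [(W.copy(), b.copy()) for W, b in zip(base_W[M:], base_b[M:])]
    return frozen, branch, tunable

def forward_transfer(x6, x_extra, frozen, branch, tunable):
    h = x6
    for W, b in frozen:            # frozen part: excluded from gradient updates
        h = np.maximum(h @ W + b, 0.0)
    Wb, bb = branch
    h = h + np.maximum(x_extra @ Wb + bb, 0.0)  # inject the 2 new parameters
    for W, b in tunable[:-1]:      # tunable part: fine-tuned on new samples
        h = np.maximum(h @ W + b, 0.0)
    W, b = tunable[-1]
    return h @ W + b

frozen, branch, tunable = build_transfernet(base_W, base_b, M)
y = forward_transfer(rng.random((3, 6)), rng.random((3, 2)), frozen, branch, tunable)
```

During training, only `branch` and `tunable` would be updated, which is what makes the small TransferNet training sets sufficient.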

To verify the accuracy of the BaseNet and the TransferNet, Figure 1(f) compares the LEE spectrum of the new OLED structure with 8 design parameters calculated by the CPS model (black) with the LEE predicted by the TransferNet with (N, M) = (1,000, 6) (red) and by the BaseNet (blue). The TransferNet shows much better agreement with the ground truth than the BaseNet prediction, which ignores the added layer.

When the number of training samples for the BaseNet is fixed at 2,000, the performance (RMSE) of the TransferNet depends on the number of frozen layers transferred from the BaseNet (M) and the number of training samples for the TransferNet (N), as shown in Figure 2(a). The RMSE of the TransferNet monotonically decreases as more samples are used for its training. Interestingly, the TransferNet shows the best performance for M = 6 regardless of N. The existence of an optimal number of frozen layers in the transferred network suggests that too few frozen layers cannot carry sufficient information from the pre-trained network, while too many frozen layers make the network less adaptive to the new OLED structure. Therefore, choosing an appropriate number of frozen layers determines the overall performance of the network.

Figure 2: 
The RMSE of the TransferNet as a function of (a) the number of TransferNet training samples (N) and the number of frozen layers (M), with the BaseNet trained on 2,000 samples; and (b) the number of TransferNet training samples (N) and the number of BaseNet training samples, with a fixed number of frozen layers (M = 6).

The performance of the TransferNet also depends on the accuracy of the BaseNet. Figure 2(b) shows the relationship between the RMSE of the TransferNet and the size of the training sets for both the TransferNet and the BaseNet for M = 6. As expected, as the size of both training sets increases, the prediction error decreases. At the same time, the number of samples for BaseNet training has a limited effect, as the RMSE saturates once the set size exceeds approximately 2,000 samples. This suggests that there is a limit to how much the TransferNet performance can be improved by BaseNet training. The TransferNet performance also saturates as the size of its own training set increases, which is likely due to the frozen layers, since they carry over the error of the pre-trained network. What is remarkable, however, is that only several hundred samples are sufficient for the TransferNet to reach an RMSE below 0.02. Compared to previous research applying transfer learning in photonics [20], [21], the transfer learning in this paper dramatically improves the accuracy even with a relatively inaccurate BaseNet.

The effectiveness of transfer learning can be estimated by comparing the TransferNet performance with that of a network with the same architecture but trained from randomly initialized weights on an independent set of samples without transfer learning, which we refer to as the DirectNet. For statistical analysis, we examine 30 networks trained on different sets of 2,000 training samples and show the mean value and the standard deviation of the RMSE obtained from both networks in Figure 3(a). Notably, when trained on 500 samples, the TransferNet shows a standard deviation of 4.6 × 10⁻⁴, while the DirectNet shows an order-of-magnitude larger standard deviation of 9.9 × 10⁻³. It can also be seen that transfer learning increases data efficiency by more than 200 %, which is very high compared to other studies using transfer learning [19]: the DirectNet requires more than 1,000 training samples to reach the prediction accuracy (RMSE = 0.0214) that the TransferNet attains with 500. This means that transfer learning not only makes the network more stable against changes in the training set, but also yields much better data efficiency, which is particularly prominent for smaller training sets. However, the performance difference between the two networks decreases as the number of training samples (N) increases, and becomes negligible when it surpasses the number of samples used to train the BaseNet.

Figure 3: 
Average prediction error of TransferNet and DirectNet. (a) The average RMSE of TransferNet (red), which uses the BaseNet trained on 2,000 training samples and the average RMSE of DirectNet (black). Both networks were trained on, and the resulting RMSE values were averaged over, 30 independent training sample sets. The shaded area represents the standard deviation. (b) The RMSE of TransferNet and DirectNet with 1, 2, and 3 additional layers in the OLED structure, averaged across 40 independent training sample sets containing 1,000 training samples each. Error bars indicate the standard deviation.

We proceed with testing the effectiveness of transfer learning when the OLED structure becomes even more complex. While the discussion above has focused on LEE prediction for the OLED with a single additional layer, we also statistically analyze the effectiveness of transfer learning when 2 and 3 layers are added to the original OLED structure. Again, we compare the performance of the TransferNet with (N, M) = (1,000, 6) and the DirectNet (N = 1,000) without transfer learning. The number of input parameters increases from 6 in the original OLED up to 8, 10, and 12 in the OLED with 1, 2, and 3 extra layers, respectively. All three new TransferNets are trained with different training sets, and their performance is analyzed by predicting the LEE for 1,000 test structures in each case. To validate the robustness of the network with respect to the BaseNet and the training data, we compare the LEE prediction accuracy of 40 different TransferNets trained on different training data sets, while every BaseNet is also trained on an independent dataset of 2,000 samples. As shown in Figure 3(b), the TransferNet shows a 10–25 % lower RMSE than the DirectNet with the same 1,000 training samples. The standard deviation of the RMSE, which reflects the stability of the network, is also 2.3–4.2 times smaller for the TransferNet. This suggests that transfer learning improves network performance even for more complex structures that differ significantly from the original OLED structure. We also demonstrate that the TransferNet can be used to optimize an 8-layer OLED structure more than 10³ times faster than the rigorous CPS model, while the results of the two optimizations are very similar, as shown in Figure S1. In addition, because the TransferNet is more than twice as data-efficient, the time needed to train it is also less than half that of the existing network. This suggests that the TransferNet could allow for faster OLED structure optimization compared not only to computational models but also to existing machine learning networks.

3 Transfer learning for error prediction

In many cases, experimentally realized devices exhibit different properties compared to those predicted by numerical simulations [29]. This discrepancy can be attributed to various sources of error. Here, we classify the errors into two categories: systematic and random errors. As a representative example of systematic errors, we consider a systematic deviation of the design parameters of a fabricated device from their design values, often caused by miscalibrated fabrication equipment. We also consider random errors in optical measurements due to unidentifiable sources such as detector noise.

In this work, we suggest a way to predict the measured LEE spectra of intended device structures using an extremely small amount of experimental data in combination with a large amount of simulation data. This approach is meaningful because it usually takes significantly more time and effort to obtain experimental measurement data than to run an optical simulation. Due to systematic fabrication errors, the intended device structures, Figure 4(a), can differ from the actually fabricated ones, Figure 4(b). Furthermore, random noise makes the measured LEE spectra, Figure 4(d), different from the calculated exact spectra, Figure 4(e). Since the random measurement error is inherently unpredictable, here we focus on analyzing and correcting the systematic errors using transfer learning.

Figure 4: 
Schematic of the error prediction network. (a), (b) The intended and fabricated OLED structure samples, respectively. The intended structure is identical to that shown in Figure 1(a), while the fabricated structure includes fabrication errors in both the refractive index and thickness. (c) The error prediction network structure for the experimental dataset (gray box), which accounts for both systematic and random errors. The ExpNet (yellow box) consists of the transferred and frozen BaseNet (blue box) and the ErrNet (green box), which has two hidden layers, each containing 100 neurons. (d), (e) The LEE spectra calculated from the fabricated structure with and without random measurement error, respectively.

To mimic experimental measurements, we generate synthetic experimental data with artificial errors, as illustrated in Figure 4. We apply a systematic error function to each input design parameter, bounded within 10 % of its design value (see Supplementary 02 for more details), and assign random Gaussian error to the output LEE spectra. This synthetic experimental data allows us to verify whether the network can correctly recover the known systematic error function.
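A minimal sketch of such synthetic data generation follows. The sinusoidal systematic error function and the stand-in solver are our illustrative choices; the actual error functions are given in Supplementary 02 and are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

def systematic_error(p, scale=0.05):
    """Illustrative smooth systematic distortion of intended parameters,
    bounded within +/-10 % of the design value for scale <= 0.1."""
    return p * (1.0 + scale * np.sin(p / (np.abs(p).mean() + 1e-12)))

def make_synthetic_experiment(params, simulate, noise_std=0.01):
    """Distort intended parameters, simulate the 'fabricated' structure, and
    add Gaussian measurement noise to the resulting LEE spectrum."""
    fabricated = systematic_error(params)
    spectrum = simulate(fabricated)
    return spectrum + rng.normal(0.0, noise_std, spectrum.shape)

# Stand-in for the CPS solver, which returns an 81-point LEE spectrum.
fake_cps = lambda p: np.full(81, 0.2)

intended = rng.uniform(10.0, 250.0, 6)  # one intended design (illustrative)
measured = make_synthetic_experiment(intended, fake_cps)
```

Because the distortion applied to the inputs is known exactly, the recovered error functions can later be compared against it, which is the verification strategy the text describes.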

A new network called the ExpNet is constructed by prepending an additional network (called the ErrNet) to the pre-trained and fully transferred BaseNet, which has the same network structure as the one used in the previous sections but is trained with 50,000 samples, as shown in Figure 4(c). The ErrNet is responsible for predicting the systematic fabrication errors in the device structure. It takes the intended design parameters as input and produces the parameters of the fabricated structure, which contain the systematic errors that occurred during fabrication, as shown in Figure 4(a) and (b). Because each assumed systematic error function depends on its parameter alone, we use six separate ErrNets for the six input design parameters; each ErrNet is composed of two fully connected hidden layers with 100 nodes each, as indicated by the green box in Figure 4(c) (see Supplementary 02 for more details).

During the training of the ExpNet on the experimental data, the pre-trained and transferred BaseNet is frozen and only the ErrNets are trained. Since the LEE prediction accuracy of the BaseNet constrains the overall prediction accuracy of the ExpNet and thus affects the error correction capability of the ErrNet, we first train the BaseNet with a much larger amount of data (50,000 samples) than in the previous case. The ExpNet is then trained using only 60 synthetic experimental samples containing both systematic input errors and random output errors with a standard deviation of 0.01. Figure 5(a) shows an example of the resulting LEE spectra predicted by the ExpNet (red) and the BaseNet (blue), together with the synthetic experimental data (black). The ExpNet prediction agrees well with the synthetic experimental data, up to the Gaussian random noise. Indeed, the RMSE of the trained ExpNet reaches 0.0112, close to the noise floor defined by the random Gaussian error. In contrast, when the systematic error is ignored (the ErrNets are turned off), the RMSE of the BaseNet for the synthetic experimental data is 0.1, an order of magnitude higher than that of the ExpNet, demonstrating the effectiveness of transfer learning.
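Structurally, the ExpNet composition could be sketched as follows. This is a hypothetical sketch: the tanh activation and all names are our assumptions; the point is that only the ErrNet weights would receive gradient updates, while the BaseNet stand-in stays fixed.

```python
import numpy as np

rng = np.random.default_rng(4)

def make_errnet():
    """One per-parameter ErrNet: 1 -> 100 -> 100 -> 1, two hidden layers."""
    widths = [1, 100, 100, 1]
    W = [rng.normal(0.0, 0.1, (m, n)) for m, n in zip(widths[:-1], widths[1:])]
    b = [np.zeros(n) for n in widths[1:]]
    return W, b

def errnet_forward(p, net):
    """Map one intended parameter value to its predicted fabricated value."""
    W, b = net
    h = np.asarray(p, dtype=float).reshape(-1, 1)
    for Wi, bi in zip(W[:-1], b[:-1]):
        h = np.tanh(h @ Wi + bi)  # smooth activation; an assumption
    return (h @ W[-1] + b[-1]).ravel()

errnets = [make_errnet() for _ in range(6)]  # one ErrNet per design parameter

def expnet_forward(params, basenet):
    """ExpNet: ErrNets correct the parameters, then the frozen BaseNet predicts.
    During training, only the ErrNet weights would be updated."""
    corrected = np.array([errnet_forward(params[i], errnets[i])[0]
                          for i in range(6)])
    return basenet(corrected)

# Stand-in for the frozen, pre-trained BaseNet (50,000-sample training in the paper).
frozen_basenet = lambda p: np.full(81, 0.2)

spectrum = expnet_forward(rng.random(6), frozen_basenet)
```

Keeping the surrogate frozen forces the 60 experimental samples to be spent entirely on identifying the low-dimensional systematic error functions rather than relearning the optics.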

Figure 5: 
Performance of ExpNet and ErrNet. (a) The LEE spectrum of the synthetic experimental data containing both systematic and random errors (black), and the predicted LEE spectra by the ExpNet (red) and the BaseNet (blue). (b) Input systematic error functions for the design parameters (black) and the predicted error functions by the ErrNet (red).

Even with random Gaussian errors in the output LEE spectra, the ErrNet is able to identify the assigned systematic error functions of the input parameters, as shown in Figure 5(b) (see Figure S3 for more details). We note that, although the ExpNet can accurately predict the synthetic experimental data, the ErrNet prediction for the parameter h 2 (the capping layer thickness) shows poor accuracy. This can be explained by the different impact of each design parameter on the LEE and the limited effect of h 2 on the LEE in the considered OLED structure (see Figures S4 and S5) [30]. Thus, the accuracy of the error prediction function can also be used to qualitatively estimate the effect of a given variable on the LEE spectrum.

The RMSE of the ExpNet is inherently bounded by the noise floor and the RMSE of the BaseNet. Figure 6(a) shows the average RMSE of the ExpNet for 10 different systematic error functions in the synthetic data as a function of the number of training samples, as well as the theoretical limits on the RMSE set by the pre-trained BaseNet (black dashed) and the Gaussian random noise (gray dashed). Notably, the ExpNet approaches the theoretical limit even when only a few dozen samples are used for its training. Additionally, Figure 6(a) shows that the prediction accuracy of the ExpNet saturates as the training sample size increases. Given how small the number of training samples is, this demonstrates the effectiveness of transfer learning in the analysis of experimental data. However, as the output random error (Gaussian noise) increases, the RMSE saturates to the noise standard deviation, because the random noise surpasses the network prediction error. In this regime, it is difficult to evaluate the performance of the network based on the RMSE alone. Therefore, we proceed with a separate analysis of the ErrNet performance.

Figure 6: 
The RMSE of the ExpNet and the ErrNet as a function of the noise level and the number of training samples. (a) The RMSE of the ExpNet trained on 10–160 samples as a function of the standard deviation of the random noise in the synthetic experimental LEE and that of the BaseNet. The gray and black dotted lines represent the standard deviation of the random noise and the BaseNet RMSE, respectively. (b) The RMSE of the ErrNet for the parameter h 3 as a function of the number of training samples and the standard deviation of the random noise.

The performance of the ErrNet, which is critical for the ExpNet, can be evaluated by the accuracy of the network’s predictions with respect to the number of training samples and by the robustness of the network to random errors. For example, the RMSE of the ErrNet’s prediction for the parameter h 3 is shown in Figure 6(b) as a function of the number of training samples and the standard deviation of the Gaussian random error. The RMSE is calculated statistically over 10 different error functions for h 3. The RMSE of the ErrNet generally increases with the standard deviation of the random noise. In particular, at a high noise level with a standard deviation above 0.01, the network prediction error tends to blow up when the network is trained on too few samples. To prevent this blow-up, one needs 60 or more experimental data sets for network training, which is still a small enough amount to be generated by actual experiments. The ErrNets for the other input design parameters show similar dependence on the number of training samples and the noise level, as presented in Figure S5.

4 Conclusion and discussion

In summary, we leverage transfer learning to tackle two prominent problems in predicting the optical properties of OLEDs. First, with transfer learning, a neural network trained for a certain OLED structure can be reused for modified structures with additional layers, resulting in a two-fold increase in sample efficiency compared to direct learning with the same network architecture. Second, we show that the long-standing problem of simulation-experiment mismatch can also be addressed with transfer learning. By combining an error correction network with an accurate surrogate solver trained on a larger amount of simulation data, the combined network can be trained to predict experimental LEE spectra using only a few dozen experimental training samples. We note that the experimental data used in this work are synthetically generated as a proof of concept. To further validate the practical effectiveness of the proposed approach, it must be applied to real experimental data. Our work constitutes a stepping stone towards global structural optimization of OLEDs by enabling fast and reliable prediction of their optical properties.


Corresponding author: Min Seok Jang, School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, South Korea, E-mail:

  1. Research funding: This work was supported by LG Display and by the National Research Foundation of Korea (NRF) funded by the Korea government (MSIT) (RS-2024-00416583, RS-2024-00414119); Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (RS-2024-00412644); Culture, Sports and Tourism R&D Program through the Korea Creative Content Agency grant funded by the Ministry of Culture, Sports and Tourism in 2024 (RS-2024-00332210).

  2. Author contributions: JMS, SK, and MSJ conceived the ideas. JMS and SK performed neural network computations. JMS analysed the resulting neural network structures. JMS, SK, SP, IGL, and IK conducted detailed analysis of the OLED structure. MSJ supervised the project. The manuscript was mainly written by JMS, SGM, and MSJ with contributions from all authors.

  3. Conflict of interest: The authors state no conflicts of interest.

  4. Data availability: All data generated during this study are available from the corresponding author upon reasonable request.

References

[1] J.-H. Jou, S. Kumar, A. Agrawal, T.-H. Li, and S. Sahoo, “Approaches for fabricating high efficiency organic light emitting diodes,” J. Mater. Chem. C, vol. 3, no. 13, pp. 2974–3002, 2015, https://doi.org/10.1039/c4tc02495h.

[2] J. W. Sun, J. H. Lee, C. K. Moon, K. H. Kim, H. Shin, and J. J. Kim, “A fluorescent organic light-emitting diode with 30% external quantum efficiency,” Adv. Mater., vol. 26, no. 32, pp. 5684–5688, 2014, https://doi.org/10.1002/adma.201401407.

[3] K. Tuong Ly, et al., “Near-infrared organic light-emitting diodes with very high external quantum efficiency and radiance,” Nat. Photonics, vol. 11, no. 1, pp. 63–68, 2017, https://doi.org/10.1038/nphoton.2016.230.

[4] G. Gu, D. Garbuzov, P. Burrows, S. Venkatesh, S. Forrest, and M. Thompson, “High-external-quantum-efficiency organic light-emitting devices,” Opt. Lett., vol. 22, no. 6, pp. 396–398, 1997, https://doi.org/10.1364/ol.22.000396.

[5] S. Y. Kim, et al., “Organic light-emitting diodes with 30% external quantum efficiency based on a horizontally oriented emitter,” Adv. Funct. Mater., vol. 23, no. 31, pp. 3896–3900, 2013, https://doi.org/10.1002/adfm.201300104.

[6] H. Uoyama, K. Goushi, K. Shizu, H. Nomura, and C. Adachi, “Highly efficient organic light-emitting diodes from delayed fluorescence,” Nature, vol. 492, no. 7428, pp. 234–238, 2012, https://doi.org/10.1038/nature11687.

[7] C. Adachi, M. A. Baldo, M. E. Thompson, and S. R. Forrest, “Nearly 100% internal phosphorescence efficiency in an organic light-emitting device,” J. Appl. Phys., vol. 90, no. 10, pp. 5048–5051, 2001, https://doi.org/10.1063/1.1409582.

[8] A. Salehi, X. Fu, D. H. Shin, and F. So, “Recent advances in OLED optical design,” Adv. Funct. Mater., vol. 29, no. 15, p. 1808803, 2019, https://doi.org/10.1002/adfm.201808803.

[9] R. Shinar and J. Shinar, “Light extraction from organic light emitting diodes (OLEDs),” J. Phys. Photonics, vol. 4, no. 3, p. 032002, 2022, https://doi.org/10.1088/2515-7647/ac6ea4.

[10] B.-Y. Lin, et al., “Highly efficient OLED achieved by periodic corrugations using facile fabrication,” J. Lumin., vol. 269, p. 120482, 2024, https://doi.org/10.1016/j.jlumin.2024.120482.

[11] A. Rostami, M. Noori, S. Matloub, and H. Baghban, “Light extraction efficiency enhancement in organic light emitting diodes based on optimized multilayer structures,” Optik, vol. 124, no. 18, pp. 3287–3291, 2013, https://doi.org/10.1016/j.ijleo.2012.10.053.

[12] R. Chance, A. Prock, and R. Silbey, “Molecular fluorescence and energy transfer near interfaces,” Adv. Chem. Phys., vol. 37, pp. 1–65, 1978, https://doi.org/10.1002/9780470142561.ch1.

[13] C. Park, et al., “Fast and rigorous optical simulation of periodically corrugated light-emitting diodes based on a diffraction matrix method,” Opt. Express, vol. 31, no. 12, pp. 20410–20423, 2023, https://doi.org/10.1364/oe.489758.

[14] S. Kim, et al., “Inverse design of organic light-emitting diode structure based on deep neural networks,” Nanophotonics, vol. 10, no. 18, pp. 4533–4541, 2021, https://doi.org/10.1515/nanoph-2021-0434.

[15] Z. Pan and X. Pan, “Deep learning and adjoint method accelerated inverse design in photonics: a review,” Photonics, vol. 10, no. 7, p. 852, 2023, https://doi.org/10.3390/photonics10070852.

[16] S. J. Pan and Q. Yang, “A survey on transfer learning,” IEEE Trans. Knowl. Data Eng., vol. 22, no. 10, pp. 1345–1359, 2009, https://doi.org/10.1109/tkde.2009.191.

[17] L. Torrey and J. Shavlik, “Transfer learning,” in Handbook of Research on Machine Learning Applications and Trends: Algorithms, Methods, and Techniques, Hershey, PA, USA: IGI Global, 2010, pp. 242–264, https://doi.org/10.4018/978-1-60566-766-9.ch011.

[18] Y. Qu, L. Jing, Y. Shen, M. Qiu, and M. Soljacic, “Migrating knowledge between physical scenarios based on artificial neural networks,” ACS Photonics, vol. 6, no. 5, pp. 1168–1174, 2019, https://doi.org/10.1021/acsphotonics.8b01526.

[19] Z. Fan, et al., “Transfer-learning-assisted inverse metasurface design for 30% data savings,” Phys. Rev. Appl., vol. 18, no. 2, p. 024022, 2022, https://doi.org/10.1103/physrevapplied.18.024022.

[20] J. Zhang, et al., “Heterogeneous transfer-learning-enabled diverse metasurface design,” Adv. Opt. Mater., vol. 10, no. 17, p. 2200748, 2022, https://doi.org/10.1002/adom.202200748.

[21] D. Xu, et al., “Efficient design of a dielectric metasurface with transfer learning and genetic algorithm,” Opt. Mater. Express, vol. 11, no. 7, pp. 1852–1862, 2021, https://doi.org/10.1364/ome.427426.

[22] S. S. Swayamprabha, et al., “Hole-transporting materials for organic light-emitting diodes: an overview,” J. Mater. Chem. C, vol. 7, no. 24, pp. 7144–7158, 2019, https://doi.org/10.1039/c9tc01712g.

[23] M. Furno, R. Meerheim, S. Hofmann, B. Lüssem, and K. Leo, “Efficiency and rate of spontaneous emission in organic electroluminescent devices,” Phys. Rev. B, vol. 85, no. 11, p. 115205, 2012, https://doi.org/10.1103/physrevb.85.115205.

[24] S. Nowy, B. C. Krummacher, J. Frischeisen, N. A. Reinke, and W. Brütting, “Light extraction and optical loss mechanisms in organic light-emitting diodes: influence of the emitter quantum efficiency,” J. Appl. Phys., vol. 104, no. 12, 2008, https://doi.org/10.1063/1.3043800.

[25] J. Song, H. Lee, E. G. Jeong, K. C. Choi, and S. Yoo, “Organic light-emitting diodes: pushing toward the limits and beyond,” Adv. Mater., vol. 32, no. 35, p. 1907539, 2020, https://doi.org/10.1002/adma.201907539.

[26] J. Song, et al., “Lensfree OLEDs with over 50% external quantum efficiency via external scattering and horizontally oriented emitters,” Nat. Commun., vol. 9, no. 1, p. 3207, 2018, https://doi.org/10.1038/s41467-018-05671-x.

[27] B. C. Krummacher, S. Nowy, J. Frischeisen, M. Klein, and W. Brütting, “Efficiency analysis of organic light-emitting diodes based on optical simulation,” Org. Electron., vol. 10, no. 3, pp. 478–485, 2009, https://doi.org/10.1016/j.orgel.2009.02.002.

[28] D. Liu, Y. Tan, E. Khoram, and Z. Yu, “Training deep neural networks for the inverse design of nanophotonic structures,” ACS Photonics, vol. 5, no. 4, pp. 1365–1369, 2018, https://doi.org/10.1021/acsphotonics.7b01377.

[29] B. Kustowski, J. A. Gaffney, B. K. Spears, G. J. Anderson, J. J. Thiagarajan, and R. Anirudh, “Transfer learning as a tool for reducing simulation bias: application to inertial confinement fusion,” IEEE Trans. Plasma Sci., vol. 48, no. 1, pp. 46–53, 2019, https://doi.org/10.1109/tps.2019.2948339.

[30] W. Brütting, J. Frischeisen, T. D. Schmidt, B. J. Scholz, and C. Mayr, “Device efficiency of organic light-emitting diodes: progress by improved light outcoupling,” Phys. Status Solidi A, vol. 210, no. 1, pp. 44–65, 2013, https://doi.org/10.1002/pssa.201228320.



Supplementary Material

This article contains supplementary material (https://doi.org/10.1515/nanoph-2024-0505).


Received: 2024-09-27
Accepted: 2024-12-28
Published Online: 2025-02-10

© 2025 the author(s), published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
