
A novel approach for routing optimization in 5G optical networks

  • Amit Kumar Garg and Piyush Kulshreshtha
Published/Copyright: March 10, 2025

Abstract

Optical networks form the core of 5G networks, and efficient routing is essential for optimizing the performance of the 5G optical core. A new routing approach is proposed through a reinforcement learning (RL) agent that extends the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm with loss averaging, Prioritized Experience Replay (PER), and annealing of priorities to improve the exploration-exploitation trade-off. The approach is validated through experimental evaluation in the OMNeT++ simulator, assessing its effectiveness in optimizing latency, throughput, and energy consumption in network routing. It significantly outperforms existing agents based on the Deep Deterministic Policy Gradient (DDPG), Proximal Policy Optimization (PPO), and State-Action-Reward-State-Action (SARSA) algorithms: a detailed comparison shows a 76 % reduction in network delay, a 2.6 % reduction in energy consumption, and a 1.8 % improvement in throughput. The proposed algorithm thus contributes to ongoing efforts to address the routing and energy challenges of dynamic 5G optical networks.
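The abstract names two of the TD3 extensions as Prioritized Experience Replay and annealing of priorities. The sketch below illustrates how a PER buffer with a linearly annealed importance-sampling exponent is commonly implemented; it is a minimal illustration under stated assumptions, not the authors' implementation, and the class name, hyperparameters (alpha, beta0, beta_steps), and the linear annealing schedule are assumptions. Loss averaging is not shown.

import numpy as np

class AnnealedPERBuffer:
    """Minimal prioritized experience replay buffer with beta annealing.

    Illustrative sketch only: transitions are sampled in proportion to
    priority^alpha, and the importance-sampling correction beta is
    annealed linearly from beta0 toward 1 over training.
    """

    def __init__(self, capacity, alpha=0.6, beta0=0.4, beta_steps=100_000):
        self.capacity = capacity
        self.alpha = alpha            # how strongly priorities skew sampling
        self.beta0 = beta0            # initial importance-sampling correction
        self.beta_steps = beta_steps  # steps over which beta anneals to 1
        self.data = []
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0
        self.step = 0

    def add(self, transition):
        # New transitions get the current max priority so each is sampled at least once.
        max_p = self.priorities.max() if self.data else 1.0
        if len(self.data) < self.capacity:
            self.data.append(transition)
        else:
            self.data[self.pos] = transition
        self.priorities[self.pos] = max_p
        self.pos = (self.pos + 1) % self.capacity

    def beta(self):
        # Linearly anneal beta from beta0 to 1 to fully correct sampling bias late in training.
        frac = min(1.0, self.step / self.beta_steps)
        return self.beta0 + (1.0 - self.beta0) * frac

    def sample(self, batch_size):
        self.step += 1
        p = self.priorities[: len(self.data)] ** self.alpha
        probs = p / p.sum()
        idx = np.random.choice(len(self.data), batch_size, p=probs)
        # Importance-sampling weights offset the non-uniform sampling distribution.
        weights = (len(self.data) * probs[idx]) ** (-self.beta())
        weights /= weights.max()
        batch = [self.data[i] for i in idx]
        return batch, idx, weights

    def update_priorities(self, idx, td_errors, eps=1e-6):
        # Priority tracks the magnitude of the TD error; eps keeps every priority positive.
        self.priorities[idx] = np.abs(td_errors) + eps

In a TD3-style training loop, the sampled weights would scale each transition's critic loss before it is reduced to a scalar, and update_priorities would be called with the fresh TD errors after each gradient step.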


Corresponding author: Amit Kumar Garg, Department of Electronics & Communication Engineering, Deenbandhu Chhotu Ram University of Science & Technology, Murthal 131039, Sonepat (Hr.), India, E-mail:

Acknowledgments

Optical network Simulation Tool.

  1. Research ethics: Not applicable.

  2. Informed consent: Not applicable.

  3. Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.

  4. Use of Large Language Models, AI and Machine Learning Tools: None declared.

  5. Conflict of interest: The authors state no conflict of interest.

  6. Research funding: None declared.

  7. Data availability: Not applicable.


Received: 2025-01-04
Accepted: 2025-02-10
Published Online: 2025-03-10

© 2025 Walter de Gruyter GmbH, Berlin/Boston
