On optimal regularization parameters via bilevel learning
Matthias J. Ehrhardt, Silvia Gazzola and Sebastian J. Scott
Abstract
Variational regularization is commonly used to solve linear inverse problems, and involves augmenting a data fidelity term with a regularizer. The regularizer promotes a priori information about the solution and is weighted by a regularization parameter. Selecting an appropriate regularization parameter is critical, as different choices can lead to very different reconstructions. Classical strategies for determining a suitable parameter value include the discrepancy principle and the L-curve criterion, and in recent years a supervised machine learning approach called bilevel learning has been employed. Bilevel learning is a powerful framework for determining optimal parameters and involves solving a nested optimization problem. Whereas the classical strategies enjoy various theoretical results, the well-posedness of bilevel learning in this setting is still an open question. In particular, a necessary property is positivity of the learned regularization parameter. In this chapter, we provide a new condition that characterizes positivity of optimal regularization parameters better than the existing theory. Numerical results verify and explore this new condition for both small-scale and high-dimensional problems.
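For concreteness, the nested optimization problem referred to in the abstract typically takes the following form (the notation here is a generic illustration and is not taken from the chapter itself): given training pairs $(y_i, x_i^\dagger)$ of noisy measurements and ground-truth reconstructions, a linear forward operator $A$ and a regularizer $R$, bilevel learning selects

\[
\hat{\alpha} \in \operatorname*{arg\,min}_{\alpha \ge 0} \; \frac{1}{N} \sum_{i=1}^{N} \bigl\| x_i(\alpha) - x_i^\dagger \bigr\|_2^2
\quad \text{subject to} \quad
x_i(\alpha) \in \operatorname*{arg\,min}_{x} \; \tfrac{1}{2} \bigl\| A x - y_i \bigr\|_2^2 + \alpha\, R(x).
\]

The lower-level problems are the variational reconstructions for a fixed parameter $\alpha$, while the upper level measures how well those reconstructions match the ground truth; the positivity of $\hat{\alpha}$ mentioned in the abstract is a property of the solution of this outer problem.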
Chapters in this book
- Frontmatter I
- Preface V
- Contents VII
Part I: Mathematical aspects of data-driven methods in inverse problems
- On optimal regularization parameters via bilevel learning 1
- Learned regularization for inverse problems 39
- Inverse problems with learned forward operators 73
- Unsupervised approaches based on optimal transport and convex analysis for inverse problems in imaging 107
- Learned reconstruction methods for inverse problems: sample error estimates 163
- Statistical inverse learning problems with random observations 201
- General regularization in covariate shift adaptation 245
Part II: Applications of data-driven methods in inverse problems
- Analysis of generalized iteratively regularized Landweber iterations driven by data 273
- Integration of model- and learning-based methods in image restoration 303
- Dynamic computerized tomography using inexact models and motion estimation 331
- Deep Bayesian inversion 359
- Utilizing uncertainty quantification variational autoencoders in inverse problems with applications in photoacoustic tomography 413
- Electrical impedance tomography: a fair comparative study on deep learning and analytic-based approaches 437
- Classification with neural networks with quadratic decision functions 471
- Index 495