Abstract
Standard regularization methods typically favor solutions which lie in, or close to, the orthogonal complement of the null space of the forward operator/matrix.
A Proof of Theorem 3.3
Using the notation
the first-order optimality condition for (3.7) reads
where
or, alternatively,
Since the entries of the diagonal matrix
provided that
Invoking (3.6), we therefore obtain the requirement
With the choice
it follows that (A.3) holds for
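Although the specific operator, weights, and bounds are those defined in Section 3, the computation above follows the standard optimality system for weighted $\ell^1$-regularized least squares. As an illustrative sketch only (the matrix $A$, data $\mathbf{b}$, weights $d_i$, and parameter $\alpha$ below are generic stand-ins, not the objects of (3.7)), consider

\[
\min_{\mathbf{x}} \; \frac{1}{2}\|A\mathbf{x}-\mathbf{b}\|_2^2 + \alpha\|D\mathbf{x}\|_1, \qquad D = \operatorname{diag}(d_1,\dots,d_n), \quad d_i > 0.
\]

Its first-order optimality condition reads

\[
A^{\top}(A\mathbf{x}-\mathbf{b}) + \alpha D\mathbf{s} = \mathbf{0}, \qquad \mathbf{s} \in \partial\|\cdot\|_1(D\mathbf{x}),
\]

that is, $s_i = \operatorname{sign}\big((D\mathbf{x})_i\big)$ whenever $(D\mathbf{x})_i \neq 0$, and $s_i \in [-1,1]$ otherwise.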
To show uniqueness, we first denote the cost-functional by
Let
Case 1: $\mathbf{y} = c\mathbf{x}_\alpha$ with $c \neq 1$.
By the convexity of the cost-functional in (3.7) and the argument presented above, it follows that
Case 2: $\mathbf{y} \neq c\mathbf{x}_\alpha$.
In this case there must exist at least one component
Also, by the definition of the subdifferential,
for any
Recall that
(A.5)
This implies that
However, choosing
Since the condition (A.6) holds, it follows that
From (A.5) we have
Finally, combining this inequality with (A.4), we obtain
where the final inequality follows from the first-order optimality conditions of the convex functional
i.e.,
This shows that
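The mechanism behind both cases is the subgradient inequality for a convex functional $F$: for any $\mathbf{s} \in \partial F(\mathbf{x})$,

\[
F(\mathbf{y}) \geq F(\mathbf{x}) + \langle \mathbf{s}, \mathbf{y}-\mathbf{x} \rangle \quad \text{for all } \mathbf{y},
\]

so $\mathbf{x}$ is a global minimizer precisely when $\mathbf{0} \in \partial F(\mathbf{x})$, and a strict version of this inequality in some direction excludes any second minimizer. The argument above applies this to the functional in (3.7).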
B Proof of Theorem 3.5
Let
If we can show that (3.8) and (3.9) hold for this choice of
For
which shows that (3.8) holds.
For
Invoking the Cauchy–Schwarz inequality, it follows that
where the strict inequality follows from the non-parallelism assumption (2.5). Inserting this into (B.1) gives
which shows that condition (3.9) of Theorem 3.4 is also satisfied.
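Recall that equality in the Cauchy–Schwarz inequality,

\[
|\langle \mathbf{u}, \mathbf{v} \rangle| \leq \|\mathbf{u}\|\,\|\mathbf{v}\|,
\]

holds if and only if $\mathbf{u}$ and $\mathbf{v}$ are linearly dependent. The non-parallelism assumption (2.5) rules out exactly this degenerate case, which is why the inequality above can be taken to be strict.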
On the other hand, if
showing that condition (3.9) also holds in this case. Thus, we can conclude that
To prove uniqueness, assume that there exists another minimizer
in the form
Furthermore, we can multiply by
The orthogonality of
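As a complement to the proofs, the first-order conditions from Appendix A are straightforward to verify numerically. The following is a minimal sketch in Python, assuming the generic weighted $\ell^1$ model problem used for illustration in Appendix A; the names A, b, d, and alpha are hypothetical placeholders rather than the objects of Section 3. It runs plain ISTA (proximal gradient with soft-thresholding) and then checks the subgradient conditions coordinate by coordinate.

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 20, 50
    A = rng.standard_normal((m, n))       # generic forward matrix (placeholder)
    x_true = np.zeros(n)
    x_true[[3, 17]] = [1.0, -2.0]         # sparse ground truth
    b = A @ x_true                        # noise-free data (placeholder)
    d = rng.uniform(0.5, 2.0, size=n)     # positive diagonal weights d_i
    alpha = 0.1

    # ISTA: x <- soft(x - t*A^T(Ax - b); t*alpha*d), with step t <= 1/||A||_2^2
    t = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(n)
    for _ in range(20000):
        z = x - t * (A.T @ (A @ x - b))
        x = np.sign(z) * np.maximum(np.abs(z) - t * alpha * d, 0.0)

    # First-order conditions:
    #   x_i != 0:  (A^T(Ax - b))_i + alpha*d_i*sign(x_i) = 0
    #   x_i == 0:  |(A^T(Ax - b))_i| <= alpha*d_i
    g = A.T @ (A @ x - b)
    on = np.abs(x) > 1e-8
    print("max residual on support :", np.max(np.abs(g[on] + alpha * d[on] * np.sign(x[on]))))
    print("max slack off support   :", np.max(np.abs(g[~on]) - alpha * d[~on]))

The first printed value should be close to zero and the second nonpositive, confirming that the computed minimizer satisfies the optimality system.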