Article Open Access

Exact solutions for the total variation denoising problem of piecewise constant images in dimension one

  • Riccardo Cristoferi
Published/Copyright: December 18, 2020

Abstract

A method for obtaining the exact solution of the total variation denoising problem for piecewise constant images in dimension one is presented. The validity of the algorithm relies on some results concerning the behavior of the solution when the parameter λ in front of the fidelity term varies. Although some of these results are well known in the community, here they are proved with simple techniques based on qualitative geometric properties of the solutions.

MSC 2010: 49K99; 49K21

1 Introduction

When an image is acquired, it comes, unavoidably, with some distortion. Indeed, external conditions, as well as defects or limitations of the instruments used to obtain it, affect the quality of the acquired data. Thus, in order to perform any task on the image, it is important to be able to recover the clean version in the best possible way, i.e., with optimal fidelity. If we denote the clean image by u : Ω → ℝ and the acquired, corrupted image by f : Ω → ℝ, where Ω ⊂ ℝ^N is an open set, it is usually assumed that the two are related via

(1.1)  f = Au + n.

Here A is a bounded linear operator representing, for instance, the blurring effect and n is the instance of the random noise. One of the aims of image reconstruction is deblurring and denoising f in order to recover u (see [8, 24]).

Here we are interested in the denoising problem, i.e., when the operator A is the identity and we have to remove the noise. Problem (1.1) is, in general, ill-posed (in the sense of Hadamard) and thus we need to regularize it (see [1, 55]). A widely used variational technique for this purpose was introduced by Rudin, Osher and Fatemi in [52], where they proposed to recover u via the minimization problem

(1.2)  min { |Du|(Ω) : u ∈ BV(Ω), ‖u − f‖²_{L²} = σ² }

for some fixed σ>0, where f is assumed to be in L2(Ω) and |Du|(Ω) denotes the total variation of the function u in Ω. There are some interesting cases though, where the real image is not represented by a function of bounded variation (see [35]). The constrained minimization problem (1.2) has been shown to be equivalent to the following penalized minimization problem (known as the total variation denoising model with L2 fidelity term):

(1.3)  min_{u ∈ BV(Ω)} |Du|(Ω) + λ‖u − f‖²_{L²(Ω)}

for some Lagrange multiplier λ>0 (see [19]).

Today’s literature on the study of problem (1.3) is extensive, and here we limit ourselves to recall that properties of the solutions have been studied, for instance, in [2, 3, 4, 9, 10, 16, 17, 22, 25, 29, 31, 38, 37, 36, 42, 51, 56, 57], the analysis of variants of (1.3) that use the generalized total variation have been performed in [13, 45, 47, 50], anisotropic models are undertaken in [26, 30, 32, 41], while the effects of considering high-order models have been investigated in [21, 23, 27, 39, 46]. Finally, other variants of (1.2) have been addressed in [6, 7, 33, 34, 44], and algorithmic considerations may be found in [15, 18, 20, 28, 43].

The search for an analytical method to find exact solutions to the minimization problem (1.3) (and some variants of it, both for piecewise constant and for general initial data f) has been the topic of many studies. For instance, the interested reader can consult [11, 12, 14, 45, 48, 49, 50, 53] and the references therein. The starting point of our investigation is the work of Strong and Chan (see [54]), where they consider the minimization problem (1.3) (essentially just in one dimension) with an initial datum f given by a piecewise constant function with noise. Under certain conditions on the amplitude of the noise, they are able to determine the solution of the minimization problem (1.3) in the case λ ≫ 1, namely when the solution has jumps exactly where f does.

The aim of this paper is to carry on the previous analysis and to determine the solution of the minimization problem (1.3) for f a piecewise constant function (without noise) for every value of the parameter λ. To allow for more generality in the choice of the fidelity term, we generalize the L² norm to an L^p one, with p ∈ (1,∞). Some results are also presented in the case p = 1. To be precise, we consider the minimization problem

(1.4)  min_{u ∈ BV(Ω)} 𝒢(u),

where Ω:=(a,b) and

𝒢(u) := |Du|(Ω) + λ‖u − f‖^p_{L^p(Ω)}

for a given piecewise constant initial datum f. Our aim is to provide a method for finding the solution to the minimization problem (1.4) in the case p > 1 for all values of λ > 0.

The novelty of this paper lies in the fact that our approach obtains the solution u^λ of the minimization problem (1.4) by first considering u^λ for large values of λ and then obtaining the solutions for small values of λ by decreasing the parameter, whereas all the above-mentioned papers work with a fixed value of λ. In particular, the main result we obtain is Theorem 5.3, about the behavior of neighboring values of the solution u^λ close to a jump point as the parameter λ varies. Although some of the results could be proved by using the primal-dual optimality conditions (see [14, 51]) and the semigroup property of the total variation flow in dimension one (see [53]), here we prefer to employ more elementary techniques to study the problem.

We next explain the main idea behind the strategy we propose. The rigid structure of the initial data forces the solution to be piecewise constant itself, with jump set contained in that of f (see Corollary 3.2). Moreover, a simple truncation argument shows that the solution takes values between the minimum and the maximum of f. Hence, the minimization problem (1.4) with f of the form

f(x) = ∑_{i=1}^k f_i χ_{(x_{i−1}, x_i)}(x),   f_i ∈ ℝ,

is equivalent to the following minimization problem:

(1.5)  min_{v ∈ Q} G(v),

where Q := [min f, max f]^k and G : ℝ^k → ℝ is the function defined by

G(v) := ∑_{i=2}^k |v_i − v_{i−1}| + λ ∑_{i=1}^k L_i |f_i − v_i|^p,

with v = (v_1,…,v_k) and L_i := x_i − x_{i−1}. The function G is convex, but it lacks differentiability on the hyperplanes {v_{i−1} = v_i}. Thus, in principle, one should minimize the function G over several compact regions and then compare all the minimum values in order to find the global minimizer.
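As a quick sanity check, the discrete objective G of (1.5) is easy to evaluate directly. The following sketch (plain Python, with illustrative data chosen by us, not taken from the paper) implements G for arbitrary p and compares the cost of the datum itself with that of a constant vector:

```python
def G(v, f, L, lam, p):
    """Discrete objective from (1.5): total variation of the vector v
    plus the weighted L^p fidelity term with parameter lam."""
    k = len(v)
    tv = sum(abs(v[i] - v[i - 1]) for i in range(1, k))
    fidelity = lam * sum(L[i] * abs(f[i] - v[i]) ** p for i in range(k))
    return tv + fidelity

# Illustrative data: a two-step signal on intervals of lengths 1 and 2.
f = [0.0, 1.0]
L = [1.0, 2.0]

# The datum itself pays only the total-variation term ...
print(G(f, f, L, lam=1.0, p=2))           # 1.0
# ... while a constant vector pays only the fidelity term.
print(G([0.5, 0.5], f, L, lam=1.0, p=2))  # 1*0.25 + 2*0.25 = 0.75
```

The trade-off between the two terms, governed by λ, is exactly what the method of this paper tracks as λ decreases.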

Our method aims at overcoming this difficulty. We will be able, for each λ, to predict a priori – that is, without knowing u^λ (the minimizer of G corresponding to the parameter λ) explicitly – what the relative position of each u_i^λ with respect to u_{i−1}^λ and f_i will be. Knowing that, it is possible to look for the minimizer u^λ only in a specific region of ℝ^k, where the absolute values present in the expression of G can be written explicitly. Hence, u^λ can be found by solving the appropriate Euler–Lagrange equation.

We give a more detailed description of our method: the function λ ↦ u^λ is continuous and u^λ → f as λ → ∞ (see Lemma 5.1). Hence, for λ ≫ 1, we have that u_i^λ is very close to f_i, and this allows us to predict the relative position of u_i^λ with respect to u_{i−1}^λ. Moreover, thanks to the qualitative properties of the solutions we will prove in Lemma 5.2 and Proposition 5.5, we will also be able to tell the relative position of each u_i^λ with respect to f_i. This information allows us to write explicitly the absolute values present in the expression of G, as well as the Euler–Lagrange equation, whose solution gives us the minimizer u^λ. With this reasoning, we find the minimizers for λ large (how large λ has to be will be determined a posteriori).

The idea now is to let λ decrease. Since u^λ is constant for small values of λ (see Lemma 3.6), by continuity of λ ↦ u^λ, eventually two neighboring values u_i^λ and u_{i−1}^λ will coincide. The main technical result (Theorem 5.3) tells us that the same will then be true for all smaller values of λ. As a result, we now have to consider the function G restricted to the subspace {v_{i−1} = v_i}, thus reducing the number of variables. By continuity of λ ↦ u^λ, it is then possible to predict the relative position of every u_i^λ with respect to u_{i−1}^λ, while the qualitative properties of the solutions give us the relative position of u_i^λ with respect to f_i. As a consequence, also in this case, we are able to write explicitly the Euler–Lagrange equation.

We observe that the price to pay for applying this method is that, in order to determine the solution of the minimization problem (1.5) for a certain value λ̄, we first need to know it for all λ > λ̄. This, in the end, boils down to solving a number of equations (linear if p = 2) that can be roughly bounded above by k(k+1)/2.

Finally, we would like to comment on the case p = 1. The reason why the strategy described above fails for p = 1 is that we cannot use the continuity of the map λ ↦ u^λ. Indeed, even though for p = 1 the solution of the minimization problem (1.5) need not be unique (see the example in Proposition 4.1), there is always a solution taking only the values that f takes (see Corollary 3.3). But this jumping behavior of the solution prevents us from using continuity arguments, which are at the core of the strategy sketched above. Although it would be possible to obtain a solution of the minimization problem (1.5) in the case p = 1 by comparing the values of the functional G over all vectors v ∈ ℝ^k of the form v_i = f_{σ(i)}, where σ : {1,…,k} → {1,…,k} (see Corollary 3.3), we are currently investigating the possibility of obtaining a more efficient analytic method for this task.

The paper is organized as follows. After briefly recalling the main properties of one-dimensional functions of bounded variation in Section 2, we devote Section 3 to stating and proving basic results concerning the solutions of our minimization problem that we will need in the sequel. In Section 4, we illustrate with a simple case the different behaviors of the solution in the cases p = 1 and p > 1. Section 5 contains the main technical results needed to justify the strategy we describe for determining the solution of the minimization problem (1.5). In Section 6, we conclude with an explicit example.

2 Preliminaries

In this section, we review basic definitions concerning one-dimensional functions of bounded variation. For more details, see [5, 40]. Here we assume a, b ∈ ℝ with a < b.

Definition 2.1.

Let u : (a,b) → ℝ. The pointwise variation of u in (a,b) is defined by

pV(u; a, b) := sup { ∑_{i=1}^{n−1} |u(x_{i+1}) − u(x_i)| : a < x_1 < ⋯ < x_n < b }.

Definition 2.2.

For u ∈ L¹((a,b)), its total variation in (a,b) is given by

|Du|((a,b)) := sup { ∫_a^b u φ′ dx : φ ∈ C¹_c((a,b)), |φ| ≤ 1 }.

If |Du|((a,b)) < ∞, we say that u belongs to the space BV((a,b)) of functions of bounded variation in (a,b).

In this case, Du is a finite Radon measure on (a,b).

Definition 2.3.

Let u ∈ BV((a,b)). We define the jump set of u by

J_u := { x ∈ (a,b) : |Du|({x}) ≠ 0 }.

The relation between the total and the pointwise variation is given by the following result. In the following, ℒ¹ will denote the one-dimensional Lebesgue measure on ℝ.

Theorem 2.4.

Let u ∈ L¹((a,b)) and define the essential variation of u by

(2.1)  eV(u; a, b) := inf { pV(v; a, b) : v = u ℒ¹-a.e. in (a,b) }.

The infimum defining eV(u;a,b) in (2.1) is achieved and it coincides with |Du|((a,b)).

Theorem 2.4 allows us to single out a well-behaved representative of a BV function.

Definition 2.5.

Let u ∈ BV((a,b)). Any v with v = u ℒ¹-a.e. in (a,b) such that

pV(v;a,b)=eV(u;a,b)=|Du|((a,b))

is called a good representative of u.

3 The general structure of the solutions

This section is devoted to stating and proving some basic results concerning the solution of the minimization problem (1.4). Although some of these properties may be known, we present the proofs here for the reader's convenience.

We start by stating a well-known result about the jump set of the solution (for a proof see, for instance, [14]).

Theorem 3.1.

Let f ∈ L¹((a,b)) and let u ∈ BV((a,b)) be a solution of (1.4). If f is constant in (c,d) ⊂ (a,b), then u is constant in (c,d).

In higher dimensions, the inclusion J_u ⊂ J_f has been proved in [16, 56] in the case p > 1, while it is known not to be always true if p = 1 (see [22, 31]). The above result allows us to study an equivalent finite-dimensional minimization problem in the case in which f is a piecewise constant function.

Corollary 3.2.

Let f be a piecewise constant function in (a,b), i.e.,

f(x) = ∑_{i=1}^k f_i χ_{(x_{i−1}, x_i)}(x),   f_i ∈ ℝ.

Then any solution u of the minimization problem (1.4) is of the form

(3.1)  u(x) = ∑_{i=1}^k u_i χ_{(x_{i−1}, x_i)}(x)

for some (u_i)_{i=1}^k ⊂ ℝ, not necessarily distinct from each other.

In particular, a function u of the form (3.1) is a solution of (1.4) if and only if ū := (u_1,…,u_k) ∈ ℝ^k is a solution of the minimization problem

(3.2)  min_{v ∈ ℝ^k} G(v),

where G : ℝ^k → ℝ is the function defined by

G(v) := ∑_{i=2}^k |v_i − v_{i−1}| + λ ∑_{i=1}^k L_i |f_i − v_i|^p,

where v = (v_1,…,v_k) and L_i := x_i − x_{i−1}.

Thus, we now concentrate on the study of the minimization problem (3.2).

The cases p=1 and p>1 turn out to be quite different. Heuristically, the difference lies in the fact that, in the first case, the two terms of the energy are of the same order while, for p>1, the fidelity term is of higher order than the total variation. This leads to very different behavior of the solutions in the two cases.

Because of the strict convexity of the functional G for p>1, the solution of the minimization problem (3.2) is unique, while for p=1 we have lack of uniqueness (see Proposition 4.1). Nevertheless, it is possible to identify a solution with a particular structure.

Corollary 3.3.

For p = 1, there exists a solution u of problem (3.2) such that u_i ∈ {f_1,…,f_k} for every i = 1,…,k.

Proof.

For any given quadruple of functions

s₁ : {2,…,k} → {0,1},   s₂ : {1,…,k} → {0,1},
t₁ : {2,…,k} → {0,1},   t₂ : {1,…,k} → {0,1},

let us consider the set 𝒜_{s₁,s₂}^{t₁,t₂} ⊂ ℝ^k on which

G(u) = ∑_{i=2}^k (−1)^{s₁(i)} t₁(i) (u_i − u_{i−1}) + λ ∑_{i=1}^k (−1)^{s₂(i)} t₂(i) L_i (f_i − u_i)
     = v_λ^{s₁,s₂,t₁,t₂} · u + c_λ^{s₁,s₂,t₁,t₂}

for all u ∈ 𝒜_{s₁,s₂}^{t₁,t₂}, where

c_λ^{s₁,s₂,t₁,t₂} ∈ ℝ   and   v_λ^{s₁,s₂,t₁,t₂} ∈ ℝ^k.

The result then follows by noticing that G restricted to any 𝒜_{s₁,s₂}^{t₁,t₂} ⊂ ℝ^k is always minimized by a vector u ∈ ℝ^k with

u_i = f_{σ(i)}

for some function σ : {1,…,k} → {1,…,k}, and that

min_{ℝ^k} G = min_{s₁,s₂,t₁,t₂} min_{𝒜_{s₁,s₂}^{t₁,t₂}} G|_{𝒜_{s₁,s₂}^{t₁,t₂}},

as desired. ∎

We conjecture that, in the case p=1, non-uniqueness of the solution of the minimization problem (1.4) happens only for a finite number of critical values of λ, where a continuum of solutions is present.

Definition 3.4.

We will denote by u^λ a solution of the minimization problem (3.2) corresponding to the value λ. This will be the solution if p > 1, while, for p = 1, it will be understood to be a solution with the structure given by the previous result.

Remark 3.5.

It is easy to see that u_i ∈ [min f, max f] for every solution u and that, in the case where p > 1 and f is not constant, it holds that u_i^λ ∈ (min f, max f) for all λ > 0 and all i. In particular, for p > 1, the solution u^λ can never be equal to the initial datum f.

In the rest of this section, we seek to understand the behavior of the solution u^λ in the limiting regimes λ ≪ 1 and λ ≫ 1. In the former case, the predominant term of the energy is the total variation, so we expect u^λ to minimize it. We first treat the case p > 1.

Lemma 3.6.

Fix p > 1, positive numbers (L_i)_{i=1}^k and two constants m < M. Let

λ̄ := 1 / ( p (M − m)^{p−1} max_i L_i ).

Then, for any x_0 < x_1 < ⋯ < x_k with x_i − x_{i−1} = L_i and for any piecewise constant function f := ∑_{i=1}^k f_i χ_{(x_{i−1}, x_i)} such that f_i ∈ [m, M] for all i = 1,…,k, there exists a constant c ∈ ℝ such that u^λ ≡ c for all λ ∈ (0, λ̄].

Proof.

Assume that u^λ is not constant and let i ∈ {1,…,k} be such that u_i^λ = min{ u_j^λ : j = 1,…,k }. Let

r := inf{ j ≤ i : u_s^λ = u_i^λ for all j ≤ s ≤ i },
t := sup{ j ≥ i : u_s^λ = u_i^λ for all i ≤ s ≤ j }.

By hypothesis, either r > 1 or t < k. Consider, for ε > 0, the vector u^ε ∈ ℝ^k defined by u_j^ε := u_j^λ + ε for j = r,…,t and u_j^ε := u_j^λ for all the other j's. Then, recalling that u_j^λ ∈ [m, M] for all j = 1,…,k, we have that

lim_{ε→0⁺} (G(u^ε) − G(u^λ))/ε = a + pλ(−1)^{s_i} L_i |u_i^λ − f_i|^{p−1}
(3.3)  ≤ a + pλ(M − m)^{p−1} max_{i=1,…,k} L_i,

where a ∈ {−1,−2} (in particular, a = −1 if r = 1 or t = k, and a = −2 otherwise) and s_i ∈ {0,1}. If λ < λ̄, from (3.3) we get that G(u^ε) < G(u^λ) for ε small. This means that u^λ has to be constant for λ < λ̄. Moreover, it is easy to see that the function G restricted to the set {(u_1,…,u_k) ∈ ℝ^k : u_1 = ⋯ = u_k} admits a unique minimizer that is independent of λ.

We now have to prove that u^{λ̄} is also constant. From the previous step we know that there exists c ∈ ℝ such that u^λ = c̄ for all λ ∈ (0, λ̄), where c̄ ∈ ℝ^k is the vector given by c̄_i := c. Then G_λ(c̄) ≤ G_λ(v) for all v ∈ ℝ^k and all λ ∈ (0, λ̄), where the subscript λ is to underline the dependence of G on λ. By letting λ → λ̄⁻, we get G_{λ̄}(c̄) ≤ G_{λ̄}(v) for all v ∈ ℝ^k and thus, by uniqueness of the minimizer, u^{λ̄} = c̄. ∎
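The threshold λ̄ of Lemma 3.6 can be checked numerically in a small example. The sketch below (a brute-force grid search with data chosen by us for illustration, not the paper's method) takes p = 2, f = (0, 1) and L_1 = L_2 = 1, so that λ̄ = 1/(2·1·1) = 1/2, and verifies that for λ = 1/4 the minimizer of G is constant:

```python
def G(v, f, L, lam, p):
    # Discrete objective from (3.2).
    tv = sum(abs(v[i] - v[i - 1]) for i in range(1, len(v)))
    return tv + lam * sum(L[i] * abs(f[i] - v[i]) ** p for i in range(len(v)))

f, L, p = [0.0, 1.0], [1.0, 1.0], 2
lam = 0.25          # below the threshold 1/(p (M-m)^{p-1} max_i L_i) = 0.5
grid = [j / 16 for j in range(17)]   # grid fine enough to contain the minimizer
best = min(((u1, u2) for u1 in grid for u2 in grid),
           key=lambda v: G(v, f, L, lam, p))
print(best)   # (0.5, 0.5): the constant minimizing the fidelity term alone
```

Since G is strictly convex for p > 1 and the continuous minimizer happens to lie on the grid, the grid search recovers it exactly.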

In the case p > 1, the only information we have in the regime λ ≫ 1 is that

u^λ → f   as λ → ∞.

Indeed, from

λ L_i |u_i^λ − f_i|^p ≤ G(u^λ) ≤ G(f) < ∞,

we conclude that

|u_i^λ − f_i|^p ≤ G(f) / (λ L_i).
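For p = 2, the explicit two-interval solution of Section 4 gives |u_i^λ − f_i| = 1/(2λL_i), which indeed decays like 1/λ, well within the bound above. The sketch below (with illustrative data we chose so that λ exceeds the threshold λ_T^p of Proposition 4.2) checks this:

```python
f, L, lam, p = [0.0, 1.0], [1.0, 2.0], 4.0, 2
G_f = abs(f[1] - f[0])   # G(f) is just the total variation of f
for i in range(2):
    bound = (G_f / (lam * L[i])) ** (1 / p)   # from |u_i - f_i|^p <= G(f)/(lam L_i)
    actual = 1 / (p * lam * L[i])             # explicit solution (4.3) for p = 2
    assert actual < bound
```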

We now consider the asymptotic behavior of the solution in the case p=1.

Lemma 3.7.

Let p = 1 and let f := ∑_{i=1}^k f_i χ_{(x_{i−1}, x_i)}. Set L_i := x_i − x_{i−1} for i = 1,…,k. Let

λ̄_1 := min_i |f_i − f_{i−1}| / ( k (max_i L_i)(max f − min f) )

and

λ̄_2 := G(f) / ( (min_i L_i)(min_i |f_i − f_{i−1}|) ).

Then, for all λ ∈ (0, λ̄_1], every solution u^λ of the minimization problem (3.2) is constant, while for all λ > λ̄_2 the solution u^λ is unique and is given by f itself.

Proof.

We prove this lemma in two steps. Step 1: the case λ ≪ 1. Suppose that u^λ is not constant. Recalling that u_i^λ ∈ {f_1,…,f_k}, we have that

|Du^λ|(Ω) ≥ min_i |f_i − f_{i−1}|.

On the other hand, for any function v such that v ≡ c ∈ [min f, max f] in (x_0, x_k), it holds that

G(v) ≤ λ k (max_i L_i)(max f − min f).

Then, for λ < λ̄_1, the above estimates show that u^λ must be constant.

Finally, in order to prove that u^{λ̄_1} is also constant, we reason as follows: we know that u^λ = c̄_λ for λ ∈ (0, λ̄_1), for some c̄_λ = (c_λ,…,c_λ) ∈ ℝ^k. Take λ_n → λ̄_1⁻. Since c_{λ_n} ∈ [min f, max f], up to a not relabeled subsequence, we have that c_{λ_n} → c. We conclude that

G_{λ̄_1}((c,…,c)) ≤ G_{λ̄_1}(v)

for all v ∈ ℝ^k.

Step 2: the case λ ≫ 1. Suppose that there exists a sequence λ_j → ∞ such that u^{λ_j} ≠ f for all j; since k is finite, up to a subsequence there is a fixed index i with u_i^{λ_j} ≠ f_i for all j. By recalling that u_i^{λ_j} ∈ {f_1,…,f_k}, we have, for λ_j > λ̄_2, that

G(u^{λ_j}) ≥ λ_j L_i |u_i^{λ_j} − f_i| > G(f),

contradicting the minimality of uλj. ∎

Remark 3.8.

Note that, unlike the case p > 1, when p = 1 and λ ≪ 1 we cannot conclude that there exists a unique constant c such that u^λ ≡ c for all λ ∈ (0, λ̄_1].

4 Explicit solutions in a simple case

Here we study the case k = 2. This analysis, despite its simplicity, is important to highlight some features that distinguish the behavior of the solution of the minimization problem (1.4) in the cases p = 1 and p > 1.

Proposition 4.1.

Let f_1 < f_2. Then the solutions u^λ of the minimization problem (3.2) in the case p = 1 are the following:

  1. If L_1 > L_2, set λ_T^1 := 1/L_2. Then

     u_1^λ = u_2^λ = f_1                  for λ < λ_T^1,
     u_1^λ = f_1, u_2^λ ∈ [f_1, f_2]     for λ = λ_T^1,
     u_1^λ = f_1, u_2^λ = f_2            for λ > λ_T^1.

  2. If L_1 = L_2, set λ_T^1 := 1/L_1. Then

     u_1^λ = u_2^λ ∈ [f_1, f_2]          for λ ≤ λ_T^1,
     u_1^λ = f_1, u_2^λ = f_2            for λ > λ_T^1.

  3. If L_1 < L_2, set λ_T^1 := 1/L_1. Then

     u_1^λ = u_2^λ = f_2                 for λ < λ_T^1,
     u_1^λ ∈ [f_1, f_2], u_2^λ = f_2    for λ = λ_T^1,
     u_1^λ = f_1, u_2^λ = f_2            for λ > λ_T^1.

Proof.

It is easy to see that we must have f_1 ≤ u_1 ≤ u_2 ≤ f_2. Thus, we consider the region

(4.1)  𝒯 := {(u_1, u_2) ∈ ℝ² : f_1 ≤ u_1 ≤ u_2 ≤ f_2},

and we rewrite the function G in 𝒯 as

G(u) = [λL_1 − 1] u_1 + [1 − λL_2] u_2 + λ[f_2 L_2 − f_1 L_1] = v_λ · u + c_λ.

When minimizing G in 𝒯, we can drop the constant c_λ. Note that v_λ = 0 if and only if L_1 = L_2 and λ = 1/L_1; in this case we have G(u) = f_2 − f_1 for all u ∈ 𝒯. Moreover, v_0 = (−1, 1) and v_λ/|v_λ| → v_∞ as λ → ∞, where

v_∞ := ( L_1/√(L_1² + L_2²), −L_2/√(L_1² + L_2²) ).

The minimizers of G over the triangle 𝒯 are thus the points at which the linear function u ↦ v_λ · u attains its minimum on 𝒯, i.e., the first points of 𝒯 touched by a line orthogonal to v_λ sweeping in the direction of v_λ. In the case L_1 < L_2, the direction v_λ/|v_λ| spans the two colored regions in Figure 1 as λ ranges over [0,∞). When it lies in the grey region, namely when λ < 1/L_1, the minimizer is given by (f_2, f_2), while for λ > 1/L_1 the minimizer is given by (f_1, f_2). Note that non-uniqueness happens only when the vector v_λ is orthogonal to the line {x = y} ⊂ ℝ². ∎

Figure 1: The case L_1 < L_2.
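The thresholds of Proposition 4.1 can be checked by brute force. The sketch below (with data chosen by us: f = (0, 1) and L_1 = 2 > L_2 = 1, so that case (1) applies and λ_T^1 = 1/L_2 = 1) searches a grid on both sides of the threshold:

```python
def G(v, f, L, lam):
    # The p = 1 objective from (3.2).
    tv = sum(abs(v[i] - v[i - 1]) for i in range(1, len(v)))
    return tv + lam * sum(L[i] * abs(f[i] - v[i]) for i in range(len(v)))

f, L = [0.0, 1.0], [2.0, 1.0]          # L1 > L2, so the threshold is 1/L2 = 1

def argmin(lam, grid=[j / 8 for j in range(9)]):
    return min(((a, b) for a in grid for b in grid),
               key=lambda v: G(v, f, L, lam))

print(argmin(0.5))   # below the threshold: the constant (f1, f1) = (0.0, 0.0)
print(argmin(2.0))   # above the threshold: the datum itself, (0.0, 1.0)
```

At λ = λ_T^1 itself the minimizer is not unique, which is why the grid search is only run strictly below and strictly above the threshold.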

In the case p>1 the landscape of the solutions is quite different.

Proposition 4.2.

Let f1<f2 and let p>1. Define

λ_T^p := (1/p) (L_1^{1/(p−1)} + L_2^{1/(p−1)})^{p−1} / ( L_1 L_2 (f_2 − f_1)^{p−1} ).

The solution uλ of the minimization problem (3.2) is the following:

  1. For λ ≤ λ_T^p, we have

     (4.2)  u_1^λ = u_2^λ = [ L_1^{1/(p−1)} f_1 + L_2^{1/(p−1)} f_2 ] / [ L_1^{1/(p−1)} + L_2^{1/(p−1)} ].
  2. For λ > λ_T^p, we have

     (4.3)  u_1^λ = f_1 + 1/(pλL_1)^{1/(p−1)},   u_2^λ = f_2 − 1/(pλL_2)^{1/(p−1)}.

Proof.

Recalling that f_1 ≤ u_1 ≤ u_2 ≤ f_2, we just have to consider the region 𝒯 defined in (4.1) and to rewrite the function G in that region as

G(u_1, u_2) := u_2 − u_1 + λL_1(u_1 − f_1)^p + λL_2(f_2 − u_2)^p.

The critical point of G is given by

u_1 = f_1 + 1/(pλL_1)^{1/(p−1)},   u_2 = f_2 − 1/(pλL_2)^{1/(p−1)},

and it belongs to the interior of 𝒯, i.e., u_1 < u_2, only for λ > λ_T^p. Since G is strictly convex, this critical point is the global minimizer of G for λ > λ_T^p. In the case λ ≤ λ_T^p, the minimum point has to be on the boundary of 𝒯. Instead of performing all the computations for finding the minimum point on each of the three edges of 𝒯 and comparing them, we will use an argument based on the continuity of the minimizer u^λ with respect to λ (see Lemma 5.1), i.e., we invoke the fact that the function λ ↦ u^λ is continuous. Note that, as λ → (λ_T^p)⁺, we have

u^λ → (ū, ū),

where

ū := [ L_1^{1/(p−1)} f_1 + L_2^{1/(p−1)} f_2 ] / [ L_1^{1/(p−1)} + L_2^{1/(p−1)} ]

is independent of λ. By using the continuity of the solution, we can conclude that, for λ=λTp, the solution of the minimization problem is given by (u¯,u¯). The conclusion for λ<λTp follows from the result of Theorem 5.3. ∎
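Formulas (4.2)–(4.3) are easy to verify numerically for p = 2. The sketch below (with illustrative data we chose: f = (0, 1), L_1 = 1, L_2 = 2, so that λ_T² = 3/4) compares the closed form (4.3) with a brute-force grid search:

```python
def G(v, f, L, lam, p=2):
    # Discrete objective from (3.2) with p = 2.
    tv = sum(abs(v[i] - v[i - 1]) for i in range(1, len(v)))
    return tv + lam * sum(L[i] * abs(f[i] - v[i]) ** p for i in range(len(v)))

f, L, p = [0.0, 1.0], [1.0, 2.0], 2
lam_T = (L[0] + L[1]) / (p * L[0] * L[1] * (f[1] - f[0]))   # = 0.75

lam = 2.0                                # above the threshold: formula (4.3)
u1 = f[0] + 1 / (p * lam * L[0])         # 0.25
u2 = f[1] - 1 / (p * lam * L[1])         # 0.875
grid = [j / 16 for j in range(17)]
best = min(((a, b) for a in grid for b in grid), key=lambda v: G(v, f, L, lam))
print(best)   # (0.25, 0.875), matching (4.3)
```

The grid is chosen so that the exact minimizer lies on it; strict convexity of G for p > 1 then guarantees the grid search recovers it exactly.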

Remark 4.3.

We remark a couple of facts:

  1. We have that λ_T^p → λ_T^1 as p → 1⁺ (in each of the three cases defining λ_T^1). Indeed, suppose that L_1 < L_2. Then

     lim_{p→1⁺} λ_T^p = lim_{p→1⁺} (L_1^{1/(p−1)} + L_2^{1/(p−1)})^{p−1} / (L_1 L_2)
     = (1/L_1) lim_{p→1⁺} (1 + (L_1/L_2)^{1/(p−1)})^{p−1}
     = (1/L_1) lim_{t→0⁺} exp[ t log( (L_1/L_2)^{1/t} + 1 ) ]
     = 1/L_1 = λ_T^1,

     where we used that the factors 1/p and (f_2 − f_1)^{p−1} tend to 1 as p → 1⁺. Similar reasonings lead to the claimed result in the other two cases. In particular, note that λ_T^p > λ_T^1.

  2. The solutions converge, as p → 1⁺, to a solution of the case p = 1. Indeed, suppose λ > λ_T^1. Then, for p sufficiently close to 1, from the previous point we have that λ > λ_T^p. Thus, the solution of the minimization problem for p is given by (4.3), and it is easy to see that it converges to (f_1, f_2) as p → 1⁺. In the case λ < λ_T^1, since λ_T^p > λ_T^1, the solution of the minimization problem for p is given by (4.2).

     If L_1 > L_2, then

     L_1^{1/(p−1)} / (L_1^{1/(p−1)} + L_2^{1/(p−1)}) = 1 / ( (L_2/L_1)^{1/(p−1)} + 1 ) → 1   as p → 1⁺,
     L_2^{1/(p−1)} / (L_1^{1/(p−1)} + L_2^{1/(p−1)}) = 1 / ( (L_1/L_2)^{1/(p−1)} + 1 ) → 0   as p → 1⁺,

     so that the solutions converge to (f_1, f_1), in accordance with Proposition 4.1. In the case L_1 = L_2, both coefficients are equal to 1/2.

     Finally, in the case λ = λ_T^1, since λ_T^p > λ_T^1, the solution of the minimization problem for p is given by (4.2), and the result follows by arguing as before.

5 The behavior of the solution for p>1

This section contains the main result of this paper, namely Theorem 5.3, which is derived from the qualitative properties of the solutions proved in the following two lemmas and in Proposition 5.5. Although the same result can be deduced by using the semigroup property of the total variation flow in dimension one [53], we prefer to obtain it from more elementary observations of a qualitative nature, which allow us to predict how the solution u^λ behaves as the parameter λ varies.

We start by proving the continuity (with respect to the Euclidean topology of ℝ^k) of the solution u^λ with respect to λ.

Lemma 5.1.

Let p > 1. Then λ ↦ u^λ is continuous and lim_{λ→∞} u^λ = f.

Proof.

Fix λ̄ > 0 and let λ_n → λ̄. Then G_{λ_n}(u^{λ_n}) ≤ G_{λ_n}(v) for all v ∈ ℝ^k, where equality holds if and only if v = u^{λ_n}. Since

u^{λ_n} ∈ [min f, max f]^k,

up to a (not relabeled) subsequence, we have that u^{λ_n} → v̄. Using the continuity of G in both v and λ, we get that G_{λ̄}(v̄) ≤ G_{λ̄}(v) for all v ∈ ℝ^k. By the uniqueness of the solution, we deduce that v̄ = u^{λ̄}; since the limit does not depend on the subsequence, u^{λ_n} → u^{λ̄} for every sequence λ_n → λ̄.

To prove the second part of the lemma, we reason as follows. Assume that u^λ does not converge to f as λ → ∞. Since u_i^λ ∈ [min f, max f], by compactness, we obtain (up to a not relabeled subsequence) u^λ → v for some v ≠ f. In particular, there exist an index i and ε > 0 such that |u_i^λ − f_i| > ε for λ ≫ 1. It follows that

+∞ > G(f) ≥ G(u^λ) ≥ λ L_i |u_i^λ − f_i|^p ≥ λ L_i ε^p → ∞

as λ → ∞. This is the desired contradiction. ∎

We now prove several qualitative properties of the behavior of the solution u^λ as λ varies. Some of the following results could be stated in a more unified way, but since they will be used to deduce qualitative properties of the solutions when no direct analysis can be performed, for clarity of exposition we opt to present each of them separately.

Lemma 5.2.

Let p>1. Then the following properties hold:

  1. Assume that, for λ ∈ (λ_1, λ_2), there exists a function λ ↦ ū^λ such that, for some r ≥ 0,

     u_i^λ = u_{i+1}^λ = ⋯ = u_{i+r}^λ = ū^λ,   with either u_{i−1}^λ < ū^λ < u_{i+r+1}^λ or u_{i−1}^λ > ū^λ > u_{i+r+1}^λ;

     see Figure 2.

     Then ū^λ is the solution of

     min_{c ∈ (u_{i−1}^λ, u_{i+r+1}^λ)} ∑_{j=i}^{i+r} L_j |c − f_j|^p.

     In particular, ū^λ is constant in (λ_1, λ_2).

  2. Assume that, for λ ∈ (λ_1, λ_2), there exists a function λ ↦ ū^λ such that, for some r ≥ 0,

     u_i^λ = u_{i+1}^λ = ⋯ = u_{i+r}^λ = ū^λ,   with u_{i−1}^λ, u_{i+r+1}^λ < ū^λ;

     see Figure 3.

     Then λ ↦ ū^λ is increasing.

     In particular, in the case r = 0, we have

     u_i^λ = f_i − (2/(pλL_i))^{1/(p−1)}.

  3. Assume that, for λ ∈ (λ_1, λ_2), there exists a function λ ↦ ū^λ such that, for some r ≥ 0,

     u_i^λ = u_{i+1}^λ = ⋯ = u_{i+r}^λ = ū^λ,   with u_{i−1}^λ, u_{i+r+1}^λ > ū^λ;

     see Figure 4.

     Then λ ↦ ū^λ is decreasing.

     In particular, in the case r = 0, we have

     u_i^λ = f_i + (2/(pλL_i))^{1/(p−1)}.

  4. Assume that, for λ ∈ (λ_1, λ_2), there exists a function λ ↦ ū^λ such that, for some r ≥ 1,

     u_1^λ = u_2^λ = ⋯ = u_r^λ = ū^λ,   with u_{r+1}^λ < ū^λ;

     see Figure 5.

     Then λ ↦ ū^λ is increasing.

     In particular, in the case r = 1, we have

     u_1^λ = f_1 − (1/(pλL_1))^{1/(p−1)}.

  5. Assume that, for λ ∈ (λ_1, λ_2), there exists a function λ ↦ ū^λ such that, for some r ≥ 1,

     u_1^λ = u_2^λ = ⋯ = u_r^λ = ū^λ,   with u_{r+1}^λ > ū^λ;

     see Figure 6.

     Then λ ↦ ū^λ is decreasing.

     In particular, in the case r = 1, we have

     u_1^λ = f_1 + (1/(pλL_1))^{1/(p−1)}.

  6. Assume that, for λ ∈ (λ_1, λ_2), there exists a function λ ↦ ū^λ such that, for some r ≥ 0,

     u_{k−r}^λ = ⋯ = u_k^λ = ū^λ,   with u_{k−r−1}^λ > ū^λ;

     see Figure 7.

     Then λ ↦ ū^λ is decreasing.

     In particular, in the case r = 0, we have

     u_k^λ = f_k + (1/(pλL_k))^{1/(p−1)}.

  7. Assume that, for λ ∈ (λ_1, λ_2), there exists a function λ ↦ ū^λ such that, for some r ≥ 0,

     u_{k−r}^λ = ⋯ = u_k^λ = ū^λ,   with u_{k−r−1}^λ < ū^λ;

     see Figure 8.

     Then λ ↦ ū^λ is increasing.

     In particular, in the case r = 0, we have

     u_k^λ = f_k − (1/(pλL_k))^{1/(p−1)}.

Figure 2: The situation of case (i) of Lemma 5.2.

Figure 3: The situation of case (ii) of Lemma 5.2.

Figure 4: The situation of case (iii) of Lemma 5.2.

Figure 5: The situation of case (iv) of Lemma 5.2.

Figure 6: The situation of case (v) of Lemma 5.2.

Figure 7: The situation of case (vi) of Lemma 5.2.

Figure 8: The situation of case (vii) of Lemma 5.2.

Proof.

We start by proving property (i). Suppose that u_{i−1}^λ < ū^λ < u_{i+r+1}^λ; in the other case we argue in a similar way. By hypothesis, the vector u^λ minimizes the function G in the set

{(u_1,…,u_k) ∈ ℝ^k : u_{i−1} < u_i = ⋯ = u_{i+r} < u_{i+r+1}},

and in this set the function G can be written as

G(u) = G̃(u_1,…,u_{i−1}, u_{i+r+1},…,u_k) + λ ∑_{j=i}^{i+r} L_j |ū − f_j|^p.

By keeping u_1,…,u_{i−1} and u_{i+r+1},…,u_k fixed, the claim follows by minimizing the above quantity with respect to ū.

Since all the other properties can be proved with arguments along similar lines, we just prove property (ii), leaving the details of the other proofs to the reader.

In the hypothesis of (ii), it holds that u^λ is a minimizer of G in the set

{(u_1,…,u_k) ∈ ℝ^k : u_{i−1}, u_{i+r+1} < u_i = ⋯ = u_{i+r}}.

Restricted to this set, the function G can be written as

G(u) = G̃(u_1,…,u_{i−1}, u_{i+r+1},…,u_k) + 2ū + λ ∑_{j=i}^{i+r} L_j |ū − f_j|^p.

So, for λ ∈ (λ_1, λ_2) and u_1,…,u_{i−1}, u_{i+r+1},…,u_k fixed, ū^λ is the minimizer of the strictly convex function

H(c) := 2c + λ ∑_{j=i}^{i+r} L_j |c − f_j|^p

in the set (max{u_{i−1}^λ, u_{i+r+1}^λ}, max f).

To study the minimizer of H, we can assume without loss of generality that f_i ≤ f_{i+1} ≤ ⋯ ≤ f_{i+r}. Indeed, the order of the f_j's does not matter and, in the case in which f_q = f_s for some q ≠ s, we can simply collect the two terms into a single one with L_q + L_s as the corresponding factor in the above summation. We now want to prove that λ ↦ ū^λ is increasing. Note that the function H can be written as

H(c) = 2c + λ ∑_{j=i}^{m} L_j (c − f_j)^p + λ ∑_{j=m+1}^{i+r} L_j (f_j − c)^p

if c ∈ (f_m, f_{m+1}], for some m ∈ {i,…,i+r−1}, and

H(c) = 2c + λ ∑_{j=i}^{i+r} L_j (c − f_j)^p

if c ∈ [f_{i+r}, max f). Consider the function H in the interval (f_m, f_{m+1}). We have that

H′(c) = 2 + pλ [ ∑_{j=i}^{m} L_j (c − f_j)^{p−1} − ∑_{j=m+1}^{i+r} L_j (f_j − c)^{p−1} ].

The equation H′(c) = 0 has a solution only if the term in the square brackets is negative; if so, let λ ↦ c_λ denote this solution, which is easily seen to be regular as long as c_λ ∈ (f_m, f_{m+1}). By differentiating the identity H′(c_λ) = 0 with respect to λ, we obtain

p [ ∑_{j=i}^{m} L_j (c_λ − f_j)^{p−1} − ∑_{j=m+1}^{i+r} L_j (f_j − c_λ)^{p−1} ] + λ (dc_λ/dλ) p(p−1) [ ∑_{j=i}^{m} L_j (c_λ − f_j)^{p−2} + ∑_{j=m+1}^{i+r} L_j (f_j − c_λ)^{p−2} ] = 0.

Thus, recalling that the term in the first square brackets is negative while that in the second is positive, we get dc_λ/dλ > 0, as desired.

In the case in which the minimizer of H is attained at a point c = f_{m+1}, we simply consider the one-sided derivatives of H and apply the argument above.

Finally, the same reasoning applies when c ∈ [f_{i+r}, max f). ∎
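The closed forms of cases (ii) and (v)–(vi) can be sanity-checked for p = 2 on a three-step "peak" datum. With the illustrative choice f = (0, 1, 0) and L ≡ 1 (our assumption, not data from the paper), cases (v)/(vi) predict u_1^λ = u_3^λ = 1/(2λ) at the endpoints and case (ii) predicts u_2^λ = 1 − 1/λ at the interior maximum, valid for λ large enough that the three values stay distinct:

```python
def G(v, f, L, lam, p=2):
    # Discrete objective from (3.2) with p = 2.
    tv = sum(abs(v[i] - v[i - 1]) for i in range(1, len(v)))
    return tv + lam * sum(L[i] * abs(f[i] - v[i]) ** p for i in range(len(v)))

f, L, p, lam = [0.0, 1.0, 0.0], [1.0, 1.0, 1.0], 2, 2.0
# Predicted values for lam large (here lam = 2 suffices):
u_end = f[0] + 1 / (p * lam * L[0])   # cases (v)/(vi): endpoints pulled up, 0.25
u_mid = f[1] - 2 / (p * lam * L[1])   # case (ii): interior maximum pulled down, 0.5
grid = [j / 4 for j in range(5)]
best = min(((a, b, c) for a in grid for b in grid for c in grid),
           key=lambda v: G(v, f, L, lam))
print(best)   # (0.25, 0.5, 0.25), matching the predictions
```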

We are now in position to prove the fundamental result we will use to develop our strategy for finding the solution.

Theorem 5.3.

For each i = 1,…,k−1 there exists λ_i ∈ (0,∞) such that u_i^λ = u_{i+1}^λ for λ ≤ λ_i, while u_i^λ ≠ u_{i+1}^λ for λ > λ_i.

Proof.

We prove this theorem in two steps. Step 1. We claim that if u_i^{λ̃} = u_{i+1}^{λ̃} for some λ̃ > 0, then u_i^λ = u_{i+1}^λ for all λ ∈ (0, λ̃]. Indeed, let

λ̄ := min{ λ : u_i^μ = u_{i+1}^μ for all μ ∈ [λ, λ̃] },

and assume, for the sake of contradiction, that λ̄ > 0. By continuity of λ ↦ u^λ, there exists ε > 0 such that u_i^λ ≠ u_{i+1}^λ for λ ∈ (λ̄ − ε, λ̄). Consider the case in which u_i^λ < u_{i+1}^λ in (λ̄ − ε, λ̄) (the other case can be treated similarly).

If i = 1, then property (v) of Lemma 5.2 tells us that λ ↦ u_1^λ is decreasing in (λ̄ − ε, λ̄), and thus it is not possible to have u_1^{λ̄} = u_2^{λ̄}.

If i > 1, we can focus, without loss of generality, only on the following two cases: u_{i−1}^λ > u_i^λ and u_{i−1}^λ < u_i^λ in (λ̄ − ε, λ̄).

In the first case, we get a contradiction since, by property (iii) of Lemma 5.2, the map λ ↦ u_i^λ is decreasing in (λ̄ − ε, λ̄) and thus, as above, we cannot have u_i^{λ̄} = u_{i+1}^{λ̄}.

In the other case, we have u_{i−1}^λ < u_i^λ < u_{i+1}^λ in (λ̄ − ε, λ̄). By using property (i) of Lemma 5.2, we see that this is possible only if u_i^λ = f_i for all λ ∈ (λ̄ − ε, λ̄). This yields the desired contradiction.

Step 2. Let us define

λ_i := max{ λ : u_i^μ = u_{i+1}^μ for all μ ≤ λ }.

Step 1 and the continuity of λ ↦ u^λ ensure that λ_i is well defined. Moreover, by Lemma 3.6, we also get that λ_i > 0 for all i = 1,…,k−1. Finally, the fact that u^λ → f as λ → ∞ tells us that λ_i < ∞ for all i = 1,…,k−1. This concludes the proof. ∎

Remark 5.4.

It is possible to see that the map λ ↦ u^λ is smooth (with respect to the Euclidean topology of ℝ^k) for all λ ∈ (0,∞) ∖ S, where S := {λ_1,…,λ_{k−1}} ∪ T, the λ_i's being given by Theorem 5.3, and T := {μ_1,…,μ_k} with μ_i := inf{ λ : u_i^λ = f_i }.

Finally, we derive another consequence of Lemma 5.2, ensuring that the solution is monotone wherever f is, and with the same monotonicity.

Proposition 5.5.

Suppose that f_i < f_{i+1} < ⋯ < f_{i+r}. Then the solution u of the minimization problem (3.2) satisfies u_i ≤ u_{i+1} ≤ ⋯ ≤ u_{i+r}.

In particular, u has the following structure:

  1. If u_i ≥ f_{i+r}, then u_j = u_i for all j = i,…,i+r.

  2. If u_{i+r} ≤ f_i, then u_j = u_{i+r} for all j = i,…,i+r.

  3. Otherwise, u is of the form

     u_j = u_i       for j = i,…,j_1,
     u_j = f_j       for j = j_1+1,…,j_2−1,
     u_j = u_{i+r}   for j = j_2,…,i+r,

     for some indices j_1 < j_2 with f_{j_1} ≤ u_i < f_{j_1+1} and f_{j_2−1} < u_{i+r} ≤ f_{j_2}.

A similar statement holds in the case f_i > f_{i+1} > ⋯ > f_{i+r}.

Proof.

We prove this proposition in two steps. Step 1. We claim that u_i ≤ u_{i+1} ≤ ⋯ ≤ u_{i+r}.

Suppose that u_{j−1} > u_j for some j ∈ {i+1,…,i+r}. We have to treat three cases: u_j < f_j, u_j = f_j and u_j > f_j.

In the first case, we get a contradiction with the minimality of u^λ, since it is easy to see that

G(u_1^λ,…,u_{j−1}^λ, u_j^λ + ε, u_{j+1}^λ,…,u_k^λ) < G(u^λ)

for ε > 0 small.

Now, suppose that u_j > f_j and that u_j > u_{j+1}. Then, for ε > 0 small,

G(u_1^λ,…,u_{j−1}^λ, u_j^λ − ε, u_{j+1}^λ,…,u_k^λ) < G(u^λ),

yielding the desired contradiction.

Finally, we can treat all the remaining cases (namely when u_j = f_j, or when u_j > f_j and u_{j+1} ≥ u_j) simultaneously as follows: let us denote by j_m ∈ {i,…,j−1} the minimal index such that u_{j_m} > u_{j_m+1}. In both cases we have u_{j_m} > f_{j_m}, and thus

G(u_1^λ,…,u_{j_m−1}^λ, u_{j_m}^λ − ε, u_{j_m+1}^λ,…,u_k^λ) < G(u^λ)

for ε > 0 small.

Step 2. Using Step 1, we have that

$$\sum_{j=i+1}^{i+r} |u_j^\lambda - u_{j-1}^\lambda| = u_{i+r}^\lambda - u_i^\lambda.$$

Since this value is invariant under modification of $u_j^\lambda$ for $j = i+1, \dots, i+r-1$, as long as $u_i$ and $u_{i+r}$ are kept fixed, the minimality of $u^\lambda$ implies that

$$\sum_{j=i}^{i+r} |u_j^\lambda - f_j|^p = \min_{\mathcal{A}} \sum_{j=i}^{i+r} |v_j - f_j|^p,$$

where

$$\mathcal{A} := \bigl\{(v_{i+1}, \dots, v_{i+r-1}) \in \mathbb{R}^{r-1} : u_i \le v_{i+1} \le \dots \le v_{i+r-1} \le u_{i+r}\bigr\}.$$

This proves the second part of the statement of the proposition. ∎
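The constrained problem in Step 2 has an explicit solution when, as in the proposition, the data are increasing: clamping $f$ between the two endpoint values minimizes each fidelity term separately while preserving monotonicity, which is exactly the three-part structure in case (3) of Proposition 5.5. A minimal sketch (the function name `clamp_profile` is ours, not from the paper):

```python
def clamp_profile(f, lo, hi):
    """For increasing data f and endpoint values lo <= hi, the minimizer of
    sum_j |v_j - f_j|^p over monotone chains with lo <= v_j <= hi is the
    termwise clamp: each term is minimized separately, and the clamped
    vector is still monotone (since f is increasing), hence feasible."""
    return [min(max(x, lo), hi) for x in f]
```

For instance, `clamp_profile([1, 2, 3, 4, 5], 2.5, 4.25)` returns `[2.5, 2.5, 3, 4, 4.25]`: constant at the left value, then equal to the data, then constant at the right value.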

6 A method for finding the solution

In this section, we describe the method we propose in order to identify the solution of the minimization problem (3.2). The general idea is, for every $\lambda > 0$, to be able to tell a priori the relative position of each $u_i^\lambda$ with respect to $u_{i-1}^\lambda$ and $f_i$. Knowing that allows us to

  1. know whether the minimization of $G$ has to take place in some subspace

     $$\{v_{i_1-1} = v_{i_1}\} \cap \dots \cap \{v_{i_r-1} = v_{i_r}\},$$

     and hence whether we have to reduce the number of variables $G$ depends on;

  2. write explicitly the absolute values present in the expression of $G$.

If we are able to do that, we can reduce the problem of minimizing the functional $G$ to the problem of minimizing a strictly convex functional of class $C^1$, and thus the minimizer can be found by solving the appropriate Euler–Lagrange equations.

Let $f := \sum_{i=1}^k f_i \chi_{(x_{i-1}, x_i)}$ and set $L_i := x_i - x_{i-1}$ for $i = 1, \dots, k$. Fix $p > 1$ and $\bar\lambda > 0$.

Step 0: Initialization. For every $i \in \{2, \dots, k\}$ set

$$\bar r_i := \operatorname{sgn}(f_i - f_{i-1}), \qquad \bar t_i := 0, \qquad T := \sum_{i=2}^k \bar t_i.$$

Finally, set

$$\bar s_1 := \operatorname{sgn}(f_2 - f_1), \qquad \bar s_k := \operatorname{sgn}(f_k - f_{k-1})$$

and, for every $i \in \{2, \dots, k-1\}$,

$$\bar s_i := \frac{\operatorname{sgn}(f_i - f_{i-1}) + \operatorname{sgn}(f_i - f_{i+1})}{2}.$$

Step 1: Solving the Euler–Lagrange equations. Consider the functional $\tilde G : V \to [0, \infty)$ defined by

$$\tilde G(v) := \sum_{i=2}^k \bar r_i (v_i - v_{i-1}) + \lambda \sum_{i=1}^k L_i \bigl(\bar s_i (f_i - v_i)\bigr)^p,$$

where

$$V := \{v = (v_1, \dots, v_k) \in \mathbb{R}^k : v_i = v_{i-1} \text{ if } \bar t_i = 1\}.$$

Find the solution $u_i^\lambda$ of the $i$-th Euler–Lagrange equation of $\tilde G$. Note that in the case $p = 2$ this is a set of $k - T$ linear equations. In the case $\bar s_i = 0$, set $u_i^\lambda := f_i$. It can happen that $\operatorname{sgn}(f_i - u_i^\lambda)$ changes when varying $\lambda$; in that case we have to change $\bar s_i$ accordingly.
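For $p = 2$ and an index $i$ that is neither constrained ($\bar t_i = \bar t_{i+1} = 0$, with the convention $\bar r_1 := \bar r_{k+1} := 0$) nor of the type $\bar s_i = 0$, the $i$-th Euler–Lagrange equation is linear and can be solved in closed form; a sketch:

```latex
\bar r_i - \bar r_{i+1} + 2\lambda L_i (v_i - f_i) = 0,
\qquad \text{i.e.} \qquad
u_i^\lambda = f_i + \frac{\bar r_{i+1} - \bar r_i}{2\lambda L_i}.
```

This reproduces, for instance, $u_1^\lambda = 2 - \frac{1}{2\lambda}$ in the worked example of this section.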

Step 2: Critical threshold. Find $\tilde\lambda$ as the greatest value of $\lambda$ for which there exists $i \in \{2, \dots, k\}$ such that $u_i^\lambda = u_{i-1}^\lambda$.

Step 3: Determination of the new functional. For every $i = 2, \dots, k$ set

$$\bar r_i := \operatorname{sgn}(u_i^{\tilde\lambda} - u_{i-1}^{\tilde\lambda}),$$
$$\bar t_i := \begin{cases} 1 & \text{if } u_i^{\tilde\lambda} = u_{i-1}^{\tilde\lambda},\\ 0 & \text{otherwise}, \end{cases}$$
$$T := \sum_{i=2}^k \bar t_i,$$
$$\bar s_i := \operatorname{sgn}(f_i - u_i^{\tilde\lambda}).$$

Step 4: Cycle. Repeat Steps 1, 2 and 3 until $T = k - 1$ or $\tilde\lambda \le \bar\lambda$.

The algorithm terminates after, at most, k iterations. Since at every step we have to solve k-T equations (linear in the case p=2), the complexity of the algorithm is O(k2).
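For $p = 2$, Steps 0–4 can be rephrased in block form: on every maximal constant block $B$ of the solution, the Euler–Lagrange equations give the closed-form value $v_B(\lambda) = \bigl(\sum_{i \in B} L_i f_i\bigr)/\bigl(\sum_{i \in B} L_i\bigr) + (b - a)/\bigl(2\lambda \sum_{i \in B} L_i\bigr)$, where $a$ and $b$ are the jump signs $\bar r$ at the left and right boundary of the block (taken to be $0$ at the ends of the domain), so Step 2 amounts to comparing the merge thresholds of adjacent blocks. The following is a sketch under this reformulation (the function and variable names are ours, not from the paper); it uses exact rational arithmetic, assumes $p = 2$, and assumes adjacent block means stay distinct (generic data):

```python
from fractions import Fraction

def sgn(x):
    return (x > 0) - (x < 0)

def tv1d_denoise(f, L, lam):
    """Exact minimizer, for p = 2, of
        sum_{i=2}^k |u_i - u_{i-1}| + lam * sum_{i=1}^k L_i (u_i - f_i)^2,
    obtained by merging adjacent constant blocks as lambda decreases from
    +infinity down to lam.  Returns (u, thresholds), where thresholds are
    the merge values of lambda that were crossed."""
    f = [Fraction(x) for x in f]
    L = [Fraction(x) for x in L]
    lam = Fraction(lam)
    blocks = [[1, fi * Li, Li] for fi, Li in zip(f, L)]   # [size, S, W]
    signs = [sgn(f[i + 1] - f[i]) for i in range(len(f) - 1)]

    def coeff(j):                      # c_B = (b - a) / (2 W_B)
        a = signs[j - 1] if j > 0 else 0
        b = signs[j] if j < len(signs) else 0
        return Fraction(b - a, 2) / blocks[j][2]

    thresholds = []
    while len(blocks) > 1:
        best, pairs = Fraction(0), []
        for j in range(len(blocks) - 1):
            m1 = blocks[j][1] / blocks[j][2]       # block means
            m2 = blocks[j + 1][1] / blocks[j + 1][2]
            if m1 == m2:
                continue                           # generic data assumed
            # v_j(t) = v_{j+1}(t)  <=>  t = (c_{j+1} - c_j) / (m1 - m2)
            t = (coeff(j + 1) - coeff(j)) / (m1 - m2)
            if t > best:
                best, pairs = t, [j]
            elif t == best and t > 0:
                pairs.append(j)
        if best < lam:                 # current block structure valid at lam
            break
        thresholds.append(best)
        for j in reversed(pairs):      # merge right-to-left so indices stay valid
            blocks[j] = [blocks[j][0] + blocks[j + 1][0],
                         blocks[j][1] + blocks[j + 1][1],
                         blocks[j][2] + blocks[j + 1][2]]
            del blocks[j + 1]
            del signs[j]
    u = []
    for j, (size, S, W) in enumerate(blocks):
        u.extend([S / W + coeff(j) / lam] * size)
    return u, thresholds
```

On the data of the worked example below ($f = (2,1,3,5,6,4)$, $L = (1,2,1,2,1,2)$), this produces the merge thresholds $1$, $\frac{7}{16}$, $\frac{1}{10}$ and $\frac{9}{122}$, and the constant solution $\frac{31}{9}$ below the last threshold.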

Example.

We illustrate the above strategy with a concrete example.

Let $p = 2$, $k = 6$, $L_1 = L_3 = L_5 = 1$ and $L_2 = L_4 = L_6 = 2$. Consider the initial data $f$ given by

$$f_1 = 2, \quad f_2 = 1, \quad f_3 = 3, \quad f_4 = 5, \quad f_5 = 6, \quad f_6 = 4;$$

see Figure 9.

Let us apply the algorithm.

Step 0. We have

$$\bar r_2 = -1, \quad \bar r_3 = 1, \quad \bar r_4 = 1, \quad \bar r_5 = 1, \quad \bar r_6 = -1$$

and

$$\bar s_1 = 1, \quad \bar s_2 = -1, \quad \bar s_3 = 0, \quad \bar s_4 = 0, \quad \bar s_5 = 1, \quad \bar s_6 = -1.$$

Moreover, $\bar t_i = 0$ for every $i = 2, \dots, k$ and $T = 0$.

Iteration 1, step 1. We have to consider the functional

$$\tilde G(v_1, \dots, v_6) := v_1 - 2v_2 + 2v_5 - v_6 + \lambda\bigl[(2-v_1)^2 + 2(1-v_2)^2 + |v_3-3|^2 + 2|v_4-5|^2 + (6-v_5)^2 + 2(v_6-4)^2\bigr].$$

The solution $u^\lambda$ of the Euler–Lagrange equations of $\tilde G$ is given by

$$(u_1^\lambda, u_2^\lambda, u_3^\lambda, u_4^\lambda, u_5^\lambda, u_6^\lambda) = \Bigl(2 - \tfrac{1}{2\lambda},\; 1 + \tfrac{1}{2\lambda},\; 3,\; 5,\; 6 - \tfrac{1}{\lambda},\; 4 + \tfrac{1}{4\lambda}\Bigr).$$

Iteration 1, step 2. For $\tilde\lambda = 1$, we get $u_1^{\tilde\lambda} = u_2^{\tilde\lambda}$ and $u_4^{\tilde\lambda} = u_5^{\tilde\lambda}$.

Iteration 1, step 3. We have

$$(u_1^{\tilde\lambda}, u_2^{\tilde\lambda}, u_3^{\tilde\lambda}, u_4^{\tilde\lambda}, u_5^{\tilde\lambda}, u_6^{\tilde\lambda}) = \bigl(\tfrac{3}{2},\; \tfrac{3}{2},\; 3,\; 5,\; 5,\; \tfrac{17}{4}\bigr).$$

Thus,

$$\bar r_2 = 0, \quad \bar r_3 = 1, \quad \bar r_4 = 1, \quad \bar r_5 = 1, \quad \bar r_6 = -1$$

and

$$\bar s_1 = 1, \quad \bar s_2 = -1, \quad \bar s_3 = 0, \quad \bar s_4 = 0, \quad \bar s_5 = 1, \quad \bar s_6 = -1.$$

Moreover,

$$\bar t_2 = 1, \quad \bar t_3 = 0, \quad \bar t_4 = 0, \quad \bar t_5 = 1, \quad \bar t_6 = 0$$

and $T = 2$; see Figure 10.

Figure 9: The initial data $f$.

Figure 10: The behavior of the solution for $\lambda \gg 1$ as $\lambda$ decreases.

Iteration 2, step 1. We have to consider the functional

$$\tilde G(v_1, v_2, v_3, v_4) := 2v_3 - v_1 - v_4 + \lambda\bigl[(2-v_1)^2 + 2(v_1-1)^2 + |v_2-3|^2 + 2(5-v_3)^2 + (6-v_3)^2 + 2(v_4-4)^2\bigr].$$

The solution $u^\lambda$ of the Euler–Lagrange equations of $\tilde G$ is given by

$$(u_1^\lambda, \dots, u_6^\lambda) = \Bigl(\tfrac{4}{3} + \tfrac{1}{6\lambda},\; \tfrac{4}{3} + \tfrac{1}{6\lambda},\; 3,\; \tfrac{16}{3} - \tfrac{1}{3\lambda},\; \tfrac{16}{3} - \tfrac{1}{3\lambda},\; 4 + \tfrac{1}{4\lambda}\Bigr).$$

Iteration 2, step 2. For $\tilde\lambda = \frac{7}{16}$, we get $u_6^{\tilde\lambda} = u_5^{\tilde\lambda}$.

Iteration 2, step 3. We have

$$(u_1^{\tilde\lambda}, \dots, u_6^{\tilde\lambda}) = \bigl(\tfrac{12}{7},\; \tfrac{12}{7},\; 3,\; \tfrac{32}{7},\; \tfrac{32}{7},\; \tfrac{32}{7}\bigr).$$

Thus,

$$\bar r_2 = 0, \quad \bar r_3 = 1, \quad \bar r_4 = 1, \quad \bar r_5 = 0, \quad \bar r_6 = 0$$

and

$$\bar s_1 = 1, \quad \bar s_2 = -1, \quad \bar s_3 = 0, \quad \bar s_4 = 1, \quad \bar s_5 = 1, \quad \bar s_6 = -1.$$

Moreover,

$$\bar t_2 = 1, \quad \bar t_3 = 0, \quad \bar t_4 = 0, \quad \bar t_5 = 1, \quad \bar t_6 = 1$$

and $T = 3$; see Figure 11.

Iteration 3, step 1. We have to consider the functional

$$\tilde G(v_1, v_2, v_3) := v_3 - v_1 + \lambda\bigl[(2-v_1)^2 + 2(v_1-1)^2 + |v_2-3|^2 + 2(5-v_3)^2 + (6-v_3)^2 + 2(v_3-4)^2\bigr].$$

The solution $u^\lambda$ of the Euler–Lagrange equations of $\tilde G$ is given by

$$(u_1^\lambda, \dots, u_6^\lambda) = \Bigl(\tfrac{4}{3} + \tfrac{1}{6\lambda},\; \tfrac{4}{3} + \tfrac{1}{6\lambda},\; 3,\; \tfrac{24}{5} - \tfrac{1}{10\lambda},\; \tfrac{24}{5} - \tfrac{1}{10\lambda},\; \tfrac{24}{5} - \tfrac{1}{10\lambda}\Bigr).$$

Note that for $\lambda = \frac{1}{4}$ we have $u_1^\lambda = f_1$. Thus, for $\lambda < \frac{1}{4}$, we have to consider the functional

$$G(v_1, v_2, v_3) := v_3 - v_1 + \lambda\bigl[(v_1-2)^2 + 2(v_1-1)^2 + |v_2-3|^2 + 2(5-v_3)^2 + (6-v_3)^2 + 2(v_3-4)^2\bigr].$$

Since $p = 2$, the solutions of the Euler–Lagrange equations remain equal to the previous ones.

Iteration 3, step 2. For $\tilde\lambda = \frac{1}{10}$, we get $u_2^{\tilde\lambda} = u_3^{\tilde\lambda}$.

Iteration 3, step 3. We have

$$(u_1^{\tilde\lambda}, \dots, u_6^{\tilde\lambda}) = \bigl(3,\; 3,\; 3,\; \tfrac{19}{5},\; \tfrac{19}{5},\; \tfrac{19}{5}\bigr).$$

Thus,

$$\bar r_2 = 0, \quad \bar r_3 = 0, \quad \bar r_4 = 1, \quad \bar r_5 = 0, \quad \bar r_6 = 0$$

and

$$\bar s_1 = -1, \quad \bar s_2 = -1, \quad \bar s_3 = 0, \quad \bar s_4 = 1, \quad \bar s_5 = 1, \quad \bar s_6 = 1.$$

Moreover,

$$\bar t_2 = 1, \quad \bar t_3 = 1, \quad \bar t_4 = 0, \quad \bar t_5 = 1, \quad \bar t_6 = 1$$

and $T = 4$; see Figure 12.

Figure 11: The behavior of the solution for $\lambda \in (\frac{7}{16}, 1]$ as $\lambda$ decreases.

Figure 12: The behavior of the solution for $\lambda \in (\frac{1}{10}, \frac{7}{16}]$ as $\lambda$ decreases.

Iteration 4, step 1. We have to consider the functional

$$\tilde G(v_1, v_2) := v_2 - v_1 + \lambda\bigl[(2-v_1)^2 + 2(v_1-1)^2 + (v_1-3)^2 + 2(5-v_2)^2 + (6-v_2)^2 + 2(v_2-4)^2\bigr].$$

The solution $u^\lambda$ of the Euler–Lagrange equations of $\tilde G$ is given by

$$(u_1^\lambda, \dots, u_6^\lambda) = \Bigl(\tfrac{7}{4} + \tfrac{1}{8\lambda},\; \tfrac{7}{4} + \tfrac{1}{8\lambda},\; \tfrac{7}{4} + \tfrac{1}{8\lambda},\; \tfrac{24}{5} - \tfrac{1}{10\lambda},\; \tfrac{24}{5} - \tfrac{1}{10\lambda},\; \tfrac{24}{5} - \tfrac{1}{10\lambda}\Bigr).$$

Iteration 4, step 2. For $\tilde\lambda = \frac{9}{122}$, we get that all the components are equal to each other.

Iteration 4, step 3. We have

$$(u_1^{\tilde\lambda}, \dots, u_6^{\tilde\lambda}) = \bigl(\tfrac{31}{9},\; \tfrac{31}{9},\; \tfrac{31}{9},\; \tfrac{31}{9},\; \tfrac{31}{9},\; \tfrac{31}{9}\bigr).$$

Thus,

$$\bar r_2 = \bar r_3 = \bar r_4 = \bar r_5 = \bar r_6 = 0$$

and

$$\bar s_1 = -1, \quad \bar s_2 = -1, \quad \bar s_3 = -1, \quad \bar s_4 = 1, \quad \bar s_5 = 1, \quad \bar s_6 = 1.$$

Moreover,

$$\bar t_2 = \bar t_3 = \bar t_4 = \bar t_5 = \bar t_6 = 1$$

and $T = 5$; see Figure 13.

Iteration 5. Finally, for $\lambda \le \frac{9}{122}$ the solution is given by

$$u_1^\lambda = u_2^\lambda = u_3^\lambda = u_4^\lambda = u_5^\lambda = u_6^\lambda = \tfrac{31}{9};$$

see Figure 14.

Remark 6.1.

The previous example allows us to draw some conclusions on properties of the solution $u^\lambda$:

  1. It is not true that if $u_i^{\bar\lambda} = f_i$, then $u_i^\lambda = f_i$ for all $\lambda \ge \bar\lambda$.

  2. The function $\lambda \mapsto u_i^\lambda$ is not monotone in general. Nevertheless, a change in the monotonicity can happen only if $\lambda = \lambda_i$ or $\lambda = \lambda_{i-1}$.
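Item 2 can be checked directly on the worked example by reading off $u_6^\lambda$ from the two regimes that contain it: for $\lambda > \frac{7}{16}$ the value $4 + \frac{1}{4\lambda}$, and for $\frac{9}{122} \le \lambda \le \frac{7}{16}$ the block $\{4,5,6\}$ value $\frac{2 \cdot 5 + 6 + 2 \cdot 4}{5} - \frac{1}{10\lambda} = \frac{24}{5} - \frac{1}{10\lambda}$ (the weighted-mean form of the block solution). As $\lambda$ decreases, $u_6^\lambda$ first increases and then decreases. A quick check (the helper name `u6` is ours):

```python
from fractions import Fraction as F

def u6(lam):
    """u_6^lambda on the two regimes containing index 6 (valid for lam >= 9/122):
    block {6} alone for lam > 7/16, block {4,5,6} below that."""
    lam = F(lam)
    if lam > F(7, 16):
        return 4 + 1 / (4 * lam)
    return F(24, 5) - 1 / (10 * lam)

# as lam decreases: 33/8 -> 32/7 (increasing), then 32/7 -> 31/9 (decreasing)
```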

Figure 13: The behavior of the solution for $\lambda \in (\frac{9}{122}, \frac{1}{10}]$ as $\lambda$ decreases.

Figure 14: The behavior of the solution for $\lambda < \frac{9}{122}$.

Remark 6.2.

Let us denote by $u^{\lambda,p}$ the solution of problem (3.2) corresponding to $p$ and $\lambda$. Although we know that, for every fixed $\lambda$, $u^{\lambda,p} \to v$ as $p \to 1$, where $v$ is a solution of problem (3.2) with the same $\lambda$ and $p = 1$, we cannot apply our method directly to find $v$, since the analytic computations are difficult to perform in the case $p \in (1,2)$. Nevertheless, a finer analysis of the behavior of the solution $u^{\lambda,p}$ for $p \in (1,2)$ is currently under investigation.


Funding statement: The research was funded by the National Science Foundation under Grant No. DMS-1411646.

Acknowledgements

The author wishes to thank Irene Fonseca for having introduced him to the study of this problem and for helpful discussions during the preparation of the paper. The author warmly thanks the Center for Nonlinear Analysis at Carnegie Mellon University for its support during the preparation of the manuscript.

References

[1] R. Acar and C. R. Vogel, Analysis of bounded variation penalty methods for ill-posed problems, Inverse Problems 10 (1994), no. 6, 1217–1229. doi:10.1088/0266-5611/10/6/003

[2] W. K. Allard, Total variation regularization for image denoising. I. Geometric theory, SIAM J. Math. Anal. 39 (2007/08), no. 4, 1150–1190. doi:10.1137/060662617

[3] W. K. Allard, Total variation regularization for image denoising. II. Examples, SIAM J. Imaging Sci. 1 (2008), no. 4, 400–417. doi:10.1137/070698749

[4] W. K. Allard, Total variation regularization for image denoising. III. Examples, SIAM J. Imaging Sci. 2 (2009), no. 2, 532–568. doi:10.1137/070711128

[5] L. Ambrosio, N. Fusco and D. Pallara, Functions of Bounded Variation and Free Discontinuity Problems, Oxford Math. Monogr., The Clarendon Press, New York, 2000. doi:10.1093/oso/9780198502456.001.0001

[6] L. Ambrosio and S. Masnou, A direct variational approach to a problem arising in image reconstruction, Interfaces Free Bound. 5 (2003), no. 1, 63–81. doi:10.4171/IFB/72

[7] L. Ambrosio and S. Masnou, On a variational problem arising in image reconstruction, Free Boundary Problems (Trento 2002), Internat. Ser. Numer. Math. 147, Birkhäuser, Basel (2004), 17–26. doi:10.1007/978-3-0348-7893-7_2

[8] G. Aubert and P. Kornprobst, Mathematical Problems in Image Processing, 2nd ed., Appl. Math. Sci. 147, Springer, New York, 2006. doi:10.1007/978-0-387-44588-5

[9] G. Bellettini, V. Caselles and M. Novaga, The total variation flow in $\mathbb{R}^N$, J. Differential Equations 184 (2002), no. 2, 475–525. doi:10.1006/jdeq.2001.4150

[10] G. Bellettini, V. Caselles and M. Novaga, Explicit solutions of the eigenvalue problem $-\operatorname{div}(Du/|Du|) = u$ in $\mathbb{R}^2$, SIAM J. Math. Anal. 36 (2005), no. 4, 1095–1129. doi:10.1137/S0036141003430007

[11] M. Benning, C. Brune, M. Burger and J. Müller, Higher-order TV methods—enhancement via Bregman iteration, J. Sci. Comput. 54 (2013), no. 2–3, 269–310. doi:10.1007/s10915-012-9650-3

[12] M. Benning and M. Burger, Ground states and singular vectors of convex variational regularization methods, Methods Appl. Anal. 20 (2013), no. 4, 295–334. doi:10.4310/MAA.2013.v20.n4.a1

[13] K. Bredies, K. Kunisch and T. Pock, Total generalized variation, SIAM J. Imaging Sci. 3 (2010), no. 3, 492–526. doi:10.1137/090769521

[14] K. Bredies, K. Kunisch and T. Valkonen, Properties of $L^1$-$\mathrm{TGV}^2$: The one-dimensional case, J. Math. Anal. Appl. 398 (2013), no. 1, 438–454. doi:10.1016/j.jmaa.2012.08.053

[15] A. Buades, B. Coll and J. M. Morel, A review of image denoising algorithms, with a new one, Multiscale Model. Simul. 4 (2005), no. 2, 490–530. doi:10.1137/040616024

[16] V. Caselles, A. Chambolle and M. Novaga, The discontinuity set of solutions of the TV denoising problem and some extensions, Multiscale Model. Simul. 6 (2007), no. 3, 879–894. doi:10.1137/070683003

[17] V. Caselles, A. Chambolle and M. Novaga, Regularity for solutions of the total variation denoising problem, Rev. Mat. Iberoam. 27 (2011), no. 1, 233–252. doi:10.4171/RMI/634

[18] A. Chambolle, An algorithm for total variation minimization and applications, J. Math. Imaging Vision 20 (2004), 89–97. doi:10.1023/B:JMIV.0000011320.81911.38

[19] A. Chambolle and P.-L. Lions, Image recovery via total variation minimization and related problems, Numer. Math. 76 (1997), no. 2, 167–188. doi:10.1007/s002110050258

[20] A. Chambolle and T. Pock, A first-order primal-dual algorithm for convex problems with applications to imaging, J. Math. Imaging Vision 40 (2011), no. 1, 120–145. doi:10.1007/s10851-010-0251-1

[21] T. Chan, A. Marquina and P. Mulet, High-order total variation-based image restoration, SIAM J. Sci. Comput. 22 (2000), no. 2, 503–516. doi:10.1137/S1064827598344169

[22] T. F. Chan and S. Esedoḡlu, Aspects of total variation regularized $L^1$ function approximation, SIAM J. Appl. Math. 65 (2005), no. 5, 1817–1837. doi:10.1137/040604297

[23] T. F. Chan, S. H. Kang and J. Shen, Euler’s elastica and curvature-based inpainting, SIAM J. Appl. Math. 63 (2002), no. 2, 564–592. doi:10.1137/S0036139901390088

[24] T. F. Chan and J. Shen, Image Processing and Analysis, Society for Industrial and Applied Mathematics, Philadelphia, 2005. doi:10.1137/1.9780898717877

[25] R. Choksi, I. Fonseca and B. Zwicknagl, A few remarks on variational models for denoising, Commun. Math. Sci. 12 (2014), no. 5, 843–857. doi:10.4310/CMS.2014.v12.n5.a3

[26] R. Choksi, Y. van Gennip and A. Oberman, Anisotropic total variation regularized $L^1$ approximation and denoising/deblurring of 2D bar codes, Inverse Probl. Imaging 5 (2011), no. 3, 591–617. doi:10.3934/ipi.2011.5.591

[27] G. Dal Maso, I. Fonseca, G. Leoni and M. Morini, A higher order model for image restoration: The one-dimensional case, SIAM J. Math. Anal. 40 (2009), no. 6, 2351–2391. doi:10.1137/070697823

[28] J. Darbon and M. Sigelle, A fast and exact algorithm for total variation minimization, Pattern Recognition and Image Analysis, Springer, Berlin (2005), 351–359. doi:10.1007/11492429_43

[29] D. C. Dobson and F. Santosa, Recovery of blocky images from noisy and blurred data, SIAM J. Appl. Math. 56 (1996), no. 4, 1181–1198. doi:10.1137/S003613999427560X

[30] M. Droske and A. Bertozzi, Higher-order feature-preserving geometric regularization, SIAM J. Imaging Sci. 3 (2010), no. 1, 21–51. doi:10.1137/090751694

[31] V. Duval, J.-F. Aujol and Y. Gousseau, The TVL1 model: A geometric point of view, Multiscale Model. Simul. 8 (2009), no. 1, 154–189. doi:10.1137/090757083

[32] S. Esedoḡlu and S. J. Osher, Decomposition of images by the anisotropic Rudin–Osher–Fatemi model, Comm. Pure Appl. Math. 57 (2004), no. 12, 1609–1626. doi:10.1002/cpa.20045

[33] A. Flinth and P. Weiss, Exact solutions of infinite dimensional total-variation regularized problems, Inf. Inference 8 (2019), no. 3, 407–443. doi:10.1093/imaiai/iay016

[34] I. Fonseca, G. Leoni, F. Maggi and M. Morini, Exact reconstruction of damaged color images using a total variation model, Ann. Inst. H. Poincaré Anal. Non Linéaire 27 (2010), no. 5, 1291–1331. doi:10.1016/j.anihpc.2010.06.004

[35] Y. Gousseau and J.-M. Morel, Are natural images of bounded variation?, SIAM J. Math. Anal. 33 (2001), no. 3, 634–648. doi:10.1137/S0036141000371150

[36] A. Haddad and Y. Meyer, An improvement of Rudin–Osher–Fatemi model, Appl. Comput. Harmon. Anal. 22 (2007), no. 3, 319–334. doi:10.1016/j.acha.2006.09.001

[37] M. Hintermüller, K. Papafitsoros and C. N. Rautenberg, Analytical aspects of spatially adapted total variation regularisation, J. Math. Anal. Appl. 454 (2017), no. 2, 891–935. doi:10.1016/j.jmaa.2017.05.025

[38] M. Hintermüller, C. N. Rautenberg and J. Hahn, Functional-analytic and numerical issues in splitting methods for total variation-based image reconstruction, Inverse Problems 30 (2014), no. 5, Paper No. 055014. doi:10.1088/0266-5611/30/5/055014

[39] J. Lellmann, K. Papafitsoros, C. Schönlieb and D. Spector, Analysis and application of a nonlocal Hessian, SIAM J. Imaging Sci. 8 (2015), no. 4, 2161–2202. doi:10.1137/140993818

[40] G. Leoni, A First Course in Sobolev Spaces, Grad. Stud. Math. 105, American Mathematical Society, Providence, 2009. doi:10.1090/gsm/105

[41] Y. Lou, T. Zeng, S. Osher and J. Xin, A weighted difference of anisotropic and isotropic total variation model for image processing, SIAM J. Imaging Sci. 8 (2015), no. 3, 1798–1823. doi:10.21236/ADA610252

[42] Y. Meyer, Oscillating Patterns in Image Processing and Nonlinear Evolution Equations, Univ. Lecture Ser. 22, American Mathematical Society, Providence, 2001. doi:10.1090/ulect/022

[43] M. Nikolova, Minimizers of cost-functions involving nonsmooth data-fidelity terms. Application to the processing of outliers, SIAM J. Numer. Anal. 40 (2002), no. 3, 965–994. doi:10.1137/S0036142901389165

[44] S. Osher, A. Solé and L. Vese, Image decomposition and restoration using total variation minimization and the $H^{-1}$ norm, Multiscale Model. Simul. 1 (2003), no. 3, 349–370. doi:10.1137/S1540345902416247

[45] K. Papafitsoros and K. Bredies, A study of the one dimensional total generalised variation regularisation problem, Inverse Probl. Imaging 9 (2015), no. 2, 511–550. doi:10.3934/ipi.2015.9.511

[46] K. Papafitsoros and C. B. Schönlieb, A combined first and second order variational approach for image reconstruction, J. Math. Imaging Vision 48 (2014), no. 2, 308–338. doi:10.1007/s10851-013-0445-4

[47] K. Papafitsoros and T. Valkonen, Asymptotic behaviour of total generalised variation, Scale Space and Variational Methods in Computer Vision, Lecture Notes in Comput. Sci. 9087, Springer, Cham (2015), 720–714. doi:10.1007/978-3-319-18461-6_56

[48] C. Pöschl, Tikhonov Regularization with General Residual Term, PhD thesis, University of Innsbruck, Innsbruck, 2008.

[49] C. Pöschl and O. Scherzer, Characterization of minimizers of convex regularization functionals, Frames and Operator Theory in Analysis and Signal Processing, Contemp. Math. 451, American Mathematical Society, Providence (2008), 219–248. doi:10.1090/conm/451/08784

[50] C. Pöschl and O. Scherzer, Exact solutions of one-dimensional total generalized variation, Commun. Math. Sci. 13 (2015), no. 1, 171–202. doi:10.4310/CMS.2015.v13.n1.a9

[51] W. Ring, Structural properties of solutions to total variation regularization problems, M2AN Math. Model. Numer. Anal. 34 (2000), no. 4, 799–810. doi:10.1051/m2an:2000104

[52] L. I. Rudin, S. Osher and E. Fatemi, Nonlinear total variation based noise removal algorithms, Phys. D 60 (1992), no. 1–4, 259–268. doi:10.1016/0167-2789(92)90242-F

[53] O. Scherzer, M. Grasmair, H. Grossauer, M. Haltmeier and F. Lenzen, Variational Methods in Imaging, Appl. Math. Sci. 167, Springer, New York, 2009.

[54] D. Strong and T. Chan, Edge-preserving and scale-dependent properties of total variation regularization, Inverse Problems 19 (2003), no. 6, S165–S187. doi:10.1088/0266-5611/19/6/059

[55] A. N. Tikhonov and V. Y. Arsenin, Solutions of Ill-Posed Problems, V. H. Winston & Sons, Washington, 1977.

[56] T. Valkonen, The jump set under geometric regularization. Part 1: Basic technique and first-order denoising, SIAM J. Math. Anal. 47 (2015), no. 4, 2587–2629. doi:10.1137/140976248

[57] L. Vese, A study in the BV space of a denoising-deblurring variational problem, Appl. Math. Optim. 44 (2001), no. 2, 131–161. doi:10.1007/s00245-001-0017-7

Received: 2018-11-13
Accepted: 2020-03-01
Published Online: 2020-12-18
Published in Print: 2021-06-01

© 2021 Walter de Gruyter GmbH, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
