
Data smoothing with applications to edge detection

  • Mohammad F. Al-Jamal, Ahmad Baniabedalruhman and Abedel-Karrem Alomari
Published/Copyright: June 24, 2022

Abstract

The aim of this paper is to present a new stable method for smoothing and differentiating noisy data defined on a bounded domain $\Omega \subset \mathbb{R}^N$ with $N \ge 1$. The proposed method stems from the smoothing properties of the classical diffusion equation; the smoothed data are obtained by solving a diffusion equation with the noisy data imposed as the initial condition. We analyze the stability and convergence of the proposed method and give optimal convergence rates. One of the main advantages of this method lies in multivariable problems, where some of the other approaches are not easily generalized. Moreover, this approach does not require strong smoothness assumptions on the underlying data, which makes it appealing for detecting data corners or edges. Numerical examples demonstrate the feasibility and robustness of the method even in the presence of large amounts of noise.

MSC 2010: 65F22; 47A52; 35K20; 65D19

1 Introduction

It is very common in many scientific applications, such as biological imaging, image processing and computer vision, and inverse source and identification problems, to have to differentiate functions specified by data. In many cases, the data are experimental, so substantial noise or imprecision may be present. Since differentiation is an ill-posed problem [1], naive numerical differentiation techniques will amplify the noise, yielding inaccurate and mostly useless results. A concrete example is given by the function

$$ f^{\delta}(x) = f(x) + \delta \sin(\omega x), \quad x \in (0,1), $$

where $f^{\delta}$ is the noisy version of the true data $f$, and $\delta$ is a small number representing the noise level. The computed derivative is then

$$ \frac{d f^{\delta}}{dx}(x) = \frac{d f}{dx}(x) + \omega \delta \cos(\omega x), \quad x \in (0,1). $$

Evidently, the maximum error in the data is $\delta$, which is small, while the maximum error in the computed derivative is $\omega\delta$, which can be arbitrarily large depending on the value of the frequency $\omega$.
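This amplification is easy to reproduce numerically. The following sketch (an illustrative script of ours, not part of the paper; the choices $f(x) = \sin x$, $\delta = 10^{-3}$, and the grid size are our assumptions) differentiates the noisy data with central differences and reports the two maximum errors:

```python
import numpy as np

def noise_amplification(omega, delta=1e-3, m=10_000):
    """Return (max data error, max derivative error) for f(x) = sin(x)
    corrupted by delta*sin(omega*x) on (0, 1)."""
    x = np.linspace(0.0, 1.0, m)
    f = np.sin(x)
    f_noisy = f + delta * np.sin(omega * x)
    h = x[1] - x[0]
    # central differences at the interior points
    df_true = np.cos(x[1:-1])
    df_noisy = (f_noisy[2:] - f_noisy[:-2]) / (2.0 * h)
    return np.max(np.abs(f_noisy - f)), np.max(np.abs(df_noisy - df_true))

# the data error stays near delta, the derivative error grows with omega
for omega in (10.0, 100.0, 1000.0):
    data_err, deriv_err = noise_amplification(omega)
    print(f"omega={omega:6.0f}  data error={data_err:.1e}  derivative error={deriv_err:.1e}")
```

Increasing $\omega$ by a factor of 100 leaves the data error at the level $\delta$ while the derivative error grows by roughly the same factor of 100.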

The last example showcases the instability associated with the differentiation process: small errors in the data might induce large errors in the computed derivative. To overcome the instability of the differentiation operator, several regularization techniques have been developed. In [2], a perturbation-based method is used for single-variable functions. In [3], a mollification method is given. Smoothing splines are adopted in [4], and results for single-variable functions are given. Other methods, such as Lanczos' generalized derivative [5] and differentiation by integration using Jacobi polynomials [6], are known for single-variable functions only. For a summary of other popular methods, see [7]. Despite its importance in real-life applications, only a few methods dealing with higher-dimensional cases have been reported [8,9,10,11]. Aside from the limited applicability due to dimension restrictions, the aforementioned approaches are too slow to suit many real-life applications, particularly with big data.

In this paper, the smoothing property of the classical diffusion equation is utilized to reconstruct the gradient $\nabla f(x)$ of a multivariable function $f(x)$ from given noise-corrupted data $f^{\delta} \in L^2(\Omega)$ satisfying

$$ \| f^{\delta} - f \|_{L^2(\Omega)} \le \delta, $$

where $\Omega \subset \mathbb{R}^N$ is a bounded domain. For the time being, we shall assume that $f|_{\partial\Omega} = 0$; see the Appendix for the general case. More precisely, the proposed method amounts to taking the function $\nabla_x u$ as a stable approximation to $\nabla f$, where $u$ solves the diffusion equation:

$$ (1) \quad \partial_t u(x,t) = \Delta_x u(x,t), \quad x \in \Omega,\ t > 0, $$

subject to the initial and boundary conditions:

$$ (2) \quad u(x,0) = f^{\delta}(x),\ x \in \Omega; \qquad u(x,t) = 0,\ x \in \partial\Omega,\ t > 0, $$

where $\Delta$ is the Laplacian operator. The time $t$ can be considered as a design parameter, and its role is to control the smoothness of the desired approximation; in the context of regularization theory, $t$ represents the regularization parameter. The precise formulation of the suggested method, as well as the motivation behind it, will be given in more detail in the sequel.

Under reasonable assumptions on the true data $f$, we prove stability results and establish an optimal convergence rate for the proposed method. In particular, we show that the method attains an optimal rate of convergence of order $O(\delta^{2/3})$. Several examples and applications in computer vision demonstrating the validity and efficiency of the proposed approach are also provided.

The rest of this paper is composed of five sections. Section 2 recalls some preliminary and auxiliary results. In Section 3, we state the precise formulation of the proposed method and give the motivation behind it, and then we prove the main convergence results in Section 4. We provide several numerical experiments and applications in Section 5. A more detailed discussion of the general case is provided in the Appendix.

2 Preliminaries

First, we recall some definitions and results from the theory of Sobolev spaces. Readers may refer to [12,13] for more detailed discussions. Throughout the rest of the paper, we assume that $\Omega$ is a bounded domain in $\mathbb{R}^N$, $N \ge 1$, with sufficiently smooth boundary $\partial\Omega$.

We use the notation L 2 ( Ω ) to denote the usual Lebesgue space of square-integrable functions, which is a Hilbert space with the inner product

$$ (u,v)_{L^2(\Omega)} = \int_{\Omega} u\,v, \quad u, v \in L^2(\Omega). $$

The induced norm will be denoted by $\|\cdot\|_{L^2(\Omega)}$.

Let $\alpha = (\alpha_1, \ldots, \alpha_N)$ be a multi-index, and set $|\alpha| = \alpha_1 + \cdots + \alpha_N$. For a positive integer $m$, the Sobolev space $H^m(\Omega)$ is defined by

$$ H^m(\Omega) = \{ u \in L^2(\Omega) : \partial^{\alpha} u \in L^2(\Omega) \ \text{for all}\ |\alpha| \le m \}, $$

where the partial derivatives are taken in the distributional sense. It is a Hilbert space under the inner product

$$ (u,v)_{H^m(\Omega)} = \sum_{|\alpha| \le m} (\partial^{\alpha} u, \partial^{\alpha} v)_{L^2(\Omega)}, $$

and a Banach space under the corresponding induced norm. In particular, the norm on $H^1(\Omega)$ is given by

$$ \| u \|_{H^1(\Omega)}^2 = \| u \|_{L^2(\Omega)}^2 + \| \nabla u \|_{L^2(\Omega)}^2. $$

The Hilbert space $H_0^1(\Omega)$ is defined as the closure of $C_0^{\infty}(\Omega)$ in $H^1(\Omega)$, where $C_0^{\infty}(\Omega)$ is the set of all functions that are infinitely differentiable on $\Omega$ and compactly supported in $\Omega$. We have the characterization

$$ H_0^1(\Omega) = \{ u \in H^1(\Omega) : u|_{\partial\Omega} = 0 \}, $$

where the boundary value is understood in the trace sense.

By $H^{1/2}(\partial\Omega)$, we mean the space of all restrictions to $\partial\Omega$ of functions in $H^1(\Omega)$. It is a Hilbert space with the norm

$$ \| u \|_{H^{1/2}(\partial\Omega)} = \inf \{ \| v \|_{H^1(\Omega)} : v \in H^1(\Omega),\ v|_{\partial\Omega} = u \}. $$

From the Sturm-Liouville theory [14], the eigenvalues of the eigenvalue problem

$$ (3) \quad -\Delta \varphi = \lambda \varphi \ \text{in}\ \Omega, \qquad \varphi = 0 \ \text{on}\ \partial\Omega $$

form a nondecreasing sequence of positive numbers $\{\lambda_n\}$ tending to infinity, and the corresponding (normalized) eigenfunctions $\{\varphi_n\} \subset H_0^1(\Omega) \cap H^2(\Omega)$ form an orthonormal basis for the space $L^2(\Omega)$. Moreover,

$$ (4) \quad \| \nabla u \|_{L^2(\Omega)}^2 = \sum_{n=1}^{\infty} (u, \varphi_n)_{L^2(\Omega)}^2 \, \lambda_n, \quad u \in H_0^1(\Omega). $$

Associated with the eigenvalue problem (3), the Hilbert space X β is defined as follows:

$$ X^{\beta} = \left\{ u \in L^2(\Omega) : \| u \|_{X^{\beta}}^2 = \sum_{n=1}^{\infty} (u, \varphi_n)_{L^2(\Omega)}^2 \, \lambda_n^{2\beta} < \infty \right\}. $$

The parameter $\beta$ characterizes the regularity properties of the space $X^{\beta}$. We have $X^0 = L^2(\Omega)$, $X^{1/2} = H_0^1(\Omega)$, $X^1 = H^2(\Omega) \cap H_0^1(\Omega)$, and $X^{\beta} \subset H^{2\beta}(\Omega)$; see [14].

3 Motivation of the proposed algorithm

To motivate the proposed regularization approach, let $u$ be the solution of the parabolic initial-boundary value problem given by equations (1) and (2). Let us set $y^{\delta}(x) = u(x,t)$ for $x \in \Omega$, and think of $t$ as a parameter. Then, on separating variables, we have the representation

$$ (5) \quad y^{\delta}(x) = \sum_{n=1}^{\infty} (f^{\delta}, \varphi_n)_{L^2(\Omega)} \exp(-\lambda_n t)\, \varphi_n(x). $$

A standard result [14] is that $y^{\delta} \in H^2(\Omega) \cap H_0^1(\Omega)$ for $t > 0$. From the expansion given in (5), we deduce that

$$ (y^{\delta}, \varphi_n)_{L^2(\Omega)} = \exp(-\lambda_n t)\, (f^{\delta}, \varphi_n)_{L^2(\Omega)}, \quad n = 1, 2, \ldots, $$

and so, due to the exponential term, the Fourier coefficients of $y^{\delta}$ decay faster than those of $f^{\delta}$. It is also evident that $y^{\delta} \approx f^{\delta}$ for small values of the parameter $t$. Thus, from this brief analysis, we infer that $y^{\delta}$ is an approximation of $f^{\delta}$ that is smoother than the input data $f^{\delta}$.

As a consequence of the aforementioned discussion, it would be reasonable to take $y^{\delta}(x)$, $t > 0$, as a smooth approximation to $f(x)$ for $x \in \Omega$. Therefore, we suggest the following numerical differentiation algorithm:

Step 1. Compute the smoothed version of the noisy data by

$$ y^{\delta}(x) = \sum_{n=1}^{\infty} (f^{\delta}, \varphi_n)_{L^2(\Omega)} \exp(-\lambda_n t)\, \varphi_n(x), \quad x \in \Omega; $$

Step 2. Take the gradient $\nabla y^{\delta}(x)$ as an approximation of $\nabla f(x)$.

A few remarks are in order. First, from expansion (5), we see that the parameter $t$ plays the role of a smoothing parameter; that is, excessively small or large values of $t$ result in undersmoothing or oversmoothing, respectively. Second, we point out that the gradient in the second step can be computed exactly or by an appropriate difference scheme, depending on the nature of the available data.

4 Stability and consistency analysis

Our goal next is to examine $\nabla y^{\delta}(x)$ computed by the proposed algorithm as a stable approximation to $\nabla f(x)$. In particular, we study the stability and consistency of the aforementioned approach and prove a convergence result. Throughout the sequel, we assume $t > 0$, and for convenience, we also let

$$ y(x) = \sum_{n=1}^{\infty} (f, \varphi_n)_{L^2(\Omega)} \exp(-\lambda_n t)\, \varphi_n(x), \quad x \in \Omega, $$

that is, $y$ is the solution of (1) and (2) corresponding to the noiseless data $f$.

We start off with the following stability result.

Lemma 4.1

We have

$$ \| \nabla y^{\delta} - \nabla y \|_{L^2(\Omega)} \le \frac{\delta}{2\sqrt{t}}. $$

Proof

Set $g = f^{\delta} - f$ and $u = y^{\delta} - y$. Since $g \in L^2(\Omega)$, we have $u \in H_0^1(\Omega)$. Then, it follows from (4) and the uniqueness of the Fourier coefficients that

$$ \| \nabla u \|_{L^2(\Omega)}^2 = \sum_{n=1}^{\infty} \lambda_n \left( \sum_{m=1}^{\infty} (g, \varphi_m)_{L^2(\Omega)} \exp(-\lambda_m t)\, \varphi_m,\ \varphi_n \right)_{L^2(\Omega)}^2 = \sum_{n=1}^{\infty} (g, \varphi_n)_{L^2(\Omega)}^2 \exp(-2\lambda_n t)\, \lambda_n. $$

By using the fact that $4 t e^{-2t} \le 1$ for all $t \ge 0$ (applied with $t$ replaced by $\lambda_n t$) and Parseval's identity, we see that

$$ \| \nabla u \|_{L^2(\Omega)}^2 \le \frac{1}{4t} \| g \|_{L^2(\Omega)}^2 \le \frac{\delta^2}{4t}, $$

which concludes the proof.□

Throughout the sequel, we shall assume that $\beta > 1/2$, and we set $\eta = \beta - 1/2$ if $\beta < 3/2$, and $\eta = 1$ if $\beta \ge 3/2$. We have the following consistency result:

Lemma 4.2

If $f \in X^{\beta}$, then

$$ \| \nabla y - \nabla f \|_{L^2(\Omega)} \le C_{\beta}\, t^{\eta}, $$

where $C_{\beta} = 2 \max\{1, \lambda_1^{3/2-\beta}\}\, \| f \|_{X^{\beta}}$.

Proof

Set $u(x) = y(x) - f(x)$, so that $u \in H_0^1(\Omega)$. Moreover, from (4), the definition of $y$, and the uniqueness of the Fourier coefficients, we have

$$ \| \nabla u \|_{L^2(\Omega)}^2 = \sum_{n=1}^{\infty} \left[ (y, \varphi_n)_{L^2(\Omega)} - (f, \varphi_n)_{L^2(\Omega)} \right]^2 \lambda_n = \sum_{n=1}^{\infty} (f, \varphi_n)_{L^2(\Omega)}^2 \left( 1 - \exp(-\lambda_n t) \right)^2 \lambda_n. $$

The last equality together with the fact that $1 - e^{-t} \le 2t/(t+1)$ for $t \ge 0$ implies

$$ \| \nabla u \|_{L^2(\Omega)}^2 \le 4 \sum_{n=1}^{\infty} (f, \varphi_n)_{L^2(\Omega)}^2 \left( \frac{\lambda_n t}{1 + \lambda_n t} \right)^2 \lambda_n = 4 \sum_{n=1}^{\infty} (f, \varphi_n)_{L^2(\Omega)}^2 \, \lambda_n^{2\beta}\, \frac{\lambda_n^{3-2\beta}\, t^2}{(1 + \lambda_n t)^2}. $$

Now, it is an easy matter to show that

$$ \frac{t^2 \lambda^{3-2\beta}}{(1 + t\lambda)^2} \le \begin{cases} t^{2\beta-1}, & \beta < 3/2, \\ \lambda_1^{3-2\beta}\, t^2, & \beta \ge 3/2, \end{cases} $$

for all $\lambda \ge \lambda_1$, from which we see that

$$ \| \nabla u \|_{L^2(\Omega)}^2 \le 4 \max\{1, \lambda_1^{3-2\beta}\}\, \| f \|_{X^{\beta}}^2 \begin{cases} t^{2\beta-1}, & \beta < 3/2, \\ t^2, & \beta \ge 3/2, \end{cases} $$

which ends the proof.□

Combining Lemmas 4.1, 4.2, and the triangle inequality, we obtain the main convergence result:

Theorem 4.1

Suppose that $f \in X^{\beta}$ for some $\beta > 1/2$. Then there exists a constant $K$ independent of $t$ and $\delta$ such that

$$ \| \nabla y^{\delta} - \nabla f \|_{L^2(\Omega)} \le K \left( t^{\eta} + \frac{\delta}{\sqrt{t}} \right). $$

Remark 4.1

If we choose the parameter $t$ so that $t \sim \delta^{\gamma}$ for some $\gamma \in (0,2)$, then we obtain the convergence result

$$ \| \nabla y^{\delta} - \nabla f \|_{L^2(\Omega)} \to 0 $$

as $\delta \to 0$. The convergence rate is optimal when $\gamma = 2/(2\eta+1)$; in this case, we have

$$ \| \nabla y^{\delta} - \nabla f \|_{L^2(\Omega)} = O(\delta^{2\eta/(2\eta+1)}). $$

We obtain the fastest convergence when $\beta \ge 3/2$, in which case $\eta = 1$ and

$$ \| \nabla y^{\delta} - \nabla f \|_{L^2(\Omega)} = O(\delta^{2/3}), $$

provided that $t = C \delta^{2/3}$ for some positive constant $C$.

5 Numerical experiments

Since data acquisition results in only discrete data in many scientific applications, we assume $\Omega = (0, L)^N$ and that the noisy data, which we denote by $f^{\delta}$, are sampled at $n = (m+1)^N$ regular grid points $X = \{ x_{\mathbf{i}} = (i_1 h, \ldots, i_N h) : \mathbf{i} \in I \}$ using the formula

$$ f_{\mathbf{i}}^{\delta} = f(x_{\mathbf{i}}) + e_{\mathbf{i}}, \quad \mathbf{i} \in I = \{ (i_1, \ldots, i_N) : i_1, \ldots, i_N = 1, \ldots, m-1 \}, $$

where $e$ is Gaussian noise with mean 0, and $h = L/m$. Let us further assume that $f|_{\partial\Omega} = 0$ (refer to the Appendix for the general treatment). We use the symbol $\delta$ to denote the relative noise level in the data measured in the $\ell^2$ norm, that is,

$$ \| f^{\delta} - f \| = \delta \| f \|, $$

where $\|\cdot\|$ is the usual Euclidean norm. We shall assess the quality of the recovered gradient by the relative $\ell^2$ error given by

$$ \frac{\| \nabla y^{\delta} - \nabla f \|}{\| \nabla f \|}. $$

Regarding the choice of the parameter $t$, we employ Morozov's discrepancy principle (e.g., [1]), which amounts to choosing $t$ such that

$$ \| y^{\delta} - f^{\delta} \| = \delta \| f \|. $$
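The discrepancy $\|y^{\delta} - f^{\delta}\|$ is increasing in $t$ (more smoothing leaves a larger residual), so the equation above can be solved by a simple bisection. Here is a sketch of ours operating directly on the sine coefficients; the function name, the search interval, and the log-scale bisection are our choices, not part of the paper:

```python
import numpy as np

def morozov_t(coeffs, lam, delta_rel, norm_f, t_lo=1e-12, t_hi=1e2, iters=60):
    """Bisect for the t with ||y_t - f_noisy|| = delta_rel * norm_f, where
    y_t has (orthonormal-basis) sine coefficients coeffs * exp(-lam * t)."""
    target = delta_rel * norm_f
    def discrepancy(t):
        # by Parseval, the L2 residual equals the residual of the coefficients
        return np.linalg.norm(coeffs * (1.0 - np.exp(-lam * t))) - target
    lo, hi = t_lo, t_hi
    for _ in range(iters):
        mid = np.sqrt(lo * hi)          # bisection on a log scale
        if discrepancy(mid) < 0.0:
            lo = mid                    # residual too small: smooth more
        else:
            hi = mid
    return np.sqrt(lo * hi)

# illustrative coefficients decaying like 1/n^2 on Omega = (0, 1)
n = np.arange(1, 501)
coeffs = 1.0 / n**2
lam = (n * np.pi) ** 2
t_star = morozov_t(coeffs, lam, delta_rel=0.1, norm_f=np.linalg.norm(coeffs))
```

Bisection applies because the discrepancy is continuous and strictly monotone in $t$, vanishing as $t \to 0$ and approaching $\|f^{\delta}\|$ as $t \to \infty$.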

For such a domain $\Omega$, the eigenvalues and eigenfunctions of (3) are

$$ \lambda_{\mathbf{i}} = \left( \frac{i_1 \pi}{L} \right)^2 + \cdots + \left( \frac{i_N \pi}{L} \right)^2, \qquad \varphi_{\mathbf{i}}(x) = \sin\frac{i_1 \pi x_1}{L} \cdots \sin\frac{i_N \pi x_N}{L}. $$

Therefore, the discrete sine transform $\widehat{f^{\delta}}$ of the data $f^{\delta}$ is expressed as follows:

$$ \widehat{f^{\delta}}_{\mathbf{i}} = \left( \frac{2}{m} \right)^{N/2} \sum_{\mathbf{k} \in I} f_{\mathbf{k}}^{\delta}\, \varphi_{\mathbf{i}}(x_{\mathbf{k}}), \quad \mathbf{i} \in I. $$

Using the trapezoidal rule over the regular grid $X$, the smoothed data given by (5) can be computed as follows:

$$ y^{\delta}(x_{\mathbf{j}}) = \sum_{\mathbf{i} \in I} \left( \frac{2}{L} \right)^{N} \left( \int_{\Omega} f^{\delta}(x)\, \varphi_{\mathbf{i}}(x)\, dx \right) e^{-\lambda_{\mathbf{i}} t}\, \varphi_{\mathbf{i}}(x_{\mathbf{j}}) \approx \sum_{\mathbf{i} \in I} \left( \frac{2}{m} \right)^{N} \sum_{\mathbf{k} \in I} f_{\mathbf{k}}^{\delta}\, \varphi_{\mathbf{i}}(x_{\mathbf{k}})\, e^{-\lambda_{\mathbf{i}} t}\, \varphi_{\mathbf{i}}(x_{\mathbf{j}}) = \left( \frac{2}{m} \right)^{N/2} \sum_{\mathbf{i} \in I} \left( E_{\mathbf{i}}\, \widehat{f^{\delta}}_{\mathbf{i}} \right) \varphi_{\mathbf{i}}(x_{\mathbf{j}}), \quad \mathbf{j} \in I, $$

where $E_{\mathbf{i}} = e^{-\lambda_{\mathbf{i}} t}$. Thus, the smoothed data can be computed by

$$ y^{\delta} = \widehat{\widehat{f^{\delta}} \odot E}, $$

where the symbol $\odot$ denotes the elementwise multiplication operator, and the hat ($\widehat{\ \cdot\ }$) stands for the discrete sine transform.
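This DST-based computation can be sketched with SciPy's fast sine transforms; the DST-I modes match the zero boundary condition exactly. The helper name and parameter defaults below are ours, and SciPy's normalization makes `idstn` the inverse of `dstn`, absorbing the $(2/m)^{N/2}$ factors:

```python
import numpy as np
from scipy.fft import dstn, idstn

def diffusion_smooth(fd, L=1.0, t=1e-4):
    """Damp the DST-I coefficients of fd by exp(-lambda_i * t).

    fd holds the interior samples (zero boundary data) on a regular grid
    over (0, L)^N; the DST-I modes sin(i_k*pi*x_k/L) are exactly the
    eigenfunctions, with lambda_i = sum_k (i_k*pi/L)**2."""
    F = dstn(fd, type=1)                      # forward sine transform
    axes = [(np.arange(1, s + 1) * np.pi / L) ** 2 for s in fd.shape]
    lam = np.sum(np.meshgrid(*axes, indexing="ij"), axis=0)
    return idstn(F * np.exp(-lam * t), type=1)

# sanity check on the first eigenfunction sin(pi*x)*sin(pi*y), which the
# smoother should simply scale by exp(-2*pi^2*t)
m = 64
x = np.arange(1, m) / m
X, Y = np.meshgrid(x, x, indexing="ij")
fd = np.sin(np.pi * X) * np.sin(np.pi * Y)
y = diffusion_smooth(fd, t=1e-3)
```

Because the transform diagonalizes the discrete problem, the whole smoothing step costs one forward and one inverse FFT-based transform.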

In the following examples, the derivatives of the smoothed data are computed via the central difference formula:

$$ \frac{\partial y^{\delta}}{\partial x_k}(x_{\mathbf{i}}) \approx \frac{ y^{\delta}_{i_1, \ldots, i_k+1, \ldots, i_N} - y^{\delta}_{i_1, \ldots, i_k-1, \ldots, i_N} }{2h}, \quad \mathbf{i} \in I,\ k = 1, \ldots, N, $$

and similarly for the true and noisy data.
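Incidentally, `numpy.gradient` applies exactly this central-difference stencil at interior grid points (with one-sided differences at the boundary), so the differentiation step reduces to a one-liner; the quadratic test function here is our illustrative choice:

```python
import numpy as np

h = 0.01
x = np.arange(0.0, 1.0 + h / 2, h)
y = x ** 2
# interior entries of np.gradient are (y[i+1] - y[i-1]) / (2h), i.e. the
# central difference above; for a quadratic the stencil is exact
dy = np.gradient(y, h)
```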

Now, we examine examples in several space dimensions. We point out that the computations are extremely quick owing to the fast Fourier transform (FFT) routine, which is available in many computer algebra systems.

Example 5.1

The true data are given by the function

$$ f(x) = \exp(-3(2x-1)^2)\, \sin(5\pi x), \quad 0 \le x \le 1. $$

For noise level δ = 20 % and sample size n = 500 , the true, noisy, and smoothed data, and their corresponding derivatives are shown in Figure 1. Error results are summarized in Table 1.

Figure 1: Data (left) and the corresponding derivative (right) for Example 5.1.

Table 1

$\ell^2$-relative errors in the estimated derivatives for Example 5.1 for several sample sizes $n$ and noise levels $\delta$. Numbers in parentheses represent the relative errors computed from the noisy data

δ \ n 50 500 5,000
0.2 0.190 (0.809) 0.085 (9.084) 0.043 (89.13)
0.02 0.045 (0.080) 0.030 (0.908) 0.020 (8.913)
0.002 0.007 (0.008) 0.012 (0.090) 0.008 (0.891)

Remark 5.1

The results in Example 5.1 suggest a convergence rate of $O(\delta^{0.48})$. Since $f \in X^{\beta}$ for all $\beta < 1.25$, from Remark 4.1 we deduce the theoretical optimal order $O(\delta^{0.6})$. The deterioration in the convergence rate is mainly due to discretization errors and the nonoptimal choice of the smoothing parameter $t$.

Example 5.2

We consider the recovery of the gradient of the function

$$ f(x,y) = 5(6y-3)\, \exp(-(6x-3)^2 - (6y-3)^2), $$

where $(x,y) \in [0,1]^2$. For noise level $\delta = 10\%$ and sample size $n = 50 \times 50$, the noisy and smoothed data are depicted in Figure 2. The error results in the estimated gradients are summarized in Table 2.

Figure 2: True data surface along with the noisy data (left) and smoothed data (right) for Example 5.2.

Table 2

$\ell^2$-relative errors in the recovered gradients for Example 5.2 for several sample sizes $n$ and noise levels $\delta$. Numbers in parentheses represent the relative errors computed from the noisy data

δ \ n 50 × 50 500 × 500 750 × 750
0.2 0.138 (1.671) 0.056 (16.64) 0.047 (24.97)
0.02 0.051 (0.167) 0.023 (1.664) 0.021 (2.497)
0.002 0.013 (0.016) 0.011 (0.166) 0.009 (0.249)

Example 5.3

The true data are given by the function

$$ f(x,y,z) = \sin\!\left( (x - x^2)(y - y^2) \right) (z - z^2), $$

where $(x,y,z) \in [0,1]^3$. Slice contour plots of the noisy and the corresponding smoothed data are shown in Figure 3. Relative errors in the estimated gradients are summarized in Table 3.

Figure 3: Sliced contours of noisy data (left) and smoothed data (right) for Example 5.3.

Table 3

$\ell^2$-relative errors in the recovered gradients for Example 5.3 for several sample sizes $n$ and noise levels $\delta$. Numbers in parentheses represent the relative errors computed from the noisy data

δ \ n 50 × 50 × 50 100 × 100 × 100 150 × 150 × 150
0.2 0.061 (4.705) 0.038 (9.174) 0.038 (13.64)
0.02 0.030 (0.470) 0.021 (0.917) 0.020 (1.364)
0.002 0.015 (0.047) 0.011 (0.091) 0.009 (0.136)

Remark 5.2

Contrary to the naive method, the error results in Tables 1–3 show that the proposed method is stable with respect to the sample size $n$; that is, as the grid is refined, the error gets smaller.

Now we present an application to edge detection (edge emphasis) in digital images.

Example 5.4

The image gradient is an essential ingredient in common image processing and computer vision applications, particularly in edge detection algorithms. Since the gradient is sensitive to noise, noisy images must be denoised or smoothed prior to gradient estimation. In common edge detection algorithms, such as the Canny method, a Gaussian filter is typically used for this purpose. In the following demonstration, we employ our method to smooth out the noisy images, and then the image gradient is approximated using the central difference formula. Consequently, the magnitude of the image gradient at pixel $(x,y)$ is approximated by

$$ | \nabla f(x,y) | \approx \frac{1}{2} \sqrt{ [f(x+1,y) - f(x-1,y)]^2 + [f(x,y+1) - f(x,y-1)]^2 }, $$

where f is the underlying image intensity function.
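A small sketch of ours for this magnitude formula on an intensity array; the synthetic step image and the zero values at border pixels are our illustrative choices:

```python
import numpy as np

def gradient_magnitude(img):
    """Central-difference gradient magnitude of a 2-D intensity array,
    i.e. (1/2)*sqrt(d1^2 + d2^2) with unit pixel spacing; border pixels,
    where the stencil does not fit, are left at zero."""
    g = np.zeros(img.shape, dtype=float)
    d1 = img[2:, 1:-1].astype(float) - img[:-2, 1:-1]   # difference along axis 0
    d2 = img[1:-1, 2:].astype(float) - img[1:-1, :-2]   # difference along axis 1
    g[1:-1, 1:-1] = 0.5 * np.sqrt(d1 ** 2 + d2 ** 2)
    return g

# a synthetic image: dark left half, bright right half -> one vertical edge
img = np.zeros((8, 8))
img[:, 4:] = 255.0
edges = gradient_magnitude(img)
```

On this step image, the magnitude is nonzero only on the two pixel columns straddling the intensity jump, which is exactly the localization property an edge detector relies on.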

In this demonstration, we consider three noisy images that are obtained by adding Poisson noise to the noise-free images. The original, noisy, and smoothed images by the diffusion method are shown in Figure 4. To assess the quality of the smoothed (denoised) images, we computed the peak signal-to-noise ratio (PSNR) of the noisy and the smoothed images, given by

$$ \mathrm{PSNR} = 10 \log_{10} \frac{255^2}{\mathrm{MSE}} \ (\mathrm{dB}), \qquad \mathrm{MSE} = \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \left[ f(i,j) - \tilde{f}(i,j) \right]^2, $$

where $f \in \mathbb{R}^{M \times N}$ and $\tilde{f} \in \mathbb{R}^{M \times N}$ are the original and denoised images, respectively. The PSNR results are summarized in Table 4.
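The PSNR/MSE pair above translates directly into a few lines (a sketch of ours for 8-bit intensities, peak value 255; identical images would give an infinite PSNR and are not handled):

```python
import numpy as np

def psnr(original, denoised):
    """Peak signal-to-noise ratio in dB for 8-bit images (peak value 255)."""
    diff = original.astype(float) - denoised.astype(float)
    mse = np.mean(diff ** 2)                  # mean squared error
    return 10.0 * np.log10(255.0 ** 2 / mse)
```

For instance, a denoised image differing from the original by one intensity level at every pixel gives $\mathrm{MSE} = 1$, hence $\mathrm{PSNR} = 20 \log_{10} 255 \approx 48.1$ dB.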

The gradient magnitudes of the noisy and smoothed images are plotted in Figure 5. For the sake of comparison, we also plot the edges of the noisy images obtained by the popular Canny method [15], which consists of five main steps: Gaussian filtering, measuring intensity gradient, edge thinning, double thresholding to determine strong and weak edges, and edge tracking via blob analysis to extract weak edges.

From Table 4, we see that the PSNR values of the denoised images are significantly higher than those of the noisy images, which indicates that the images denoised by the proposed smoothing method are of better quality than the noisy images. Moreover, as can be observed from Figures 4 and 5, the proposed technique can significantly enhance the quality of the recovered edges, and the smoothing stage appears to have no bias toward any particular direction; this is a major advantage, since edge orientation is mostly unknown in real-life applications. It is evident from this experiment that diffusion smoothing gives plausible results and is well suited for this type of application.

Table 4

PSNR results for noisy and denoised images in Example 5.4

Image Noisy image Denoised image
House 15.129 21.530
Brain 18.092 25.216
Fish 15.568 21.164
Figure 4: Original images (left), noisy images (middle), and smoothed images (right).

Figure 5: Detected edges: from noisy images (left), by the Canny method (middle), and from smoothed images (right).

From the presented examples, we see that the proposed diffusion technique is very well suited for data smoothing and numerical differentiation. It can excel in various scientific applications owing to its speed (in particular, when utilizing the fast Fourier transform) and its supporting theory, and it poses no dimension limitations: the method extends easily to problems in any dimension, in contrast to most existing methods, which are limited to one-dimensional domains.

6 Conclusion

The smoothing property of the diffusion equation is utilized to estimate the gradient of functions specified by noisy data. We proved stability and consistency results and carried out the convergence analysis for the proposed method. Several examples in one and higher dimensional domains demonstrate the feasibility and robustness of the proposed approach.

The main advantage of the method lies in multivariable problems, where some of the other smoothing approaches do not easily generalize. Moreover, the method shows decent results under mild assumptions on the true data, even in the presence of a large amount of noise, conditions that can be limiting for other methods. We believe the method can excel in various scientific applications due to its speed, relative ease of implementation, and the supporting theory behind it; we look forward to extending this approach and the theoretical results to higher-derivative estimation. Furthermore, appropriate a priori and a posteriori strategies for choosing the parameter $t$ are under investigation.

Acknowledgements

The authors would like to thank the Scientific Research and Graduate Studies at Yarmouk University for supporting this paper, and they would like to express their sincere gratitude to the anonymous referees for their helpful comments that helped us to improve the quality of the manuscript.

Conflict of interest: The authors state no conflict of interest.

Appendix

If the true boundary data $h = f|_{\partial\Omega}$ are known approximately through a function $h^{\varepsilon}$ satisfying

$$ \| h^{\varepsilon} - h \|_{H^{1/2}(\partial\Omega)} \le \varepsilon, $$

then the smoothed data can be defined as follows:

$$ y^{\delta,\varepsilon}(x) = W^{\varepsilon}(x) + \sum_{n=1}^{\infty} (f^{\delta} - W^{\varepsilon}, \varphi_n)_{L^2(\Omega)} \exp(-\lambda_n t)\, \varphi_n(x), \quad x \in \Omega, $$

where the function $W^{\varepsilon}$ satisfies (in the weak sense) the boundary-value problem:

$$ -\Delta u + u = 0 \ \text{in}\ \Omega, \qquad u = h^{\varepsilon} \ \text{on}\ \partial\Omega. $$

To derive stability and convergence results, we set

$$ y(x) = W(x) + \sum_{n=1}^{\infty} (f - W, \varphi_n)_{L^2(\Omega)} \exp(-\lambda_n t)\, \varphi_n(x), \quad x \in \Omega, $$

where $W$ solves the boundary-value problem

$$ -\Delta u + u = 0 \ \text{in}\ \Omega, \qquad u = h \ \text{on}\ \partial\Omega. $$

From the definition of H 1 / 2 ( Ω ) (e.g., [12]), we see that

$$ \| W^{\varepsilon} - W \|_{H^1(\Omega)} = \| h^{\varepsilon} - h \|_{H^{1/2}(\partial\Omega)} \le \varepsilon. $$

Using the aforementioned inequality and similar arguments as in the proofs of Lemmas 4.1 and 4.2, it is easy to conclude that

$$ \| \nabla y^{\delta,\varepsilon} - \nabla y \|_{L^2(\Omega)} \le \frac{\delta + \varepsilon}{2\sqrt{t}} + \varepsilon, \qquad \| \nabla y - \nabla f \|_{L^2(\Omega)} \le C_{\beta}\, t^{\eta}, $$

where $C_{\beta}$ depends on $f$ and $W$, provided $f - W \in X^{\beta}$. The last inequalities together with the triangle inequality imply the main error result:

$$ \| \nabla y^{\delta,\varepsilon} - \nabla f \|_{L^2(\Omega)} \le K \left( \frac{\delta + \varepsilon}{2\sqrt{t}} + \varepsilon + t^{\eta} \right). $$

Moreover, if we let $\bar{\varepsilon} = \max\{\delta, \varepsilon\}$ and choose $t \sim \bar{\varepsilon}^{\gamma}$ for some $\gamma \in (0,2)$, then we obtain the convergence result

$$ \| \nabla y^{\delta,\varepsilon} - \nabla f \|_{L^2(\Omega)} \to 0 $$

as $\bar{\varepsilon} \to 0$. The convergence rate is optimal when $\gamma = 2/(2\eta+1)$, and in this case, we have

$$ \| \nabla y^{\delta,\varepsilon} - \nabla f \|_{L^2(\Omega)} = O(\bar{\varepsilon}^{2\eta/(2\eta+1)}). $$

We obtain the fastest convergence when $f - W \in X^{\beta}$ for some $\beta \ge 3/2$, in which case $\eta = 1$ and

$$ \| \nabla y^{\delta,\varepsilon} - \nabla f \|_{L^2(\Omega)} = O(\bar{\varepsilon}^{2/3}), $$

provided that $t = C \bar{\varepsilon}^{2/3}$ for some positive constant $C$.

References

[1] H. W. Engl, M. Hanke, and A. Neubauer, Regularization of Inverse Problems, Kluwer Academic, Dordrecht, 1996.

[2] L. Yang, A perturbation method for numerical differentiation, Appl. Math. Comput. 199 (2008), no. 1, 368–374, https://doi.org/10.1016/j.amc.2007.09.066.

[3] D. A. Murio, C. E. Mejia, and S. Zhan, Discrete mollification and automatic numerical differentiation, Comput. Math. Appl. 35 (1998), no. 5, 1–16, https://doi.org/10.1016/S0898-1221(98)00001-7.

[4] M. Hanke and O. Scherzer, Inverse problems light: numerical differentiation, Amer. Math. Monthly 108 (2001), no. 6, 512–521, https://doi.org/10.2307/2695705.

[5] C. W. Groetsch, Lanczos' generalized derivative, Amer. Math. Monthly 105 (1998), no. 4, 320–326, https://doi.org/10.2307/2589707.

[6] D. Y. Liu, O. Gibaru, and W. Perruquetti, Differentiation by integration with Jacobi polynomials, J. Comput. Appl. Math. 235 (2011), no. 9, 3015–3032, https://doi.org/10.1016/j.cam.2010.12.023.

[7] I. Knowles and R. J. Renka, Methods for numerical differentiation of noisy data, Electron. J. Differ. Equ. Conf. 21 (2014), 235–246.

[8] M. F. Al-Jamal, A. K. Alomari, and M. S. Gockenbach, Smoothing via elliptic operators with application to edge detection, Inverse Probl. Sci. Eng. 26 (2018), no. 5, 657–676, https://doi.org/10.1080/17415977.2017.1336552.

[9] Z. Meng, Z. Zhao, D. Mei, and Y. Zhou, Numerical differentiation for two-dimensional functions by a Fourier extension method, Inverse Probl. Sci. Eng. 28 (2020), no. 1, 126–143, https://doi.org/10.1080/17415977.2019.1661410.

[10] A. A. Yahya, J. Tan, B. Su, K. Liu, and A. N. Hadi, Image edge detection method based on anisotropic diffusion and total variation models, J. Eng. 2019 (2019), no. 2, 455–460, https://doi.org/10.1049/joe.2018.5345.

[11] C. Brito-Loeza, R. Legarda-Saenz, and A. Martin-Gonzalez, A fast algorithm for a total variation based phase demodulation model, Numer. Methods Partial Differential Equations 36 (2020), no. 3, 617–636, https://doi.org/10.1002/num.22444.

[12] K. Atkinson and W. Han, Theoretical Numerical Analysis: A Functional Analysis Framework, Springer-Verlag, New York, 2009.

[13] D. Braess, Finite Elements: Theory, Fast Solvers, and Applications in Solid Mechanics, Cambridge University Press, Cambridge, 2007.

[14] R. C. McOwen, Partial Differential Equations: Methods and Applications, Prentice Hall, New Jersey, 2003.

[15] J. Canny, A computational approach to edge detection, IEEE Trans. Pattern Anal. Machine Intell. 8 (1986), no. 6, 679–698, https://doi.org/10.1109/TPAMI.1986.4767851.

Received: 2021-11-11
Revised: 2022-04-07
Accepted: 2022-04-09
Published Online: 2022-06-24

© 2022 Mohammad F. Al-Jamal et al., published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.

  76. Existence of positive solutions of discrete third-order three-point BVP with sign-changing Green's function
  77. Some new fixed point theorems for nonexpansive-type mappings in geodesic spaces
  78. Generalized 4-connectivity of hierarchical star networks
  79. Spectra and reticulation of semihoops
  80. Stein-Weiss inequality for local mixed radial-angular Morrey spaces
  81. Eigenvalues of transition weight matrix for a family of weighted networks
  82. A modified Tikhonov regularization for unknown source in space fractional diffusion equation
  83. Modular forms of half-integral weight on Γ0(4) with few nonvanishing coefficients modulo
  84. Some estimates for commutators of bilinear pseudo-differential operators
  85. Extension of isometries in real Hilbert spaces
  86. Existence of positive periodic solutions for first-order nonlinear differential equations with multiple time-varying delays
  87. B-Fredholm elements in primitive C*-algebras
  88. Unique solvability for an inverse problem of a nonlinear parabolic PDE with nonlocal integral overdetermination condition
  89. An algebraic semigroup method for discovering maximal frequent itemsets
  90. Class-preserving Coleman automorphisms of some classes of finite groups
  91. Exponential stability of traveling waves for a nonlocal dispersal SIR model with delay
  92. Existence and multiplicity of solutions for second-order Dirichlet problems with nonlinear impulses
  93. The transitivity of primary conjugacy in regular ω-semigroups
  94. Stability estimation of some Markov controlled processes
  95. On nonnil-coherent modules and nonnil-Noetherian modules
  96. N-Tuples of weighted noncommutative Orlicz space and some geometrical properties
  97. The dimension-free estimate for the truncated maximal operator
  98. A human error risk priority number calculation methodology using fuzzy and TOPSIS grey
  99. Compact mappings and s-mappings at subsets
  100. The structural properties of the Gompertz-two-parameter-Lindley distribution and associated inference
  101. A monotone iteration for a nonlinear Euler-Bernoulli beam equation with indefinite weight and Neumann boundary conditions
  102. Delta waves of the isentropic relativistic Euler system coupled with an advection equation for Chaplygin gas
  103. Multiplicity and minimality of periodic solutions to fourth-order super-quadratic difference systems
  104. On the reciprocal sum of the fourth power of Fibonacci numbers
  105. Averaging principle for two-time-scale stochastic differential equations with correlated noise
  106. Phragmén-Lindelöf alternative results and structural stability for Brinkman fluid in porous media in a semi-infinite cylinder
  107. Study on r-truncated degenerate Stirling numbers of the second kind
  108. On 7-valent symmetric graphs of order 2pq and 11-valent symmetric graphs of order 4pq
  109. Some new characterizations of finite p-nilpotent groups
  110. A Billingsley type theorem for Bowen topological entropy of nonautonomous dynamical systems
  111. F4 and PSp (8, ℂ)-Higgs pairs understood as fixed points of the moduli space of E6-Higgs bundles over a compact Riemann surface
  112. On modules related to McCoy modules
  113. On generalized extragradient implicit method for systems of variational inequalities with constraints of variational inclusion and fixed point problems
  114. Solvability for a nonlocal dispersal model governed by time and space integrals
  115. Finite groups whose maximal subgroups of even order are MSN-groups
  116. Symmetric results of a Hénon-type elliptic system with coupled linear part
  117. On the connection between Sp-almost periodic functions defined on time scales and ℝ
  118. On a class of Harada rings
  119. On regular subgroup functors of finite groups
  120. Fast iterative solutions of Riccati and Lyapunov equations
  121. Weak measure expansivity of C2 dynamics
  122. Admissible congruences on type B semigroups
  123. Generalized fractional Hermite-Hadamard type inclusions for co-ordinated convex interval-valued functions
  124. Inverse eigenvalue problems for rank one perturbations of the Sturm-Liouville operator
  125. Data transmission mechanism of vehicle networking based on fuzzy comprehensive evaluation
  126. Dual uniformities in function spaces over uniform continuity
  127. Review Article
  128. On Hahn-Banach theorem and some of its applications
  129. Rapid Communication
  130. Discussion of foundation of mathematics and quantum theory
  131. Special Issue on Boundary Value Problems and their Applications on Biosciences and Engineering (Part II)
  132. A study of minimax shrinkage estimators dominating the James-Stein estimator under the balanced loss function
  133. Representations by degenerate Daehee polynomials
  134. Multilevel MC method for weak approximation of stochastic differential equation with the exact coupling scheme
  135. Multiple periodic solutions for discrete boundary value problem involving the mean curvature operator
  136. Special Issue on Evolution Equations, Theory and Applications (Part II)
  137. Coupled measure of noncompactness and functional integral equations
  138. Existence results for neutral evolution equations with nonlocal conditions and delay via fractional operator
  139. Global weak solution of 3D-NSE with exponential damping
  140. Special Issue on Fractional Problems with Variable-Order or Variable Exponents (Part I)
  141. Ground state solutions of nonlinear Schrödinger equations involving the fractional p-Laplacian and potential wells
  142. A class of p1(x, ⋅) & p2(x, ⋅)-fractional Kirchhoff-type problem with variable s(x, ⋅)-order and without the Ambrosetti-Rabinowitz condition in ℝN
  143. Jensen-type inequalities for m-convex functions
  144. Special Issue on Problems, Methods and Applications of Nonlinear Analysis (Part III)
  145. The influence of the noise on the exact solutions of a Kuramoto-Sivashinsky equation
  146. Basic inequalities for statistical submanifolds in Golden-like statistical manifolds
  147. Global existence and blow up of the solution for nonlinear Klein-Gordon equation with variable coefficient nonlinear source term
  148. Hopf bifurcation and Turing instability in a diffusive predator-prey model with hunting cooperation
  149. Efficient fixed-point iteration for generalized nonexpansive mappings and its stability in Banach spaces
Downloaded on 7.12.2025 from https://www.degruyterbrill.com/document/doi/10.1515/math-2022-0036/html