
Iterative rational least squares fitting

Umberto Amato and Biancamaria Della Vecchia

Abstract

A progressive iterative approximation technique for rational least squares fitting curves is developed. The format is interesting in CAGD (Computer Aided Geometric Design) and improves on recent algorithms. An improved chord method for root finding, based on rational operators, is also presented.

1 Introduction

In the last decades, many papers have been devoted to Shepard-type operators, which are rational, positive operators of interest in classical approximation theory and in scattered data interpolation problems (see, e.g., [1, 3, 8, 10, 11, 12, 18]). Recently, in [4], Shepard-type operators were used to construct Shepard-type curves useful in CAGD. Such curves overcome some of the original Shepard operator's drawbacks and have some advantages over the Bézier case. A weighted progressive iterative approximation (WPIA in short) technique for Shepard-type curves was also developed in [4]. The key idea was to iteratively adjust the control points of the active curve so that it deforms towards the target shape represented by the data points. So, by adjusting the control points and choosing a proper weight, the WPIA process generates a sequence of Shepard-type curves converging to the global Shepard-type curve interpolating the original control points. Similar techniques for Bézier and B-spline curves were examined, for example, in [2, 9, 14, 15, 17]. However, in the WPIA method, the number of control points equals the number of data points, which is a drawback for a very large number of data points.

On the other hand, the least squares fitting (LSF in short) is one of the most commonly used mathematical tools in practice, particularly useful to fit a large number of data points by a polynomial or an approximation tool of lower degree. However, the corresponding system of linear equations can be ill-conditioned or even singular (see, e.g., [13, 16]).

The aim of the present paper is to overcome the above drawback of the WPIA technique by a progressive iterative approximation process for Shepard-type least squares fitting (SLSPIA in short) (Section 2). In each iteration, every control point is adjusted by a weighted sum of difference vectors between the data points and their corresponding points on the fitting curve. By adjusting the control points iteratively, the SLSPIA technique generates a sequence of Shepard-type curves, and the limit curve is the Shepard-type least squares curve fitting the given data points. The convergence of this procedure, an explicit expression for the error and the corresponding decay rate are established in Theorem 2.2. An optimal value of the weight parameter, giving the fastest convergence rate, is presented in Theorem 2.4. Moreover, a simple method to compute a practical value of the weight is discussed, giving a convergence rate comparable to the optimal one. The SLSPIA technique is efficient since it avoids the computational cost of solving the large system of linear equations corresponding to the Shepard-type LSF method. We remark that this system is well-conditioned (Lemma 4.3). The above technique is interesting in CAGD because, by varying a shape parameter (the iteration level), the designer can draw several curves modeling the given data. Thanks to the above properties, our technique improves the algorithms in [4, 6, 9].

Then, in Section 3, starting from a rational operator of Shepard-type, we deduce a numerical procedure for root finding that improves the chord method. Section 4 contains the proofs of the main results, which are based on results on the eigenstructure of Shepard-type operators and on direct estimates for our operators. Finally, numerical experiments confirming our theoretical results are shown in Section 5.

2 The progressive iterative approximation for Shepard-type least squares fitting

In this section, we introduce the SLSPIA technique. First we recall the definition of parametric Shepard-type curves [4]. Let $A_n(t) = [A_{n,0}(t), A_{n,1}(t), \ldots, A_{n,n}(t)]^T$, where

(2.1) $A_{n,i}(t) = \dfrac{\dfrac{1}{(t-t_i)^s + \lambda}}{\displaystyle\sum_{k=0}^{n} \dfrac{1}{(t-t_k)^s + \lambda}}$

for $0 \le i \le n$, $n \in \mathbb{N}$, $t \in [0,1]$, $t_i = \frac{i}{n}$, $i = 0, 1, \ldots, n$, $0 < n^s \lambda \le C$ with $C$ any fixed positive constant and $s > 2$ even.

Given the control vector $P = [P_0, P_1, \ldots, P_n]^T$, $P_i \in \mathbb{R}^d$, $i = 0, 1, \ldots, n$, $d \ge 2$, the parametric Shepard-type curve $S_n[P,t]$ is defined by

$S_n[P,t] = \sum_{i=0}^{n} A_{n,i}(t)\, P_i = A_n(t)\, P.$

The properties of such curves interesting in CAGD were studied in [4]. In particular, we recall that $S_n[P]$ is a rational curve, preserving points, lying in the convex hull of the control polygon defined by $P$ and satisfying the pseudo-local control property, i.e., each function $A_{n,j}(t)$, $0 \le j \le n$, attains its maximum value, close to 1, at $t = t_j$; in other words, the point $P_j$ strongly influences the shape of the curve in a neighborhood of $t = t_j$.
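To make the construction concrete, the following is a minimal numerical sketch of the basis (2.1) and of the curve evaluation; the function names (`shepard_basis`, `shepard_curve`) and the default parameter values are ours, not from [4].

```python
import numpy as np

def shepard_basis(t, n, s=4, lam=1e-6):
    """Shepard-type blending functions A_{n,i}(t) of (2.1) at a scalar t.

    Returns an array of length n+1 with nonnegative entries summing to 1;
    s > 2 even and lam > 0 (with 0 < n^s * lam <= C), as in Section 2.
    """
    ti = np.arange(n + 1) / n            # equispaced nodes t_i = i/n
    w = 1.0 / ((t - ti) ** s + lam)      # s even, so (t - t_i)^s >= 0
    return w / w.sum()

def shepard_curve(t, P, s=4, lam=1e-6):
    """Evaluate S_n[P, t] = sum_i A_{n,i}(t) P_i for control points P of shape (n+1, d)."""
    n = P.shape[0] - 1
    return shepard_basis(t, n, s, lam) @ P
```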

Generalizations of such curves were discussed in [5, 6, 7]. An efficient WPIA technique based on $S_n$ curves for data fitting was also introduced and studied in [4]. However, it requires that the number of control points be equal to that of the data points.

On the other hand, especially for point sets of large size, LSF is one of the most commonly used mathematical tools. Therefore, a natural answer to the above restriction of the WPIA method for a large number of data is the construction of a weighted progressive iterative approximation technique for Shepard-type least squares fitting (SLSPIA) curves. Similarly to the classical WPIA, the SLSPIA process starts with an initial Shepard-type curve and constructs a sequence of fitting curves by adjusting the control points iteratively. The limit curve is the Shepard-type least squares fitting curve to the data set. More precisely, assume that $\{Q_i\}_{i=0}^m$ is an ordered point set to be fitted. At the beginning of the iteration, we select $\{P_i\}_{i=0}^n$, $n \le m$, from $\{Q_i\}_{i=0}^m$ as the control point set and construct a piece of the blending curve $\mathcal{P}^{(0)}(t)$, i.e.,

$\mathcal{P}^{(0)}(t) = \sum_{i=0}^{n} A_{n,i}(t)\, P_i^{(0)}, \quad t \in [0,1],$

with $P_i^{(0)} = P_i$, $i = 0, 1, \ldots, n$. The choice of the initial points will be discussed later. Letting

$\delta_j^0 = Q_j - \mathcal{P}^{(0)}(\bar{t}_j), \quad j = 0, 1, \ldots, m,$

we construct the first adjusting vector for the $i$-th control point as

$\Delta_i^0 = \mu \sum_{j=0}^{m} A_{n,i}(\bar{t}_j)\, \delta_j^0,$

with $\mu$ a constant satisfying the condition

(2.2) $0 < \mu < \dfrac{2}{\lambda_n}.$

Here $\lambda_n$ is the largest eigenvalue of $A^TA$, $A^T$ being the transpose of $A$, with $A$ the collocation matrix of the blending basis $A_{n,i}$ on $\{0 = \bar{t}_0, \bar{t}_1, \ldots, \bar{t}_m = 1\}$ defined by (cf. [4])

(2.3) $A = \begin{pmatrix} A_{n,0}(\bar{t}_0) & A_{n,1}(\bar{t}_0) & \cdots & A_{n,n}(\bar{t}_0) \\ A_{n,0}(\bar{t}_1) & A_{n,1}(\bar{t}_1) & \cdots & A_{n,n}(\bar{t}_1) \\ \vdots & \vdots & & \vdots \\ A_{n,0}(\bar{t}_m) & A_{n,1}(\bar{t}_m) & \cdots & A_{n,n}(\bar{t}_m) \end{pmatrix},$

with $\bar{t}_i = i/m$, $i = 0, \ldots, m$ (cf. Lemma 4.2). Then we can get the new control points by

$P_i^{(1)} = P_i^{(0)} + \Delta_i^0, \quad i = 0, 1, \ldots, n,$

and the new curve

$\mathcal{P}^{(1)}(t) = \sum_{j=0}^{n} A_{n,j}(t)\, P_j^{(1)}.$

Similarly, if $\mathcal{P}^{(k)}(t)$ is the curve after the $k$-th iteration, we put

(2.4) $\delta_j^k = Q_j - \mathcal{P}^{(k)}(\bar{t}_j), \quad j = 0, 1, \ldots, m, \qquad \Delta_i^k = \mu \sum_{j=0}^{m} A_{n,i}(\bar{t}_j)\, \delta_j^k, \qquad P_i^{(k+1)} = P_i^{(k)} + \Delta_i^k, \quad i = 0, 1, \ldots, n,$

and the $(k+1)$-th curve is defined as

(2.5) $\mathcal{P}^{(k+1)}(t) = \sum_{j=0}^{n} A_{n,j}(t)\, P_j^{(k+1)}.$

In this way, we obtain a curve sequence $\{\mathcal{P}^{(k)}(t)\}_{k=0}^{\infty}$.
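In matrix form, one SLSPIA step reads $P^{(k+1)} = P^{(k)} + \mu A^T(Q - AP^{(k)})$ (see the proof of Theorem 2.2). The following sketch implements the iteration (2.4)-(2.5) on top of the basis above; it is an illustration under our naming conventions, not the authors' code, and the optional stopping test mirrors the criterion used later in Section 5.

```python
def collocation_matrix(m, n, s=4, lam=1e-6):
    """(m+1) x (n+1) collocation matrix A of (2.3) on tbar_j = j/m."""
    tbar = np.arange(m + 1) / m
    return np.vstack([shepard_basis(t, n, s, lam) for t in tbar])

def slspia(Q, P0, mu, A, max_iter=100, tol=None):
    """SLSPIA iteration (2.4): adjust all control points by P <- P + mu * A^T (Q - A P).

    Q: data points, shape (m+1, d); P0: initial control points, shape (n+1, d).
    Optionally stops when the squared fitting error changes by less than tol.
    Returns the control points and the history of fitting errors E_k (unscaled).
    """
    P, errs = P0.copy(), []
    for _ in range(max_iter):
        resid = Q - A @ P                       # difference vectors delta_j^k
        errs.append(float(np.sum(resid ** 2)))  # fitting error E_k
        if tol is not None and len(errs) > 1 and abs(errs[-1] - errs[-2]) < tol:
            break
        P = P + mu * (A.T @ resid)              # adjusting vectors Delta_i^k
    return P, errs
```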

Remark 2.1.

From (2.4), we can see that the adjusting vector $\Delta_i^k$ is a weighted sum of the difference vectors $\delta_j^k$ between the data points and their corresponding points on $\mathcal{P}^{(k)}$.

Now we prove that the limit of $\mathcal{P}^{(k)}(t)$, as $k$ tends to $\infty$, is exactly the Shepard-type least squares fit to the data points $\{Q_i\}_{i=0}^m$. Indeed, denoting by $B^k$ the $k$-th power of a matrix $B$, we have the following theorem.

Theorem 2.2.

The SLSPIA method defined by (2.4) and (2.5) is convergent, and the limit curve is the Shepard-type LSF curve of the initial data $\{Q_i\}_{i=0}^m$. Moreover, we can give an explicit expression for the error between $P^{(k+1)}$ and $(A^TA)^{-1}A^TQ$, i.e.,

(2.6) $P^{(k+1)} - (A^TA)^{-1}A^TQ = D^{k+1}\bigl[P^{(0)} - (A^TA)^{-1}A^TQ\bigr],$

with $D = I - \mu A^TA$.

Remark 2.3.

From (2.6), we deduce that the convergence rate of $P^{(k+1)}$ to $(A^TA)^{-1}A^TQ$ (the solution of the Shepard-type LSF problem for the given data) is governed by the decay rate of $D^{k+1}$ (cf. (4.3)). Thanks to the LSF-type character of this method, the case of large sets of data points can be treated efficiently, improving [4]. We remark that the system of linear equations for the Shepard-type LSF method is well-conditioned (see Lemma 4.3), unlike other LSF processes whose corresponding systems are ill-conditioned or even singular [13, 16]. Similar progressive iterative approximation processes for least squares fitting by B-spline curves were studied in [13, 19].

The SLSPIA process makes it possible to construct a sequence of control polygons converging to the control polygon of the Shepard-type LSF curve. Therefore, the parameter $k$ can be used as a shape parameter to model different shapes, obtaining, as extreme cases, the Shepard-type curve at the starting points $P_i^{(0)}$, $i = 0, 1, \ldots, n$, and the Shepard-type LSF curve fitting the original data points.

For brevity, only the SLSPIA method for Shepard-type curves is handled; the results for the SLSPIA process in the case of Shepard-type tensor product surfaces can be deduced similarly [4].

In equation (2.4), a weight is required to generate the adjusting vector. The following theorem shows how to select an appropriate weight to improve the convergence rate of the SLSPIA method.

Theorem 2.4.

The SLSPIA method (2.4), (2.5) has the fastest convergence rate when

(2.7) $\mu = \dfrac{2}{\lambda_0 + \lambda_n},$

with $\lambda_0$ the smallest eigenvalue of $A^TA$, and, in such a case,

$\rho(D) = \dfrac{\lambda_n - \lambda_0}{\lambda_n + \lambda_0},$

where $\rho(D)$ is the spectral radius of $D$.

Although $\mu = 2/(\lambda_0 + \lambda_n)$ provides the fastest convergence rate in theory (Theorem 2.4), computing the largest and smallest eigenvalues requires a large amount of computation. To avoid the computation of the eigenvalues, we follow [13] and propose a simple method to determine the weight $\mu$. Indeed, we set

(2.8) $\bar{\mu} = \dfrac{2}{C},$

with $C := \max_i c_i$, $i = 0, 1, \ldots, n$, where $c_i$ is the sum of the elements of the $i$-th row of $A^TA$ (see Section 5 for a discussion of $\bar{\mu}$).
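In code, both weights are one-liners; a sketch (the eigenvalue-based variant relies on the symmetry of $A^TA$, Lemma 4.1; function names are ours):

```python
def practical_weight(A):
    """Practical weight mubar = 2/C of (2.8), with C the largest row sum of A^T A."""
    return 2.0 / (A.T @ A).sum(axis=1).max()

def optimal_weight(A):
    """Optimal weight mu = 2/(lambda_0 + lambda_n) of (2.7); needs the extreme eigenvalues."""
    eig = np.linalg.eigvalsh(A.T @ A)   # ascending; A^T A is symmetric positive definite
    return 2.0 / (eig[0] + eig[-1])
```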

3 An improved chord method

In this section, we discuss a numerical method for solving a nonlinear equation by a Shepard-type operator.

Let $f \in C^2([a,b])$, with $a, b \in \mathbb{R}$, be such that

(3.1) $f''(x) \neq 0 \quad \text{for all } x \in [a,b],$

and

(3.2) $f(a)\,f(b) < 0.$

From these assumptions, we deduce that $f$ has a unique root $\bar{x} \in (a,b)$, i.e., $f(\bar{x}) = 0$. Under assumptions (3.1) and (3.2), only one of the following two cases may occur:

  (a) $f(a)\,f''(a) > 0$,

  (b) $f(b)\,f''(b) > 0$.

Then consider the Shepard-type operator defined by

$T(f;x) = \dfrac{\dfrac{f(a)}{(x-a)^s} + \dfrac{f(b)}{(b-x)^s}}{\dfrac{1}{(x-a)^s} + \dfrac{1}{(b-x)^s}}, \quad x \in [a,b],$

with $s$ odd. Obviously,

$T(f;a) = f(a), \qquad T(f;b) = f(b), \qquad |T(f;x)| \le \max\{|f(a)|, |f(b)|\} \quad \text{for all } x \in [a,b].$

In other words, $T(f)$ can be considered as a rational approximation of $f$. Hence, writing $a \sim b$ if and only if $|a/b|^{\pm 1} \le \mathcal{C}$ for some positive constant $\mathcal{C}$, it is reasonable that $T(f) \sim f$ in a suitable interval and that we can use the root of $T(f)$ (easily computable) to approximate the root of $f$.

Then, in case (a), consider the iterative procedure

(3.3) $x_{n+1} = g(x_n),$

with

(3.4) $g(x) = \dfrac{x + a\left(-\dfrac{f(x)}{f(a)}\right)^{1/s}}{1 + \left(-\dfrac{f(x)}{f(a)}\right)^{1/s}},$

with $x_0 = b$ and $s$ odd.

In case (b), consider the iterative procedure

(3.5) $x_{n+1} = g(x_n),$

with

(3.6) $g(x) = \dfrac{x + b\left(-\dfrac{f(x)}{f(b)}\right)^{1/s}}{1 + \left(-\dfrac{f(x)}{f(b)}\right)^{1/s}},$

with $x_0 = a$ and $s$ odd.

Geometrically, $x_{n+1}$ is the abscissa of the intersection of the graph of $T(f)$ with the $x$-axis.
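For the reader's convenience, the closed form of this intersection is immediate: building $T(f)$ on the interval with endpoints $a$ and the current iterate $x_n$ (case (a)), we get

$T(f;x) = 0 \iff f(a)\,(x_n - x)^s + f(x_n)\,(x - a)^s = 0 \iff \left(\dfrac{x - a}{x_n - x}\right)^{s} = -\dfrac{f(a)}{f(x_n)},$

so, setting $r = \bigl(-f(x_n)/f(a)\bigr)^{1/s}$, the intersection abscissa is $x = \dfrac{x_n + a\,r}{1 + r} = g(x_n)$, which is exactly (3.4); case (b) is analogous with $b$ in place of $a$.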

We remark that if $s = 1$, the iteration procedure (3.3)-(3.6) gives back the classical chord method. It is well known that the chord method converges linearly to $\bar{x}$ under the above assumptions, while if $f \notin C^1([a,b])$ or $f'(\bar{x}) = 0$, the convergence is not guaranteed or the order is no longer linear.
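A compact sketch of the procedure (3.3)-(3.6) follows; the function name and keyword arguments are ours, and the iteration assumes that $-f(x_n)/f(c)$ stays positive (as it does while the iterates and the fixed endpoint bracket the root), so that the real $1/s$-th power is well defined.

```python
def improved_chord(f, a, b, s=3, fixed='a', x0=None, tol=1e-14, max_iter=200):
    """Iteration (3.3)-(3.6): x_{n+1} = (x_n + c*r)/(1 + r), r = (-f(x_n)/f(c))^{1/s}.

    fixed='a' keeps the endpoint a and starts from x0 = b (case (a), eqs. (3.3)-(3.4));
    fixed='b' keeps the endpoint b and starts from x0 = a (case (b), eqs. (3.5)-(3.6)).
    s = 1 gives back the classical chord method.
    """
    c = a if fixed == 'a' else b
    x = x0 if x0 is not None else (b if fixed == 'a' else a)
    fc = f(c)
    for _ in range(max_iter):
        r = (-f(x) / fc) ** (1.0 / s)   # positive while f(x) and f(c) have opposite signs
        x_new = (x + c * r) / (1.0 + r)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```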

Now we prove that the procedure (3.3)–(3.6) overcomes such drawbacks.

Theorem 3.1.

Let $f \in C([a,b])$ have a unique root $\bar{x} \in (a,b)$, with $f(a)\,f(b) < 0$. Assume that $f \notin C^1([a,b])$, but

(3.7) $f^{1/s} \in C^2([a,b]), \quad f'(\bar{x})\,f^{1/s-1}(\bar{x}) \neq 0$

for some $s$ odd, where $f'(\bar{x})\,f^{1/s-1}(\bar{x})$ is understood as $\lim_{x \to \bar{x}} f'(x)\,f^{1/s-1}(x) = s\,(f^{1/s})'(\bar{x})$. If

(3.8) $\left| 1 + \dfrac{(a - \bar{x})\,f'(\bar{x})\,f^{1/s-1}(\bar{x})}{s\,(-f(a))^{1/s}} \right| < 1,$

then the iterative procedure (3.3), (3.4) converges linearly to $\bar{x}$.

If

$\left| 1 + \dfrac{(b - \bar{x})\,f'(\bar{x})\,f^{1/s-1}(\bar{x})}{s\,(-f(b))^{1/s}} \right| < 1,$

then the iterative procedure (3.5), (3.6) converges linearly to $\bar{x}$.

Theorem 3.2.

Let $f \in C([a,b])$ have a multiple root at $\bar{x} \in (a,b)$, with $f(a)\,f(b) < 0$. Assume

$f^{1/s} \in C^2([a,b]), \quad f'(\bar{x})\,f^{1/s-1}(\bar{x}) \neq 0$

for some $s$ odd, the product being again understood in the limit sense. If

$\left| 1 + \dfrac{(a - \bar{x})\,f'(\bar{x})\,f^{1/s-1}(\bar{x})}{s\,(-f(a))^{1/s}} \right| < 1,$

then the iterative procedure (3.3), (3.4) converges linearly to $\bar{x}$.

If

$\left| 1 + \dfrac{(b - \bar{x})\,f'(\bar{x})\,f^{1/s-1}(\bar{x})}{s\,(-f(b))^{1/s}} \right| < 1,$

then the iterative procedure (3.5), (3.6) converges linearly to $\bar{x}$.

Remark 3.3.

From Theorems 3.1 and 3.2, it follows that the iterative procedure (3.3)–(3.6) improves the chord method in the sense that, under the above assumptions, the linear convergence of our procedure is guaranteed, unlike the chord method case.

4 Proofs of the main results

Lemma 4.1.

The matrix $A^TA$ is symmetric, nonsingular and positive definite.

Proof.

It is easy to see that the matrix $A^TA$ is symmetric. Moreover, it is nonsingular. Indeed, if this were not true, then the system $A^TAz = 0$ would have a solution $\bar{z} = (\bar{z}_0, \bar{z}_1, \ldots, \bar{z}_n)^T$ different from the trivial one. So $\bar{z}^T A^T A \bar{z} = 0$, which implies $A\bar{z} = 0$. Therefore,

$\sum_{j=0}^{n} \bar{z}_j\, A_{n,j}(\bar{t}_i) = 0, \quad i = 0, 1, \ldots, m.$

So, in the subspace generated by $A_{n,j}(t)$, $j = 0, 1, \ldots, n$, we obtain a rational function of degree $(sn, sn)$ having $m+1$ zeros, which is impossible, since $sn \le m$ (cf. [4, 6]). Now we prove that $A^TA$ is positive definite, i.e., $z^T A^T A z > 0$ for all $z \neq 0$. Indeed, it is not possible that $z^T A^T A z = 0$ for some $z \neq 0$; otherwise, as before, we get a contradiction. ∎

Lemma 4.2.

The eigenvalues of $A^TA$ are real and positive.

Proof.

From Lemma 4.1, the statement immediately follows. ∎

Lemma 4.3.

The matrix $A^TA$ is well-conditioned.

Proof.

From Gerschgorin's theorem, we know that if $\gamma$ is an eigenvalue of $A^TA$, then, for some $i$, $0 \le i \le n$,

$\left| \gamma - \sum_{k=0}^{m} A_{n,i}^2(\bar{t}_k) \right| \le \sum_{l \neq i} \sum_{j=0}^{m} A_{n,i}(\bar{t}_j)\, A_{n,l}(\bar{t}_j) =: \sum_{l \neq i} S_{i,l}.$

Working as in [4], recalling the definition of the Riemann zeta function, we have

$S_{i,l} \le \bar{C},$

with $\bar{C}$ a positive constant, and hence

$\sum_{l \neq i} S_{i,l} \le C n,$

with $C$ a positive constant. Consequently,

$\gamma \le C n + \sum_{k=0}^{m} A_{n,i}^2(\bar{t}_k) \le C'' n,$

with $C''$ a positive constant. Hence the matrix $A^TA$ is well-conditioned. ∎

Proof of Theorem 2.2.

Let $P^{(k)} = [P_0^{(k)}, P_1^{(k)}, \ldots, P_n^{(k)}]^T$ and $Q = [Q_0, Q_1, \ldots, Q_m]^T$. According to (2.4), we have

$P_i^{(k+1)} = P_i^{(k)} + \mu \sum_{j=0}^{m} A_{n,i}(\bar{t}_j)\bigl(Q_j - \mathcal{P}^{(k)}(\bar{t}_j)\bigr) = P_i^{(k)} + \mu \sum_{j=0}^{m} A_{n,i}(\bar{t}_j)\Bigl(Q_j - \sum_{l=0}^{n} A_{n,l}(\bar{t}_j)\, P_l^{(k)}\Bigr).$

Then we get

(4.1) $P^{(k+1)} = P^{(k)} + \mu A^T(Q - AP^{(k)}),$

where $A$ is the collocation matrix (2.3). Letting $I$ be the identity matrix of order $n+1$ and $D = I - \mu A^TA$, by (4.1), we have

(4.2) $P^{(k+1)} - (A^TA)^{-1}A^TQ = (I - \mu A^TA)\bigl[P^{(k)} - (A^TA)^{-1}A^TQ\bigr] = (I - \mu A^TA)^2\bigl[P^{(k-1)} - (A^TA)^{-1}A^TQ\bigr] = \cdots = D^{k+1}\bigl[P^{(0)} - (A^TA)^{-1}A^TQ\bigr].$

Supposing $\{\lambda_i(D)\}$, $i = 0, 1, \ldots, n$, are the eigenvalues of $D$, we get $\lambda_i(D) = 1 - \mu\lambda_i$, where $\lambda_i$, $i = 0, 1, \ldots, n$, are the eigenvalues of $A^TA$ sorted in nondecreasing order. From Lemma 4.2, since $0 < \mu < \frac{2}{\lambda_n}$, we have

$0 < \mu\lambda_i < 2 \quad \text{and} \quad -1 < \lambda_i(D) < 1, \quad i = 0, 1, \ldots, n.$

This leads to $\rho(D) < 1$, where $\rho(D)$ denotes the spectral radius of $D$. Therefore,

(4.3) $\lim_{k \to \infty} D^k = O_{n+1},$

where $O_{n+1}$ is the zero matrix of order $n+1$. By (4.2), it follows that

$P^{(\infty)} = \lim_{k \to \infty} P^{(k)} = (A^TA)^{-1}A^TQ + \lim_{k \to \infty} D^{k+1}\bigl[P^{(0)} - (A^TA)^{-1}A^TQ\bigr] = (A^TA)^{-1}A^TQ.$

This is equivalent to

$(A^TA)\, P^{(\infty)} = A^TQ,$

i.e., the SLSPIA method is convergent, and the limit curve is the Shepard-type LSF result for the initial data. ∎

Proof of Theorem 2.4.

We can work as in the proof of [13, Theorem 3.1, p. 35]. ∎

Proof of Theorem 3.1.

Assume (3.7) holds true. Then, from (3.3), we have

(4.4) $x_{n+1} = \dfrac{x_n(-f(a))^{1/s} + a\, f^{1/s}(x_n)}{(-f(a))^{1/s} + f^{1/s}(x_n)}.$

Letting $\varepsilon_n = x_n - \bar{x}$, by $f(\bar{x}) = 0$ and (4.4), we can write

$\varepsilon_{n+1} = \dfrac{x_n(-f(a))^{1/s} + a f^{1/s}(x_n) - \bar{x}(-f(a))^{1/s} - \bar{x} f^{1/s}(x_n)}{(-f(a))^{1/s} + f^{1/s}(x_n)} = \dfrac{\varepsilon_n(-f(a))^{1/s} + (a - \bar{x})\bigl(f^{1/s}(x_n) - f^{1/s}(\bar{x})\bigr)}{(-f(a))^{1/s} + f^{1/s}(x_n)},$

and, from the assumption $f^{1/s} \in C^2([a,b])$ and by the Taylor formula, we have

$\varepsilon_{n+1} = \dfrac{\varepsilon_n(-f(a))^{1/s} + \frac{1}{s}(a - \bar{x})\, f'(\bar{x})\, f^{1/s-1}(\bar{x})\, \varepsilon_n + O(\varepsilon_n^2)}{(-f(a))^{1/s} + \frac{1}{s}\, f'(\bar{x})\, f^{1/s-1}(\bar{x})\, \varepsilon_n + O(\varepsilon_n^2)}.$

Hence, from (3.8), the method converges linearly. ∎

Proof of Theorem 3.2.

Working as in the proof of Theorem 3.1, the assertion follows. ∎

5 Implementation and examples

5.1 Numerical experiments on Section 2

Though the SLSPIA method can be started with arbitrary initial control points, an appropriate selection of initial control points makes the SLSPIA method converge quickly. In our implementation, following [13], we select the initial control points $\{P_i\}_{i=0}^n$ as

(5.1) $P_0 = Q_0, \quad P_n = Q_m, \quad P_i = Q_{f(i)}, \quad i = 1, 2, \ldots, n-1, \quad f(i) = \left[\frac{(m+1)\, i}{n}\right].$
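A small helper sketching the selection (5.1); the floor rounding in the index map $f(i)$ is an assumption on our part, consistent with [13]:

```python
def initial_control_points(Q, n):
    """Select n+1 initial control points from the m+1 data points, as in (5.1)."""
    m = len(Q) - 1
    idx = [0] + [((m + 1) * i) // n for i in range(1, n)] + [m]
    return np.asarray(Q)[idx]
```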

In this subsection, we present two examples to show the efficiency and validity of our SLSPIA-type techniques.

5.1.1 Example 1

Consider a helix of radius 5 given by (see, e.g., [4, 15])

$(x(t), y(t), z(t)) = (5\cos t,\, 5\sin t,\, t), \quad t \in [0, 6\pi].$

A sequence of 301 data points $Q_i$, $i = 0, \ldots, 300$, is sampled from the helix as

$(x(s_i), y(s_i), z(s_i)), \quad s_i = \frac{\pi}{50}\, i, \quad i = 0, 1, \ldots, 300.$

The above point set is fitted using SLSPIA curves given by (2.4) and (2.5) with 31 initial control points chosen as in (5.1), where $s = 4$, $\lambda = 1.08 \times 10^{-6}$ (see (2.1)) and $\bar{\mu}$ is given by (2.8). The original point set and the initial Shepard-type curve are shown in Figure 1; the Shepard-type curves after the first and second iteration steps are shown in Figures 2 and 3.
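Putting the pieces together, a usage sketch with the parameters of this example; the helper functions (`collocation_matrix`, `slspia`, `practical_weight`, `initial_control_points`) are the hypothetical sketches introduced above, not the authors' code.

```python
# Sample 301 data points from the helix.
si = np.pi / 50 * np.arange(301)
Q = np.column_stack([5 * np.cos(si), 5 * np.sin(si), si])

m, n = 300, 30                                  # 301 data points, 31 control points
A = collocation_matrix(m, n, s=4, lam=1.08e-6)
P0 = initial_control_points(Q, n)
P, Ek = slspia(Q, P0, practical_weight(A), A, tol=1e-4)
Ek = [e / Ek[0] for e in Ek]                    # scale so that E_0 = 1
```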

Figure 1: SLSPIA curve at iteration level 0.

Figure 2: SLSPIA curve at iteration level 1.

Figure 3: SLSPIA curve at iteration level 2.

Figure 4: Limit SLSPIA fitting curve.

Moreover, denoting by $\|\cdot\|$ the usual Euclidean norm, we consider the fitting errors

$E_k = \sum_{j=0}^{m} \Bigl\| Q_j - \sum_{i=0}^{n} A_{n,i}(\bar{t}_j)\, P_i^{(k)} \Bigr\|^2$

(scaled so that $E_0 = 1$); if $|E_{k+1} - E_k| < 10^{-4}$, we stop the iteration process and let the last $E_k$ be $E_\infty$. The limit fitting curve is presented in Figure 4. From Figures 1-4, we can see that SLSPIA constructs a sequence of curves approximating the given data points progressively.

We also plot $E_k$, $k = 0, 1, \ldots, 20$, in Figure 5. From Figure 5, we can see that $E_k$ decreases very fast in the first few steps and that $\{E_k\}$ is a monotone decreasing sequence.

Figure 5: Plot of $E_k$ at different iterations.

Figure 6: Plot of $D_k$ (dotted line, left axis) and $d_k$ (continuous line, right axis).

Furthermore, let $D_k = \sum_{i=0}^{n} \| P_i^{(k)} - P_i^{(\infty)} \|$ (scaled so that $D_0 = 1$). We plot $D_k$ ($k = 1, 2, \ldots$) in Figure 6 for the weight $\bar{\mu} = 2/C$. We use the same stopping criterion as above. Figure 6 also shows the difference $d_k$ between the above $D_k$ and the $D_k$ computed with the weight $\mu = 2/(\lambda_0 + \lambda_n)$ (see (2.7)). Note that the scales for $D_k$ and $d_k$ are different. We can see that the convergence rates corresponding to the two weights are similar, as announced in Section 2.

5.1.2 Example 2

Consider the curve $(x(t), y(t), z(t))$, $t \in [0, 6\pi]$, where

$x(t) = \dfrac{1}{0.5\sqrt{2\pi}}\bigl(\exp(-(t - 2\pi)^2) + \exp(-(t - 4\pi)^2)\bigr),$

$y(t) = \dfrac{1}{0.5\sqrt{2\pi}}\bigl(\exp(-(t - 2\pi - 1)^2) + \exp(-(t - 4\pi)^2 - 1)\bigr),$

$z(t) = t.$

Figure 7: SLSPIA fitting curve at iteration level 0.

Figure 8: SLSPIA fitting curve at iteration level 1.

Figure 9: SLSPIA fitting curve at iteration level 2.

Figure 10: Limit SLSPIA fitting curve.

Figure 11: Plot of $E_k$ at different iterations.

A sequence of 301 data points $Q_i$, $i = 0, \ldots, 300$, is sampled from the curve as

$(x(s_i), y(s_i), z(s_i)), \quad s_i = \frac{\pi}{50}\, i, \quad i = 0, 1, \ldots, 300.$

The above point set is fitted using SLSPIA curves given by (2.4) and (2.5) with 31 initial control points chosen as in (5.1), where $s = 4$, $\lambda = 1.08 \times 10^{-5}$ (see (2.1)) and $\bar{\mu}$ is given by (2.8). The original point set and the initial Shepard-type curve are shown in Figure 7, the Shepard-type curves after the first and second iteration steps are shown in Figures 8 and 9, while the limit fitting curve is shown in Figure 10. Figure 11 shows the fast monotone decrease of $E_k$, analogously to the helix case.
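The data for this example can be generated analogously (a sketch with the hypothetical helpers above; the normalizing factor $1/(0.5\sqrt{2\pi})$ is an assumption):

```python
t = np.pi / 50 * np.arange(301)
c = 1.0 / (0.5 * np.sqrt(2 * np.pi))
x = c * (np.exp(-(t - 2 * np.pi) ** 2) + np.exp(-(t - 4 * np.pi) ** 2))
y = c * (np.exp(-(t - 2 * np.pi - 1) ** 2) + np.exp(-(t - 4 * np.pi) ** 2 - 1))
Q = np.column_stack([x, y, t])

A = collocation_matrix(300, 30, s=4, lam=1.08e-5)
P, Ek = slspia(Q, initial_control_points(Q, 30), practical_weight(A), A, tol=1e-4)
```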

5.2 Numerical experiments on Section 3

In this section, we compare the numerical procedure given in Section 3 with the classical chord method.

5.2.1 Example 1

Let $f_1(x) = x^{1/3}\, e^{-x/200}$. Obviously, $f_1(\bar{x}) = 0$ at $\bar{x} = 0$, and $f_1 \notin C^1([-1,1])$. Hence the convergence of the chord method is not guaranteed theoretically. Since, for $f_1$, the assumptions of Theorem 3.1 are satisfied for $s = 1/3$, we consider the numerical procedure (3.3), (3.4). In Table 1, we present the corresponding errors and the ratios between two consecutive errors at various iterations. The table refers to the starting value $x_0 = 0.1$, which gives the slowest convergence. The numerical results confirm that the convergence rate is linear, as expected from Theorem 3.1. The approximate solution reaches the root of the equation in double precision after 8 iterations. By construction, for $f_1$, the choice $x_0 = 1$ gives the exact solution at the first iteration. We mention that the chord method does not converge for this function; actually, the iterates flip between two values.
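With the hypothetical `improved_chord` of Section 3, this run can be sketched as follows; the real cube root is taken explicitly, since Python's `**` would return a complex value for negative bases.

```python
import math

def f1(x):
    # x^{1/3} e^{-x/200}, using the real cube root for x < 0
    return math.copysign(abs(x) ** (1.0 / 3.0), x) * math.exp(-x / 200.0)

# s = 1/3 as in the text; interval [-1, 1], fixed endpoint a (case (a)), x0 = 0.1.
root = improved_chord(f1, a=-1.0, b=1.0, s=1/3, fixed='a', x0=0.1)
# The first iterate is approximately 1.49e-3, in agreement with the first row of Table 1.
```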

Table 1: Error (second column) of the procedure (3.3), (3.4) for the function $f_1$ at the first ten iterations. The corresponding ratios between two consecutive errors are shown in the third column. The root is $\bar{x} = 0$, and the starting value is $x_0 = 0.1$.

n     ε_{n+1}           ε_{n+1}/ε_n
1     1.49 × 10^{-3}    0.014899
2     2.22 × 10^{-5}    0.014888
3     3.30 × 10^{-7}    0.014888
4     4.92 × 10^{-9}    0.014888
5     7.32 × 10^{-11}   0.014888
6     1.09 × 10^{-12}   0.014888
7     1.62 × 10^{-14}   0.014888
8     2.42 × 10^{-16}   0.014888
9     3.60 × 10^{-18}   0.014888
10    5.35 × 10^{-20}   0.014888
Table 2: Error (second column) of the procedure (3.3), (3.4) for the function $f_2$ at various iterations. The ratios between two consecutive errors are shown in the third column. The root is $\bar{x} = 0$, and the starting value is $x_0 = 0.1$. The fourth column presents the error for the chord method.

n        ε_{n+1}           ε_{n+1}/ε_n    ε_{n+1} (chord)
1        5.08 × 10^{-2}    0.1015850      3.91 × 10^{-1}
2        1.00 × 10^{-2}    0.1976429      3.45 × 10^{-1}
3        2.05 × 10^{-3}    0.2046422      3.15 × 10^{-1}
4        4.23 × 10^{-4}    0.2059625      2.93 × 10^{-1}
5        8.73 × 10^{-5}    0.2062302      2.75 × 10^{-1}
10       3.26 × 10^{-8}    0.2062994      2.22 × 10^{-1}
15       1.22 × 10^{-11}   0.2062995      1.93 × 10^{-1}
20       4.55 × 10^{-15}   0.2062995      1.74 × 10^{-1}
25       1.70 × 10^{-18}   0.2062995      1.60 × 10^{-1}
100      6.58 × 10^{-70}   0.2062995      8.95 × 10^{-2}
1000                                      3.06 × 10^{-2}
10000                                     9.90 × 10^{-3}

5.2.2 Example 2

Let $f_2(x) = x^5 + x^3$. Here $f_2(\bar{x}) = 0 = f_2'(\bar{x})$ at $\bar{x} = 0$. Hence the linear convergence of the chord method no longer holds. Since, for $f_2$, the assumptions of Theorem 3.2 are satisfied for $s = 3$, we consider the numerical procedure (3.3), (3.4). In Table 2, we present the corresponding errors and the ratios between two consecutive errors at various iterations; in addition, we show the errors for the chord method at various iterations. As in Example 5.2.1, the table refers to the starting value $x_0 = 0.1$. The numerical results confirm that the convergence rate is linear, as expected from Theorem 3.2, and the approximate solution reaches the root of the equation in double precision after about 20 iterations. We mention that the chord method converges very slowly for this function (last column of Table 2); actually, after 10 000 iterations it attains the same error obtained with 2 iterations of the procedure (3.3), (3.4).
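Analogously for $f_2$, a sketch; the bracketing interval is not stated explicitly in the text, so $[-1,1]$ is an assumption here and the error sequence need not coincide with Table 2.

```python
def f2(x):
    return x ** 5 + x ** 3

x_improved = improved_chord(f2, a=-1.0, b=1.0, s=3, fixed='a', x0=0.1)  # procedure (3.3), (3.4)
x_chord = improved_chord(f2, a=-1.0, b=1.0, s=1, fixed='a', x0=0.1)     # classical chord method
```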

References

[1] G. Allasia, A class of interpolating positive linear operators: Theoretical and computational aspects, Approximation Theory, Wavelets and Applications (Maratea 1994), NATO Adv. Sci. Inst. Ser. C Math. Phys. Sci. 454, Kluwer Academic, Dordrecht (1995), 1–36. DOI: 10.1007/978-94-015-8577-4_1.

[2] U. Amato and B. Della Vecchia, Bridging Bernstein and Lagrange polynomials, Math. Commun. 20 (2015), no. 2, 151–160.

[3] U. Amato and B. Della Vecchia, New results on rational approximation, Results Math. 67 (2015), no. 3–4, 345–364. DOI: 10.1007/s00025-014-0420-4.

[4] U. Amato and B. Della Vecchia, Modelling by Shepard-type curves and surfaces, J. Comput. Anal. Appl. 20 (2016), no. 4, 611–634.

[5] U. Amato and B. Della Vecchia, Rational operators based on q-integers, Results Math. 72 (2017), no. 3, 1109–1128. DOI: 10.1007/s00025-017-0682-8.

[6] U. Amato and B. Della Vecchia, Weighting Shepard-type operators, Comput. Appl. Math. 36 (2017), no. 2, 885–902. DOI: 10.1007/s40314-015-0263-y.

[7] U. Amato and B. Della Vecchia, Inequalities for Shepard-type operators, J. Math. Inequal. 12 (2018), no. 2, 517–530. DOI: 10.7153/jmi-2018-12-38.

[8] K. Anjyo, J. P. Lewis and F. Pighin, Scattered data interpolation for graphics, SIGGRAPH '14: ACM SIGGRAPH 2014 Courses, ACM, New York (2014), Article No. 27. DOI: 10.1145/2614028.2615425.

[9] J. Delgado and J. M. Peña, Progressive iterative approximation and bases with the fastest convergence rates, Comput. Aided Geom. Design 24 (2007), no. 1, 10–18. DOI: 10.1016/j.cagd.2006.10.001.

[10] B. Della Vecchia, Direct and converse results by rational operators, Constr. Approx. 12 (1996), no. 2, 271–285. DOI: 10.1007/s003659900013.

[11] B. Della Vecchia and G. Mastroianni, Pointwise simultaneous approximation by rational operators, J. Approx. Theory 65 (1991), no. 2, 140–150. DOI: 10.1016/0021-9045(91)90099-V.

[12] B. Della Vecchia, G. Mastroianni and P. Vértesi, Direct and converse theorems for Shepard rational approximation, Numer. Funct. Anal. Optim. 17 (1996), no. 5–6, 537–561. DOI: 10.1080/01630569608816709.

[13] C. Deng and H. Lin, Progressive and iterative approximation for least squares B-spline curve and surface fitting, Comput.-Aided Des. 47 (2014), 32–44. DOI: 10.1016/j.cad.2013.08.012.

[14] H.-W. Lin, H.-J. Bao and G.-J. Wang, Totally positive bases and progressive iteration approximation, Comput. Math. Appl. 50 (2005), no. 3–4, 575–586. DOI: 10.1016/j.camwa.2005.01.023.

[15] L. Lu, Weighted progressive iteration approximation and convergence analysis, Comput. Aided Geom. Design 27 (2010), no. 2, 129–137. DOI: 10.1016/j.cagd.2009.11.001.

[16] A. Marco and J.-J. Martínez, Polynomial least squares fitting in the Bernstein basis, Linear Algebra Appl. 433 (2010), no. 7, 1254–1264. DOI: 10.1016/j.laa.2010.06.031.

[17] D. Occorsio and A. C. Simoncelli, How to go from Bézier to Lagrange curves by means of generalized Bézier curves, Facta Univ. Ser. Math. Inform. (1996), no. 11, 101–111.

[18] J. Szabados, On a problem of R. Devore, Acta Math. Acad. Sci. Hungar. 27 (1976), no. 1–2, 219–223. DOI: 10.1007/BF01896777.

[19] L. Zhang, X. Ge and J. Tan, Least square geometric iterative fitting method for generalized B-spline curves with two different kinds of weights, Visual Comput. 32 (2016), no. 9, 1109–1120. DOI: 10.1007/s00371-015-1170-3.

Received: 2017-01-19
Accepted: 2017-02-02
Published Online: 2019-02-19
Published in Print: 2021-02-01

© 2021 Walter de Gruyter GmbH, Berlin/Boston
