Article, Open Access

Stable Implementation of Adaptive IGABEM in 2D in MATLAB

  • Gregor Gantner, Dirk Praetorius and Stefan Schimanko
Published/Copyright: June 9, 2022

Abstract

We report on the Matlab program package IGABEM2D, which provides an easily accessible implementation of adaptive Galerkin boundary element methods in the framework of isogeometric analysis and which is available on the web for free download. Numerical experiments with IGABEM2D underline the particular importance of adaptive mesh refinement for high accuracy in isogeometric analysis.

MSC 2010: 65D07; 65N38; 65N50; 65Y20

1 Introduction

1.1 Isogeometric Analysis

The central idea of isogeometric analysis (IGA) is to use the same ansatz functions for the discretization of a partial differential equation (PDE) as are used for the representation of the corresponding problem geometry in computer-aided design (CAD), namely (rational) splines. This concept, originally invented in [33] for finite element methods (IGAFEM), has proved very fruitful in applications; see also the monograph [11].

1.2 Isogeometric Boundary Element Method

The idea of the boundary element method (BEM) is to reformulate the considered PDE on a domain Ω as an equivalent integral equation on the boundary ∂Ω. The solution of the latter is the missing Cauchy data, i.e., the Neumann or the Dirichlet data, which can be plugged into the so-called representation formula to obtain the PDE solution itself. Since CAD usually provides only a parametrization of ∂Ω, this makes BEM the most attractive numerical scheme if applicable (i.e., provided that the fundamental solution of the differential operator is explicitly known); see [40, 41] for the first works on isogeometric BEM (IGABEM) in 2D and 3D, respectively. Compared to standard BEM, which uses discontinuous or continuous piecewise polynomials as ansatz functions, IGABEM typically saves degrees of freedom by using smooth splines instead.

We refer to [45, 39, 44, 2, 38] for numerical experiments, to [47, 35, 15, 14, 16, 13] for fast IGABEM based on the fast multipole method, ℋ-matrices, and ℋ²-matrices, and to [31, 6, 34, 1, 17] for some quadrature analysis. For further references, we also mention the recent monograph [7].

1.3 Adaptive IGABEM

On the one hand, IGA naturally leads to high-order ansatz functions. On the other hand, optimal convergence behavior of higher-order discretizations is only observed in simulations if the (given) data as well as the (unknown) solution are smooth. Therefore, a posteriori error estimation and related adaptive strategies are mandatory to exploit the full potential of IGA. Rate-optimal adaptive strategies for IGAFEM with hierarchical splines have been proposed and analyzed independently in [10, 25], while the earlier work [9] proves only linear convergence. Meanwhile, these results have been extended to T-splines [27].

As far as IGABEM is concerned, available results focus on weakly singular integral equations with energy space H - 1 / 2 ( Γ ) ; see [20, 18] for a posteriori error estimation, as well as [19] for the analysis of a rate-optimal adaptive IGABEM in 2D, and [24, 26, 28] and [8] for corresponding results for IGABEM in 3D with hierarchical splines and T-splines, respectively. Recently, [21] investigated optimal preconditioning for IGABEM in 2D with locally refined meshes. Moreover, [29] proved rate-optimality of an adaptive IGABEM in 2D for hyper-singular integral equations with energy space H 1 / 2 ( Γ ) . The recent work [8] provides an overview of adaptive IGAFEM as well as IGABEM.

1.4 Model Problems

The present paper reports on our Matlab implementation of adaptive IGABEM in 2D, which is available online [30]. On a bounded Lipschitz domain $\Omega \subset \mathbb{R}^2$ with $\operatorname{diam}(\Omega) < 1$, we consider the Laplace model problem $-\Delta P = 0$ with Dirichlet/Neumann boundary conditions. Rewriting this PDE as a boundary integral equation involves (up to) four integral operators, namely the weakly singular operator $\mathfrak{V}$, the hyper-singular operator $\mathfrak{W}$, the double-layer operator $\mathfrak{K}$, and the adjoint double-layer operator $\mathfrak{K}'$. Let $\Gamma \subseteq \partial\Omega$ be a connected part of the boundary. Given boundary densities $\phi, u : \Gamma \to \mathbb{R}$ and $x \in \Gamma$, the operators are formally defined by

(1.1) $[\mathfrak{V}\phi](x) := -\frac{1}{2\pi}\int_\Gamma \log|x-y|\,\phi(y)\,\mathrm{d}y, \qquad [\mathfrak{W}u](x) := \frac{1}{2\pi}\,\partial_{\nu(x)}\int_\Gamma \partial_{\nu(y)}\log|x-y|\,u(y)\,\mathrm{d}y,$

(1.2) $[\mathfrak{K}u](x) := -\frac{1}{2\pi}\int_\Gamma \partial_{\nu(y)}\log|x-y|\,u(y)\,\mathrm{d}y, \qquad [\mathfrak{K}'\phi](x) := -\frac{1}{2\pi}\int_\Gamma \phi(y)\,\partial_{\nu(x)}\log|x-y|\,\mathrm{d}y.$
For our implementation, we consider the corresponding weakly singular integral equation

(1.3) $[\mathfrak{V}\phi](x) = f(x) \quad\text{for } x \in \Gamma,$

where $f : \Gamma \to \mathbb{R}$ is given and the integral density $\phi : \Gamma \to \mathbb{R}$ is sought. Moreover, we consider the hyper-singular integral equation

(1.4) $[\mathfrak{W}u](x) = g(x) \quad\text{for } x \in \Gamma,$

where $\nu$ denotes the outer unit normal vector on $\partial\Omega$, $g : \Gamma \to \mathbb{R}$ is given, and the integral density $u : \Gamma \to \mathbb{R}$ is sought. For $\Gamma = \partial\Omega$, these integral equations are equivalent formulations of the 2D Laplace problem $-\Delta P = 0$ in $\Omega$, where (1.3) with $f = (\tfrac12 + \mathfrak{K})(P|_\Gamma)$ is equivalent to the Dirichlet problem and (1.4) with $g = (\tfrac12 - \mathfrak{K}')(\partial_\nu P)$ is equivalent to the Neumann problem. Knowing both the Dirichlet and the Neumann boundary data allows one to compute $P$ via the representation formula $P = \widetilde{\mathfrak{V}}(\partial_\nu P) - \widetilde{\mathfrak{K}}(P|_\Gamma)$, where $\widetilde{\mathfrak{V}}$ and $\widetilde{\mathfrak{K}}$ are defined as in (1.1) and (1.2) for $x \in \Omega$. While this approach is called direct, an alternative indirect approach is to make the ansatz $P = \widetilde{\mathfrak{V}}\phi$ or $P = \widetilde{\mathfrak{K}}u$ and solve $\mathfrak{V}\phi = f := P|_\Gamma$ or $\mathfrak{W}u = g := \partial_\nu P$, respectively. In all cases, the singularities of the involved integral kernels pose a significant challenge, which is overcome in the presented stable IGABEM implementation.
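Once the Cauchy data are known, the representation formula translates directly into numerical quadrature. The following Matlab sketch is independent of IGABEM2D (all names are illustrative): it evaluates $P(x) = \widetilde{\mathfrak{V}}(\partial_\nu P)(x) - \widetilde{\mathfrak{K}}(P|_\Gamma)(x)$ at a point $x \in \Omega$ away from $\Gamma$ by a composite midpoint rule in the parameter domain, with the kernels of (1.1) and (1.2); the parametrization, its derivative, and the Cauchy data are assumed to be given as function handles in the parameter variable.

% Hedged sketch (not part of IGABEM2D): evaluate the representation formula
% P(x) = [Vtilde phi](x) - [Ktilde u](x) at an interior point x away from Gamma.
% gamma, dgamma: parametrization of Gamma and its derivative on [a,b] (2x1 outputs)
% phi, u:        Neumann and Dirichlet data as functions of the parameter t
function Px = evalRepresentationFormula(x, gamma, dgamma, phi, u, a, b, nQuad)
    t  = a + (b - a) * ((1:nQuad) - 0.5) / nQuad;   % midpoint rule in [a,b]
    dt = (b - a) / nQuad;
    Px = 0;
    for k = 1:nQuad
        y  = gamma(t(k));                 % quadrature point on Gamma
        dy = dgamma(t(k));                % tangent vector gamma'(t)
        ds = norm(dy) * dt;               % arc-length weight
        nu = [dy(2); -dy(1)] / norm(dy);  % outer normal for positive orientation
        d  = x(:) - y(:);
        r2 = d.' * d;
        kerV = -1/(2*pi) * log(sqrt(r2));           % single-layer kernel, cf. (1.1)
        kerK = -1/(2*pi) * ((-d).' * nu) / r2;      % double-layer kernel, cf. (1.2)
        Px = Px + (kerV * phi(t(k)) - kerK * u(t(k))) * ds;
    end
end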

1.5 Matlab Library IGABEM2D and Contributions

The library IGABEM2D, which provides an easily accessible Matlab implementation in the spirit of, e.g., [3, 5, 22], is available online [30]. It is distributed as a single file igabem2d.zip. To install the library, download and unpack the zip archive, start Matlab, and change to the root folder igabem2d/. You can directly run the main files IGABEMWeak.m and IGABEMHyp.m, in which the adaptive Algorithms 3.2 and 4.1 for the problems of Section 1.4 are implemented. There, you can choose among a variety of parameters. Both functions automatically run mexIGABEM.m, which adds all required paths, compiles the C files in source/, and stores the resulting MEX-files in MEX-files/. The Matlab functions provided by IGABEM2D are contained in the folder functions/. Further details are also provided by README.txt, which is found in the root folder igabem2d/.

1.6 Outline

Section 2 recalls the definition of B-splines and NURBS and discusses necessary assumptions on parametrized NURBS geometries. Section 3 recalls the weakly singular integral equation along with the required Sobolev spaces on the boundary and formulates an adaptive Galerkin IGABEM, which is applied in numerical experiments at the end of the section. Similarly, Section 4 covers the hyper-singular integral equation. Details on the implementation, i.e., the stable computation of the involved (singular) boundary integrals, are given in Section 5. Finally, Appendix A provides an overview of all functions in IGABEM2D.

1.7 General Notation

Throughout and without any ambiguity, $|\cdot|$ denotes the absolute value of scalars, the Euclidean norm of vectors in $\mathbb{R}^2$, the measure of a set in $\mathbb{R}$ (e.g., the length of an interval), or the arc length of a curve in $\mathbb{R}^2$. Throughout, mesh-related quantities have the same index $\ell$, e.g., $\mathcal{N}_\ell$ is the set of nodes of the partition $\mathcal{Q}_\ell$, and $h_\ell$ is the corresponding local mesh-width function, etc. The analogous notation is used for partitions $\mathcal{Q}_{\ell+}$, $\mathcal{Q}_{\ell+1}$, etc. We sometimes use the symbol $\widehat{\phantom{x}}$ to transform notation on the boundary to the parameter domain or vice versa, e.g., $\widehat{\mathcal{Q}}_\ell$ is the partition of the parameter domain related to the partition $\mathcal{Q}_\ell$ of the boundary.

2 Isogeometric Analysis on 1D Boundaries

In this section, we recall the definition of B-splines and NURBS and discuss necessary assumptions on parametrized NURBS geometries. This provides the basis for the implemented IGABEM discretization discussed in Sections 3 and 4.

2.1 B-Splines and NURBS in 1D

We consider an arbitrary but fixed knot sequence $\widehat{\mathcal{K}}_\ell := (t_{\ell,i})_{i\in\mathbb{Z}}$ on $\mathbb{R}$ with multiplicities

$\#_\ell t_{\ell,i} := \#\{\, i' \in \mathbb{Z} : t_{\ell,i'} = t_{\ell,i} \,\},$

which satisfies $t_{\ell,i-1} \le t_{\ell,i}$ for $i \in \mathbb{Z}$ and $\lim_{i\to\pm\infty} t_{\ell,i} = \pm\infty$. Let $\widehat{\mathcal{N}}_\ell := \{ t_{\ell,i} : i \in \mathbb{Z} \} = \{ \hat{x}_{\ell,j} : j \in \mathbb{Z} \}$ denote the corresponding set of nodes with $\hat{x}_{\ell,j-1} < \hat{x}_{\ell,j}$ for $j \in \mathbb{Z}$. For $i \in \mathbb{Z}$, the $i$-th B-spline [12] of degree $p \in \mathbb{N}_0$ is defined for $t \in \mathbb{R}$ inductively by

$\widehat{B}_{\ell,i,0}(t) := \chi_{[t_{\ell,i-1},\,t_{\ell,i})}(t), \qquad \widehat{B}_{\ell,i,p}(t) := \frac{t - t_{\ell,i-1}}{t_{\ell,i-1+p} - t_{\ell,i-1}}\, \widehat{B}_{\ell,i,p-1}(t) + \frac{t_{\ell,i+p} - t}{t_{\ell,i+p} - t_{\ell,i}}\, \widehat{B}_{\ell,i+1,p-1}(t) \quad\text{for } p \in \mathbb{N},$

where we suppose the convention $(\cdots)/0 := 0$. It is well known that, for arbitrary $I = [a,b)$ and $p \in \mathbb{N}_0$, the set $\{ \widehat{B}_{\ell,i,p}|_I : i \in \mathbb{Z} \wedge \widehat{B}_{\ell,i,p}|_I \neq 0 \}$ is a basis of the space of all right-continuous piecewise polynomials of degree at most $p$ on $I$ with breakpoints $\widehat{\mathcal{N}}_\ell$ which are, at each knot $t_{\ell,i}$, $p - \#_\ell t_{\ell,i}$ times continuously differentiable if $p - \#_\ell t_{\ell,i} \ge 0$. Moreover, the B-splines are non-negative and have local support

(2.1) $\operatorname{supp}(\widehat{B}_{\ell,i,p}) = [t_{\ell,i-1},\, t_{\ell,i+p}] \quad\text{for all } i \in \mathbb{Z}.$

Indeed, $\widehat{B}_{\ell,i,p}$ depends only on the knots $t_{\ell,i-1}, \dots, t_{\ell,i+p}$. If $t_{\ell,i-1} < t_{\ell,i+p}$, then

(2.2) $t_{\ell,i} = \dots = t_{\ell,i+p-1} \quad\Longrightarrow\quad \widehat{B}_{\ell,i,p}(t_{\ell,i}-) = 1 = \widehat{B}_{\ell,i,p}(t_{\ell,i}).$

Moreover, B-splines form a partition of unity, i.e., $\sum_{i\in\mathbb{Z}} \widehat{B}_{\ell,i,p} = 1$ for all $p \in \mathbb{N}_0$. For $p \ge 1$ and $i \in \mathbb{Z}$, the right derivative of the corresponding B-spline can be computed as

(2.3) $\bigl(\widehat{B}_{\ell,i,p}\bigr)'_{\mathrm{r}} = \frac{p}{t_{\ell,i+p-1} - t_{\ell,i-1}}\, \widehat{B}_{\ell,i,p-1} - \frac{p}{t_{\ell,i+p} - t_{\ell,i}}\, \widehat{B}_{\ell,i+1,p-1}.$
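For illustration, the Cox-de Boor recursion above can be turned into a few lines of Matlab. The following sketch is independent of IGABEM2D (all names are illustrative): it evaluates a single B-spline $\widehat{B}_{\ell,i,p}(t)$ from the $p+2$ relevant knots $t_{\ell,i-1},\dots,t_{\ell,i+p}$, using the convention $(\cdots)/0 := 0$.

% Hedged sketch: evaluate one B-spline of degree p at the point t.
% knots: the p+2 knots t_{i-1},...,t_{i+p} relevant for B_{i,p}, cf. (2.1)
function val = evalBSpline(knots, p, t)
    % degree 0: characteristic functions of the knot spans
    B = double(knots(1:end-1) <= t & t < knots(2:end));
    for q = 1:p
        Bnew = zeros(1, numel(B) - 1);
        for j = 1:numel(Bnew)
            d1 = knots(j+q)   - knots(j);
            d2 = knots(j+q+1) - knots(j+1);
            w1 = 0; w2 = 0;
            if d1 > 0, w1 = (t - knots(j)) / d1; end          % convention (...)/0 := 0
            if d2 > 0, w2 = (knots(j+q+1) - t) / d2; end
            Bnew(j) = w1 * B(j) + w2 * B(j+1);                % Cox-de Boor recursion
        end
        B = Bnew;
    end
    val = B(1);
end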

Finally, if $\sum_{i\in\mathbb{Z}} a_{\ell,i}\,\widehat{B}_{\ell,i,p}$ is a given spline and $\widehat{\mathcal{K}}_{\ell+}$ is obtained from $\widehat{\mathcal{K}}_\ell$ by adding a single knot $t$ with $t \in (t_{\ell,i_0-1},\, t_{\ell,i_0}]$ for some $i_0 \in \mathbb{Z}$, there exist coefficients $(a_{\ell+,i})_{i\in\mathbb{Z}}$ such that

(2.4) $\sum_{k\in\mathbb{Z}} a_{\ell,k}\,\widehat{B}_{\ell,k,p} = \sum_{k\in\mathbb{Z}} a_{\ell+,k}\,\widehat{B}_{\ell+,k,p}.$

With the multiplicity $\#_{\ell+} t$ of $t$ in the knots $\widehat{\mathcal{K}}_{\ell+}$, the new coefficients can be chosen as convex combinations of the old coefficients,

$a_{\ell+,k} = \begin{cases} a_{\ell,k} & \text{if } k \le i_0 - p + \#_{\ell+}t - 1,\\ \dfrac{t_{\ell,k-1+p} - t}{t_{\ell,k-1+p} - t_{\ell,k-1}}\, a_{\ell,k-1} + \dfrac{t - t_{\ell,k-1}}{t_{\ell,k-1+p} - t_{\ell,k-1}}\, a_{\ell,k} & \text{if } i_0 - p + \#_{\ell+}t \le k \le i_0,\\ a_{\ell,k-1} & \text{if } i_0 + 1 \le k. \end{cases}$

If $\#_\ell t_{\ell,k} \le p+1$ for all $k \in \mathbb{Z}$, these coefficients are unique. Proofs are found, e.g., in [12].
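The convex-combination formula is easily implemented. The following Matlab sketch is again independent of IGABEM2D and uses a finite, 1-based, clamped knot vector in which the B-spline with index $i$ of degree $p$ is supported on $[T(i), T(i+p+1)]$, so the indices are shifted with respect to the formula above; the new knot is assumed to lie strictly inside the clamped interval.

% Hedged sketch: insert a single knot tnew into a clamped knot vector T and
% update the spline coefficients a such that the spline in (2.4) is unchanged.
function [Tnew, b] = insertKnot(T, a, p, tnew)
    ell  = find(T <= tnew, 1, 'last');            % knot span with T(ell) <= tnew < T(ell+1)
    Tnew = [T(1:ell), tnew, T(ell+1:end)];
    b    = zeros(1, numel(a) + 1);
    for i = 1:numel(b)
        if i <= ell - p
            b(i) = a(i);                          % coefficients left of the span are kept
        elseif i <= ell
            alpha = (tnew - T(i)) / (T(i+p) - T(i));
            b(i)  = alpha * a(i) + (1 - alpha) * a(i-1);   % convex combination
        else
            b(i) = a(i-1);                        % coefficients right of the span are shifted
        end
    end
end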

In addition to the knots $\widehat{\mathcal{K}}_\ell = (t_{\ell,i})_{i\in\mathbb{Z}}$, we consider fixed positive weights $\mathcal{W}_\ell := (w_{\ell,i})_{i\in\mathbb{Z}}$ with $w_{\ell,i} > 0$. For $i \in \mathbb{Z}$ and $p \in \mathbb{N}_0$, we define the $i$-th NURBS (non-uniform rational B-spline) by

$\widehat{R}_{\ell,i,p} := \frac{w_{\ell,i}\,\widehat{B}_{\ell,i,p}}{\sum_{k\in\mathbb{Z}} w_{\ell,k}\,\widehat{B}_{\ell,k,p}}.$

Note that the denominator is locally finite and positive. For any $p \in \mathbb{N}_0$, we define the spline space as well as the rational spline space

$\widehat{\mathcal{S}}^p(\widehat{\mathcal{K}}_\ell) := \operatorname{span}\{ \widehat{B}_{\ell,i,p} : i \in \mathbb{Z} \} \quad\text{and}\quad \widehat{\mathcal{S}}^p(\widehat{\mathcal{K}}_\ell, \mathcal{W}_\ell) := \operatorname{span}\{ \widehat{R}_{\ell,i,p} : i \in \mathbb{Z} \}.$

2.2 Boundary Parametrization

Recall that $\Omega \subset \mathbb{R}^2$ is a bounded Lipschitz domain with $\operatorname{diam}(\Omega) < 1$ and $\Gamma \subseteq \partial\Omega$ is a connected part of its boundary. We further assume that either $\Gamma = \partial\Omega$ is parametrized by a closed continuous and piecewise continuously differentiable path $\gamma : [a,b] \to \Gamma$ such that the restriction $\gamma|_{[a,b)}$ is even bijective, or that $\Gamma \subsetneq \partial\Omega$ is parametrized by a bijective continuous and piecewise continuously differentiable path $\gamma : [a,b] \to \Gamma$. In the first case, we speak of closed $\Gamma = \partial\Omega$, whereas the second case is referred to as open $\Gamma \subsetneq \partial\Omega$. Throughout and by abuse of notation, we write $\gamma^{-1}$ for the inverse of $\gamma|_{[a,b)}$ or of $\gamma|_{(a,b]}$. The meaning will be clear from the context. For the left- and right-hand derivatives of $\gamma$, we assume that $\gamma'_{\ell}(t) \neq 0$ for $t \in (a,b]$ and $\gamma'_{\mathrm{r}}(t) \neq 0$ for $t \in [a,b)$. Moreover, we assume that $\gamma'_{\ell}(t) + c\,\gamma'_{\mathrm{r}}(t) \neq 0$ for all $c > 0$ and $t \in [a,b]$ and $t \in (a,b)$, respectively. Note that these assumptions provide a pointwise representation of the arc-length derivative

(2.5) $(\partial_\Gamma u) \circ \gamma = \frac{(u\circ\gamma)'}{|\gamma'|} \quad\text{for all } u \in H^1(\Gamma).$

Finally and without loss of generality, we suppose that $\gamma$ is positively oriented in the sense that the outer normal vector of $\Omega$ has the form

$\nu(\gamma(t)) = \frac{1}{|\gamma'(t)|}\begin{pmatrix} \gamma_2'(t) \\ -\gamma_1'(t) \end{pmatrix} \quad\text{for almost all } t \in [a,b].$

While the given assumptions are sufficient for the abstract analysis of adaptive BEM, the later implementation requires that $\gamma$ is a NURBS curve of degree $p_\gamma \in \mathbb{N}$ in the following sense. Let $\widehat{\mathcal{K}}_\gamma = (t_{\gamma,i})_{i=1}^{N_\gamma}$ be a sequence of knots with

$a < t_{\gamma,1} \le \dots \le t_{\gamma,N_\gamma-p_\gamma-1} < t_{\gamma,N_\gamma-p_\gamma} = \dots = t_{\gamma,N_\gamma} = b$

and multiplicity $\#_\gamma t_{\gamma,i} \le p_\gamma$ for $i \in \{1,\dots,N_\gamma-p_\gamma\}$. Let

$\widehat{\mathcal{N}}_\gamma := \{ t_{\gamma,i} : i \in \{1,\dots,N_\gamma\} \} = \{ \hat{x}_{\gamma,j} : j \in \{1,\dots,n_\gamma\} \}$

denote the corresponding set of nodes with $\hat{x}_{\gamma,j-1} < \hat{x}_{\gamma,j}$ for $j \in \{2,\dots,n_\gamma\}$. Note that $N_\gamma = \sum_{j=1}^{n_\gamma} \#_\gamma \hat{x}_{\gamma,j}$. With $\hat{x}_{\gamma,0} := a$, we define the induced mesh on $[a,b]$,

$\widehat{\mathcal{Q}}_\gamma := \{ [\hat{x}_{\gamma,j-1},\, \hat{x}_{\gamma,j}] : j \in \{1,\dots,n_\gamma\} \}.$

To use the definition of B-splines as in Section 2.1, we extend the knot sequence arbitrarily to $(t_{\gamma,i})_{i\in\mathbb{Z}}$ with $t_{\gamma,-p_\gamma} = \dots = t_{\gamma,0} = a$, $t_{\gamma,i} \le t_{\gamma,i+1}$, and $\lim_{i\to\pm\infty} t_{\gamma,i} = \pm\infty$. For the extended sequence, we also write $\widehat{\mathcal{K}}_\gamma$. Moreover, let $\mathcal{W}_\gamma = (w_{\gamma,i})_{i=1-p_\gamma}^{N_\gamma-p_\gamma}$ and $\mathcal{C}_\gamma = (C_{\gamma,i})_{i=1-p_\gamma}^{N_\gamma-p_\gamma}$ be given positive weights and control points in $\mathbb{R}^2$, respectively, which satisfy $w_{\gamma,1-p_\gamma} = w_{\gamma,N_\gamma-p_\gamma}$ and $C_{\gamma,1-p_\gamma} = C_{\gamma,N_\gamma-p_\gamma}$ in the case of closed $\Gamma = \partial\Omega$. We extend $\mathcal{W}_\gamma$ and $\mathcal{C}_\gamma$ arbitrarily to $(w_{\gamma,i})_{i\in\mathbb{Z}}$ and $(C_{\gamma,i})_{i\in\mathbb{Z}}$ with positive weights $w_{\gamma,i}$ and control points $C_{\gamma,i}$ in $\mathbb{R}^2$. With the standard NURBS $\widehat{R}_{\gamma,i,p_\gamma}$ defined in Section 2.1, we suppose that $\gamma$ has the form

$\gamma|_{[a,b)} = \sum_{i\in\mathbb{Z}} C_{\gamma,i}\,\widehat{R}_{\gamma,i,p_\gamma}|_{[a,b)} = \sum_{i=1-p_\gamma}^{N_\gamma-p_\gamma} C_{\gamma,i}\,\widehat{R}_{\gamma,i,p_\gamma}|_{[a,b)},$

where the second equality follows from the locality (2.1) of the B-splines and NURBS. The locality of NURBS even shows that this definition does not depend on how the knots, the weights, and the control points are precisely extended. We note that the assumptions $w_{\gamma,1-p_\gamma} = w_{\gamma,N_\gamma-p_\gamma}$ and $C_{\gamma,1-p_\gamma} = C_{\gamma,N_\gamma-p_\gamma}$ together with (2.2) ensure that $\gamma(a) = \gamma(b-)$ in the case of closed $\Gamma = \partial\Omega$.

2.3 Rational Splines on Γ

Let $p \in \mathbb{N}_0$ be an arbitrary but fixed polynomial degree. Let $\widehat{\mathcal{K}}_0 = (t_{0,i})_{i=1}^{N_0}$ be a sequence of initial knots with

$a < t_{0,1} \le \dots \le t_{0,N_0-p-1} < t_{0,N_0-p} = \dots = t_{0,N_0} = b$

and multiplicity $\#_0 t_{0,i} \le p+1$ for $i \in \{1,\dots,N_0-p\}$. Let

$\widehat{\mathcal{N}}_0 := \{ t_{0,i} : i \in \{1,\dots,N_0\} \} = \{ \hat{x}_{0,j} : j \in \{1,\dots,n_0\} \}$

denote the corresponding set of nodes with $\hat{x}_{0,j-1} < \hat{x}_{0,j}$ for $j \in \{2,\dots,n_0\}$. In order to apply standard quadrature rules, we assume that $\widehat{\mathcal{N}}_\gamma \subseteq \widehat{\mathcal{N}}_0$. Moreover, let $\mathcal{W}_0 = (w_{0,i})_{i=1-p}^{N_0-p}$ be given positive weights. We extend the knots and weights as in Section 2.2 and define the weight function

$\widehat{W}_0|_{[a,b)} := \sum_{k\in\mathbb{Z}} w_{0,k}\,\widehat{B}_{0,k,p}|_{[a,b)} = \sum_{k=1-p}^{N_0-p} w_{0,k}\,\widehat{B}_{0,k,p}|_{[a,b)},$

where $\widehat{B}_{0,k,p}$ are the standard B-splines defined in Section 2.1. We define $\widehat{W}_0 : [a,b] \to \mathbb{R}$ by continuously extending this function at $b$.

Now, let $\widehat{\mathcal{K}}_\ell = (t_{\ell,i})_{i=1}^{N_\ell}$ be a finer knot vector, i.e., $\widehat{\mathcal{K}}_0$ is a subsequence of $\widehat{\mathcal{K}}_\ell$, which satisfies the same properties as $\widehat{\mathcal{K}}_0$. Outside the interval $(a,b]$, we extend $\widehat{\mathcal{K}}_\ell$ as $\widehat{\mathcal{K}}_0$. Let

$\widehat{\mathcal{N}}_\ell := \{ t_{\ell,i} : i \in \{1,\dots,N_\ell\} \} = \{ \hat{x}_{\ell,j} : j \in \{1,\dots,n_\ell\} \}$

denote the corresponding set of nodes on $[a,b]$ with $\hat{x}_{\ell,j-1} < \hat{x}_{\ell,j}$ for $j \in \{1,\dots,n_\ell\}$, and let

$\mathcal{N}_\ell := \{ \gamma(\hat{x}_{\ell,j}) : j \in \{0,\dots,n_\ell\} \}$

denote the corresponding nodes on $\Gamma$. Note that $\widehat{\mathcal{N}}_\ell$ does not contain the node $\hat{x}_{\ell,0} = a$, which is natural for the case $\Gamma = \partial\Omega$, since then $\gamma(\hat{x}_{\ell,0}) = \gamma(\hat{x}_{\ell,n_\ell})$. We define the induced mesh on $[a,b]$,

$\widehat{\mathcal{Q}}_\ell := \{ [\hat{x}_{\ell,j-1},\, \hat{x}_{\ell,j}] : j \in \{1,\dots,n_\ell\} \},$

where we set $\hat{x}_{\ell,0} := a$, and the induced mesh on $\Gamma$, $\mathcal{Q}_\ell := \{ \gamma(\widehat{Q}) : \widehat{Q} \in \widehat{\mathcal{Q}}_\ell \}$. Recall that the non-vanishing B-splines on $[a,b)$ form a basis; see Section 2.1. This proves the existence and uniqueness of weights $\mathcal{W}_\ell = (w_{\ell,i})_{i=1-p}^{N_\ell-p}$ such that

$\widehat{W}_0|_{[a,b)} = \sum_{k=1-p}^{N_0-p} w_{0,k}\,\widehat{B}_{0,k,p}|_{[a,b)} = \sum_{k=1-p}^{N_\ell-p} w_{\ell,k}\,\widehat{B}_{\ell,k,p}|_{[a,b)} =: \widehat{W}_\ell|_{[a,b)}.$

Choosing these weights, we ensure that the denominator of the rational splines does not change. These weights are just convex combinations of the initial weights $\mathcal{W}_0$ and can be computed via the knot insertion procedure (2.4). Finally, we extend $\mathcal{W}_\ell$ arbitrarily to $(w_{\ell,i})_{i\in\mathbb{Z}}$ with $w_{\ell,i} > 0$ and define the space of (transformed) rational splines on $\Gamma$ as

$\mathcal{S}^p(\mathcal{K}_\ell, \mathcal{W}_\ell) := \bigl\{ (\widehat{S}/\widehat{W}_\ell) \circ \gamma^{-1} : \widehat{S} \in \widehat{\mathcal{S}}^p(\widehat{\mathcal{K}}_\ell) \bigr\},$

where $\widehat{\mathcal{S}}^p(\widehat{\mathcal{K}}_\ell)$ denotes the space of all right-continuous piecewise polynomials of degree at most $p$ on $[a,b)$ with breakpoints $\widehat{\mathcal{N}}_\ell$ which are, at each knot $t_{\ell,i}$, $p - \#_\ell t_{\ell,i}$ times continuously differentiable if $p - \#_\ell t_{\ell,i} \ge 0$; see also Section 2.1. Here, we extend each quotient $\widehat{S}/\widehat{W}_\ell$ left-continuously at $b$. The locality (2.1) of B-splines shows that this definition does not depend on how the knots and the weights are precisely extended. With the standard NURBS $\widehat{R}_{\ell,i,p}$ from Section 2.1, a basis of $\mathcal{S}^p(\mathcal{K}_\ell, \mathcal{W}_\ell)$ is given by

$\mathcal{S}^p(\mathcal{K}_\ell, \mathcal{W}_\ell) = \operatorname{span}\{ R_{\ell,i,p} : i \in \{1-p,\dots,N_\ell-p\} \} \quad\text{with}\quad R_{\ell,i,p} := \widehat{R}_{\ell,i,p} \circ \gamma^{-1},$

where the functions $\widehat{R}_{\ell,i,p}$ are again left-continuously extended at $b$.

3 IGABEM2D for Weakly Singular Integral Equation

3.1 Sobolev Spaces

The usual Lebesgue and Sobolev spaces on $\Gamma$ are denoted by $L^2(\Gamma) = H^0(\Gamma)$ and $H^1(\Gamma)$. With the weak arc-length derivative $\partial_\Gamma$, the $H^1$-norm reads

$\|u\|^2_{H^1(\Gamma)} = \|u\|^2_{L^2(\Gamma)} + \|\partial_\Gamma u\|^2_{L^2(\Gamma)} \quad\text{for all } u \in H^1(\Gamma).$

We stress that $H^1(\Gamma)$ is continuously embedded in $C^0(\Gamma)$. Moreover, we equip the space

$\widetilde{H}^1(\Gamma) := \{ v \in H^1(\partial\Omega) : \operatorname{supp}(v) \subseteq \overline{\Gamma} \}$

with the same norm. We define the Sobolev space H 1 / 2 ( Γ ) as the space of all functions u L 2 ( Γ ) with finite Sobolev–Slobodeckij norm

u H 1 / 2 ( Γ ) 2 := u L 2 ( Γ ) 2 + | u | H 1 / 2 ( Γ ) 2 with | u | H 1 / 2 ( Γ ) 2 := Γ Γ | u ( x ) - u ( y ) | 2 | x - y | 2 d y d x .

We will also need the seminorm | u | H 1 / 2 ( ω ) for subsets ω Γ , which is defined analogously. Moreover, H ~ 1 / 2 ( Γ ) := { v H 1 / 2 ( Ω ) : supp ( v ) Γ } is equipped with the same norm.

Sobolev spaces of negative order are defined by duality, $H^{-1/2}(\Gamma) := \widetilde{H}^{1/2}(\Gamma)^*$ and $\widetilde{H}^{-1/2}(\Gamma) := H^{1/2}(\Gamma)^*$, where duality is understood with respect to the extended $L^2(\Gamma)$-scalar product $\langle\cdot,\cdot\rangle_\Gamma$. Note that $\widetilde{H}^{\pm\sigma}(\partial\Omega) = H^{\pm\sigma}(\partial\Omega)$ in the case of $\Gamma = \partial\Omega$, even with equal norms, for $\sigma \in \{0, \tfrac12, 1\}$.

All details and equivalent definitions of the Sobolev spaces are, for instance, found in the monographs [37, 32, 46, 42].

3.2 Weakly Singular Integral Equation

For $\sigma \in \{0, \tfrac12, 1\}$, the single-layer and double-layer operators $\mathfrak{V} : \widetilde{H}^{\sigma-1}(\Gamma) \to H^{\sigma}(\Gamma)$ and $\mathfrak{K} : \widetilde{H}^{\sigma}(\Gamma) \to H^{\sigma}(\Gamma)$, respectively, are well-defined, linear, and continuous. As $H^1(\Gamma) \subset C^0(\Gamma)$, the case $\sigma = 1$ yields continuous functions, which is essential for the implementation described in Section 5.

For $\sigma = \tfrac12$, $\mathfrak{V} : \widetilde{H}^{-1/2}(\Gamma) \to H^{1/2}(\Gamma)$ is symmetric and elliptic under the assumption that $\operatorname{diam}(\Omega) < 1$. In particular, $\langle\phi,\psi\rangle_{\mathfrak{V}} := \langle\mathfrak{V}\phi,\psi\rangle_\Gamma$ defines an equivalent scalar product on $\widetilde{H}^{-1/2}(\Gamma)$ with corresponding norm $\|\cdot\|_{\mathfrak{V}}$. With this notation, the strong form (1.3) with data $f = u$ for some $u \in H^{1/2}(\Gamma)$ or $f = (\tfrac12 + \mathfrak{K})u$ for some $u \in \widetilde{H}^{1/2}(\Gamma)$ is equivalently stated as

(3.1) $\langle\phi,\psi\rangle_{\mathfrak{V}} = \langle f,\psi\rangle_\Gamma \quad\text{for all } \psi \in \widetilde{H}^{-1/2}(\Gamma).$

The Lax-Milgram lemma applies, and hence (1.3) admits a unique solution $\phi \in \widetilde{H}^{-1/2}(\Gamma)$. If $f = u$, the approach is called indirect; otherwise, if $f = (\tfrac12 + \mathfrak{K})u$, it is called direct. More details and proofs are found, e.g., in [37, 32, 46, 42].

3.3 Galerkin IGABEM

Let $p \in \mathbb{N}_0$ be a fixed polynomial degree. Moreover, let $\widehat{\mathcal{K}}_\ell$ and $\mathcal{W}_\ell$ be knots and weights as in Section 2.3. We introduce the ansatz space

$\mathcal{X}_\ell := \mathcal{S}^p(\mathcal{K}_\ell, \mathcal{W}_\ell) \subset L^2(\Gamma).$

Recall that

$\mathcal{X}_\ell = \operatorname{span}\{ R_{\ell,i,p} : i = 1-p, \dots, N_\ell-p \},$

where the set even forms a basis. We define the Galerkin approximation $\Phi_\ell \in \mathcal{X}_\ell$ of $\phi$ by

(3.2) $\langle\Phi_\ell, \Psi_\ell\rangle_{\mathfrak{V}} = \langle f, \Psi_\ell\rangle_\Gamma \quad\text{for all } \Psi_\ell \in \mathcal{X}_\ell.$

Note that (3.2) is equivalent to solving the finite-dimensional linear system

(3.3) $\bigl( \langle R_{\ell,i',p},\, R_{\ell,i,p}\rangle_{\mathfrak{V}} \bigr)_{i,i'=1-p}^{N_\ell-p}\,\bigl( c_{\ell,i'} \bigr)_{i'=1-p}^{N_\ell-p} = \bigl( \langle f,\, R_{\ell,i,p}\rangle_\Gamma \bigr)_{i=1-p}^{N_\ell-p},$

where $\Phi_\ell = \sum_{i=1-p}^{N_\ell-p} c_{\ell,i}\,R_{\ell,i,p}$. The computation of the matrix and the right-hand side vector in (3.3) is realized in VMatrix.c and RHSVectorWeak.c and can be called via buildVMatrix and buildRHSVectorWeak in Matlab; see Appendix A. For details on the implementation, we refer to Sections 5.1 and 5.2.
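Once the Galerkin matrix and the load vector of (3.3) have been assembled (in IGABEM2D via buildVMatrix and buildRHSVectorWeak; see Appendix A for the actual interfaces), the computation of $\Phi_\ell$ reduces to a single symmetric positive definite solve. A minimal Matlab sketch, assuming the matrix Vmat and the vector bvec are already available as plain arrays:

Vmat = (Vmat + Vmat.') / 2;     % symmetrize to remove round-off asymmetry
R    = chol(Vmat);              % Cholesky works since <.,.>_V is a scalar product
coef = R \ (R.' \ bvec);        % coefficient vector (c_{ell,i}) of Phi_ell in (3.3)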

3.4 A Posteriori Error Estimation

Let $\widehat{\mathcal{K}}_\ell$ be a knot vector as in Section 3.3 with corresponding ansatz space $\mathcal{X}_\ell$. We define the mesh-size function $h_\ell \in L^\infty(\Gamma)$ by $h_\ell|_Q := |Q|$ for all $Q \in \mathcal{Q}_\ell$. Moreover, we abbreviate the node patch

$\omega_\ell(x) := \bigcup\{ Q \in \mathcal{Q}_\ell : x \in Q \} \quad\text{for all } x \in \mathcal{N}_\ell.$

We consider the following three different node-based estimators: the $(h-h/2)$-estimator

(3.4) $\eta_{\mathfrak{V},\mathrm{hh2},\ell}^2 := \sum_{x\in\mathcal{N}_\ell} \eta_{\mathfrak{V},\mathrm{hh2},\ell}(x)^2 \quad\text{with}\quad \eta_{\mathfrak{V},\mathrm{hh2},\ell}(x)^2 := \| h_\ell^{1/2}(\Phi_{\ell+} - \Phi_\ell) \|^2_{L^2(\omega_\ell(x))},$

where $\widehat{\mathcal{K}}_{\ell+}$ are the uniformly refined knots obtained from $\widehat{\mathcal{K}}_\ell$ by adding the midpoint of each element $\widehat{Q} \in \widehat{\mathcal{Q}}_\ell$ with multiplicity one (see Algorithm 3.1); the Faermann estimator

(3.5) $\eta_{\mathfrak{V},\mathrm{fae},\ell}^2 := \sum_{x\in\mathcal{N}_\ell} \eta_{\mathfrak{V},\mathrm{fae},\ell}(x)^2 \quad\text{with}\quad \eta_{\mathfrak{V},\mathrm{fae},\ell}(x)^2 := | f - \mathfrak{V}\Phi_\ell |^2_{H^{1/2}(\omega_\ell(x))};$

and the weighted-residual estimator

(3.6) $\eta_{\mathfrak{V},\mathrm{res},\ell}^2 := \sum_{x\in\mathcal{N}_\ell} \eta_{\mathfrak{V},\mathrm{res},\ell}(x)^2 \quad\text{with}\quad \eta_{\mathfrak{V},\mathrm{res},\ell}(x)^2 := \| h_\ell^{1/2}\,\partial_\Gamma(f - \mathfrak{V}\Phi_\ell) \|^2_{L^2(\omega_\ell(x))},$

which requires the additional regularity $f \in H^1(\Gamma)$ to ensure that $f - \mathfrak{V}\Phi_\ell \in H^1(\Gamma)$ and hence (3.6) is well-defined. If $f = (\tfrac12 + \mathfrak{K})u$ stems from a Dirichlet problem as in Section 1.4, Section 3.2 shows that this is satisfied for $u \in \widetilde{H}^1(\Gamma)$.

The computation of the estimators is implemented in HHEstWeak.c, FaerEstWeak.c, and ResEstWeak.c and can be called in Matlab via buildHHEstWeak, buildFaerEstWeak, and buildResEstWeak; see Appendix A. The residual estimators require the evaluation of the boundary integral operators 𝔙 and 𝔎 applied to some function, which is implemented in ResidualWeak.c. The evaluations can be used in Matlab via evalV and evalRHSWeak; see Appendix A. The implementation of the estimators and the evaluations are discussed in Section 5.
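For the $L^2$-based indicators (3.4) and (3.6), the node contribution is simply the sum of the element contributions over the (at most two) elements of the node patch $\omega_\ell(x)$. A minimal Matlab sketch of this accumulation (all names are illustrative; the element contributions themselves are computed by the C routines mentioned above):

% elemContrib(j): squared contribution of element Q_j, e.g. h|_{Q_j} * ||...||_{L2(Q_j)}^2
% elemNodes(j,:): indices of the two nodes of element Q_j
etaSq = zeros(nNodes, 1);
for j = 1:nElem
    etaSq(elemNodes(j, :)) = etaSq(elemNodes(j, :)) + elemContrib(j);  % add to both patches
end
eta = sqrt(sum(etaSq));          % total estimator eta_ell, cf. (3.4) and (3.6)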

3.5 Adaptive IGABEM Algorithm

To refine a given ansatz space $\mathcal{X}_\ell$, we compute one of the corresponding error estimators $\eta_\ell$ and determine a set of marked nodes with large error indicators $\eta_\ell(x)$. By default, we then apply the following refinement algorithm, which uses $h$-refinement controlling the maximal mesh ratio in the parameter domain

$\widehat{\kappa}_\ell := \max\Bigl\{ \frac{|\gamma^{-1}(Q)|}{|\gamma^{-1}(Q')|} : Q, Q' \in \mathcal{Q}_\ell \text{ with } Q \cap Q' \neq \emptyset \Bigr\}$

as well as multiplicity increase.

Algorithm 3.1

Input: polynomial degree $p \in \mathbb{N}_0$, initial maximal mesh ratio $\widehat{\kappa}_0$, knot vector $\widehat{\mathcal{K}}_\ell$ with $\widehat{\kappa}_\ell \le 2\widehat{\kappa}_0$, marked nodes $\mathcal{M}_\ell \subseteq \mathcal{N}_\ell$.

  1. Define the set of marked elements $\mathcal{M}_\ell^{\mathcal{Q}} := \emptyset$.

  2. If both nodes of an element $Q \in \mathcal{Q}_\ell$ belong to $\mathcal{M}_\ell$, mark $Q$ by adding it to $\mathcal{M}_\ell^{\mathcal{Q}}$.

  3. For all other nodes in $\mathcal{M}_\ell$, increase the multiplicity if it is less than or equal to $p$. Otherwise, mark the elements which contain one of these nodes by adding them to $\mathcal{M}_\ell^{\mathcal{Q}}$.

  4. Bisect all $Q \in \mathcal{M}_\ell^{\mathcal{Q}}$ in the parameter domain by inserting the midpoint of $\gamma^{-1}(Q)$ with multiplicity one into the current knot vector. Use a minimal number of further bisections to guarantee that the new knot vector $\widehat{\mathcal{K}}_+$ satisfies $\widehat{\kappa}_+ \le 2\widehat{\kappa}_0$.

Output: refined knot vector $\widehat{\mathcal{K}}_+$.

The marked elements $\mathcal{M}_\ell^{\mathcal{Q}}$ and the nodes whose multiplicity should be increased are determined in the function markNodesElements.m. The multiplicity increase and the computation of the new weights are realized in increaseMult.m. An optimal 1D bisection algorithm as in (iv) is discussed and analyzed in [4]. Together with the computation of the new weights, it is realized in refineBoundaryMesh.m. We stress that we have also implemented two further relevant strategies that rely on $h$-refinement only: replace (iii) by adding all elements $Q \in \mathcal{Q}_\ell$ containing one of the other nodes in $\mathcal{M}_\ell$ to $\mathcal{M}_\ell^{\mathcal{Q}}$; or replace (iii) by adding all elements $Q \in \mathcal{Q}_\ell$ containing one of the other nodes in $\mathcal{M}_\ell$ to $\mathcal{M}_\ell^{\mathcal{Q}}$ and insert the midpoints in (iv) with multiplicity $p+1$. The first strategy leads to refined splines of full regularity, whereas the second one leads to the lowest regularity.
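For illustration, the classification of marked nodes in steps (i)-(iii) can be sketched in a few lines of Matlab. The code below is not markNodesElements.m; all names are illustrative, and it reads "all other nodes" in step (iii) as those marked nodes that do not belong to an element already marked in step (ii).

% Hedged sketch of Algorithm 3.1 (i)-(iii):
% elemNodes:   nElem-by-2 array, row j holds the two node indices of element Q_j
% mult:        node multiplicities, markedNodes: indices of the marked nodes
function [markedElems, incNodes] = classifyMarking(elemNodes, mult, markedNodes, p)
    isMarked = false(size(mult));
    isMarked(markedNodes) = true;
    markedElems = find(all(isMarked(elemNodes), 2)).';         % (ii): both nodes marked
    restNodes   = markedNodes(~ismember(markedNodes, elemNodes(markedElems, :)));
    incNodes    = restNodes(mult(restNodes) <= p);             % (iii): increase multiplicity
    fullNodes   = restNodes(mult(restNodes) > p);              % (iii): full multiplicity reached
    extraElems  = find(any(ismember(elemNodes, fullNodes), 2)).';
    markedElems = unique([markedElems, extraElems]);           % mark adjacent elements instead
end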

We fix the considered error estimator $\eta_\ell \in \{ \eta_{\mathfrak{V},\mathrm{hh2},\ell},\, \eta_{\mathfrak{V},\mathrm{fae},\ell},\, \eta_{\mathfrak{V},\mathrm{res},\ell} \}$. The corresponding error indicators $\eta_\ell(x)$ are defined accordingly.

Algorithm 3.2

Input: polynomial degree $p \in \mathbb{N}_0$, initial knot vector $\widehat{\mathcal{K}}_0$, initial weights $\mathcal{W}_0$, marking parameter $0 < \theta \le 1$.

Adaptive loop: for each $\ell = 0, 1, 2, \dots$, iterate the following steps.

  1. Compute approximation $\Phi_\ell \in \mathcal{X}_\ell$ by solving (3.2).

  2. Compute refinement indicators $\eta_\ell(x)$ for all nodes $x \in \mathcal{N}_\ell$.

  3. Determine a minimal set of nodes $\mathcal{M}_\ell \subseteq \mathcal{N}_\ell$ such that

    (3.7) $\theta\,\eta_\ell^2 \le \sum_{x\in\mathcal{M}_\ell} \eta_\ell(x)^2.$

  4. Generate refined knot vector $\widehat{\mathcal{K}}_{\ell+1}$ via Algorithm 3.1.

Output: approximate solutions $\Phi_\ell$ and estimators $\eta_\ell$ for all $\ell \in \mathbb{N}_0$.

The adaptive algorithm is realized in the main function IGABEMWeak.m. The considered problem as well as the used parameters can be changed there by the user. In Section 5, we discuss how the arising (singular) boundary integrals are computed. We also mention that Algorithm 3.2 (iii) is realized in markNodesElements.m, which also determines the corresponding set of marked elements $\mathcal{M}_\ell^{\mathcal{Q}}$ and the nodes whose multiplicity should be increased from Algorithm 3.1 (ii)-(iii).
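The Dörfler-type marking criterion (3.7) admits a particularly simple realization: sorting the squared indicators in descending order and taking the smallest leading block whose sum reaches $\theta\,\eta_\ell^2$ yields a minimal set. The following Matlab lines are a hedged sketch of this step (not the IGABEM2D routine; all names are illustrative).

% Hedged sketch of the marking step (3.7):
% indicators(j) = eta_ell(x_j), theta in (0,1]; returns indices of marked nodes
function marked = doerflerMarking(indicators, theta)
    [sorted, perm] = sort(indicators.^2, 'descend');
    nMark  = find(cumsum(sorted) >= theta * sum(sorted), 1, 'first');
    marked = perm(1:nMark);
end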

3.6 Numerical Experiments

In this section, we empirically investigate the performance of Algorithm 3.2 for different geometries, ansatz spaces, and error estimators. Figure 1 shows the different geometries, namely a pacman geometry and a slit. The boundary of the pacman geometry can be parametrized via rational splines of degree 2, while the slit can be parametrized by splines of degree 1. As initial ansatz space, we either consider the same (rational) splines, i.e., $\widehat{\mathcal{K}}_0 = \widehat{\mathcal{K}}_\gamma$ and $\mathcal{W}_0 = \mathcal{W}_\gamma$, splines of degree $p$ and smoothness $C^{p-1}$, i.e.,

$\widehat{\mathcal{K}}_0 = (\hat{x}_{\gamma,1},\, \hat{x}_{\gamma,2},\, \dots,\, \hat{x}_{\gamma,n_\gamma-1},\, \underbrace{\hat{x}_{\gamma,n_\gamma},\, \dots,\, \hat{x}_{\gamma,n_\gamma}}_{\#=p+1})$

and $\mathcal{W}_0 = (1,\dots,1)$, or piecewise polynomials of degree $p$, i.e.,

$\widehat{\mathcal{K}}_0 = (\underbrace{\hat{x}_{\gamma,1},\dots,\hat{x}_{\gamma,1}}_{\#=p+1},\, \underbrace{\hat{x}_{\gamma,2},\dots,\hat{x}_{\gamma,2}}_{\#=p+1},\, \dots,\, \underbrace{\hat{x}_{\gamma,n_\gamma},\dots,\hat{x}_{\gamma,n_\gamma}}_{\#=p+1})$

and $\mathcal{W}_0 = (1,\dots,1)$. In the latter case, we always consider $h$-refinement with new knots having multiplicity $p+1$ as explained after Algorithm 3.1.

Figure 1: Geometries and initial nodes for examples from Section 3.6.

3.6.1 Singular Problem on Pacman Geometry

We prescribe an exact solution $P$ of the Laplace problem in polar coordinates $(x_1, x_2) = r(\cos\beta, \sin\beta)$ with $\beta \in (-\tfrac{\pi}{2\tau}, \tfrac{\pi}{2\tau})$ and $\tau := \tfrac{4}{7}$ as $P(x,y) := r^\tau\cos(\tau\beta)$ on the pacman domain, cf. Figure 1. The solution $\phi$ of the weakly singular integral equation (1.3) with $f = (\tfrac12 + \mathfrak{K})(P|_\Gamma)$ is just the normal derivative of $P$, which reads

$\phi(x,y) = \begin{pmatrix} \cos(\beta)\cos(\tau\beta) + \sin(\beta)\sin(\tau\beta) \\ \sin(\beta)\cos(\tau\beta) - \cos(\beta)\sin(\tau\beta) \end{pmatrix} \cdot \nu(x,y)\,\tau r^{\tau-1}$

and has a generic singularity at the origin.

Figure 2: Example from Section 3.6.1: distribution and histogram of the knots of the first step of Algorithm 3.2 where the error estimator is below $10^{-6}$, without multiplicity increase (top) and with multiplicity increase (bottom), for splines of order $p=2$ and the weighted-residual estimator with $\theta=0.75$.

Figure 2 shows the knot distribution (i.e., the relative number of knots lower than or equal to the parameter points on the $x$-axis) as well as a histogram of the knots in the parameter domain $[a,b]$ at the first step of Algorithm 3.2 where the residual error estimator is below $10^{-6}$. We compare pure $h$-refinement (top) to the proposed refinement strategy including multiplicity increase (bottom), i.e., Algorithm 3.1, where the knots with full multiplicity $p+1$ are plotted in red. Splines of order $p=2$ are used as ansatz spaces, and the weighted-residual estimator with $\theta=0.75$ is used to steer the adaptive refinement. It can be seen that Algorithm 3.2 heavily refines the mesh towards the reentrant corner $(0,0) \in \Gamma$, which corresponds to $\gamma(\tfrac12)$ and where $\phi$ has a singularity. The solution $\phi$ additionally has jumps at $\gamma(\tfrac13)$ and $\gamma(\tfrac23)$. Since we use splines of polynomial degree $p=2$ and there are no knots with multiplicity $p+1$ in $(0,1)$ for Algorithm 3.2 without multiplicity increase (top), we know that the resulting approximation is at least continuous at each knot, cf. Section 2.1, and thus cannot resolve these jumps appropriately. As a result, the algorithm uses $h$-refinement at these jump points to reduce the error. Since Algorithm 3.2 with multiplicity increase (bottom) allows for knots with full multiplicity $p+1$, i.e., it allows for jumps in the resulting approximation, the jumps at $\gamma(\tfrac13)$ and $\gamma(\tfrac23)$ are simply resolved by knots with full multiplicity.

Figure 3: Example from Section 3.6.1: (approximate) error $\|\phi - \Phi_\ell\|_{\mathfrak{V}}$ and error estimator $\eta_\ell$ with respect to the number of knots $N$ for $\theta \in \{0.75, 1\}$, $p \in \{0,1,2,3\}$, different ansatz spaces (piecewise polynomials, splines), and different estimators (Faermann, weighted-residual). Left: Faermann estimator. Right: weighted-residual estimator. Top: comparison of $p \in \{0,1,2,3\}$ for piecewise polynomials. Middle: comparison of $p \in \{0,1,2,3\}$ for splines. Bottom: comparison of $p \in \{1,2,3\}$ for different ansatz spaces.

In Figure 3, we compare uniform refinement with $\theta=1$ and adaptive refinement with $\theta=0.75$, different ansatz spaces, and different estimators for Algorithm 3.2. First (top), for $p=2$ fixed, we compare (an approximation of the) error for the different ansatz spaces and the algorithm steered either by the Faermann estimator (top, left) or the weighted-residual estimator (top, right). Instead of the exact error $\|\phi - \Phi_\ell\|_{\mathfrak{V}}$, we compute $\|\Phi_{\ell+} - \Phi_\ell\|_{\mathfrak{V}}$, where $\widehat{\mathcal{K}}_{\ell+}$ are the uniformly refined knots, which result from $\widehat{\mathcal{K}}_\ell$ by adding the midpoint of each element $\widehat{Q} \in \widehat{\mathcal{Q}}_\ell$ with multiplicity one (see also Algorithm 3.1). We get similar plots, where uniform mesh refinement leads to the suboptimal rate of convergence $\mathcal{O}(N^{-4/7})$ because the solution lacks regularity. However, for all adaptive cases, Algorithm 3.2 regains the optimal rate of convergence $\mathcal{O}(N^{-3/2-p})$, where splines and NURBS exhibit a significantly better multiplicative constant than piecewise polynomials. Next (middle), we consider the estimators for piecewise polynomials. For both the Faermann estimator (middle, left) and the weighted-residual estimator (middle, right), we compare different orders $p \in \{0,1,2,3\}$ for the ansatz spaces. Again, uniform mesh refinement leads to the suboptimal rate $\mathcal{O}(N^{-4/7})$ (only displayed for $p=0$), whereas the adaptive strategy regains the optimal order of convergence $\mathcal{O}(N^{-3/2-p})$. Lastly, we get similar results for splines (bottom).

3.6.2 Crack Problem on Slit Geometry

We consider a crack problem on the slit $\Gamma = [-1,1] \times \{0\}$, cf. Figure 1. Let $f$ in (3.1) be defined as $f(x,0) := -\tfrac{x}{2}$. Then the exact solution of (3.1) reads

$\phi(x,0) = \frac{-x}{\sqrt{1-x^2}}$

and has singularities at the tips $x = \pm 1$.

Figure 4: Example from Section 3.6.2: distribution and histogram of the knots of the first step of Algorithm 3.2 where the error estimator is below $10^{-6}$, without multiplicity increase (top) and with multiplicity increase (bottom), for splines of order $p=2$ and the weighted-residual estimator with $\theta=0.75$.

In Figure 4, the knot distribution (i.e., the relative number of knots lower than or equal to the parameter points on the $x$-axis) and a histogram of the knots in the parameter domain $[a,b]$ are plotted for the first step of Algorithm 3.2 where the residual error estimator is below $10^{-6}$. For splines of degree $p=2$ and the weighted-residual estimator with $\theta=0.75$, we compare Algorithm 3.2 without (top) and with (bottom) multiplicity increase. We see that, for this example, the difference between these two approaches is smaller than for the example of Section 3.6.1. Since the solution $\phi$ is continuous in $(a,b)$, the knot distribution looks quite similar for both cases. However, we can see that multiplicity increase takes place nonetheless (bottom), and Algorithm 3.2 refines towards the two tips $x = \pm 1$, where the solution has singularities.

Figure 5: Example from Section 3.6.2: (approximate) error $\|\phi - \Phi_\ell\|_{\mathfrak{V}}$ and error estimator $\eta_\ell$ with respect to the number of knots $N$ for $\theta \in \{0.75, 1\}$, $p \in \{0,1,2,3\}$, different ansatz spaces (piecewise polynomials, splines), and different estimators (Faermann, weighted-residual). Left: Faermann estimator. Right: weighted-residual estimator. Top: comparison of $p \in \{0,1,2,3\}$ for piecewise polynomials. Middle: comparison of $p \in \{0,1,2,3\}$ for splines. Bottom: comparison of $p \in \{1,2,3\}$ for different ansatz spaces.

In Figure 5, we compare uniform refinement with $\theta=1$ and adaptive refinement with $\theta=0.75$, different ansatz spaces, and different estimators for Algorithm 3.2. First (top), for $p=1$ fixed, we compare (an approximation of the) error for the different ansatz spaces and the algorithm steered either by the Faermann estimator (top, left) or the weighted-residual estimator (top, right). Instead of the exact error $\|\phi - \Phi_\ell\|_{\mathfrak{V}}$, we compute $\|\Phi_{\ell+} - \Phi_\ell\|_{\mathfrak{V}}$, where $\widehat{\mathcal{K}}_{\ell+}$ are the uniformly refined knots, which result from $\widehat{\mathcal{K}}_\ell$ by adding the midpoint of each element $\widehat{Q} \in \widehat{\mathcal{Q}}_\ell$ with multiplicity one (see also Algorithm 3.1). We get similar plots, where uniform mesh refinement leads to the suboptimal rate of convergence $\mathcal{O}(N^{-1/2})$ because the solution lacks regularity at the tips $x = \pm 1$. However, for all adaptive cases, Algorithm 3.2 regains the optimal rate of convergence $\mathcal{O}(N^{-3/2-p})$, where splines exhibit a better multiplicative constant than piecewise polynomials. Next (middle), we consider the estimators for piecewise polynomials. For both the Faermann estimator (middle, left) and the weighted-residual estimator (middle, right), we compare different orders $p \in \{0,1,2,3\}$ for the ansatz spaces. Again, uniform mesh refinement leads to the suboptimal rate $\mathcal{O}(N^{-1/2})$ (only displayed for $p=0$), whereas the adaptive strategy regains the optimal order of convergence $\mathcal{O}(N^{-3/2-p})$. Lastly, we get similar results for splines (bottom).

4 IGABEM2D for Hyper-singular Integral Equation

4.1 Sobolev Spaces

Besides the Sobolev spaces from Section 3.1, the treatment of the closed boundary $\Gamma = \partial\Omega$ in the hyper-singular case requires the definition of $H_0^{\pm\sigma}(\partial\Omega) := \{ u \in H^{\pm\sigma}(\partial\Omega) : \langle u, 1\rangle_{\partial\Omega} = 0 \}$ for $\sigma \in \{0, \tfrac12, 1\}$.

4.2 Hyper-singular Integral Equation

For $\sigma \in \{0, \tfrac12, 1\}$, the hyper-singular integral operator $\mathfrak{W} : \widetilde{H}^{\sigma}(\Gamma) \to H^{\sigma-1}(\Gamma)$ and the adjoint double-layer operator $\mathfrak{K}' : \widetilde{H}^{\sigma-1}(\Gamma) \to H^{\sigma-1}(\Gamma)$ are well-defined, linear, and continuous. Moreover, $\mathfrak{K}'$ is indeed the adjoint operator of $\mathfrak{K}$.

For $\Gamma \subsetneq \partial\Omega$ and $\sigma = \tfrac12$, $\mathfrak{W} : \widetilde{H}^{1/2}(\Gamma) \to H^{-1/2}(\Gamma)$ is symmetric and elliptic. Hence, $\langle u, v\rangle_{\mathfrak{W}} := \langle\mathfrak{W}u, v\rangle_\Gamma$ defines an equivalent scalar product on $\widetilde{H}^{1/2}(\Gamma)$ with corresponding norm $\|\cdot\|_{\mathfrak{W}}$.

For $\Gamma = \partial\Omega$, the operator $\mathfrak{W}$ is symmetric and elliptic up to the constant functions, i.e.,

$\mathfrak{W} : H_0^{1/2}(\partial\Omega) \to H_0^{-1/2}(\partial\Omega)$

is elliptic. In particular,

$\langle u, v\rangle_{\mathfrak{W}} := \langle\mathfrak{W}u, v\rangle_{\partial\Omega} + \langle u, 1\rangle_{\partial\Omega}\,\langle v, 1\rangle_{\partial\Omega}$

is an equivalent scalar product on $H^{1/2}(\partial\Omega) = \widetilde{H}^{1/2}(\partial\Omega)$ with corresponding energy norm $\|\cdot\|_{\mathfrak{W}}$. Additionally, there holds the mapping property $\mathfrak{K}' : H_0^{-1/2}(\partial\Omega) \to H_0^{-1/2}(\partial\Omega)$.

We assume either $g = \phi$ for some $\phi \in H^{-1/2}(\Gamma)$ with $\phi \in H_0^{-1/2}(\partial\Omega)$ if $\Gamma = \partial\Omega$, or $g = (\tfrac12 - \mathfrak{K}')\phi$ for some $\phi \in \widetilde{H}^{-1/2}(\Gamma)$ with $\phi \in H_0^{-1/2}(\partial\Omega)$ if $\Gamma = \partial\Omega$. Then the strong form (1.4) is equivalently stated as

$\langle u, v\rangle_{\mathfrak{W}} = \langle g, v\rangle_\Gamma \quad\text{for all } v \in \widetilde{H}^{1/2}(\Gamma).$

The Lax-Milgram lemma applies, and hence (1.4) admits a unique solution $u \in \widetilde{H}^{1/2}(\Gamma)$. If $g = \phi$, the approach is called indirect; otherwise, if $g = (\tfrac12 - \mathfrak{K}')\phi$, it is called direct. Details and proofs are found, e.g., in [37, 32, 46, 42].

4.3 Galerkin IGABEM

Let $p \in \mathbb{N}$ be a fixed polynomial degree. Moreover, let $\widehat{\mathcal{K}}_\ell$ and $\mathcal{W}_\ell$ be knots and weights as in Section 2.3. We suppose that the initial knots $\widehat{\mathcal{K}}_0$ additionally satisfy $\#_0 t_{0,i} \le p$ for $i \in \{1,\dots,N_0-p\}$ and that $w_{0,1-p} = w_{0,N_0-p}$ if $\Gamma = \partial\Omega$. Note that (2.2) shows that $\widehat{B}_{0,1-p,p}(a) = \widehat{B}_{0,N_0-p,p}(b-) = 1 = \widehat{B}_{\ell,1-p,p}(a) = \widehat{B}_{\ell,N_\ell-p,p}(b-)$. If $\Gamma = \partial\Omega$, the assumption $w_{0,1-p} = w_{0,N_0-p}$ thus ensures that $\widehat{W}_0(a) = \widehat{W}_0(b-)$, which further yields that $w_{\ell,1-p} = w_{\ell,N_\ell-p}$.

We introduce the ansatz space

$\mathcal{Y}_\ell := \begin{cases} \{ V \in \mathcal{S}^p(\mathcal{K}_\ell, \mathcal{W}_\ell) : V(\gamma(a)) = V(\gamma(b-)) \} \subset H^1(\Gamma) & \text{if } \Gamma = \partial\Omega,\\ \{ V \in \mathcal{S}^p(\mathcal{K}_\ell, \mathcal{W}_\ell) : 0 = V(\gamma(a)) = V(\gamma(b-)) \} \subset \widetilde{H}^1(\Gamma) & \text{if } \Gamma \subsetneq \partial\Omega. \end{cases}$

According to Section 2.1, it holds that

$\mathcal{Y}_\ell = \begin{cases} \operatorname{span}\bigl( \{ R_{\ell,i,p} : i = 2-p, \dots, N_\ell-p-1 \} \cup \{ R_{\ell,1-p,p} + R_{\ell,N_\ell-p,p} \} \bigr) & \text{if } \Gamma = \partial\Omega,\\ \operatorname{span}\{ R_{\ell,i,p} : i = 2-p, \dots, N_\ell-p-1 \} & \text{if } \Gamma \subsetneq \partial\Omega. \end{cases}$

In each case, the corresponding set even forms a basis of $\mathcal{Y}_\ell$. We abbreviate the (continuous) basis functions $R^c_{\ell,i,p} := R_{\ell,i,p}$ for $i = 2-p, \dots, N_\ell-p-1$ and $R^c_{\ell,1-p,p} := R_{\ell,1-p,p} + R_{\ell,N_\ell-p,p}$ for $i = 1-p$.
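For closed $\Gamma = \partial\Omega$, the basis of $\mathcal{Y}_\ell$ thus differs from the full NURBS basis only by gluing the first and the last basis function. One convenient, though by no means unique, way to organize this in Matlab is a sparse coupling matrix and a congruence transform; the sketch below is purely illustrative (IGABEM2D assembles the reduced system (4.2) directly in WMatrix.c), where Wfull and gfull denote the Galerkin matrix of the stabilized bilinear form $\langle\cdot,\cdot\rangle_{\mathfrak{W}}$ and the load vector with respect to the full basis $\{R_{\ell,1-p,p},\dots,R_{\ell,N_\ell-p,p}\}$, and nFull is the number of these basis functions.

% Hedged sketch: reduced hyper-singular system for closed Gamma via a coupling matrix.
% Column ordering of the reduced basis: interior functions first, glued function last.
T = sparse(nFull, nFull - 1);
T(2:nFull-1, 1:nFull-2) = speye(nFull - 2);   % interior basis functions are kept
T(1,     nFull - 1) = 1;                      % first ...
T(nFull, nFull - 1) = 1;                      % ... and last function are glued together
Wc   = T.' * Wfull * T;                       % Galerkin matrix in the basis of Y_ell
gc   = T.' * gfull;                           % reduced right-hand side
coef = Wc \ gc;                               % coefficients of U_ell in the reduced basis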

We assume the additional regularity $\phi \in L^2(\Gamma)$ and approximate $\phi$ by $\phi_\ell := \Pi_\ell\phi$, where $\Pi_\ell$ is the $L^2$-orthogonal projection onto the space of transformed piecewise polynomials

$\mathcal{P}^p(\mathcal{Q}_\ell) := \{ \widehat{\Psi} \circ \gamma^{-1} : \widehat{\Psi} \text{ is a piecewise polynomial of degree at most } p \text{ with breakpoints } \widehat{\mathcal{N}}_\ell \}.$

Note that $\langle\phi, 1\rangle_{\partial\Omega} = 0$ in the case of $\Gamma = \partial\Omega$ also yields that $\langle\phi_\ell, 1\rangle_{\partial\Omega} = 0$. With $g_\ell := \phi_\ell$ and $g_\ell := (\tfrac12 - \mathfrak{K}')\phi_\ell$, respectively, the corresponding Galerkin approximation $U_\ell \in \mathcal{Y}_\ell$ reads

(4.1) $\langle U_\ell, V_\ell\rangle_{\mathfrak{W}} = \langle g_\ell, V_\ell\rangle_\Gamma \quad\text{for all } V_\ell \in \mathcal{Y}_\ell.$

At least for the direct method, we empirically observed that working with $g$ instead of $g_\ell$ leads to strong implementational instabilities (since a possible singularity of $g$ meets the singularity of the integral kernels of the boundary integral operators).
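The projection $\Pi_\ell$ acts elementwise, so on each parameter element only a small $(p+1)\times(p+1)$ system has to be solved. The following Matlab sketch illustrates this; it is not the IGABEM2D routine PhiApprox.c, all names are illustrative, the parametrization $\gamma$ and its derivative are assumed to be given as function handles, and the fixed quadrature rule only approximates the weighted integrals when $|\gamma'|$ is not polynomial.

% Hedged sketch: elementwise L2(Gamma)-orthogonal projection of phi onto
% transformed piecewise polynomials of degree p (Legendre basis per element).
function coeffs = projectPiecewisePoly(phi, gamma, dgamma, breakpoints, p)
    nElem  = numel(breakpoints) - 1;
    coeffs = zeros(p + 1, nElem);                 % Legendre coefficients per element
    nq = p + 2;                                   % quadrature nodes per element
    % Gauss-Legendre rule on [-1,1] via the Golub-Welsch eigenvalue approach
    beta = 0.5 ./ sqrt(1 - (2*(1:nq-1)).^(-2));
    [V, D] = eig(diag(beta, 1) + diag(beta, -1));
    [xi, idx] = sort(diag(D));
    wq = 2 * V(1, idx)'.^2;
    % Legendre polynomial values at the quadrature nodes (row 1 of legendre() is P_q)
    L = zeros(nq, p + 1);
    for q = 0:p
        Pq = legendre(q, xi);
        L(:, q + 1) = Pq(1, :).';
    end
    for j = 1:nElem
        x0 = breakpoints(j);  x1 = breakpoints(j+1);
        t  = (x0 + x1)/2 + (x1 - x0)/2 * xi;                         % nodes in the element
        ds = (x1 - x0)/2 * wq .* arrayfun(@(s) norm(dgamma(s)), t);  % |gamma'| dt weights
        f  = arrayfun(@(s) phi(gamma(s)), t);                        % phi evaluated on Gamma
        G  = L.' * (ds .* L);                                        % local Gram matrix
        b  = L.' * (ds .* f);                                        % local load vector
        coeffs(:, j) = G \ b;                                        % local projection
    end
end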

Note that (4.1) is equivalent to solving the finite-dimensional linear system

(4.2) $\bigl( \langle R^c_{\ell,i',p},\, R^c_{\ell,i,p}\rangle_{\mathfrak{W}} \bigr)_{i,i'=k-p}^{N_\ell-p-1}\,\bigl( c_{\ell,i'} \bigr)_{i'=k-p}^{N_\ell-p-1} = \bigl( \langle g_\ell,\, R^c_{\ell,i,p}\rangle_\Gamma \bigr)_{i=k-p}^{N_\ell-p-1},$

where $k=1$ for $\Gamma = \partial\Omega$ and $k=2$ for $\Gamma \subsetneq \partial\Omega$. Then $U_\ell = \sum_{i=k-p}^{N_\ell-p-1} c_{\ell,i}\,R^c_{\ell,i,p}$. The computation of the matrix and the right-hand side vector in (4.2) is realized in WMatrix.c and RHSVectorHyp.c, where the required projection $\Pi_\ell$ is realized in PhiApprox.c. They can be called in Matlab via buildWMatrix, buildRHSVectorHyp, and buildPhiApprox; see Appendix A. For details on the implementation, we refer to Sections 5.1 and 5.2.

4.4 A Posteriori Error Estimation

In the hyper-singular case, we consider the following two different node-based estimators: the $(h-h/2)$-estimator

(4.3) $\eta_{\mathfrak{W},\mathrm{hh2},\ell}^2 := \sum_{x\in\mathcal{N}_\ell} \eta_{\mathfrak{W},\mathrm{hh2},\ell}(x)^2 \quad\text{with}\quad \eta_{\mathfrak{W},\mathrm{hh2},\ell}(x)^2 := \| h_\ell^{1/2}\,\partial_\Gamma(U_{\ell+} - U_\ell) \|^2_{L^2(\omega_\ell(x))},$

where $\widehat{\mathcal{K}}_{\ell+}$ are the uniformly refined knots, which result from $\widehat{\mathcal{K}}_\ell$ by adding the midpoint of each element $\widehat{Q} \in \widehat{\mathcal{Q}}_\ell$ with multiplicity one (see also Algorithm 3.1); and the weighted-residual estimator

(4.4) $\eta_{\mathfrak{W},\mathrm{res},\ell}^2 := \sum_{x\in\mathcal{N}_\ell} \eta_{\mathfrak{W},\mathrm{res},\ell}(x)^2 \quad\text{with}\quad \eta_{\mathfrak{W},\mathrm{res},\ell}(x)^2 := \| h_\ell^{1/2}(g_\ell - \mathfrak{W}U_\ell) \|^2_{L^2(\omega_\ell(x))},$

where the additional regularity $\phi \in L^2(\Gamma)$ ensures that $g_\ell - \mathfrak{W}U_\ell \in L^2(\Gamma)$; see Section 4.2. To account for the approximation of $g$ by $g_\ell$, we additionally consider the oscillations

(4.5) $\operatorname{osc}_{\mathfrak{W},\ell}^2 := \sum_{x\in\mathcal{N}_\ell} \operatorname{osc}_{\mathfrak{W},\ell}(x)^2 \quad\text{with}\quad \operatorname{osc}_{\mathfrak{W},\ell}(x)^2 := \| h_\ell^{1/2}(1 - \Pi_\ell)\phi \|^2_{L^2(\omega_\ell(x))}.$

The computation of the estimators and oscillations is implemented in HHEstHyp.c, ResEstHyp.c, and OscHyp.c. They can be called in Matlab via buildHHEstHyp, buildResEstHyp, and buildOscHyp. The residual estimator requires the evaluation of the boundary integral operators $\mathfrak{W}$ and $\mathfrak{K}'$ applied to some function, which is implemented in ResidualHyp.c. The evaluations can be used in Matlab via evalW and evalRHSHyp; see Appendix A. The implementation of the estimators and the evaluations is discussed in Section 5.

4.5 Adaptive IGABEM Algorithm

To enrich a given ansatz space $\mathcal{Y}_\ell$, we compute one of the error estimators $\eta_\ell$ and determine a set of marked nodes with large indicators $\eta_\ell(x)$. By default, we then apply an adapted version of the refinement Algorithm 3.1 with the following two obvious modifications. First, we only consider $p \in \mathbb{N}$ instead of $p \in \mathbb{N}_0$ as input. Second, in (iii), we only increase the multiplicity if it is less than or equal to $p-1$ to ensure that the basis functions are at least continuous. As before, the required marking is realized in markNodesElements.m. Together with the computation of the new weights, the $h$-refinement is realized in refineBoundaryMesh.m, while the multiplicity increase is realized in increaseMult.m. We stress that we have also implemented two further relevant strategies that rely on $h$-refinement only: replace (iii) by adding all elements $Q \in \mathcal{Q}_\ell$ containing one of the other nodes in $\mathcal{M}_\ell$ to $\mathcal{M}_\ell^{\mathcal{Q}}$; or replace (iii) by adding all elements $Q \in \mathcal{Q}_\ell$ containing one of the other nodes in $\mathcal{M}_\ell$ to $\mathcal{M}_\ell^{\mathcal{Q}}$ and insert the midpoints in (iv) with multiplicity $p$. The first strategy leads to refined splines of full regularity, whereas the second one leads to the lowest possible regularity, i.e., mere continuity.

We fix the considered error estimator $\eta_\ell \in \{ (\eta_{\mathfrak{W},\mathrm{hh2},\ell}^2 + \operatorname{osc}_{\mathfrak{W},\ell}^2)^{1/2},\ (\eta_{\mathfrak{W},\mathrm{res},\ell}^2 + \operatorname{osc}_{\mathfrak{W},\ell}^2)^{1/2} \}$. The corresponding error indicators $\eta_\ell(x)$ are defined accordingly.

Algorithm 4.1

Input: polynomial degree $p \in \mathbb{N}$, initial knot vector $\widehat{\mathcal{K}}_0$, initial weights $\mathcal{W}_0$, marking parameter $0 < \theta \le 1$.

Adaptive loop: for each $\ell = 0, 1, 2, \dots$, iterate the following steps.

  1. Compute approximation $U_\ell \in \mathcal{Y}_\ell$ by solving (4.1).

  2. Compute refinement indicators $\eta_\ell(x)$ for all nodes $x \in \mathcal{N}_\ell$.

  3. Determine a minimal set of nodes $\mathcal{M}_\ell \subseteq \mathcal{N}_\ell$ such that (3.7) holds.

  4. Generate refined knot vector $\widehat{\mathcal{K}}_{\ell+1}$ via the adapted version of Algorithm 3.1.

Output: approximate solutions $U_\ell$ and estimators $\eta_\ell$ for all $\ell \in \mathbb{N}_0$.

The adaptive algorithm is realized in IGABEMHyp.m. The considered problem as well as the used parameters can be changed there as desired. In Section 5, we discuss how the arising (singular) boundary integrals are computed. We also mention that Algorithm 4.1 (iii) is realized in markNodesElements.m, which also determines the corresponding set of marked elements $\mathcal{M}_\ell^{\mathcal{Q}}$ and the nodes whose multiplicity should be increased from Algorithm 3.1 (ii)-(iii).

4.6 Numerical Experiments

In this section, we empirically investigate the performance of Algorithm 4.1 for different ansatz spaces and error estimators. Figure 6 shows the different geometries, namely a circle and a heart geometry whose boundary can be parametrized via rational splines of degree 2. As initial ansatz space, we either consider the same rational splines, i.e., $\widehat{\mathcal{K}}_{0} = \widehat{\mathcal{K}}_{\gamma}$ and $\mathcal{W}_{0} = \mathcal{W}_{\gamma}$, splines of degree $p$ and smoothness $C^{p-1}$, i.e.,

$\widehat{\mathcal{K}}_{0} = (\hat{x}_{\gamma,1}, \hat{x}_{\gamma,2}, \dots, \hat{x}_{\gamma,n_{\gamma}-1}, \underbrace{\hat{x}_{\gamma,n_{\gamma}}, \dots, \hat{x}_{\gamma,n_{\gamma}}}_{\#=p+1})$

and $\mathcal{W}_{0} = (1, \dots, 1)$, or continuous piecewise polynomials of degree $p$, i.e.,

$\widehat{\mathcal{K}}_{0} = (\underbrace{\hat{x}_{\gamma,1}, \dots, \hat{x}_{\gamma,1}}_{\#=p}, \dots, \underbrace{\hat{x}_{\gamma,n_{\gamma}-1}, \dots, \hat{x}_{\gamma,n_{\gamma}-1}}_{\#=p}, \underbrace{\hat{x}_{\gamma,n_{\gamma}}, \dots, \hat{x}_{\gamma,n_{\gamma}}}_{\#=p+1})$

and $\mathcal{W}_{0} = (1, \dots, 1)$. In the latter case, we always consider $h$-refinement with new knots having multiplicity $p$, as described before Algorithm 4.1.
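The difference between the two spline spaces is only the multiplicity of the interior knots. A small MATLAB sketch, with hypothetical breakpoints, of how the two initial knot vectors can be assembled:

% Breakpoints \hat x_{gamma,1} < ... < \hat x_{gamma,n_gamma} (hypothetical values):
breaks = [0 0.25 0.5 0.75 1];
p = 3;                                                       % polynomial degree
% Splines of degree p and smoothness C^{p-1}: interior multiplicity 1
K0_smooth = [breaks(1:end-1), repmat(breaks(end), 1, p+1)];
% Continuous piecewise polynomials of degree p: interior multiplicity p
K0_pw = [repelem(breaks(1:end-1), p), repmat(breaks(end), 1, p+1)];
% In both cases, the initial weights W_0 are all equal to one.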

Figure 6: Geometry and initial nodes for examples from Section 4.6.

4.6.1 Singular Problem on Heart Geometry

Similar to Section 3.6.1, we prescribe an exact solution $P$ of the Laplace problem in polar coordinates $(x_{1}, x_{2}) = r(\cos\beta, \sin\beta)$ with $\beta \in (-\tfrac{3\pi}{2}, \tfrac{\pi}{2})$ as $P(x_{1}, x_{2}) := r^{2/3}\cos(\tfrac{2}{3}\beta)$ on the heart geometry, cf. Figure 6. The solution $u$ of the hyper-singular equation (1.4) with $g = (\tfrac{1}{2} - \mathfrak{K}')(\partial_{\nu}P)$ is just the restriction of $P$ to $\Gamma = \partial\Omega$, which has a generic singularity at the origin.

Figure 7: Example from Section 4.6.1: distribution and histogram of the knots at the first step of Algorithm 4.1 in which the error estimator is below $10^{-6}$, without multiplicity increase (top) as well as with multiplicity increase (bottom), for splines of order $p=3$ and the weighted-residual estimator with $\theta=0.5$.

In Figure 7, the knot distribution (i.e., the relative number of knots lower than or equal to the parameter points on the $x$-axis) as well as a histogram of the knots in the parameter domain $[a,b]$ at the first step of Algorithm 4.1 in which the residual error estimator is below $10^{-6}$ are plotted over the parameter domain. For splines of degree $p=3$ and the weighted-residual estimator with $\theta=0.5$, Algorithm 4.1 without multiplicity increase (top) and Algorithm 4.1 with multiplicity increase (bottom) both refine the mesh heavily towards $\gamma(\tfrac{1}{2}) = (0,0)$, where $u$ has a generic singularity. For the latter, we see that the multiplicity increase leads to a smoother knot distribution with less $h$-refinement near the singularity.
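Such plots are easy to produce from the knot vector alone; the following MATLAB sketch only illustrates how a knot distribution and histogram as in Figure 7 can be generated from a given vector of knots.

function plotKnotDistribution(knots)
% PLOTKNOTDISTRIBUTION  Knot distribution and histogram as in Figure 7.
knots = sort(knots(:));
subplot(2, 1, 1);
plot(knots, (1:numel(knots))' / numel(knots));   % relative number of knots <= parameter
xlabel('parameter'); ylabel('knot distribution');
subplot(2, 1, 2);
histogram(knots, 50);                            % histogram of the knots
xlabel('parameter'); ylabel('number of knots');
end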

Figure 8: Example from Section 4.6.1: (approximate) error $\lVert u - U_{\ell}\rVert_{\mathfrak{W}}$ and error estimator $\eta_{\ell}$ with oscillations with respect to the number of knots $N$ for $\theta=0.5$, $p \in \{1,2,3,4\}$, different ansatz spaces (continuous piecewise polynomials, splines, NURBS), and different estimators ($(h-\frac{h}{2})$, weighted-residual). Left: $(h-\frac{h}{2})$-estimator. Right: weighted-residual estimator. Top: comparison of $p \in \{1,2,3,4\}$ for continuous piecewise polynomials. Middle: comparison of $p \in \{1,2,3,4\}$ for splines. Bottom: comparison of $p \in \{2,3,4\}$ for different ansatz spaces.

In Figure 8, we compare uniform refinement with $\theta=1$ and adaptive refinement with $\theta=0.5$, different ansatz spaces, and different estimators for Algorithm 4.1. First (top), for $p=2$ fixed, we compare (an approximation of the) error for the different ansatz spaces and the algorithm steered either by the $(h-\frac{h}{2})$-estimator (top, left) or the weighted-residual estimator (top, right). Instead of the exact error $\lVert u - U_{\ell}\rVert_{\mathfrak{W}}$, we compute $\lVert U_{\ell+} - U_{\ell}\rVert_{\mathfrak{W}}$, where $\widehat{\mathcal{K}}_{\ell+}$ are the uniformly refined knots, which result from $\widehat{\mathcal{K}}_{\ell}$ by adding the midpoint of each element $\widehat{Q} \in \widehat{\mathcal{Q}}_{\ell}$ with multiplicity one (see also Algorithm 3.1). We get similar plots, where uniform mesh refinement leads to the suboptimal rate of convergence $\mathcal{O}(N^{-2/3})$ because the solution lacks regularity. However, for all adaptive cases, Algorithm 4.1 regains the optimal rate of convergence $\mathcal{O}(N^{-1/2-p})$, where splines exhibit a better multiplicative constant than piecewise polynomials. Next (middle), we consider the estimators for continuous piecewise polynomials. For both the $(h-\frac{h}{2})$-estimator (middle, left) and the weighted-residual estimator (middle, right), we compare different orders $p \in \{1,2,3,4\}$ for the ansatz spaces. Again, uniform mesh refinement leads to the suboptimal rate $\mathcal{O}(N^{-2/3})$ (only displayed for $p=1$), whereas the adaptive strategy regains the optimal order of convergence $\mathcal{O}(N^{-1/2-p})$. Lastly, we get similar results for splines (bottom).
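To relate such convergence plots to the expected rate $\mathcal{O}(N^{-1/2-p})$, one typically adds a reference line in a double-logarithmic plot. The following MATLAB sketch is only meant to show how such a comparison is produced; N and eta are assumed to be vectors of knot numbers and estimator values from a run of Algorithm 4.1.

function addRateReference(N, eta, p)
% ADDRATEREFERENCE  Plot estimator decay together with a reference line O(N^{-1/2-p}).
loglog(N, eta, 'o-'); hold on;
loglog(N, eta(1) * (N / N(1)).^(-1/2 - p), 'k--');   % reference line through the first data point
xlabel('number of knots N'); ylabel('error estimator');
legend('adaptive Algorithm 4.1', 'O(N^{-1/2-p})', 'Location', 'southwest');
end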

5 Implementational Aspects

5.1 Computation of Galerkin Matrices

The computation of the Galerkin matrices of (3.3) and (4.2) is realized in VMatrix.c and WMatrix.c, and can be called in Matlab via buildVMatrix and buildWMatrix. Recall that $\langle R_{\ell,i,p}^{c}, R_{\ell,i',p}^{c}\rangle_{\mathfrak{W}}$ involves the term $\langle R_{\ell,i,p}^{c}, 1\rangle_{\partial\Omega}\,\langle R_{\ell,i',p}^{c}, 1\rangle_{\partial\Omega}$ if $\Gamma = \partial\Omega$, which can be approximated by standard tensor-Gauss quadrature. For the term $\langle \mathfrak{W} R_{\ell,i,p}^{c}, R_{\ell,i',p}^{c}\rangle_{\Gamma}$, we employ Maue's formula

(5.1) $\langle \mathfrak{W} u, v\rangle_{\Gamma} = \langle \mathfrak{V}\,\partial_{\Gamma} u, \partial_{\Gamma} v\rangle_{\Gamma} = \langle \partial_{\Gamma} u, \partial_{\Gamma} v\rangle_{\mathfrak{V}} \quad \text{for all } u, v \in \widetilde{H}^{1}(\Gamma);$

see [36]. Note that $\partial_{\Gamma} R_{\ell,i,p}^{c}$ and $\partial_{\Gamma} R_{\ell,i',p}^{c}$ can be easily calculated via (2.3) and (2.5). Thus, it is sufficient to explain how to compute $\langle \phi, \psi\rangle_{\mathfrak{V}}$ for (at least) $\mathcal{Q}_{\ell}$-piecewise continuous functions $\phi, \psi \in \widetilde{H}^{-1/2}(\Gamma)$.

We start with

$\langle \phi, \psi\rangle_{\mathfrak{V}} = \int_{\Gamma}\int_{\Gamma} \phi(y)\,\psi(x)\,G(x,y)\,\mathrm{d}y\,\mathrm{d}x = \sum_{Q \in \mathcal{Q}_{\ell}} \sum_{Q' \in \mathcal{Q}_{\ell}} \int_{Q}\int_{Q'} \phi(y)\,\psi(x)\,G(x,y)\,\mathrm{d}y\,\mathrm{d}x.$

Note that only elements $Q$ and $Q'$ that are contained in the supports of $\psi$ and $\phi$, respectively, contribute to the sum. Recall from (2.1) that (derivatives of) NURBS have local support. Let $Q, Q' \in \mathcal{Q}_{\ell}$ with $Q = \gamma(\widehat{Q})$, $\widehat{Q} = [\hat{x}_{\ell,j-1}, \hat{x}_{\ell,j}]$, $Q' = \gamma(\widehat{Q}')$, $\widehat{Q}' = [\hat{x}_{\ell,j'-1}, \hat{x}_{\ell,j'}]$ for some $j, j' \in \{1, \dots, n_{\ell}\}$. Then elementary integral transformations show that

$\int_{Q}\int_{Q'} \phi(y)\,\psi(x)\,G(x,y)\,\mathrm{d}y\,\mathrm{d}x = \int_{\widehat{Q}}\int_{\widehat{Q}'} \phi(\gamma(t))\,\psi(\gamma(s))\,G(\gamma(s),\gamma(t))\,|\gamma'(t)|\,|\gamma'(s)|\,\mathrm{d}t\,\mathrm{d}s = \int_{0}^{1}\int_{0}^{1} \phi(\gamma_{j'}(\tau))\,\psi(\gamma_{j}(\sigma))\,G(\gamma_{j}(\sigma),\gamma_{j'}(\tau))\,|\gamma_{j'}'(\tau)|\,|\gamma_{j}'(\sigma)|\,\mathrm{d}\tau\,\mathrm{d}\sigma,$

where $\gamma_{j}(\sigma) := \gamma(\hat{x}_{\ell,j-1} + \sigma(\hat{x}_{\ell,j} - \hat{x}_{\ell,j-1}))$ and $\gamma_{j'}(\tau) := \gamma(\hat{x}_{\ell,j'-1} + \tau(\hat{x}_{\ell,j'} - \hat{x}_{\ell,j'-1}))$. We abbreviate

$\tilde{\phi}(\tau) := \phi(\gamma_{j'}(\tau))\,|\gamma_{j'}'(\tau)| \quad \text{and} \quad \tilde{\psi}(\sigma) := \psi(\gamma_{j}(\sigma))\,|\gamma_{j}'(\sigma)|,$

which gives

$\int_{Q}\int_{Q'} \phi(y)\,\psi(x)\,G(x,y)\,\mathrm{d}y\,\mathrm{d}x = \int_{0}^{1}\int_{0}^{1} \tilde{\phi}(\tau)\,\tilde{\psi}(\sigma)\,G(\gamma_{j}(\sigma),\gamma_{j'}(\tau))\,\mathrm{d}\tau\,\mathrm{d}\sigma.$

Now, we distinguish three cases.

Case 1 (No Intersection)

We assume that $\overline{Q} \cap \overline{Q'} = \emptyset$. In this case, the integrand is (at least) continuous and we can use standard tensor-Gauss quadrature to approximate the integral.
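For completeness, a tensor-Gauss rule on the unit square is simply the tensor product of a one-dimensional Gauss rule. The following MATLAB sketch shows the idea; nodes1d and weights1d are any 1D rule on $[0,1]$ (in IGABEM2D provided by GaussData.m), and f must accept matrix arguments elementwise.

function I = tensorGauss01(f, nodes1d, weights1d)
% TENSORGAUSS01  Approximate int_0^1 int_0^1 f(sigma,tau) dtau dsigma by a
% tensor-product Gauss rule built from a 1D rule on [0,1].
[S, T] = meshgrid(nodes1d, nodes1d);    % tensor grid of quadrature points
W = weights1d(:) * weights1d(:).';      % tensor product of the weights
I = sum(sum(W .* f(S, T)));             % weighted sum of point evaluations
end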

Case 2 (Identical Elements)

We assume that $Q = Q'$, which also implies that $j = j'$. We split the integral into two summands, use Fubini's theorem as well as the reflection $(\sigma,\tau) \mapsto (\tau,\sigma)$ for the second one, and use the Duffy transformation $(\sigma,\tau) \mapsto (\sigma,\sigma\tau)$ with Jacobian determinant $\sigma$ (see Figure 9 for a visualization):

$\int_{0}^{1}\int_{0}^{1} \tilde{\phi}(\tau)\,\tilde{\psi}(\sigma)\,G(\gamma_{j}(\sigma),\gamma_{j}(\tau))\,\mathrm{d}\tau\,\mathrm{d}\sigma$
$= \int_{0}^{1}\int_{0}^{\sigma} \tilde{\phi}(\tau)\,\tilde{\psi}(\sigma)\,G(\gamma_{j}(\sigma),\gamma_{j}(\tau))\,\mathrm{d}\tau\,\mathrm{d}\sigma + \int_{0}^{1}\int_{\sigma}^{1} \tilde{\phi}(\tau)\,\tilde{\psi}(\sigma)\,G(\gamma_{j}(\sigma),\gamma_{j}(\tau))\,\mathrm{d}\tau\,\mathrm{d}\sigma$
$= \int_{0}^{1}\int_{0}^{\sigma} \tilde{\phi}(\tau)\,\tilde{\psi}(\sigma)\,G(\gamma_{j}(\sigma),\gamma_{j}(\tau))\,\mathrm{d}\tau\,\mathrm{d}\sigma + \int_{0}^{1}\int_{0}^{\sigma} \tilde{\phi}(\sigma)\,\tilde{\psi}(\tau)\,G(\gamma_{j}(\tau),\gamma_{j}(\sigma))\,\mathrm{d}\tau\,\mathrm{d}\sigma$
$= \int_{0}^{1}\int_{0}^{1} \big(\tilde{\phi}(\sigma\tau)\,\tilde{\psi}(\sigma)\,G(\gamma_{j}(\sigma),\gamma_{j}(\sigma\tau)) + \tilde{\phi}(\sigma)\,\tilde{\psi}(\sigma\tau)\,G(\gamma_{j}(\sigma\tau),\gamma_{j}(\sigma))\big)\,\sigma\,\mathrm{d}\tau\,\mathrm{d}\sigma.$

Recall that $G(x,y) = -\frac{1}{2\pi}\log|x-y|$. Thus, we further obtain (with the transformation $(\sigma,\tau)\mapsto(\sigma,1-\tau)$ for the last integral) that

$= -\frac{1}{2\pi}\int_{0}^{1}\int_{0}^{1} \big(\tilde{\phi}(\sigma\tau)\,\tilde{\psi}(\sigma) + \tilde{\phi}(\sigma)\,\tilde{\psi}(\sigma\tau)\big)\log\Big(\frac{|\gamma_{j}(\sigma)-\gamma_{j}(\sigma\tau)|}{\sigma-\sigma\tau}\,(\sigma-\sigma\tau)\Big)\,\sigma\,\mathrm{d}\tau\,\mathrm{d}\sigma$
$= -\frac{1}{2\pi}\int_{0}^{1}\int_{0}^{1} \big(\tilde{\phi}(\sigma\tau)\,\tilde{\psi}(\sigma) + \tilde{\phi}(\sigma)\,\tilde{\psi}(\sigma\tau)\big)\log\Big(\frac{|\gamma_{j}(\sigma)-\gamma_{j}(\sigma\tau)|}{\sigma-\sigma\tau}\Big)\,\sigma\,\mathrm{d}\tau\,\mathrm{d}\sigma - \frac{1}{2\pi}\int_{0}^{1}\int_{0}^{1} \big(\tilde{\phi}(\sigma\tau)\,\tilde{\psi}(\sigma) + \tilde{\phi}(\sigma)\,\tilde{\psi}(\sigma\tau)\big)\log(\sigma)\,\sigma\,\mathrm{d}\tau\,\mathrm{d}\sigma - \frac{1}{2\pi}\int_{0}^{1}\int_{0}^{1} \big(\tilde{\phi}(\sigma(1-\tau))\,\tilde{\psi}(\sigma) + \tilde{\phi}(\sigma)\,\tilde{\psi}(\sigma(1-\tau))\big)\log(\tau)\,\sigma\,\mathrm{d}\tau\,\mathrm{d}\sigma.$

With the fundamental theorem of calculus and the fact that $\gamma$ is piecewise smooth with $|\gamma'| > 0$, it is easy to see that $(\sigma,\tau) \mapsto |\gamma_{j}(\sigma)-\gamma_{j}(\sigma\tau)|/(\sigma-\sigma\tau)$ is smooth and (uniformly) larger than 0; see [23, Lemma 5.2] for details. Thus, one can use standard tensor-Gauss quadrature to approximate the first integral. Note that the term $\log(\sigma)\,\sigma$ is continuous, but not even $C^{1}$. Hence, we use tensor-Gauss quadrature with weight function $\log(\sigma)$ and $\log(\tau)$ for the second and third integral, respectively.
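For illustration, the second of the three integrals above combines a Gauss rule for the weight $\log(\sigma)$ in $\sigma$ with a standard Gauss rule in $\tau$. The following MATLAB sketch assumes that such 1D rules on $[0,1]$ are available (in IGABEM2D they are provided by GaussData.m and LogGaussData.m) and that phiT and psiT are hypothetical function handles for the transformed densities $\tilde{\phi}$ and $\tilde{\psi}$.

function I2 = logWeightedPart(phiT, psiT, sStd, wStd, sLog, wLog)
% LOGWEIGHTEDPART  Approximate the second (log-weighted) integral of Case 2:
%   -1/(2*pi) * int_0^1 int_0^1 (phiT(s*t)*psiT(s)+phiT(s)*psiT(s*t)) * log(s) * s dt ds,
% combining a rule with int_0^1 log(s) f(s) ds ~ sum(wLog .* f(sLog)) in s
% with a standard Gauss rule (sStd, wStd) in t.
I2 = 0;
for i = 1:numel(sLog)                 % log-weighted rule in sigma
    s = sLog(i);
    for k = 1:numel(sStd)             % standard rule in tau
        t = sStd(k);
        smooth = (phiT(s*t) * psiT(s) + phiT(s) * psiT(s*t)) * s;   % smooth factor of the integrand
        I2 = I2 + wLog(i) * wStd(k) * smooth;
    end
end
I2 = -I2 / (2*pi);                    % prefactor -1/(2*pi) of the kernel
end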

Figure 9: Visualization of the integral transformations from Cases 2 and 3 in Section 5.1: the reflection $(\sigma,\tau)\mapsto(\tau,\sigma)$ for the upper triangle and the (inverse) Duffy transformation $(\sigma,\tau)\mapsto(\sigma,\tau/\sigma)$ for both triangles. The colors indicate the sets onto which each of the colored points and lines are mapped.

Case 3 (Common Node)

We assume that $\overline{Q} \cap \overline{Q'}$ contains only one point. Without loss of generality, we further assume that the singularity is at $(\sigma,\tau) = (0,1)$, i.e., $\hat{x}_{\ell,j-1} = \hat{x}_{\ell,j'}$ or $\hat{x}_{\ell,j-1} = a$ and $\hat{x}_{\ell,j'} = b$ if $\Gamma = \partial\Omega$. We rotate the integration domain by $\frac{\pi}{2}$, i.e., $(\sigma,\tau) \mapsto (\tau, 1-\sigma)$, which transforms the singularity to $(\sigma,\tau) = (0,0)$, and then employ the same transformations as in Case 2 (see also Figure 9):

$\int_{0}^{1}\int_{0}^{1} \tilde{\phi}(\tau)\,\tilde{\psi}(\sigma)\,G(\gamma_{j}(\sigma),\gamma_{j'}(\tau))\,\mathrm{d}\tau\,\mathrm{d}\sigma = \int_{0}^{1}\int_{0}^{1} \tilde{\phi}(1-\sigma)\,\tilde{\psi}(\tau)\,G(\gamma_{j}(\tau),\gamma_{j'}(1-\sigma))\,\mathrm{d}\tau\,\mathrm{d}\sigma = \int_{0}^{1}\int_{0}^{1} \big(\tilde{\phi}(1-\sigma)\,\tilde{\psi}(\sigma\tau)\,G(\gamma_{j}(\sigma\tau),\gamma_{j'}(1-\sigma)) + \tilde{\phi}(1-\sigma\tau)\,\tilde{\psi}(\sigma)\,G(\gamma_{j}(\sigma),\gamma_{j'}(1-\sigma\tau))\big)\,\sigma\,\mathrm{d}\tau\,\mathrm{d}\sigma.$

Recall that $G(x,y) = -\frac{1}{2\pi}\log|x-y|$. Thus, we further obtain that

$= -\frac{1}{2\pi}\int_{0}^{1}\int_{0}^{1} \tilde{\phi}(1-\sigma)\,\tilde{\psi}(\sigma\tau)\log\Big(\frac{|\gamma_{j}(\sigma\tau)-\gamma_{j'}(1-\sigma)|}{\sigma}\,\sigma\Big)\,\sigma\,\mathrm{d}\tau\,\mathrm{d}\sigma - \frac{1}{2\pi}\int_{0}^{1}\int_{0}^{1} \tilde{\phi}(1-\sigma\tau)\,\tilde{\psi}(\sigma)\log\Big(\frac{|\gamma_{j}(\sigma)-\gamma_{j'}(1-\sigma\tau)|}{\sigma}\,\sigma\Big)\,\sigma\,\mathrm{d}\tau\,\mathrm{d}\sigma$
$= -\frac{1}{2\pi}\int_{0}^{1}\int_{0}^{1} \tilde{\phi}(1-\sigma)\,\tilde{\psi}(\sigma\tau)\log\Big(\frac{|\gamma_{j}(\sigma\tau)-\gamma_{j'}(1-\sigma)|}{\sigma}\Big)\,\sigma\,\mathrm{d}\tau\,\mathrm{d}\sigma - \frac{1}{2\pi}\int_{0}^{1}\int_{0}^{1} \tilde{\phi}(1-\sigma\tau)\,\tilde{\psi}(\sigma)\log\Big(\frac{|\gamma_{j}(\sigma)-\gamma_{j'}(1-\sigma\tau)|}{\sigma}\Big)\,\sigma\,\mathrm{d}\tau\,\mathrm{d}\sigma - \frac{1}{2\pi}\int_{0}^{1}\int_{0}^{1} \big(\tilde{\phi}(1-\sigma)\,\tilde{\psi}(\sigma\tau) + \tilde{\phi}(1-\sigma\tau)\,\tilde{\psi}(\sigma)\big)\log(\sigma)\,\sigma\,\mathrm{d}\tau\,\mathrm{d}\sigma.$

With the fundamental theorem of calculus and the fact that $\gamma$ is piecewise smooth with $|\gamma'| > 0$, it is easy to see that $(\sigma,\tau) \mapsto |\gamma_{j}(\sigma\tau)-\gamma_{j'}(1-\sigma)|/\sigma$ and $(\sigma,\tau) \mapsto |\gamma_{j}(\sigma)-\gamma_{j'}(1-\sigma\tau)|/\sigma$ are smooth and (uniformly) larger than 0; see [23, Lemma 5.3] for details. Thus, one can use standard tensor-Gauss quadrature to approximate the first and second integral. Note that the term $\log(\sigma)\,\sigma$ is continuous, but not even $C^{1}$. Hence, we use tensor-Gauss quadrature with weight function $\log(\sigma)$ for the third integral.

5.2 Computation of Right-Hand Side Vectors

The computation of the right-hand side vectors of (3.3) and (4.2) is realized in RHSVectorWeak.c and RHSVectorHyp.c, where the required projection $\Pi_{\ell}$ is realized in PhiApprox.c. They can be called in Matlab via buildRHSVectorWeak, buildRHSVectorHyp, and buildPhiApprox. Clearly, the terms $\langle \frac{u}{2}, R_{\ell,i,p}\rangle_{\Gamma}$ and $\langle \frac{\Pi_{\ell}\phi}{2}, R_{\ell,i,p}^{c}\rangle_{\Gamma}$ can be approximated by standard tensor-Gauss quadrature. For the term $\langle \mathfrak{K}'(\Pi_{\ell}\phi), R_{\ell,i,p}^{c}\rangle_{\Gamma}$, we use the fact that $\mathfrak{K}'$ is the adjoint operator of $\mathfrak{K}$, i.e., $\langle \mathfrak{K}'\Pi_{\ell}\phi, R_{\ell,i,p}^{c}\rangle_{\Gamma} = \langle \mathfrak{K} R_{\ell,i,p}^{c}, \Pi_{\ell}\phi\rangle_{\Gamma}$. Thus, it is sufficient to explain how to compute $\langle \mathfrak{K} u, \psi\rangle_{\Gamma}$ for some (at least) $\mathcal{Q}_{\ell}$-piecewise continuous functions $u \in \widetilde{H}^{1/2}(\Gamma)$, $\psi \in \widetilde{H}^{-1/2}(\Gamma)$. For $x, y \in \Gamma$, we abbreviate

(5.2) $G_{\nu}(x,y) := \frac{\partial}{\partial\nu(y)} G(x,y) = \frac{1}{2\pi}\,\frac{(x-y)\cdot\nu(y)}{|x-y|^{2}}.$
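As a small aid for reading the formulas below, the Laplace kernel and its normal derivative from (5.2) can be written as anonymous MATLAB functions; here x and y are points in the plane and ny is the unit normal at y.

% 2D Laplace fundamental solution G and its normal derivative G_nu from (5.2):
G   = @(x, y)     -log(norm(x - y)) / (2*pi);
Gnu = @(x, y, ny)  dot(x - y, ny) / (2*pi * norm(x - y)^2);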

We start with

$\langle \mathfrak{K} u, \psi\rangle_{\Gamma} = \int_{\Gamma}\int_{\Gamma} u(y)\,\psi(x)\,G_{\nu}(x,y)\,\mathrm{d}y\,\mathrm{d}x = \sum_{Q \in \mathcal{Q}_{\ell}} \sum_{Q' \in \mathcal{Q}_{\ell}} \int_{Q}\int_{Q'} u(y)\,\psi(x)\,G_{\nu}(x,y)\,\mathrm{d}y\,\mathrm{d}x.$

Note that only elements $Q$ and $Q'$ that are contained in the supports of $\psi$ and $u$, respectively, contribute to the sum. Recall from (2.1) that NURBS have local support. Let $Q, Q' \in \mathcal{Q}_{\ell}$ with $Q = \gamma(\widehat{Q})$, $\widehat{Q} = [\hat{x}_{\ell,j-1}, \hat{x}_{\ell,j}]$, $Q' = \gamma(\widehat{Q}')$, $\widehat{Q}' = [\hat{x}_{\ell,j'-1}, \hat{x}_{\ell,j'}]$ for some $j, j' \in \{1, \dots, n_{\ell}\}$. As in Section 5.1, elementary integral transformations show that

$\int_{Q}\int_{Q'} u(y)\,\psi(x)\,G_{\nu}(x,y)\,\mathrm{d}y\,\mathrm{d}x = \int_{0}^{1}\int_{0}^{1} \tilde{u}(\tau)\,\tilde{\psi}(\sigma)\,G_{\nu}(\gamma_{j}(\sigma),\gamma_{j'}(\tau))\,\mathrm{d}\tau\,\mathrm{d}\sigma,$

where $\gamma_{j}(\sigma) := \gamma(\hat{x}_{\ell,j-1} + \sigma(\hat{x}_{\ell,j} - \hat{x}_{\ell,j-1}))$, $\gamma_{j'}(\tau) := \gamma(\hat{x}_{\ell,j'-1} + \tau(\hat{x}_{\ell,j'} - \hat{x}_{\ell,j'-1}))$, $\tilde{u}(\tau) := u(\gamma_{j'}(\tau))\,|\gamma_{j'}'(\tau)|$, and $\tilde{\psi}(\sigma) := \psi(\gamma_{j}(\sigma))\,|\gamma_{j}'(\sigma)|$. Now, we distinguish three cases.

Case 1 (No Intersection)

We assume that Q Q = . In this case, the integrand is (at least) continuous and we can use standard tensor-Gauss quadrature to approximate the integral.

Case 2 (Identical Elements)

We assume that $Q = Q'$, which also implies that $j = j'$. Note that Taylor expansion together with the fact that $\gamma$ is piecewise $C^{2}$ shows that

(5.3) $(x-y)\cdot\nu(y) = \mathcal{O}(|x-y|^{2}) \quad \text{for all } x, y \in Q;$

see, e.g., [23, equation (5.4)] for a detailed calculation. Since $\gamma$ is even piecewise $C^{\infty}$, this argument even yields that $(\sigma,\tau) \mapsto G_{\nu}(\gamma_{j}(\sigma),\gamma_{j}(\tau))$ is smooth. Thus, in principle, one can use standard tensor-Gauss quadrature to approximate the integral. However, to avoid the computation of $G_{\nu}(\gamma_{j}(\sigma),\gamma_{j}(\tau))$ in the limit case $\sigma = \tau$, we make the same steps as in Case 2 of Section 5.1, i.e., we split the integral into two summands, use Fubini's theorem as well as the reflection $(\sigma,\tau)\mapsto(\tau,\sigma)$ for the second one, and use the Duffy transformation $(\sigma,\tau)\mapsto(\sigma,\sigma\tau)$ with Jacobian determinant $\sigma$ (see Figure 9 for a visualization). Altogether, this results in

$\int_{0}^{1}\int_{0}^{1} \tilde{u}(\tau)\,\tilde{\psi}(\sigma)\,G_{\nu}(\gamma_{j}(\sigma),\gamma_{j}(\tau))\,\mathrm{d}\tau\,\mathrm{d}\sigma = \int_{0}^{1}\int_{0}^{1} \big(\tilde{u}(\sigma\tau)\,\tilde{\psi}(\sigma)\,G_{\nu}(\gamma_{j}(\sigma),\gamma_{j}(\sigma\tau)) + \tilde{u}(\sigma)\,\tilde{\psi}(\sigma\tau)\,G_{\nu}(\gamma_{j}(\sigma\tau),\gamma_{j}(\sigma))\big)\,\sigma\,\mathrm{d}\tau\,\mathrm{d}\sigma.$

In particular, this shifts the limit case to the boundary parts $\sigma = 0$ and $\tau = 1$, while tensor-Gauss quadrature only evaluates the integrand in the interior of the unit square.

Remark 5.1

We mention that the smoothness of ( σ , τ ) G ν ( γ j ( σ ) , γ j ( τ ) ) hinges on the considered Laplace equation. It is also satisfied for the Helmholtz equation, but fails, for instance, for the Lamé equation from linear elasticity. Nevertheless, the same transformations as above yield a final integrand that is (at least) continuous if the Lamé equation is considered.

Case 3 (Common Node)

We assume that $\overline{Q} \cap \overline{Q'}$ contains only one point. Without loss of generality, we further assume that the singularity is at $(\sigma,\tau) = (0,1)$, i.e., $\hat{x}_{\ell,j-1} = \hat{x}_{\ell,j'}$ or $\hat{x}_{\ell,j-1} = a$ and $\hat{x}_{\ell,j'} = b$ if $\Gamma = \partial\Omega$. We make the same steps as in Case 3 of Section 5.1, i.e., we rotate the integration domain by $\frac{\pi}{2}$, i.e., $(\sigma,\tau)\mapsto(\tau,1-\sigma)$, which transforms the singularity to $(\sigma,\tau)=(0,0)$, and then employ the same transformations as in Case 2 (see also Figure 9). Altogether, this results in

$\int_{0}^{1}\int_{0}^{1} \tilde{u}(\tau)\,\tilde{\psi}(\sigma)\,G_{\nu}(\gamma_{j}(\sigma),\gamma_{j'}(\tau))\,\mathrm{d}\tau\,\mathrm{d}\sigma = \int_{0}^{1}\int_{0}^{1} \big(\tilde{u}(1-\sigma)\,\tilde{\psi}(\sigma\tau)\,G_{\nu}(\gamma_{j}(\sigma\tau),\gamma_{j'}(1-\sigma)) + \tilde{u}(1-\sigma\tau)\,\tilde{\psi}(\sigma)\,G_{\nu}(\gamma_{j}(\sigma),\gamma_{j'}(1-\sigma\tau))\big)\,\sigma\,\mathrm{d}\tau\,\mathrm{d}\sigma.$

Note that $G_{\nu}(\gamma_{j}(\sigma\tau),\gamma_{j'}(1-\sigma))\,\sigma = \mathcal{O}(1)$ and $G_{\nu}(\gamma_{j}(\sigma),\gamma_{j'}(1-\sigma\tau))\,\sigma = \mathcal{O}(1)$. In particular, one can show with the fundamental theorem of calculus and the fact that $\gamma$ is piecewise $C^{1}$ with $|\gamma'| > 0$ that these terms are (at least) continuous. Thus, one can use standard tensor-Gauss quadrature to approximate the integral.

5.3 Evaluation of Single-Layer Operator 𝔙

Let $\widehat{\mathcal{K}}_{\ell}$ be a knot vector. Let $x \in \Gamma$ and $s \in [a,b]$ with $x = \gamma(s)$. Moreover, let $Q \in \mathcal{Q}_{\ell}$ with $x \in Q$ and $s \in \widehat{Q} = [\hat{x}_{\ell,j-1}, \hat{x}_{\ell,j}]$ with $\gamma(\widehat{Q}) = Q$. For $\phi \in \widetilde{H}^{-1/2}(\Gamma)$ (at least) $\mathcal{Q}_{\ell}$-piecewise continuous, we want to evaluate $[\mathfrak{V}\phi](x)$. These values are required to compute the Faermann estimator in Section 5.8 and the weighted-residual estimator in Section 5.9. We also mention that they can be used to implement a collocation method instead of a Galerkin method to approximate the solution of (1.3). The evaluation is realized in ResidualWeak.c and can be called in Matlab via evalV.

We start with

$[\mathfrak{V}\phi](x) = \int_{\Gamma} \phi(y)\,G(x,y)\,\mathrm{d}y = \sum_{\widehat{Q}' \in \widehat{\mathcal{Q}}_{\ell}} \int_{\widehat{Q}'} \phi(\gamma(t))\,G(\gamma(s),\gamma(t))\,|\gamma'(t)|\,\mathrm{d}t.$

For all $\widehat{Q}' \in \widehat{\mathcal{Q}}_{\ell}$ with $\widehat{Q}' \neq \widehat{Q}$, the integrands are (at least) continuous and we can use standard Gauss quadrature after transforming the integration domain to the unit interval. Recall that $G(x,y) = -\frac{1}{2\pi}\log|x-y|$. With the abbreviation $\bar{\phi}(t) := \phi(\gamma(t))\,|\gamma'(t)|$, it remains to consider

$\int_{\widehat{Q}} \bar{\phi}(t)\,G(\gamma(s),\gamma(t))\,\mathrm{d}t = \int_{\hat{x}_{\ell,j-1}}^{s} \bar{\phi}(t)\,G(\gamma(s),\gamma(t))\,\mathrm{d}t + \int_{s}^{\hat{x}_{\ell,j}} \bar{\phi}(t)\,G(\gamma(s),\gamma(t))\,\mathrm{d}t$
$= (s-\hat{x}_{\ell,j-1}) \int_{0}^{1} \bar{\phi}(s+\tau(\hat{x}_{\ell,j-1}-s))\,G(\gamma(s),\gamma(s+\tau(\hat{x}_{\ell,j-1}-s)))\,\mathrm{d}\tau + (\hat{x}_{\ell,j}-s) \int_{0}^{1} \bar{\phi}(s+\tau(\hat{x}_{\ell,j}-s))\,G(\gamma(s),\gamma(s+\tau(\hat{x}_{\ell,j}-s)))\,\mathrm{d}\tau$
$= -\frac{1}{2\pi}(s-\hat{x}_{\ell,j-1}) \int_{0}^{1} \bar{\phi}(s+\tau(\hat{x}_{\ell,j-1}-s))\log\Big(\frac{|\gamma(s)-\gamma(s+\tau(\hat{x}_{\ell,j-1}-s))|}{\tau}\Big)\,\mathrm{d}\tau - \frac{1}{2\pi}(s-\hat{x}_{\ell,j-1}) \int_{0}^{1} \bar{\phi}(s+\tau(\hat{x}_{\ell,j-1}-s))\log(\tau)\,\mathrm{d}\tau$
$\quad - \frac{1}{2\pi}(\hat{x}_{\ell,j}-s) \int_{0}^{1} \bar{\phi}(s+\tau(\hat{x}_{\ell,j}-s))\log\Big(\frac{|\gamma(s)-\gamma(s+\tau(\hat{x}_{\ell,j}-s))|}{\tau}\Big)\,\mathrm{d}\tau - \frac{1}{2\pi}(\hat{x}_{\ell,j}-s) \int_{0}^{1} \bar{\phi}(s+\tau(\hat{x}_{\ell,j}-s))\log(\tau)\,\mathrm{d}\tau.$

With the fundamental theorem of calculus and the fact that $\gamma$ is piecewise smooth with $|\gamma'| > 0$, it is easy to see that $\tau \mapsto |\gamma(s)-\gamma(s+\tau(\hat{x}_{\ell,j-1}-s))|/\tau$ and $\tau \mapsto |\gamma(s)-\gamma(s+\tau(\hat{x}_{\ell,j}-s))|/\tau$ are smooth and (uniformly) larger than 0; see [23, Lemma 5.6] for details. Thus, we can use Gauss quadrature with weight function 1 and $\log(\tau)$ to approximate the final integrals.

5.4 Evaluation of Double-Layer Operator 𝔎

Let $\widehat{\mathcal{K}}_{\ell}$ be a knot vector. Let $x \in \Gamma$ and $s \in [a,b]$ with $x = \gamma(s)$. Moreover, let $Q \in \mathcal{Q}_{\ell}$ with $x \in Q$ and $s \in \widehat{Q} = [\hat{x}_{\ell,j-1}, \hat{x}_{\ell,j}]$ with $\gamma(\widehat{Q}) = Q$. For $u \in \widetilde{H}^{1/2}(\Gamma)$ (at least) $\mathcal{Q}_{\ell}$-piecewise $C^{1}$, we want to evaluate $[\mathfrak{K}u](x)$. These values are required to compute the Faermann estimator in Section 5.8 and the weighted-residual estimator in Section 5.9. We also mention that they can be used to implement a collocation method instead of a Galerkin method to approximate the solution of (1.3). The evaluation is realized in ResidualWeak.c and can be called in Matlab via evalRHSWeak.

Recall the abbreviation $G_{\nu}(x,y)$ of (5.2). We start with

$[\mathfrak{K}u](x) = \int_{\Gamma} u(y)\,G_{\nu}(x,y)\,\mathrm{d}y = \sum_{\widehat{Q}' \in \widehat{\mathcal{Q}}_{\ell}} \int_{\widehat{Q}'} u(\gamma(t))\,G_{\nu}(\gamma(s),\gamma(t))\,|\gamma'(t)|\,\mathrm{d}t.$

For all $\widehat{Q}' \in \widehat{\mathcal{Q}}_{\ell}$ with $\widehat{Q}' \neq \widehat{Q}$, the integrands are (at least) continuous and we can use standard Gauss quadrature after transforming the integration domain to the unit interval. Recall that $G(x,y) = -\frac{1}{2\pi}\log|x-y|$. With the abbreviation $\bar{u}(t) := u(\gamma(t))\,|\gamma'(t)|$, it remains to consider

$\int_{\widehat{Q}} \bar{u}(t)\,G_{\nu}(\gamma(s),\gamma(t))\,\mathrm{d}t = \int_{\hat{x}_{\ell,j-1}}^{s} \bar{u}(t)\,G_{\nu}(\gamma(s),\gamma(t))\,\mathrm{d}t + \int_{s}^{\hat{x}_{\ell,j}} \bar{u}(t)\,G_{\nu}(\gamma(s),\gamma(t))\,\mathrm{d}t = (s-\hat{x}_{\ell,j-1}) \int_{0}^{1} \bar{u}(s+\tau(\hat{x}_{\ell,j-1}-s))\,G_{\nu}(\gamma(s),\gamma(s+\tau(\hat{x}_{\ell,j-1}-s)))\,\mathrm{d}\tau + (\hat{x}_{\ell,j}-s) \int_{0}^{1} \bar{u}(s+\tau(\hat{x}_{\ell,j}-s))\,G_{\nu}(\gamma(s),\gamma(s+\tau(\hat{x}_{\ell,j}-s)))\,\mathrm{d}\tau.$

With the fundamental theorem of calculus and the fact that $\gamma$ is piecewise smooth with $|\gamma'| > 0$, it is easy to see that $\tau \mapsto |\gamma(s)-\gamma(s+\tau(\hat{x}_{\ell,j-1}-s))|/\tau$ and $\tau \mapsto |\gamma(s)-\gamma(s+\tau(\hat{x}_{\ell,j}-s))|/\tau$ are smooth and (uniformly) larger than 0; see [23, Lemma 5.6] for details. Based on (5.3), one can show that the final two integrands are (at least) continuous; see [23, Lemma 5.7] for details. Thus, we can use standard Gauss quadrature to approximate the integrals.

5.5 Evaluation of Hyper-singular Integral Operator 𝔚

Under the assumptions of Section 5.4, we want to evaluate $[\mathfrak{W}u](x)$ for $x \in \Gamma \setminus \mathcal{N}_{\gamma}$. These values are required to compute the weighted-residual estimators in Section 5.9. We also mention that they can be used to implement a collocation method instead of a Galerkin method to approximate the solution of (1.4). The evaluation is realized in ResidualHyp.c and can be called in Matlab via evalW. Maue's formula (5.1), $\mathfrak{V}\partial_{\Gamma}u \in \widetilde{H}^{1}(\Gamma)$ (see Section 3.2), and integration by parts show that $\langle \mathfrak{W}u, v\rangle_{\Gamma} = -\langle \partial_{\Gamma}\mathfrak{V}\partial_{\Gamma}u, v\rangle_{\Gamma}$ for all $v \in \widetilde{H}^{1}(\Gamma)$. By density, we see that $\mathfrak{W}u = -\partial_{\Gamma}\mathfrak{V}\partial_{\Gamma}u$. To approximate $(\mathfrak{W}u)(x)$, we replace $(\mathfrak{V}\partial_{\Gamma}u)\circ\gamma|_{\widehat{Q}}$ by an interpolation polynomial, where the required point evaluations have been discussed in Section 5.3. The arc-length derivatives can be computed with (2.3) and (2.5).
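The interpolation-and-differentiation idea can be illustrated with Chebyshev nodes and a polynomial differentiation matrix; the package provides the corresponding building blocks in ChebyNodes.m and LagrDerivMatrix.m. The following MATLAB sketch only demonstrates the principle with a placeholder function g in place of $(\mathfrak{V}\partial_{\Gamma}u)\circ\gamma$ on one element; it is not a copy of the package routines.

% Differentiate an interpolation polynomial through values g at Chebyshev nodes.
n  = 8;                                   % polynomial degree of the interpolant
xn = (cos(pi*(0:n)'/n) + 1) / 2;          % Chebyshev points mapped to [0,1]
g  = @(t) sin(3*t) .* exp(t);             % placeholder for the function to be differentiated
% Barycentric weights and differentiation matrix D with (D*g(xn))_i ~ g'(xn_i):
w = ones(n+1, 1);
for j = 1:n+1
    w(j) = 1 / prod(xn(j) - xn([1:j-1, j+1:n+1]));
end
D = (w.' ./ w) ./ (xn - xn.');            % off-diagonal entries (w_j/w_i)/(x_i-x_j)
D(1:n+2:end) = 0;                         % clear the diagonal (was Inf/NaN)
D(1:n+2:end) = -sum(D, 2);                % negative row sums on the diagonal
deriv_at_nodes = D * g(xn);               % approximates g'(xn)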

5.6 Evaluation of Adjoint Double-Layer Operator K

Under the assumptions of Section 5.3, we want to evaluate $[\mathfrak{K}'\phi](x)$ for $x \in \Gamma \setminus \mathcal{N}_{\gamma}$. These values are required to compute the weighted-residual estimators in Section 5.9. We also mention that they can be used to implement a collocation method instead of a Galerkin method to approximate the solution of (1.4). The evaluation is realized in ResidualHyp.c and can be called in Matlab via evalRHSHyp. To approximate $[\mathfrak{K}'\phi](x)$, one can proceed along the lines of Section 5.4; see [43, Section 6.4] for details.

5.7 Computation of ( h - h 2 ) -Error Estimators

Let $\widehat{\mathcal{K}}_{\ell}$ be a knot vector with uniformly refined knot vector $\widehat{\mathcal{K}}_{\ell+}$ (see Section 3.4). The computation of the error indicators $\eta_{V,hh2,\ell}(x)$ and $\eta_{W,hh2,\ell}(x)$, $x \in \mathcal{N}_{\ell}$, of (3.4) and (4.3) is realized in HHEstWeak.c and HHEstHyp.c, and can be called in Matlab via buildHHEstWeak and buildHHEstHyp (which expect, among others, the coefficients of the Galerkin approximations on the coarse and on the fine mesh). Clearly, $\eta_{V,hh2,\ell}(x)$ can be approximated by standard Gauss quadrature. With (2.3) and (2.5), also the indicators $\eta_{W,hh2,\ell}(x)$ can be approximated by standard Gauss quadrature.

5.8 Computation of Faermann Estimator

Let $\widehat{\mathcal{K}}_{\ell}$ be a knot vector. The computation of the error indicators $\eta_{V,fae,\ell}(x)$, $x \in \mathcal{N}_{\ell}$, of (3.5) is realized in FaerEstWeak.c and can be called in Matlab via buildFaerEstWeak. Let $Q, Q' \in \mathcal{Q}_{\ell}$ with $Q \cup Q' = \omega_{\ell}(x)$. We assume that $Q \neq Q'$, i.e., $x$ is not at the boundary of $\Gamma \subsetneq \partial\Omega$. The case $Q = Q'$ works analogously. We abbreviate the residual $r_{\ell} := f - \mathfrak{V}\Phi_{\ell}$, which gives that

$\eta_{V,fae,\ell}(x)^{2} = |r_{\ell}|_{H^{1/2}(\omega_{\ell}(x))}^{2} = |r_{\ell}|_{H^{1/2}(Q)}^{2} + 2\int_{Q}\int_{Q'} \frac{|r_{\ell}(y)-r_{\ell}(z)|^{2}}{|y-z|^{2}}\,\mathrm{d}z\,\mathrm{d}y + |r_{\ell}|_{H^{1/2}(Q')}^{2}.$

Given the evaluation procedures of Sections 5.3 and 5.4, the terms $|r_{\ell}|_{H^{1/2}(Q)}^{2}$ and $|r_{\ell}|_{H^{1/2}(Q')}^{2}$ can be approximated exactly as in Case 2 of Section 5.2, where one can additionally exploit the symmetry of the integrand. The remaining integral can be approximated exactly as in Case 3 of Section 5.2.

5.9 Computation of Weighted-Residual Error Estimators

Let $\widehat{\mathcal{K}}_{\ell}$ be a knot vector. The computation of the error indicators $\eta_{V,res,\ell}(x)$ and $\eta_{W,res,\ell}(x)$, $x \in \mathcal{N}_{\ell}$, of (3.6) and (4.4) is realized in ResEstWeak.c and ResEstHyp.c, where the required projection $\Pi_{\ell}$ is realized in PhiApprox.c. They can be called in Matlab via buildResEstWeak, buildResEstHyp, and buildPhiApprox. To approximate the error indicators $\eta_{V,res,\ell}(x)$, $x \in \mathcal{N}_{\ell}$, of (3.6), we replace $(f - \mathfrak{V}\Phi_{\ell})\circ\gamma|_{\widehat{Q}}$, $\widehat{Q} \in \widehat{\mathcal{Q}}_{\ell}$ with $\gamma(\widehat{Q}) \subseteq \omega_{\ell}(x)$, by an interpolation polynomial, where the required point evaluations have been discussed in Sections 5.3–5.4. The arc-length derivatives can be computed with (2.3) and (2.5). Finally, we use standard Gauss quadrature. To approximate the error indicators $\eta_{W,res,\ell}(x)$, $x \in \mathcal{N}_{\ell}$, of (4.4), we can use standard Gauss quadrature. The required point evaluations have been discussed in Sections 5.5–5.6.

5.10 Computation of Data Oscillations

Let $\widehat{\mathcal{K}}_{\ell}$ be a knot vector. Clearly, the oscillations $\mathrm{osc}_{W,\ell}(x)$, $x \in \mathcal{N}_{\ell}$, of (4.5) can be approximated by standard Gauss quadrature.

Funding source: Austrian Science Fund

Award Identifier / Grant number: P29096

Award Identifier / Grant number: W1245

Award Identifier / Grant number: SFB F65

Award Identifier / Grant number: J4379

Funding statement: The authors are supported by the Austrian Science Fund (FWF) through the research projects Optimal isogeometric boundary element method (grant P29096), the doctoral school Dissipation and dispersion in nonlinear PDEs (grant W1245), the special research program Taming complexity in partial differential systems (grant SFB F65), and the Erwin Schrödinger Fellowship Optimal adaptivity for space-time methods (grant J4379).

A Overview of Functions Provided by IGABEM2D

Root Folder igabem2d/

The folder contains the three subfolders source/, MEX-files/, and functions/. With mexIGABEM.m, all required paths are added, the C files of source/ are compiled (via the Matlab-C interface mex), and the resulting MEX-files are stored in MEX-files/. (To use mex, you first have to set it up by typing mex -setup in MATLAB's command window.) Note that the contained main files IGABEMWeak.m and IGABEMHyp.m, where the adaptive Algorithms 3.2 and 4.1 are implemented, automatically call mexIGABEM.m. In these main files, the user can choose from a variety of different parameters, in particular different Dirichlet data $u$ and Neumann data $\phi$, which are provided in source/Examples.h, as well as different NURBS geometries as in Section 2.2, which are provided in functions/Geometries.m. If the exact solution of the corresponding Laplace problem is known, it is given in functions/Examples.m together with its derivative, which allows for additional visualization in the main files IGABEMWeak.m and IGABEMHyp.m. If desired, the number of Gauss quadrature points for the involved boundary integrals can be changed in functions/getConfigWeak.m resp. functions/getConfigHyp.m; see also Section 5.
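A minimal usage session might thus look as follows; the one-time mex -setup is only required once per MATLAB installation.

% One-time setup of the MATLAB-C compiler interface:
mex -setup
% Compile the C sources and add all paths (also called automatically by the main files):
mexIGABEM
% Run the adaptive algorithms for the weakly singular resp. hyper-singular integral equation;
% problem data and parameters are chosen inside these main files:
IGABEMWeak
IGABEMHyp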

Subfolder igabem2d/source/

This folder contains all required C files with corresponding header-files. To call the latter in Matlab, we use the Matlab-C interface mex; see the comments on ../mexIGABEM.m. In Structures.h, structures for splines and quadrature are defined. In Functions.h, elementary C functions are implemented. In Splines.c and Splines.h, the evaluation of NURBS and their derivatives is implemented. The evaluations can be used in Matlab by compiling (via mex) the files

  • evalNURBS.c (to evaluate $\widehat{R}_{\ell,i,p}$),

  • evalNURBSComb.c (to evaluate $\widehat{S} \in \widehat{\mathcal{S}}^{p}(\widehat{\mathcal{K}}_{\ell}, \mathcal{W}_{\ell})$),

  • evalNURBSCombDeriv.c (to evaluate the right-hand derivative $\widehat{S}_{r}'$ of $\widehat{S} \in \widehat{\mathcal{S}}^{p}(\widehat{\mathcal{K}}_{\ell}, \mathcal{W}_{\ell})$),

  • evalNURBSCurve.c (to evaluate $\gamma$),

  • evalNURBSCurveDeriv.c (to evaluate the right-hand derivative $\gamma_{r}'$).

The main functions to build the Galerkin systems are
  • VMatrix.c,

  • RHSVectorWeak.c,

  • WMatrix.c,

  • PhiApprox.c,

  • RHSVectorHyp.c.

They can be used in Matlab by compiling (via mex) the files
  • buildVMatrix.c (to build the matrix in (3.3)),

  • buildRHSVectorWeak.c (to build the right-hand side vector in (3.3)),

  • buildWMatrix.c (to build the matrix in (4.2)),

  • buildPhiApprox.c (to build $\Pi_{\ell}\phi$),

  • buildRHSVectorHyp.c (to build the right-hand side vector in (4.2)).

The estimators and oscillations are implemented in
  • HHEstWeak.c,

  • FaerEstWeak.c,

  • ResEstWeak.c,

  • HHEstHyp.c,

  • ResEstHyp.c,

  • OscHyp.c.

They can be used in Matlab by compiling (via mex) the files
  • buildHHEstWeak.c (to build $\eta_{V,hh2,\ell}$),

  • buildFaerEstWeak.c (to build $\eta_{V,fae,\ell}$),

  • buildResEstWeak.c (to build $\eta_{V,res,\ell}$),

  • buildHHEstHyp.c (to build $\eta_{W,hh2,\ell}$),

  • buildResEstHyp.c (to build $\eta_{W,res,\ell}$),

  • buildOscHyp.c (to build $\mathrm{osc}_{W,\ell}$).

The residual estimators require the evaluation of the boundary integral operators applied to some function, which is implemented in ResidualWeak.c and ResidualHyp.c. The evaluations can be used in Matlab by compiling (via mex) the source files
  • evalV.c (to evaluate $\mathfrak{V}\Phi_{\ell}$ for $\Phi_{\ell} \in X_{\ell}$),

  • evalRHSWeak.c (to evaluate $\mathfrak{K}u$),

  • evalW.c (to evaluate $\mathfrak{W}U_{\ell}$ for $U_{\ell} \in Y_{\ell}$),

  • evalRHSHyp.c (to evaluate $\mathfrak{K}'\Pi_{\ell}\phi$).

Finally, Examples.h provides different Dirichlet data 𝑢 and Neumann data 𝜙 that can be used in the main files ../IGABEMWeak.m and ../IGABEMHyp.m.

Subfolder igabem2d/MEX-files/

Initially empty, all MEX-files are stored here; see also the comments on ../mexIGABEM.m.

Subfolder igabem2d/functions/

This folder contains all required Matlab files. As already mentioned, it contains the files

  • Geometries.m (providing different NURBS geometries),

  • Examples.m (providing solutions of the Laplace problem along with their derivative), and

  • getConfigWeak.m, getConfigHyp.m (providing the number of quadrature points).

The Dörfler marking (3.7) and the marking of Algorithm 3.1 are implemented in markNodesElements.m. The actual refinement steps of Algorithm 3.1 are implemented in increaseMult.m, which increases the multiplicity of marked nodes, and refineBoundaryMesh.m, which bisects at least all marked elements. Other auxiliary functions are
  • BasisTrafo.m (to transform coefficients corresponding to a coarse basis to coefficients corresponding to a finer basis),

  • ChebyNodes.m (to compute Chebyshev nodes),

  • GaussData.m (to compute standard Gauss nodes and weights; see the sketch after this list),

  • IntMean.m (to compute the integral mean over Γ),

  • LagrDerivMatrix.m (to compute derivatives of Lagrange polynomials),

  • LogGaussData.m (to compute Gauss nodes and weights for the logarithmic weight function),

  • ShapeRegularity.m (to compute the shape-regularity constant $\widehat{\kappa}_{\ell}$).
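For orientation, standard Gauss–Legendre nodes and weights on $[0,1]$, as provided by GaussData.m, can be computed with the Golub–Welsch algorithm. The following MATLAB sketch is a generic textbook variant and not a copy of the routine shipped with the package.

function [nodes, weights] = gaussLegendre01(n)
% GAUSSLEGENDRE01  n-point Gauss-Legendre rule on [0,1] via Golub-Welsch.
beta = (1:n-1) ./ sqrt(4*(1:n-1).^2 - 1);   % recurrence coefficients of the Legendre polynomials
J = zeros(n);                               % symmetric tridiagonal Jacobi matrix
J(2:n+1:end) = beta;                        % sub-diagonal
J(n+1:n+1:end) = beta;                      % super-diagonal
[V, D] = eig(J);                            % eigenvalues are the nodes on [-1,1]
[x, idx] = sort(diag(D));
w = 2 * V(1, idx)'.^2;                      % weights from the first eigenvector components
nodes   = (x + 1) / 2;                      % map [-1,1] to [0,1]
weights = w / 2;
end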

References

[1] A. Aimi, F. Calabrò, M. Diligenti, M. L. Sampoli, G. Sangalli and A. Sestini, Efficient assembly based on B-spline tailored quadrature rules for the IgA-SGBEM, Comput. Methods Appl. Mech. Engrg. 331 (2018), 327–342. 10.1016/j.cma.2017.11.031

[2] A. Aimi, M. Diligenti, M. L. Sampoli and A. Sestini, Isogeometric analysis and symmetric Galerkin BEM: A 2D numerical study, Appl. Math. Comput. 272 (2016), part 1, 173–186. 10.1016/j.amc.2015.08.097

[3] J. Alberty, C. Carstensen and S. A. Funken, Remarks around 50 lines of Matlab: Short finite element implementation, Numer. Algorithms 20 (1999), no. 2–3, 117–137. 10.1023/A:1019155918070

[4] M. Aurada, M. Feischl, T. Führer, M. Karkulik and D. Praetorius, Efficiency and optimality of some weighted-residual error estimator for adaptive 2D boundary element methods, Comput. Methods Appl. Math. 13 (2013), no. 3, 305–332. 10.1515/cmam-2013-0010

[5] C. Bahriawati and C. Carstensen, Three MATLAB implementations of the lowest-order Raviart–Thomas MFEM with a posteriori error control, Comput. Methods Appl. Math. 5 (2005), no. 4, 333–361. 10.2478/cmam-2005-0016

[6] A. Bantle, On high-order NURBS-based boundary element methods in two dimensions – numerical integration and implementation, PhD thesis, University of Ulm, 2015.

[7] G. Beer, B. Marussig and C. Duenser, The Isogeometric Boundary Element Method, Lect. Notes Appl. Comput. Mech. 90, Springer, Cham, 2020. 10.1007/978-3-030-23339-6

[8] A. Buffa, G. Gantner, C. Giannelli, D. Praetorius and R. Vázquez, Mathematical foundations of adaptive isogeometric analysis, preprint (2021), https://arxiv.org/abs/2107.02023. 10.1007/s11831-022-09752-5

[9] A. Buffa and C. Giannelli, Adaptive isogeometric methods with hierarchical splines: Error estimator and convergence, Math. Models Methods Appl. Sci. 26 (2016), no. 1, 1–25. 10.1142/S0218202516500019

[10] A. Buffa and C. Giannelli, Adaptive isogeometric methods with hierarchical splines: Optimality and convergence rates, Math. Models Methods Appl. Sci. 27 (2017), no. 14, 2781–2802. 10.1142/S0218202517500580

[11] J. A. Cottrell, T. J. R. Hughes and Y. Bazilevs, Isogeometric Analysis, John Wiley & Sons, Chichester, 2009. 10.1002/9780470749081

[12] C. de Boor, B(asic)-spline basics, Technical report, Mathematics Research Center, University of Wisconsin-Madison, 1986.

[13] J. Dölz, H. Harbrecht, S. Kurz, M. Multerer, S. Schöps and F. Wolf, Bembel: The fast isogeometric boundary element C++ library for Laplace, Helmholtz, and electric wave equation, SoftwareX 11 (2020), Article ID 100476. 10.1016/j.softx.2020.100476

[14] J. Dölz, H. Harbrecht, S. Kurz, S. Schöps and F. Wolf, A fast isogeometric BEM for the three dimensional Laplace- and Helmholtz problems, Comput. Methods Appl. Mech. Engrg. 330 (2018), 83–101. 10.1016/j.cma.2017.10.020

[15] J. Dölz, H. Harbrecht and M. Peters, An interpolation-based fast multipole method for higher-order boundary elements on parametric surfaces, Internat. J. Numer. Methods Engrg. 108 (2016), no. 13, 1705–1728. 10.1002/nme.5274

[16] J. Dölz, S. Kurz, S. Schöps and F. Wolf, Isogeometric boundary elements in electromagnetism: Rigorous analysis, fast methods, and examples, SIAM J. Sci. Comput. 41 (2019), no. 5, B983–B1010. 10.1137/18M1227251

[17] A. Falini, C. Giannelli, T. Kanduč, M. L. Sampoli and A. Sestini, An adaptive IgA-BEM with hierarchical B-splines based on quasi-interpolation quadrature schemes, Internat. J. Numer. Methods Engrg. 117 (2019), no. 10, 1038–1058. 10.1002/nme.5990

[18] M. Feischl, G. Gantner, A. Haberl and D. Praetorius, Adaptive 2D IGA boundary element methods, Eng. Anal. Bound. Elem. 62 (2016), 141–153. 10.1016/j.enganabound.2015.10.003

[19] M. Feischl, G. Gantner, A. Haberl and D. Praetorius, Optimal convergence for adaptive IGA boundary element methods for weakly-singular integral equations, Numer. Math. 136 (2017), no. 1, 147–182. 10.1007/s00211-016-0836-8

[20] M. Feischl, G. Gantner and D. Praetorius, Reliable and efficient a posteriori error estimation for adaptive IGA boundary element methods for weakly-singular integral equations, Comput. Methods Appl. Mech. Engrg. 290 (2015), 362–386. 10.1016/j.cma.2015.03.013

[21] T. Führer, G. Gantner, D. Praetorius and S. Schimanko, Optimal additive Schwarz preconditioning for adaptive 2D IGA boundary element methods, Comput. Methods Appl. Mech. Engrg. 351 (2019), 571–598. 10.1016/j.cma.2019.03.038

[22] S. Funken, D. Praetorius and P. Wissgott, Efficient implementation of adaptive P1-FEM in Matlab, Comput. Methods Appl. Math. 11 (2011), no. 4, 460–490. 10.2478/cmam-2011-0026

[23] G. Gantner, Adaptive isogeometric BEM, Master's thesis, Institute of Analysis and Scientific Computing, TU Wien, 2014.

[24] G. Gantner, Optimal adaptivity for splines in finite and boundary element methods, PhD thesis, TU Wien, 2017.

[25] G. Gantner, D. Haberlik and D. Praetorius, Adaptive IGAFEM with optimal convergence rates: Hierarchical B-splines, Math. Models Methods Appl. Sci. 27 (2017), no. 14, 2631–2674. 10.1142/S0218202517500543

[26] G. Gantner and D. Praetorius, Adaptive BEM for elliptic PDE systems, part I: Abstract framework, for weakly-singular integral equations, Appl. Anal. (2020). 10.1080/00036811.2020.1800651

[27] G. Gantner and D. Praetorius, Adaptive IGAFEM with optimal convergence rates: T-splines, Comput. Aided Geom. Design 81 (2020), Article ID 101906. 10.1016/j.cagd.2020.101906

[28] G. Gantner and D. Praetorius, Adaptive BEM for elliptic PDE systems, part II: Isogeometric analysis with hierarchical B-splines for weakly-singular integral equations, Comput. Math. Appl. 117 (2022), 74–96. 10.1016/j.camwa.2022.04.006

[29] G. Gantner, D. Praetorius and S. Schimanko, Adaptive isogeometric boundary element methods with local smoothness control, Math. Models Methods Appl. Sci. 30 (2020), no. 2, 261–307. 10.1142/S0218202520500074

[30] G. Gantner, D. Praetorius and S. Schimanko, IGABEM2D, Software, zenodo.6282998, 2022.

[31] L. Heltai, M. Arroyo and A. DeSimone, Nonsingular isogeometric boundary element method for Stokes flows in 3D, Comput. Methods Appl. Mech. Engrg. 268 (2014), 514–539. 10.1016/j.cma.2013.09.017

[32] G. C. Hsiao and W. L. Wendland, Boundary Integral Equations, Appl. Math. Sci. 164, Springer, Berlin, 2008. 10.1007/978-3-540-68545-6

[33] T. J. R. Hughes, J. A. Cottrell and Y. Bazilevs, Isogeometric analysis: CAD, finite elements, NURBS, exact geometry and mesh refinement, Comput. Methods Appl. Mech. Engrg. 194 (2005), no. 39–41, 4135–4195. 10.1016/j.cma.2004.10.008

[34] S. Keuchel, N. C. Hagelstein, O. Zaleski and O. von Estorff, Evaluation of hypersingular and nearly singular integrals in the isogeometric boundary element method for acoustics, Comput. Methods Appl. Mech. Engrg. 325 (2017), 488–504. 10.1016/j.cma.2017.07.025

[35] B. Marussig, J. Zechner, G. Beer and T.-P. Fries, Fast isogeometric boundary element method based on independent field approximation, Comput. Methods Appl. Mech. Engrg. 284 (2015), 458–488. 10.1016/j.cma.2014.09.035

[36] A.-W. Maue, Zur Formulierung eines allgemeinen Beugungsproblems durch eine Integralgleichung, Z. Phys. 126 (1949), 601–618. 10.1007/BF01328780

[37] W. McLean, Strongly Elliptic Systems and Boundary Integral Equations, Cambridge University, Cambridge, 2000.

[38] B. H. Nguyen, X. Zhuang, P. Wriggers, T. Rabczuk, M. E. Mear and H. D. Tran, Isogeometric symmetric Galerkin boundary element method for three-dimensional elasticity problems, Comput. Methods Appl. Mech. Engrg. 323 (2017), 132–150. 10.1016/j.cma.2017.05.011

[39] M. J. Peake, J. Trevelyan and G. Coates, Extended isogeometric boundary element method (XIBEM) for two-dimensional Helmholtz problems, Comput. Methods Appl. Mech. Engrg. 259 (2013), 93–102. 10.1016/j.cma.2013.03.016

[40] C. Politis, A. I. Ginnis, P. D. Kaklis, K. Belibassakis and C. Feurer, An isogeometric BEM for exterior potential-flow problems in the plane, 2009 SIAM/ACM Joint Conference on Geometric and Physical Modeling, ACM, New York (2009), 349–354. 10.1145/1629255.1629302

[41] C. Politis, A. I. Ginnis, P. D. Kaklis and C. Feurer, An isogeometric BEM for exterior potential-flow problems in the plane, Comput. Methods Appl. Mech. Engrg. 254 (2013), 197–221.

[42] S. A. Sauter and C. Schwab, Boundary Element Methods, Springer Ser. Comput. Math. 39, Springer, Berlin, 2011. 10.1007/978-3-540-68093-2

[43] S. Schimanko, Adaptive isogeometric boundary element method for the hyper-singular integral equation, Master's thesis, Institute of Analysis and Scientific Computing, TU Wien, 2016.

[44] R. N. Simpson, S. P. A. Bordas, H. Lian and J. Trevelyan, An isogeometric boundary element method for elastostatic analysis: 2D implementation aspects, Comput. Structures 118 (2013), 2–12. 10.1016/j.compstruc.2012.12.021

[45] R. N. Simpson, S. P. A. Bordas, J. Trevelyan and T. Rabczuk, A two-dimensional isogeometric boundary element method for elastostatic analysis, Comput. Methods Appl. Mech. Engrg. 209/212 (2012), 87–100. 10.1016/j.cma.2011.08.008

[46] O. Steinbach, Numerical Approximation Methods for Elliptic Boundary Value Problems, Springer, New York, 2008. 10.1007/978-0-387-68805-3

[47] T. Takahashi and T. Matsumoto, An application of fast multipole method to isogeometric boundary element method for Laplace equation in two dimensions, Eng. Anal. Bound. Elem. 36 (2012), no. 12, 1766–1775. 10.1016/j.enganabound.2012.06.004

Received: 2022-03-01
Revised: 2022-03-08
Accepted: 2022-03-09
Published Online: 2022-06-09
Published in Print: 2022-07-01

© 2022 Walter de Gruyter GmbH, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
