
A hypersurface containing the support of a Radon transform must be an ellipsoid. II: The general case

Jan Boman
Published/Copyright: February 2, 2021

Abstract

If the Radon transform of a compactly supported distribution $f\neq 0$ in $\mathbb{R}^n$ is supported on the set of tangent planes to the boundary $\partial D$ of a bounded convex domain $D$, then $\partial D$ must be an ellipsoid. The special case of this result when the domain $D$ is symmetric was treated in [J. Boman, A hypersurface containing the support of a Radon transform must be an ellipsoid. I: The symmetric case, J. Geom. Anal. 2020, 10.1007/s12220-020-00372-8]. Here we treat the general case.

MSC 2010: 44A12

1 Introduction

It has been known for a long time that the Radon transform of a compactly supported distribution $f$ on $\mathbb{R}^n$ can be given a natural definition and that the result is a distribution on the manifold of hyperplanes in $\mathbb{R}^n$; see, e.g., [3]. For instance, if $n=2$ and $f$ is the Dirac measure at the point $a=(a_1,a_2)\in\mathbb{R}^2$, then the Radon transform $Rf$, defined by $\langle Rf,\varphi\rangle=\langle f,R^*\varphi\rangle$ for test functions $\varphi$ (see Section 2 for details), is a smooth measure supported on the set of lines that contain the point $a$. This set of lines corresponds to a sine-shaped curve $p=a_1\cos\alpha+a_2\sin\alpha$ in the $\alpha p$-plane, called the sinogram by tomographers. A distribution whose Radon transform is a measure on the set of tangents to a circle can be constructed as follows: Define the functions

$$f_0(x)=\frac{1}{\pi}\,(1-|x|^2)_+^{-1/2}\qquad\text{and}\qquad f_1(x)=\frac{1}{\pi}\,(1-|x|^2)_+^{1/2}$$

in the plane. Simple calculations show that

$$Rf_0(\omega,p)=1\qquad\text{and}\qquad Rf_1(\omega,p)=\tfrac{1}{2}(1-p^2)$$

for $|p|<1$ and all $\omega$, and obviously $Rf_0(\omega,p)=Rf_1(\omega,p)=0$ for $|p|\ge 1$. Denote the Laplace operator in two variables by $\Delta$ and the characteristic function of the interval $[-1,1]$ by $\chi_{[-1,1]}$. Using the identity

$$R(\Delta f)(\omega,p)=\partial_p^2\,Rf(\omega,p),$$

denoting the Dirac measure at the origin in $\mathbb{R}$ by $\delta(\cdot)$, and noting that

$$\tfrac{1}{2}\,\partial_p^2(1-p^2)_+=-\chi_{[-1,1]}(p)+\delta(p-1)+\delta(p+1),$$

we see that the distribution $f=\Delta f_1+f_0$ satisfies

$$Rf(\omega,p)=\delta(p-1)+\delta(p+1).$$

By means of affine transformations, we can easily construct similar examples where the circle is replaced by an ellipse.
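
To make the affine step explicit, here is a short computation (our addition; the change-of-variables rule itself is standard). For an invertible linear map $A$ and, say, integrable $f$ (the distributional case is analogous), pairing with a test function gives $\langle R(f\circ A^{-1}),\varphi\rangle=|\det A|\,\langle f,(R^*\varphi)(A\,\cdot)\rangle$, and writing $A^t\omega=|A^t\omega|\,\theta(\omega)$ one obtains

$$R(f\circ A^{-1})(\omega,p)=\frac{|\det A|}{|A^t\omega|}\,Rf\Bigl(\frac{A^t\omega}{|A^t\omega|},\,\frac{p}{|A^t\omega|}\Bigr).$$

Applied to the distribution $f$ constructed above, whose Radon transform is carried by $\{p=\pm 1\}$, this shows that $R(f\circ A^{-1})$ is carried by $\{p=\pm|A^t\omega|\}$, which is exactly the set of tangent lines to the ellipse $A(\{|x|<1\})$, since $\sup_{|y|<1}Ay\cdot\omega=|A^t\omega|$.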

However, for convex domains whose boundary is not an ellipse, no such distributions exist. This was proved in [2] for the case of domains $D$ that are symmetric with respect to a point, which may be assumed to be the origin, $D=-D$. Here we will treat the general case by proving the following theorem; see [2] for additional background and references.

Theorem 1.1.

Let $D$ be an open, convex, bounded subset of $\mathbb{R}^n$, $n\ge 2$, with boundary $\partial D$. If there exists a distribution $f\neq 0$ with support in $\overline{D}$ such that the Radon transform of $f$ is supported on the set of supporting planes for $D$, then $\partial D$ must be an ellipsoid.

Note that a supporting plane for $D$ is a tangent plane to the boundary $\partial D$ if the boundary is $C^1$ smooth.

The search for distributions whose Radon transforms are supported on the set of tangent planes to the boundary surface of a domain $D\subset\mathbb{R}^n$ was motivated by an attempt to prove the following conjecture.

Conjecture 1.2.

Let $D$ be a bounded convex domain in the plane and let $K$ be a closed subset of $D$. Then there exists a smooth function $f$, not identically zero, with $\operatorname{supp}f\subset\overline{D}$, such that its Radon transform $Rf(L)$ vanishes for all lines $L$ that intersect $K$.

If $D$ is a disk, the assertion of Conjecture 1.2 has long been known to be true (see, e.g., [5]), and by an affine transformation the same holds if the boundary of $D$ is an ellipse. However, to my knowledge, Conjecture 1.2 is still open for all other convex sets.

To explain the connection between Conjecture 1.2 and Theorem 1.1, let $K_\varepsilon=\{x\in\mathbb{R}^2:\operatorname{dist}(x,K)<\varepsilon\}$, where $\varepsilon>0$ is so small that $K_{2\varepsilon}\subset D$. If we could find a compactly supported distribution $f\neq 0$ in the plane such that $Rf$ is supported in the set of tangents to $K_\varepsilon$, then the convolution $f*\phi$, where $\phi$ is a smooth function with support in a sufficiently small neighborhood of the origin, would prove the conjecture. Indeed, the Radon transform of $f*\phi$ is equal to the one-dimensional convolution of the Radon transforms $p\mapsto Rf(\omega,p)$ and $p\mapsto R\phi(\omega,p)$ for fixed $\omega$, so $R(f*\phi)$ would be supported in a neighborhood of the set of tangent planes to $K_\varepsilon$, which we can make as small as we please by choosing the support of $\phi$ sufficiently small. Theorem 1.1 obviously shows that this strategy for proving Conjecture 1.2 must fail. However, we think that Theorem 1.1 has independent interest. For instance, Theorem 1.1 turned out to provide a new proof of a special case of a well-known conjecture of Arnold; see [2] for details and references.
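
The convolution identity just used follows from the projection–slice theorem; the following one-line verification is our addition, stated for smooth compactly supported $f$ and $\phi$ and a suitable normalization of the Fourier transform:

$$\widehat{R(f*\phi)}(\omega,s)=\widehat{(f*\phi)}(s\omega)=\hat f(s\omega)\,\hat\phi(s\omega)=\widehat{Rf}(\omega,s)\,\widehat{R\phi}(\omega,s),$$

where the transforms of $R(\,\cdot\,)$ are one-dimensional in $p$ for fixed $\omega$ and those in the middle are $n$-dimensional. Inverting the transform in $s$ gives $R(f*\phi)(\omega,\cdot)=Rf(\omega,\cdot)*R\phi(\omega,\cdot)$.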

From another point of view we can see the assertion of Theorem 1.1 as a partial answer to the following question: which sets of hyperplanes can be the support of a Radon transform with compact support? Not much seems to be known about this question. For instance, if $D\subset\mathbb{R}^n$ is a bounded convex set whose boundary $\partial D$ is not an ellipsoid, we do not know if an arbitrarily small neighborhood of the set of tangent planes to $\partial D$ can contain the support of $Rf$ for some $f$ of compact support (cf. Conjecture 1.2). However, we can easily characterize the sets that can be the complement of the unbounded connected component of the complement of the support of the Radon transform of a compactly supported distribution (Theorem 7.1).

An outline of the proof of Theorem 1.1 is given at the end of Section 3.

2 Distributions on the manifold of hyperplanes

The manifold $\mathbb{P}^n$ of hyperplanes in $\mathbb{R}^n$ can be identified with the manifold $(S^{n-1}\times\mathbb{R})/(\pm 1)$, the set of pairs $(\omega,p)\in S^{n-1}\times\mathbb{R}$, where $(\omega,p)$ is identified with $(-\omega,-p)$. Thus a function on $\mathbb{P}^n$ can be represented as an even function $g(\omega,p)=g(-\omega,-p)$ on $S^{n-1}\times\mathbb{R}$. As in [2], a distribution on $\mathbb{P}^n$ will be a linear form on $C_e^\infty(S^{n-1}\times\mathbb{R})$, the set of smooth even functions on $S^{n-1}\times\mathbb{R}$, and a locally integrable even function $h(\omega,p)$ on $S^{n-1}\times\mathbb{R}$ will be identified with the distribution

$$C_e^\infty(S^{n-1}\times\mathbb{R})\ni\varphi\mapsto\int_{S^{n-1}}\int_{\mathbb{R}}h(\omega,p)\,\varphi(\omega,p)\,d\omega\,dp,$$

where $d\omega$ is the area measure on $S^{n-1}$. Using the standard definition of $R^*$, i.e.

$$R^*\varphi(x)=\int_{S^{n-1}}\varphi(\omega,x\cdot\omega)\,d\omega,$$

we can then define the Radon transform of the compactly supported distribution $f$ on $\mathbb{R}^n$ as the linear form

$$C_e^\infty(S^{n-1}\times\mathbb{R})\ni\varphi\mapsto\langle f,R^*\varphi\rangle.$$

Let $D$ be an open, convex, bounded subset of $\mathbb{R}^n$ with boundary $\partial D$. We may assume that the origin is contained in $D$. Introduce the supporting function of $D$:

$$\rho(\omega)=\sup\{x\cdot\omega : x\in D\},\qquad\omega\in S^{n-1}.$$

If we introduce the temporary notation

$$\rho_-(\omega)=\inf\{x\cdot\omega : x\in D\},$$

we observe that

$$\rho_-(\omega)=-\sup\{-x\cdot\omega : x\in D\}=-\sup\{x\cdot(-\omega) : x\in D\}=-\rho(-\omega).$$
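
Two immediate examples (our addition) may be useful to keep in mind: for the unit ball, $\rho(\omega)\equiv 1$; and a translation of the domain changes $\rho$ only by a linear term,

$$\rho_{D+a}(\omega)=\sup\{(x+a)\cdot\omega : x\in D\}=\rho_D(\omega)+a\cdot\omega,$$

which is the mechanism used at the very end of the proof of Theorem 3.1 to remove the odd part of $\rho$. Similarly, for an ellipsoid $D=A(\{|x|<1\})$ with $A$ invertible, $\rho(\omega)=\sup_{|y|<1}Ay\cdot\omega=|A^t\omega|$.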

An arbitrary measure $g(\omega,p)$ on $S^{n-1}\times\mathbb{R}$ that is supported on the union of the graphs $\{p=\rho(\omega)\}$ and $\{p=\rho_-(\omega)\}$ can be written as

$$g(\omega,p)=q_+(\omega)\,\delta(p-\rho(\omega))+q_-(\omega)\,\delta(p-\rho_-(\omega))=q_+(\omega)\,\delta(p-\rho(\omega))+q_-(\omega)\,\delta(p+\rho(-\omega))$$

for some measures $q_+(\omega)$ and $q_-(\omega)$. Since $\delta(t)=\delta(-t)$, we then have

$$g(-\omega,-p)=q_+(-\omega)\,\delta(-p-\rho(-\omega))+q_-(-\omega)\,\delta(-p+\rho(\omega))=q_+(-\omega)\,\delta(p+\rho(-\omega))+q_-(-\omega)\,\delta(p-\rho(\omega)).$$

The condition for $g(\omega,p)$ to be even, $g(\omega,p)=g(-\omega,-p)$, therefore becomes

$$q_-(-\omega)=q_+(\omega)\qquad\text{for all }\omega.$$

It is therefore sufficient to introduce one density function, say $q(\omega)=q_+(\omega)$, because then $q_-(\omega)=q(-\omega)$ (we shall see later that $q(\omega)$ must be a continuous function). We conclude that an arbitrary such measure on the manifold $\mathbb{P}^n$ can be represented as

$$g(\omega,p)=q(\omega)\,\delta(p-\rho(\omega))+q(-\omega)\,\delta(p+\rho(-\omega)).$$

If the boundary $\partial D$ is smooth, and hence $\rho(\omega)$ is smooth, we can argue similarly, using the fact that $\delta^{(j)}(\cdot)$ is odd if $j$ is odd, to see that an arbitrary distribution $g(\omega,p)$ on $\mathbb{P}^n$ that is supported on

(2.1) $$\{(\omega,p):p=\rho(\omega)\}\cup\{(\omega,p):p=-\rho(-\omega)\}$$

can be written as

(2.2) $$g(\omega,p)=\sum_{j=0}^{m-1}\Bigl(q_j(\omega)\,\delta^{(j)}(p-\rho(\omega))+(-1)^j q_j(-\omega)\,\delta^{(j)}(p+\rho(-\omega))\Bigr)$$

for some distributions $q_0(\omega),\dots,q_{m-1}(\omega)$ on $S^{n-1}$. But if $\rho(\omega)$ is not smooth, this is not always true. Moreover, the product $q_j(\omega)\,\delta^{(j)}(p-\rho(\omega))$ does not even make sense as a distribution if $j\ge 1$, $q_j$ is a distribution of positive order, and $\rho(\omega)$ is only continuous.

However, if $g=Rf$ for some compactly supported distribution $f$, then the arguments in [2, Lemma 2] prove that the representation (2.2) is valid and the $q_j(\omega)$ must be continuous. For the convenience of the reader, we repeat the argument briefly in Lemma 2.1.

The theory of the wave front set of distributions is not needed in this article, but it can be used to make a helpful comment on the next lemma. It is well known that the restriction of a distribution $u$ in $\mathbb{R}^n$ to a smooth submanifold $\Sigma$ makes sense as a distribution on $\Sigma$ provided the conormal of $\Sigma\subset\mathbb{R}^n$ is disjoint from the wave front set $\operatorname{WF}(u)$; see [4, Chapter 8]. The 1–1 correspondence between the wave front set of a distribution $f$ and that of its Radon transform $g=Rf$ shows in particular that if $f$ has compact support, then $\operatorname{WF}(g)$ cannot contain any conormal to a submanifold of $\mathbb{P}^n$ of the form $\omega=\text{constant}$. This is reflected in the fact, well known from computerized tomography, that the so-called sinogram, the density plot of a 2-dimensional Radon transform $Rf(\omega,p)$ in an $\alpha p$-plane with $\omega=(\cos\alpha,\sin\alpha)$, never contains vertical discontinuities. More precisely, if $\operatorname{supp}f$ is contained in the ball $|x|\le B$, then all conormals to the hypersurface $\gamma_y=\{L\in\mathbb{P}^n : y\in L\}\subset\mathbb{P}^n$ must be disjoint from $\operatorname{WF}(Rf)$ if $|y|>B$. In particular, for any compactly supported distribution $f$, the restriction of the distribution $Rf(\omega,p)$ to $\omega=\omega_0$ always makes sense as a distribution on $\mathbb{R}$; cf. the distribution $R_\omega f$ in (2.3).

Lemma 2.1.

Let $D$ be an open, convex, bounded subset of $\mathbb{R}^n$. Let $f$ be a compactly supported distribution in $\mathbb{R}^n$ and assume that $g=Rf$ is supported on the set of supporting planes to $D$. Then there exist a number $m$ and continuous functions $q_j(\omega)$ such that the distribution $g$ can be written in the form (2.2).

Proof.

For arbitrary $\omega\in S^{n-1}$, define the distribution $R_\omega f$ on $\mathbb{R}$ by

(2.3) $$\langle R_\omega f,\psi\rangle=\langle f,\,x\mapsto\psi(x\cdot\omega)\rangle\qquad\text{for }\psi\in C^\infty(\mathbb{R}).$$

The map $\omega\mapsto R_\omega f$ must be continuous in the sense that $\omega\mapsto\langle R_\omega f,\psi\rangle$ is continuous for every test function $\psi\in C^\infty(\mathbb{R})$. Moreover, $Rf$ can be expressed in terms of $R_\omega f$ as follows: If $\varphi(\omega,p)=\varphi_0(\omega)\varphi_1(p)$, then

$$\langle Rf,\varphi\rangle=\langle f,R^*\varphi\rangle=\Bigl\langle f,\int_{S^{n-1}}\varphi_0(\omega)\varphi_1(x\cdot\omega)\,d\omega\Bigr\rangle=\int_{S^{n-1}}\varphi_0(\omega)\,\langle f,\varphi_1(x\cdot\omega)\rangle\,d\omega$$
(2.4) $$=\int_{S^{n-1}}\varphi_0(\omega)\,\langle R_\omega f,\varphi_1\rangle\,d\omega.$$

Formula (2.4) shows that if $g=Rf$ is supported on the hypersurface (2.1), then $R_\omega f$ must be supported on the union of the two points $p=\rho(\omega)$ and $p=-\rho(-\omega)$ for every $\omega$. Hence $R_\omega f$ can be represented as the right-hand side of (2.2) for every $\omega$. It remains only to prove that all $q_j(\omega)$ are continuous. It is enough to prove that $q_j(\omega)$ is continuous in some neighborhood of an arbitrary $\omega_0\in S^{n-1}$. If we choose $\psi$ such that $\psi(p)=0$ in some neighborhood of $-\rho(-\omega_0)$, then

$$\langle R_\omega f,\psi\rangle=\Bigl\langle\sum_{j=0}^{m-1}q_j(\omega)\,\delta^{(j)}(p-\rho(\omega)),\,\psi(p)\Bigr\rangle=\sum_{j=0}^{m-1}(-1)^j q_j(\omega)\,\psi^{(j)}(\rho(\omega)).$$

If we choose $\psi(p)$ such that $\psi(p)=1$ in a neighborhood of $\rho(\omega_0)$, we get $\langle R_\omega f,\psi\rangle=q_0(\omega)$ for $\omega$ in a neighborhood of $\omega_0$. Recalling that $\omega\mapsto R_\omega f$ is continuous, we see that $q_0$ must be continuous at $\omega_0$. Next, choosing $\psi(p)$ such that $\psi(p)=p$ in a neighborhood of $\rho(\omega_0)$, we get

$$\langle R_\omega f,\psi\rangle=q_0(\omega)\rho(\omega)-q_1(\omega),$$

which shows that $q_1(\omega)$ must be continuous since $\rho(\omega)$ is continuous. Continuing in this way completes the proof. ∎

Remark 2.2.

Theorem 1.1 shows that the function $\rho(\omega)$ must be real analytic and, as we shall see later, the functions $q_j(\omega)$ must also be real analytic. It would be interesting to know if one could prove by arguments like those in the proof of Lemma 2.1 that $q_j(\omega)$ must be real analytic. That would make the crucial Proposition 5.1 an immediate consequence of formula (5.5) and thereby considerably simplify the proof of Theorem 3.1. In particular, Lemma 5.2 and Remarks 4.3 and 4.6 could be omitted.

Next, we write down the conditions on $q_j(\omega)$ and $\rho(\omega)$ for $g(\omega,p)$ to belong to the range of the Radon transform.

3 The range conditions

A compactly supported $(\omega,p)$-even function or distribution $g(\omega,p)$ belongs to the range of the Radon transform if and only if the function

$$\omega=(\omega_1,\dots,\omega_n)\mapsto\int_{\mathbb{R}}g(\omega,p)\,p^k\,dp$$

is the restriction to the unit sphere of a homogeneous polynomial of degree $k$ in $\omega$ for every non-negative integer $k$; see [3].

We now compute the moments $\int_{\mathbb{R}}g(\omega,p)\,p^k\,dp$ for the expression (2.2). The computations are only slightly different from those of [2]. By the definition of $\delta^{(j)}$, for any $a\neq 0$,

$$\int_{\mathbb{R}}\delta^{(j)}(p-a)\,p^k\,dp=0\qquad\text{if }j>k,$$

and

$$\int_{\mathbb{R}}\delta^{(j)}(p-a)\,p^k\,dp=(-1)^j\int_{\mathbb{R}}\delta(p-a)\,\partial_p^j p^k\,dp=(-1)^j\frac{k!}{(k-j)!}\,a^{k-j}\qquad\text{if }j\le k.$$
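
As a concrete instance of the two formulas above (our check): for $j=1$ and $k=2$,

$$\int_{\mathbb{R}}\delta'(p-a)\,p^2\,dp=-\frac{d}{dp}\,p^2\Big|_{p=a}=-2a=(-1)^1\,\frac{2!}{1!}\,a^{2-1}.$$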

For arbitrary non-negative integers $k,j$, we define the constant $c_{k,j}$ by $c_{k,j}=0$ if $j>k$ and

(3.1) $$c_{k,j}=\frac{k!}{(k-j)!}=k(k-1)\cdots(k-j+1)\qquad\text{if }0\le j\le k.$$

For instance, if $j=2$, then $c_{k,j}=k(k-1)$ for all $k$. It follows that if $g(\omega,p)$ is defined by (2.2), then

$$\int_{\mathbb{R}}g(\omega,p)\,p^k\,dp=\sum_{j=0}^{m-1}c_{k,j}(-1)^j\bigl(q_j(\omega)\,\rho(\omega)^{k-j}+q_j(-\omega)\,(-\rho(-\omega))^{k-j}\bigr)$$

for every $k\ge 0$. Thus, for $g(\omega,p)$ to be the Radon transform of a distribution, it is necessary and sufficient that

$$\sum_{j=0}^{m-1}c_{k,j}\bigl((-1)^j q_j(\omega)\,\rho(\omega)^{k-j}+(-1)^k q_j(-\omega)\,\rho(-\omega)^{k-j}\bigr)$$

is equal to the restriction to $S^{n-1}$ of a homogeneous polynomial of degree $k$ for every $k$. In Sections 5 and 6, we will show that those conditions imply that the graph of $\rho(\omega)$ is a second degree surface in $\mathbb{P}^n$, which, since the surface is bounded, implies that it is an ellipsoid.
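
For the circle example of Section 1 these conditions are easily checked directly (our verification): there $m=1$, $\rho\equiv 1$, $q_0\equiv 1$, and

$$\int_{\mathbb{R}}\bigl(\delta(p-1)+\delta(p+1)\bigr)p^k\,dp=1+(-1)^k,$$

which is $0$ for odd $k$ and $2=2(\omega_1^2+\omega_2^2)^{k/2}\big|_{S^1}$ for even $k$, in either case the restriction of a homogeneous polynomial of degree $k$.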

Thus Theorem 1.1 follows from the following purely algebraic result. We shall denote the set of homogeneous polynomials of degree $k$ by $\mathcal{P}_k$.

Theorem 3.1.

Assume that the strictly positive and continuous function $\rho(\omega)$ on $S^{n-1}$ and the continuous functions $q_0,q_1,\dots,q_{m-1}$, not all zero, satisfy the infinitely many identities

(3.2) $$\sum_{j=0}^{m-1}c_{k,j}\bigl((-1)^j\rho(\omega)^{k-j}q_j(\omega)+(-1)^k\rho(-\omega)^{k-j}q_j(-\omega)\bigr)=p_k(\omega)\in\mathcal{P}_k$$

for $k=0,1,\dots$, where $c_{k,j}$ is defined by (3.1). Then the graph of $\rho(\omega)$, the set of supporting planes to $D$, is a quadric in $\mathbb{P}^n$.

In order to shorten formulas, we will sometimes write

$$q(-\omega)=\check q(\omega),\qquad\rho(-\omega)=\check\rho(\omega).$$

For instance, if $m=3$, the first six of equations (3.2) can then be written in matrix form as follows:

(3.3) $$\begin{pmatrix}1&0&0&1&0&0\\ \rho&-1&0&-\check\rho&-1&0\\ \rho^2&-2\rho&2&\check\rho^2&2\check\rho&2\\ \rho^3&-3\rho^2&6\rho&-\check\rho^3&-3\check\rho^2&-6\check\rho\\ \rho^4&-4\rho^3&12\rho^2&\check\rho^4&4\check\rho^3&12\check\rho^2\\ \rho^5&-5\rho^4&20\rho^3&-\check\rho^5&-5\check\rho^4&-20\check\rho^3\end{pmatrix}\begin{pmatrix}q_0\\q_1\\q_2\\\check q_0\\\check q_1\\\check q_2\end{pmatrix}=\begin{pmatrix}p_0\\p_1\\p_2\\p_3\\p_4\\p_5\end{pmatrix}.$$

Here is an outline of the proof of Theorem 3.1. As in [2], we first eliminate the functions $q_j$ from systems of, in this case $2m+1$, consecutive equations from the infinite system (3.2). This gives an infinite set of linear equations in the $2m$ quantities $\rho(\omega)^j$ and $\check\rho(\omega)^j=\rho(-\omega)^j$ for $0\le j\le m-1$ with coefficients that are multiples of the polynomials $p_j$. The matrix of the system of the first $2m$ of those equations is

$$\Pi_0=(p_{j+k})_{j,k=0}^{2m-1}.$$

If this matrix is non-singular, then, just as in [2], we can easily prove that certain symmetric polynomial expressions in $\rho(\omega)$ and $-\rho(-\omega)$, for instance $\rho(\omega)^m\rho(-\omega)^m$ and $\rho(\omega)-\rho(-\omega)$, must be rational functions of $\omega=(\omega_1,\dots,\omega_n)$, and in fact that $\rho(\omega)^m\rho(-\omega)^m$ and even $\rho(\omega)\rho(-\omega)$ must be polynomials. As in [2], the fact that $q_{m-1}$ is not identically zero implies that the matrix $\Pi_0$ has maximal rank (Proposition 5.1), but this was more difficult to prove than the corresponding statement in [2]. The determinant of the matrix $\Pi_0(\omega)$ is equal to a multiple of $q_{m-1}(\omega)q_{m-1}(-\omega)$, so it was necessary to exclude the possibility that $q_{m-1}(\omega)q_{m-1}(-\omega)$ is identically zero without $q_{m-1}$ being identically zero. Knowing that $\Pi_0(\omega)$ is non-singular, it would have been easy to prove that $q_{m-1}(\omega)q_{m-1}(-\omega)\not\equiv 0$. But without yet knowing that $\Pi_0(\omega)$ is non-singular, we had to consider for a moment the possibility that the rank of $\Pi_0(\omega)$ is smaller than maximal and deal with a linear system with fewer equations and unknowns (Lemma 5.2). Finally, knowing that $\rho(\omega)\rho(-\omega)$ is a polynomial and that the other symmetric polynomials in $\rho(\omega)$ and $-\rho(-\omega)$ are rational was not enough to prove Theorem 3.1 (see Section 6). We had to deduce more information from system (3.2) than we did in Section 5. However, comparing two expressions for the trace of the matrix $\tilde S^k$ (see Section 6), we could prove (Lemma 6.1) that the function $\rho(\omega)-\rho(-\omega)$ must be a homogeneous first degree polynomial, that is, linear in $\omega$. By using this fact, it is easy to complete the proof of Theorem 3.1. Alternatively, the fact that $\rho(\omega)-\rho(-\omega)$ is linear implies that we can make a translation of the coordinate system so that $\rho(\omega)-\rho(-\omega)$ vanishes, which means that we can apply the special case treated in [2].

4 Algebraic preliminaries

This section contains a study of the matrix that defines the infinite set of linear expressions in $q_j(\omega)$ and $q_j(-\omega)$ that are given by (3.2), of which (3.3) is an example. The main point is, just as in [2], that the sequence of $2m\times 2m$ submatrices forms a geometric sequence, where the left and right quotients have special properties. The role of the identities (4.12) is to help us eliminate the $q_j$ from the system. The left half of the matrix $M$, which describes the dependence on $q_j(\omega)$, is similar to the corresponding matrix in [2], but simpler: here both the left and right quotients have an extremely simple structure; see (4.2) and (4.4). What is interesting, and perhaps not quite so obvious, is that the corresponding right quotient in the case of the full matrix is also very simple; it is a two-block matrix (4.9) where each block has the same form as $T$ in (4.4).

Consider a matrix $L$ that consists of $m$ columns and infinitely many rows and is defined as follows: the first column is $1,x,x^2,\dots$, the elements of the second column are the formal derivatives of those of the first, $0,1,2x,3x^2,\dots$, the elements of the third column are the second derivatives of the same elements, and so on. In other words, the entries $\ell_{k,j}$ of the matrix $L$ can be written ($D$ denotes formal differentiation with respect to $x$) as

(4.1) $$\ell_{k,j}=D^jx^k=c_{k,j}\,x^{k-j}\qquad\text{for }0\le j\le m-1\text{ and all }k\ge 0,$$

where the constants $c_{k,j}$ are defined by (3.1). Denote the successive $m\times m$ submatrices of $L$ by $L_0,L_1,L_2$, etc. For instance, if $m=4$, then

$$L_0=\begin{pmatrix}1&0&0&0\\x&1&0&0\\x^2&2x&2&0\\x^3&3x^2&6x&6\end{pmatrix},\qquad L_1=\begin{pmatrix}x&1&0&0\\x^2&2x&2&0\\x^3&3x^2&6x&6\\x^4&4x^3&12x^2&24x\end{pmatrix},\quad\text{etc.}$$

We are interested in the dependence of the matrix $L_k$ on $k$. We shall see that there are matrices $S$ and $T$ such that

(4.2) $$L_k=S^kL_0=L_0T^k\qquad\text{for all }k\ge 0.$$

Denote by $\delta(\cdot)$ the function on the set of integers for which $\delta(0)=1$ and $\delta(j)=0$ for all $j\neq 0$. Define the $m\times m$ matrices $S=(s_{k,j})$ and $T=(t_{k,j})$, $0\le k,j\le m-1$, by

$$s_{k,j}=\delta(j-k-1)\quad\text{if }k\le m-2,\qquad s_{m-1,j}=-\binom{m}{j}(-x)^{m-j},$$
(4.3) $$t_{k,j}=x\,\delta(j-k)+(k+1)\,\delta(j-k-1).$$

For instance, if $m=4$, this means that

(4.4) $$S=\begin{pmatrix}0&1&0&0\\0&0&1&0\\0&0&0&1\\-x^4&4x^3&-6x^2&4x\end{pmatrix}\qquad\text{and}\qquad T=\begin{pmatrix}x&1&0&0\\0&x&2&0\\0&0&x&3\\0&0&0&x\end{pmatrix}.$$
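
As a sanity check (our addition, not part of the paper), the following sympy script verifies (4.2) and (4.3) for $m=4$, i.e. that $SL_0=L_0T=L_1$ and $S^2L_0=L_2$, and prints the determinants computed in Lemma 4.1 below:

    import sympy as sp

    x, t = sp.symbols('x t')
    m = 4

    # L_k has entries D^j x^(k+i) (formal j-th derivative), i, j = 0..m-1, cf. (4.1)
    def L(k):
        return sp.Matrix(m, m, lambda i, j: sp.diff(x**(k + i), x, j))

    # S from (4.3): ones next to the diagonal, last row from the expansion of (t-x)^m in (4.5)
    S = sp.zeros(m, m)
    for i in range(m - 1):
        S[i, i + 1] = 1
    c = sp.Poly((t - x)**m, t).all_coeffs()  # coefficients of t^m, ..., t^0
    for j in range(m):
        S[m - 1, j] = -c[m - j]              # s_{m-1,j} = -binom(m,j) (-x)^(m-j)

    # T from (4.3): x on the diagonal and 1, 2, ..., m-1 above it
    T = sp.diag(*[x]*m)
    for i in range(m - 1):
        T[i, i + 1] = i + 1

    assert sp.expand(S*L(0) - L(1)) == sp.zeros(m, m)
    assert sp.expand(L(0)*T - L(1)) == sp.zeros(m, m)
    assert sp.expand(S**2*L(0) - L(2)) == sp.zeros(m, m)
    print(sp.factor(S.det()), sp.factor(T.det()), L(0).det())  # x**4 x**4 12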

Lemma 4.1.

The matrices $S$, $T$, and $L_k$ are non-singular as matrices over the field of rational functions in $x$, (4.2) holds, and $\det S=\det T=x^m$. Moreover, $\det L_0=b_m$, where

$$b_m=1!\,2!\cdots(m-1)!.$$

Proof.

The identity of the first $m-1$ rows in $SL_k=L_{k+1}$ is trivial. The coefficients $s_{m-1,j}$ in the last row of the matrix $S$ satisfy the identity

(4.5) $$F(t)=(t-x)^m=\sum_{j=0}^m\binom{m}{j}t^j(-x)^{m-j}=t^m-\sum_{j=0}^{m-1}s_{m-1,j}\,t^j.$$

The identity of the last rows of $SL_0$ and $L_1$ means that

$$\sum_{k=0}^{m-1}s_{m-1,k}\,\ell_{k,j}=\ell_{m,j}\qquad\text{for all }j,$$

that is,

$$\sum_{k=0}^{m-1}s_{m-1,k}\,D^jx^k=D^jx^m,\qquad 0\le j\le m-1.$$

But this is (4.5) differentiated $j$ times with respect to $t$ and then with $t$ set equal to $x$.

We next prove that $L_kT=L_{k+1}$ for all $k\ge 0$. Taking into account the description of $T$ in (4.3), we see that $L_kT=L_{k+1}$ means precisely that

$$j\,\ell_{k,j-1}+x\,\ell_{k,j}=\ell_{k+1,j}\qquad\text{for }0\le j\le m-1\text{ and all }k\ge 0.$$

By (4.1), the assertion therefore follows from the formula

$$D^jx^{k+1}=D^j(x\cdot x^k)=x\,D^jx^k+j\,D^{j-1}x^k.$$

Now we can prove that $S^kL_0=L_k$ for all $k$ by making repeated use of the identity $SL_0=L_0T$. In fact,

(4.6) $$S^kL_0=S^{k-1}SL_0=S^{k-1}L_0T=S^{k-2}L_0T^2=\cdots=L_0T^k=L_k.$$

It is obvious that $L_0$ and $T$ are non-singular as matrices over the field of rational functions in $x$ and that $\det T=x^m$. Now it follows from (4.2) that all $L_k$ are non-singular. More precisely, it is easily seen from the definition of $L_0$ that $\det L_0=b_m=1!\,2!\cdots(m-1)!$, and hence $\det L_k=b_mx^{km}$. Since $S=L_0TL_0^{-1}$, we also have $\det S=\det T=x^m$. This completes the proof of Lemma 4.1. ∎

In the proof of Theorem 3.1, we shall have to consider matrices $M=M(x,y)$ with $2m$ columns and infinitely many rows that consist of two blocks, each of the same kind as the matrix $L$ above, the left block containing the variable $x$ and the right block containing the variable $y$. As before, we introduce the successive square submatrices, so that, for instance if $m=3$,

(4.7) $$M_0=\begin{pmatrix}1&0&0&1&0&0\\x&1&0&y&1&0\\x^2&2x&2&y^2&2y&2\\x^3&3x^2&6x&y^3&3y^2&6y\\x^4&4x^3&12x^2&y^4&4y^3&12y^2\\x^5&5x^4&20x^3&y^5&5y^4&20y^3\end{pmatrix}$$

and

$$M_1=\begin{pmatrix}x&1&0&y&1&0\\\vdots&&&&&\vdots\\x^6&6x^5&30x^4&y^6&6y^5&30y^4\end{pmatrix}.$$

Define the $2m\times 2m$ matrix $S=(s_{k,j})$ by means of a sequence of 1's next to the main diagonal,

$$s_{k,j}=\delta(j-k-1)\qquad\text{if }k\le 2m-2,$$

and the entries in the last row

$$s_{2m-1,j}=\sigma_{2m-j}(x,y),$$

where the polynomials $\sigma_{2m-j}$ are defined by the identity

(4.8) $$G(t)=(t-x)^m(t-y)^m=t^{2m}-\sum_{j=0}^{2m-1}t^j\sigma_{2m-j}.$$

For instance, $\sigma_{2m}=-x^my^m$ and $\sigma_1=m(x+y)$. The expression $\sigma_\nu(x,y)$ is up to sign equal to the elementary symmetric polynomial of degree $\nu$ in $2m$ variables evaluated at $(x,\dots,x,y,\dots,y)$. Furthermore, we define the matrix $T$ as the block diagonal matrix

(4.9) $$T=\begin{pmatrix}T_x&0\\0&T_y\end{pmatrix},$$

where $T_x$ is the $m\times m$ matrix defined by (4.3) and $T_y$ is the same with $y$ instead of $x$. For instance, if $m=3$, we have

(4.10) $$T_x=\begin{pmatrix}x&1&0\\0&x&2\\0&0&x\end{pmatrix}\qquad\text{and}\qquad T_y=\begin{pmatrix}y&1&0\\0&y&2\\0&0&y\end{pmatrix},$$

and

(4.11) $$S=\begin{pmatrix}0&1&0&0&0&0\\0&0&1&0&0&0\\0&0&0&1&0&0\\0&0&0&0&1&0\\0&0&0&0&0&1\\\sigma_6&\sigma_5&\sigma_4&\sigma_3&\sigma_2&\sigma_1\end{pmatrix},$$

where

$$\begin{aligned}\sigma_1&=3(x+y),\\\sigma_2&=-3(x^2+3xy+y^2),\\\sigma_3&=x^3+9x^2y+9xy^2+y^3,\\\sigma_4&=-3xy(x^2+3xy+y^2),\\\sigma_5&=3x^3y^2+3x^2y^3,\\\sigma_6&=-x^3y^3.\end{aligned}$$
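
These six expressions can be checked mechanically from (4.8); e.g. the following sympy lines (our check, not part of the paper) print $\sigma_1,\dots,\sigma_6$:

    import sympy as sp

    t, x, y = sp.symbols('t x y')
    G = sp.Poly((t - x)**3 * (t - y)**3, t)
    # By (4.8), sigma_{6-j} is minus the coefficient of t^j in G(t)
    for j in range(5, -1, -1):
        print('sigma_%d =' % (6 - j), sp.factor(-G.coeff_monomial(t**j)))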

Lemma 4.2.

The matrices $M_0$, $S$, and $T$ are non-singular, $\det S=\det T=x^my^m$, and

(4.12) $$M_k=S^kM_0=M_0T^k\qquad\text{for all }k\ge 0.$$

Proof.

Due to the block structure of the matrix $T$, the fact that $M_kT=M_{k+1}$ is an immediate consequence of (4.2). In fact, if we write $M_k=(A_k,B_k)$, then the identity $M_kT=M_{k+1}$ becomes

$$M_kT=(A_k,B_k)\begin{pmatrix}T_x&0\\0&T_y\end{pmatrix}=(A_kT_x,B_kT_y)=(A_{k+1},B_{k+1})=M_{k+1},$$

where the second last equality follows from (4.2).

The identity $SM_0=M_1$ can be proved in the same way as the identity $SL_0=L_1$ above. For the first $2m-1$ rows the assertion is trivial. For the first element in the last row the assertion means that

$$\sum_{j=0}^{2m-1}\sigma_{2m-j}\,x^j=x^{2m},$$

and this identity results if we set $t=x$ in (4.8). For the second element in the last row we first differentiate (4.8) with respect to $t$ and then set $t=x$. And so on up to the $m$-th element in the last row. For the last $m$ elements in the last row we argue similarly, but set $t=y$ instead.

We can now finish the proof of (4.12) in exactly the same way as we did the analogous step for the matrix $L_k$. Namely, reasoning as in (4.6), we get

$$S^kM_0=S^{k-1}SM_0=S^{k-1}M_0T=S^{k-2}M_0T^2=\cdots=M_0T^k=M_k.$$

The fact that $M_0$ is non-singular will follow from the next lemma, which gives an exact expression for the determinant of $M_0$. However, to see that $M_0$ is non-singular, it is enough to observe that the matrix $M_0(0,y)$ is non-singular. Indeed, after we have set $x=0$, it is easy to make elementary column operations so that this matrix takes the block diagonal form

$$\begin{pmatrix}D_{m-1}&0\\0&L_m(y)\end{pmatrix},$$

where $D_{m-1}$ is a diagonal matrix of positive integers and $L_m(y)$ is the matrix $L_m$ considered in Lemma 4.1, whose determinant is given there to be $b_my^{m^2}$. ∎

Remark 4.3.

The matrix identities described in Lemma 4.2 remain valid if a number of columns in the matrix $M$ are deleted from the right, so that $M$ consists of $m$ $x$-columns as before and only the first $r$ $y$-columns, $1\le r<m$. For instance, if $m=3$ and $r=2$,

$$M_0=\begin{pmatrix}1&0&0&1&0\\x&1&0&y&1\\x^2&2x&2&y^2&2y\\x^3&3x^2&6x&y^3&3y^2\\x^4&4x^3&12x^2&y^4&4y^3\end{pmatrix}$$

and

$$M_1=\begin{pmatrix}x&1&0&y&1\\\vdots&&&&\vdots\\x^5&5x^4&20x^3&y^5&5y^4\end{pmatrix},$$

then $S=M_1M_0^{-1}$ is a $5\times 5$ matrix analogous to (4.11) with

$$\begin{aligned}\sigma_5&=x^3y^2,\\\sigma_4&=-x^2y(2x+3y),\\\sigma_3&=x(x^2+6xy+3y^2),\\\sigma_2&=-3x^2-6xy-y^2,\\\sigma_1&=3x+2y.\end{aligned}$$

These facts are proved with exactly the same arguments as in Lemma 4.2, if only the expression $G(t)$ in (4.8) is replaced by

$$G(t)=(t-x)^m(t-y)^r=t^{m+r}-\sum_{j=0}^{m+r-1}t^j\sigma_{m+r-j}.$$

Lemma 4.4.

The determinant of the $2m\times 2m$ matrix $M_0$ is given by

$$\det M_0=b_m^2\,(y-x)^{m^2},$$

where $b_m=1!\,2!\cdots(m-1)!$.

Proof.

If we introduce the column $2m$-vectors

$$X_0=(1,x,x^2,\dots,x^{2m-1})^t,\qquad X_1=DX_0=(0,1,2x,3x^2,\dots,(2m-1)x^{2m-2})^t,$$

$X_2=D^2X_0$, etc., where the elements of $X_{k+1}$ are the formal derivatives of the corresponding elements of $X_k$, and let $Y_k$ have the analogous meaning with the variable $x$ replaced by $y$, then the matrix $M_0$ can be written as

$$M_0=M_0(x,y)=(X_0,X_1,\dots,X_{m-1},Y_0,Y_1,\dots,Y_{m-1}).$$

To simplify notation and increase readability, we will first consider the case $m=3$. To find the order of the zero of $D(x,y)=\det M_0(x,y)$ at $x=y$, we will study the polynomial $F(t)=D(x,x+t)$. Then note that

(4.13) $$\begin{cases}Y_0(x+t)=X_0+tX_1+\frac{1}{2}t^2X_2+\frac{1}{3!}t^3X_3+\frac{1}{4!}t^4X_4+\frac{1}{5!}t^5X_5,\\Y_1(x+t)=X_1+tX_2+\frac{1}{2}t^2X_3+\frac{1}{3!}t^3X_4+\frac{1}{4!}t^4X_5,\\Y_2(x+t)=X_2+tX_3+\frac{1}{2}t^2X_4+\frac{1}{3!}t^3X_5.\end{cases}$$

Recall that $M_0$ is the $6\times 6$ matrix $(X_0,X_1,X_2,Y_0,Y_1,Y_2)$. Replacing the columns $Y_0,Y_1,Y_2$ here with the respective expressions from (4.13), we see immediately that we can make elementary column operations to get rid of all terms containing $X_0$, $X_1$, and $X_2$ in the expressions for the $Y_j$. This shows that $\det M_0(x,x+t)$ can be written as

$$\det\Bigl(X_0,\,X_1,\,X_2,\;\tfrac{t^3}{3!}X_3+\tfrac{t^4}{4!}X_4+\tfrac{t^5}{5!}X_5,\;\tfrac{t^2}{2!}X_3+\tfrac{t^3}{3!}X_4+\tfrac{t^4}{4!}X_5,\;tX_3+\tfrac{t^2}{2!}X_4+\tfrac{t^3}{3!}X_5\Bigr).$$

This shows already that

$$\det M_0(x,x+t)=\mathcal{O}(t^{3+2+1})=\mathcal{O}(t^6)\qquad\text{as }t\to 0.$$

However, we can also get rid of the $X_3$ terms in the fourth and fifth columns by subtracting suitable multiples of the last column. And finally we can get rid of the $X_4$ term in the fourth column by subtracting a multiple of the fifth column from the fourth. Then we get

$$\det M_0(x,x+t)=\det\bigl(X_0,\,X_1,\,X_2,\;at^5X_5,\;bt^3X_4+\mathcal{O}(t^4),\;tX_3+\mathcal{O}(t^2)\bigr)$$

for some constants $a$ and $b$. This shows that

$$\det M_0(x,x+t)=\mathcal{O}(t^{5+3+1})=\mathcal{O}(t^9)\qquad\text{as }t\to 0,$$

which implies that

$$\det M_0(x,y)=c\,(y-x)^9$$

for some constant $c$. To compute the value of $c$ we can set $x=0$ in (4.7), which gives

$$\det M_0(0,y)=2\det\begin{pmatrix}y^3&3y^2&6y\\y^4&4y^3&12y^2\\y^5&5y^4&20y^3\end{pmatrix}=2y^9\det\begin{pmatrix}1&3&6\\1&4&12\\1&5&20\end{pmatrix}=4y^9.$$

It follows that

$$\det M_0(x,y)=4\,(y-x)^9.$$

If we perform the same computations for $M_0(x,y)$ with arbitrary $m$, we get, instead of the number $5+3+1=9$, the number

$$1+3+5+\cdots+(2m-1)=m\,\frac{1+2m-1}{2}=m^2.$$

And for the determinant of $M_0(0,y)$ we get

$$1!\,2!\,3!\cdots(m-1)!\,\det L_m.$$

It follows from Lemma 4.1 and formula (4.2) that

$$\det L_m=\det L_0\,(\det T_x)^m=1!\,2!\,3!\cdots(m-1)!\;x^{m^2}.$$

This completes the proof of Lemma 4.4. ∎
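
For $m=3$ the value $\det M_0=b_3^2(y-x)^9=4(y-x)^9$ can also be confirmed by a direct symbolic computation (our check, not part of the paper):

    import sympy as sp

    x, y = sp.symbols('x y')
    m = 3

    def block(var):
        # 2m x m block with entries D^j var^i, i = 0..2m-1, j = 0..m-1
        return sp.Matrix(2*m, m, lambda i, j: sp.diff(var**i, var, j))

    M0 = block(x).row_join(block(y))
    print(sp.factor(M0.det()))  # 4(y-x)^9; sympy may print it as -4*(x - y)**9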

The matrix

$$\Pi_0=(p_{j+k})_{j,k=0}^{2m-1}$$

can be written as a row of column vectors $(P_0,P_1,\dots,P_{2m-1})$, where $P_k$ is the column vector

(4.14) $$P_k=(p_k,p_{k+1},\dots,p_{k+2m-1})^t.$$

For $\xi,\eta\in\mathbb{R}^m$ we denote by $\zeta=(\xi,\eta)$ the column vector

$$\zeta=(\xi_1,\dots,\xi_m,\eta_1,\dots,\eta_m)^t.$$

Since the matrix $M_0$ is non-singular and $SM_0=M_0T$, the matrix $(\zeta,S\zeta,\dots,S^{2m-1}\zeta)$ is non-singular if and only if the matrix

(4.15) $$W=(\zeta,T\zeta,\dots,T^{2m-1}\zeta)$$

is non-singular. Therefore, we now study the matrix $W$.

Lemma 4.5.

The vectors $\zeta,T\zeta,\dots,T^{2m-1}\zeta$ are linearly independent in the vector space $\mathbb{C}(x,y)^{2m}$ over the field $\mathbb{C}(x,y)$ of rational functions in $x$ and $y$ if and only if $\xi_m\eta_m\neq 0$.

Proof.

The structure of the matrix $T_x$, see (4.10), shows that the vectors $T_x^k\xi$ belong to the subspace $\{\xi:\xi_m=0\}$ for all $k$ if $\xi_m=0$, and the analogous assertion is of course valid for $T_y$. Therefore, it is obvious that the condition $\xi_m\eta_m\neq 0$ is necessary for the vectors $\zeta,T\zeta,\dots,T^{2m-1}\zeta$ to be linearly independent. It remains to prove the sufficiency. This statement is a special case of a well-known fact from linear algebra, so we only sketch the proof here. Denote the vector space $\mathbb{C}(x,y)^{2m}$ by $V$ and the subspace generated by all vectors $T^k\zeta$ by $E$. It is enough to prove that $E=V$, because if for instance $T^{2m-1}\zeta$ can be written as a linear combination

$$T^{2m-1}\zeta=a_0\zeta+a_1T\zeta+\cdots+a_{2m-2}T^{2m-2}\zeta$$

with $a_j\in\mathbb{C}(x,y)$, then the same identity holds with $\zeta$ replaced by $T^k\zeta$ for all $k\ge 0$, and using this fact for $k=1,2,\dots$ shows that $E$ must then be contained in a $(2m-1)$-dimensional subspace of $V$. To prove that $E=V$ we first note that $TE\subset E$. Then we use the standard fact, valid for an arbitrary linear operator on a finite-dimensional space, that $E$ must be the sum of its generalized eigenspaces for $T$, which means in this case that

(4.16) $$E=E_x\oplus E_y,$$

where $E_x=\{u\in E:(T-x)^ku=0\text{ for some }k\}$ and $E_y$ is defined analogously. Define

$$V_x=\{u\in V:(T-x)^ku=0\text{ for some }k\}$$

and $V_y$ similarly. The fact that $\xi_m\neq 0$ is easily seen to imply that $E_x=V_x$, and in the same way $\eta_m\neq 0$ implies $E_y=V_y$. The fact that $E=V$ now follows immediately from (4.16).

By using the arguments of Lemma 4.4, the determinant of the $2m\times 2m$ matrix (4.15) can in fact be computed explicitly, and its value is

(4.17) $$\det W=c_m\,\xi_m^m\,\eta_m^m\,(y-x)^{m^2},$$

where $c_m>0$. We have $c_2=1$, $c_3=2^2b_3^2$, $c_4=3^23^2b_4^2$, and $c_5=4^26^24^2b_5^2$, where $b_m$ is the constant defined in Lemma 4.4. Let us explain the computation for $m=3$. By using the notation from the previous lemma, the transpose $W^t$ of $W$ can be written as

$$W^t=(\xi_0X_0+\xi_1X_1+\xi_2X_2,\;\xi_1X_0+2\xi_2X_1,\;\xi_2X_0,\;\eta_0Y_0+\eta_1Y_1+\eta_2Y_2,\;\eta_1Y_0+2\eta_2Y_1,\;\eta_2Y_0),$$

where the $X_j$ and $Y_j$ denote column 6-vectors. Since $\xi_2$ and $\eta_2$ are different from zero, we can get rid of the $X_0$ and $Y_0$ in the first, second, fourth, and fifth columns by elementary column operations and obtain

$$\det W^t=\det(\xi_1X_1+\xi_2X_2,\;2\xi_2X_1,\;\xi_2X_0,\;\eta_1Y_1+\eta_2Y_2,\;2\eta_2Y_1,\;\eta_2Y_0).$$

Similarly, we can now get rid of $X_1$ and $Y_1$ in the first and fourth columns, which gives

$$\begin{aligned}\det W=\det W^t&=\det(\xi_2X_2,\;2\xi_2X_1,\;\xi_2X_0,\;\eta_2Y_2,\;2\eta_2Y_1,\;\eta_2Y_0)\\&=4\xi_2^3\eta_2^3\det(X_2,X_1,X_0,Y_2,Y_1,Y_0)\\&=4\xi_2^3\eta_2^3\det M_0\\&=4\xi_2^3\eta_2^3\cdot 4(y-x)^9\\&=16\,\xi_2^3\eta_2^3\,(y-x)^9.\end{aligned}$$

It is clear that the analogous operations can be done in the general case and that this proves (4.17). ∎
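
For $m=2$, formula (4.17) with $c_2=1$ can be confirmed symbolically (our check, not part of the paper):

    import sympy as sp

    x, y, xi1, xi2, eta1, eta2 = sp.symbols('x y xi1 xi2 eta1 eta2')

    # T = diag(T_x, T_y) as in (4.9)-(4.10) with m = 2, and zeta = (xi1, xi2, eta1, eta2)^t
    T = sp.Matrix([[x, 1, 0, 0],
                   [0, x, 0, 0],
                   [0, 0, y, 1],
                   [0, 0, 0, y]])
    zeta = sp.Matrix([xi1, xi2, eta1, eta2])
    W = sp.Matrix.hstack(*[T**k * zeta for k in range(4)])  # cf. (4.15)
    print(sp.factor(W.det()))  # xi2**2*eta2**2*(x - y)**4, i.e. c_2 = 1 in (4.17)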

Remark 4.6.

The assertion of Lemma 4.5 is also valid if the matrices $T_x$ and $T_y$ have different dimensions, for instance if $T_x$ is $m\times m$ as before and $T_y$ is $r\times r$. Write

$$\zeta=(\xi_1,\dots,\xi_m,\eta_1,\dots,\eta_r)^t$$

in $\mathbb{R}^{m+r}$. The assertion is then that the vectors $\zeta,T\zeta,\dots,T^{m+r-1}\zeta$ are linearly independent in the vector space $\mathbb{C}(x,y)^{m+r}$ if and only if $\xi_m\eta_r\neq 0$. The proof of this statement is the same as the proof of Lemma 4.5 (cf. Remark 4.3).

5 Proving that ρ(ω)ρ(-ω) must be a polynomial

We will now use the matrices introduced in the previous section to formulate the equation system (3.2). Denote by $\tilde M$ the matrix with $2m$ columns and infinitely many rows that is obtained if $x$ and $y$ in $M=M(x,y)$ are replaced by $\rho(\omega)$ and $-\rho(-\omega)$, respectively. The notations $\tilde M_0,\tilde M_1$, and $\tilde S$ and $\tilde T$ will have the analogous meaning. The value of the determinant $\det M_0$ given by Lemma 4.4 shows that the determinant of $\tilde M_0$ is equal to

$$\det\tilde M_0(\omega)=b_m^2\,(\rho(\omega)+\rho(-\omega))^{m^2},$$

which is a positive continuous function. Hence $\tilde M_0$ is invertible in the ring of matrices over the ring of continuous functions. This fact will be important for us later. Moreover, we introduce the column $2m$-vector

$$Q=(q_0(\omega),q_1(\omega),\dots,q_{m-1}(\omega),q_0(-\omega),q_1(-\omega),\dots,q_{m-1}(-\omega))^t,$$

and for each $k\ge 0$ the column $2m$-vector of polynomials

$$P_k=(p_k(\omega),p_{k+1}(\omega),\dots,p_{k+2m-1}(\omega))^t,\qquad k=0,1,\dots.$$

The equation $\tilde M_0Q=P_0$ is now almost the same as the system of the first $2m$ equations from (3.2), but not quite, because of the factor $(-1)^j$ in the first term of (3.2). This inconvenience could of course have been avoided with a different definition of the matrix $M(x,y)$, but we wanted $M(x,y)$ to have as much symmetry as possible. The problem can easily be fixed by the introduction of one more matrix. Introduce the diagonal $2m\times 2m$ matrix

$$J=\operatorname{diag}(1,-1,\dots,(-1)^{m-1},1,-1,\dots,(-1)^{m-1}).$$

If $m=3$, then $\tilde M_0J$ is the matrix in (3.3), and $\tilde M_0JQ=P_0$ for arbitrary $m$. System (3.2) can now be written as

(5.1) $$\tilde M_kJQ=P_k,\qquad k\ge 0.$$

Using $\tilde S\tilde M_k=\tilde M_{k+1}$ and (5.1), we then get

(5.2) $$\tilde SP_k=\tilde S\tilde M_kJQ=\tilde M_{k+1}JQ=P_{k+1}\qquad\text{for all }k.$$

Our goal is to deduce information about the entries of $\tilde S$ from (5.2).

Incidentally, we note that solving (5.1) with $k=0$ for $Q$ shows that the assumptions of Theorem 3.1 imply that all $q_j(\omega)$ must be continuous, which we already proved in Lemma 2.1.

The entries of $P_k$ are homogeneous polynomials. The entries of the last row of $\tilde S$, which we shall denote by $\tilde\sigma_{2m}(\omega),\dots,\tilde\sigma_1(\omega)$, are defined by

$$\tilde\sigma_k(\omega)=\sigma_k(\rho(\omega),-\rho(-\omega)).$$

The identity (5.2) therefore means that

(5.3) $$\sum_{j=0}^{2m-1}\tilde\sigma_{2m-j}\,p_{j+k}=p_{2m+k},\qquad k\ge 0.$$
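
For $m=1$ the identity (5.3) takes a particularly transparent form (our illustration): then $\tilde\sigma_1=\rho(\omega)-\rho(-\omega)$ and $\tilde\sigma_2=\rho(\omega)\rho(-\omega)$, so (5.3) is the three-term recursion

$$p_{k+2}=\bigl(\rho(\omega)-\rho(-\omega)\bigr)\,p_{k+1}+\rho(\omega)\rho(-\omega)\,p_k,\qquad k\ge 0,$$

coming from $G(t)=(t-\rho(\omega))(t+\rho(-\omega))=t^2-\tilde\sigma_1t-\tilde\sigma_2$.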

We want to solve for the $\tilde\sigma_j(\omega)$ from this system in order to show that the $\tilde\sigma_j(\omega)$ must be rational functions. Therefore, the fact that the matrix

(5.4) $$\Pi_0(\omega)=(p_{i+j}(\omega))_{0\le i,j\le 2m-1}$$

is non-singular is of fundamental importance. Recalling that the matrix $\Pi_0(\omega)$ can be written as a row of column vectors (4.14) and that

$$P_k=\tilde S^k\tilde M_0JQ=\tilde M_0J\tilde T^kQ,$$

and hence $\Pi_0=\tilde M_0J\tilde W$ with

$$\tilde W=(Q,\tilde TQ,\dots,\tilde T^{2m-1}Q),$$

we see that $\det\Pi_0(\omega_0)\neq 0$ if and only if $q_{m-1}(\omega_0)q_{m-1}(-\omega_0)\neq 0$ by Lemma 4.5. More precisely, formula (4.17) shows that

(5.5) $$\det\Pi_0(\omega)=b_m^2\,c_m\,(\rho(\omega)+\rho(-\omega))^{2m^2}\,q_{m-1}(\omega)^m\,q_{m-1}(-\omega)^m.$$

Proposition 5.1.

Assume that assumption (3.2) of Theorem 3.1 is valid and that the function $q_{m-1}(\omega)$ is not identically zero. Then the matrix $\Pi_0(\omega)$ is non-singular.

For the proof we need the following lemma.

Lemma 5.2.

Assume that assumption (3.2) of Theorem 3.1 is valid and that the function $q_{m-1}(\omega)$ is not identically zero. Then the functions $\tilde\sigma_j(\omega)$ are rational functions and $\rho(\omega)$ is algebraic.

Proof.

To make the simple idea of the proof clear, we first assume that we already know that the matrix $\Pi_0(\omega)$ is non-singular. Then the linear system consisting of the first $2m$ of equations (5.3) can be solved for the “unknowns” $\tilde\sigma_k(\omega)$, which gives

$$\tilde\sigma_k(\omega)=\frac{F_k(\omega)}{G(\omega)},\qquad k=1,\dots,2m,$$

where $F_k(\omega)$ and $G(\omega)$ are homogeneous polynomials and

$$G(\omega)=\det\Pi_0(\omega).$$

Recalling that $\tilde\sigma_1(\omega),\dots,\tilde\sigma_{2m}(\omega)$ are, up to sign, the elementary symmetric polynomials in $m$ copies of each of the variables $\rho(\omega)$ and $-\rho(-\omega)$, we see that $\rho(\omega)$ and $\rho(-\omega)$ must be roots of a polynomial equation with polynomial coefficients, so $\rho(\omega)$ must be an algebraic function.

To consider the possibility that the determinant of $\Pi_0(\omega)$ is identically zero, we will argue similarly, but with linear systems with fewer unknowns and fewer equations. To begin with, Lemma 4.5 shows that in this case $q_{m-1}(\omega)q_{m-1}(-\omega)$ must be identically zero. At this point, we cannot exclude that also $q_{m-1}(\omega)q_{m-2}(-\omega)$ is identically zero. Let $r$ be the largest number for which $q_{m-1}(\omega)q_r(-\omega)$ is not identically zero. Choose a point $\omega_0$ where $q_{m-1}(\omega_0)q_r(-\omega_0)\neq 0$. For a while we shall have to consider $(m+r)$-vectors of numbers and polynomials. Set $d=m+r$ and introduce the notation $P_k^d$ for the column $d$-vector

$$P_k^d(\omega)=(p_k(\omega),\dots,p_{k+m+r-1}(\omega))^t.$$

By Remark 4.6, the fact that $q_{m-1}(\omega_0)q_r(-\omega_0)\neq 0$ implies that the vectors of real numbers

$$P_0^d(\omega),\dots,P_{m+r-1}^d(\omega)$$

are linearly independent in $\mathbb{R}^{m+r}$ for $\omega$ in some neighborhood of $\omega_0$, and hence the determinant of the corresponding matrix is different from zero. But this determinant is a polynomial function of $\omega$. Therefore, the $m+r$ vectors of polynomials $P_0^d,\dots,P_{m+r-1}^d$ are linearly independent over the field of rational functions in $\omega$. By Remark 4.3, the relationship between $P_k^d$ and $P_{k+1}^d$ is given by a matrix $\tilde S_d$, whose entries in the last row are the elementary symmetric polynomials in $m$ copies of $\rho(\omega)$ and $r$ copies of $-\rho(-\omega)$. We can therefore consider a linear system of $m+r$ equations

$$\tilde S_dP_k^d=P_{k+1}^d,\qquad k=0,\dots,m+r-1,$$

and reason as above to conclude that the symmetric polynomials in $\rho(\omega)$ and $-\rho(-\omega)$ must be rational functions of $\omega$. And this in turn implies that $\rho(\omega)$ must be an algebraic function. This completes the proof of Lemma 5.2. ∎

Proof of Proposition 5.1.

Recall the equation $P_0(\omega)=\tilde M_0(\omega)JQ(\omega)$, which expresses the $2m$-vector $P_0$ of polynomials in terms of the vector $Q$ of density functions $q_j$. The matrix $\tilde M_0(\omega)$ is pointwise invertible, and since $\rho(\omega)$ is algebraic, the entries of $\tilde M_0(\omega)$ are algebraic, and so are the entries of the inverse $\tilde M_0^{-1}(\omega)$. It follows that $Q(\omega)=J^{-1}\tilde M_0(\omega)^{-1}P_0(\omega)$ must be algebraic. In particular, $q_{m-1}(\omega)$ must be an algebraic function. By assumption, $q_{m-1}(\omega)$ is not identically zero. Hence $q_{m-1}(\omega)$ cannot vanish on an open set, and $q_{m-1}(\omega)q_{m-1}(-\omega)$ cannot vanish everywhere. Choose a point $\omega_0$ such that $q_{m-1}(\omega_0)q_{m-1}(-\omega_0)\neq 0$. By Lemma 4.5, we can now conclude that $\det\Pi_0(\omega)\neq 0$ in some neighborhood of $\omega_0$, and hence the polynomial $\det\Pi_0(\omega)$ cannot be the zero polynomial. This completes the proof. ∎

Lemma 5.3.

If the assumptions of Theorem 3.1 are valid, then $\rho(\omega)\rho(-\omega)$ must be a homogeneous quadratic polynomial.

Proof.

From Lemma 5.2 we learnt that all $\tilde\sigma_j(\omega)$ must be rational functions of $\omega$. Moreover, now that we know from Proposition 5.1 that the matrix $\Pi_0(\omega)$ is non-singular, the argument in the first few lines of the proof of Lemma 5.2 proves very easily that all $\tilde\sigma_j(\omega)$, and in particular $\tilde\sigma_{2m}(\omega)=-\rho(\omega)^m(-\rho(-\omega))^m$, must be rational functions of $\omega$. The idea of the proof that $\rho(\omega)\rho(-\omega)$ must be a polynomial is to consider $2m$ of equations (5.2) together and express the resulting equation as a matrix equation. Observe that (5.2) implies

(5.6) $$\tilde S^kP_j=P_{k+j}$$

for all $j$ and $k$. Generalizing (5.4), we introduce the matrices

$$\Pi_k(\omega)=(p_{k+i+j}(\omega))_{0\le i,j\le 2m-1},\qquad k=0,1,\dots.$$

Then we can combine $2m$ of equations (5.6) into the matrix identity

$$\tilde S\,\Pi_0=\Pi_1.$$

This equation contains no more information than (5.2), but it has the advantage that we can operate with $\tilde S$ and obtain the identity

(5.7) $$\tilde S^k\Pi_0=\Pi_k$$

for arbitrary $k$. Now we use the product law for determinants. The determinant of $\tilde S$ is $\pm\rho(\omega)^m\rho(-\omega)^m$ and the determinant of $\Pi_k$ is a polynomial in $\omega$ for every $k$. Denoting the determinant of $\Pi_0(\omega)$ by $d(\omega)$, we can conclude that

$$\rho(\omega)^{mk}\rho(-\omega)^{mk}\,d(\omega)\quad\text{must be a homogeneous polynomial for every }k.$$

Since $\rho(\omega)\rho(-\omega)$ is already known to be a rational function, it now follows that $\rho(\omega)\rho(-\omega)$ must be a homogeneous polynomial. Since $\rho(\omega)$ by definition is a homogeneous function of degree 1, it follows that $\rho(\omega)\rho(-\omega)$ must be a homogeneous quadratic polynomial. This completes the proof of Lemma 5.3. ∎

6 Proving that ρ(ω)-ρ(-ω) must be a polynomial

The arguments in Section 5 do not suffice for proving that the graph of $\rho(\omega)$ is a quadric. To explain why, we observe first that if the graph of $\rho(\omega)$ is a quadric, then $\rho(\omega)-\rho(-\omega)$ must be a linear function of $\omega$. To see this, assume that $p=\rho(\omega)$ satisfies an equation $F(p,\omega)=p^2+p\,q_1(\omega)+q_2(\omega)=0$, where $q_1$ is a linear function of $\omega$ and $q_2$ is a homogeneous quadratic polynomial. Since $p=-\rho(-\omega)$ satisfies the same equation, subtracting $F(-\rho(-\omega),\omega)=0$ from $F(\rho(\omega),\omega)=0$ and dividing by $\rho(\omega)+\rho(-\omega)$ gives

$$\rho(\omega)-\rho(-\omega)+q_1(\omega)=0,$$

which proves the claim.

For a small $\varepsilon>0$, we consider the function

$$\rho(\xi)=\frac{(\varepsilon^2\xi_1^6+|\xi|^6)^{1/2}}{|\xi|^2}+\frac{\varepsilon\xi_1^3}{|\xi|^2},\qquad\xi\in\mathbb{R}^2\setminus\{0\}.$$

The function $\rho(\xi)$ is homogeneous of degree 1 and tends to $|\xi|$ as $\varepsilon$ tends to zero. The latter function is strictly convex, and hence $\rho(\xi)$ must be strictly convex if $\varepsilon$ is small enough. Moreover,

$$\rho(\xi)\rho(-\xi)=\frac{\varepsilon^2\xi_1^6+|\xi|^6}{|\xi|^4}-\frac{\varepsilon^2\xi_1^6}{|\xi|^4}=\frac{|\xi|^6}{|\xi|^4}=|\xi|^2$$

is a polynomial, and

$$\rho(\xi)-\rho(-\xi)=\frac{2\varepsilon\xi_1^3}{|\xi|^2}$$

is a rational function. But $\rho(\xi)-\rho(-\xi)$ is not a linear function.

This example shows that we have to deduce more information about ρ from the assumptions of Theorem 3.1. Up to now we used properties of the determinant of a matrix. We shall now also use properties of the trace of a matrix.

Lemma 6.1.

Under the assumptions of Theorem 3.1, the trace of $\tilde T$, i.e. $\tilde\sigma_1(\omega)=m(\rho(\omega)-\rho(-\omega))$, must be a polynomial.

Proof.

The idea is to compare two different expressions for the trace of $\tilde T^k$ for large $k$. If $k$ is even, the trace of $\tilde T^k$ is equal to

$$m\,\rho(\omega)^k+m\,(-\rho(-\omega))^k=m\,\rho(\omega)^k+m\,\rho(-\omega)^k.$$

By Lemma 5.2, we know already that $\tilde\sigma_1=m(\rho(\omega)-\rho(-\omega))$ is a rational function. To shorten formulas we write for a moment $\check\rho=\rho(-\omega)$ as before. Assume that

$$\rho-\check\rho=\frac{r}{s},$$

where $r$ and $s$ are polynomials without common factor and the degree of $s$ is greater than or equal to 1. We will show that this assumption leads to a contradiction. Since $\rho\check\rho$ is a polynomial, which we denote by $q$, we can write

$$\rho^2+\check\rho^2=(\rho-\check\rho)^2+2\rho\check\rho=\frac{r^2}{s^2}+2q=\frac{q_1}{s^2},$$

where $q_1$ is a polynomial that lacks a common factor with $s$. Similarly,

$$\rho^4+\check\rho^4=(\rho^2+\check\rho^2)^2-2\rho^2\check\rho^2=\frac{q_1^2}{s^4}-2q^2=\frac{q_2}{s^4},$$

where $q_2$ is another polynomial without common factor with $s$. Continuing in this way, we obtain for arbitrary $k$ of the form $2^\nu$ that

(6.1) $$\rho^k+\check\rho^k=\frac{q_k}{s^k},$$

where $q_k$ is a polynomial without common factor with $s$.

On the other hand, the trace of $\tilde T^p$ is equal to the trace of $\tilde S^p$, and $\tilde S^p=\Pi_p\Pi_0^{-1}$ for arbitrary $p$ by (5.7). The trace of $\tilde S^p$ is equal to the coefficient of $\lambda^{2m-1}$ in the polynomial

$$\det(\tilde S^p-\lambda I)=\det(\Pi_p\Pi_0^{-1}-\lambda I).$$

Here $\det\Pi_0$ is a homogeneous polynomial of degree $d=2m(2m-1)$. Since the entries of $\Pi_p$ are polynomials, the entries of the matrix $\Pi_p\Pi_0^{-1}$ are rational functions whose denominators have degree at most $d$. The important fact is that this bound is independent of $p$. It follows that the trace of $\tilde T^p$, which equals the trace of $\Pi_p\Pi_0^{-1}$, is a rational function whose denominator has degree at most $d$. Comparing this with (6.1), we have a contradiction and the lemma is proved. ∎

Rest of the proof of Theorem 3.1.

By Lemma 6.1, we know that $\rho-\check\rho$ is a polynomial. But $\rho$ and $\check\rho$ are restrictions to $S^{n-1}$ of homogeneous functions on $\mathbb{R}^n\setminus\{0\}$ of degree 1. Hence $\rho-\check\rho$ is a homogeneous polynomial of degree 1, in other words a linear function, say $\rho-\check\rho=a_1\omega_1+\cdots+a_n\omega_n=a\cdot\omega$. Multiplying by $\rho$, we obtain

$$\rho^2-\rho\check\rho=\rho\;a\cdot\omega.$$

Recalling that ρρˇ is a quadratic polynomial, we now see that p=ρ(ω) and ω=(ω1,ω2) satisfy a quadratic equation as claimed.

Alternatively, we can observe that a translation of coordinates $x\mapsto x+\frac{a}{2}$ changes $\rho(\omega)$ to $\rho(\omega)-\omega\cdot\frac{a}{2}$ and $\check\rho(\omega)$ to $\rho(-\omega)+\omega\cdot\frac{a}{2}$, so that $\rho-\check\rho$ will be replaced by

$$\rho(\omega)-\omega\cdot\tfrac{a}{2}-\Bigl(\rho(-\omega)+\omega\cdot\tfrac{a}{2}\Bigr)=\rho(\omega)-\rho(-\omega)-\omega\cdot a=\omega\cdot a-\omega\cdot a=0.$$

Thus $\rho(\omega)=\rho(-\omega)$, i.e. $D=-D$, and $\rho(\omega)^2$ is a quadratic polynomial in a suitably chosen coordinate system, so the boundary of $D$ is an ellipsoid. This completes the proof of Theorem 3.1 and hence of Theorem 1.1. ∎

7 The support of a Radon transform

If $\Sigma$ is a closed bounded subset of $\mathbb{R}^n$, then the complement $\complement\Sigma$ contains a unique unbounded, connected component, which we shall denote by $U(\Sigma)$. We shall also need an abbreviation for the complement of $U(\Sigma)$, namely

(7.1) $$F(\Sigma)=\complement U(\Sigma).$$

This definition makes sense also if $\Sigma$ is replaced by a subset of $\mathbb{P}^n$. For instance, if $K$ is the union of the boundaries of two disjoint closed disks, then $F(K)$ is the union of the disks. So, intuitively, the effect of the operation $F$ is to fill the holes in $K$. It is obvious that $F$ is a hull operation, $\Sigma\subset F(\Sigma)=F(F(\Sigma))$, and that $\Sigma_1\subset\Sigma_2$ implies $F(\Sigma_1)\subset F(\Sigma_2)$.

Moreover, for $x\in\mathbb{R}^n$ we will denote by $\hat x$ the set $\{L\in\mathbb{P}^n : x\in L\}$ of hyperplanes that contain $x$, and similarly for a subset $K\subset\mathbb{R}^n$ we define $\hat K$ as the union of all $\hat x$ for $x\in K$. Analogously, for $L\in\mathbb{P}^n$ we could define $\hat L$ as the subset of $\mathbb{R}^n$ consisting of all $x\in L$, but we shall not need this notion here.

If $K$ is convex (and bounded), then the complements of $K$ and $\hat K$ are of course connected, so $F(K)=K$ and $F(\hat K)=\hat K$. For a general compact set $K$ neither $K$ nor $\hat K$ is necessarily connected. But

(7.2) $$F(\hat K)=\widehat{\operatorname{ch}K},$$

because it is obvious that $U(\hat K)=U(\widehat{\operatorname{ch}K})$ and the latter is equal to $\complement\,\widehat{\operatorname{ch}K}$. Here we have used the common notation $\operatorname{ch}K$ for the convex hull of $K$.

If $f$ is a compactly supported distribution in $\mathbb{R}^n$, it is obvious that

(7.3) $$F(\operatorname{supp}Rf)\subset\widehat{\operatorname{ch}\operatorname{supp}f},$$

where the right-hand side should be interpreted as $\hat K$ with $K=\operatorname{ch}\operatorname{supp}f$. To verify (7.3) we note that if $L$ belongs to the complement of the right-hand side, then first of all $L\notin\operatorname{supp}Rf$, but in addition $L$ must belong to the unbounded component of the complement of the support of $Rf$.

The next theorem asserts that there is actually equality in (7.3).

Theorem 7.1.

For a compact subset $\Sigma$ of $\mathbb{P}^n$ we define the set $F(\Sigma)$ by (7.1). If $f$ is a compactly supported distribution in $\mathbb{R}^n$, then

$$F(\operatorname{supp}Rf)=\widehat{\operatorname{ch}\operatorname{supp}f}.$$

This shows in particular that, up to possible “holes” in the support, we can characterize the subsets of $\mathbb{P}^n$ that can be the support of the Radon transform of a compactly supported distribution.

Proof of Theorem 7.1.

It remains to prove

$$F(\operatorname{supp}Rf)\supset\widehat{\operatorname{ch}\operatorname{supp}f}.$$

Taking complements, we get the equivalent inclusion

(7.4) $$U(\operatorname{supp}Rf)\subset\complement\hat K,$$

where again $K=\operatorname{ch}\operatorname{supp}f$. To prove (7.4) take an arbitrary hyperplane $L_0$ in the set on the left-hand side. By the definition of that set, there is a continuous path $L(t)$, $t\in[0,1]$, $L(0)=L_0$, $L(1)=L_1$, inside the open set of hyperplanes in the complement of $\operatorname{supp}Rf$ that connects $L_0$ to “infinity”, and hence in particular to a hyperplane $L_1$ that does not intersect the convex hull of the support of $f$. Clearly, $f$ vanishes in some neighborhood of $L_1$. By the choice of the path $L(t)$, we know that $Rf(L)$ vanishes in a neighborhood of $L(t)$ for each $t$. By continued use of the local unique continuation property of solutions to the equation $Rf(L)=0$ as given by Strichartz [6], we can now infer that $f$ must vanish in the union of all the $L(t)$ (for an example of this kind of application of the Strichartz theorem, see, e.g., [1, Theorem 3.1]). This shows that $L_0$ does not meet the convex hull of the support of $f$, which completes the proof. ∎

Acknowledgements

The author is grateful to Rikard Bögvad for helpful discussions on the contents of Section 4.

References

[1] J. Boman, A local uniqueness theorem for weighted Radon transforms, Inverse Probl. Imaging 4 (2010), 631–637. DOI: 10.3934/ipi.2010.4.631.

[2] J. Boman, A hypersurface containing the support of a Radon transform must be an ellipsoid. I: The symmetric case, J. Geom. Anal. (2020). DOI: 10.1007/s12220-020-00372-8.

[3] S. Helgason, The Radon Transform, Birkhäuser, Boston, 1980. DOI: 10.1007/978-1-4899-6765-7.

[4] L. Hörmander, The Analysis of Linear Partial Differential Operators. I, Springer, Berlin, 1983.

[5] F. Natterer, The Mathematics of Computerized Tomography, Teubner, Stuttgart, 1986. DOI: 10.1007/978-3-663-01409-6.

[6] R. S. Strichartz, Radon inversion–variations on a theme, Amer. Math. Monthly 89 (1982), 377–384. DOI: 10.2307/2321649.

Received: 2020-10-21
Accepted: 2020-12-08
Published Online: 2021-02-02
Published in Print: 2021-06-01

© 2021 Walter de Gruyter GmbH, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
