Article Open Access

On second-order linear Stieltjes differential equations with non-constant coefficients

  • Francisco J. Fernández, Ignacio Márquez Albés and F. Adrián Fernández Tojo
Published/Copyright: June 19, 2024

Abstract

In this work, we define the notions of Wronskian and simplified Wronskian for Stieltjes derivatives and study some of their properties in a similar manner to the context of time scales or the usual derivative. Later, we use these tools to investigate second-order linear differential equations with Stieltjes derivatives to find linearly independent solutions, as well as to derive the variation of parameters method for problems with g -continuous coefficients. This theory is later illustrated with some examples such as the study of the one-dimensional linear Helmholtz equation with piecewise-constant coefficients.

MSC 2010: 26A24; 34A12; 34A30; 34A36

1 Introduction

The Wronskian has been a very useful tool in the theory of differential equations since its discovery [1]. This instrument allows us to check whether a given family of $n$ solutions of an order-$n$ differential equation is linearly independent and, in fact, if the value of the Wronskian is known, it can also be used to find an $n$th linearly independent solution of the differential equation of order $n$ provided $n-1$ linearly independent solutions are known. Wronskians are also important tools in the variation of parameters method, where the derivative of the parameters is expressed in terms of the Wronskian.

These classical elements have also a role in newer theories of differential equations, such as the theory of Stieltjes differential equations, as this article will show. Unfortunately, the straightforward nature of the computations in the classical case (both regarding the Wronskian and the variation of parameters method) is not replicable in this setting due to the different nature of the product rule ([2, Proposition 3.9]) which, in this case, reads

$$(f_1 f_2)'_g(t) = (f_1)'_g(t)\,f_2(t^*) + (f_2)'_g(t)\,f_1(t^*) + (f_1)'_g(t)\,(f_2)'_g(t)\,\Delta g(t^*),$$

where $g$ is a nondecreasing and left-continuous function defining the Stieltjes derivative, $\Delta g$ denotes the jump of $g$ at a given point, and $t^*$ is a point that depends on $t$. The authors have dedicated another work to explore in great detail the caveats and consequences of this product rule [3].

In this article, we derive the definition of the Wronskian and the variation of parameters method in the context of Stieltjes calculus, taking into account the difficulties that arise from this theory and illustrating the applicability of the method with some examples. This endeavor will lead to the necessity of defining what we call a simplified Wronskian, as well as the study of a special family of functions which, despite having low regularity, preserve the smoothness of a function when multiplied by it – see Corollary 2.20.

Given the existing relations between Stieltjes differential equations and other differential problems ([4, Section 8]), our work draws from the classical theory as well as from the theory of dynamic equations in time scales, in which a notion of Wronskian also appears [5]. Remarkably, in this work, we will be able to apply our theory to the study of second-order differential equations with non-constant coefficients, which cannot be found in [5]. Also, we would like to acknowledge that the notion of Wronskian for the case of Stieltjes differential equations also appears in the Master Thesis [6], although in that work, it is not studied with an everywhere defined derivative, which is necessary for the finer points of the theory we develop below.

To reach our goal, in Section 2, we give a brief overview of Stieltjes calculus in order to set the basis for our research, as well as introduce some new tools that are important for the work ahead, such as a version of the integration by parts formula or some results concerning the regularity of maps. In Section 3, we introduce the Wronskian and its simplified counterpart, presenting some of their basic properties. We apply the Wronskian and the variation of parameters method to study second-order Stieltjes differential equations in Section 4, and, finally, in Section 5, we provide three examples to which we apply the theory developed. In particular, we study the one-dimensional linear Helmholtz equation with piecewise-constant coefficients and we show that the solution of the corresponding homogeneous equation arises naturally through the lens of Stieltjes differential problems, which is particularly remarkable as this function is not twice continuously differentiable in the usual setting, but it does present the corresponding regularity in the Stieltjes sense.

2 A brief overview of Stieltjes calculus

Let $g:\mathbb{R}\to\mathbb{R}$ be a nondecreasing and left-continuous function, which we shall refer to as a derivator. We shall denote by $\mu_g$ the Lebesgue–Stieltjes measure associated with $g$, given by

$$\mu_g([c,d)) = g(d) - g(c),\quad c,d\in\mathbb{R},\ c<d,$$

see [7–9]. We will use the term g-measurable, with respect to a set or function, to refer to its $\mu_g$-measurability in the corresponding sense. Let $\mathrm{L}^1_g(X;\mathbb{C})$ be the set of Lebesgue–Stieltjes $\mu_g$-integrable functions with values in $\mathbb{C}$ on a $g$-measurable set $X$, whose integral we denote by

$$\int_X f(s)\,\mathrm{d}\mu_g(s),\quad f\in \mathrm{L}^1_g(X;\mathbb{C}).$$

Similarly, we will talk about properties holding g-almost everywhere on a set X (shortened to g -a.e. in X ), or holding for g -almost all (or simply, g -a.a.) x X , as a simplified way to express that they hold μ g -almost everywhere in X or for μ g -almost all x X , respectively.

Define the sets

$$C_g = \{t\in\mathbb{R} : g \text{ is constant on } (t-\varepsilon,t+\varepsilon) \text{ for some } \varepsilon>0\},\qquad D_g = \{t\in\mathbb{R} : \Delta g(t)>0\},$$

where $\Delta g(t) = g(t^+) - g(t)$, $t\in\mathbb{R}$, and $g(t^+)$ denotes the right-hand side limit of $g$ at $t$. First, observe that $C_g\cap D_g = \emptyset$. Furthermore, as pointed out in [10], the set $C_g$ has null $g$-measure and is open in the usual topology of the real line, so it can be uniquely expressed as a countable union of open disjoint intervals, say

(2.1) $$C_g = \bigcup_{n\in\Lambda} (a_n,b_n),$$

where $\Lambda\subset\mathbb{N}$. With this notation, we introduce the sets $N_g^-$ and $N_g^+$ of [11], defined as

$$N_g^- = \{a_n : n\in\Lambda\}\setminus D_g,\qquad N_g^+ = \{b_n : n\in\Lambda\}\setminus D_g,\qquad N_g = N_g^-\cup N_g^+.$$

Before moving on to the study of the Stieltjes derivative, we present a result that contains some basic properties of the map Δ g that will be relevant for the work ahead.

Proposition 2.1

[3, Proposition 3.1] For each $t\in\mathbb{R}$, we have that

(2.2) $$\lim_{s\to t^-} \Delta g(s) = \lim_{s\to t^+} \Delta g(s) = 0.$$

In particular, $\Delta g$ is a regulated function, Borel measurable and g-measurable.

After these considerations, we are finally in a position to define the Stieltjes derivative of a function on an interval as presented in [2], where the derivative is defined on the whole domain of the function, unlike in other papers such as [4,10,12], where they exclude the points of C g . This everywhere defined derivative is what will allow us to consider the second-order derivative in a general setting, which is crucial, for instance, when g is a step function, such as is the case when g codifies the difference operator of difference equations. For a detailed discussion on the properties and implications of this everywhere defined derivative, see [2].

In order to present the derivative, we recall the hypotheses required in the mentioned paper, which we will also assume throughout this work. Namely, we will consider some $T>0$ and we will assume that $0\notin D_g\cup N_g$ and $T\notin N_g^+\cup D_g\cup C_g$. A careful reader might observe that throughout [2] it is also required that $g(0)=0$; however, this condition can easily be avoided by redefining the map $g$ if necessary, so we will not be considering it.

Definition 2.2

[2, Definition 3.7] We define the Stieltjes derivative, or g-derivative, of a map $f:[0,T]\to\mathbb{C}$ at a point $t\in[0,T]$ as

$$f'_g(t) = \begin{cases} \displaystyle\lim_{s\to t} \frac{f(s)-f(t)}{g(s)-g(t)}, & t\notin D_g\cup C_g,\\[2mm] \displaystyle\lim_{s\to t^+} \frac{f(s)-f(t)}{g(s)-g(t)}, & t\in D_g,\\[2mm] \displaystyle\lim_{s\to b_n^+} \frac{f(s)-f(b_n)}{g(s)-g(b_n)}, & t\in C_g,\ t\in(a_n,b_n), \end{cases}$$

where a n , b n are as in (2.1), provided the corresponding limits exist. In that case, we say that f is g-differentiable at t.

Remark 2.3

For $t\in N_g\cup\{0,T\}$, the corresponding limit in the definition of the $g$-derivative at $t$ must be understood in the sense explained in [12, Remark 2.2]; that is, the Stieltjes derivative at such points is computed as

$$f'_g(t) = \begin{cases} \displaystyle\lim_{s\to t^+} \frac{f(s)-f(t)}{g(s)-g(t)}, & t\in N_g^+\cup\{0\},\\[2mm] \displaystyle\lim_{s\to t^-} \frac{f(s)-f(t)}{g(s)-g(t)}, & t\in N_g^-\cup\{T\}, \end{cases}$$

provided the corresponding limit exists. Similarly, as pointed out in [4], $f$ is $g$-differentiable at $t\in D_g$ if and only if $f(t^+)$ exists and, in that case,

$$f'_g(t) = \frac{f(t^+)-f(t)}{\Delta g(t)}.$$
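The jump-point formula is easy to check numerically. The following sketch is a toy setup of our own (not taken from the paper): a derivator with a single unit jump at $t=0$ and a step function $f$, with $f(t^+)$ approximated by evaluating slightly to the right of $t$.

```python
# Toy illustration (assumed data): at a jump point t in D_g, the Stieltjes
# derivative reduces to f'_g(t) = (f(t+) - f(t)) / Delta g(t).

def g(t):
    """Left-continuous derivator with a single jump of size 1 at t = 0."""
    return t if t <= 0 else t + 1.0

def f(t):
    """A function that jumps together with g: f(0) = 2, f(0+) = 5."""
    return 2.0 if t <= 0 else 5.0

def stieltjes_derivative_at_jump(f, g, t, eps=1e-9):
    # difference quotient from the right, as in Remark 2.3
    return (f(t + eps) - f(t)) / (g(t + eps) - g(t))

# f'_g(0) = (f(0+) - f(0)) / Delta g(0) = (5 - 2) / 1 = 3
print(stieltjes_derivative_at_jump(f, g, 0.0))
```

Note that an ordinary derivative of $f$ at $0$ does not exist, while the Stieltjes quotient is perfectly well behaved because $g$ jumps together with $f$.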

Remark 2.4

It is possible to further simplify the definition of the Stieltjes derivative at a point $t\in[0,T]$ by defining

$$t^* = \begin{cases} t, & t\notin C_g,\\ b_n, & t\in(a_n,b_n)\subset C_g, \end{cases}$$

with $a_n,b_n$ as in (2.1). With this notation, we have that

$$f'_g(t) = \begin{cases} \displaystyle\lim_{s\to t} \frac{f(s)-f(t)}{g(s)-g(t)}, & t\notin D_g\cup C_g,\\[2mm] \displaystyle\lim_{s\to t^{*+}} \frac{f(s)-f(t^*)}{g(s)-g(t^*)}, & t\in D_g\cup C_g, \end{cases}$$

provided the corresponding limit exists. Note that the information in Remark 2.3 should still be taken into account. From now on, given a function $f$, we denote by $f^*$ the function defined as $f^*(t) = f(t^*)$. For instance, $\Delta g^*(t) = \Delta g(t^*)$. Observe that $\{t\in[0,T] : t\neq t^*\}\subset C_g$; therefore,

(2.3) $$0 \le \mu_g(\{t\in[0,T] : t\neq t^*\}) \le \mu_g(C_g) = 0.$$

The following result, [2, Proposition 3.9], includes some basic properties of this derivative.

Proposition 2.5

Let $t\in[0,T]$. If $f_1,f_2:[0,T]\to\mathbb{C}$ are g-differentiable at $t$, then:

  • The function $\lambda_1 f_1+\lambda_2 f_2$ is g-differentiable at $t$ for any $\lambda_1,\lambda_2\in\mathbb{C}$ and

    $$(\lambda_1 f_1+\lambda_2 f_2)'_g(t) = \lambda_1 (f_1)'_g(t) + \lambda_2 (f_2)'_g(t).$$

  • (Product rule). The product $f_1f_2$ is g-differentiable at $t$ and

    $$(f_1 f_2)'_g(t) = (f_1)'_g(t)\,f_2(t^*) + (f_2)'_g(t)\,f_1(t^*) + (f_1)'_g(t)\,(f_2)'_g(t)\,\Delta g(t^*).$$

  • (Quotient rule). If $f_2(t^*)\,\big(f_2(t^*) + (f_2)'_g(t)\,\Delta g(t^*)\big)\neq 0$, the quotient $f_1/f_2$ is g-differentiable at $t$ and

    $$\left(\frac{f_1}{f_2}\right)'_g(t) = \frac{(f_1)'_g(t)\,f_2(t^*) - (f_2)'_g(t)\,f_1(t^*)}{f_2(t^*)\,\big(f_2(t^*) + (f_2)'_g(t)\,\Delta g(t^*)\big)}.$$
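The extra term in the product rule can be observed numerically. The sketch below uses assumed toy data (a derivator with a unit jump at $t=0$, where $t^*=t$, and two step functions) to verify the product rule of Proposition 2.5 at the jump:

```python
# Numerical check (toy data) of the product rule at a jump point t = 0, where
# t* = t and Delta g(t*) = 1:
# (f1 f2)'_g(t) = (f1)'_g(t) f2(t) + (f2)'_g(t) f1(t) + (f1)'_g(t)(f2)'_g(t) Dg(t)

def g(t):
    return t if t <= 0 else t + 1.0       # unit jump at 0

def d_g(f, t, eps=1e-9):
    """Difference quotient from the right; equals f'_g(t) for t in D_g."""
    return (f(t + eps) - f(t)) / (g(t + eps) - g(t))

f1 = lambda t: 2.0 if t <= 0 else 5.0     # (f1)'_g(0) = 3
f2 = lambda t: 1.0 if t <= 0 else 4.0     # (f2)'_g(0) = 3

t, dg = 0.0, 1.0
lhs = d_g(lambda s: f1(s) * f2(s), t)
rhs = d_g(f1, t) * f2(t) + d_g(f2, t) * f1(t) + d_g(f1, t) * d_g(f2, t) * dg
print(lhs, rhs)   # both close to 18
```

Without the third term, the right-hand side would give $3\cdot 1 + 3\cdot 2 = 9$, far from the correct value $18$; this is precisely the obstruction discussed in the Introduction.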

The following result presents some conditions ensuring that the map $\Delta g^*$ in the product rule is $g$-differentiable. This will be a fundamental result for the variation of parameters method. In what follows, $X'$ shall denote the derived set of a set $X$.

Proposition 2.6

[3, Corollary 4.4] Consider the sets

$$D_1 = \{t\in[0,T]\cap N_g^- : t\in (D_g\cap[0,t))'\},\qquad D_2 = \{t\in[0,T]\cap N_g^+ : t\in (D_g\cap(t,T])'\},\qquad D_3 = \{t\in[0,T]\setminus(N_g\cup D_g) : t\in (D_g\cap[0,T])'\},$$

and assume that

(2.4) $$\lim_{\substack{s\to t\\ s\in D_g\cap[0,T]}} \frac{\Delta g(s)}{g(s)-g(t)} = 0,\quad \textit{for all } t\in D_1\cup D_2\cup D_3.$$

Then, $\Delta g^*|_{[0,T]}$ is g-differentiable on $[0,T]$ and $(\Delta g^*)'_g = -\chi^*_{D_g}$, where $\chi_{D_g}$ denotes the indicator function of $D_g$.

In the context of Stieltjes calculus, we also find a concept of continuity related to the map g .

Definition 2.7

[4, Definition 3.1] A function $f:[0,T]\to\mathbb{C}$ is g-continuous at a point $t\in[0,T]$, or continuous with respect to g at $t$, if for every $\varepsilon>0$, there exists $\delta>0$ such that

$$|f(t)-f(s)| < \varepsilon,\quad \text{for all } s\in[0,T] \text{ with } |g(t)-g(s)|<\delta.$$

If $f$ is g-continuous at every point $t\in[0,T]$, we say that $f$ is g-continuous on $[0,T]$. We denote by $\mathrm{C}_g([0,T];\mathbb{C})$ the set of g-continuous functions on $[0,T]$, and by $\mathrm{BC}_g([0,T];\mathbb{C})$ the set of bounded g-continuous functions on $[0,T]$.

Remark 2.8

Observe that we distinguish between $\mathrm{C}_g([0,T];\mathbb{C})$ and $\mathrm{BC}_g([0,T];\mathbb{C})$ because, as pointed out in [13, Example 3.19], g-continuous functions on $[0,T]$ need not be bounded. It is important to note that, as explained in [13, Example 3.23], g-differentiable functions need not be g-continuous either.

Remark 2.9

The set $\mathrm{BC}_g([0,T];\mathbb{C})$ equipped with the supremum norm $\|\cdot\|_0$ is a Banach space. As such, it is possible to talk about linear independence in $\mathrm{BC}_g([0,T];\mathbb{C})$. We shall say that $v_1,v_2\in \mathrm{BC}_g([0,T];\mathbb{C})$ are linearly dependent if there exist $c_1,c_2\in\mathbb{C}\setminus\{0\}$ such that

$$c_1 v_1(t) + c_2 v_2(t) = 0,\quad t\in[0,T].$$

Otherwise, we say that $v_1$ and $v_2$ are linearly independent.

Naturally, we have that the sum and product of g-continuous functions are g-continuous. Similarly, the quotient of two g-continuous functions is also g-continuous provided that the function in the denominator does not vanish [3, Lemma 2.14]. The following result describes some other basic properties of g-continuous functions. It can be deduced directly from [4, Proposition 3.2], in which the same information is presented for real-valued functions.

Proposition 2.10

If $f:[0,T]\to\mathbb{C}$ is g-continuous on $[0,T]$, then

  • f is continuous from the left at every $t\in(0,T]$;

  • if $g$ is continuous at $t\in[0,T)$, then so is $f$;

  • if $g$ is constant on some $[\alpha,\beta]\subset[0,T]$, then so is $f$.

Remark 2.11

As a direct consequence of Proposition 2.10, we see that if $f\in \mathrm{C}_g([0,T];\mathbb{C})$, then $f^* = f$. Indeed, let $t\in[0,T]$ and let us show that $f(t) = f^*(t)$. If $t\notin C_g$, this is trivial as $t^*=t$. For $t\in C_g$, $g$ is constant on $[t,t^*]$ and, thus, so is $f$; hence $f(t) = f(t^*) = f^*(t)$.

In particular, this means that if $f_1,f_2\in \mathrm{C}_g([0,T];\mathbb{C})$ are g-differentiable at $t$, then

$$(f_1 f_2)'_g(t) = (f_1)'_g(t)\,f_2(t) + (f_2)'_g(t)\,f_1(t) + (f_1)'_g(t)\,(f_2)'_g(t)\,\Delta g(t^*).$$

Next we introduce the concept of g -absolute continuity, which is the extension of the notion of absolute continuity to the context of Stieltjes calculus. This definition was presented in [10] and it connects the Stieltjes derivative and the Lebesgue-Stieltjes integral [10, Proposition 5.4]. Here we introduce its definition as part of the mentioned result, which was originally stated for real-valued functions but easily extends to complex-valued ones.

Theorem 2.12

Let $F:[0,T]\to\mathbb{C}$. The following conditions are equivalent:

  1. The function F is g-absolutely continuous on $[0,T]$, according to the following definition: for every $\varepsilon>0$, there exists $\delta>0$ such that, for every family $\{(a_n,b_n)\}_{n=1}^{m}$ of pairwise disjoint open subintervals of $[0,T]$,

    $$\sum_{n=1}^{m} (g(b_n)-g(a_n)) < \delta \implies \sum_{n=1}^{m} |F(b_n)-F(a_n)| < \varepsilon.$$

  2. The function F satisfies the following conditions:

    1. there exists $F'_g(t)$ for g-a.a. $t\in[0,T)$;

    2. $F'_g\in \mathrm{L}^1_g([0,T);\mathbb{C})$;

    3. for each $t\in[0,T]$,

      (2.5) $$F(t) = F(0) + \int_{[0,t)} F'_g(s)\,\mathrm{d}\mu_g(s).$$

We denote by $\mathrm{AC}_g([0,T];\mathbb{C})$ the set of g-absolutely continuous functions on $[0,T]$.
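When the derivator is purely atomic, the integral in (2.5) is just a sum over jump points, so the fundamental theorem above can be illustrated directly. The following sketch uses toy jump data of our own choosing, with $F'_g = 3$ at every atom:

```python
# Toy illustration of (2.5): g is constant except for jumps at the points of
# D_g = {0.5, 1.0, 1.5}, so mu_g is atomic and the integral is a finite sum.

jumps = {0.5: 1.0, 1.0: 2.0, 1.5: 0.5}    # s -> Delta g(s)

def F(t):
    # F(0) = 1 and F jumps by 3 * Delta g(s) at each s in D_g, i.e. F'_g = 3
    return 1.0 + 3.0 * sum(dg for s, dg in jumps.items() if s < t)

def rhs_of_2_5(t):
    # F(0) + integral over [0,t) of F'_g dmu_g = F(0) + sum_{s < t} 3 * Delta g(s)
    return F(0.0) + sum(3.0 * dg for s, dg in jumps.items() if s < t)

for t in (0.3, 0.7, 1.2, 2.0):
    print(t, F(t), rhs_of_2_5(t))   # the two columns agree
```

Each summand $F'_g(s)\,\Delta g(s)$ telescopes to the jump $F(s^+)-F(s)$ of $F$ itself, which is exactly why the reconstruction formula holds in this atomic setting.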

The following result is a version of the formula of integration by parts for g -absolutely continuous functions, which is a direct consequence of Proposition 2.5 and Theorem 2.12.

Lemma 2.13

(Integration by parts) Given $w_1,w_2\in \mathrm{AC}_g([0,T];\mathbb{C})$, we have that $w_1w_2\in \mathrm{AC}_g([0,T];\mathbb{C})$ and, furthermore, for each $t\in[0,T]$,

$$w_1(t)w_2(t) - w_1(0)w_2(0) = \int_{[0,t)} (w_1)'_g\,w_2\,\mathrm{d}\mu_g + \int_{[0,t)} w_1\,(w_2)'_g\,\mathrm{d}\mu_g + \int_{[0,t)} (w_1)'_g\,(w_2)'_g\,\Delta g\,\mathrm{d}\mu_g.$$

Proof

First, observe that [4, Proposition 5.4] ensures that $w_1w_2\in \mathrm{AC}_g([0,T];\mathbb{C})$. Now, Theorem 2.12 and Remark 2.11 ensure that

$$(w_1w_2)'_g = w_1(w_2)'_g + (w_1)'_g w_2 + (w_1)'_g(w_2)'_g\,\Delta g^*,\quad g\text{-a.e. in } [0,T),$$

so, thanks to (2.3), we obtain

$$(w_1w_2)'_g = w_1(w_2)'_g + (w_1)'_g w_2 + (w_1)'_g(w_2)'_g\,\Delta g,\quad g\text{-a.e. in } [0,T),$$

from which the integration by parts formula follows using (2.5).□

As pointed out in [4, Proposition 5.5], $\mathrm{AC}_g([0,T];\mathbb{C})\subset \mathrm{BC}_g([0,T];\mathbb{C})$, so every g-absolutely continuous function is g-continuous and, as such, presents the properties introduced before. Note, however, that g-absolutely continuous functions are not, in general, g-differentiable everywhere, which motivates the following definition, see [2, Definitions 3.11 and 3.12].

Definition 2.14

Given $k\in\mathbb{N}$, we define $\mathrm{C}^0_g([0,T];\mathbb{C}) \equiv \mathrm{C}_g([0,T];\mathbb{C})$ and $\mathrm{C}^k_g([0,T];\mathbb{C})$ recursively as

$$\mathrm{C}^k_g([0,T];\mathbb{C}) \equiv \{f\in \mathrm{C}^{k-1}_g([0,T];\mathbb{C}) : (f_g^{(k-1)})'_g \in \mathrm{C}_g([0,T];\mathbb{C})\},$$

where $f_g^{(0)} \equiv f$ and $f_g^{(k)} \equiv (f_g^{(k-1)})'_g$, $k\in\mathbb{N}$. Similarly, given $k\in\mathbb{N}$, we define $\mathrm{BC}^0_g([0,T];\mathbb{C}) \equiv \mathrm{BC}_g([0,T];\mathbb{C})$ and $\mathrm{BC}^k_g([0,T];\mathbb{C})$ recursively as

$$\mathrm{BC}^k_g([0,T];\mathbb{C}) \equiv \{f\in \mathrm{C}^k_g([0,T];\mathbb{C}) : f_g^{(n)}\in \mathrm{BC}_g([0,T];\mathbb{C}),\ n=0,\dots,k\}.$$

We also define $\mathrm{C}^\infty_g([0,T];\mathbb{C}) \equiv \bigcap_{k\in\mathbb{N}} \mathrm{C}^k_g([0,T];\mathbb{C})$ and $\mathrm{BC}^\infty_g([0,T];\mathbb{C}) \equiv \bigcap_{k\in\mathbb{N}} \mathrm{BC}^k_g([0,T];\mathbb{C})$.

One of the most important examples of functions in the space $\mathrm{BC}^\infty_g([0,T];\mathbb{C})$ is the g-exponential map of a constant function. In the following definition, we collect some of the information available in [2,4,12] for the g-exponential map associated with an integrable function.

Definition 2.15

Given a function $p:[0,T]\to\mathbb{C}$, we say that it is regressive if

(2.6) $$1 + p(t)\,\Delta g(t) \neq 0,\quad t\in[0,T]\cap D_g.$$

Given a regressive function $p\in \mathrm{L}^1_g([0,T);\mathbb{C})$, we define the g-exponential map associated with the map $p$ as

(2.7) $$\exp_g(p;t) \equiv \exp\left(\int_{[0,t)} \widetilde{p}(s)\,\mathrm{d}\mu_g(s)\right),\quad t\in[0,T],$$

where, denoting by $\ln(z) \equiv \ln|z| + i\,\mathrm{Arg}(z)$, with $\mathrm{Arg}$ the principal branch of the complex argument,

$$\widetilde{p}(s) \equiv \begin{cases} p(s), & s\in[0,T)\setminus D_g,\\[1mm] \dfrac{\ln(1+p(s)\,\Delta g(s))}{\Delta g(s)}, & s\in[0,T)\cap D_g. \end{cases}$$
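For a concrete derivator the definition becomes quite explicit: the absolutely continuous part of $g$ contributes an ordinary exponential, while each atom $s$ contributes a factor $\exp(\ln(1+p(s)\Delta g(s))) = 1+p(s)\Delta g(s)$. A sketch with assumed data ($g(t)=t$ plus a unit jump at $t_0=0.5$, constant regressive $p=\lambda$):

```python
import cmath

# Sketch (toy derivator): g(t) = t plus a unit jump at t0 = 0.5, p constant.
# Splitting (2.7) into the absolutely continuous part and the atom gives
#   exp_g(lam; t) = e^{lam t}                          for t <= t0,
#   exp_g(lam; t) = e^{lam t} (1 + lam * Delta g(t0))  for t > t0.

t0, dg0 = 0.5, 1.0

def exp_g(lam, t):
    val = cmath.exp(lam * t)              # contribution of the part dmu_g = ds
    if t > t0:                            # the atom at t0 contributes
        val *= 1.0 + lam * dg0            # exp(ln(1 + lam*Dg)) = 1 + lam*Dg
    return val

lam = 2.0
# across the jump, the solution of v'_g = lam v satisfies
# v(t0+) = (1 + lam * Delta g(t0)) * v(t0)
print(exp_g(lam, 1.0))
```

The multiplicative factor at the atom is exactly the jump relation forced by Remark 2.3: $v(t_0^+) = v(t_0) + \lambda v(t_0)\,\Delta g(t_0)$, which also shows why regressivity (2.6) is needed for $\exp_g$ to avoid vanishing.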

Remark 2.16

The g-exponential map belongs to $\mathrm{AC}_g([0,T];\mathbb{C})$, [2, Theorem 4.2], and, furthermore, it is the only function in that space satisfying

(2.8) $$v'_g(t) = p(t)\,v(t),\quad g\text{-a.a. } t\in[0,T],\qquad v(0)=1.$$

In particular, if $p\in \mathrm{BC}_g([0,T];\mathbb{C})$, then $\exp_g(p;\cdot)\in \mathrm{BC}^1_g([0,T];\mathbb{C})$. Furthermore, if $p(t) = \lambda\in\mathbb{C}$, $t\in[0,T]$, then $\exp_g(p;\cdot)\in \mathrm{BC}^\infty_g([0,T];\mathbb{C})$.

In order to make this work more self-contained, we highlight below some of the properties of the g -exponential function whose proof can be found in [2, Proposition 4.6].

Proposition 2.17

Let $p,q\in \mathrm{L}^1_g([0,T);\mathbb{C})$ be two regressive functions. Then,

  • For each $t\in[0,T]$,

    $$\exp_g(p;t)\,\exp_g(q;t) = \exp_g(p+q+pq\,\Delta g;\,t).$$

    In particular,

    $$\exp_g(p;t)^2 = \exp_g(2p+p^2\,\Delta g;\,t),\quad t\in[0,T].$$

  • For each $t\in[0,T]$,

    $$\exp_g(p;t)^{-1} = \exp_g\left(\frac{-p}{1+p\,\Delta g};\,t\right).$$

    As a consequence,

    $$\exp_g(p;t)\,\exp_g(q;t)^{-1} = \exp_g\left(\frac{p-q}{1+q\,\Delta g};\,t\right),\quad t\in[0,T].$$
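The first identity can be tested numerically. The sketch below (assumed toy derivator: $g(t)=t$ plus a unit jump at $t_0=0.5$) checks $\exp_g(p;t)\exp_g(q;t) = \exp_g(p+q+pq\,\Delta g;t)$ for constant $p,q$; note that the combined exponent is a non-constant function of $s$, since it involves $\Delta g(s)$.

```python
import math

# Numerical check (toy setup) of the product identity of Proposition 2.17.

t0, dg0 = 0.5, 1.0

def delta_g(s):
    return dg0 if s == t0 else 0.0

def exp_g(p, t, n=200000):
    """exp_g(p; t) for p a function of s, following (2.7)."""
    h = t / n
    integral = sum(p(i * h) * h for i in range(n))   # ds-part (Riemann sum)
    val = math.exp(integral)
    if t > t0:
        val *= 1.0 + p(t0) * delta_g(t0)             # atom at t0
    return val

a, b, t = 1.0, 2.0, 1.0
lhs = exp_g(lambda s: a, t) * exp_g(lambda s: b, t)
combined = lambda s: a + b + a * b * delta_g(s)
rhs = exp_g(combined, t)
print(lhs, rhs)
```

At the atom the factors multiply as $(1+a\,\Delta g)(1+b\,\Delta g) = 1+(a+b+ab\,\Delta g)\,\Delta g$, which is precisely where the correction term $pq\,\Delta g$ comes from.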

One of the main problems that we will encounter when trying to apply the method of variation of parameters in the context of Stieltjes calculus is the fact that the product of two functions in the space $\mathrm{BC}^1_g([0,T];\mathbb{C})$ is not necessarily a function in the space $\mathrm{BC}^1_g([0,T];\mathbb{C})$, as a consequence of the expression of the product rule given by Proposition 2.5 (see [2, Remark 3.16] and, for a detailed discussion, [3]). Nevertheless, for a given function $f\in \mathrm{BC}^1_g([0,T];\mathbb{C})$, it is possible, in some cases, to find another function $h\in \mathrm{BC}_g([0,T];\mathbb{C})$, g-differentiable on $[0,T]$, such that the product $fh$ lies in $\mathrm{BC}^1_g([0,T];\mathbb{C})$. Indeed, under these assumptions, Remark 2.11 ensures that

$$(fh)'_g = f'_g h + f h'_g + f'_g h'_g\,\Delta g^* = f'_g h + h'_g\,[\,f + f'_g\,\Delta g^*\,].$$

Therefore, if $f + f'_g\,\Delta g^* \neq 0$ on $[0,T]$, any function $h$ such that

(2.9) $$h'_g = \frac{\eta}{f + f'_g\,\Delta g^*},$$

with $\eta\in \mathrm{BC}_g([0,T];\mathbb{C})$, would yield that $(fh)'_g = f'_g h + \eta \in \mathrm{BC}_g([0,T];\mathbb{C})$, so that $fh$ would belong to $\mathrm{BC}^1_g([0,T];\mathbb{C})$, as we wanted. Note that, in general, we still would not have that $h\in \mathrm{BC}^1_g([0,T];\mathbb{C})$, as $h'_g$ might not belong to $\mathrm{BC}_g([0,T];\mathbb{C})$. This reasoning is the idea behind Corollary 2.20, which we present as a consequence of the following technical result.

Lemma 2.18

Let $f:[0,T]\to\mathbb{C}$ be a bounded function which is continuous (in the usual sense) at every $t\in[0,T]\setminus D_g$. Then, the map $\psi:[0,T]\to\mathbb{C}$ defined as

$$\psi(t) = \int_{[0,t)} f(s)\,\mathrm{d}\mu_g(s),\quad t\in[0,T],$$

is well defined, belongs to $\mathrm{AC}_g([0,T];\mathbb{C})$, and

(2.10) $$\psi'_g(t) = f^*(t),\quad \textit{for all } t\in[0,T].$$

Proof

First, observe that $f\in \mathrm{L}^1_g([0,T);\mathbb{C})$. Indeed, since $f$ is continuous on $[0,T]\setminus D_g$ and $D_g$ is a countable set, we have that $f$ is Borel measurable, thus g-measurable. Now, the g-integrability is clear since $f$ is bounded. Thus, Theorem 2.12 ensures that $\psi\in \mathrm{AC}_g([0,T];\mathbb{C})$ and $\psi'_g(t) = f(t)$ for g-a.a. $t\in[0,T]$ and, in particular, for every $t\in[0,T)\cap D_g$. Hence, we need to show that equation (2.10) holds on $[0,T]\setminus D_g$.

First, let $t\notin C_g\cup D_g$. In this case, we reason similarly to [2, Lemma 3.14]; namely, computing the limit

$$\lim_{s\to t} \frac{\psi(s)-\psi(t)}{g(s)-g(t)}$$

on the domain of the function, i.e., $A_t \equiv \{s\in[0,T] : g(s)\neq g(t)\}$. Let $\varepsilon>0$. Since $f$ is continuous on $[0,T]\setminus D_g$, there exists $\delta>0$ such that $|f(u)-f(t)|<\varepsilon$ if $|u-t|<\delta$. Now, for $s\in[0,T]\cap A_t$ such that $|t-s|<\delta$, denoting $[t,s) \equiv [\min\{t,s\},\max\{t,s\})$, we have that

$$\left|\frac{\psi(s)-\psi(t)}{g(s)-g(t)} - f(t)\right| = \left|\frac{\operatorname{sgn}(s-t)}{g(s)-g(t)}\int_{[t,s)} f(u)\,\mathrm{d}\mu_g(u) - f(t)\right| \le \frac{1}{|g(s)-g(t)|}\int_{[t,s)} |f(u)-f(t)|\,\mathrm{d}\mu_g(u) \le \varepsilon.$$

Thus, since $t\notin C_g\cup D_g$, it follows that $t^* = t$, so

$$\lim_{s\to t} \frac{\psi(s)-\psi(t)}{g(s)-g(t)} = f(t) = f(t^*),$$

as we wanted.

Finally, if $t\in C_g$, then $t^*\in D_g\cup N_g^+$, and we already know that $\psi$ is g-differentiable at $t^*$ with $\psi'_g(t^*) = f^*(t^*) = f^*(t)$. Hence, by the definition of the g-derivative at a point of $C_g$, we have that $\psi'_g(t) = \psi'_g(t^*) = f(t^*)$, which completes the proof of the result.□

Remark 2.19

Observe that in Lemma 2.18, even though we can only ensure that $\psi\in \mathrm{AC}_g([0,T];\mathbb{C})$, we can still guarantee that the function $\psi$ is g-differentiable on the whole of $[0,T]$.

As anticipated, we have the following corollary which, for a given function, provides an explicit expression for another function such that their product is an element of $\mathrm{BC}^1_g([0,T];\mathbb{C})$.

Corollary 2.20

Let $\eta\in \mathrm{BC}_g([0,T];\mathbb{C})$ and $f\in \mathrm{BC}^1_g([0,T];\mathbb{C})$ be such that the function

$$\widetilde{f}(t) \equiv \frac{1}{f(t) + f'_g(t)\,\Delta g(t)},\quad t\in[0,T],$$

is well defined and bounded. Then, the map $\varphi:[0,T]\to\mathbb{C}$ defined as

$$\varphi(t) \equiv \int_{[0,t)} \widetilde{f}(s)\,\eta(s)\,\mathrm{d}\mu_g(s),\quad t\in[0,T],$$

is well defined, belongs to $\mathrm{AC}_g([0,T];\mathbb{C})$, and $\varphi'_g(t) = \widetilde{f}(t^*)\,\eta(t)$, $t\in[0,T]$. Furthermore,

$$(f\varphi)'_g(t) = f'_g(t)\,\varphi(t) + \eta(t),\quad t\in[0,T],$$

and, as a consequence, $f\varphi\in \mathrm{BC}^1_g([0,T];\mathbb{C})$.

Proof

Observe that it is enough to show that $\widetilde{f}\,\eta$ satisfies the conditions of Lemma 2.18, namely, that it is bounded and continuous at every $t\in[0,T]\setminus D_g$. Note that the boundedness follows directly from the hypotheses.

Let $t\in[0,T]\setminus D_g$ and let $\{t_n\}_{n\in\mathbb{N}}\subset[0,T]$ be a sequence such that $t_n\to t$. In that case, Proposition 2.1 ensures that $\Delta g(t_n)\to 0$, so

$$\lim_{n\to\infty} \widetilde{f}(t_n) = \lim_{n\to\infty} \frac{1}{f(t_n) + f'_g(t_n)\,\Delta g(t_n)} = \frac{1}{f(t)} = \widetilde{f}(t),$$

where we have used the fact that $f$ is continuous at $t$, since $f\in \mathrm{BC}^1_g([0,T];\mathbb{C})$ and $t\notin D_g$, see Proposition 2.10. This shows that $\widetilde{f}$ is continuous at $t$. Now Proposition 2.10 guarantees, once again, that $\eta$ is continuous at $t$, so $\widetilde{f}\,\eta$ is continuous at $t$.

Hence, we can apply Lemma 2.18 to see that $\varphi$ is well defined, belongs to $\mathrm{AC}_g([0,T];\mathbb{C})$, and

$$\varphi'_g(t) = \widetilde{f}(t^*)\,\eta(t) = \frac{\eta(t)}{f(t) + f'_g(t)\,\Delta g(t^*)},\quad t\in[0,T].$$

Finally, Proposition 2.5 ensures that, for each $t\in[0,T]$,

$$(f\varphi)'_g(t) = f'_g(t)\,\varphi(t) + f(t)\,\varphi'_g(t) + f'_g(t)\,\varphi'_g(t)\,\Delta g(t^*) = f'_g(t)\,\varphi(t) + \varphi'_g(t)\,\big(f(t) + f'_g(t)\,\Delta g(t^*)\big) = f'_g(t)\,\varphi(t) + \eta(t),$$

which completes the proof.□

Remark 2.21

In Corollary 2.20, we are assuming that $\widetilde{f}$ is well defined and bounded, which is a technical condition required for all the functions occurring in the proof to be well defined and for the conditions of Lemma 2.18 to be satisfied. However, if $f$ is bounded away from zero, those conditions can be dropped, giving a condition which might be easier to check. Indeed, suppose $f$ is bounded away from zero. Then, there exists $M>0$ such that $|f(t)| > M$, $t\in[0,T]$. Now, since $f\in \mathrm{BC}^1_g([0,T];\mathbb{C})$, we have as a direct consequence of Remark 2.3 and Proposition 2.10 that $f(t) + f'_g(t)\,\Delta g(t) = f(t^+)$, $t\in[0,T]$, so

$$|f(t) + f'_g(t)\,\Delta g(t)| = |f(t^+)| \ge M > 0,\quad t\in[0,T],$$

which guarantees that $\widetilde{f}$ is well defined and bounded on $[0,T]$.

Corollary 2.22

Let $\eta\in \mathrm{BC}_g([0,T];\mathbb{C})$ and $\lambda\in\mathbb{C}$ be such that $1+\lambda\,\Delta g(t)\neq 0$, $t\in[0,T]\cap D_g$. Then, the map $\varphi:[0,T]\to\mathbb{C}$ defined as

$$\varphi(t) \equiv \int_{[0,t)} \frac{\eta(s)}{1+\lambda\,\Delta g(s)}\,\mathrm{d}\mu_g(s),\quad t\in[0,T],$$

is well defined, belongs to $\mathrm{AC}_g([0,T];\mathbb{C})$, and

$$\varphi'_g(t) = \frac{\eta(t)}{1+\lambda\,\Delta g^*(t)},\quad t\in[0,T].$$

Proof

Observe that, in order to obtain the result, it is enough to show that the map

$$\theta(t) = \frac{\eta(t)}{1+\lambda\,\Delta g(t)},\quad t\in[0,T],$$

satisfies the hypotheses of Lemma 2.18.

We start by proving that $\theta$ is bounded. To this end, define

$$A = \left\{t\in[0,T) : |1+\lambda\,\Delta g(t)| \ge \tfrac{1}{2}\right\},\qquad B = \left\{t\in[0,T) : |1+\lambda\,\Delta g(t)| < \tfrac{1}{2}\right\}.$$

Observe that, necessarily, $B\subset[0,T)\cap D_g$. We claim that $B$ has finite cardinality. Indeed, since

$$0 \le \sum_{t\in[0,T)\cap D_g} \Delta g(t) = \int_{[0,T)\cap D_g} \mathrm{d}\mu_g(u) \le \int_{[0,T)} \mathrm{d}\mu_g(u) = \mu_g([0,T)) = g(T)-g(0) < \infty,$$

the set $X \equiv \{t\in[0,T)\cap D_g : |\lambda|\,\Delta g(t) > \tfrac{1}{2}\}$ is finite. Now, given an element $t\in B$, we have that, since $1 - |\lambda|\,\Delta g(t) \le |1+\lambda\,\Delta g(t)| < \tfrac{1}{2}$, then $|\lambda|\,\Delta g(t) > \tfrac{1}{2}$, so $t\in X$; hence $B$ is finite. Therefore,

$$\|\theta\|_0 \le \sup_{t\in A} |\theta(t)| + \sup_{t\in B} |\theta(t)| \le \|\eta\|_0\left(2 + \max\{|1+\lambda\,\Delta g(t)|^{-1} : t\in B\}\right) < \infty.$$

Now, the proof of the continuity of $\theta$ on $[0,T]\setminus D_g$ is analogous to the proof of the continuity of $\widetilde{f}$ in Corollary 2.20, so we omit it.

Therefore, we have that the hypotheses of Lemma 2.18 are satisfied, so $\varphi\in \mathrm{AC}_g([0,T];\mathbb{C})$ and

$$\varphi'_g(t) = \frac{\eta(t^*)}{1+\lambda\,\Delta g(t^*)} = \frac{\eta(t)}{1+\lambda\,\Delta g(t^*)},\quad t\in[0,T],$$

where the last equality is a consequence of the g-continuity of $\eta$, see Remark 2.11.□

Finally, we include a result that will be of interest for the work ahead and which can be directly obtained from Corollary 2.22.

Corollary 2.23

Let $\lambda\in\mathbb{C}$ be such that $1+\lambda\,\Delta g(t)\neq 0$, $t\in[0,T]\cap D_g$, and define

$$\varphi(t) = \int_{[0,t)} \frac{1}{1+\lambda\,\Delta g(s)}\,\mathrm{d}\mu_g(s),\quad t\in[0,T].$$

Then, the map $v(t) = \varphi(t)\exp_g(\lambda;t)$, $t\in[0,T]$, belongs to $\mathrm{BC}^\infty_g([0,T];\mathbb{C})$ and, moreover,

(2.11) $$v_g^{(n)}(t) = n\lambda^{n-1}\exp_g(\lambda;t) + \lambda^n v(t),\quad t\in[0,T],\ n\in\mathbb{N}.$$

Proof

First, observe that Corollary 2.22 and [2, Theorem 4.2] are enough to guarantee that $v\in \mathrm{BC}_g([0,T];\mathbb{C})$. Hence, it suffices to prove that (2.11) holds as, in that case, it follows that $v_g^{(n)}\in \mathrm{BC}_g([0,T];\mathbb{C})$, as it can be expressed in terms of $v$ and the g-exponential map. We prove (2.11) by induction on $n\in\mathbb{N}$.

Let $t\in[0,T]$. Since $\varphi$ and $\exp_g(\lambda;\cdot)$ are g-differentiable at $t$, Proposition 2.5, Corollary 2.22, and Remark 2.11 yield

$$v'_g(t) = \lambda\,\varphi(t^*)\exp_g(\lambda;t) + \varphi'_g(t)\exp_g(\lambda;t^*) + \lambda\,\varphi'_g(t)\exp_g(\lambda;t)\,\Delta g(t^*) = \lambda\,\varphi(t)\exp_g(\lambda;t) + \exp_g(\lambda;t),$$

which proves the case $n=1$. Assume now that (2.11) holds for some $n\in\mathbb{N}$ and let us show that it holds for $n+1$. Observe that this means that $v_g^{(n)}\in \mathrm{BC}_g([0,T];\mathbb{C})$. Let $t\in[0,T]$. In that case, since $v_g^{(n)}$ and $v$ are g-differentiable at $t$, Proposition 2.5 and Remark 2.11 ensure that

$$v_g^{(n+1)}(t) = n\lambda^{n-1}\big(\exp_g(\lambda;\cdot)\big)'_g(t) + \lambda^n v'_g(t) = n\lambda^n \exp_g(\lambda;t) + \lambda^n\big(\lambda\,\varphi(t)\exp_g(\lambda;t) + \exp_g(\lambda;t)\big) = (n+1)\lambda^n \exp_g(\lambda;t) + \lambda^{n+1} v(t),$$

which completes the proof.□
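The case $n=1$ of (2.11) can be verified numerically. The sketch below reuses the assumed toy derivator of the previous sketches ($g(t)=t$ plus a unit jump at $t_0=0.5$, $\lambda=2$) and checks $v'_g = \exp_g(\lambda;\cdot) + \lambda v$ at the jump point, where the derivative is the difference quotient of Remark 2.3.

```python
import math

# Numerical check (toy derivator) of the case n = 1 of (2.11):
#   v'_g(t) = exp_g(lam; t) + lam * v(t),   v = phi * exp_g(lam; .)

t0, dg0, lam = 0.5, 1.0, 2.0

def g(t):
    return t + (dg0 if t > t0 else 0.0)

def exp_g(t):
    return math.exp(lam * t) * ((1.0 + lam * dg0) if t > t0 else 1.0)

def phi(t):
    # integral of 1/(1 + lam * Delta g) over [0, t): the ds-part gives t and
    # the atom at t0 gives dg0 / (1 + lam * dg0)
    return t + (dg0 / (1.0 + lam * dg0) if t > t0 else 0.0)

def v(t):
    return phi(t) * exp_g(t)

eps = 1e-9
lhs = (v(t0 + eps) - v(t0)) / (g(t0 + eps) - g(t0))   # v'_g(t0), Remark 2.3
rhs = exp_g(t0) + lam * v(t0)
print(lhs, rhs)   # both close to 2e
```

The factor $1/(1+\lambda\,\Delta g)$ in $\varphi$ is exactly what compensates the extra product-rule term at the atom, so that no spurious contribution appears in $v'_g$.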

3 The g -Wronskian and second-order linear Stieltjes differential equations

In this section, we will define the concepts of g-Wronskian and simplified g-Wronskian, and we will study their applications to second-order linear Stieltjes differential equations. Observe that our definition of simplified g-Wronskian matches the definition of Wronskian in [6, Definition 5.2.1]. Nevertheless, our definition is more general as a consequence of having a broader definition of Stieltjes derivative that includes the points of $C_g$.

Definition 3.1

(g-Wronskian and simplified g-Wronskian) Given $y_1,y_2\in \mathrm{BC}^2_g([0,T];\mathbb{C})$, we define the g-Wronskian of $y_1$ and $y_2$ as the map $W_g(y_1,y_2):[0,T]\to\mathbb{C}$ given by the expression

(3.1) $$W_g(y_1,y_2)(t) = \begin{vmatrix} y_1(t) & y_2(t) & (\Delta g(t))^2\\ (y_1)'_g(t) & (y_2)'_g(t) & -\Delta g(t)\\ (y_1)''_g(t) & (y_2)''_g(t) & 1 \end{vmatrix},\quad t\in[0,T].$$

Explicitly, with the notation $\Delta g^2(t) \equiv (\Delta g(t))^2$,

(3.2) $$W_g(y_1,y_2) = y_1(y_2)'_g - y_2(y_1)'_g + \big[\,y_1(y_2)''_g - y_2(y_1)''_g\,\big]\Delta g + \big[\,(y_1)'_g(y_2)''_g - (y_2)'_g(y_1)''_g\,\big]\Delta g^2.$$

Similarly, given $y_1,y_2\in \mathrm{BC}^1_g([0,T];\mathbb{C})$, we define the simplified g-Wronskian of $y_1$ and $y_2$ as the function $\widetilde{W}_g(y_1,y_2):[0,T]\to\mathbb{C}$ given by the expression

$$\widetilde{W}_g(y_1,y_2)(t) = \begin{vmatrix} y_1(t) & y_2(t)\\ (y_1)'_g(t) & (y_2)'_g(t) \end{vmatrix},\quad t\in[0,T].$$

Explicitly,

$$\widetilde{W}_g(y_1,y_2) = y_1(y_2)'_g - y_2(y_1)'_g.$$

Remark 3.2

Observe that, when condition (2.4) is satisfied, and noting that $(\Delta g^{2*})'_g = -\Delta g^*$ and $(\Delta g^{2*})''_g = -(\Delta g^*)'_g = \chi^*_{D_g}$, we can rewrite $W_g(y_1,y_2)^*$ as

$$W_g(y_1,y_2)^* = \begin{vmatrix} y_1 & y_2 & \Delta g^{2*}\\ (y_1)'_g & (y_2)'_g & (\Delta g^{2*})'_g\\ (y_1)''_g & (y_2)''_g & 1 \end{vmatrix}.$$

We can further rewrite it as

$$W_g(y_1,y_2)^* = (1-\chi^*_{D_g}) \begin{vmatrix} y_1 & y_2 & 0\\ (y_1)'_g & (y_2)'_g & 0\\ (y_1)''_g & (y_2)''_g & 1 \end{vmatrix} + \chi^*_{D_g} \begin{vmatrix} y_1 & y_2 & \Delta g^{2*}\\ (y_1)'_g & (y_2)'_g & (\Delta g^{2*})'_g\\ (y_1)''_g & (y_2)''_g & (\Delta g^{2*})''_g \end{vmatrix},$$

or, equivalently,

$$W_g(y_1,y_2)^* = \begin{vmatrix} y_1 & y_2 & \Delta g^{2*} & 0\\ (y_1)'_g & (y_2)'_g & (\Delta g^{2*})'_g & 0\\ (y_1)''_g & (y_2)''_g & (\Delta g^{2*})''_g & 1\\ 0 & 0 & -(1-\chi^*_{D_g}) & \chi^*_{D_g} \end{vmatrix}.$$

Remark 3.3

Observe that, given the order of derivation necessary in each of the definitions, we require different regularity of the functions in the definitions of the g-Wronskian and the simplified g-Wronskian. Furthermore, we also need to note that $W_g(y_1,y_2)$ does not belong, in general, to the space $\mathrm{BC}_g([0,T];\mathbb{C})$. However, thanks to the fact that $\mathrm{BC}^1_g([0,T];\mathbb{C})\subset \mathrm{AC}_g([0,T];\mathbb{C})$ and [4, Proposition 5.4], we have that $\widetilde{W}_g(y_1,y_2)\in \mathrm{AC}_g([0,T];\mathbb{C})\subset \mathrm{BC}_g([0,T];\mathbb{C})$.

It is easy to see that both the g-Wronskian and the simplified g-Wronskian yield the usual Wronskian of two functions when we consider $g=\operatorname{Id}$, that is, when the Stieltjes derivative coincides with the usual derivative. This can also be noted if we rewrite the expression (3.2) for $W_g(y_1,y_2)$ as

$$W_g(y_1,y_2) = \big(y_1 + (y_1)'_g\,\Delta g\big)\big((y_2)'_g + (y_2)''_g\,\Delta g\big) - \big(y_2 + (y_2)'_g\,\Delta g\big)\big((y_1)'_g + (y_1)''_g\,\Delta g\big) = \begin{vmatrix} y_1 + (y_1)'_g\,\Delta g & y_2 + (y_2)'_g\,\Delta g\\ (y_1)'_g + (y_1)''_g\,\Delta g & (y_2)'_g + (y_2)''_g\,\Delta g \end{vmatrix}.$$

Bearing in mind Remark 2.3, we have that $f'_g(t)\,\Delta g(t) = f(t^+)-f(t)$ for $t\in D_g$, which means

$$W_g(y_1,y_2)(t) = \begin{cases} y_1(t)(y_2)'_g(t) - y_2(t)(y_1)'_g(t), & t\in[0,T]\setminus D_g,\\[1mm] y_1(t^+)(y_2)'_g(t^+) - y_2(t^+)(y_1)'_g(t^+), & t\in[0,T]\cap D_g, \end{cases}$$

so, since $y_1,y_2\in \mathrm{BC}^2_g([0,T];\mathbb{C})$, Proposition 2.10 allows us to rewrite $W_g(y_1,y_2)$ simply as

(3.3) $$W_g(y_1,y_2)(t) = y_1(t^+)(y_2)'_g(t^+) - y_2(t^+)(y_1)'_g(t^+),\quad t\in[0,T],$$

which shows that $W_g(y_1,y_2)(t) = \widetilde{W}_g(y_1,y_2)(t^+)$ for $t\in[0,T]$. Furthermore, equation (3.3) leads to the following result.

Lemma 3.4

Given y 1 , y 2 BC g 2 ( [ 0 , T ] , C ) ,

$W_g(y_1,y_2)(t^+) = W_g(y_1,y_2)(t), \quad t\in[0,T).$

As a consequence, $(W_g(y_1,y_2))'_g(t) = 0$ for all $t\in[0,T)\cap D_g$.

Proof

Taking into account equation (3.3), it is enough to check that

$\lim_{s\to t^+} y_k(s^+) = y_k(t^+), \qquad \lim_{s\to t^+} (y_k)'_g(s^+) = (y_k)'_g(t^+), \quad k = 1, 2.$

This follows directly from [14, Corollary 4.1.9] since the maps $y_k$, $(y_k)'_g$ are regulated as a direct consequence of Remark 2.3 and Proposition 2.10.□

Remark 3.5

Lemma 3.4 ensures that W g ( y 1 , y 2 ) is continuous from the right at every t [ 0 , T ) . However, in general, we cannot ensure continuity at such points. Indeed, consider the maps

$g(t) = \begin{cases} t, & t \le 0, \\ t+1, & t > 0, \end{cases} \qquad y_1 = g|_{[-1,1]}, \qquad y_2(t) = \begin{cases} e^{t+1}, & -1 \le t \le 0, \\ 2\,e^{t+1}, & 0 < t \le 1. \end{cases}$

It is possible to check that $y_1, y_2 \in \mathcal{BC}^2_g([-1,1],\mathbb{C})$ and

$(y_1)'_g(t) = 1, \qquad (y_1)''_g(t) = 0, \qquad (y_2)'_g(t) = (y_2)''_g(t) = y_2(t), \qquad t\in[-1,1].$

Therefore, we can obtain the explicit expression of $W_g(y_1,y_2)$ from (3.1):

$W_g(y_1,y_2)(t) = \begin{cases} e^{t+1}\,(t-1), & -1 \le t < 0, \\ 0, & t = 0, \\ 2\,t\,e^{t+1}, & 0 < t \le 1. \end{cases}$

In this case, we have that $W_g(y_1,y_2)(0^+) = W_g(y_1,y_2)(0) = 0 \neq -e = W_g(y_1,y_2)(0^-)$, which shows that $W_g(y_1,y_2)$ need not be continuous at the points of $[-1,1]\cap D_g$. Furthermore, this also shows that $W_g(y_1,y_2)$ might not be g-continuous since it is not left-continuous at 0, see Proposition 2.10.
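The one-sided limits in this example can be checked numerically from the closed-form expression above. The following is a minimal sketch (the function `W` simply encodes the stated piecewise formula; it is not a general Wronskian routine):

```python
import math

# Closed-form g-Wronskian from the example above (the derivator g jumps at t = 0).
def W(t):
    if t < 0:
        return math.exp(t + 1) * (t - 1)
    if t == 0:
        return 0.0
    return 2 * t * math.exp(t + 1)

# Right limit at 0 agrees with W(0) = 0, but the left limit is -e != 0.
right_limit = W(1e-9)
left_limit = W(-1e-9)
```

Evaluating near 0 confirms right-continuity but not left-continuity at the jump point of g.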

Similar to the usual setting, our definition of Wronskian functions allows us to obtain a sufficient condition for two maps to be linearly independent in the corresponding space of functions, as presented in the next result.

Lemma 3.6

Let y 1 , y 2 : [ 0 , T ] C . Then,

  • If $y_1, y_2 \in \mathcal{BC}^1_g([0,T];\mathbb{C})$ and there exists $t\in[0,T]$ such that $\widetilde{W}_g(y_1,y_2)(t) \neq 0$, then $y_1$ and $y_2$ are linearly independent.

  • If $y_1, y_2 \in \mathcal{BC}^2_g([0,T];\mathbb{C})$ and there exists $t\in[0,T]$ such that $W_g(y_1,y_2)(t) \neq 0$, then $y_1$ and $y_2$ are linearly independent.

Proof

First, suppose $y_1, y_2 \in \mathcal{BC}^1_g([0,T];\mathbb{C})$ and there exists $t_0\in[0,T]$ such that $\widetilde{W}_g(y_1,y_2)(t_0) \neq 0$. Reasoning by contradiction, assume that $y_1$ and $y_2$ are linearly dependent. In that case, by the linearity of the g-derivative, the two columns of the determinant that defines $\widetilde{W}_g(y_1,y_2)(t)$ must be linearly dependent for every $t\in[0,T]$, which implies that $\widetilde{W}_g(y_1,y_2)(t) = 0$ for all $t\in[0,T]$, and this contradicts the hypotheses.

Now, the proof for the case where $y_1, y_2 \in \mathcal{BC}^2_g([0,T];\mathbb{C})$ and there exists $t_0\in[0,T]$ such that $W_g(y_1,y_2)(t_0) \neq 0$ is analogous and we omit it.□

The next result provides a partial converse to Lemma 3.6 for functions in BC g 1 ( [ 0 , T ] ; C ) .

Lemma 3.7

Let $y_1, y_2 \in \mathcal{BC}^1_g([0,T];\mathbb{C})$ be such that $y_1(0)\,y_2(0) \neq 0$, $y_1(t)\,y_2(t) \neq 0$ for g-a.a. $t\in[0,T)$, and $(y_1)'_g/y_1 \in \mathcal{L}^1_g([0,T),\mathbb{C})$. If $\widetilde{W}_g(y_1,y_2)(t) = 0$ for g-a.a. $t\in[0,T)$, then $y_1$ and $y_2$ are linearly dependent.

Proof

From the hypotheses, we know that there exists a g-measurable set, $N \subset [0,T)$, such that $\mu_g(N) = 0$ and $y_1(t)\,y_2(t) \neq 0$ for all $t\in[0,T)\setminus N$. Observe that, since $\widetilde{W}_g(y_1,y_2)(t) = 0$ for g-a.a. $t\in[0,T)$, we must have that

$\frac{(y_1)'_g(t)}{y_1(t)} = \frac{(y_2)'_g(t)}{y_2(t)}, \quad t\in[0,T)\setminus N.$

Define

$f: t\in[0,T) \longmapsto f(t) = \begin{cases} \dfrac{(y_1)'_g(t)}{y_1(t)}, & t\in[0,T)\setminus N, \\ 0, & t\in N. \end{cases}$

Note that $f$ coincides with $(y_1)'_g/y_1$ on $[0,T)\setminus N$. Now, since $N$ is g-measurable and $\mu_g(N) = 0$, it follows from the hypotheses that $f \in \mathcal{L}^1_g([0,T),\mathbb{C})$. Observe that $y_1/y_1(0)$ and $y_2/y_2(0)$ belong to $\mathcal{BC}^1_g([0,T];\mathbb{C}) \subset \mathcal{AC}_g([0,T];\mathbb{C})$ and are solutions of the differential problem

$u'_g(t) = f(t)\,u(t), \quad \text{g-a.a. } t\in[0,T), \qquad u(0) = 1,$

which has a unique solution, cf. [12, Theorem 4.6]. Therefore, $y_2(0)\,y_1 - y_1(0)\,y_2 = 0$, which means that $y_1$ and $y_2$ are linearly dependent.□

Remark 3.8

A result analogous to Lemma 3.7 can be stated for the case where $y_1, y_2 \in \mathcal{BC}^2_g([0,T];\mathbb{C})$ and $W_g(y_1,y_2)(t) = 0$, $t\in[0,T]$. Indeed, under these conditions, observing that $\widetilde{W}_g(y_1,y_2)$ is the principal minor of order two of the determinant defining $W_g(y_1,y_2)$, we have that, on $[0,T]$,

$\widetilde{W}_g(y_1,y_2) = [\,y_2\,(y_1)''_g - y_1\,(y_2)''_g\,]\,\Delta g + [\,(y_2)'_g\,(y_1)''_g - (y_1)'_g\,(y_2)''_g\,]\,\Delta g^2,$

which, in particular, yields that $\widetilde{W}_g(y_1,y_2)(t) = 0$ for $t\in[0,T]\setminus D_g$. On the other hand, if $t\in[0,T]\cap D_g$, since $\widetilde{W}_g(y_1,y_2)$ is g-continuous, see Remark 3.3, Proposition 2.10 ensures that

$\widetilde{W}_g(y_1,y_2)(t) = \lim_{s\to t^-} \widetilde{W}_g(y_1,y_2)(s) = \lim_{\substack{s\to t^-\\ s\in[0,T]\setminus D_g}} \widetilde{W}_g(y_1,y_2)(s) = 0,$

so $\widetilde{W}_g(y_1,y_2)(t) = 0$, $t\in[0,T]$.

So far, we have studied the properties of the g -Wronskian as a function. Now, we turn our attention to how this concept relates to Stieltjes differential equations. To that end, let us consider the following second-order linear Stieltjes differential equation with g -continuous coefficients,

$v''_g(t) + P(t)\,v'_g(t) + Q(t)\,v(t) = f(t), \quad t\in[0,T], \qquad (3.4)$
$v(0) = x_0, \qquad v'_g(0) = v_0, \qquad (3.5)$

where $x_0, v_0 \in \mathbb{C}$, and $f, P, Q \in \mathcal{BC}_g([0,T];\mathbb{C})$. We define the concept of solution in the following terms.

Definition 3.9

A solution of (3.4) is a function v BC g 2 ( [ 0 , T ] ; C ) such that

$v''_g(t) + P(t)\,v'_g(t) + Q(t)\,v(t) = f(t), \quad t\in[0,T].$

If, in addition, $v(0) = x_0$, $v'_g(0) = v_0$, then $v$ is a solution of (3.4)–(3.5).

Remark 3.10

Observe that (3.4)–(3.5) is equivalent to the system

$x'_g(t) = y(t), \quad t\in[0,T], \qquad y'_g(t) = -P(t)\,y(t) - Q(t)\,x(t) + f(t), \quad t\in[0,T], \qquad x(0) = x_0, \quad y(0) = v_0,$

which we know to have a unique solution, cf. [13, Theorem 5.58]. Thus, (3.4)–(3.5) has a unique solution.
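In the classical case g = Id, the first-order system above can be integrated with an explicit Euler-type scheme in which every increment is weighted by the increment of the derivator g. The following is a hedged numerical sketch (the scheme and all names are ours, not part of the article):

```python
import math

def g(t):
    return t  # identity derivator: the Stieltjes derivative is the usual one

def solve_second_order(P, Q, f, x0, v0, T, N):
    """Explicit 'g-Euler' sketch for the system x'_g = y,
    y'_g = -P y - Q x + f equivalent to (3.4)-(3.5)."""
    t, x, y = 0.0, x0, v0
    h = T / N
    for _ in range(N):
        dg = g(t + h) - g(t)  # increment of the derivator over the step
        x, y = x + y * dg, y + (-P(t) * y - Q(t) * x + f(t)) * dg
        t += h
    return x

# Classical sanity check: v'' + v = 0, v(0) = 0, v'(0) = 1 has solution sin t.
approx = solve_second_order(lambda t: 0.0, lambda t: 1.0, lambda t: 0.0,
                            0.0, 1.0, 1.0, 20000)
```

For a derivator with jumps, the same recursion reproduces the jump relation v(t⁺) = v(t) + v'_g(t)Δg(t) automatically, since dg carries the jump.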

Remark 3.11

In general, we cannot ensure that a solution of (3.4)–(3.5) belongs to space BC g ( [ 0 , T ] ; C ) even when we consider P , Q , f BC g ( [ 0 , T ] ; C ) as the product of two functions in BC g ( [ 0 , T ] ; C ) is not necessarily a function in the space BC g ( [ 0 , T ] ; C ) ([2, Remark 3.16] and [3]). However, when P and Q are constants, the regularity of the solution is determined by the regularity of f [2].

In order to study the application of the variation of parameters method to obtain the solution of (3.4)–(3.5), we consider the homogeneous problem

$v''_g(t) + P(t)\,v'_g(t) + Q(t)\,v(t) = 0, \quad t\in[0,T], \qquad (3.6)$
$v(0) = x_0, \qquad v'_g(0) = v_0. \qquad (3.7)$

We have the following lemma whose proof is straightforward from the linearity of the g -derivative.

Lemma 3.12

Let y 1 , y 2 BC g 2 ( [ 0 , T ] ; C ) be two solutions of (3.6) such that

(3.8) $y_1(0)\,(y_2)'_g(0) - y_2(0)\,(y_1)'_g(0) \neq 0.$

Then, the map $v = c_1\,y_1 + c_2\,y_2$ is the unique solution of (3.6)–(3.7), where

$c_1 = \frac{(y_2)'_g(0)\,x_0 - v_0\,y_2(0)}{y_1(0)\,(y_2)'_g(0) - y_2(0)\,(y_1)'_g(0)}, \qquad c_2 = \frac{v_0\,y_1(0) - (y_1)'_g(0)\,x_0}{y_1(0)\,(y_2)'_g(0) - y_2(0)\,(y_1)'_g(0)}.$

In the following lemma – cf. [6, Theorem 5.2.3] – we make the relationship between $W_g(y_1,y_2)$ and $\widetilde{W}_g(y_1,y_2)$ explicit when $y_1$ and $y_2$ are solutions of the homogeneous equation.

Lemma 3.13

Let y 1 , y 2 BC g 2 ( [ 0 , T ] ; C ) be two solutions of (3.6). Then,

(3.9) $W_g(y_1,y_2) = (1 - P\,\Delta g + Q\,\Delta g^2)\,\widetilde{W}_g(y_1,y_2).$

Moreover, if

(3.10) $1 - P(t)\,\Delta g(t) + Q(t)\,\Delta g(t)^2 \neq 0, \quad t\in[0,T]\cap D_g,$

then,

(3.11) $\widetilde{W}_g(y_1,y_2) = \widetilde{W}_g(y_1,y_2)(0)\,\exp_g(-P + Q\,\Delta g;\,\cdot\,).$

In particular, if $\widetilde{W}_g(y_1,y_2)(0) \neq 0$, then $W_g(y_1,y_2)(t) \neq 0$ for every $t\in[0,T]$ and the multiplicative inverse of $W_g(y_1,y_2)$ is given by

(3.12) $W_g(y_1,y_2)^{-1} = [\,\widetilde{W}_g(y_1,y_2)(0)\,(1 - P\,\Delta g + Q\,\Delta g^2)\,]^{-1}\exp_g\!\left(\frac{P - Q\,\Delta g}{1 - P\,\Delta g + Q\,\Delta g^2};\,\cdot\,\right),$

which is a bounded function and continuous on [ 0 , T ] \ D g .

Proof

Given that y 1 , y 2 are solutions of (3.6), standard computations yield that

$y_1\,(y_2)''_g - y_2\,(y_1)''_g = -P\,\widetilde{W}_g(y_1,y_2), \qquad (y_1)'_g\,(y_2)''_g - (y_2)'_g\,(y_1)''_g = Q\,\widetilde{W}_g(y_1,y_2),$

from which it is clear that (3.9) holds.

Assume that (3.10) holds. In order to prove (3.11), it is enough to show that W ˜ g ( y 1 , y 2 ) is the unique solution of

(3.13) $u'_g(t) = (-P(t) + Q(t)\,\Delta g(t))\,u(t), \quad \text{g-a.a. } t\in[0,T), \qquad u(0) = \widetilde{W}_g(y_1,y_2)(0).$

This is because $\widetilde{W}_g(y_1,y_2) \in \mathcal{AC}_g([0,T];\mathbb{C})$ (see Remark 3.3) and, since (3.10) holds, the unique solution in $\mathcal{AC}_g([0,T];\mathbb{C})$ of (3.13) is $\widetilde{W}_g(y_1,y_2)(0)\,\exp_g(-P + Q\,\Delta g;\,t)$.

Clearly, W ˜ g ( y 1 , y 2 ) satisfies the initial condition. Given the explicit expression of W ˜ g ( y 1 , y 2 ) , which is g -absolutely continuous on [ 0 , T ] , we can compute its derivative g -a.e. in [ 0 , T ] by means of Proposition 2.5 and Remark 2.11, which yield

$(\widetilde{W}_g(y_1,y_2))'_g = (y_1)'_g\,(y_2)'_g + y_1\,(y_2)''_g + (y_1)'_g\,(y_2)''_g\,\Delta g^* - (y_2)'_g\,(y_1)'_g - y_2\,(y_1)''_g - (y_2)'_g\,(y_1)''_g\,\Delta g^* = (-P + Q\,\Delta g^*)\,\widetilde{W}_g(y_1,y_2),$

so $(\widetilde{W}_g(y_1,y_2))'_g = (-P + Q\,\Delta g)\,\widetilde{W}_g(y_1,y_2)$ g-a.e. in $[0,T]$. Now, thanks to (2.3),

$\widetilde{W}_g(y_1,y_2) = \widetilde{W}_g(y_1,y_2)(0)\,\exp_g(-P + Q\,\Delta g;\,\cdot\,).$

Finally, assume that (3.10) holds and $\widetilde{W}_g(y_1,y_2)(0) \neq 0$. Given that (3.9) and (3.11) hold, it is clear that $W_g(y_1,y_2)(t) \neq 0$ for every $t\in[0,T]$. Observe that by Proposition 2.17, we have that

$\exp_g(-P + Q\,\Delta g;\,\cdot\,)^{-1} = \exp_g\!\left(\frac{P - Q\,\Delta g}{1 - P\,\Delta g + Q\,\Delta g^2};\,\cdot\,\right).$

Hence, given (3.9) and (3.11), it follows that

$W_g(y_1,y_2)^{-1} = (1 - P\,\Delta g + Q\,\Delta g^2)^{-1}\,\widetilde{W}_g(y_1,y_2)^{-1} = [\,\widetilde{W}_g(y_1,y_2)(0)\,(1 - P\,\Delta g + Q\,\Delta g^2)\,]^{-1}\exp_g\!\left(\frac{P - Q\,\Delta g}{1 - P\,\Delta g + Q\,\Delta g^2};\,\cdot\,\right),$

which proves (3.12). Thus, all that is left to do is show that $W_g(y_1,y_2)^{-1}$ is bounded and continuous at every $t\in[0,T]\setminus D_g$. Since $-P + Q\,\Delta g \in \mathcal{L}^1_g([0,T];\mathbb{C})$ and $1 + (-P + Q\,\Delta g)\,\Delta g = 1 - P\,\Delta g + Q\,\Delta g^2 \neq 0$, we have, using the same arguments as in the proof of Corollary 2.22, that $[\,1 - P\,\Delta g + Q\,\Delta g^2\,]^{-1}$ is a bounded function, which ensures that $W_g(y_1,y_2)^{-1}$ is bounded. Finally, the continuity of $W_g(y_1,y_2)^{-1}$ on $[0,T]\setminus D_g$ is a direct consequence of Proposition 2.1.□
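In the classical case g = Id (so Δg ≡ 0), identity (3.11) reduces to Abel's formula, W̃(t) = W̃(0)e^{−Pt} for constant coefficients. A quick numerical check with an example of our own choosing:

```python
import math

# v'' + P v' + Q v = 0 with P = 3, Q = 2 has solutions e^{-t} and e^{-2t}.
P, Q = 3.0, 2.0
l1, l2 = -1.0, -2.0

def W_tilde(t):
    # simplified Wronskian y1 y2' - y2 y1' for y1 = e^{l1 t}, y2 = e^{l2 t}
    y1, y2 = math.exp(l1 * t), math.exp(l2 * t)
    dy1, dy2 = l1 * y1, l2 * y2
    return y1 * dy2 - y2 * dy1

t = 0.7
abel = W_tilde(0.0) * math.exp(-P * t)  # Abel's formula (Delta g = 0)
```

Both sides equal −e^{−3t} here, consistent with (3.11) when the derivator has no jumps.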

As a direct consequence of Lemma 3.13, we have the following result ensuring the linear independence of two solutions of the homogeneous equation provided condition (3.8) is satisfied.

Corollary 3.14

Assume that (3.10) holds and let $y_1, y_2 \in \mathcal{BC}^2_g([0,T];\mathbb{C})$ be two solutions of (3.6) such that (3.8) holds. Then, $W_g(y_1,y_2)(t) \neq 0$ for all $t\in[0,T]$ and, in particular, $y_1$ and $y_2$ are linearly independent.

Proof

Since $\widetilde{W}_g(y_1,y_2)(0) = y_1(0)\,(y_2)'_g(0) - y_2(0)\,(y_1)'_g(0) \neq 0$, Lemma 3.13 guarantees that equation (3.11) holds, so, since (3.10) holds and $\exp_g(-P + Q\,\Delta g;\,\cdot\,) \neq 0$, $W_g(y_1,y_2)(t) \neq 0$ for all $t\in[0,T]$. Now, the rest of the result follows from Lemma 3.6.□

4 The variation of parameters method

One of the most important applications of the g -Wronskian is its use in the variation of parameters method. This technique can be useful in two different ways: to find a linearly independent solution of the homogeneous equation (3.6) from a known solution of the same problem; or to obtain a particular solution of the nonhomogeneous problem (3.4) from two linearly independent solutions of the homogeneous problem.

4.1 Finding a solution of the homogeneous problem

The idea behind the classical variation of parameters method is to find a linearly independent solution of (3.6) with respect to a known solution of (3.6), say y 1 , that can be expressed as y 2 = φ y 1 for a certain function φ . Here we will show that a similar reasoning holds even in the context of Stieltjes calculus.

To that end, suppose that $y_1$ is a solution of equation (3.6) and consider $y_2 = \varphi\,y_1$ for a bounded, twice g-differentiable, g-continuous function $\varphi$. Let us deduce what kind of function $\varphi$ must be, assuming that $y_2$ is a solution of (3.6) which is linearly independent from $y_1$.

First, under suitable conditions, we can use Propositions 2.5 and 2.6 to compute the first and second g -derivatives of y 2 , which yield

(4.1) $(y_2)'_g = \varphi'_g\,y_1 + \varphi\,(y_1)'_g + \varphi'_g\,(y_1)'_g\,\Delta g^*, \qquad (y_2)''_g = \varphi''_g\,y_1 + \varphi\,(y_1)''_g + (2 - \chi_{D_g}^*)\,\varphi'_g\,(y_1)'_g + \varphi''_g\,(y_1)'_g\,\Delta g^* + \varphi'_g\,(y_1)''_g\,\Delta g^*.$

So, knowing that $y_1$, $y_2$ are solutions of equation (3.6), when substituting these expressions in (3.6), we obtain that $\varphi$ must satisfy the equation

$\varphi''_g\,(y_1 + (y_1)'_g\,\Delta g^*) + \varphi'_g\,[\,P\,(y_1 + (y_1)'_g\,\Delta g^*) + (2 - \chi_{D_g}^*)\,(y_1)'_g + (y_1)''_g\,\Delta g^*\,] = 0.$

Under suitable conditions, this is equivalent to requiring that $\varphi'_g = w$ for a certain function $w$ satisfying

(4.2) $w'_g = -\left(P + \frac{(2 - \chi_{D_g}^*)\,(y_1)'_g + (y_1)''_g\,\Delta g^*}{y_1 + (y_1)'_g\,\Delta g^*}\right)w = \left(-P + Q\,\Delta g^* - \frac{(y_1)'_g}{y_1 + (y_1)'_g\,\Delta g^*}\,[\,(2 - \chi_{D_g}^*) - P\,\Delta g^* + Q\,(\Delta g^*)^2\,]\right)w,$

where we have used the fact that $y_1$ is a solution of (3.6) to obtain the last equality.

On the other hand, since $y_2$ is a solution of (3.6), we know that $y_2 \in \mathcal{BC}^2_g([0,T];\mathbb{C})$, so keeping in mind (4.1) and $w = \varphi'_g$, we can write $(y_2)'_g = \varphi\,(y_1)'_g + w\,(y_1 + \Delta g^*\,(y_1)'_g)$, which should belong to $\mathcal{BC}_g([0,T];\mathbb{C})$. In other words, we ought to have that

$\eta \equiv w\,[\,y_1 + \Delta g^*\,(y_1)'_g\,] \in \mathcal{BC}_g([0,T];\mathbb{C}).$

Observe that once again, under sufficient conditions, η is g -differentiable on [ 0 , T ] and, by Propositions 2.5 and 2.6, we have that

$\eta'_g = w'_g\,[\,y_1 + \Delta g^*\,(y_1)'_g\,] + w\,(1 - \chi_{D_g}^*)\,[\,(y_1)'_g + \Delta g^*\,(y_1)''_g\,] + w'_g\,(1 - \chi_{D_g}^*)\,[\,(y_1)'_g + \Delta g^*\,(y_1)''_g\,]\,\Delta g^* = w'_g\,[\,y_1 + \Delta g^*\,(y_1)'_g\,] + w\,(1 - \chi_{D_g}^*)\,(y_1)'_g,$

since $(1 - \chi_{D_g}^*)\,\Delta g^* = 0$.

Thus, keeping in mind equation (4.2) and the definition of η ,

$\eta'_g = \left(-P + Q\,\Delta g^* - \frac{(y_1)'_g}{y_1 + (y_1)'_g\,\Delta g^*}\,[\,(2 - \chi_{D_g}^*) - P\,\Delta g^* + Q\,(\Delta g^*)^2\,]\right)\eta + (1 - \chi_{D_g}^*)\,\frac{(y_1)'_g\,\eta}{y_1 + (y_1)'_g\,\Delta g^*} = \left(-P + Q\,\Delta g^* - \frac{(y_1)'_g}{y_1 + (y_1)'_g\,\Delta g^*}\,(1 - P\,\Delta g^* + Q\,(\Delta g^*)^2)\right)\eta,$

which is a first-order linear equation. This means that, under suitable conditions, we can consider

$\eta = \exp_g\!\left(-P + Q\,\Delta g - \frac{(y_1)'_g}{y_1 + (y_1)'_g\,\Delta g^*}\,(1 - P\,\Delta g^* + Q\,(\Delta g^*)^2);\,\cdot\,\right) = \exp_g(-P + Q\,\Delta g;\,\cdot\,)\,\exp_g\!\left(\frac{-(y_1)'_g}{y_1 + (y_1)'_g\,\Delta g};\,\cdot\,\right),$

where the last equality is a consequence of the product formula in Proposition 2.17 and (2.3). Furthermore, the quotient rule in Proposition 2.17 ensures that, under the corresponding conditions,

$\left(\frac{1}{y_1}\right)'_g = \frac{-(y_1)'_g}{y_1 + (y_1)'_g\,\Delta g^*}\cdot\frac{1}{y_1},$

so, given the uniqueness of solution of first-order linear equations, it follows that

$\exp_g\!\left(\frac{-(y_1)'_g}{y_1 + (y_1)'_g\,\Delta g};\,\cdot\,\right) = \frac{1}{y_1},$

which means that η can be expressed as

$\eta = \frac{\exp_g(-P + Q\,\Delta g;\,\cdot\,)}{y_1},$

and, as a consequence,

$w = \frac{\exp_g(-P + Q\,\Delta g;\,\cdot\,)}{y_1\,[\,y_1 + (y_1)'_g\,\Delta g^*\,]}.$

Now, the expression of φ follows, yielding

(4.3) $\varphi(t) = \int_{[0,t)} \frac{\exp_g(-P + Q\,\Delta g;\,s)}{y_1(s)\,[\,y_1(s) + (y_1)'_g(s)\,\Delta g(s)\,]}\,\mathrm{d}\mu_g(s), \quad t\in[0,T].$
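For g = Id, formula (4.3) reduces to the classical reduction-of-order formula y₂ = y₁∫e^{−∫P}/y₁². The sketch below (example and function names are ours) checks it on v'' − 2v' + v = 0, where y₁ = e^t should produce y₂ = t e^t:

```python
import math

def reduction_of_order(P, y1, t, n=10000):
    """Classical (g = Id) specialization of (4.3):
    y2(t) = y1(t) * int_0^t exp(-int_0^s P) / y1(s)^2 ds."""
    h = t / n
    acc, intP = 0.0, 0.0
    for k in range(n):
        s = k * h
        acc += math.exp(-intP) / y1(s) ** 2 * h  # left Riemann sum
        intP += P(s) * h                          # running integral of P
    return y1(t) * acc

# v'' - 2v' + v = 0 with y1 = e^t: the formula should return t e^t at t = 1.
y2_at_1 = reduction_of_order(lambda s: -2.0, lambda s: math.exp(s), 1.0)
```

Here the integrand is identically 1, so the numerical value at t = 1 should be e.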

We must note that, in the formal deduction of formula (4.3) just presented, it has been necessary to compute the g-derivative of $\Delta g^*$ (see Proposition 2.6). In the following result, we provide explicit conditions under which we can obtain a linearly independent solution of the homogeneous problem (3.6) from a known solution of the same problem. We will see that it is not necessary to assume additional hypotheses guaranteeing the g-differentiability of $\Delta g^*$ in order to ensure that $y_2 = \varphi\,y_1$ is a solution of (3.6) which is linearly independent from $y_1$.

Theorem 4.1

Assume that (3.10) holds. If $y_1$ is a solution of (3.6) such that $y_1(t) \neq 0$ for every $t\in[0,T]$; the map

$t\in[0,T] \longmapsto \frac{1}{y_1(t) + (y_1)'_g(t)\,\Delta g(t)}$

is well-defined and bounded, and

$t\in[0,T] \longmapsto \frac{1}{y_1(t)}$

belongs to $\mathcal{BC}_g([0,T];\mathbb{C})$, then the map $\varphi:[0,T]\to\mathbb{C}$ given by

$\varphi(t) = \int_{[0,t)} \frac{\exp_g(-P + Q\,\Delta g;\,s)}{y_1(s)\,[\,y_1(s) + (y_1)'_g(s)\,\Delta g(s)\,]}\,\mathrm{d}\mu_g(s), \quad t\in[0,T],$

is well-defined and belongs to $\mathcal{AC}_g([0,T];\mathbb{C})$, and the map $y_2 \equiv \varphi\,y_1$ is a solution of (3.6). Furthermore, $y_1$ and $y_2$ are linearly independent.

Proof

On the one hand, thanks to [3, Lemma 2.14], we have that the function $f:[0,T]\to\mathbb{C}$ defined as $f(t) = \exp_g(-P + Q\,\Delta g;\,t)/y_1(t)$ belongs to $\mathcal{BC}_g([0,T];\mathbb{C})$, so thanks to Corollary 2.20, $y_2 = \varphi\,y_1 \in \mathcal{BC}^1_g([0,T];\mathbb{C})$, and

$(y_2)'_g = (y_1)'_g\,\varphi + \frac{1}{y_1}\,\exp_g(-P + Q\,\Delta g;\,\cdot\,).$

Now, on the other hand, thanks to Proposition 2.5,

$(y_2)''_g = (y_1)''_g\,\varphi + \frac{(y_1)'_g\,\exp_g(-P + Q\,\Delta g;\,\cdot\,)}{y_1\,[\,y_1 + (y_1)'_g\,\Delta g^*\,]} + \frac{(y_1)''_g\,\exp_g(-P + Q\,\Delta g;\,\cdot\,)\,\Delta g^*}{y_1\,[\,y_1 + (y_1)'_g\,\Delta g^*\,]} + \frac{y_1\,(-P + Q\,\Delta g^*)\,\exp_g(-P + Q\,\Delta g;\,\cdot\,)}{y_1\,[\,y_1 + (y_1)'_g\,\Delta g^*\,]} - \frac{(y_1)'_g\,\exp_g(-P + Q\,\Delta g;\,\cdot\,)}{y_1\,[\,y_1 + (y_1)'_g\,\Delta g^*\,]}.$

Hence,

$(y_2)''_g = (y_1)''_g\,\varphi + \frac{\exp_g(-P + Q\,\Delta g;\,\cdot\,)}{y_1\,[\,y_1 + (y_1)'_g\,\Delta g^*\,]}\,[\,(y_1)''_g\,\Delta g^* + y_1\,(-P + Q\,\Delta g^*)\,].$

If we use the fact that $(y_1)''_g = -P\,(y_1)'_g - Q\,y_1$, we derive that

$(y_1)''_g\,\Delta g^* + y_1\,(-P + Q\,\Delta g^*) = -P\,(y_1)'_g\,\Delta g^* - Q\,y_1\,\Delta g^* + y_1\,(-P + Q\,\Delta g^*) = -P\,[\,y_1 + (y_1)'_g\,\Delta g^*\,].$

As a consequence,

$(y_2)''_g = (y_1)''_g\,\varphi - P\,\frac{\exp_g(-P + Q\,\Delta g;\,\cdot\,)}{y_1}.$

Observe that ( y 2 ) g BC g ( [ 0 , T ] ; C ) , so y 2 BC g 2 ( [ 0 , T ] ; C ) . Let us now see that y 2 satisfies equation (3.6). Indeed,

$(y_2)''_g + P\,(y_2)'_g + Q\,y_2 = (y_1)''_g\,\varphi - P\,\frac{\exp_g(-P + Q\,\Delta g;\,\cdot\,)}{y_1} + P\left((y_1)'_g\,\varphi + \frac{1}{y_1}\,\exp_g(-P + Q\,\Delta g;\,\cdot\,)\right) + Q\,\varphi\,y_1 = \varphi\,[\,(y_1)''_g + P\,(y_1)'_g + Q\,y_1\,] = 0,$

since y 1 is a solution of (3.6).

Finally, we study the linear independence by means of Lemma 3.6. Observe that

$\widetilde{W}_g(y_1,y_2) = y_1\,(y_2)'_g - y_2\,(y_1)'_g = y_1\,[\,(y_1)'_g\,\varphi + y_1\,\varphi'_g + (y_1)'_g\,\varphi'_g\,\Delta g^*\,] - y_1\,\varphi\,(y_1)'_g = y_1\,\varphi'_g\,[\,y_1 + (y_1)'_g\,\Delta g^*\,] = \exp_g(-P + Q\,\Delta g;\,\cdot\,) \neq 0,$

which is enough to guarantee that $y_1$ and $y_2$ are linearly independent in $\mathcal{BC}_g([0,T];\mathbb{C})$.□

Remark 4.2

Similar to Corollary 2.20, Theorem 4.1 presents some technical conditions on the map $y_1$, which can be ignored if we can guarantee that $y_1$ is bounded away from zero. Indeed, if that is the case, it is clear that $y_1(t) \neq 0$, $t\in[0,T]$, while Remark 2.21 ensures that the map $(y_1 + (y_1)'_g\,\Delta g)^{-1}$ is well-defined and bounded. Following a similar reasoning, we can also see that $y_1^{-1}$ is well-defined and bounded, while the fact that $y_1$ is g-continuous on $[0,T]$ and $y_1(t) \neq 0$ for $t\in[0,T]$ is enough to ensure that $y_1^{-1}$ is g-continuous on $[0,T]$.

4.2 Obtaining a particular solution of the nonhomogeneous problem

In a similar way to the classical case, to obtain the solution of the nonhomogeneous equation, we need to find a particular solution. In order to achieve this, we can use the method of variation of parameters, that is, we will look for a particular solution of the form

(4.4) $v_p = c_1\,y_1 + c_2\,y_2,$

where y 1 , y 2 BC g 2 ( [ 0 , T ] ; C ) are two solutions of (3.6) and c 1 , c 2 BC g ( [ 0 , T ] ; C ) are g -differentiable at all points of [ 0 , T ] . In that case, thanks to Proposition 2.5, we have

$(v_p)'_g = c_1\,(y_1)'_g + c_2\,(y_2)'_g + (c_1)'_g\,y_1 + (c_2)'_g\,y_2 + (c_1)'_g\,(y_1)'_g\,\Delta g^* + (c_2)'_g\,(y_2)'_g\,\Delta g^*.$

To prevent second derivatives of the unknowns $c_1$ and $c_2$ from appearing when computing the second derivative of $v_p$, we will assume that

(4.5) $(c_1)'_g\,y_1 + (c_2)'_g\,y_2 + (c_1)'_g\,(y_1)'_g\,\Delta g^* + (c_2)'_g\,(y_2)'_g\,\Delta g^* = 0.$

Thus,

$(v_p)''_g = (c_1)'_g\,(y_1)'_g + c_1\,(y_1)''_g + (c_2)'_g\,(y_2)'_g + c_2\,(y_2)''_g + (c_1)'_g\,(y_1)''_g\,\Delta g^* + (c_2)'_g\,(y_2)''_g\,\Delta g^*.$

Substituting v p in (3.4) we obtain that

(4.6) $(c_1)'_g\,(y_1)'_g + (c_2)'_g\,(y_2)'_g + (c_1)'_g\,(y_1)''_g\,\Delta g^* + (c_2)'_g\,(y_2)''_g\,\Delta g^* = f.$

Taking into account equations (4.5) and (4.6), we need ( c 1 ) g and ( c 2 ) g to satisfy the linear system

$\begin{pmatrix} y_1 + (y_1)'_g\,\Delta g^* & y_2 + (y_2)'_g\,\Delta g^* \\ (y_1)'_g + (y_1)''_g\,\Delta g^* & (y_2)'_g + (y_2)''_g\,\Delta g^* \end{pmatrix}\begin{pmatrix} (c_1)'_g \\ (c_2)'_g \end{pmatrix} = \begin{pmatrix} 0 \\ f \end{pmatrix}.$

The determinant of the matrix associated with the previous system coincides with the g-Wronskian $W_g(y_1,y_2)^*$, so, if we can ensure that $W_g(y_1,y_2)(t) \neq 0$, $t\in[0,T]$, then

(4.7) $(c_1)'_g = -\frac{(y_2 + (y_2)'_g\,\Delta g^*)\,f}{W_g(y_1,y_2)^*}, \qquad (c_2)'_g = \frac{(y_1 + (y_1)'_g\,\Delta g^*)\,f}{W_g(y_1,y_2)^*}.$

In the next result, we will see that the functions

(4.8) $c_1(t) = -\int_{[0,t)} \frac{(y_2(s) + (y_2)'_g(s)\,\Delta g(s))\,f(s)}{W_g(y_1,y_2)(s)}\,\mathrm{d}\mu_g(s), \quad t\in[0,T],$

(4.9) $c_2(t) = \int_{[0,t)} \frac{(y_1(s) + (y_1)'_g(s)\,\Delta g(s))\,f(s)}{W_g(y_1,y_2)(s)}\,\mathrm{d}\mu_g(s), \quad t\in[0,T],$

are such that (4.4) is a particular solution of equation (3.4). We can now state and prove the following theorem for the solutions of the initial value problem (3.4)–(3.5) using the definition of the g-Wronskian. This generalizes [6, Theorem 5.2.4], where constant coefficients were considered.
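For g = Id, the coefficients (4.8)–(4.9) are the classical variation-of-parameters integrals. A minimal numeric sketch on v'' + v = 1, with y₁ = cos, y₂ = sin and W = 1, where the expected particular solution is 1 − cos t (the example and function names are ours):

```python
import math

def particular_solution(y1, y2, W, f, t, n=20000):
    """Classical (g = Id) specialization of (4.10):
    v_p(t) = -y1(t) int_0^t y2 f / W + y2(t) int_0^t y1 f / W (left Riemann sums)."""
    h = t / n
    c1 = -sum(y2(k * h) * f(k * h) / W(k * h) for k in range(n)) * h
    c2 = sum(y1(k * h) * f(k * h) / W(k * h) for k in range(n)) * h
    return c1 * y1(t) + c2 * y2(t)

# v'' + v = 1: the particular solution should be close to 1 - cos t.
vp = particular_solution(math.cos, math.sin, lambda s: 1.0, lambda s: 1.0, 1.2)
```

One can verify directly that 1 − cos t indeed satisfies v'' + v = 1.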

Theorem 4.3

Assume that (3.10) holds. Let y 1 , y 2 BC g 2 ( [ 0 , T ] ; C ) be two solutions of (3.6) such that

$W_g(y_1,y_2)(t) \neq 0 \quad \text{for all } t\in[0,T].$

Then, the map v p : [ 0 , T ] C defined as

(4.10) $v_p(t) = -y_1(t)\int_{[0,t)} \frac{(y_2(s) + (y_2)'_g(s)\,\Delta g(s))\,f(s)}{W_g(y_1,y_2)(s)}\,\mathrm{d}\mu_g(s) + y_2(t)\int_{[0,t)} \frac{(y_1(s) + (y_1)'_g(s)\,\Delta g(s))\,f(s)}{W_g(y_1,y_2)(s)}\,\mathrm{d}\mu_g(s)$

is a particular solution to the nonhomogeneous equation (3.4). Moreover, $v(t) = v_p(t) + v_h(t)$ is the solution of the initial value problem (3.4)–(3.5), where

$v_h(t) = \frac{x_0\,(y_2)'_g(0) - v_0\,y_2(0)}{W_g(y_1,y_2)(0)}\,y_1(t) + \frac{v_0\,y_1(0) - x_0\,(y_1)'_g(0)}{W_g(y_1,y_2)(0)}\,y_2(t), \quad t\in[0,T],$

is the solution of the initial value problem (3.6)–(3.7).

Proof

Thanks to Lemma 3.13,

$W_g(y_1,y_2)^{-1} = [\,\widetilde{W}_g(y_1,y_2)(0)\,(1 - P\,\Delta g + Q\,\Delta g^2)\,]^{-1}\exp_g\!\left(\frac{P - Q\,\Delta g}{1 - P\,\Delta g + Q\,\Delta g^2};\,\cdot\,\right)$

is a bounded function and continuous on [ 0 , T ] \ D g . Hence, the functions

$-\frac{(y_2 + (y_2)'_g\,\Delta g)\,f}{W_g(y_1,y_2)}, \qquad \frac{(y_1 + (y_1)'_g\,\Delta g)\,f}{W_g(y_1,y_2)},$

satisfy the hypotheses of Lemma 2.18 and, therefore, c 1 and c 2 , given by (4.8) and (4.9), respectively, are g -absolutely continuous on [ 0 , T ] and satisfy the equations in (4.7) for all t [ 0 , T ] .

Let us now check that v p BC g 2 ( [ 0 , T ] ; C ) . Indeed, first observe that v p BC g ( [ 0 , T ] ; C ) . Now, thanks to Proposition 2.5, we have that

$(v_p)'_g = (y_1)'_g\,c_1 + (y_2)'_g\,c_2 - y_1\,\frac{(y_2 + (y_2)'_g\,\Delta g^*)\,f}{W_g(y_1,y_2)^*} + y_2\,\frac{(y_1 + (y_1)'_g\,\Delta g^*)\,f}{W_g(y_1,y_2)^*} - (y_1)'_g\,\frac{(y_2 + (y_2)'_g\,\Delta g^*)\,f}{W_g(y_1,y_2)^*}\,\Delta g^* + (y_2)'_g\,\frac{(y_1 + (y_1)'_g\,\Delta g^*)\,f}{W_g(y_1,y_2)^*}\,\Delta g^* = (y_1)'_g\,c_1 + (y_2)'_g\,c_2.$

Therefore,

(4.11) $(v_p)'_g(t) = -(y_1)'_g(t)\int_{[0,t)} \frac{(y_2 + (y_2)'_g\,\Delta g)\,f}{W_g(y_1,y_2)}\,\mathrm{d}\mu_g + (y_2)'_g(t)\int_{[0,t)} \frac{(y_1 + (y_1)'_g\,\Delta g)\,f}{W_g(y_1,y_2)}\,\mathrm{d}\mu_g,$

for all $t\in[0,T]$, so $(v_p)'_g \in \mathcal{BC}_g([0,T];\mathbb{C})$. Reasoning in a similar way,

$(v_p)''_g(t) = -(y_1)''_g(t)\int_{[0,t)} \frac{(y_2 + (y_2)'_g\,\Delta g)\,f}{W_g(y_1,y_2)}\,\mathrm{d}\mu_g + (y_2)''_g(t)\int_{[0,t)} \frac{(y_1 + (y_1)'_g\,\Delta g)\,f}{W_g(y_1,y_2)}\,\mathrm{d}\mu_g + f(t) = (y_1)''_g(t)\,c_1(t) + (y_2)''_g(t)\,c_2(t) + f(t),$

for all t [ 0 , T ] . Whence, ( v p ) g BC g ( [ 0 , T ] ; C ) , which shows that v p BC g 2 ( [ 0 , T ] ; C ) .

Observe that it is immediate from equations (4.10) and (4.11) that $v_p(0) = (v_p)'_g(0) = 0$. Furthermore, $v_p$ satisfies, indeed, (3.4) since

$(v_p)''_g + P\,(v_p)'_g + Q\,v_p = (y_1)''_g\,c_1 + (y_2)''_g\,c_2 + f + P\,((y_1)'_g\,c_1 + (y_2)'_g\,c_2) + Q\,(y_1\,c_1 + y_2\,c_2) = c_1\,[\,(y_1)''_g + P\,(y_1)'_g + Q\,y_1\,] + c_2\,[\,(y_2)''_g + P\,(y_2)'_g + Q\,y_2\,] + f = f.$

This proves that v p is a solution of (3.4). Now, the rest of the result follows from Lemma 3.12.□

5 Applications

In this final section, we illustrate the theory developed above with three examples: a second-order linear Stieltjes differential equation with constant coefficients, an example with variable coefficients, and the one-dimensional linear Helmholtz equation with piecewise-constant coefficients.

5.1 Second-order linear Stieltjes differential equations with constant coefficients

In [2], the authors obtained the solution of the second-order linear problem

$v''_g(t) + P\,v'_g(t) + Q\,v(t) = f(t), \quad t\in[0,T], \qquad v(0) = x_0, \quad v'_g(0) = v_0,$

where $P, Q, x_0, v_0 \in \mathbb{C}$, $f \in \mathcal{BC}_g([0,T];\mathbb{C})$. Their approach was based on a factorization. Namely, considering $\lambda_1, \lambda_2 \in \mathbb{C}$ such that, for $k = 1, 2$,

(5.1) $\lambda_k^2 + P\,\lambda_k + Q = 0,$

(5.2) $1 + \lambda_k\,\Delta g(t) \neq 0, \quad t\in[0,T]\cap D_g,$

they showed that a solution of the second-order problem, v , must satisfy

$v'_g(t) = \lambda_2\,v(t) + v_1(t), \quad t\in[0,T], \qquad v(0) = x_0,$

where $v_1 = v'_g - \lambda_2\,v$ is such that

$(v_1)'_g(t) = \lambda_1\,v_1(t) + f(t), \quad t\in[0,T], \qquad v_1(0) = v_0 - \lambda_2\,x_0.$

By doing so, in [2, Proposition 4.12], they obtained the explicit expression of the solution, which is given by

(5.3) $v(t) = x_0\,\exp_g(\lambda_2;t) + (v_0 - \lambda_2\,x_0)\,\exp_g(\lambda_2;t)\int_{[0,t)} \frac{\exp_g\!\left(\frac{\lambda_1 - \lambda_2}{1 + \lambda_2\,\Delta g};\,s\right)}{1 + \lambda_2\,\Delta g(s)}\,\mathrm{d}\mu_g(s) + \exp_g(\lambda_2;t)\int_{[0,t)} \frac{\exp_g\!\left(\frac{\lambda_1 - \lambda_2}{1 + \lambda_2\,\Delta g};\,s\right)}{1 + \lambda_2\,\Delta g(s)}\int_{[0,s)} \frac{\exp_g(\lambda_1;r)^{-1}\,f(r)}{1 + \lambda_1\,\Delta g(r)}\,\mathrm{d}\mu_g(r)\,\mathrm{d}\mu_g(s), \quad t\in[0,T].$

In this section, we present an alternative way of obtaining the general solution (5.3) based on the variation of parameters method. This technique is especially useful when the equation $x^2 + Px + Q = 0$ has a single (double) root $\lambda = -P/2$. We have the following result.

Corollary 5.1

Let λ 1 , λ 2 C be such that (5.1) and (5.2) hold. Then,

  • If $\lambda_1 \neq \lambda_2$, then $y_k = \exp_g(\lambda_k;\,\cdot\,)$, $k = 1, 2$, are two linearly independent solutions of (3.6) which belong to $\mathcal{BC}_g([0,T];\mathbb{C})$.

  • If $\lambda_1 = \lambda_2 \equiv \lambda$, then $y_1 = \exp_g(\lambda;\,\cdot\,)$ and

    (5.4) $y_2(t) = \left[\int_{[0,t)} \frac{1}{1 + \lambda\,\Delta g}\,\mathrm{d}\mu_g\right]\exp_g(\lambda;t), \quad t\in[0,T],$

    are two linearly independent solutions of (3.6) which belong to BC g ( [ 0 , T ] ; C ) .

Proof

The first case is straightforward from Corollary 3.14 and Remark 2.16. Just observe that

$y_1(0)\,(y_2)'_g(0) - y_2(0)\,(y_1)'_g(0) = \lambda_2 - \lambda_1 \neq 0.$

Let us now consider the second case, namely, $\lambda_1 = \lambda_2 = -P/2 \equiv \lambda$. In that case, we know that $y_1(t) = \exp_g(\lambda;t)$ is a solution of (3.6) and it belongs to $\mathcal{BC}_g([0,T];\mathbb{C})$, see Remark 2.16. We shall show that $y_2$ is the linearly independent solution given by Theorem 4.1.

First, observe that condition (3.10) is satisfied because

$1 - P\,\Delta g(t) + Q\,\Delta g(t)^2 = (1 + \lambda\,\Delta g(t))^2 \neq 0, \quad t\in[0,T].$

Furthermore, since $1 + \lambda\,\Delta g(t) \neq 0$, $t\in[0,T]\cap D_g$, we have that $y_1(t) = \exp_g(\lambda;t) \neq 0$, $t\in[0,T]$, and moreover, by Proposition 2.17, $y_1^{-1} = \exp_g\!\left(\frac{-\lambda}{1 + \lambda\,\Delta g};\,\cdot\,\right)$, so $y_1^{-1} \in \mathcal{BC}_g([0,T];\mathbb{C})$.

On the other hand,

$\frac{1}{y_1 + (y_1)'_g\,\Delta g} = \frac{1}{y_1}\cdot\frac{1}{1 + \lambda\,\Delta g} = \frac{\exp_g\!\left(\frac{-\lambda}{1 + \lambda\,\Delta g};\,\cdot\,\right)}{1 + \lambda\,\Delta g}$

is well-defined and bounded. Hence, the hypotheses of Theorem 4.1 are satisfied which implies that y 2 = φ y 1 , with

$\varphi(t) = \int_{[0,t)} \frac{\exp_g(-P + Q\,\Delta g;\,\cdot\,)}{y_1\,[\,y_1 + (y_1)'_g\,\Delta g\,]}\,\mathrm{d}\mu_g, \quad t\in[0,T],$

is a solution of (3.6) and y 1 and y 2 are linearly independent. Let us show that y 2 can be expressed as (5.4).

First, since $y_1 = \exp_g(\lambda;\,\cdot\,)$, we have that $(y_1)'_g = \lambda\,y_1$ and, thus, $y_1 + (y_1)'_g\,\Delta g = y_1\,(1 + \lambda\,\Delta g)$. On the other hand, since $P^2 - 4Q = 0$ and $\lambda = -P/2$, we have that $-P + Q\,\Delta g = 2\lambda + \lambda^2\,\Delta g$, so, using Proposition 2.17, we see that

$\varphi(t) = \int_{[0,t)} \frac{\exp_g(2\lambda + \lambda^2\,\Delta g;\,\cdot\,)}{y_1\,[\,y_1 + (y_1)'_g\,\Delta g\,]}\,\mathrm{d}\mu_g = \int_{[0,t)} \frac{\exp_g(\lambda;\,\cdot\,)^2}{y_1^2\,[\,1 + \lambda\,\Delta g\,]}\,\mathrm{d}\mu_g = \int_{[0,t)} \frac{1}{1 + \lambda\,\Delta g}\,\mathrm{d}\mu_g, \quad t\in[0,T],$

so y 2 coincides with the expression in (5.4). Finally, Corollary 2.23 ensures that y 2 belongs to BC g ( [ 0 , T ] ; C ) , which completes the proof.□
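For a derivator with finitely many jumps, the g-exponential factors as the usual exponential of the continuous part times ∏(1 + λΔg(s)) over the jumps crossed. The following sketch (single jump, our own construction) checks the defining relation v'_g(t₀) = λv(t₀) across the jump:

```python
import math

T0, D = 0.5, 1.0  # single jump of size D = Delta g(T0) at t = T0

def exp_g(l, t):
    """g-exponential for g(t) = t + D*(t > T0): continuous part times jump factor."""
    factor = (1 + l * D) if t > T0 else 1.0
    return math.exp(l * t) * factor

l = 0.25
v_at_jump = exp_g(l, T0)        # exp_g is left-continuous: no jump factor yet
v_after = exp_g(l, T0 + 1e-12)  # value just after the jump
# Stieltjes derivative at the jump point: (v(t0+) - v(t0)) / Delta g(t0)
deriv_at_jump = (v_after - v_at_jump) / D
```

The quotient reproduces λ·v(t₀) exactly, which is why condition (5.2), 1 + λΔg ≠ 0, is what keeps the g-exponential from vanishing.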

Let us analyze next the existence of a solution of the initial value problem (3.4)–(3.5) in this constant-coefficient setting. We have the following corollary.

Corollary 5.2

Let λ 1 , λ 2 C be such that (5.1) and (5.2) hold. Let y 1 , y 2 BC g 2 ( [ 0 , T ] ; C ) be the two linearly independent solutions of (3.6) provided by Corollary 5.1. Then,

  • If $\lambda_1 \neq \lambda_2$, then the map $v:[0,T]\to\mathbb{C}$ defined as

    (5.5) $v(t) = \frac{\lambda_2\,x_0 - v_0}{\lambda_2 - \lambda_1}\,\exp_g(\lambda_1;t) + \frac{1}{\lambda_1 - \lambda_2}\,\exp_g(\lambda_1;t)\int_{[0,t)} \frac{\exp_g(\lambda_1;\,\cdot\,)^{-1}\,f}{1 + \lambda_1\,\Delta g}\,\mathrm{d}\mu_g + \frac{v_0 - \lambda_1\,x_0}{\lambda_2 - \lambda_1}\,\exp_g(\lambda_2;t) - \frac{1}{\lambda_1 - \lambda_2}\,\exp_g(\lambda_2;t)\int_{[0,t)} \frac{\exp_g(\lambda_2;\,\cdot\,)^{-1}\,f}{1 + \lambda_2\,\Delta g}\,\mathrm{d}\mu_g,$

    is the solution of the initial value problem (3.4)–(3.5).

  • If $\lambda_1 = \lambda_2 \equiv \lambda$, then the map $v:[0,T]\to\mathbb{C}$ defined as

    (5.6) $v(t) = x_0\,\exp_g(\lambda;t) - \exp_g(\lambda;t)\int_{[0,t)} \exp_g(\lambda;\,\cdot\,)^{-1}\left[\frac{\varphi\,f}{1 + \lambda\,\Delta g} + \frac{f\,\Delta g}{(1 + \lambda\,\Delta g)^2}\right]\mathrm{d}\mu_g + (v_0 - \lambda\,x_0)\,\varphi(t)\,\exp_g(\lambda;t) + \varphi(t)\,\exp_g(\lambda;t)\int_{[0,t)} \frac{\exp_g(\lambda;\,\cdot\,)^{-1}\,f}{1 + \lambda\,\Delta g}\,\mathrm{d}\mu_g,$

    is the solution of the initial value problem (3.4)–(3.5), where

    (5.7) $\varphi(t) = \int_{[0,t)} \frac{1}{1 + \lambda\,\Delta g}\,\mathrm{d}\mu_g, \quad t\in[0,T].$

Proof

The result is a direct consequence of Theorem 4.3. Indeed, let us consider the two cases separately.

  • If $\lambda_1 \neq \lambda_2$, for any $s\in[0,T]$ we have that

    $-\frac{(y_2(s) + (y_2)'_g(s)\,\Delta g(s))\,f(s)}{W_g(y_1,y_2)(s)} = \frac{1}{\lambda_1 - \lambda_2}\cdot\frac{\exp_g(\lambda_1;s)^{-1}\,f(s)}{1 + \lambda_1\,\Delta g(s)}, \qquad \frac{(y_1(s) + (y_1)'_g(s)\,\Delta g(s))\,f(s)}{W_g(y_1,y_2)(s)} = -\frac{1}{\lambda_1 - \lambda_2}\cdot\frac{\exp_g(\lambda_2;s)^{-1}\,f(s)}{1 + \lambda_2\,\Delta g(s)}.$

    Thus, Theorem 4.3 ensures that the solution is the map in (5.5).

  • On the other hand, if λ 1 = λ 2 , for any s [ 0 , T ] , we have

    $\frac{(y_2(s) + (y_2)'_g(s)\,\Delta g(s))\,f(s)}{W_g(y_1,y_2)(s)} = \frac{\varphi(s)\,\exp_g(\lambda;s)^{-1}\,f(s)}{1 + \lambda\,\Delta g(s)} + \frac{\exp_g(\lambda;s)^{-1}\,f(s)\,\Delta g(s)}{(1 + \lambda\,\Delta g(s))^2}, \qquad \frac{(y_1(s) + (y_1)'_g(s)\,\Delta g(s))\,f(s)}{W_g(y_1,y_2)(s)} = \frac{\exp_g(\lambda;s)^{-1}\,f(s)}{1 + \lambda\,\Delta g(s)}.$

    Hence, Theorem 4.3 gives the solution in (5.6), which completes the result.□

Remark 5.3

Observe that the general solution in (5.3) coincides with the one in Corollary 5.2. Indeed, we verify this case by case.

  • If $\lambda_1 \neq \lambda_2$, the solution in (5.3) can be expressed, for each $t\in[0,T]$, as

    $v(t) = x_0\,\exp_g(\lambda_2;t) + \frac{v_0 - \lambda_2\,x_0}{\lambda_1 - \lambda_2}\,\exp_g(\lambda_2;t)\left[\exp_g\!\left(\frac{\lambda_1 - \lambda_2}{1 + \lambda_2\,\Delta g};\,t\right) - 1\right] + \frac{\exp_g(\lambda_2;t)}{\lambda_1 - \lambda_2}\int_{[0,t)} \left(\exp_g\!\left(\frac{\lambda_1 - \lambda_2}{1 + \lambda_2\,\Delta g};\,\cdot\,\right)\right)'_g(s)\int_{[0,s)} \frac{\exp_g(\lambda_1;r)^{-1}\,f(r)}{1 + \lambda_1\,\Delta g(r)}\,\mathrm{d}\mu_g(r)\,\mathrm{d}\mu_g(s).$

    Now, if in Lemma 2.13, we take the functions given by

    $w_1(t) = \exp_g\!\left(\frac{\lambda_1 - \lambda_2}{1 + \lambda_2\,\Delta g};\,t\right), \qquad w_2(t) = \int_{[0,t)} \frac{\exp_g(\lambda_1;r)^{-1}\,f(r)}{1 + \lambda_1\,\Delta g(r)}\,\mathrm{d}\mu_g(r), \quad t\in[0,T],$

    we have, thanks to (2.3) and Proposition 2.17, that for each t [ 0 , T ] ,

    $\int_{[0,t)} \left(\exp_g\!\left(\frac{\lambda_1 - \lambda_2}{1 + \lambda_2\,\Delta g};\,\cdot\,\right)\right)'_g(s)\int_{[0,s)} \frac{\exp_g(\lambda_1;r)^{-1}\,f(r)}{1 + \lambda_1\,\Delta g(r)}\,\mathrm{d}\mu_g(r)\,\mathrm{d}\mu_g(s) = \exp_g\!\left(\frac{\lambda_1 - \lambda_2}{1 + \lambda_2\,\Delta g};\,t\right)\int_{[0,t)} \frac{\exp_g(\lambda_1;s)^{-1}\,f(s)}{1 + \lambda_1\,\Delta g(s)}\,\mathrm{d}\mu_g(s) - \int_{[0,t)} \frac{\exp_g(\lambda_2;s)^{-1}\,f(s)}{1 + \lambda_2\,\Delta g(s)}\,\mathrm{d}\mu_g(s).$

    Therefore, for each t [ 0 , T ] ,

    $v(t) = \frac{\lambda_2\,x_0 - v_0}{\lambda_2 - \lambda_1}\,\exp_g(\lambda_1;t) + \frac{1}{\lambda_1 - \lambda_2}\,\exp_g(\lambda_1;t)\int_{[0,t)} \frac{\exp_g(\lambda_1;\,\cdot\,)^{-1}\,f}{1 + \lambda_1\,\Delta g}\,\mathrm{d}\mu_g + \frac{v_0 - \lambda_1\,x_0}{\lambda_2 - \lambda_1}\,\exp_g(\lambda_2;t) - \frac{1}{\lambda_1 - \lambda_2}\,\exp_g(\lambda_2;t)\int_{[0,t)} \frac{\exp_g(\lambda_2;\,\cdot\,)^{-1}\,f}{1 + \lambda_2\,\Delta g}\,\mathrm{d}\mu_g.$

  • On the other hand, if λ 1 = λ 2 = λ and we denote by φ the function in (5.7), the expression (5.3) can be rewritten, for each t [ 0 , T ] , as

    v ( t ) = x 0 exp g ( λ ; t ) + ( v 0 λ x 0 ) exp g ( λ ; t ) φ ( t ) + exp g ( λ ; t ) [ 0 , t ) 1 1 + λ Δ g ( s ) [ 0 , s ) exp g ( λ ; r ) 1 f ( r ) 1 + λ Δ g ( r ) d μ g ( r ) d μ g ( s ) .

    Once again, applying Lemma 2.13 to the functions w 1 = φ and

    w₂(t) = ∫_{[0,t)} exp_g(λ; s)⁻¹ f(s)/(1 + λΔg(s)) dμ_g(s), t ∈ [0, T],

    we have, thanks to (2.3) and Proposition 2.17, that for each t [ 0 , T ] ,

    [ 0 , t ) 1 1 + λ Δ g ( s ) [ 0 , s ) exp g ( λ ; r ) 1 f ( r ) 1 + λ Δ g ( r ) d μ g ( r ) d μ g ( s ) = φ ( t ) [ 0 , t ) exp g ( λ ; s ) 1 f ( s ) 1 + λ Δ g ( s ) d μ g ( s ) [ 0 , t ) φ ( s ) exp g ( λ ; s ) 1 f ( s ) 1 + λ Δ g ( s ) d μ g ( s ) [ 0 , t ) exp g ( λ ; s ) 1 f ( s ) Δ g ( s ) ( 1 + λ Δ g ( s ) ) 2 d μ g ( s ) .

    Therefore, we have that, for each t [ 0 , T ] ,

    v ( t ) = x 0 exp g ( λ ; t ) exp g ( λ ; t ) [ 0 , t ) exp g ( λ ; ) 1 φ f 1 + λ Δ g + f Δ g ( 1 + λ Δ g ) 2 d μ g , + ( v 0 λ x 0 ) φ ( t ) exp g ( λ ; t ) + φ ( t ) exp g ( λ ; t ) [ 0 , t ) exp g ( λ ; ) 1 f 1 + λ Δ g d μ g .

Several numerical examples can be found in [2, Section 6], where the authors present an application to the Stieltjes harmonic oscillator, as well as an example in which a resonance effect appears.

5.2 An example with variable coefficients

In this section, we show how the theory developed above applies to the following problem:

v″_g(t) + P(t) v′_g(t) = f(t), t ∈ [0, T], (5.8)
v(0) = x₀, v′_g(0) = v₀, (5.9)

where x₀, v₀ ∈ ℂ and P, f ∈ BC_g([0, T]; ℂ) are such that 1 − P(t)Δg(t) ≠ 0 for every t ∈ [0, T] ∩ D_g. Observe that, in this case, y₁(t) = 1, t ∈ [0, T], is clearly a solution of the homogeneous equation v″_g(t) + P(t) v′_g(t) = 0, t ∈ [0, T]. Therefore, we can use Theorem 4.1 to obtain another linearly independent solution of the homogeneous equation, y₂ = v y₁, with

v(t) = ∫_{[0,t)} exp_g(−P; s) dμ_g(s), t ∈ [0, T].

Thus, the solution of the initial value problem (5.8)–(5.9) can be obtained thanks to Theorem 4.3, which yields

v(t) = (x₀(y₂)′_g(0) − v₀y₂(0))/W_g(y₁, y₂)(0) · y₁(t) + (v₀y₁(0) − x₀(y₁)′_g(0))/W_g(y₁, y₂)(0) · y₂(t) − y₁(t) ∫_{[0,t)} (y₂ + (y₂)′_g Δg) f / W_g(y₁, y₂) dμ_g + y₂(t) ∫_{[0,t)} (y₁ + (y₁)′_g Δg) f / W_g(y₁, y₂) dμ_g, t ∈ [0, T].

In this case, we have that for t [ 0 , T ] ,

W_g(y₁, y₂)(t) = y₁(t)(y₂)′_g(t) + y₁(t)(y₂)″_g(t)Δg(t) = exp_g(−P; t) − P(t) exp_g(−P; t)Δg(t) = exp_g(−P; t)(1 − P(t)Δg(t)) ≠ 0,

and as a consequence,

(y₂(t) + (y₂)′_g(t)Δg(t))/W_g(y₁, y₂)(t) = (∫_{[0,t)} exp_g(−P; s) dμ_g(s) + exp_g(−P; t)Δg(t)) / (exp_g(−P; t)(1 − P(t)Δg(t))), (y₁(t) + (y₁)′_g(t)Δg(t))/W_g(y₁, y₂)(t) = 1 / (exp_g(−P; t)(1 − P(t)Δg(t))).

Therefore, for each t [ 0 , T ] ,

v(t) = x₀ + v₀ ∫_{[0,t)} exp_g(−P; s) dμ_g(s) − ∫_{[0,t)} (∫_{[0,s)} exp_g(−P; u) dμ_g(u)) exp_g(−P; s)⁻¹ f(s)/(1 − P(s)Δg(s)) dμ_g(s) − ∫_{[0,t)} f(s)Δg(s)/(1 − P(s)Δg(s)) dμ_g(s) + (∫_{[0,t)} exp_g(−P; s) dμ_g(s)) (∫_{[0,t)} exp_g(−P; s)⁻¹ f(s)/(1 − P(s)Δg(s)) dμ_g(s)).

Using integration by parts (Lemma 2.13), we have that for each t [ 0 , T ] ,

∫_{[0,t)} (∫_{[0,s)} exp_g(−P; u) dμ_g(u)) exp_g(−P; s)⁻¹ f(s)/(1 − P(s)Δg(s)) dμ_g(s) = −∫_{[0,t)} exp_g(−P; s) (∫_{[0,s)} exp_g(−P; u)⁻¹ f(u)/(1 − P(u)Δg(u)) dμ_g(u)) dμ_g(s) − ∫_{[0,t)} exp_g(−P; s) exp_g(−P; s)⁻¹ f(s)Δg(s)/(1 − P(s)Δg(s)) dμ_g(s) + (∫_{[0,t)} exp_g(−P; s) dμ_g(s)) (∫_{[0,t)} exp_g(−P; s)⁻¹ f(s)/(1 − P(s)Δg(s)) dμ_g(s)).

Thus, for each t [ 0 , T ] ,

v(t) = x₀ + v₀ ∫_{[0,t)} exp_g(−P; s) dμ_g(s) + ∫_{[0,t)} exp_g(−P; s) (∫_{[0,s)} exp_g(−P; u)⁻¹ f(u)/(1 − P(u)Δg(u)) dμ_g(u)) dμ_g(s).

Note that the previous formula is the same as the one obtained by introducing the variable w = v′_g and using [2, Proposition 4.12]. Indeed, w satisfies the initial value problem

w′_g(t) + P(t) w(t) = f(t), t ∈ [0, T], w(0) = v₀.

Therefore, by [2, Proposition 4.12],

w(t) = v₀ exp_g(−P; t) + exp_g(−P; t) ∫_{[0,t)} exp_g(−P; s)⁻¹ f(s)/(1 − P(s)Δg(s)) dμ_g(s), t ∈ [0, T],

and, as a consequence, for each t [ 0 , T ] ,

v(t) = x₀ + ∫_{[0,t)} w(s) dμ_g(s) = x₀ + v₀ ∫_{[0,t)} exp_g(−P; s) dμ_g(s) + ∫_{[0,t)} exp_g(−P; s) (∫_{[0,s)} exp_g(−P; u)⁻¹ f(u)/(1 − P(u)Δg(u)) dμ_g(u)) dμ_g(s).
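As a sanity check of the formula above (ours, not part of the original text), consider the classical case g(t) = t, so that D_g = ∅, Δg ≡ 0, μ_g is the Lebesgue measure, and the g-derivative is the usual derivative. For constant P and f the integrals admit closed forms, and the identity v″ + Pv′ = f can be verified by finite differences. The sample values of P, f, x₀, v₀ below are arbitrary:

```python
import math

# Sketch (our own check): with g(t) = t, the solution formula above reduces to
#   v(t) = x0 + v0 * int_0^t e^{-P s} ds
#        + int_0^t e^{-P s} * (int_0^s e^{P u} f du) ds,
# which, for constant P and f, can be evaluated in closed form.

P, f, x0, v0 = 0.7, 1.3, 0.5, -0.2   # arbitrary sample data

def v(t):
    # v0 * int_0^t e^{-P s} ds = v0 * (1 - e^{-P t}) / P
    term1 = v0 * (1 - math.exp(-P * t)) / P
    # inner integral: int_0^s e^{P u} f du = f * (e^{P s} - 1) / P, so the
    # outer integral equals (f / P) * (t - (1 - e^{-P t}) / P)
    term2 = (f / P) * (t - (1 - math.exp(-P * t)) / P)
    return x0 + term1 + term2

# verify v'' + P v' = f by central finite differences
h = 1e-5
for t in (0.3, 1.0, 2.5):
    v1 = (v(t + h) - v(t - h)) / (2 * h)          # approximates v'(t)
    v2 = (v(t + h) - 2 * v(t) + v(t - h)) / h**2  # approximates v''(t)
    assert abs(v2 + P * v1 - f) < 1e-4
assert abs(v(0) - x0) < 1e-12
```

The same check can be repeated for non-constant P by replacing the closed forms with numerical quadrature.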

5.3 One-dimensional linear Helmholtz equation with piecewise-constant coefficients

In the two previous examples, we have seen how the theory we have developed allows us to recover the same solution as the one obtained using order-reduction methods. In the following example, we will see that the techniques developed throughout this work can be used to study the one-dimensional linear Helmholtz equation with piecewise-constant coefficients given by

v″_g(t) + w₀(t)² v(t) = f(t), t ∈ [0, T], (5.10)
v(0) = x₀, v′_g(0) = v₀, (5.11)

where x₀, v₀ ∈ ℂ, f ∈ BC_g([0, T]; ℂ), and

w₀ : t ∈ [0, T] ↦ w₀(t) = w₁ if t ∈ [0, t₁], w₂ if t ∈ (t₁, T],

for some t₁ ∈ [0, T] ∩ D_g and w₁, w₂ ∈ ℝ. This equation, in the case of the usual derivative, was studied in [15] in the context of inverse scattering problems. Observe that

(5.12) (1 ± i w₁Δg(t))(1 ± i w₂Δg(t)) ≠ 0, t ∈ [0, T] ∩ D_g.

Hence, w₀ ∈ BC_g([0, T]; ℂ), and the maps exp_g(±i w_k; ·) ∈ BC_g([0, T]; ℂ), k = 1, 2, are well defined and nonzero on [0, T].

Thanks to Remark 3.10, problem (5.10)–(5.11) has a unique solution v ∈ BC²_g([0, T]; ℂ). Let us now see that we can find an explicit solution of problem (5.10)–(5.11) using the method of variation of parameters. Consider the homogeneous problem

v″_g(t) + w₀(t)² v(t) = 0, t ∈ [0, T], (5.13)
v(0) = x₀, v′_g(0) = v₀, (5.14)

and define

λ₁¹ ≔ i w₁, λ₁² ≔ −i w₁, λ₂¹ ≔ i w₂, λ₂² ≔ −i w₂.

Let us see that it is possible to obtain two linearly independent solutions of (5.13) of the form

y k ( t ) = exp g ( λ 1 k ; t ) χ [ 0 , t 1 ] ( t ) + ( α 1 k exp g ( λ 2 1 ; t ) + α 2 k exp g ( λ 2 2 ; t ) ) χ ( t 1 , T ] ( t ) , t [ 0 , T ] , k = 1 , 2 ,

for some α₁ᵏ, α₂ᵏ ∈ ℂ, k = 1, 2. We have that χ_{[0,t₁]}, χ_{(t₁,T]} ∈ BC_g([0, T]; ℂ). Thus, y_k ∈ BC_g([0, T]; ℂ). Moreover, thanks to Proposition 2.5 and the properties of the g-exponential map, we have that (y_k)″_g(t) + w₀(t)² y_k(t) = 0, t ∈ [0, T] \ {t₁}, k = 1, 2. Now, for k = 1, 2, bearing in mind that exp_g(p; t⁺) = (1 + p(t)Δg(t)) exp_g(p; t) for any t ∈ D_g, cf. [12, Remark 3.3], we have that

(y_k)′_g(t₁) = (y_k(t₁⁺) − y_k(t₁))/Δg(t₁) = (α₁ᵏ(1 + λ₂¹Δg(t₁)) exp_g(λ₂¹; t₁) + α₂ᵏ(1 + λ₂²Δg(t₁)) exp_g(λ₂²; t₁) − exp_g(λ₁ᵏ; t₁)) / Δg(t₁).

Therefore, if we want to ensure that the first g-derivative is g-continuous at t = t₁, it must be satisfied that lim_{t→t₁⁻} (y_k)′_g(t) = (y_k)′_g(t₁), k = 1, 2, see Proposition 2.10. In particular, since the g-exponential function is g-continuous, it is left-continuous and, thus, for k = 1, 2, we require that

λ₁ᵏ exp_g(λ₁ᵏ; t₁) = (α₁ᵏ(1 + λ₂¹Δg(t₁)) exp_g(λ₂¹; t₁) + α₂ᵏ(1 + λ₂²Δg(t₁)) exp_g(λ₂²; t₁) − exp_g(λ₁ᵏ; t₁)) / Δg(t₁),

or, equivalently,

(5.15) ( 1 + λ 1 k Δ g ( t 1 ) ) exp g ( λ 1 k ; t 1 ) = α 1 k exp g ( λ 2 1 ; t 1 ) ( 1 + λ 2 1 Δ g ( t 1 ) ) + α 2 k exp g ( λ 2 2 ; t 1 ) ( 1 + λ 2 2 Δ g ( t 1 ) ) .

In the case in which (5.15) is fulfilled, we have that for k = 1 , 2 and t [ 0 , T ] ,

( y k ) g ( t ) = λ 1 k exp g ( λ 1 k ; t ) χ [ 0 , t 1 ] ( t ) + ( α 1 k λ 2 1 exp g ( λ 2 1 ; t ) + α 2 k λ 2 2 exp g ( λ 2 2 ; t ) ) χ ( t 1 , T ] ( t ) .

Observe that (y_k)′_g ∈ BC_g([0, T]; ℂ).

In a similar fashion, for k = 1 , 2 , we compute the second g -derivative at the point t = t 1 ,

(y_k)″_g(t₁) = ((y_k)′_g(t₁⁺) − (y_k)′_g(t₁))/Δg(t₁) = (α₁ᵏ(1 + λ₂¹Δg(t₁)) λ₂¹ exp_g(λ₂¹; t₁) + α₂ᵏ(1 + λ₂²Δg(t₁)) λ₂² exp_g(λ₂²; t₁) − λ₁ᵏ exp_g(λ₁ᵏ; t₁)) / Δg(t₁).

Reasoning analogously to the previous case, for k = 1 , 2 , in order for the second g -derivative of y k to be g -continuous at the point t = t 1 , it must be satisfied that

(λ₁ᵏ)² exp_g(λ₁ᵏ; t₁) = (−λ₁ᵏ exp_g(λ₁ᵏ; t₁) + α₁ᵏ λ₂¹ exp_g(λ₂¹; t₁)(1 + λ₂¹Δg(t₁)) + α₂ᵏ λ₂² exp_g(λ₂²; t₁)(1 + λ₂²Δg(t₁))) / Δg(t₁).

Again, for k = 1 , 2 , this is equivalent to

(5.16) ( 1 + λ 1 k Δ g ( t 1 ) ) λ 1 k exp g ( λ 1 k ; t 1 ) = α 1 k λ 2 1 exp g ( λ 2 1 ; t 1 ) ( 1 + λ 2 1 Δ g ( t 1 ) ) + α 2 k λ 2 2 exp g ( λ 2 2 ; t 1 ) ( 1 + λ 2 2 Δ g ( t 1 ) ) ,

so, in the case in which (5.16) is fulfilled, we obtain, for t [ 0 , T ] ,

( y k ) g ( t ) = ( λ 1 k ) 2 exp g ( λ 1 k ; t ) χ [ 0 , t 1 ] ( t ) + ( α 1 k ( λ 2 1 ) 2 exp g ( λ 2 1 ; t ) + α 2 k ( λ 2 2 ) 2 exp g ( λ 2 2 ; t ) ) χ ( t 1 , T ] ( t ) .

Observe that (y_k)″_g ∈ BC_g([0, T]; ℂ).

Conditions (5.15) and (5.16) give us the following linear system of equations:

(5.17) α₁ᵏ exp_g(λ₂¹; t₁)(1 + λ₂¹Δg(t₁)) + α₂ᵏ exp_g(λ₂²; t₁)(1 + λ₂²Δg(t₁)) = (1 + λ₁ᵏΔg(t₁)) exp_g(λ₁ᵏ; t₁), α₁ᵏ λ₂¹ exp_g(λ₂¹; t₁)(1 + λ₂¹Δg(t₁)) + α₂ᵏ λ₂² exp_g(λ₂²; t₁)(1 + λ₂²Δg(t₁)) = λ₁ᵏ(1 + λ₁ᵏΔg(t₁)) exp_g(λ₁ᵏ; t₁).

Note that, thanks to (5.12), the determinant of the matrix A ∈ M₂ₓ₂(ℂ) of the previous system is such that

det(A) = exp_g(λ₂¹; t₁) exp_g(λ₂²; t₁)(1 + λ₂¹Δg(t₁))(1 + λ₂²Δg(t₁))(λ₂² − λ₂¹) ≠ 0.

Therefore,

(5.18) α₁ᵏ = [exp_g(λ₁ᵏ; t₁) exp_g(λ₂²; t₁)(1 + λ₁ᵏΔg(t₁))(1 + λ₂²Δg(t₁))(λ₂² − λ₁ᵏ)] / [exp_g(λ₂¹; t₁) exp_g(λ₂²; t₁)(1 + λ₂¹Δg(t₁))(1 + λ₂²Δg(t₁))(λ₂² − λ₂¹)] = [exp_g(λ₁ᵏ; t₁)(1 + λ₁ᵏΔg(t₁))(λ₂² − λ₁ᵏ)] / [exp_g(λ₂¹; t₁)(1 + λ₂¹Δg(t₁))(λ₂² − λ₂¹)],

(5.19) α₂ᵏ = [exp_g(λ₂¹; t₁) exp_g(λ₁ᵏ; t₁)(1 + λ₂¹Δg(t₁))(1 + λ₁ᵏΔg(t₁))(λ₁ᵏ − λ₂¹)] / [exp_g(λ₂¹; t₁) exp_g(λ₂²; t₁)(1 + λ₂¹Δg(t₁))(1 + λ₂²Δg(t₁))(λ₂² − λ₂¹)] = [exp_g(λ₁ᵏ; t₁)(1 + λ₁ᵏΔg(t₁))(λ₁ᵏ − λ₂¹)] / [exp_g(λ₂²; t₁)(1 + λ₂²Δg(t₁))(λ₂² − λ₂¹)].

We have the following result.

Corollary 5.4

Let t₁ ∈ [0, T] ∩ D_g, w₁, w₂ ∈ ℝ, and consider α₁ᵏ, α₂ᵏ ∈ ℂ as in (5.18) and (5.19). Then, the maps

y k = exp g ( λ 1 k ; ) χ [ 0 , t 1 ] + ( α 1 k exp g ( λ 2 1 ; ) + α 2 k exp g ( λ 2 2 ; ) ) χ ( t 1 , T ] , k = 1 , 2 ,

are two linearly independent solutions of the homogeneous equation (5.13). In particular,

(5.20) v_h(t) = (λ₁²x₀ − v₀)/(λ₁² − λ₁¹) · y₁(t) + (v₀ − λ₁¹x₀)/(λ₁² − λ₁¹) · y₂(t), t ∈ [0, T],

is the solution of the initial value problem (5.13)–(5.14).

Proof

Given the reasoning above, it is clear that y 1 , y 2 are two solutions of (5.13), so let us show that y 1 , y 2 are linearly independent. Indeed, by Lemma 3.13, we have that

W_g(y₁, y₂) = (1 + w₀²Δg²) W̃_g(y₁, y₂) = (1 + w₀²Δg²) W̃_g(y₁, y₂)(0) exp_g(−w₀²Δg; ·).

Now, given that, by definition, W̃_g(y₁, y₂) = y₁(y₂)′_g − y₂(y₁)′_g, we have, by condition (5.12), that

W_g(y₁, y₂) = (λ₁² − λ₁¹)(1 + w₀²Δg²) exp_g(−w₀²Δg; ·) ≠ 0,

so Lemma 3.6 ensures that y 1 , y 2 are linearly independent.□

We have the following result for the nonhomogeneous problem (5.10)–(5.11).

Corollary 5.5

Let f ∈ BC_g([0, T]; ℂ), t₁ ∈ [0, T] ∩ D_g, w₁, w₂ ∈ ℝ, and consider α₁ᵏ, α₂ᵏ ∈ ℂ as in (5.18) and (5.19). Then, the solution of the initial value problem (5.10)–(5.11) is given by

v = v p + v h ,

where v h is the solution of the homogeneous problem (5.13)–(5.14) given by (5.20), and

v p = v p 1 χ [ 0 , t 1 ] + v p 2 χ ( t 1 , T ] ,

is a particular solution of (5.10), with v_p¹, v_p² : [0, T] → ℂ defined as

v p 1 ( t ) = y 1 ( t ) ( λ 1 2 λ 1 1 ) [ 0 , t ) ( y 2 + ( y 2 ) g Δ g ) f ( 1 + w 0 2 Δ g 2 ) exp g ( w 0 2 Δ g ; ) d μ g + y 2 ( t ) ( λ 1 2 λ 1 1 ) [ 0 , t ) ( y 1 + ( y 1 ) g Δ g ) f ( 1 + w 0 2 Δ g 2 ) exp g ( w 0 2 Δ g ; ) d μ g ,

v p 2 ( t ) = ( α 1 1 exp g ( λ 2 1 ; t ) + α 2 1 exp g ( λ 2 2 ; t ) ) λ 1 1 λ 1 2 [ 0 , t 1 ) exp g ( λ 1 1 ; ) 1 f 1 + λ 1 1 Δ g d μ g + exp g ( λ 1 1 ; t 1 ) 1 f ( t 1 ) Δ g ( t 1 ) 1 + λ 1 1 Δ g ( t 1 ) + α 1 2 ( t 1 , t ) exp g ( λ 2 1 ; ) ( 1 + λ 2 2 Δ g ) exp g ( w 0 2 Δ g ; ) f d μ g + α 2 2 ( t 1 , t ) exp g ( λ 2 2 ; ) ( 1 + λ 2 1 Δ g ) exp g ( w 0 2 Δ g ; ) f d μ g ( α 1 2 exp g ( λ 2 1 ; t ) + α 2 2 exp g ( λ 2 2 ; t ) ) λ 1 1 λ 1 2 [ 0 , t 1 ) exp g ( λ 1 2 ; ) 1 f 1 + λ 1 2 Δ g d μ g + exp g ( λ 1 2 ; t 1 ) 1 f ( t 1 ) Δ g ( t 1 ) 1 + λ 1 2 Δ g ( t 1 ) + α 1 1 ( t 1 , t ) exp g ( λ 2 1 ; ) ( 1 + λ 2 2 Δ g ) exp g ( w 0 2 Δ g ; ) f d μ g + α 2 1 ( t 1 , t ) exp g ( λ 2 2 ; ) ( 1 + λ 2 1 Δ g ) exp g ( w 0 2 Δ g ; ) f d μ g .

Proof

First, observe that the hypotheses of Theorem 4.3 are satisfied for y 1 and y 2 given by Corollary 5.4. Hence, we have that for each t [ 0 , T ] ,

v p ( t ) = y 1 ( t ) λ 1 2 λ 1 1 [ 0 , t ) ( y 2 + ( y 2 ) g Δ g ) f ( 1 + w 0 2 Δ g 2 ) exp g ( w 0 2 Δ g ; ) d μ g + y 2 ( t ) λ 1 2 λ 1 1 [ 0 , t ) ( y 1 + ( y 1 ) g Δ g ) f ( 1 + w 0 2 Δ g 2 ) exp g ( w 0 2 Δ g ; ) d μ g

is a particular solution of (5.10). Now, observe that, for t ∈ [0, t₁],

[ 0 , t ) ( y 2 + ( y 2 ) g Δ g ) f ( 1 + w 0 2 Δ g 2 ) exp g ( w 0 2 Δ g ; ) d μ g = [ 0 , t ) exp g ( λ 1 2 ; ) ( 1 + λ 1 2 Δ g ) f ( 1 + λ 1 1 λ 1 2 Δ 2 g ) exp g ( λ 1 1 λ 1 2 Δ g ; ) d μ g = [ 0 , t ) exp g ( λ 1 2 ; ) f ( 1 + λ 1 1 Δ g ) exp g ( λ 1 1 λ 1 2 Δ g ; ) d μ g = [ 0 , t ) f 1 + λ 1 1 Δ g exp g ( λ 1 2 ; ) exp g λ 1 1 λ 1 2 Δ g 1 + λ 1 1 λ 1 2 Δ 2 g ; d μ g = [ 0 , t ) f 1 + λ 1 1 Δ g exp g λ 1 2 λ 1 1 λ 1 2 Δ g 1 + λ 1 1 λ 1 2 Δ 2 g λ 1 1 ( λ 1 2 ) 2 Δ 2 g 1 + λ 1 1 λ 1 2 Δ 2 g ; d μ g = [ 0 , t ) f 1 + λ 1 1 Δ g exp g λ 1 2 λ 1 1 λ 1 2 Δ g 1 + λ 1 1 λ 1 2 Δ 2 g ; d μ g = [ 0 , t ) f 1 + λ 1 1 Δ g exp g λ 1 1 + λ 1 1 λ 1 1 Δ g 1 λ 1 1 λ 1 1 Δ 2 g ; d μ g = [ 0 , t ) f 1 + λ 1 1 Δ g exp g λ 1 1 1 + λ 1 1 Δ g ; d μ g = [ 0 , t ) f 1 + λ 1 1 Δ g exp g ( λ 1 1 ; ) 1 d μ g .

Analogously, for t ∈ [0, t₁],

[ 0 , t ) ( y 1 + ( y 1 ) g Δ g ) f ( 1 + w 0 2 Δ g 2 ) exp g ( w 0 2 Δ g ; ) d μ g = [ 0 , t ) exp g ( λ 1 2 ; ) 1 f 1 + λ 1 2 Δ g d μ g .

Therefore, for any t ∈ [0, t₁],

v p ( t ) = exp g ( λ 1 1 ; t ) λ 1 1 λ 1 2 [ 0 , t ) exp g ( λ 1 1 ; ) 1 f 1 + λ 1 1 Δ g d μ g exp g ( λ 1 2 ; t ) λ 1 1 λ 1 2 [ 0 , t ) exp g ( λ 1 2 ; ) 1 f 1 + λ 1 2 Δ g d μ g .

On the other hand, given t ∈ (t₁, T),

v p ( t ) = y 1 ( t ) λ 1 1 λ 1 2 [ 0 , t 1 ) exp g ( λ 1 1 ; ) 1 f 1 + λ 1 1 Δ g d μ g + exp g ( λ 1 1 ; t 1 ) 1 f ( t 1 ) Δ g ( t 1 ) 1 + λ 1 1 Δ g ( t 1 ) + ( t 1 , t ) ( y 2 + ( y 2 ) g Δ g ) f ( 1 + w 0 2 Δ g 2 ) exp g ( w 0 2 Δ g ; ) d μ g y 2 ( t ) λ 1 1 λ 1 2 [ 0 , t 1 ) exp g ( λ 1 2 ; ) 1 f 1 + λ 1 2 Δ g d μ g + exp g ( λ 1 2 ; t 1 ) 1 f ( t 1 ) Δ g ( t 1 ) 1 + λ 1 2 Δ g ( t 1 ) + ( t 1 , t ) ( y 1 + ( y 1 ) g Δ g ) f ( 1 + w 0 2 Δ g 2 ) exp g ( w 0 2 Δ g ; ) d μ g .

We have that

( y 2 + ( y 2 ) g Δ g ) f ( 1 + w 0 2 Δ g 2 ) exp g ( w 0 2 Δ g ; ) = ( α 1 2 exp g ( λ 2 1 ; ) + α 2 2 exp g ( λ 2 2 ; ) ) + ( α 1 2 λ 2 1 exp g ( λ 2 1 ; ) + α 2 2 λ 2 2 exp g ( λ 2 2 ; ) ) Δ g ( 1 + w 0 2 Δ g 2 ) exp g ( w 0 2 Δ g ; ) f = α 1 2 exp g ( λ 2 1 ; ) ( 1 + λ 2 1 Δ g ) ( 1 + w 0 2 Δ g 2 ) exp g ( w 0 2 Δ g ; ) f + α 2 2 exp g ( λ 2 2 ; ) ( 1 + λ 2 2 Δ g ) ( 1 + w 0 2 Δ g 2 ) exp g ( w 0 2 Δ g ; ) f = α 1 2 exp g ( λ 2 1 ; ) ( 1 + λ 2 2 Δ g ) exp g ( w 0 2 Δ g ; ) f + α 2 2 exp g ( λ 2 2 ; ) ( 1 + λ 2 1 Δ g ) exp g ( w 0 2 Δ g ; ) f .

Analogously,

( y 1 + ( y 1 ) g Δ g ) f ( 1 + w 0 2 Δ g 2 ) exp g ( w 0 2 Δ g ; ) = α 1 1 exp g ( λ 2 1 ; ) ( 1 + λ 2 2 Δ g ) exp g ( w 0 2 Δ g ; ) f + α 2 1 exp g ( λ 2 2 ; ) ( 1 + λ 2 1 Δ g ) exp g ( w 0 2 Δ g ; ) f .

Therefore, for any t ∈ (t₁, T],

v p ( t ) = ( α 1 1 exp g ( λ 2 1 ; t ) + α 2 1 exp g ( λ 2 2 ; t ) ) λ 1 1 λ 1 2 [ 0 , t 1 ) exp g ( λ 1 1 ; ) 1 f 1 + λ 1 1 Δ g d μ g + exp g ( λ 1 1 ; t 1 ) 1 f ( t 1 ) Δ g ( t 1 ) 1 + λ 1 1 Δ g ( t 1 ) + α 1 2 ( t 1 , t ) exp g ( λ 2 1 ; ) ( 1 + λ 2 2 Δ g ) exp g ( w 0 2 Δ g ; ) f d μ g + α 2 2 ( t 1 , t ) exp g ( λ 2 2 ; ) ( 1 + λ 2 1 Δ g ) exp g ( w 0 2 Δ g ; ) f d μ g ( α 1 2 exp g ( λ 2 1 ; t ) + α 2 2 exp g ( λ 2 2 ; t ) ) λ 1 1 λ 1 2 [ 0 , t 1 ) exp g ( λ 1 2 ; ) 1 f 1 + λ 1 2 Δ g d μ g + exp g ( λ 1 2 ; t 1 ) 1 f ( t 1 ) Δ g ( t 1 ) 1 + λ 1 2 Δ g ( t 1 ) + α 1 1 ( t 1 , t ) exp g ( λ 2 1 ; ) ( 1 + λ 2 2 Δ g ) exp g ( w 0 2 Δ g ; ) f d μ g + α 2 1 ( t 1 , t ) exp g ( λ 2 2 ; ) ( 1 + λ 2 1 Δ g ) exp g ( w 0 2 Δ g ; ) f d μ g ,

which completes the proof.□

In the following example, we obtain an explicit solution of problem (5.13)–(5.14) for a particular choice of a derivator g .

Example 5.6

Let x₀, v₀ ∈ ℂ, w₁, w₂ ∈ ℝ, T > 1, and consider the derivator

g(t) = t if t ≤ 1, t + δ if t > 1,

for some δ > 0. It is clear that D_g = {1} and Δg(1) = δ. Moreover, given λ ∈ ℂ such that 1 + λδ ≠ 0, we have that

exp_g(λ; t) = e^{λt} if 0 ≤ t ≤ 1, (1 + λδ) e^{λt} if 1 < t ≤ T.
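This piecewise expression can be checked directly against the jump relation exp_g(λ; t⁺) = (1 + λΔg(t)) exp_g(λ; t) recalled earlier. The following sketch is our own illustration; the sample values of δ and λ are arbitrary:

```python
import cmath

# g(t) = t for t <= 1 and g(t) = t + delta for t > 1, so D_g = {1} and
# Delta g(1) = delta.  The g-exponential of Example 5.6 is then
#   exp_g(lambda; t) = e^{lambda t}                     for 0 <= t <= 1,
#   exp_g(lambda; t) = (1 + lambda*delta) e^{lambda t}  for 1 < t <= T.

delta = 0.3
lam = 2j                       # any lambda with 1 + lam * delta != 0

def exp_g(lam, t):
    base = cmath.exp(lam * t)
    return base if t <= 1 else (1 + lam * delta) * base

# on [0, 1] it is the ordinary exponential
assert abs(exp_g(lam, 0.5) - cmath.exp(0.5 * lam)) < 1e-12

# jump relation at t = 1:
# exp_g(lambda; 1+) = (1 + lam * Delta g(1)) * exp_g(lambda; 1)
left = exp_g(lam, 1.0)           # value at t = 1 (left-continuous)
right = exp_g(lam, 1.0 + 1e-12)  # approximates the right-hand limit
assert abs(right - (1 + lam * delta) * left) < 1e-9
```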

In this case, the solution of system (5.17) is

α₁¹ = e^{i(w₁−w₂)} (w₁ + w₂)(1 + iδw₁) / (2w₂(1 + iδw₂)), α₂¹ = e^{i(w₁+w₂)} (w₂ − w₁)(1 + iδw₁) / (2w₂(1 − iδw₂)), α₁² = e^{−i(w₁+w₂)} (w₂ − w₁)(1 − iδw₁) / (2w₂(1 + iδw₂)), α₂² = e^{i(w₂−w₁)} (w₁ + w₂)(1 − iδw₁) / (2w₂(1 − iδw₂)).
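These closed forms can be verified numerically against system (5.17): here t₁ = 1, Δg(t₁) = δ, exp_g(λ; t₁) = e^λ, λ₁¹ = iw₁, λ₁² = −iw₁, λ₂¹ = iw₂, and λ₂² = −iw₂. The following sketch is our own check; the values of w₁, w₂, δ are arbitrary samples:

```python
import cmath

# Check (our own) that the alpha coefficients solve the 2x2 system (5.17)
# for the derivator of Example 5.6.

w1, w2, delta = 1.0, 2.0, 0.3      # arbitrary sample data
l11, l12, l21, l22 = 1j * w1, -1j * w1, 1j * w2, -1j * w2
E = cmath.exp                      # exp_g(lambda; t1) with t1 = 1

a11 = cmath.exp(1j * (w1 - w2)) * (w1 + w2) * (1 + 1j * delta * w1) / (2 * w2 * (1 + 1j * delta * w2))
a21 = cmath.exp(1j * (w1 + w2)) * (w2 - w1) * (1 + 1j * delta * w1) / (2 * w2 * (1 - 1j * delta * w2))
a12 = cmath.exp(-1j * (w1 + w2)) * (w2 - w1) * (1 - 1j * delta * w1) / (2 * w2 * (1 + 1j * delta * w2))
a22 = cmath.exp(1j * (w2 - w1)) * (w1 + w2) * (1 - 1j * delta * w1) / (2 * w2 * (1 - 1j * delta * w2))

for l1k, a1, a2 in ((l11, a11, a21), (l12, a12, a22)):
    rhs1 = (1 + l1k * delta) * E(l1k)        # right-hand side, first equation
    rhs2 = l1k * rhs1                        # right-hand side, second equation
    lhs1 = a1 * E(l21) * (1 + l21 * delta) + a2 * E(l22) * (1 + l22 * delta)
    lhs2 = a1 * l21 * E(l21) * (1 + l21 * delta) + a2 * l22 * E(l22) * (1 + l22 * delta)
    assert abs(lhs1 - rhs1) < 1e-12
    assert abs(lhs2 - rhs2) < 1e-12
```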

Therefore, the solution of the corresponding problem (5.13)–(5.14) is given by the following expression:

v_h(t) = (w₁x₀ − iv₀)/(2w₁) [ e^{iw₁t} χ_{[0,1]}(t) + ( e^{i(w₁−w₂)}(w₁+w₂)(1+iδw₁)/(2w₂) e^{iw₂t} + e^{i(w₁+w₂)}(w₂−w₁)(1+iδw₁)/(2w₂) e^{−iw₂t} ) χ_{(1,T]}(t) ] + (w₁x₀ + iv₀)/(2w₁) [ e^{−iw₁t} χ_{[0,1]}(t) + ( e^{−i(w₁+w₂)}(w₂−w₁)(1−iδw₁)/(2w₂) e^{iw₂t} + e^{i(w₂−w₁)}(w₁+w₂)(1−iδw₁)/(2w₂) e^{−iw₂t} ) χ_{(1,T]}(t) ] = [ (w₁x₀ − iv₀)/(2w₁) e^{iw₁t} + (w₁x₀ + iv₀)/(2w₁) e^{−iw₁t} ] χ_{[0,1]}(t) + [ (w₁x₀ − iv₀)/(2w₁) e^{i(w₁−w₂)}(w₁+w₂)(1+iδw₁)/(2w₂) e^{iw₂t} + (w₁x₀ + iv₀)/(2w₁) e^{i(w₂−w₁)}(w₁+w₂)(1−iδw₁)/(2w₂) e^{−iw₂t} ] χ_{(1,T]}(t) + [ (w₁x₀ − iv₀)/(2w₁) e^{i(w₁+w₂)}(w₂−w₁)(1+iδw₁)/(2w₂) e^{−iw₂t} + (w₁x₀ + iv₀)/(2w₁) e^{−i(w₁+w₂)}(w₂−w₁)(1−iδw₁)/(2w₂) e^{iw₂t} ] χ_{(1,T]}(t),

for t ∈ [0, T]. Observe that, given t ∈ [0, T],

ṽ_h(t) ≔ lim_{δ→0} v_h(t) = [ (w₁x₀ − iv₀)/(2w₁) e^{iw₁t} + (w₁x₀ + iv₀)/(2w₁) e^{−iw₁t} ] χ_{[0,1]}(t) + [ (w₁x₀ − iv₀)/(2w₁) e^{i(w₁−w₂)}(w₁+w₂)/(2w₂) e^{iw₂t} + (w₁x₀ + iv₀)/(2w₁) e^{i(w₂−w₁)}(w₁+w₂)/(2w₂) e^{−iw₂t} ] χ_{(1,T]}(t) + [ (w₁x₀ − iv₀)/(2w₁) e^{i(w₁+w₂)}(w₂−w₁)/(2w₂) e^{−iw₂t} + (w₁x₀ + iv₀)/(2w₁) e^{−i(w₁+w₂)}(w₂−w₁)/(2w₂) e^{iw₂t} ] χ_{(1,T]}(t),

is such that ṽ_h ∈ {v ∈ C¹([0, T]; ℝ) : v′ ∈ AC([0, T]; ℝ)} is the solution of the classical one-dimensional linear Helmholtz equation with piecewise-constant coefficients

(ṽ_h)′(t) = v₀ − ∫_{[0,t]} w₀² ṽ_h dm, t ∈ [0, T],

where m is the Lebesgue measure on [0, T] and ṽ_h(0) = x₀. Observe that, for δ = 0, the g-derivative coincides with the usual derivative. It is also remarkable that ṽ_h and (ṽ_h)′ are continuous functions, whereas the second derivative (ṽ_h)″ undergoes jumps at the points of discontinuity of w₀ [16]. This behavior can be seen in Figures 1, 2, and 3.
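The limit solution ṽ_h can also be checked numerically: on [0, 1] it must reduce to the classical solution x₀ cos(w₁t) + (v₀/w₁) sin(w₁t) of v″ + w₁²v = 0, and it must be continuous across t = 1. A sketch of this check (ours, with arbitrary sample data) follows:

```python
import cmath
import math

# Our own sanity check of the delta -> 0 limit in Example 5.6.
w1, w2, x0, v0 = 1.0, 2.0, 0.5, -0.3   # arbitrary sample data
A = (w1 * x0 - 1j * v0) / (2 * w1)
B = (w1 * x0 + 1j * v0) / (2 * w1)

def v_tilde(t):
    if t <= 1:
        return A * cmath.exp(1j * w1 * t) + B * cmath.exp(-1j * w1 * t)
    c = (w1 + w2) / (2 * w2)
    d = (w2 - w1) / (2 * w2)
    return (A * cmath.exp(1j * (w1 - w2)) * c * cmath.exp(1j * w2 * t)
            + B * cmath.exp(1j * (w2 - w1)) * c * cmath.exp(-1j * w2 * t)
            + A * cmath.exp(1j * (w1 + w2)) * d * cmath.exp(-1j * w2 * t)
            + B * cmath.exp(-1j * (w1 + w2)) * d * cmath.exp(1j * w2 * t))

# on [0, 1] the limit coincides with the classical solution
for t in (0.0, 0.4, 1.0):
    classical = x0 * math.cos(w1 * t) + (v0 / w1) * math.sin(w1 * t)
    assert abs(v_tilde(t) - classical) < 1e-12

# continuity across the former jump point t = 1
assert abs(v_tilde(1.0) - v_tilde(1.0 + 1e-9)) < 1e-7
```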

Figure 1

Solution of the homogeneous problem (5.13)–(5.14) for different values of δ. Observe that for δ = 0, the g-continuity is equivalent to the continuity in the usual sense.

Figure 2

First g-derivative of the solution of (5.13)–(5.14) for different values of δ. Observe that for δ = 0, the g-derivative is equivalent to the usual derivative and the first derivative of the solution is continuous in the usual sense.

Figure 3

Second g-derivative of the solution of (5.13)–(5.14) for different values of δ. Observe that the second g-derivative is g-continuous for all δ > 0, whereas for δ = 0, the second derivative (ṽ_h)″ = (ṽ_h)″_g has a discontinuity point at t = 1.

Remark 5.7

This last example suggests that there is some form of continuity of the solution of problem (5.13)–(5.14) with respect to g. Indeed, this is in general the case. Let B be the Borel σ-algebra associated with the usual topology of [a, b] ⊂ ℝ and let M([a, b], B) be the space of all signed measures of bounded variation defined on B. The total variation ‖·‖ is a norm on M([a, b], B) with which this space is a Banach space [17, Theorem 4.6.1]. If we now fix a bounded Borel function f : [a, b] → ℂ, then the map T : (M([a, b], B), ‖·‖) → ℂ such that T(μ) = ∫ f dμ is a bounded linear functional. Indeed, for every μ ∈ M([a, b], B),

|T(μ)| = |∫ f dμ| ≤ ∫ |f| d|μ| ≤ ‖f‖_∞ ‖μ‖.

Finally, every function of bounded variation (for instance, g) corresponds to a unique signed measure of bounded variation μ_g such that μ_g([c, d)) = g(d) − g(c) and ‖μ_g‖ = Var(g, [a, b]), the total variation of g on [a, b]. Furthermore, g ↦ |g(a)| + Var(g, [a, b]) is a norm on the space of functions of bounded variation, and we conclude that the map T : g ↦ ∫ f dμ_g is continuous with respect to that norm. In this way, if we have a sequence of derivators (g_n)_{n∈ℕ} converging to a derivator g in the bounded variation norm (as would be the case in Example 5.6 for different values of δ), then the sequence of solutions (u_n)_{n∈ℕ} of the respective problems converges to the solution u of the problem with g.
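The bound |T(μ)| ≤ ‖f‖_∞‖μ‖ behind this remark can be illustrated with a toy computation (ours, with hypothetical sample data): for two purely atomic derivators sharing the same jump points, the corresponding integrals differ by at most ‖f‖_∞ times the total variation of the difference of the jump sizes:

```python
import math

# Toy illustration (our own) of |T(mu_g) - T(mu_h)| <= ||f||_inf * ||mu_g - mu_h||
# for purely atomic measures with common jump points.

jumps = [0.2, 0.5, 0.8]          # jump points in [0, 1]
sizes_g = [0.3, 0.1, 0.4]        # jump sizes of a derivator g
sizes_h = [0.25, 0.15, 0.35]     # jump sizes of a nearby derivator h
f = lambda t: math.sin(3 * t) + 1.0   # a bounded Borel function

int_g = sum(f(t) * d for t, d in zip(jumps, sizes_g))   # T(mu_g)
int_h = sum(f(t) * d for t, d in zip(jumps, sizes_h))   # T(mu_h)
f_sup = 2.0                                             # ||f||_inf <= 2 on [0, 1]
var_diff = sum(abs(a - b) for a, b in zip(sizes_g, sizes_h))

assert abs(int_g - int_h) <= f_sup * var_diff
```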

  1. Funding information: Francisco J. Fernández and F. Adrián F. Tojo were supported by a research grant of the Agencia Estatal de Investigación, Spain, Grant PID2020-113275GB-I00 funded by MCIN/AEI/10.13039/501100011033 and by "ERDF A way of making Europe", by the "European Union" and Xunta de Galicia, grant ED431C 2023/12 for Competitive Reference Research Groups (2023-2026). Ignacio Márquez Albés was funded by the Czech Academy of Sciences (RVO 67985840).

  2. Author contributions: All authors have accepted responsibility for the entire content of this manuscript and consented to its submission to the journal, written and reviewed all the results and approved the final version of the manuscript.

  3. Conflict of interest: The authors state no conflict of interest.

References

[1] J. M. Hoene-Wroński, Réfutation de la Théorie des Fonctions Analytiques de Lagrange, Blankenstein, Paris, 1812.

[2] F. J. Fernández, I. Márquez Albés, and F. A. F. Tojo, On first and second-order linear Stieltjes differential equations, J. Math. Anal. Appl. 511 (2022), no. 1, 126010, DOI: 10.1016/j.jmaa.2022.126010.

[3] F. J. Fernández, I. Márquez Albés, and F. A. F. Tojo, Consequences of the Product Rule in Stieltjes Differentiability, 2022, DOI: 10.48550/arXiv.2205.10090.

[4] M. Frigon and R. López Pouso, Theory and applications of first-order systems of Stieltjes differential equations, Adv. Nonlinear Anal. 6 (2017), no. 1, 13–36, DOI: 10.1515/anona-2015-0158.

[5] M. Bohner and A. Peterson, Dynamic Equations on Time Scales. An Introduction with Applications, Birkhäuser, Basel, 2001, DOI: 10.1007/978-1-4612-0201-1.

[6] F. Larivière, Sur les solutions d'équations différentielles de Stieltjes du premier et du deuxième ordre, Master's thesis, Université de Montréal, 2019.

[7] W. Rudin, Real and Complex Analysis, McGraw-Hill, Singapore, 1987.

[8] E. Schechter, Handbook of Analysis and Its Foundations, Academic Press, San Diego, California, 1997, DOI: 10.1016/B978-012622760-4/50002-9.

[9] F. E. Burk, A Garden of Integrals, The Mathematical Association of America, Washington D.C., 2007, DOI: 10.7135/UPO9781614442097.

[10] R. López Pouso and A. Rodríguez, A new unification of continuous, discrete, and impulsive calculus through Stieltjes derivatives, Real Anal. Exchange 40 (2014/15), no. 2, 319–353, DOI: 10.14321/realanalexch.40.2.0319.

[11] R. López Pouso and I. Márquez Albés, Resolution methods for mathematical models based on differential equations with Stieltjes derivatives, Electron. J. Qual. Theory Differ. Equ. 72 (2019), 1–15, DOI: 10.14232/ejqtde.2019.1.72.

[12] I. Márquez Albés, Notes on the linear equation with Stieltjes derivatives, Electron. J. Qual. Theory Differ. Equ. 42 (2021), 1–18, DOI: 10.14232/ejqtde.2021.1.42.

[13] I. Márquez Albés, Differential Problems with Stieltjes Derivatives and Applications, Ph.D. thesis, Universidade de Santiago de Compostela, 2021.

[14] G. A. Monteiro, A. Slavík, and M. Tvrdý, Kurzweil-Stieltjes Integral: Theory and Applications, World Scientific, Singapore, 2018.

[15] S. Bugarija, P. C. Gibson, G. Hu, P. Li, and Y. Zhao, Inverse scattering for the one-dimensional Helmholtz equation with piecewise constant wave speed, Inverse Problems 36 (2020), no. 7, 075008, DOI: 10.1088/1361-6420/ab89c4.

[16] G. Baruch, G. Fibich, and S. Tsynkov, High-order numerical method for the nonlinear Helmholtz equation with material discontinuities in one space dimension, J. Comput. Phys. 227 (2007), no. 1, 820–850, DOI: 10.1016/j.jcp.2007.08.022.

[17] V. I. Bogachev, Measure Theory, 1st ed., vol. 1, Springer, Berlin, 2007, DOI: 10.1007/978-3-540-34514-5.

Received: 2024-02-05
Revised: 2024-04-18
Accepted: 2024-04-30
Published Online: 2024-06-19

© 2024 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
