
Averaging method in optimal control problems for integro-differential equations

  • Roksolana Lakhva, Roza Uteshova, Oleksandr Stanzhytskyi and Viktoria Mogylova
Published/Copyright: June 13, 2025

Abstract

The averaging method is applied to the study of optimal control problems for systems of integro-differential equations with rapidly oscillating coefficients and a small parameter. The original problem is associated with an averaged optimal control problem, formulated for a system of ordinary differential equations, which significantly simplifies the analysis. It is proven that as the small parameter tends to zero, the quality criterion, optimal control, and optimal trajectory of the original problem converge to those of the averaged problem.

MSC 2010: 49J15; 49J21; 34K11; 34K33

1 Introduction

In this study, we apply the averaging method to optimal control problems for systems of integro-differential equations with rapidly oscillating coefficients and a small parameter. The averaging method is one of the most widely used and effective approaches for analyzing nonlinear dynamical systems. Originally proposed by Krylov and Bogolyubov for ordinary differential equations, this method was later developed and applied to various problems. In particular, it has been employed in the context of integro-differential systems in [1,2], and further extended to boundary value problems for such systems in [3].

Moreover, the averaging method has been effectively employed in the study of optimal control problems. The central idea is to replace the original control problem with a simpler averaged problem, whose optimal solutions are “almost” optimal for the original problem. For systems of ordinary differential equations, this approach was developed in [4,5]. For impulsive optimal control systems with both finite and infinite horizons, it was applied in [6,7]. Optimal control problems using the averaging method for systems of functional-differential equations were studied in [8].

In this work, we apply the averaging method to the analysis of optimal control problems for systems of integro-differential equations. Such equations arise as mathematical models for various processes in the natural sciences, including population dynamics [9], chemical kinetics, and fluid dynamics [10,11]. We consider both a nonlinear optimal control problem for a Volterra-type integro-differential system and a linear control problem. A key role in our study is played by lemmas on the averaging of systems of integro-differential equations, where the right-hand sides depend on a control functional parameter. The proximity estimates obtained for exact and averaged solutions are uniform with respect to control functions from a set of admissible controls. This allows us to establish the closeness between the optimal solutions of the exact and averaged problems. Notably, the averaged system is already a system of autonomous ordinary differential equations, which significantly simplifies its study in the context of optimal control.

The paper is organized as follows. In Section 2, we give a rigorous formulation of the problem in both the nonlinear and linear cases and state the main results of the work. Section 3 serves an auxiliary purpose, proving the averaging lemmas mentioned above. The main results are proved in Section 4, and Section 5 provides examples illustrating them.

2 Problem statement

2.1 Optimal control problem, nonlinear with respect to the control, for a system of integro-differential equations with rapidly oscillating parameters

We consider the nonlinear control problem for a system of integro-differential equations with rapidly oscillating parameters:

(1) $\dot{x}_{\varepsilon}=X\Big(\frac{t}{\varepsilon},\,x_{\varepsilon}(t),\,\int_{0}^{t}\varphi(t,s,x_{\varepsilon}(s))\,ds,\,u(t)\Big),\qquad x_{\varepsilon}(0)=x_{0},$

with the quality criterion

(2) $J_{\varepsilon}[u]=\int_{0}^{T}L(t,x_{\varepsilon}(t),u(t))\,dt+\Phi(x_{\varepsilon}(T))\to\inf,$

over the interval $[0,T]$, where $\varepsilon>0$ is a small parameter, $T>0$ is a given constant, $x$ is the state vector in $\mathbb{R}^{d}$, $u(t)$ is the $m$-dimensional control vector such that $u(t)\in W\subset\mathbb{R}^{m}$, $d,m=1,2,3,\ldots$, and $\Phi(x)$ is a given function.

The function $x_{\varepsilon}(t,u)$ denotes the solution of the Cauchy problem (1) corresponding to the control $u(t)$. For simplicity of notation, in what follows we omit the explicit dependence on $u$ and $\varepsilon$ and denote this solution by $x(t)$.

We assume that there exists a function $X_{0}(x,u)$ such that for all $x\in\mathbb{R}^{d}$ and $u\in W$, the following limit exists uniformly:

(3) $\lim_{\varepsilon\to 0}\left\|\int_{0}^{t}\Big[X\Big(\frac{\tau}{\varepsilon},x,\varphi_{1}(\tau,x),u\Big)-X_{0}(x,u)\Big]d\tau\right\|=0,$

where

$\varphi_{1}(t,x)=\int_{0}^{t}\varphi(t,s,x)\,ds,\qquad t,s\in[0,T],\ x\in\mathbb{R}^{d}.$

The optimal control problems (1) and (2) with rapidly oscillating coefficients correspond to a simpler optimal control problem

(4) $\dot{\xi}=X_{0}(\xi,u(t)),\qquad \xi(0)=x_{0},$

with the quality criterion

(5) $J_{0}[u]=\int_{0}^{T}L(t,\xi(t),u(t))\,dt+\Phi(\xi(T))\to\inf.$
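To illustrate how the averaged right-hand side $X_{0}$ in (3)–(4) is identified in practice, consider the following minimal example (ours, not taken from the paper): let $X(t,x,y,u)=\sin(t)\,x+u$ and $\varphi\equiv 0$, so that $\varphi_{1}\equiv 0$. Then

$\int_{0}^{t}\Big[X\Big(\frac{\tau}{\varepsilon},x,\varphi_{1}(\tau,x),u\Big)-u\Big]d\tau=x\int_{0}^{t}\sin\frac{\tau}{\varepsilon}\,d\tau=x\,\varepsilon\Big(1-\cos\frac{t}{\varepsilon}\Big)\to 0,\qquad \varepsilon\to 0,$

uniformly in $t\in[0,T]$ and $u$, and uniformly in $x$ on bounded sets. Hence $X_{0}(x,u)=u$, and the averaged system (4) reduces to $\dot{\xi}=u(t)$. Example 2 in Section 5 carries out the same kind of computation for a genuinely integro-differential right-hand side.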

For problems (1) and (2), we assume that the following conditions hold:

(C1) The admissible controls are $m$-dimensional vector functions $u(\cdot)$ such that $u(\cdot)\in U$, where $U$ is a compact set in $L_{2}(0,T)$.

(C2) The function $X(t,x,y,u)$ is defined and jointly continuous in all its variables in the domain $Q_{0}=\{t\ge 0,\ x\in\mathbb{R}^{d},\ y\in\mathbb{R}^{n},\ u\in W\}$ and satisfies:

  (a) a linear growth condition with respect to $x,y$ in $Q_{0}$; that is, there exists a constant $M>0$ such that

    $\|X(t,x,y,u)\|\le M(1+\|x\|+\|y\|)$

    for any $(t,x,y,u)\in Q_{0}$;

  (b) a Lipschitz condition with constant $\lambda$ in $Q_{0}$; that is,

    $\|X(t,x,y,u)-X(t,x_{1},y_{1},u_{1})\|\le\lambda(\|x-x_{1}\|+\|y-y_{1}\|+\|u-u_{1}\|)$

    for all $(t,x,y,u),(t,x_{1},y_{1},u_{1})\in Q_{0}$.

(C3) The function $\varphi(t,s,x)$ is defined and continuous in the domain $Q_{1}=\{t\in[0,T],\ s\in[0,T],\ x\in\mathbb{R}^{d}\}$, takes values in $\mathbb{R}^{n}$, and satisfies the linear growth and Lipschitz conditions with respect to $x$; that is, there exists $L_{\varphi}>0$ such that

  $\|\varphi(t,s,x)\|\le L_{\varphi}(1+\|x\|)$ and $\|\varphi(t,s,x)-\varphi(t,s,x_{1})\|\le L_{\varphi}\|x-x_{1}\|$.

(C4) The limit (3) exists uniformly in $x\in\mathbb{R}^{d}$ and $u\in W$.

(C5) The function $L(t,x,u)$ is defined in the domain $Q_{2}=\{t\in[0,T],\ x\in\mathbb{R}^{d},\ u\in W\}$, and

  (a) $L(t,x,u)$ is uniformly continuous in $x\in\mathbb{R}^{d}$ with respect to $t\in[0,T]$ and $u\in W$;

  (b) $L(t,x,u)$ satisfies the Lipschitz condition with respect to $u$ in $Q_{2}$, with constant $\lambda>0$;

  (c) the function $\Phi:\mathbb{R}^{d}\to\mathbb{R}$ is continuous in $x$.

Conditions (C2) and (C3), together with Theorem 3.1 of [12] and Theorem 2.2 of [13], imply that for any admissible control $u(t)$ there exists a unique solution $x(t,u)$ of the Cauchy problem (1) on the whole interval $[0,T]$. Hence, problems (1) and (4) are well posed for all admissible controls.

The main result of this subsection is the theorem that establishes the relationship between the optimal control and the quality criteria of the exact problems (1), (2) and the averaged problems (4), (5). We set

$J_{\varepsilon}^{*}=\inf_{u(\cdot)\in U}J_{\varepsilon}[u],\qquad J_{0}^{*}=\inf_{u(\cdot)\in U}J_{0}[u].$

Theorem 2.1

Let conditions (C1)–(C5) hold. Then problems (1), (2) and (4), (5) have solutions $(x_{\varepsilon}^{*}(t),u_{\varepsilon}^{*}(t))$ and $(\xi^{*}(t),u^{*}(t))$, respectively, and

  (i) $J_{\varepsilon}^{*}\to J_{0}^{*}$ as $\varepsilon\to 0$;

  (ii) for any $\eta>0$, there exists $\varepsilon_{0}>0$ such that for $\varepsilon<\varepsilon_{0}$,

    $|J_{\varepsilon}^{*}-J_{\varepsilon}[u^{*}]|<\eta,$

    i.e., the optimal control of the averaged problem is nearly optimal for the exact problem;

  (iii) there exists a sequence $\varepsilon_{n}\to 0$, $n\to\infty$, such that

    (6) $x_{\varepsilon_{n}}^{*}(t)\to\xi^{*}(t)$ uniformly on $[0,T]$,

    and

    (7) $u_{\varepsilon_{n}}^{*}(\cdot)\to u^{*}(\cdot)$ in $L_{2}(0,T)$.

Furthermore, if the averaged problem (4), (5) has a unique solution, then convergences (6) and (7) hold as $\varepsilon\to 0$.

2.2 Optimal control problem, linear with respect to the control, for a system of integro-differential equations with rapidly oscillating parameters

We also consider a control problem with rapidly oscillating parameters that is linear with respect to the control input:

(8) $\dot{x}_{\varepsilon}(t)=f\Big(\frac{t}{\varepsilon},\,x_{\varepsilon}(t),\,\int_{0}^{t}\varphi(t,s,x_{\varepsilon}(s))\,ds\Big)+f_{1}(x_{\varepsilon}(t))u(t),\qquad x_{\varepsilon}(0)=x_{0},$

with the quality criterion

(9) $J_{\varepsilon}[u]=\int_{0}^{T}[A(t,x_{\varepsilon}(t))+B(t,u(t))]\,dt+\Phi(x_{\varepsilon}(T))\to\inf,$

over the interval $[0,T]$, where $\varepsilon>0$ is a small parameter, $T>0$ is a given constant, $x\in\mathbb{R}^{d}$ is the state vector, and $u(t)$ is the $m$-dimensional control vector belonging to a functional set specified below.

If the following limit exists uniformly with respect to $x\in\mathbb{R}^{d}$:

(10) $\lim_{\varepsilon\to 0}\left\|\int_{0}^{t}\Big[f\Big(\frac{\tau}{\varepsilon},x,\varphi_{1}(\tau,x)\Big)-f_{0}(x)\Big]d\tau\right\|=0,$

with

$\varphi_{1}(t,x)=\int_{0}^{t}\varphi(t,s,x)\,ds,\qquad t,s\in[0,T],$

then the optimal control problems (8), (9) with rapidly oscillating coefficients correspond to a simpler control problem on the interval [ 0 , T ] :

(11) $\dot{\xi}=f_{0}(\xi)+f_{1}(\xi)u(t),\qquad \xi(0)=x_{0},$

with the corresponding quality criterion

(12) $J_{0}[u]=\int_{0}^{T}[A(t,\xi(t))+B(t,u(t))]\,dt+\Phi(\xi(T))\to\inf.$

The main result establishes the convergence of the minimal value of the quality criterion, the optimal controls, and the optimal trajectories of the exact problem (8), (9) to those of the averaged problem.

We assume that the following conditions are met for problems (8) and (9):

(C6) The admissible controls are $m$-dimensional vector functions $u(\cdot)\in L_{p}((0,T);V)$, $p>1$, taking values in a closed convex set $V\subset\mathbb{R}^{m}$.

(C7) The function $f(t,x,y)$ is defined and jointly continuous in all its variables in the domain $Q_{3}=\{t\ge 0,\ x\in\mathbb{R}^{d},\ y\in\mathbb{R}^{n}\}$; the $d\times m$ matrix function $f_{1}(x)$ is defined for $x\in\mathbb{R}^{d}$, and

  (a) $f(t,x,y)$ satisfies the linear growth condition with constant $M$ in $Q_{3}$, i.e., $\|f(t,x,y)\|\le M(1+\|x\|+\|y\|)$ for all $(t,x,y)\in Q_{3}$;

  (b) $f(t,x,y)$ and $f_{1}(x)$ satisfy, with respect to $x$, the Lipschitz condition with constant $\lambda>0$ in their domains.

(C8) The function $\varphi(t,s,x)$ is defined and continuous in the domain $Q_{4}=\{t\in[0,T],\ s\in[0,T],\ x\in\mathbb{R}^{d}\}$, takes values in $\mathbb{R}^{n}$, and satisfies, with respect to $x$, the linear growth and Lipschitz conditions; that is, there exists $L_{\varphi}>0$ such that

  $\|\varphi(t,s,x)-\varphi(t,s,x_{1})\|\le L_{\varphi}\|x-x_{1}\|$ and $\|\varphi(t,s,x)\|\le L_{\varphi}(1+\|x\|)$.

(C9) Limit (10) exists uniformly in $x\in\mathbb{R}^{d}$.

(C10) The scalar functions $A(t,x)$ and $B(t,u)$ are defined for $t\in[0,T]$, $x\in\mathbb{R}^{d}$, $u\in V$, are jointly continuous in all their variables, and

  (a) $A(t,x)\ge 0$ and $B(t,u)\ge a\|u\|^{p}$ with a constant $a>0$ for all $t\in[0,T]$, and the function $B(t,u)$ is convex with respect to $u\in V$;

  (b) the function $\Phi:\mathbb{R}^{d}\to\mathbb{R}$ is non-negative and continuous in $x$.

The main result here is the following theorem on a relationship between the optimal triples of the exact and averaged problems.

Theorem 2.2

Let conditions (C6)–(C10) hold. Then problems (8), (9) and (11), (12) have solutions $(x_{\varepsilon}^{*}(t),u_{\varepsilon}^{*}(t))$ and $(\xi^{*}(t),u^{*}(t))$, respectively, and

  (i) $J_{\varepsilon}^{*}\to J_{0}^{*}$ as $\varepsilon\to 0$;

  (ii) for any $\eta>0$, there exists $\varepsilon_{0}>0$ such that

    $|J_{\varepsilon}^{*}-J_{\varepsilon}[u^{*}]|<\eta$

    holds for $\varepsilon<\varepsilon_{0}$;

  (iii) there exists a sequence $\varepsilon_{n}\to 0$, $n\to\infty$, such that

    (13) $x_{\varepsilon_{n}}^{*}(t)\to\xi^{*}(t)$ uniformly on $[0,T]$,

    and

    (14) $u_{\varepsilon_{n}}^{*}(\cdot)\rightharpoonup u^{*}(\cdot)$ weakly in $L_{p}(0,T)$.

Furthermore, if the averaged problem (11), (12) has a unique solution, then convergences (13) and (14) hold as $\varepsilon\to 0$.

3 Averaging lemmas

This section is devoted to proving lemmas on the closeness of solutions of the original optimal control system and the solutions of the corresponding averaged system in both the nonlinear-in-controls case and the linear-in-controls case.

Lemma 3.1

Let conditions (C1)–(C4) hold. Then, for any $\eta>0$, there exists $\varepsilon_{0}=\varepsilon_{0}(\eta)$ such that for $0<\varepsilon\le\varepsilon_{0}$, the solutions of the Cauchy problems (1) and (4) satisfy the estimate

(15) $\|x(t,u)-\xi(t,u)\|\le\eta$

for all $t\in[0,T]$ and all admissible controls $u(t)$.

Remark 3.1

In this lemma, it is important that estimate (15) is uniform for all admissible controls u .

Proof

Let us choose an arbitrary η > 0 and fix it. For any ε > 0 and any admissible control u ( t ) , we estimate the difference between x ( t , u ) and ξ ( t , u ) . For simplicity, we denote x ( t , u ) = x ( t ) and ξ ( t , u ) = ξ ( t ) . We also omit the dependence of x ( t ) on ε .

Since $U$ is compact in $L_{2}(0,T)$, for the given $\eta$ there exists a finite $\frac{\eta}{4\lambda e^{\lambda T}}$-net $u_{1}(t),\ldots,u_{N}(t)$, where $N=N(\eta)$. Thus, for the chosen control $u(t)$, there exists a representative $u_{j}(t)$ of the net such that

(16) $\|u(\cdot)-u_{j}(\cdot)\|_{L_{2}}\le\frac{\eta}{4\lambda e^{\lambda T}}.$

Again, since U is compact in L 2 ( 0 , T ) , there exists K > 0 such that all admissible controls u ( t ) satisfy the inequality

(17) $\int_{0}^{T}\|u(t)\|\,dt\le K.$

By (C2a) and (C3),

$\|x(t)\|\le\|x_{0}\|+MT+M\int_{0}^{t}\Big(\|x(s)\|+L_{\varphi}\int_{0}^{s}(1+\|x(\tau)\|)\,d\tau\Big)ds.$

From this, using an analog of the Gronwall-Bellman inequality, we obtain

(18) $\|x(t)\|\le C,$

where $C=C(T)$. Similarly, we obtain the estimate $\|\xi(t)\|\le C$.
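For orientation, one way (not spelled out in the text) to extract an explicit constant from the preceding inequality is the following. Since $\int_{0}^{s}\|x(\tau)\|\,d\tau\le\int_{0}^{t}\|x(\tau)\|\,d\tau$ for $s\le t$, we get

$\|x(t)\|\le\Big(\|x_{0}\|+MT+\tfrac{1}{2}ML_{\varphi}T^{2}\Big)+M(1+L_{\varphi}T)\int_{0}^{t}\|x(s)\|\,ds,$

and the classical Gronwall-Bellman inequality yields

$\|x(t)\|\le\Big(\|x_{0}\|+MT+\tfrac{1}{2}ML_{\varphi}T^{2}\Big)e^{M(1+L_{\varphi}T)T},$

so one may take $C(T)$ to be the right-hand side; any constant of this form suffices for what follows.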

Hence, it follows from conditions (C2) and (C3) that

$\|x(t)-\xi(t)\|\le\left\|\int_{0}^{t}\Big[X\Big(\frac{s}{\varepsilon},x(s),\int_{0}^{s}\varphi(s,\tau,x(\tau))\,d\tau,u(s)\Big)-X_{0}(\xi(s),u(s))\Big]ds\right\|\le\int_{0}^{t}\Big\|X\Big(\frac{s}{\varepsilon},x(s),\int_{0}^{s}\varphi(s,\tau,x(\tau))\,d\tau,u(s)\Big)-X\Big(\frac{s}{\varepsilon},x(s),\int_{0}^{s}\varphi(s,\tau,x(\tau))\,d\tau,u_{j}(s)\Big)\Big\|\,ds+\int_{0}^{t}\big\|X_{0}(\xi(s),u(s))-X_{0}(\xi(s),u_{j}(s))\big\|\,ds+\int_{0}^{t}\Big\|X\Big(\frac{s}{\varepsilon},x(s),\int_{0}^{s}\varphi(s,\tau,x(\tau))\,d\tau,u_{j}(s)\Big)-X_{0}(\xi(s),u_{j}(s))\Big\|\,ds\le\int_{0}^{t}\Big\|X\Big(\frac{s}{\varepsilon},x(s),\int_{0}^{s}\varphi(s,\tau,x(\tau))\,d\tau,u_{j}(s)\Big)-X_{0}(\xi(s),u_{j}(s))\Big\|\,ds+2\lambda\Big(\int_{0}^{T}\|u(s)-u_{j}(s)\|^{2}\,ds\Big)^{1/2}.$

We then obtain

(19) $\|x(t)-\xi(t)\|\le I_{1}+\frac{\eta}{2e^{\lambda T}},$ where $I_{1}$ denotes the integral remaining on the right-hand side of the last inequality.

Let us now estimate I 1 using conditions (C2), (C3), and (C4). Note that these conditions imply that the function X 0 satisfies the Lipschitz condition. We have:

(20) $I_{1}\le\int_{0}^{t}\Big\|X\Big(\frac{s}{\varepsilon},x(s),\int_{0}^{s}\varphi(s,\tau,x(\tau))\,d\tau,u_{j}(s)\Big)-X\Big(\frac{s}{\varepsilon},\xi(s),\int_{0}^{s}\varphi(s,\tau,\xi(\tau))\,d\tau,u_{j}(s)\Big)\Big\|\,ds+\int_{0}^{t}\Big\|X\Big(\frac{s}{\varepsilon},\xi(s),\int_{0}^{s}\varphi(s,\tau,\xi(\tau))\,d\tau,u_{j}(s)\Big)-X_{0}(\xi(s),u_{j}(s))\Big\|\,ds\le\int_{0}^{t}\lambda\Big(\|x(s)-\xi(s)\|+L_{\varphi}\int_{0}^{s}\|x(\tau)-\xi(\tau)\|\,d\tau\Big)ds+\int_{0}^{t}\Big\|X\Big(\frac{s}{\varepsilon},\xi(s),\int_{0}^{s}\varphi(s,\tau,\xi(\tau))\,d\tau,u_{j}(s)\Big)-X_{0}(\xi(s),u_{j}(s))\Big\|\,ds.$

Since every function in L 2 ( 0 , T ) can be approximated in the L 2 -norm by a continuous function, and every continuous function on a closed interval can be approximated by a piecewise constant function, we choose for u j ( t ) a continuous function u c ( t ) and a piecewise constant function u p ( t ) such that the following inequalities hold:

(21) $\|u_{j}-u_{c}\|_{L_{2}}<\frac{\eta}{16\lambda e^{\lambda T}},$

(22) $\|u_{c}-u_{p}\|_{L_{2}}<\frac{\eta}{16\lambda e^{\lambda T}},$

for $t\in[0,T]$.

Using (21) and (22), we estimate the last integral in (20):

$\int_{0}^{t}\Big\|X\Big(\frac{s}{\varepsilon},\xi(s),\int_{0}^{s}\varphi(s,\tau,\xi(\tau))\,d\tau,u_{j}(s)\Big)-X_{0}(\xi(s),u_{j}(s))\Big\|\,ds\le\int_{0}^{t}\Big\|X\Big(\frac{s}{\varepsilon},\xi(s),\int_{0}^{s}\varphi(s,\tau,\xi(\tau))\,d\tau,u_{j}(s)\Big)-X\Big(\frac{s}{\varepsilon},\xi(s),\int_{0}^{s}\varphi(s,\tau,\xi(\tau))\,d\tau,u_{c}(s)\Big)\Big\|\,ds+\int_{0}^{t}\Big\|X\Big(\frac{s}{\varepsilon},\xi(s),\int_{0}^{s}\varphi(s,\tau,\xi(\tau))\,d\tau,u_{c}(s)\Big)-X\Big(\frac{s}{\varepsilon},\xi(s),\int_{0}^{s}\varphi(s,\tau,\xi(\tau))\,d\tau,u_{p}(s)\Big)\Big\|\,ds+\int_{0}^{t}\Big\|X\Big(\frac{s}{\varepsilon},\xi(s),\int_{0}^{s}\varphi(s,\tau,\xi(\tau))\,d\tau,u_{p}(s)\Big)-X_{0}(\xi(s),u_{p}(s))\Big\|\,ds+\int_{0}^{t}\big\|X_{0}(\xi(s),u_{p}(s))-X_{0}(\xi(s),u_{c}(s))\big\|\,ds+\int_{0}^{t}\big\|X_{0}(\xi(s),u_{c}(s))-X_{0}(\xi(s),u_{j}(s))\big\|\,ds\le\lambda\Big(\int_{0}^{T}\|u_{j}(s)-u_{c}(s)\|^{2}ds\Big)^{1/2}+\lambda\Big(\int_{0}^{T}\|u_{c}(s)-u_{p}(s)\|^{2}ds\Big)^{1/2}+\lambda\Big(\int_{0}^{T}\|u_{c}(s)-u_{j}(s)\|^{2}ds\Big)^{1/2}+\lambda\Big(\int_{0}^{T}\|u_{p}(s)-u_{c}(s)\|^{2}ds\Big)^{1/2}+\int_{0}^{t}\Big\|X\Big(\frac{s}{\varepsilon},\xi(s),\int_{0}^{s}\varphi(s,\tau,\xi(\tau))\,d\tau,u_{p}(s)\Big)-X_{0}(\xi(s),u_{p}(s))\Big\|\,ds\le\int_{0}^{t}\Big\|X\Big(\frac{s}{\varepsilon},\xi(s),\int_{0}^{s}\varphi(s,\tau,\xi(\tau))\,d\tau,u_{p}(s)\Big)-X_{0}(\xi(s),u_{p}(s))\Big\|\,ds+\frac{\eta}{4e^{\lambda T}}.$

Let us consider the last integral in this inequality. We have:

$\int_{0}^{t}\Big\|X\Big(\frac{s}{\varepsilon},\xi(s),\int_{0}^{s}\varphi(s,\tau,\xi(\tau))\,d\tau,u_{p}(s)\Big)-X_{0}(\xi(s),u_{p}(s))\Big\|\,ds\le\int_{0}^{t}\Big\|X\Big(\frac{s}{\varepsilon},\xi(s),\int_{0}^{s}\varphi(s,\tau,\xi(\tau))\,d\tau,u_{p}(s)\Big)-X\Big(\frac{s}{\varepsilon},\xi(s),\int_{0}^{s}\varphi(s,\tau,\xi(s))\,d\tau,u_{p}(s)\Big)\Big\|\,ds+\int_{0}^{t}\Big\|X\Big(\frac{s}{\varepsilon},\xi(s),\int_{0}^{s}\varphi(s,\tau,\xi(s))\,d\tau,u_{p}(s)\Big)-X_{0}(\xi(s),u_{p}(s))\Big\|\,ds=I_{2}+I_{3}.$

We estimate the integral $I_{2}$ by dividing the interval $[0,T]$ by points $\{t_{k}\}_{0}^{R}$ ($t_{0}=0$, $t_{R}=T$) in such a way that all components of the vector function $u_{p}(t)$ are constant on each subinterval $[t_{k},t_{k+1})$, that is, $u_{p}(t)=u_{p}(t_{k})$ for $t\in[t_{k},t_{k+1})$. Here, the natural number $R=R(\eta)$ is fixed for a given choice of $\eta$.

Now, we choose a natural number $n$ and divide the interval $[0,T]$ into $n$ equal parts by the points $t_{i}=iT/n$ ($i=0,\ldots,n$). Suppose $n$ is large enough that each subinterval $[t_{k},t_{k+1})$ contains at least one of the points $t_{i}$. As a result, we obtain $n$ intervals $[t_{i},t_{i+1})$. If for some $k$ and $i$ we have $t_{i}<t_{k}<t_{i+1}$, the interval $[t_{i},t_{i+1})$ is split into two subintervals $[t_{i},t_{k})$ and $[t_{k},t_{i+1})$. Thus, the interval $[0,T]$ is divided into no more than $n+R$ subintervals, each of length not exceeding $T/n$. The partition points are again denoted by $t_{i}$, and the total number of intervals $[t_{i},t_{i+1})$ is denoted by $K=K(\eta)$. Clearly, $K\le n+R$, and $u_{p}(t)=u_{p}(t_{i})$ for $t\in[t_{i},t_{i+1})$. Let us denote $\xi_{i}=\xi(t_{i})$ and $u_{p}^{i}=u_{p}(t_{i})$. Then,

$I_{2}\le\sum_{i=0}^{K-1}\int_{t_{i}}^{t_{i+1}}\Big\|X\Big(\frac{s}{\varepsilon},\xi(s),\int_{0}^{s}\varphi(s,\tau,\xi(\tau))\,d\tau,u_{p}^{i}\Big)-X\Big(\frac{s}{\varepsilon},\xi_{i},\int_{0}^{s}\varphi(s,\tau,\xi_{i})\,d\tau,u_{p}^{i}\Big)\Big\|\,ds+\sum_{i=0}^{K-1}\int_{t_{i}}^{t_{i+1}}\Big\|X\Big(\frac{s}{\varepsilon},\xi_{i},\int_{0}^{s}\varphi(s,\tau,\xi_{i})\,d\tau,u_{p}^{i}\Big)-X\Big(\frac{s}{\varepsilon},\xi(s),\int_{0}^{s}\varphi(s,\tau,\xi(s))\,d\tau,u_{p}^{i}\Big)\Big\|\,ds\le\sum_{i=0}^{K-1}\lambda\Big[\int_{t_{i}}^{t_{i+1}}\|\xi(s)-\xi_{i}\|\,ds+\int_{t_{i}}^{t_{i+1}}\!\!\int_{0}^{s}L_{\varphi}\|\xi(\tau)-\xi_{i}\|\,d\tau\,ds\Big]+\sum_{i=0}^{K-1}\lambda\Big[\int_{t_{i}}^{t_{i+1}}\|\xi_{i}-\xi(s)\|\,ds+\int_{t_{i}}^{t_{i+1}}\!\!\int_{0}^{s}L_{\varphi}\|\xi_{i}-\xi(s)\|\,d\tau\,ds\Big]\le 2\sum_{i=0}^{K-1}\frac{\lambda MT(1+C)}{n^{2}}\Big(1+\int_{t_{i}}^{t_{i+1}}\!\!\int_{0}^{s}L_{\varphi}\,d\tau\,ds\Big)\le 2\lambda MT(1+C)\,\frac{n+R}{n^{2}}\Big(1+\frac{L_{\varphi}T}{n}\Big).$

Hence, for the chosen $\eta>0$, one can choose $n$ so large that, for all $\varepsilon>0$,

$I_{2}\le\frac{\eta}{8e^{\lambda T}}.$

We now fix the chosen n and estimate the integral I 3 . To do this, we split it over the interval [ 0 , T ] into a sum of integrals:

$\int_{0}^{t}\Big\|X\Big(\frac{s}{\varepsilon},\xi(s),\int_{0}^{s}\varphi(s,\tau,\xi(s))\,d\tau,u_{p}(s)\Big)-X_{0}(\xi(s),u_{p}(s))\Big\|\,ds\le\sum_{i=0}^{K-1}\int_{t_{i}}^{t_{i+1}}\Big\|X\Big(\frac{s}{\varepsilon},\xi(s),\int_{0}^{s}\varphi(s,\tau,\xi(s))\,d\tau,u_{p}^{i}\Big)-X\Big(\frac{s}{\varepsilon},\xi_{i},\int_{0}^{s}\varphi(s,\tau,\xi_{i})\,d\tau,u_{p}^{i}\Big)\Big\|\,ds+\sum_{i=0}^{K-1}\int_{t_{i}}^{t_{i+1}}\big\|X_{0}(\xi(s),u_{p}^{i})-X_{0}(\xi_{i},u_{p}^{i})\big\|\,ds+\sum_{i=0}^{K-1}\int_{t_{i}}^{t_{i+1}}\Big\|X\Big(\frac{s}{\varepsilon},\xi_{i},\int_{0}^{s}\varphi(s,\tau,\xi_{i})\,d\tau,u_{p}^{i}\Big)-X_{0}(\xi_{i},u_{p}^{i})\Big\|\,ds\le\sum_{i=0}^{K-1}\lambda\Big[\int_{t_{i}}^{t_{i+1}}\|\xi(s)-\xi_{i}\|\,ds+\int_{t_{i}}^{t_{i+1}}\!\!\int_{0}^{s}L_{\varphi}\|\xi(s)-\xi_{i}\|\,d\tau\,ds\Big]+\sum_{i=0}^{K-1}\lambda\int_{t_{i}}^{t_{i+1}}\|\xi(s)-\xi_{i}\|\,ds+I_{4}.$

Let us now estimate the integral I 4 . We obtain

$I_{4}=\sum_{i=0}^{K-1}\int_{t_{i}}^{t_{i+1}}\Big\|X\Big(\frac{s}{\varepsilon},\xi_{i},\int_{0}^{s}\varphi(s,\tau,\xi_{i})\,d\tau,u_{p}^{i}\Big)-X_{0}(\xi_{i},u_{p}^{i})\Big\|\,ds.$

In terms of φ 1 ( t , x ) , we have

$\int_{t_{i}}^{t_{i+1}}\Big[X\Big(\frac{s}{\varepsilon},\xi_{i},\varphi_{1}(s,\xi_{i}),u_{p}^{i}\Big)-X_{0}(\xi_{i},u_{p}^{i})\Big]ds=\int_{0}^{t_{i+1}}\Big[X\Big(\frac{s}{\varepsilon},\xi_{i},\varphi_{1}(s,\xi_{i}),u_{p}^{i}\Big)-X_{0}(\xi_{i},u_{p}^{i})\Big]ds-\int_{0}^{t_{i}}\Big[X\Big(\frac{s}{\varepsilon},\xi_{i},\varphi_{1}(s,\xi_{i}),u_{p}^{i}\Big)-X_{0}(\xi_{i},u_{p}^{i})\Big]ds.$

Due to condition (3), each term on the right-hand side of the last equality tends to zero as ε 0 . Since K is fixed, by choosing a sufficiently small ε , it is possible to achieve the inequality

$I_{4}\le\frac{\eta}{16e^{\lambda T}}.$

Hence,

$I_{3}\le\frac{\eta}{8e^{\lambda T}}.$

Combining the estimates for $I_{2}$ and $I_{3}$ with (20), we obtain

$I_{1}\le\lambda\Big[\int_{0}^{t}\|x(s)-\xi(s)\|\,ds+L_{\varphi}\int_{0}^{t}\!\!\int_{0}^{s}\|x(\tau)-\xi(\tau)\|\,d\tau\,ds\Big]+\frac{\eta}{2e^{\lambda T}}.$

The reasoning outlined above applies to each function $u_{1}(t),u_{2}(t),\ldots,u_{N}(t)$ of the constructed net. Since the net is finite, $\varepsilon_{0}$ can be chosen uniformly over all functions of the net.

Thus, from inequalities (16)–(19), (21), (22), and the estimates for the integrals $I_{1}$–$I_{4}$, together with the Gronwall-Bellman inequality, it follows that inequality (15) holds uniformly for all admissible controls, which proves the lemma.□

Lemma 3.2

Let conditions (C6)–(C9) hold. If $u_{\varepsilon}\rightharpoonup u_{0}$ weakly in $L_{p}(0,T)$ as $\varepsilon\to 0$, then the solution $x_{\varepsilon}(t)$ of the Cauchy problem (8) with $u(t)=u_{\varepsilon}(t)$ converges uniformly on $[0,T]$ to the solution $\xi(t)$ of the corresponding Cauchy problem (11) with control $u(t)=u_{0}(t)$, i.e.,

$x_{\varepsilon}(t)\to\xi(t),\quad\varepsilon\to 0,$

uniformly in $t\in[0,T]$.

Proof

Let us rewrite (8) in the integral form

$x_{\varepsilon}(t)=x_{0}+\int_{0}^{t}f\Big(\frac{s}{\varepsilon},x_{\varepsilon}(s),\int_{0}^{s}\varphi(s,\tau,x_{\varepsilon}(\tau))\,d\tau\Big)ds+\int_{0}^{t}f_{1}(x_{\varepsilon}(s))u_{\varepsilon}(s)\,ds.$

Without loss of generality we can assume T = 1 . We have

(23) $\|x_{\varepsilon}(t)\|\le\|x_{0}\|+\int_{0}^{t}M\Big(1+\|x_{\varepsilon}(s)\|+\int_{0}^{s}L_{\varphi}(1+\|x_{\varepsilon}(\tau)\|)\,d\tau\Big)ds+\int_{0}^{t}\big(\|f_{1}(x_{\varepsilon}(s))-f_{1}(0)\|+\|f_{1}(0)\|\big)\|u_{\varepsilon}(s)\|\,ds\le\|x_{0}\|+\int_{0}^{t}\big(M+L_{\varphi}+\|f_{1}(0)\|\,\|u_{\varepsilon}(s)\|\big)ds+\int_{0}^{t}\big(M+\lambda\|u_{\varepsilon}(s)\|\big)\|x_{\varepsilon}(s)\|\,ds+L_{\varphi}\int_{0}^{t}\!\!\int_{0}^{s}\|x_{\varepsilon}(\tau)\|\,d\tau\,ds.$

Applying the generalized Gronwall-Bellman inequality to (23), we obtain

$\|x_{\varepsilon}(t)\|\le\Big(\|x_{0}\|+M+L_{\varphi}+\|f_{1}(0)\|\int_{0}^{t}\|u_{\varepsilon}(s)\|\,ds\Big)e^{M+\lambda\int_{0}^{t}\|u_{\varepsilon}(s)\|\,ds+L_{\varphi}}.$

Let $M^{*}=M+L_{\varphi}$; then

(24) $\|x_{\varepsilon}(t)\|\le\big(\|x_{0}\|+M^{*}+\|f_{1}(0)\|\,\|u_{\varepsilon}\|_{L_{p}}\big)e^{M^{*}+\lambda\|u_{\varepsilon}\|_{L_{p}}}.$

From the weak convergence of $u_{\varepsilon}$, it follows that the family $u_{\varepsilon}$ is bounded in norm, i.e., $\sup_{\varepsilon>0}\|u_{\varepsilon}\|_{L_{p}}<\infty$. This, together with (24), implies the existence of a constant $C>0$ such that

(25) $\|x_{\varepsilon}(t)\|\le C$

for all $\varepsilon>0$ and $t\in[0,1]$.

Now, for any $t_{1}<t_{2}$, where $t_{1},t_{2}\in[0,1]$, we have

$\|x_{\varepsilon}(t_{2})-x_{\varepsilon}(t_{1})\|\le\int_{t_{1}}^{t_{2}}M\Big(1+C+L_{\varphi}\int_{0}^{s}(1+C)\,d\tau\Big)ds+\int_{t_{1}}^{t_{2}}\big(\|f_{1}(0)\|+\lambda C\big)\|u_{\varepsilon}(s)\|\,ds\le M(1+C)(t_{2}-t_{1})+ML_{\varphi}(1+C)(t_{2}-t_{1})+\big(\|f_{1}(0)\|+\lambda C\big)\Big(\int_{t_{1}}^{t_{2}}\|u_{\varepsilon}(s)\|^{p}\,ds\Big)^{1/p}(t_{2}-t_{1})^{1/q},$

where $\frac{1}{p}+\frac{1}{q}=1$.

From the last inequality, it follows that the family $\{x_{\varepsilon}(t)\}$ is equicontinuous on $[0,1]$; taking into account (25), it is relatively compact in $C([0,1])$ by the Arzelà-Ascoli theorem.

Let $x_{\varepsilon_{n}}(t)$ be a sequence that converges uniformly to some function $\xi(t)$ as $\varepsilon_{n}\to 0$. We will show that $\xi(t)$ is a solution of the Cauchy problem (11) with the control $u(t)=u_{0}(t)$. We have

$x_{\varepsilon_{n}}(t)=x_{0}+\int_{0}^{t}f\Big(\frac{s}{\varepsilon_{n}},x_{\varepsilon_{n}}(s),\int_{0}^{s}\varphi(s,\tau,x_{\varepsilon_{n}}(\tau))\,d\tau\Big)ds+\int_{0}^{t}f_{1}(x_{\varepsilon_{n}}(s))u_{\varepsilon_{n}}(s)\,ds.$

Let us consider the following expression:

(26) $\Big\|\int_{0}^{t}\Big[f\Big(\frac{s}{\varepsilon_{n}},x_{\varepsilon_{n}}(s),\int_{0}^{s}\varphi(s,\tau,x_{\varepsilon_{n}}(\tau))\,d\tau\Big)-f_{0}(\xi(s))+f_{1}(x_{\varepsilon_{n}}(s))u_{\varepsilon_{n}}(s)-f_{1}(\xi(s))u_{0}(s)\Big]ds\Big\|\le\int_{0}^{t}\Big\|f\Big(\frac{s}{\varepsilon_{n}},x_{\varepsilon_{n}}(s),\int_{0}^{s}\varphi(s,\tau,x_{\varepsilon_{n}}(\tau))\,d\tau\Big)-f\Big(\frac{s}{\varepsilon_{n}},\xi(s),\int_{0}^{s}\varphi(s,\tau,\xi(\tau))\,d\tau\Big)\Big\|\,ds+\Big\|\int_{0}^{t}\Big[f\Big(\frac{s}{\varepsilon_{n}},\xi(s),\int_{0}^{s}\varphi(s,\tau,\xi(\tau))\,d\tau\Big)-f_{0}(\xi(s))\Big]ds\Big\|+\Big\|\int_{0}^{t}\big[f_{1}(x_{\varepsilon_{n}}(s))u_{\varepsilon_{n}}(s)-f_{1}(\xi(s))u_{\varepsilon_{n}}(s)+f_{1}(\xi(s))u_{\varepsilon_{n}}(s)-f_{1}(\xi(s))u_{0}(s)\big]ds\Big\|.$

The first term in (26), due to conditions (C7) and (C8), admits the following estimate:

$\lambda\int_{0}^{t}\Big[\|x_{\varepsilon_{n}}(s)-\xi(s)\|+\int_{0}^{s}L_{\varphi}\|x_{\varepsilon_{n}}(\tau)-\xi(\tau)\|\,d\tau\Big]ds\le\lambda(1+L_{\varphi})\sup_{t\in[0,1]}\|x_{\varepsilon_{n}}(t)-\xi(t)\|\to 0,\quad\varepsilon_{n}\to 0.$

For the last term in (26), we obtain the estimate

(27) $\Big\|\int_{0}^{t}\big[(f_{1}(x_{\varepsilon_{n}}(s))-f_{1}(\xi(s)))u_{\varepsilon_{n}}(s)+f_{1}(\xi(s))(u_{\varepsilon_{n}}(s)-u_{0}(s))\big]ds\Big\|\le\int_{0}^{t}\big\|(f_{1}(x_{\varepsilon_{n}}(s))-f_{1}(\xi(s)))u_{\varepsilon_{n}}(s)\big\|\,ds+\Big\|\int_{0}^{t}f_{1}(\xi(s))(u_{\varepsilon_{n}}(s)-u_{0}(s))\,ds\Big\|.$

Taking into account (25) and the continuity of the function f 1 , and using the weak convergence of u ε n to u 0 in L p ( 0 , 1 ) , we obtain that the last term in (27) tends to 0.

We now estimate the first term in (27). Under condition (C7b), it holds that

(28) $\int_{0}^{t}\big\|(f_{1}(x_{\varepsilon_{n}}(s))-f_{1}(\xi(s)))u_{\varepsilon_{n}}(s)\big\|\,ds\le\lambda\sup_{t\in[0,1]}\|x_{\varepsilon_{n}}(t)-\xi(t)\|\int_{0}^{t}\|u_{\varepsilon_{n}}(s)\|\,ds\le\lambda\sup_{t\in[0,1]}\|x_{\varepsilon_{n}}(t)-\xi(t)\|\,\|u_{\varepsilon_{n}}\|_{L_{p}}.$

Taking into account that $\sup_{t\in[0,1]}\|x_{\varepsilon_{n}}(t)-\xi(t)\|\to 0$, as well as the uniform boundedness of $\|u_{\varepsilon_{n}}\|_{L_{p}}$, we conclude from (28) that the first term in (27) tends to 0.

Let us now estimate the second term on the right-hand side of (26), which we denote by $I_{1}$. We will show that for any $\eta>0$, there exists $\varepsilon^{0}>0$ such that, for $\varepsilon_{n}<\varepsilon^{0}$, the following inequality holds:

$I_{1}=\Big\|\int_{0}^{t}\Big[f\Big(\frac{s}{\varepsilon_{n}},\xi(s),\int_{0}^{s}\varphi(s,\tau,\xi(\tau))\,d\tau\Big)-f_{0}(\xi(s))\Big]ds\Big\|<\eta.$

To show this, we choose a natural number $k$ and divide the interval $[0,1]$ into $k$ equal parts using the points $t_{i}=i/k$ ($i=0,\ldots,k$), so that $t_{i+1}-t_{i}\le\frac{1}{k}$. We denote the total number of intervals $[t_{i},t_{i+1})$ by $\kappa=\kappa(\eta)$. Due to the uniform continuity of $\xi(t)$ on $[0,1]$, for $\eta>0$ one can choose $k$ such that the following estimate holds for $s\in[t_{i},t_{i+1}]$:

(29) $\|\xi(s)-\xi(t_{i})\|<\frac{\eta}{2\lambda(2+L_{\varphi})}.$

Let us fix such k and denote ξ ( t i ) = ξ i . We have

$I_{1}\le\sum_{i=0}^{\kappa-1}\int_{t_{i}}^{t_{i+1}}\Big\|f\Big(\frac{s}{\varepsilon_{n}},\xi(s),\int_{0}^{s}\varphi(s,\tau,\xi(\tau))\,d\tau\Big)-f\Big(\frac{s}{\varepsilon_{n}},\xi_{i},\int_{0}^{s}\varphi(s,\tau,\xi_{i})\,d\tau\Big)+f\Big(\frac{s}{\varepsilon_{n}},\xi_{i},\int_{0}^{s}\varphi(s,\tau,\xi_{i})\,d\tau\Big)-f_{0}(\xi_{i})+f_{0}(\xi_{i})-f_{0}(\xi(s))\Big\|\,ds\le\sum_{i=0}^{\kappa-1}\int_{t_{i}}^{t_{i+1}}\Big[\Big\|f\Big(\frac{s}{\varepsilon_{n}},\xi(s),\int_{0}^{s}\varphi(s,\tau,\xi(s))\,d\tau\Big)-f\Big(\frac{s}{\varepsilon_{n}},\xi_{i},\int_{0}^{s}\varphi(s,\tau,\xi_{i})\,d\tau\Big)\Big\|+\big\|f_{0}(\xi_{i})-f_{0}(\xi(s))\big\|\Big]ds+\sum_{i=0}^{\kappa-1}\int_{t_{i}}^{t_{i+1}}\Big\|f\Big(\frac{s}{\varepsilon_{n}},\xi_{i},\int_{0}^{s}\varphi(s,\tau,\xi_{i})\,d\tau\Big)-f_{0}(\xi_{i})\Big\|\,ds\le\lambda\sum_{i=0}^{\kappa-1}\int_{t_{i}}^{t_{i+1}}\Big[\|\xi(s)-\xi_{i}\|+L_{\varphi}\int_{0}^{s}\|\xi(\tau)-\xi_{i}\|\,d\tau+\|\xi_{i}-\xi(s)\|\Big]ds+\sum_{i=0}^{\kappa-1}\int_{t_{i}}^{t_{i+1}}\Big\|f\Big(\frac{s}{\varepsilon_{n}},\xi_{i},\int_{0}^{s}\varphi(s,\tau,\xi_{i})\,d\tau\Big)-f_{0}(\xi_{i})\Big\|\,ds.$

It follows from (29) that

$\lambda\sum_{i=0}^{\kappa-1}\int_{t_{i}}^{t_{i+1}}\Big[\|\xi(s)-\xi_{i}\|+L_{\varphi}\int_{0}^{s}\|\xi(\tau)-\xi_{i}\|\,d\tau+\|\xi_{i}-\xi(s)\|\Big]ds\le\frac{\eta}{2}.$

Let I 11 denote the following expression:

$I_{11}=\sum_{i=0}^{\kappa-1}\int_{t_{i}}^{t_{i+1}}\Big\|f\Big(\frac{s}{\varepsilon_{n}},\xi_{i},\int_{0}^{s}\varphi(s,\tau,\xi_{i})\,d\tau\Big)-f_{0}(\xi_{i})\Big\|\,ds.$

In terms of φ 1 ( t , x ) , we have

$\sum_{i=0}^{\kappa-1}\int_{t_{i}}^{t_{i+1}}\Big[f\Big(\frac{s}{\varepsilon_{n}},\xi_{i},\varphi_{1}(s,\xi_{i})\Big)-f_{0}(\xi_{i})\Big]ds=\sum_{i=0}^{\kappa-1}\left(\int_{0}^{t_{i+1}}\Big[f\Big(\frac{s}{\varepsilon_{n}},\xi_{i},\varphi_{1}(s,\xi_{i})\Big)-f_{0}(\xi_{i})\Big]ds-\int_{0}^{t_{i}}\Big[f\Big(\frac{s}{\varepsilon_{n}},\xi_{i},\varphi_{1}(s,\xi_{i})\Big)-f_{0}(\xi_{i})\Big]ds\right).$

Due to (10), for each $i$ there exists $\varepsilon^{i}>0$ such that, for $\varepsilon_{n}<\varepsilon^{i}$, the following inequality holds:

$\left\|\int_{0}^{t_{i}}\Big[f\Big(\frac{s}{\varepsilon_{n}},\xi_{i},\varphi_{1}(s,\xi_{i})\Big)-f_{0}(\xi_{i})\Big]ds\right\|\le\frac{\eta}{4k}.$

Since $k$ is fixed, the number of such integrals is finite. Let $\varepsilon_{\eta}=\min\{\varepsilon^{1},\ldots,\varepsilon^{k}\}$. Then, for $\varepsilon_{n}<\varepsilon_{\eta}$, we obtain

$I_{11}\le\frac{\eta}{2}.$

Thus,

$I_{1}\le\eta.$

The latter means that $\xi(t)$ is the solution of the Cauchy problem (11) with $u(t)=u_{0}(t)$. Consequently, any uniformly convergent subsequence of $x_{\varepsilon}$ converges to the solution of the Cauchy problem (11). Since this solution is unique, the entire family $x_{\varepsilon}$ converges to $\xi(t)$, which completes the proof of the lemma.□

4 Proof of main theorems

4.1 Nonlinear case

Proof of Theorem 2.1

For simplicity, we will again assume T = 1 . Let us first prove the existence of solutions. To do this, we will establish the continuity of J ε [ u ] with respect to u for each ε > 0 .

Let u 1 ( t ) , u 2 ( t ) be any admissible controls for problem (1), (2), and let x ( t , u 1 ) , x ( t , u 2 ) be the corresponding trajectories.

Using condition (C2) and Gronwall’s inequality, we obtain

(30) $\sup_{t\in[0,1]}\|x(t,u_{1})-x(t,u_{2})\|\le\lambda e^{\lambda}\|u_{1}-u_{2}\|_{L_{2}}.$

Therefore,

(31) $|J_{\varepsilon}[u_{1}]-J_{\varepsilon}[u_{2}]|\le\int_{0}^{1}\big[|L(t,x(t,u_{1}),u_{1}(t))-L(t,x(t,u_{2}),u_{1}(t))|+|L(t,x(t,u_{2}),u_{1}(t))-L(t,x(t,u_{2}),u_{2}(t))|\big]dt+|\Phi(x(1,u_{1}))-\Phi(x(1,u_{2}))|\le\lambda\|u_{1}-u_{2}\|_{L_{2}}+\int_{0}^{1}|L(t,x(t,u_{1}),u_{1}(t))-L(t,x(t,u_{2}),u_{1}(t))|\,dt+|\Phi(x(1,u_{1}))-\Phi(x(1,u_{2}))|.$

Now, using estimate (18), which is uniform for all admissible $u(t)$, we conclude that $x(t,u)$ remains within the ball $B_{C}$ of radius $C$ centered at the origin for all $t\in[0,1]$.

According to assumption (C5a) and Cantor's theorem, the function $L(t,x,u)$ is uniformly continuous in $x\in B_{C}$, uniformly with respect to $t\in[0,1]$ and $u\in W$. Similarly, $\Phi$ is uniformly continuous in $x\in B_{C}$. Therefore, from (30) and (31), it follows that $J_{\varepsilon}[u]$ is continuous in the $L_{2}$-norm.

A similar argument establishes the continuity of the functional J 0 [ u ] with respect to u .

Now, considering the compactness of the set of admissible controls, we establish the existence of optimal solutions ( x ε * ( t ) , u ε * ( t ) ) and ( ξ * ( t ) , u * ( t ) ) of problems (1), (2) and (4), (5), respectively. This proves the existence of optimal solutions for both the exact and the averaged problems.

Let us now prove statement (i), namely, that $J_{\varepsilon}^{*}\to J_{0}^{*}$ as $\varepsilon\to 0$. We choose an arbitrary $\eta>0$ and fix it. Then, we have

(32) $J_{\varepsilon}^{*}\le J_{\varepsilon}[u^{*}]=J_{0}^{*}+J_{\varepsilon}[u^{*}]-J_{0}[u^{*}].$

However,

(33) $|J_{\varepsilon}[u^{*}]-J_{0}[u^{*}]|\le\int_{0}^{1}|L(t,x(t,u^{*}),u^{*}(t))-L(t,\xi^{*}(t),u^{*}(t))|\,dt+|\Phi(x(1,u^{*}))-\Phi(\xi^{*}(1))|.$

By Lemma 3.1, we have

(34) $\max_{t\in[0,1]}\|x(t,u^{*})-\xi^{*}(t)\|\to 0,\quad\varepsilon\to 0.$

Taking into account the uniform continuity of the function $L(t,x,u)$ with respect to $x\in B_{C}$, uniformly in $t\in[0,1]$ and $u\in W$, it follows from (33), (34), and condition (C5) that there exists $\varepsilon_{0}>0$ such that for $\varepsilon<\varepsilon_{0}$, we have

$|J_{\varepsilon}[u^{*}]-J_{0}[u^{*}]|<\eta.$

Hence, from (32) we obtain

(35) $J_{\varepsilon}^{*}<J_{0}^{*}+\eta.$

On the other hand, for ε < ε 0 , we obtain

$J_{0}^{*}\le J_{0}[u_{\varepsilon}^{*}]=J_{\varepsilon}^{*}+(J_{0}[u_{\varepsilon}^{*}]-J_{\varepsilon}[u_{\varepsilon}^{*}]).$

However, similarly to the derivation of (35), we have

$|J_{\varepsilon}[u_{\varepsilon}^{*}]-J_{0}[u_{\varepsilon}^{*}]|<\eta.$

Consequently,

(36) $J_{0}^{*}<J_{\varepsilon}^{*}+\eta.$

It follows from (35) and (36) that $J_{\varepsilon}^{*}\to J_{0}^{*}$ as $\varepsilon\to 0$, which proves statement (i) of Theorem 2.1.

Statement (ii) of Theorem 2.1 follows directly from the fact that

$|J_{\varepsilon}^{*}-J_{\varepsilon}[u^{*}]|\le|J_{\varepsilon}^{*}-J_{0}^{*}|+|J_{0}[u^{*}]-J_{\varepsilon}[u^{*}]|.$

We proceed to the proof of statement (iii). Since $U$ is compact in $L_{2}(0,1)$, we can extract a subsequence $u_{\varepsilon_{n}}^{*}$ that converges in $L_{2}(0,1)$. Let

(37) $\lim_{\varepsilon_{n}\to 0}u_{\varepsilon_{n}}^{*}=u_{0}.$

Let us now consider the auxiliary systems

$\dot{z}_{\varepsilon_{n}}=X\Big(\frac{t}{\varepsilon_{n}},z_{\varepsilon_{n}}(t),\int_{0}^{t}\varphi(t,s,z_{\varepsilon_{n}}(s))\,ds,u_{0}(t)\Big),\qquad z_{\varepsilon_{n}}(0)=x_{0},$

and

(38) $\dot{\xi}=X_{0}(\xi,u_{0}(t)),\qquad\xi(0)=x_{0}.$

By (30) and (37), we have

(39) $\sup_{t\in[0,1]}\|x_{\varepsilon_{n}}^{*}(t)-z_{\varepsilon_{n}}(t)\|\to 0,\quad\varepsilon_{n}\to 0,$

and, by Lemma 3.1,

$\sup_{t\in[0,1]}\|z_{\varepsilon_{n}}(t)-\xi(t)\|\to 0,\quad\varepsilon_{n}\to 0.$

Hence, it follows from (39) and the last estimate that

(40) $\sup_{t\in[0,1]}\|x_{\varepsilon_{n}}^{*}(t)-\xi(t)\|\to 0,\quad\varepsilon_{n}\to 0.$

Therefore,

(41) $J_{\varepsilon_{n}}^{*}=J_{\varepsilon_{n}}[u_{\varepsilon_{n}}^{*}]=\int_{0}^{1}L(t,x_{\varepsilon_{n}}^{*}(t),u_{\varepsilon_{n}}^{*}(t))\,dt+\Phi(x_{\varepsilon_{n}}^{*}(1))=\int_{0}^{1}L(t,x_{\varepsilon_{n}}^{*}(t),u_{0}(t))\,dt+\Phi(x_{\varepsilon_{n}}^{*}(1))+\int_{0}^{1}\big[L(t,x_{\varepsilon_{n}}^{*}(t),u_{\varepsilon_{n}}^{*}(t))-L(t,x_{\varepsilon_{n}}^{*}(t),u_{0}(t))\big]dt.$

Condition (C5b) and (37) imply that the last term in (41) approaches 0 as $\varepsilon_{n}\to 0$.

By letting $\varepsilon_{n}\to 0$ in (41) and using (40), we obtain

$J_{0}^{*}=\int_{0}^{1}L(t,\xi(t),u_{0}(t))\,dt+\Phi(\xi(1)).$

Hence, $(\xi(t),u_{0}(t))$ is an optimal solution of the averaged problem (4), (5), which proves statement (iii).

If problem (4), (5) has a unique solution, then the above reasoning implies that any convergent sequence $(u_{\varepsilon_{n}}^{*}(t),x_{\varepsilon_{n}}^{*}(t))$ tends to the same limit. This completes the proof of the final statement of the theorem.□

4.2 Linear case

Proof of Theorem 2.2

We again set T = 1 and consider the problem on [ 0 , 1 ] .

The existence of an optimal solution $(x_{\varepsilon}^{*}(t),u_{\varepsilon}^{*}(t))$ for each $\varepsilon>0$ is established in a standard way by extracting a weakly convergent minimizing sequence $u_{\varepsilon}^{(n)}(t)$, converging to $u_{\varepsilon}^{*}(t)$, and then passing to the limit. This approach relies on the weak lower semicontinuity of the integral $\int_{0}^{1}B(t,u(t))\,dt$ with respect to $u$, which follows from the convexity of $B(t,u)$.
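The lower semicontinuity used here is the standard property of convex integrands; in the form needed above it can be stated as follows (a sketch under assumption (C10), not a statement from the original text): if $B(t,\cdot)$ is convex and $B(t,u)\ge 0$, then for any sequence $u_{n}\rightharpoonup u$ weakly in $L_{p}(0,1)$,

$\int_{0}^{1}B(t,u(t))\,dt\le\liminf_{n\to\infty}\int_{0}^{1}B(t,u_{n}(t))\,dt.$

Indeed, by Mazur's lemma [14], suitable convex combinations of the $u_{n}$ converge strongly, hence (along a subsequence) almost everywhere, and convexity of $B(t,\cdot)$ together with Fatou's lemma gives the stated inequality.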

The fact that $u_{\varepsilon}^{*}(t)$ belongs to the set $V$ for each $t\in[0,1]$ follows from Mazur's lemma [14], as well as from the convexity and closedness of the set $V$.

The existence of an optimal pair ( ξ * ( t ) , u * ( t ) ) for problems (11), (12) is proved in a similar manner.

Thus,

$J_{\varepsilon}^{*}=J_{\varepsilon}[u_{\varepsilon}^{*}]=\int_{0}^{1}[A(t,x_{\varepsilon}^{*}(t))+B(t,u_{\varepsilon}^{*}(t))]\,dt+\Phi(x_{\varepsilon}^{*}(1)).$

Let $\bar{u}$ be an arbitrary constant vector from $V$. Clearly, the control $u(t)\equiv\bar{u}$ is admissible for problems (8), (9). Then, for each $\varepsilon>0$, we have

$J_{\varepsilon}^{*}=J_{\varepsilon}[u_{\varepsilon}^{*}]\le J_{\varepsilon}[\bar{u}].$

Similarly to the derivation of estimate (18), one can show the existence of a constant C 1 , independent of ε , such that

$\|x_{\varepsilon}(t,\bar{u})\|\le C_{1}$

for $t\in[0,1]$. Then, from the continuity of $A$, $B$, and $\Phi$, it follows that there exists a constant $C_{2}$, independent of $\varepsilon$, such that $J_{\varepsilon}[\bar{u}]\le C_{2}$. Therefore,

(42) $J_{\varepsilon}^{*}\le C_{2}$

for all positive ε . From condition (C10) and (42), we obtain

$\int_{0}^{1}\|u_{\varepsilon}^{*}(t)\|^{p}\,dt\le\frac{C_{2}}{a}.$

Thus, the family $\{u_{\varepsilon}^{*}\}$ is relatively weakly compact in $L_{p}(0,1)$. Let $u_{\varepsilon_{n}}^{*}(t)$ be a sequence of optimal controls that converges weakly to $u_{0}(t)$. From Mazur's lemma, it follows that $u_{0}(t)\in V$ for $t\in[0,1]$, meaning that $u_{0}(t)$ is an admissible control.

Let $y(t)$ be the solution of the Cauchy problem (11) with $u(t)=u_{0}(t)$. By Lemma 3.2, the solution $x_{\varepsilon_{n}}(t,u_{\varepsilon_{n}}^{*})$ of the Cauchy problem (8) converges uniformly, with respect to $t\in[0,1]$, to $y(t)$ as $\varepsilon_{n}\to 0$.

For any η > 0 , we have

(43) $J_{\varepsilon_{n}}^{*}\le J_{\varepsilon_{n}}[u^{*}]=J_{0}[u^{*}]+J_{\varepsilon_{n}}[u^{*}]-J_{0}[u^{*}]=J_{0}^{*}+J_{\varepsilon_{n}}[u^{*}]-J_{0}[u^{*}].$

Again, according to Lemma 3.2, the solution $x_{\varepsilon_{n}}(t,u^{*})$ of the Cauchy problem (8) converges uniformly, with respect to $t\in[0,1]$, to $\xi^{*}(t)$ as $\varepsilon_{n}\to 0$. Hence,

$|J_{\varepsilon_{n}}[u^{*}]-J_{0}[u^{*}]|\le\int_{0}^{1}|A(t,x_{\varepsilon_{n}}(t,u^{*}))-A(t,\xi^{*}(t))|\,dt+|\Phi(x_{\varepsilon_{n}}(1,u^{*}))-\Phi(\xi^{*}(1))|\to 0,\quad\varepsilon_{n}\to 0.$

Thus, for any η > 0 , there exists ε ¯ such that, for ε n < ε ¯ ,

(44) $|J_{\varepsilon_{n}}[u^{*}]-J_{0}[u^{*}]|<\eta.$

This, together with (43), implies

(45) $J_{\varepsilon_{n}}^{*}\le J_{0}^{*}+\eta.$

On the other hand, we have

(46) $J_{0}^{*}\le J_{0}[u_{\varepsilon_{n}}^{*}]=J_{\varepsilon_{n}}^{*}+J_{0}[u_{\varepsilon_{n}}^{*}]-J_{\varepsilon_{n}}[u_{\varepsilon_{n}}^{*}].$

Let us consider an auxiliary system

(47) $\dot{z}_{n}=f_{0}(z_{n})+f_{1}(z_{n})u_{\varepsilon_{n}}^{*},\qquad z_{n}(0)=x_{0},$

and system

(48) $\dot{y}=f_{0}(y)+f_{1}(y)u_{0},\qquad y(0)=x_{0}.$

Applying Lemma 3.2 to systems (47) and (48), we obtain

$\sup_{t\in[0,1]}\|z_{n}(t)-y(t)\|\to 0,\quad n\to\infty.$

From this, taking into account the uniform convergence of x ε n * to y , it follows that

$\sup_{t\in[0,1]}\|x_{\varepsilon_{n}}^{*}(t)-z_{n}(t)\|\to 0,\quad n\to\infty.$

Hence,

$|J_{\varepsilon_{n}}[u_{\varepsilon_{n}}^{*}]-J_{0}[u_{\varepsilon_{n}}^{*}]|\le\int_{0}^{1}|A(t,x_{\varepsilon_{n}}^{*}(t))-A(t,z_{n}(t))|\,dt+\int_{0}^{1}|A(t,z_{n}(t))-A(t,y(t))|\,dt+|\Phi(x_{\varepsilon_{n}}^{*}(1))-\Phi(z_{n}(1))|+|\Phi(z_{n}(1))-\Phi(y(1))|\to 0,\quad n\to\infty,$

due to the uniform continuity of A ( t , x ) on the compact and the obvious estimates

$\sup_{t\in[0,1]}\|x_{\varepsilon_{n}}^{*}(t)\|\le C_{3},\qquad\sup_{t\in[0,1]}\|z_{n}(t)\|\le C_{3}$

for some constant C 3 > 0 independent of n .

Thus, for an arbitrary $\eta>0$, there exists $\bar{\varepsilon}_{1}$ such that, for $\varepsilon_{n}<\bar{\varepsilon}_{1}$,

$|J_{\varepsilon_{n}}[u_{\varepsilon_{n}}^{*}]-J_{0}[u_{\varepsilon_{n}}^{*}]|<\eta.$

Consequently, by (46), we obtain

(49) $J_{0}^{*}\le J_{\varepsilon_{n}}^{*}+\eta,$

for $\varepsilon_{n}<\bar{\varepsilon}_{1}$.

Then, if $\varepsilon_{n}<\min\{\bar{\varepsilon},\bar{\varepsilon}_{1}\}$, it follows from (45) and (49) that $|J_{0}^{*}-J_{\varepsilon_{n}}^{*}|\le\eta$, which means

(50) $J_{\varepsilon_{n}}^{*}\to J_{0}^{*},\quad\varepsilon_{n}\to 0.$

Since from any sequence in the family of controls $\{u_{\varepsilon}^{*}\}$ one can extract a weakly convergent subsequence $\{u_{\varepsilon_{m}}^{*}\}$, for which relation (50) holds by the same argument as above, we obtain

(51) $J_{\varepsilon}^{*}\to J_{0}^{*},\quad\varepsilon\to 0,$

which proves statement (i) of the theorem.

Now, let us prove statement (ii). Since $x_{\varepsilon}(t,u^{*})$ converges to $\xi^{*}(t)$, uniformly with respect to $t\in[0,1]$, as $\varepsilon\to 0$, arguments similar to those used in the derivation of estimate (44) yield the inequality

(52) $|J_{\varepsilon}[u^{*}]-J_{0}[u^{*}]|<\eta,$

which holds for any η > 0 for sufficiently small ε . Therefore,

$|J_{\varepsilon}^{*}-J_{\varepsilon}[u^{*}]|\le|J_{\varepsilon}^{*}-J_{0}^{*}|+|J_{\varepsilon}[u^{*}]-J_{0}[u^{*}]|.$

From (51) and (52), statement (ii) follows.

Now, let us prove statement (iii). To do so, we will show that $(y(t),u_{0}(t))$ is indeed an optimal solution of the averaged problem (11), (12). We have

$J_{\varepsilon_{n}}^{*}=\int_{0}^{1}[A(t,x_{\varepsilon_{n}}^{*}(t))+B(t,u_{\varepsilon_{n}}^{*}(t))]\,dt+\Phi(x_{\varepsilon_{n}}^{*}(1)).$

Letting $n\to\infty$ and taking into account (50) and condition (C10), we obtain

$J_{0}^{*}=\int_{0}^{1}A(t,y(t))\,dt+\lim_{\varepsilon_{n}\to 0}\int_{0}^{1}B(t,u_{\varepsilon_{n}}^{*}(t))\,dt+\Phi(y(1))\ge\int_{0}^{1}[A(t,y(t))+B(t,u_{0}(t))]\,dt+\Phi(y(1)).$

From this, it follows that ( y ( t ) , u 0 ( t ) ) is an optimal pair.

The final statement of the theorem is proved similarly to the corresponding statement of Theorem 2.1.□

5 Examples

Example 1 (Weakly nonlinear regulator). Consider the following optimal control problem:

(53) $\dot{x}(t)=f\Big(\frac{t}{\varepsilon}\Big)x+f_{1}\Big(\frac{t}{\varepsilon},x(t),\int_{0}^{t}\varphi(t,s,x(s))\,ds\Big)+f_{2}(t)u(t),\qquad x(0)=x_{0},$

where $t\in[0,T]$, $x\in\mathbb{R}^{d}$, $u\in\mathbb{R}^{m}$, with the quality criterion

(54) $J_{\varepsilon}[u]=\int_{0}^{T}\big[(C(t)x_{\varepsilon}(t),x_{\varepsilon}(t))+(F(t)u(t),u(t))\big]dt+(Dx_{\varepsilon}(T),x_{\varepsilon}(T))\to\inf,$

where $C(t)$ and $D$ are symmetric non-negative definite $d\times d$ matrices, $F(t)$ is a positive definite $m\times m$ matrix, $f(t)$ is a $d\times d$ matrix, $f_{1}(t,x,y)$ is a $d$-dimensional vector function defined for $t\in[0,T]$, $x\in\mathbb{R}^{d}$, $y\in\mathbb{R}^{n}$, and $f_{2}(t)$ is a $d\times m$ matrix.

Since the terms in functional (54) are quadratic forms, this problem is referred to as a weakly nonlinear regulator problem. The classical linear case has been studied, for example, in [15].

We assume that the functions $f_{1}$ and $\varphi$ satisfy conditions (C7) and (C8), and that the functions $f$ and $f_{2}$ are continuous.


Let $\varphi_{1}(t,x)=\int_{0}^{t}\varphi(t,s,x)\,ds$. Suppose that the following limits exist uniformly with respect to $x\in\mathbb{R}^{d}$ and $u\in\mathbb{R}^{m}$:

$\lim_{\varepsilon\to 0}\left\|\int_{0}^{t}\Big[f\Big(\frac{\tau}{\varepsilon}\Big)-A_{0}\Big]d\tau\right\|=0,$

$\lim_{\varepsilon\to 0}\left\|\int_{0}^{t}f_{1}\Big(\frac{\tau}{\varepsilon},x,\varphi_{1}(\tau,x)\Big)d\tau\right\|=0.$

We associate the optimal control problems (53) and (54) with the corresponding averaged problem

(55) $\dot{\xi}=A_{0}\xi+f_{2}(t)u,\quad\xi(0)=x_{0},\qquad J_{0}[u]=\int_{0}^{T}\big[(C(t)\xi(t),\xi(t))+(F(t)u(t),u(t))\big]dt+(D\xi(T),\xi(T))\to\inf.$

Problem (55) is a classical linear regulator problem. It is well known that its solution reduces to the matrix Riccati equation. In particular, when $f_{2}$, $C$, and $F$ are constant, this equation is autonomous, and in the one-dimensional case it can be solved exactly. Consequently, the averaged problem (55) is solvable. The results proved above then imply that the optimal control found for the averaged problem is "almost" optimal for the original problem.
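For completeness, we recall the standard form of this reduction (a classical fact about the linear-quadratic regulator, see, e.g., [15]; it is stated here only for orientation and is not taken from the text above). For the averaged problem (55), the optimal control is the linear feedback

$u^{*}(t)=-F^{-1}(t)f_{2}^{\mathrm{T}}(t)P(t)\xi^{*}(t),$

where the symmetric matrix $P(t)$ solves the matrix Riccati equation

$\dot{P}(t)=-P(t)A_{0}-A_{0}^{\mathrm{T}}P(t)+P(t)f_{2}(t)F^{-1}(t)f_{2}^{\mathrm{T}}(t)P(t)-C(t),\qquad P(T)=D.$

When $f_{2}$, $C$, and $F$ are constant, this is an autonomous equation, and in the one-dimensional case it is a scalar Riccati equation that can be integrated in closed form.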

The following example is illustrative and demonstrates the convergence of the optimal controls and trajectories of the original problem to those of the averaged problem.

Example 2. We consider the optimal control problem

(56) $\dot{x}_{\varepsilon}=\sin\frac{t}{\varepsilon}\int_{0}^{t}(x_{\varepsilon}(s)\cos s)\,ds+u,\quad x_{\varepsilon}(0)=1,\quad t\in[0,1],\qquad J_{\varepsilon}[u]=\int_{0}^{1}(x_{\varepsilon}(t)-u(t))^{2}\,dt\to\inf.$

Here $\varphi_{1}(t,x)=\int_{0}^{t}x\cos s\,ds=x\sin t$. Then, according to (10), we have

$\lim_{\varepsilon\to 0}\int_{0}^{t}x\sin\frac{s}{\varepsilon}\sin s\,ds=\lim_{\varepsilon\to 0}\frac{x}{2}\left[\frac{\varepsilon}{1-\varepsilon}\sin\Big(\Big(\frac{1}{\varepsilon}-1\Big)t\Big)-\frac{\varepsilon}{1+\varepsilon}\sin\Big(\Big(\frac{1}{\varepsilon}+1\Big)t\Big)\right]=0.$

So, the averaged problem is as follows:

(57) $\dot{\xi}=u,\quad\xi(0)=1,\qquad J[u]=\int_{0}^{1}(\xi(t)-u(t))^{2}\,dt\to\inf.$

The optimal control of problem (57) is obviously $u^{*}(t)=\xi^{*}(t)$, where $\xi^{*}(t)$ is the solution of the Cauchy problem

$\frac{d\xi^{*}}{dt}=\xi^{*},\qquad\xi^{*}(0)=1.$

Hence, $\xi^{*}(t)=u^{*}(t)=e^{t}$.

For the original problem (56), it is also clear that $u_{\varepsilon}^{*}(t)=x_{\varepsilon}^{*}(t)$, where $x_{\varepsilon}^{*}(t)$ is the solution of the Cauchy problem

(58) $\dot{x}_{\varepsilon}=\sin\frac{t}{\varepsilon}\int_{0}^{t}x_{\varepsilon}(s)\cos s\,ds+x_{\varepsilon},\qquad x_{\varepsilon}(0)=1.$

The graphs and numerical results below illustrate the convergence of the solution of problem (58) to the function $e^{t}$ as $\varepsilon\to 0$ (Figure 1 and Table 1).

Figure 1

Convergence of the solution $x_{\varepsilon}(t)$ of the original problem (58) to the solution $\xi^{*}(t)=e^{t}$ of the averaged problem (57) as $\varepsilon\to 0$.

Table 1

Numerical comparison between the solutions of the original problem (58) and the averaged problem (57): values of $x_{\varepsilon}(t)$, $e^{t}$, and $|x_{\varepsilon}(t)-e^{t}|$ at selected points

t                                                      0.20              0.40              0.60              0.80              1.00
$e^{t}$                                                1.221403          1.491825          1.822119          2.225541          2.718282
$x_{\varepsilon}(t)$, $\varepsilon=10^{-2}$            1.220604          1.495096          1.829428          2.226669          2.706371
$x_{\varepsilon}(t)$, $\varepsilon=10^{-4}$            1.218997          1.485621          1.813555          2.217434          2.707980
$|x_{\varepsilon}(t)-e^{t}|$, $\varepsilon=10^{-2}$    7.985 × 10^{-4}   3.272 × 10^{-3}   7.309 × 10^{-3}   1.128 × 10^{-3}   1.191 × 10^{-2}
$|x_{\varepsilon}(t)-e^{t}|$, $\varepsilon=10^{-4}$    2.405 × 10^{-3}   6.203 × 10^{-3}   8.564 × 10^{-3}   8.107 × 10^{-3}   1.030 × 10^{-2}
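The following short script is an illustrative sketch (not part of the original paper) of how the comparison in Table 1 can be reproduced: it integrates the integro-differential equation (58) with a simple fixed-step scheme, treating the memory term $\int_{0}^{t}x_{\varepsilon}(s)\cos s\,ds$ as an additional state variable, and compares the result with $\xi^{*}(t)=e^{t}$. The step size, output points, and values of $\varepsilon$ below are arbitrary choices made for the illustration, so the exact numbers will differ slightly from Table 1 depending on the integrator.

```python
import math

def simulate(eps, dt=1e-5, t_end=1.0):
    """Integrate (58) rewritten as a 2D system:
       x' = sin(t/eps) * y + x,   y' = x * cos(t),
       with x(0) = 1 and y(0) = 0, where y(t) is the memory integral,
       using the classical fixed-step RK4 scheme."""
    def rhs(t, x, y):
        return (math.sin(t / eps) * y + x, x * math.cos(t))

    t, x, y = 0.0, 1.0, 0.0
    out = {}
    checkpoints = [0.2, 0.4, 0.6, 0.8, 1.0]
    n_steps = int(round(t_end / dt))
    for k in range(n_steps):
        k1x, k1y = rhs(t, x, y)
        k2x, k2y = rhs(t + dt / 2, x + dt / 2 * k1x, y + dt / 2 * k1y)
        k3x, k3y = rhs(t + dt / 2, x + dt / 2 * k2x, y + dt / 2 * k2y)
        k4x, k4y = rhs(t + dt, x + dt * k3x, y + dt * k3y)
        x += dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        y += dt / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        t = (k + 1) * dt
        for c in checkpoints:
            if abs(t - c) < dt / 2:
                out[c] = x
    return out

if __name__ == "__main__":
    for eps in (1e-2, 1e-3):  # epsilon values chosen for illustration only
        vals = simulate(eps)
        for c in sorted(vals):
            exact = math.exp(c)  # averaged solution xi*(t) = e^t
            print(f"eps={eps:7.0e}  t={c:.2f}  x_eps={vals[c]:.6f}  "
                  f"e^t={exact:.6f}  |diff|={abs(vals[c] - exact):.3e}")
```

With these (arbitrary) settings, the printed differences shrink as $\varepsilon$ decreases, in line with Figure 1 and Table 1.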
  1. Funding information: This research was funded by the Science Committee of the Ministry of Science and Higher Education of the Republic of Kazakhstan (Grant No. AP23485618). The research of O. Stanzhytskyi was partially supported by Grant No. 0124U001412 provided by the Ministry of Education and Science of Ukraine.

  2. Author contributions: All authors have equally contributed to this work. All authors read and approved the final manuscript.

  3. Conflict of interest: The authors state no conflict of interest.

References

[1] A. N. Filatov, Averaging in systems of integral and integro-differential equations, in: Research in Analytical Mechanics, Nauka, Tashkent, 1965 [in Russian].

[2] A. N. Filatov and A. G. Umarov, On averaging method in systems of integro-differential equations, Dokl. Akad. Nauk 327 (1985), no. 6, 12–14 [in Russian].

[3] O. M. Stanzhytskyi, S. G. Karakenova, and R. E. Uteshova, Averaging method and boundary value problems for systems of Fredholm integro-differential equations, Nonlinear Dyn. Syst. Theory 21 (2021), no. 1, 100–113.

[4] A. N. Stanzhitskii and T. V. Dobrozdii, Study of optimal control problems on the half-line by the averaging method, Differ. Equ. 47 (2011), no. 2, 264–277, DOI: https://doi.org/10.1134/S0012266111020121.

[5] T. V. Nosenko and O. M. Stanzhytskyi, Averaging method in some problems of optimal control, Nonlinear Oscil. 48 (2008), no. 11, 539–547, DOI: https://doi.org/10.1007/s11072-009-0049-5.

[6] T. V. Koval'chuk, V. V. Mohyl'ova, and T. V. Shovkoplyas, Averaging method in problems of optimal control over impulsive systems, J. Math. Sci. (N.Y.) 247 (2020), no. 2, 314–327, DOI: https://doi.org/10.1007/s10958-020-04804-2.

[7] T. V. Koval'chuk, V. V. Mogylova, O. M. Stanzhytskyi, and T. V. Shovkoplyas, Application of the averaging method to the problems of optimal control of the impulse systems, Carpathian Math. Publ. 12 (2020), no. 2, 504–521, DOI: https://doi.org/10.15330/cmp.12.2.504-521.

[8] V. I. Kravets', T. V. Koval'chuk, V. V. Mohyl'ova, and O. M. Stanzhytskyi, Application of the method of averaging to the problems of optimal control over functional-differential equations, Ukrainian Math. J. 70 (2018), no. 2, 232–242, DOI: https://doi.org/10.1007/s11253-018-1497-9.

[9] J. F. M. Al-Omari and S. A. Gourley, A nonlocal reaction-diffusion model for a single species with stage structure and distributed maturation delay, Eur. J. Appl. Math. 16 (2005), 37–51, DOI: https://doi.org/10.1017/S0956792504005716.

[10] A. Alawneh, K. Al-Khaled, and M. Al-Towaiq, Reliable algorithms for solving integro-differential equations with applications, Int. J. Comput. Math. 87 (2010), no. 7, 1538–1554, DOI: https://doi.org/10.1080/00207160802385818.

[11] H. R. Thieme, A model for the spatio-spread of an epidemic, J. Math. Biol. 4 (1977), 337–351, DOI: https://doi.org/10.1007/BF00275082.

[12] V. Mogylova, R. Lakhva, and V. Kravets, Optimal control problem for systems of integro-differential equations, J. Math. Sci. (N.Y.) 282 (2024), no. 6, 983–1007, DOI: https://doi.org/10.1007/s10958-024-07229-3.

[13] R. Lakhva, Z. Khaletska, and V. Mogylova, The optimal control problem for systems of integro-differential equations with finite and infinite horizon, Georgian Math. J. (2024), in press, DOI: https://doi.org/10.1515/gmj-2024-2065.

[14] K. Yosida, Functional Analysis, Springer-Verlag, Berlin, New York, 1980.

[15] W. H. Fleming and H. M. Soner, Controlled Markov Processes and Viscosity Solutions, vol. 25, Springer Science & Business Media, New York, 2006.

Received: 2025-02-06
Revised: 2025-05-05
Accepted: 2025-05-12
Published Online: 2025-06-13

© 2025 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
