Continuous-time limits of multi-period cost-of-capital margins

Open Access article

Hampus Engsner and Filip Lindskog

Published/Copyright: April 17, 2020

Abstract

We consider multi-period cost-of-capital valuation of a liability cash flow subject to repeated capital requirements that are partly financed by capital injections from capital providers with limited liability. Limited liability means that, in any given period, the capital provider is not liable for further payment in the event that the capital provided at the beginning of the period turns out to be insufficient to cover both the current-period payments and the updated value of the remaining cash flow. The liability cash flow is modeled as a continuous-time stochastic process on [ 0 , T ] . The multi-period structure is given by a partition of [ 0 , T ] into subintervals, and on the corresponding finite set of times, a discrete-time cost-of-capital-margin process is defined. Our main objective is the analysis of existence and properties of continuous-time limits of discrete-time cost-of-capital-margin processes corresponding to a sequence of partitions whose meshes tend to zero. Moreover, we provide explicit expressions for the limit processes when cash flows are given by Itô diffusions and processes with independent increments.

MSC 2010: 91G50; 60B12; 91B30

1 Introduction

1.1 Multi-period cost-of-capital valuation

This paper focuses on multi-period cost-of-capital valuation of a cumulative liability cash flow L = { L t } t [ 0 , T ] subject to repeated capital requirements at the beginning of each time period, where the time periods form a partition of [ 0 , T ] . Here T is a time after which no cash flow occurs. In line with current regulatory frameworks, the time periods may be one-year periods. However, we will investigate the effects of varying the number and lengths of the periods and consider sequences of partitions of [ 0 , T ] whose meshes tend to 0. That is, we will analyze continuous-time limits of discrete-time cost-of-capital valuations of the liability cash flow L. In what follows, all cash flows are discounted by a given numéraire, or equivalently, denoted in units of this numéraire. A classical bank account numéraire, a rolling one-period bond, may be a natural choice.

The valuation approach presented in this paper can be applied to arbitrary liability cash flows, including those that depend on the values of traded financial instruments, by considering (partial) replication by an appropriate choice of replicating portfolio. The value of the liability cash flow is then the sum of the market price of the replicating portfolio and the cost-of-capital margin for the residual liability cash flow (corresponding to the replication error). Market-consistent valuation requires that if the liability cash flow is fully replicable, then the replicating portfolio must replicate the liability cash flow perfectly. Consequently, for such liability cash flows, the residual liability cash flow is zero at all times, and the value is simply the market price of the replicating portfolio. Replication is not the focus of this paper. Therefore, the liability cash flow should in what follows be interpreted as either the residual liability cash flow that remains after partial replication or a liability cash flow that is not replicable by financial replication instruments.

In order to clarify the economic motivation of the valuation setup, let T be a positive integer, and consider times 0 , 1 , , T . The multi-period cost-of-capital margin for the liability cash flow L = { L t } t [ 0 , T ] is based on considering a hypothetical transfer of the liability cash flow at time 0 from the company currently liable for this cash flow to another company whose single purpose is to manage the run-off of the liability. The company receiving the liability cash flow has no own funds but receives an amount V 0 together with the liability. In order to meet the externally imposed capital requirements associated with the liability, according to the regulatory environment, the receiver of the liability cash flow requests capital injections from a capital provider who will make the capital injection if the expected return is good enough. The mathematical problem arising is the determination of V 0 : this value is not a priori given but rather a value implied by the repeated financing of the capital requirements by a capital provider demanding compensation for providing capital injections.

Let V t denote the cost-of-capital margin for the liability cash flow { L s } s ( t , T ] , that is beyond time t. In particular, V T = 0 . Assume that the amount V t is available at time t and that the required capital is VaR t , p ( - Δ L t + 1 - V t + 1 ) > V t , where Δ L t + 1 :- L t + 1 - L t is the accumulated cash flow during the time period ( t , t + 1 ] and VaR t , p is the risk measure value-at-risk conditional on the information available at time t. A capital provider is asked to provide the difference VaR t , p ( - Δ L t + 1 - V t + 1 ) - V t between the required and the available capital. If this capital is provided, then, in return, the capital provider receives the amount ( VaR t , p ( - Δ L t + 1 - V t + 1 ) - Δ L t + 1 - V t + 1 ) + at time t + 1 , where ( x ) + :- max { x , 0 } . The rationale for the amount ( VaR t , p ( - Δ L t + 1 - V t + 1 ) - Δ L t + 1 - V t + 1 ) + is the following. The capital provider is entitled to any excess capital at time t + 1 above what is needed for the one-period payment Δ L t + 1 plus V t + 1 that is needed to match the cost-of-capital margin for the remaining liability cash flow from time t + 1 onwards. If at time t + 1 there is a deficit of capital so that Δ L t + 1 + V t + 1 > VaR t , p ( - Δ L t + 1 - V t + 1 ) , then the VaR t , p ( - Δ L t + 1 - V t + 1 ) units of the numéraire asset are paid to the creditors/policyholders and the iterative procedure for managing the run-off of the liability cash flow stops.

A capital provider will accept providing the capital at time t if the expected return is good enough, in the sense that

(1.1) 𝔼_t[(VaR_{t,p}(−ΔL_{t+1} − V_{t+1}) − ΔL_{t+1} − V_{t+1})_+] ≥ (1 + η_t)(VaR_{t,p}(−ΔL_{t+1} − V_{t+1}) − V_t),

where 𝔼_t denotes conditional expectation with respect to the information at time t, and η_t ≥ 0 is the excess expected rate of return (above that of the numéraire asset) at time t on the capital provided until time t + 1. In what follows, we will simply consider {η_t}_{t=0}^{T−1} to be a given stochastic process.

Given the liability cash flow { L t } t [ 0 , T ] and a discrete-time stochastic process { η t } t = 0 T - 1 , and assuming competition between potential capital providers so that better than acceptable conditions are not granted, the acceptability condition (1.1) immediately gives the following backward recursion for { V t } t = 0 T :

V_t = VaR_{t,p}(−ΔL_{t+1} − V_{t+1}) − (1/(1 + η_t)) 𝔼_t[(VaR_{t,p}(−ΔL_{t+1} − V_{t+1}) − ΔL_{t+1} − V_{t+1})_+],   V_T = 0.

Notice that the values { V t } t = 0 T are not a priori given but rather the solution to the above recursion given a model for both { L t } t [ 0 , T ] and { η t } t = 0 T - 1 . We refer to Remark 4 for comments on other more elaborate acceptability criteria for risk averse capital providers.
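When the increments ΔL_1, …, ΔL_T are independent, each V_t is nonrandom and the recursion can be evaluated directly by Monte Carlo simulation. The following minimal Python sketch (with a hypothetical lognormal payment model and illustrative values of the quantile level and of η_t, none taken from the paper) approximates the conditional value-at-risk by an empirical quantile of simulated increments.

```python
import numpy as np

def one_step_value(y, p, eta):
    """One-step value phi_t(Y) for a sample y of Y = Delta L_{t+1} + V_{t+1}:
    the required capital (the p-quantile of Y, with p close to one) minus the
    capital provider's expected limited-liability payoff divided by 1 + eta."""
    var = np.quantile(y, p)
    provider_payoff = np.maximum(var - y, 0.0)
    return var - np.mean(provider_payoff) / (1.0 + eta)

def cost_of_capital_margin(increment_samples, p=0.995, eta=0.06):
    """Backward recursion V_t = phi_t(Delta L_{t+1} + V_{t+1}), V_T = 0, for a
    liability whose yearly increments are independent (each V_t is a number).
    increment_samples[t] holds simulated values of Delta L_{t+1}."""
    V = 0.0
    for t in reversed(range(len(increment_samples))):
        V = one_step_value(increment_samples[t] + V, p, eta)
    return V

# Hypothetical illustration: T = 3 one-year periods, lognormal claim payments.
rng = np.random.default_rng(1)
samples = [rng.lognormal(mean=0.0, sigma=0.5, size=100_000) for _ in range(3)]
print(cost_of_capital_margin(samples))
```

For dependent cash flows, the conditional expectation, value-at-risk and expected shortfall at each step would have to be estimated, e.g. by nested simulation or regression; that is beyond this sketch.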

The above approach to valuation of insurance liabilities is based on the framework presented in [15] establishing a theoretical basis for the current regulatory frameworks for the insurance industry, in particular Solvency 2. A balance sheet interpretation of the involved quantities is as follows. Suppose that at time t the liability cash flow in run-off corresponding to obligations to policyholders can be decomposed into a sum of a fully replicable cash flow with market price π t and a nonhedgable residual liability cash flow with value V t . At time t, after the capital provider has invested capital to ensure continuation of the run-off, the value of the assets of the reference undertaking is the sum of the market price π t of the portfolio replicating the replicable part of the liabilities and the buffer capital VaR t , p ( - Δ L t + 1 - V t + 1 ) for nonhedgable risks prescribed by the regulator. The difference between the asset value and the value of the liabilities to policyholders is thus SCR t :- VaR t , p ( - Δ L t + 1 - V t + 1 ) - V t which matches precisely the capital investment. Including the value SCR t of the reference undertaking’s liability to the capital provider we see that the asset value balances the liability value. We refer to [15] for further details.

1.2 Related literature

The valuation framework presented in Section 1.1 is based on [15] aiming to establish the theoretical basis for the current regulatory frameworks for the insurance industry. The approach to multi-period cost-of-capital valuation above was studied in [7] for general risk measures and acceptability criteria. The choice of one-year periods corresponds to the current regulatory solvency frameworks under which both banks and insurance companies operate, and is in line with accounting practice. However, it is quite reasonable to consider the financing of liability cash flows subject to repeated capital requirements by capital injections at a higher frequency. Moreover, by letting the length of the time periods tend to zero, we may derive explicit continuous-time formulas for the limiting cost-of-capital-margin process whereas solutions to discrete-time backward recursions of the above type can often only be obtained numerically. It is also interesting to analyze which features of a liability cash flow vanish and which persist in the limit process from discrete-time valuation to continuous-time valuation as the mesh of the partition of time periods tends to zero.

There are similarities with our objectives here and works, such as [22, 16, 14], analyzing continuous-time dynamic risk measures (or risk-adjusted values) which can be represented as limits of discrete-time risk measures in multi-period models. However, there are also major differences. A detailed comparison of our setup with that considered in [16, 22] is found in Remark 3. The aim in [14] is different from ours: there the objective is the construction and analysis of dynamic risk measures expressed in terms of backward stochastic differential equations (BSDEs) that arise as limits of iterated spectral risk measures. Notice that a spectral risk measure is a form of coherent and convex risk measure.

The vast majority of works on dynamic risk measurement consider coherent or convex risk measures, and many of them aim for a representation of the risk measure in terms of a solution to a BSDE. See [1, 2, 3, 4, 8, 10, 13, 18, 19, 20] and references therein. The discrete-time cost-of-capital-margin process { V t } t = 0 T from the multi-period cost-of-capital framework above shares most of the properties of the multi-period risk-adjusted values in [2]; precise statements are found in [7]. In particular, the properties called time-consistency and recursiveness hold. However, the limited liability of the capital provider, which is an essential economic property, makes the cost-of-capital margin V 0 , seen as a functional applied to a cumulative liability cash flow, lack the sub/super-additivity property in general. This fact holds irrespectively of whether VaR t , p is replaced by a coherent alternative. Therefore, the general representation results for dynamic coherent or convex risk measures, or similarly for multi-period risk-adjusted values, are not available in our multi-period cost-of-capital valuation setup. This fact makes the mathematical analysis here very different from that in works on dynamic coherent and convex risk measures.

1.3 Outline

Section 2 presents basic results for discrete-time cost-of-capital-margin processes for a given continuous-time liability cash flow. In this setting, the cost-of-capital-margin process is defined on a time grid

0 = τ_0 < ⋯ < τ_m = T

corresponding to an arbitrary partition τ of [ 0 , T ] .

Section 3 presents the main results of this paper on existence and properties of continuous-time limits of a sequence of discrete-time cost-of-capital-margin processes for a given continuous-time liability cash flow. The continuous-time limit, defined in Definition 4, arises by considering an arbitrary sequence { τ m } m = 1 of partitions whose meshes tend to 0.

Theorem 2 gives mild conditions under which the continuous-time cost-of-capital margin of a sum of two cash flows decomposes into a sum of the corresponding two continuous-time cost-of-capital margins. Moreover, it gives mild conditions under which the continuous-time cost-of-capital-margin process of a cash flow degenerates into a process of conditional expectations of the remaining cash flow.

Theorem 3 presents a wide class of Itô processes that satisfy the conditions of Theorem 2 under which the continuous-time cost-of-capital-margin process is a process of conditional expectations of the remaining cash flow.

Theorem 4 derives the continuous-time limit of discrete-time cost-of-capital-margin processes when the underlying liability cash flow is given by an additive process with a jump component, with Lévy processes and compound Poisson processes driven by inhomogeneous Poisson processes as special cases.

All proofs of the main results are found in Section 4. Section 5 contains technical lemmas used in the proofs of the main results that may also be of independent interest.

2 Discrete-time cost-of-capital-margin processes

In this section, we will present the mathematical framework for the multi-period cost-of-capital margin for a continuous-time cumulative liability cash flow.

Fix T > 0, and consider a filtered, complete probability space (Ω, ℱ, 𝔽, ℙ), where 𝔽 := {ℱ_t}_{t∈[0,T]} satisfies the so-called usual conditions; see e.g. [17, Chapter 1]. Write L^0(ℱ_t) (L^1(ℱ_t)) for the set of ℱ_t-measurable (integrable) random variables. Write L^0(𝔽) (L^1(𝔽)) for the set of 𝔽-adapted stochastic processes X with X_t ∈ L^0(ℱ_t) (X_t ∈ L^1(ℱ_t)) for every t ∈ [0, T]. Moreover, X ∈ L^0(𝔽) is said to be càdlàg if, for all ω in the complement of a null set, the sample paths t ↦ X_t(ω) are right-continuous with left limits. For t < u and Y ℱ_u-measurable, we use subscript t to denote conditioning on ℱ_t,

F_{t,Y}(y) := ℙ_t(Y ≤ y) := ℙ(Y ≤ y | ℱ_t),   𝔼_t[Y] := 𝔼[Y | ℱ_t].

Relations between random variables are to be understood in the ℙ-almost sure sense. We consider an arbitrary partition of [0, T] into subintervals and discrete-time value processes evaluated at the time points corresponding to the given partition. We will call a set of points τ := {τ_k}_{k=0}^m with 0 = τ_0 < ⋯ < τ_m = T a partition of the time interval [0, T]. For any such partition, we denote by δ_k := τ_{k+1} − τ_k the lengths of the subintervals, and mesh(τ) := max_{0≤k≤m−1} δ_k.

For Y ∈ L^0(ℱ_{τ_k+δ_k}), the value-at-risk of Y at the level 1 − α_{δ_k} ∈ (0, 1) conditional on ℱ_{τ_k} is defined as

VaR_{τ_k,1−α_{δ_k}}(Y) := F_{τ_k,−Y}^{-1}(α_{δ_k}) := ess inf{m ∈ L^0(ℱ_{τ_k}) : F_{τ_k,−Y}(m) ≥ α_{δ_k}}.

The discrete-time cost-of-capital-margin process for a liability cash flow is defined in Definition 2 below in terms of a backward recursion of the kind presented in Section 1.1 and the following one-step valuation mapping.

Definition 1.

For Y ∈ L^1(ℱ_{τ_k+δ_k}), α_{δ_k} ∈ (0, 1) and a nonnegative η_{τ_k} ∈ L^0(ℱ_{τ_k}), the one-step valuation mapping is defined as

φ_{τ_k}^{δ_k}(Y) := VaR_{τ_k,1−α_{δ_k}}(−Y) − (1/(1 + η_{τ_k})) 𝔼_{τ_k}[(VaR_{τ_k,1−α_{δ_k}}(−Y) − Y)_+].

Remark 1.

Notice that the backward recursion for the discrete-time cost-of-capital-margin process { V t } t = 0 T in Section 1.1 may be expressed as V t = φ t 1 ( L t + 1 - L t + V t + 1 ) , V T = 0 , and corresponds to partitioning [ 0 , T ] into subintervals of lengths one.

In relevant applications, 1 − α_{δ_k} and η_{τ_k} are both close to 0: η_{τ_k} is the expected excess rate of return, above that for the numéraire asset, for the capital provider on the provided capital ensuring solvency at time τ_k, and α_{δ_k} is (typically, noting that F_{t,−Y}(F_{t,−Y}^{-1}(α_{δ_k})−) ≤ α_{δ_k} ≤ F_{t,−Y}(F_{t,−Y}^{-1}(α_{δ_k}))) the probability that the solvency capital provided at time τ_k is found sufficient at time τ_{k+1} = τ_k + δ_k.

An alternative expression for φ τ k δ k ( Y ) , see Lemma 2, is

φ_{τ_k}^{δ_k}(Y) = (1/(1 + η_{τ_k}))(𝔼_{τ_k}[Y] − (1 − α_{δ_k}) ES_{τ_k,1−α_{δ_k}}(−Y) + (1 − α_{δ_k} + η_{τ_k}) VaR_{τ_k,1−α_{δ_k}}(−Y)),

where ES_{τ_k,1−α_{δ_k}} denotes the expected shortfall conditional on ℱ_{τ_k} defined as

ES_{τ_k,1−α_δ}(−Y) := (1/(1 − α_δ)) ∫_0^{1−α_δ} VaR_{τ_k,p}(−Y) dp.
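The equality of the two expressions for the one-step valuation mapping can be checked numerically. The following minimal sketch (an arbitrary standard normal sample and hypothetical values of α_{δ_k} and η_{τ_k}) evaluates both forms with an empirical quantile and an empirical tail mean; they agree up to Monte Carlo error.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(size=1_000_000)   # a sample of Y; any integrable distribution will do
alpha, eta = 0.99, 0.05          # hypothetical alpha_{delta_k} and eta_{tau_k}

var = np.quantile(y, alpha)      # VaR_{tau_k,1-alpha}(-Y) = F_Y^{-1}(alpha)
es = y[y > var].mean()           # ES_{tau_k,1-alpha}(-Y): mean of Y beyond its alpha-quantile

phi_definition1 = var - np.mean(np.maximum(var - y, 0.0)) / (1.0 + eta)
phi_lemma2 = (y.mean() - (1 - alpha) * es + (1 - alpha + eta) * var) / (1.0 + eta)
print(phi_definition1, phi_lemma2)   # the two expressions agree up to simulation error
```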

Theorem 1.

Consider a partition τ of [0, T], 0 = τ_0 < ⋯ < τ_m = T. For each k, φ_{τ_k}^{δ_k} is a mapping from L^1(ℱ_{τ_k+δ_k}) to L^1(ℱ_{τ_k}) satisfying

  1. if λ ∈ L^1(ℱ_{τ_k}) and Y ∈ L^1(ℱ_{τ_{k+1}}), then φ_{τ_k}^{δ_k}(Y + λ) = φ_{τ_k}^{δ_k}(Y) + λ,

  2. if Y, Ỹ ∈ L^1(ℱ_{τ_{k+1}}) and Y ≤ Ỹ, then φ_{τ_k}^{δ_k}(Y) ≤ φ_{τ_k}^{δ_k}(Ỹ),

  3. φ_{τ_k}^{δ_k}(0) = 0.

Based on the statement of the above theorem, we may define the discrete-time cost-of-capital-margin process {V_t^τ(L)}_{t∈τ} of a continuous-time cumulative liability cash flow L ∈ L^1(𝔽). By Theorem 1,

{V_t^τ(L)}_{t∈τ} ∈ L^1({ℱ_t}_{t∈τ}).

Definition 2.

Given L ∈ L^1(𝔽) and a partition τ of [0, T], 0 = τ_0 < ⋯ < τ_m = T, the cost-of-capital-margin (CoCM) process {V_t^τ(L)}_{t∈τ} of L with respect to the partition τ and filtration 𝔽 is defined in terms of a sequence of one-step valuation mappings defined in Definition 1 as follows:

(2.1) V_{τ_k}^τ(L) := φ_{τ_k}^{δ_k}(ΔL_{τ_{k+1}} + V_{τ_{k+1}}^τ(L)),   V_T^τ(L) := 0,

where ΔL_{τ_{k+1}} := L_{τ_{k+1}} − L_{τ_k}.

Remark 2.

From Theorem 1, it follows that

V_0^τ(L) = φ_0^{δ_0} ∘ ⋯ ∘ φ_{τ_{m−1}}^{δ_{m−1}}(L_T),

where ∘ denotes composition of mappings. In particular, if L, L̃ ∈ L^1(𝔽) satisfy L_T ≤ L̃_T, then V_0^τ(L) ≤ V_0^τ(L̃). Moreover, since each φ_{τ_k}^{δ_k} inherits positive homogeneity, i.e. φ_{τ_k}^{δ_k}(cY) = c φ_{τ_k}^{δ_k}(Y) for c ≥ 0, from VaR_{τ_k,1−α_{δ_k}}, we have V_0^τ(cL) = c V_0^τ(L) for c ≥ 0.

In order to analyze continuous-time limits of sequences of discrete-time CoCM processes, we will need a stability property with respect to small perturbations of α δ k in the conditional risk measure VaR τ k , 1 - α δ k . Therefore, we introduce the notion of lower and upper one-step CoCM mappings and, in Definition 3, lower and upper discrete-time CoCM processes.

For β ∈ (0, 1) and Y ∈ L^1(ℱ_{τ_k+δ_k}), the lower and upper one-step valuation mappings are defined as

φ̌_{τ_k}^{δ_k,β}(Y) := φ_{τ_k}^{δ_k}(Y) + ((1 − α_{δ_k} + η_{τ_k})/(1 + η_{τ_k})) (VaR_{τ_k,1−α_{δ_k}+δ_k^{1+β}}(−Y) − VaR_{τ_k,1−α_{δ_k}}(−Y)),
φ̂_{τ_k}^{δ_k,β}(Y) := φ_{τ_k}^{δ_k}(Y) + ((1 − α_{δ_k} + η_{τ_k})/(1 + η_{τ_k})) (VaR_{τ_k,1−α_{δ_k}−δ_k^{1+β}}(−Y) − VaR_{τ_k,1−α_{δ_k}}(−Y)).

By the same arguments as in the proof of Theorem 1, φ ˇ τ k δ k , β and φ ^ τ k δ k , β are mappings from L 1 ( τ k + δ k ) to L 1 ( τ k ) . In particular, the lower and upper CoCM process { V ˇ t τ , β ( L ) } t τ and { V ^ t τ , β ( L ) } t τ are defined as follows.

Definition 3.

Given L ∈ L^1(𝔽) and a partition τ of [0, T], 0 = τ_0 < ⋯ < τ_m = T, the lower and upper CoCM processes {V̌_t^{τ,β}(L)}_{t∈τ} and {V̂_t^{τ,β}(L)}_{t∈τ} of L with respect to the partition τ and filtration 𝔽 are given by

V̌_{τ_k}^{τ,β}(L) := φ̌_{τ_k}^{δ_k,β}(ΔL_{τ_{k+1}} + V̌_{τ_{k+1}}^{τ,β}(L)),   V̌_T^{τ,β}(L) := 0,
V̂_{τ_k}^{τ,β}(L) := φ̂_{τ_k}^{δ_k,β}(ΔL_{τ_{k+1}} + V̂_{τ_{k+1}}^{τ,β}(L)),   V̂_T^{τ,β}(L) := 0.

Notice that V̌_t^{τ,β}(L) ≤ V_t^τ(L) ≤ V̂_t^{τ,β}(L) for all t ∈ τ.

The purpose of this paper is to study the behavior of the discrete-time CoCM processes when varying the partition τ of [ 0 , T ] and in particular the convergence to and properties of continuous-time limit processes appearing when letting the mesh of the partition tend to zero. Serious modeling of the random sequence { η τ k } k = 0 m - 1 of cost-of-capital rates requires modeling of how the risk aversion of the capital provider varies over time and also mechanisms for competition between capital providers. This aspect of the multi-period cost-of-capital margin is outside the scope of the current paper. We will throughout the remainder of this paper make the simplifying assumption that η τ k η δ k is nonrandom and depends only on the length δ k of the subinterval [ τ k , τ k + 1 ) and not on τ k .

Assumption 1.

η_{τ_k} ≡ η_{δ_k} is nonrandom and depends only on the length δ_k of the subinterval [τ_k, τ_{k+1}).

We will consider sequences {τ_m}_{m=1}^∞ of partitions of [0, T], 0 = τ_{m,0} < ⋯ < τ_{m,m} = T, with

lim_{m→∞} mesh(τ_m) = 0.

With δ_{m,k} := τ_{m,k+1} − τ_{m,k}, we further assume the existence of sequences {α_{δ_{m,k}}}_{k=0}^{m−1} and {η_{δ_{m,k}}}_{k=0}^{m−1} of nonrandom elements α_{δ_{m,k}}, η_{δ_{m,k}} ∈ (0, 1) such that, for some α, η ∈ (0, 1),

(2.2) lim_{m→∞} sup_{k≤m−1} |(1 − α_{δ_{m,k}})/δ_{m,k} + log(α)| = lim_{m→∞} sup_{k≤m−1} |η_{δ_{m,k}}/δ_{m,k} − log(1 + η)| = 0.

In order to make economic sense, the limits

lim_{m→∞} ∏_{k=0}^{m−1} α_{δ_{m,k}},   lim_{m→∞} ∏_{k=0}^{m−1} (1 + η_{δ_{m,k}})

should exist finitely and be strictly positive. For the first limit, the interpretation is that the probability that the repeated capital injections are sufficient throughout the time period [ 0 , T ] is some number strictly between zero and one. Similarly, for the second limit, the interpretation is that the capital provider’s aggregate expected return on the repeated capital injections is finite. It is shown in Lemma 11 that (2.2) implies

lim_{m→∞} ∏_{k=0}^{m−1} α_{δ_{m,k}} = α^T,   lim_{m→∞} ∏_{k=0}^{m−1} (1 + η_{δ_{m,k}}) = (1 + η)^T.
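For a uniform partition and hypothetical values of α and η, the two product limits are easy to check numerically, as in the following short sketch.

```python
import numpy as np

T, alpha, eta = 1.0, 0.95, 0.06                # hypothetical values of T, alpha and eta
for m in (10, 100, 1_000, 10_000):
    delta = T / m                              # uniform partition of [0, T]
    alpha_delta = 1.0 + delta * np.log(alpha)  # so (1 - alpha_delta)/delta -> -log(alpha)
    eta_delta = delta * np.log(1.0 + eta)      # so eta_delta/delta -> log(1 + eta)
    print(m, alpha_delta**m, (1.0 + eta_delta)**m)
print(alpha**T, (1.0 + eta)**T)                # the limiting values alpha^T and (1 + eta)^T
```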

Remark 3.

In our setup, as well as in [16, 22], discrete-time multi-period risk-adjusted values, given a partition τ = { τ k } k = 0 m of [ 0 , T ] , may be expressed as

Φ τ k τ ( X ) = φ τ k , τ k + 1 ( Φ τ k + 1 τ ( X ) ) , Φ T τ ( X ) = X T ,

where the mapping φ τ k , τ k + 1 is a mapping from a subspace of L 0 ( τ k + 1 ) to a subspace of L 0 ( τ k ) . In our setting,

X = L , Φ τ k τ ( X ) = L τ k + V τ k τ ( L ) , φ τ k , τ k + 1 = φ τ k δ k ,

where δ k = τ k + 1 - τ k and

φ τ k δ k ( Y ) = VaR τ k , 1 - α δ k ( - Y ) - 1 1 + η δ k 𝔼 τ k [ ( VaR τ k , 1 - α δ k ( - Y ) - Y ) + ] = 1 1 + η δ k ( 𝔼 τ k [ Y ] - ( 1 - α δ k ) ES τ k , 1 - α δ k ( - Y ) + ( 1 - α δ k + η δ k ) VaR τ k , 1 - α δ k ( - Y ) )

with 1 - α δ - δ log α and η δ δ log ( 1 + η ) as δ 0 . Notice from the last expression above for φ τ k δ k ( Y ) that, for very small values η δ k , φ τ k δ k ( Y ) < 𝔼 τ k [ Y ] . This inequality is a consequence of the limited liability for the capital provider. Notice also that φ τ k δ k ( Y ) > 𝔼 τ k [ Y ] is equivalent to

ES_{τ_k,1−α_{δ_k}}(−Y) − VaR_{τ_k,1−α_{δ_k}}(−Y) < (η_{δ_k}/(1 − α_{δ_k}))(VaR_{τ_k,1−α_{δ_k}}(−Y) − 𝔼_{τ_k}[Y]),

which, for 1 - α δ k ( 0 , 1 2 ) small and reasonable models for Y, holds for moderately large values η δ k .

In [16], actuarial valuation rules are extended from discrete to continuous time. Modified to our setting where financial values are expressed in the numéraire, the valuation rule most similar to the one considered here is

(2.3) φ τ k , τ k + 1 ( Y ) = 𝔼 τ k [ Y ] + η δ k VaR τ k , 1 - α ( - Y - 𝔼 τ k [ - Y ] ) = ( 1 - η δ k ) 𝔼 τ k [ Y ] + η δ k VaR τ k , 1 - α ( - Y ) ,

where η , α ( 0 , 1 ) are fixed constants. Notice that φ τ k , τ k + 1 ( Y ) > 𝔼 τ k [ Y ] if VaR τ k , 1 - α ( - Y ) > 𝔼 τ k [ Y ] . Although the expressions for φ τ k , τ k + 1 may appear similar, they are fundamentally different. The mapping φ τ k , τ k + 1 in [16] is a priori given by an actuarial valuation rule whereas, in our case, it is the result of the capital providers’ acceptability condition for financing the repeated capital requirements, taking the capital providers’ limited liability into account.

In [22], a negative liability value corresponds to a positive value in our setting, and vice versa. The mappings φ τ k , τ k + 1 in [22], modified to our sign convention, are of the form

φ τ k , τ k + 1 ( Y ) = ( 1 - δ k ) 𝔼 τ k [ Y ] - δ k F τ k ( - Y δ k ) ,

and F τ k may be chosen as

F τ i ( Y ) = 𝔼 τ i [ Y ] - η VaR τ i , 1 - α ( Y - 𝔼 τ i [ Y ] ) ,

which gives

φ τ k , τ k + 1 ( Y ) = ( 1 - η δ k ) 𝔼 τ k [ Y ] + η δ k VaR τ k , 1 - α ( - Y ) ,

which coincides with (2.3).

Notice that, in [16, 22], the quantities 𝔼 τ k [ Y ] and VaR τ k , 1 - α ( - Y ) appearing in the mapping φ τ k , τ k + 1 ( Y ) are scaled appropriately in order to obtain convergence of discrete-time value processes to continuous-time value processes, and α and η are constants that do not depend on the partition of the time interval [ 0 , T ] . In our setting, the sequences { α δ k } and { η δ k } are chosen so that, regardless of the partition of [ 0 , T ] , there is a reasonable nontrivial probability of successful financing of the capital requirements through the entire time period and a reasonable expected excess return to the capital providers for providing capital.

Remark 4.

There are natural alternatives to the capital provider's acceptability criterion (1.1) considered in this paper. Consider a sequence {U_t}_{t=0}^{T−1} of so-called conditional monetary utility functions, where, for each t, U_t is a mapping from L^1(ℱ_{t+1}) to L^1(ℱ_t) satisfying

  1. if λ ∈ L^1(ℱ_t) and Y ∈ L^1(ℱ_{t+1}), then U_t(Y + λ) = U_t(Y) + λ,

  2. if Y, Ỹ ∈ L^1(ℱ_{t+1}) and Y ≤ Ỹ, then U_t(Y) ≤ U_t(Ỹ),

  3. U_t(0) = 0.

Assume that, at time t, the capital provider will accept providing capital if the utility of the excess capital at time t + 1 ,

U t ( ( VaR t , p ( - Δ L t + 1 - V u , t + 1 ) - Δ L t + 1 - V u , t + 1 ) + ) ,

is not smaller than the utility of the capital asked for at time t,

U t ( VaR t , p ( - Δ L t + 1 - V u , t + 1 ) - V u , t ) .

Equating these expressions and solving for V u , t gives the backward recursion

V u , t = φ u , t ( Δ L t + 1 + V u , t + 1 ) ,

where

φ u , t ( Y ) = VaR t , p ( - Y ) - U t ( ( VaR t , p ( - Y ) - Y ) + ) .

It is shown in [7, Proposition 1] that φ_{u,t} satisfies φ_{u,t}(0) = 0, the monotonicity property φ_{u,t}(Y) ≤ φ_{u,t}(Ỹ) if Y ≤ Ỹ, and the translation-invariance property: if λ ∈ L^1(ℱ_t) and Y ∈ L^1(ℱ_{t+1}), then φ_{u,t}(Y + λ) = φ_{u,t}(Y) + λ, i.e. the analogue of Theorem 1 holds.

One example of a conditional monetary utility function is the certainty equivalent Y ↦ u_t^{-1}(𝔼_t[u_t(Y)]) when u_t is an exponential utility function u_t(y) = −γ_t exp{−y/γ_t}. In general, certainty equivalents obtained from (strictly increasing and concave) utility functions u_t do not satisfy the translation-invariance property 1 of conditional monetary utility functions in Remark 4; see e.g. [9, Proposition 2.46].
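For the exponential utility above, translation invariance can be verified in one line: since u_t(y + λ) = e^{−λ/γ_t} u_t(y) and u_t^{-1}(z) = −γ_t log(−z/γ_t),

u_t^{-1}(𝔼_t[u_t(Y + λ)]) = u_t^{-1}(e^{−λ/γ_t} 𝔼_t[u_t(Y)]) = λ + u_t^{-1}(𝔼_t[u_t(Y)]).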

3 Continuous-time cost-of-capital-margin processes

This section contains the main results of the paper. We first define the continuous-time limit of a sequence of discrete-time CoCM processes for a given continuous-time liability cash flow, where the sequence of discrete-time CoCM processes corresponds to a sequence of partitions of [ 0 , T ] with meshes tending to zero.

Definition 4.

Let {τ_m}_{m=1}^∞ be a sequence of partitions of [0, T], 0 = τ_{m,0} < ⋯ < τ_{m,m} = T, such that

lim_{m→∞} mesh(τ_m) = 0.

Let L ∈ L^0(𝔽). If there exists {V_t(L)}_{t∈[0,T]} ∈ L^0(𝔽) that is càdlàg such that

sup_{t∈τ_m} |V_t^{τ_m}(L) − V_t(L)| → 0 a.s. as m → ∞,

then { V t ( L ) } t [ 0 , T ] is the continuous-time limit of the sequence of discrete-time CoCM processes { V t τ m ( L ) } t τ m .

The càdlàg property of the limit { V t ( L ) } t [ 0 , T ] in Definition 4 ensures uniqueness of the limit for a given sequence of partitions. This is shown in Remark 5. Typically, we want a continuous-time limit that does not depend on the choice of sequence of partitions. This stronger property of the limit will be established in the results that follow.

Remark 5.

Consider a sequence of partitions {τ_m}_{m=1}^∞ and two corresponding càdlàg continuous-time limits {V_t(L)}_{t∈[0,T]} and {Ṽ_t(L)}_{t∈[0,T]}. Clearly, V_T(L) = Ṽ_T(L) = 0. Take any t ∈ [0, T) and a sequence {k_m}_{m=1}^∞ of natural numbers such that τ_{m,k_m} ↓ t as m → ∞. From Definition 4,

|V_{τ_{m,k_m}}^{τ_m}(L) − V_{τ_{m,k_m}}(L)| → 0,   |V_{τ_{m,k_m}}^{τ_m}(L) − Ṽ_{τ_{m,k_m}}(L)| → 0   a.s. as m → ∞,

and by the càdlàg property,

|V_{τ_{m,k_m}}(L) − V_t(L)| → 0,   |Ṽ_{τ_{m,k_m}}(L) − Ṽ_t(L)| → 0   a.s. as m → ∞.

For all m,

|V_t(L) − Ṽ_t(L)| ≤ |V_{τ_{m,k_m}}^{τ_m}(L) − V_t(L)| + |V_{τ_{m,k_m}}^{τ_m}(L) − Ṽ_t(L)| ≤ |V_{τ_{m,k_m}}^{τ_m}(L) − V_{τ_{m,k_m}}(L)| + |V_{τ_{m,k_m}}(L) − V_t(L)| + |V_{τ_{m,k_m}}^{τ_m}(L) − Ṽ_{τ_{m,k_m}}(L)| + |Ṽ_{τ_{m,k_m}}(L) − Ṽ_t(L)|.

Letting m establishes uniqueness of the continuous-time limit.

Remark 6.

If L L 1 ( 𝔽 ) , then { 𝔼 t [ L T ] } t [ 0 , T ] is a martingale with a unique càdlàg modification [17, p. 8]. In particular, if L L 1 ( 𝔽 ) is càdlàg, then { 𝔼 t [ L T - L t ] } t [ 0 , T ] is a possible candidate for the continuous-time limit of discrete-time CoCM processes. This particular limit will appear naturally for a wide class of processes L; see Theorem 2 below.

Recall from Section 2 that

φ τ m , k δ m , k ( Y ) = 1 1 + η δ m , k ( 𝔼 τ m , k [ Y ] - ( 1 - α δ m , k ) ES τ m , k , 1 - α δ m , k ( - Y ) + ( 1 - α δ m , k + η δ m , k ) VaR τ m , k , 1 - α δ m , k ( - Y ) )

and, by (2.1),

(3.1) V_{τ_{m,k}}^τ(L) = φ_{τ_{m,k}}^{δ_{m,k}} ∘ ⋯ ∘ φ_{τ_{m,m−1}}^{δ_{m,m−1}}(L_T − L_{τ_{m,k}}).

Motivated by economic arguments, we have assumed that 1 - α δ m , k and η δ m , k are both of order δ m , k . For some stochastic processes, precise details are provided below; the aggregate contribution to the value V τ m , k τ ( L ) from ES τ m , i , 1 - α δ m , i and VaR τ m , i , 1 - α δ m , i for i > k will be asymptotically negligible as m . In this case, asymptotically as m , (3.1) collapses into a composition of conditional expectations which, by the tower property of conditional expectations, is simply a conditional expectation of the remaining cash flow. Heuristically, cash flow models of e.g. diffusion-process type give asymptotically negligible risk ( VaR and ES ) contributions to the cost-of-capital margin, whereas cash flow models allowing for jumps (with sufficiently high probability) give nonnegligible risk contributions to the cost-of-capital margin. Precise statements are found in Theorems 2, 3 and 4 below.

Notice also from the representation of the one-step CoCM mapping φ_{τ_{m,k}}^{δ_{m,k}} that a discrete-time CoCM is not additive, V_{τ_{m,k}}^τ(X + L) ≠ V_{τ_{m,k}}^τ(X) + V_{τ_{m,k}}^τ(L), and not even subadditive in general. Theorem 2 below gives sufficient conditions under which the continuous-time limit is additive, i.e. V_t(X + L) = V_t(X) + V_t(L). This property does not hold in general.

The following technical result, Lemma 1, is a key result for proving convergence of a sequence of discrete-time CoCM process to a continuous-time limit process. Its main feature is that it enables explicit control of error terms appearing in the sequence of recursions leading to the continuous-time limit process. The reason why this result is placed here and not in Section 5 is that it is instructive to highlight the statement of Lemma 1 which provides the induction step that is key to proving Theorem 2 below.

We use the notation f(δ) ∈ o(δ) and g(δ) ∈ o(1) for functions f, g satisfying

lim_{δ→0} f(δ)/δ = 0   and   lim_{δ→0} g(δ) = 0.

Lemma 1.

Let L = {L_t}_{t∈[0,T]}, X = {X_t}_{t∈[0,T]} with L, X ∈ L^1(𝔽). Suppose that there exist constants δ_0 ∈ (0, 1/2), γ ∈ (0, 2), ε ∈ (0, 1) and C_1, C_2 > 0 such that, for δ ∈ (0, δ_0), for any y ≥ δ^{(γ−εγ)/2} and any t ∈ [0, T − δ],

(3.2) ℙ_t(|X_{t+δ} − X_t| > y(1 + |X_t|)) ≤ C_1 δ² y^{−2/γ},
(3.3) ℙ_t(|X_{t+δ}² − X_t²| > y(1 + |X_t|)²) ≤ C_1 δ² y^{−2/γ},
(3.4) ℙ_t(|𝔼_{t+δ}[X_T − X_{t+δ}] − 𝔼_t[X_T − X_t]| > y(1 + |X_t|)) ≤ C_1 δ² y^{−2/γ},
(3.5) |𝔼_t[X_{t+δ}² − X_t²]| ≤ C_2 δ(1 + X_t²).

Let τ be a partition of [0, T], 0 = τ_0 < ⋯ < τ_m = T, and let 0 < ε″ < ε′ < ε. If, for some i ∈ {0, …, m − 1}, there exists a constant A_{τ_{i+1}} ≥ 0 such that

(3.6) V̂_{τ_{i+1}}^{τ,ε′}(X + L) ≤ 𝔼_{τ_{i+1}}[X_T − X_{τ_{i+1}}] + A_{τ_{i+1}}(1 + X_{τ_{i+1}}²) + V̂_{τ_{i+1}}^{τ,ε″}(L),
(3.7) V̌_{τ_{i+1}}^{τ,ε′}(X + L) ≥ 𝔼_{τ_{i+1}}[X_T − X_{τ_{i+1}}] − A_{τ_{i+1}}(1 + X_{τ_{i+1}}²) + V̌_{τ_{i+1}}^{τ,ε″}(L),

then, for τ i + 1 - τ i sufficiently small,

V̂_{τ_i}^{τ,ε′}(X + L) ≤ 𝔼_{τ_i}[X_T − X_{τ_i}] + A_{τ_i}(1 + X_{τ_i}²) + V̂_{τ_i}^{τ,ε″}(L),
V̌_{τ_i}^{τ,ε′}(X + L) ≥ 𝔼_{τ_i}[X_T − X_{τ_i}] − A_{τ_i}(1 + X_{τ_i}²) + V̌_{τ_i}^{τ,ε″}(L),

where, for some constant B > 0 and f ( δ ) o ( δ ) ,

A τ i = A τ i + 1 ( 1 + B ( τ i + 1 - τ i ) ) + f ( τ i + 1 - τ i ) .

The following result, which relies strongly on Lemma 1, gives mild sufficient conditions under which the continuous-time CoCM process V ( X + L ) of a sum of two cash flows X and L decomposes into a sum V ( X ) + V ( L ) of the continuous-time CoCM processes of the two cash flows. Moreover, by considering the special case L = 0 , it gives sufficient conditions under which the continuous-time CoCM collapses into a conditional expectation of the remaining cash flow: V t ( X ) = 𝔼 t [ X T - X t ] . Notice that, in Theorem 2 below, we make no assumptions about independence or some form of dependence between the processes L and X.

Theorem 2.

Let L = { L t } t [ 0 , T ] and X = { X t } t [ 0 , T ] with L , X L 1 ( F ) and càdlàg. Suppose that there exist constants δ 0 ( 0 , 1 2 ) , γ ( 0 , 2 ) , ε ( 0 , 1 ) and C 1 , C 2 > 0 such that, for δ ( 0 , δ 0 ) and for any y δ ( γ - ε γ ) / 2 and any t [ 0 , T - δ ] , X satisfies conditions (3.2)–(3.5) in Lemma 1. Suppose that, for some β 2 ( 0 , ε ) and any sequence { τ m } m = 1 of partitions of [ 0 , T ] , 0 = τ m , 0 < < τ m , m = T with lim m mesh ( τ m ) = 0 ,

sup_{t∈τ_m} |V_t^{τ_m}(L) − V_t(L)| → 0 a.s. as m → ∞,
sup_{t∈τ_m} |V̂_t^{τ_m,β_2}(L) − V̌_t^{τ_m,β_2}(L)| → 0 a.s. as m → ∞.

If further sup t [ 0 , T ] X t 2 < a.s., then, for any β 1 ( β 2 , ε ) and any sequence { τ m } m = 1 of partitions of [ 0 , T ] , 0 = τ m , 0 < < τ m , m = T , such that lim m mesh ( τ m ) = 0 ,

sup_{t∈τ_m} |V_t^{τ_m}(X + L) − 𝔼_t[X_T − X_t] − V_t(L)| → 0 a.s. as m → ∞,
sup_{t∈τ_m} |V̂_t^{τ_m,β_1}(X + L) − V̌_t^{τ_m,β_1}(X + L)| → 0 a.s. as m → ∞.

Notice the following consequence of repeated application of Theorem 2.

Corollary 1.

Let L = { L t } t [ 0 , T ] and X ( k ) = { X t ( k ) } t [ 0 , T ] , k = 1 , , n , with L , X ( k ) L 1 ( F ) and càdlàg such that the requirements of Theorem 2 hold for each pair ( L , X ( k ) ) , k = 1 , , n . Then

sup_{t∈τ_m} |V_t^{τ_m}(∑_{k=1}^n X^{(k)} + L) − ∑_{k=1}^n 𝔼_t[X_T^{(k)} − X_t^{(k)}] − V_t(L)| → 0 a.s. as m → ∞.

Next we present an example of a wide class of stochastic processes X which satisfies conditions (3.2)–(3.5) in Lemma 1 and Theorem 2. These processes are strong solutions to stochastic differential equations driven by Brownian motion; see (3.10) below.

Let μ , σ : [ 0 , ) × be jointly measurable and satisfy the uniform Lipschitz type growth conditions

(3.8) μ ( t , x ) 2 + σ ( t , x ) 2 < K 1 ( 1 + x 2 ) ,
(3.9) | μ ( t , x ) - μ ( t , y ) | + | σ ( t , x ) - σ ( t , y ) | < K 1 | x - y |

for some constant K 1 > 0 . Let W be an -valued 𝔽 -adapted Brownian motion, and consider the stochastic differential equation

(3.10) d X t = μ ( t , X t ) d t + σ ( t , X t ) d W t , X 0 = x 0 .

Conditions (3.8) and (3.9) ensure that (3.10) has a unique strong solution which is a strong Markov process (see [5, Appendix E, Theorem E7]). Moreover, (3.8) and (3.9) together imply that the solution X to (3.10) is in L 1 ( 𝔽 ) .

The following result gives sufficient conditions on the process { X t } t [ 0 , T ] , which is the strong solution to (3.10), under which { X t } t [ 0 , T ] satisfies the conditions in Theorem 2.

Theorem 3.

Let { X t } t [ 0 , T ] be the strong solution to (3.10) with μ and σ satisfying (3.8) and (3.9), and set u ( t , x ) :- E [ X T X t = x ] . If there exists K 2 > 0 such that u satisfies either

(3.11) | u ( t , x ) - u ( s , y ) | < K 2 ( | t - s | 1 / 2 ( 1 + | x | ) + | x - y | )

for all ( t , x ) , ( s , y ) [ 0 , T ] × R with | t - s | 1 , or

(3.12) | x u ( t , x ) | < K 2

for all ( t , x ) [ 0 , T ] × R , then { X t } t [ 0 , T ] satisfies (3.2)–(3.5) for γ = 1 2 and any ε ( 0 , 1 ) .

Below, we give an example of a fairly general class of Itô processes for which both (3.11) and (3.12) hold.

Example 1.

Consider a process X given by the SDE

d X t = ( a ( t ) + b ( t ) X t ) d t + σ ( t , X t ) d W t , X 0 = x 0 .

The functions a and b are assumed to be continuous, and σ is assumed to satisfy the usual Lipschitz and linear growth conditions, ensuring existence of a strong solution. Then u ( t , x ) is given by the Feynman–Kac equation

u t + ( a ( t ) + b ( t ) x ) u x + σ 2 ( t , x ) 2 2 u x 2 = 0 ,

which has the easily verifiable solution u ( t , x ) = A ( t ) + B ( t ) x , where

B ( t ) = exp { - t T b ( s ) d s } , A ( t ) = t T a ( s ) B ( s ) d s .

Clearly, u satisfies (3.12). Furthermore, u satisfies (3.11):

| u ( t , x ) - u ( s , y ) | | A ( t ) - A ( s ) | + | B ( s ) | | x - y | + | B ( t ) - B ( s ) | | x | K 2 ( | t - s | ( 1 + | x | ) + | x - y | ) K 2 ( | t - s | 1 / 2 ( 1 + | x | ) + | x - y | )

for | t - s | 1 , where K 2 :- max { K A , K B , max t [ 0 , T ] | B ( t ) | } with K A , K B denoting the Lipschitz constants for A and B.
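As a sanity check, the closed form can be compared with a crude Euler–Maruyama estimate of E[X_T | X_{t_0} = x_0]. The minimal sketch below assumes hypothetical constant coefficients a(t) ≡ a and b(t) ≡ b (so that B(t) = e^{b(T−t)} and A(t) = (a/b)(e^{b(T−t)} − 1) for b ≠ 0) and an arbitrary constant diffusion coefficient σ; none of these numbers come from the paper.

```python
import numpy as np

# Hypothetical constant coefficients: dX_t = (a + b*X_t) dt + sigma dW_t.
a, b, sigma, T = 1.0, -0.5, 0.3, 2.0
t0, x0 = 0.5, 1.2

B = np.exp(b * (T - t0))                      # B(t0) = exp(b*(T - t0))
A = (a / b) * (np.exp(b * (T - t0)) - 1.0)    # A(t0) = (a/b)*(exp(b*(T - t0)) - 1)
print("closed form u(t0, x0):", A + B * x0)

# Euler-Maruyama estimate of E[X_T | X_{t0} = x0]
rng = np.random.default_rng(2)
n_paths, n_steps = 100_000, 200
dt = (T - t0) / n_steps
x = np.full(n_paths, x0)
for _ in range(n_steps):
    x = x + (a + b * x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
print("Euler-Maruyama estimate:", x.mean())
```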

Due to the independent-increment property, additive processes (see [21, Chapter 2]) provide examples of stochastic processes { L t } t [ 0 , T ] for which the sequence of discrete-time CoCM processes converges to an explicit continuous-time limit { V t ( L ) } t [ 0 , T ] , where V t ( L ) is not equal to 𝔼 t [ L T - L t ] .

Theorem 4.

Let {L_t}_{t∈[0,T]} be an ℝ-valued additive process in L^0(𝔽) with the system of generating triplets {(σ_t², ν_t, γ_t)}_{t∈[0,T]}. For each t ∈ [0, T], let σ̇_t² and γ̇_t be constants, and let ν̇_t be a measure on ℝ∖{0} whose restrictions to sets bounded away from 0 are finite. Consider the following statements.

  1. For each t [ 0 , T ] ,

    (1/δ)(σ_{s+δ}² − σ_s², ν_{s+δ} − ν_s, γ_{s+δ} − γ_s) → (σ̇_t², ν̇_t, γ̇_t)   as δ → 0, s → t,

    where the convergence in the second component means that

    lim_{δ→0, s→t} (1/δ) ∫_{ℝ∖{0}} f(x)(ν_{s+δ} − ν_s)(dx) = ∫_{ℝ∖{0}} f(x) ν̇_t(dx)

    for bounded and continuous functions vanishing in a neighborhood of 0.

  2. For each t [ 0 , T ) ,

    lim_{ε→0} lim sup_{δ→0, s→t} ∫_{[−ε,ε]} x² (1/δ)(ν_{s+δ} − ν_s)(dx) = 0.

  3. [0, T] × (0, ∞) ∋ (t, x) ↦ ν̇_t(x, ∞) ∈ (0, ∞) is continuous, and x ↦ ν̇_t(x, ∞) is strictly decreasing on (0, ∞).

  4. For some q̄ ∈ (0, ∞) and for each t ∈ [0, T], there exists q_t ∈ (q̄, ∞) solving ν̇_t(q_t, ∞) = −log α.

  5. sup_{δ∈(0,T]} sup_{t∈[0,T−δ]} δ^{−1} 𝔼[(ΔL_{t+δ})²] < ∞.

Let { τ m } m = 1 be a sequence of partitions of [ 0 , T ] , 0 = τ m , 0 < < τ m , m = T , such that lim m mesh ( τ m ) = 0 . Let { V t τ m ( L ) } t τ m be given by (2.1) for τ = τ m , with sequences { α δ m , k } and { η δ m , k } satisfying (2.2). If (i)(v) hold, then L L 1 ( F ) and

(3.13) sup_{t∈τ_m} |V_t^{τ_m}(L) − 𝔼[L_T − L_t] − ∫_t^T K_L(s) ds| → 0 a.s. as m → ∞,

where K L ( t ) is given by

(3.14) K_L(t) = log(1 + η) q_t − ∫_{q_t}^∞ ν̇_t(x, ∞) dx.

Moreover, for β ( 0 , ) ,

sup_{t∈τ_m} |V̂_t^{τ_m,β}(L) − V̌_t^{τ_m,β}(L)| → 0 a.s. as m → ∞.

For the special case of Lévy processes or processes obtained by nonrandom time changes of Lévy processes, Theorem 4 simplifies considerably. Notice that a Lévy process is an additive process with system of generating triplets { ( σ t 2 , ν t , γ t ) } = { ( t σ 2 , t ν , t γ ) } .

Corollary 2.

Let { L ~ t } t [ 0 , ) be an R -valued Lévy process with generating triplet { ( σ 2 , ν , γ ) } and with respect to a filtration G = { G t } t [ 0 , ) . Consider the following statements.

  1. x 2 ν ( d x ) < and lim ε 0 [ - ε , ε ] x 2 ν ( d x ) = 0 .

  2. x ν ( x , ) is continuous and strictly decreasing on ( 0 , ) .

Let λ : [ 0 , T ] ( 0 , ) be continuous, let μ : [ 0 , T ] ( 0 , ) be given by μ ( t ) = 0 t λ ( s ) d s , and let { L t } t [ 0 , T ] be given by L t = L ~ μ ( t ) . Now consider { L t } t [ 0 , T ] with respect to the filtration F :- { G μ ( t ) } t [ 0 , T ] . Let { τ m } m = 1 be a sequence of partitions of [ 0 , T ] , 0 = τ m , 0 < < τ m , m = T , such that lim m mesh ( τ m ) = 0 . Let { V t τ m ( L ) } t τ m be given by (2.1) for τ = τ m , with sequences { α δ m , k } and { η δ m , k } satisfying (2.2). If (i) and (ii) hold and, for some q ¯ ( 0 , ) and for each t [ 0 , T ) , there exists q t > q ¯ solving λ ( t ) ν ( q t , ) = - log α , then (3.13) holds, where

K_L(t) = log(1 + η) q_t − λ(t) ∫_{q_t}^∞ ν(x, ∞) dx.

Example 2.

A compound Poisson process driven by a Poisson process N with mean-value function t ↦ μ(t) is an additive process with representation L_t = ∑_{k=1}^{N_t} Z_k, where N is independent of the iid sequence {Z_k}. The Lévy measure of L_t is μ(t)P_Z, where P_Z is the common distribution of the variables Z_k. In the setting of Corollary 2, L̃ is a compound Poisson process driven by a homogeneous Poisson process Ñ with unit intensity and representation L̃_t = ∑_{k=1}^{Ñ_t} Z_k. If further 1 + log(α)/λ(t) > 0 for all t ∈ [0, T), then (3.13) holds, where

K_L(t) = log(1 + η) q_t − λ(t) ∫_{q_t}^∞ F̄_Z(x) dx,   q_t = F_Z^{-1}(1 + log(α)/λ(t)),

where F_Z(z) = P_Z(−∞, z] and F̄_Z = 1 − F_Z. For illustration, if we let L be a compound Poisson process with jumps uniformly distributed on [0, 1] occurring according to a homogeneous Poisson process with intensity λ such that q := 1 + log(α)/λ > 0, then

K_L(t) = log(1 + η) q − λ(1/2 − q + q²/2),
V_t(L) = (T − t)(log(1 + η) q + λq − λq²/2) = (T − t) q (log(1 + η) + λ(1 − q/2)).
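For concreteness, evaluating these formulas with hypothetical parameter values (λ = 1, α = 0.95, η = 0.06, T = 1; none of these numbers are from the paper) gives a margin slightly above the expected cash flow 𝔼[L_T] = λT/2 = 0.5, as in the short sketch below.

```python
import numpy as np

T, lam, alpha, eta = 1.0, 1.0, 0.95, 0.06    # hypothetical parameter values
q = 1.0 + np.log(alpha) / lam                # q = F_Z^{-1}(1 + log(alpha)/lam) for Uniform(0, 1) jumps
K_L = np.log(1.0 + eta) * q - lam * (0.5 - q + 0.5 * q**2)
V_0 = lam * T * 0.5 + T * K_L                # E[L_T] plus the integral of K_L over [0, T]
print(q, K_L, V_0)                           # roughly 0.949, 0.054, 0.554
```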

Remark 7.

Since the statements in Theorems 2, 4 and Corollary 2 are a bit technical, it may be useful to discuss the heuristics underlying these results.

If { L t } t [ 0 , T ] is a process with independent increments, then the cost-of-capital margin V τ k τ ( L ) is nonrandom for each τ k , and therefore, due to the translation-invariance property in Theorem 1,

V_{τ_k}^τ(L) = ∑_{i=k}^{m−1} φ_{τ_i}^{δ_i}(ΔL_{τ_{i+1}}) = 𝔼[L_T − L_{τ_k}] + ∑_{i=k}^{m−1} δ_i · (φ_{τ_i}^{δ_i}(ΔL_{τ_{i+1}} − 𝔼[ΔL_{τ_{i+1}}]) / δ_i).

If

φ_t^δ(L_{t+δ} − L_t − 𝔼[L_{t+δ} − L_t]) / δ

converges uniformly in t to some integrable function K L ( t ) as δ 0 , then there exists a limit process { V t } t [ 0 , T ] satisfying

V_t(L) = 𝔼[L_T − L_t] + ∫_t^T K_L(s) ds.

The conditions in Theorem 4 are sufficient to ensure the uniform convergence, and showing this is the main difficulty in the proof.

The heuristics explaining why the continuous-time value for an Itô diffusion { X t } t [ 0 , T ] should be the expectation 𝔼 t [ X T - X t ] revolves around the fact that, for Brownian motion (a special case of an additive process),

1 δ VaR t , 1 - α δ ( W t + δ - W t ) 0 as δ 0 .

Suppose V τ k + 1 τ ( X ) 𝔼 [ X T - X τ k + 1 X τ k + 1 ] and that we want to verify that also V τ k τ ( X ) 𝔼 [ X T - X τ k X τ k ] . Set u ( t , X t ) :- 𝔼 [ X T X t ] , and notice that, by Itô’s lemma and by Feynman–Kac, d u ( t , X t ) = u x ( t , X t ) σ ( t , X t ) d W t . Then it seems likely, using the translation-invariance property in Theorem 1 and positive homogeneity of φ τ k δ k , that

V τ k ( X ) = φ τ k δ k ( X τ k + 1 - X τ k + V τ k + 1 τ ( X ) ) u ( τ k , X τ k ) - X τ k + φ τ k δ k ( X τ k + 1 + u ( τ k + 1 , X τ k + 1 ) - X τ k + 1 - u ( τ k , X τ k ) ) = u ( τ k , X τ k ) - X τ k + φ τ k δ k ( u ( τ k + 1 , X τ k + 1 ) - u ( τ k , X τ k ) ) u ( τ k , X τ k ) - X τ k + φ τ k δ k ( u x ( τ k , X τ k ) σ ( τ k , X τ k ) Δ W τ k + 1 ) = u ( τ k , X τ k ) - X τ k + u x ( τ k , X τ k ) σ ( τ k , X τ k ) φ τ k δ k ( Δ W τ k + 1 ) u ( τ k , X τ k ) - X τ k .

In the first approximation, we used the assumption, in the second, we used an Euler–Maruyama approximation and, in the third, the fact that φ τ k δ k ( Δ W τ k + 1 ) 0 . Making these approximations precise requires detailed analysis of the remainder terms in the argument of the mapping φ τ k δ k . Essentially, this is done in Lemma 1. The fact that value-at-risk is not subadditive complicates the analysis and contributes to the need to introduce the upper and lower CoCM processes in Definition 3.
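These heuristics can also be illustrated numerically. The following minimal Monte Carlo sketch (hypothetical parameters matching the compound Poisson model of Example 2, α = 0.95, η = 0.06, λ = 1; the smallest δ is visibly affected by simulation error) compares the per-period contribution (φ_t^δ(ΔY) − 𝔼[ΔY])/δ for a Brownian increment with that for a compound Poisson increment: the former shrinks towards zero as δ decreases, while the latter stays close to K_L(t) ≈ 0.054.

```python
import numpy as np

def phi(y, alpha_delta, eta_delta):
    """One-step valuation mapping of Definition 1, with the empirical
    alpha_delta-quantile of the sample y used as VaR_{1-alpha_delta}(-Y)."""
    var = np.quantile(y, alpha_delta)
    return var - np.mean(np.maximum(var - y, 0.0)) / (1.0 + eta_delta)

# Hypothetical parameters, chosen to match Example 2 (lam = 1, Uniform(0, 1) jumps).
alpha, eta, lam, n = 0.95, 0.06, 1.0, 1_000_000
rng = np.random.default_rng(3)
for delta in (0.1, 0.02, 0.005):
    alpha_delta = 1.0 + delta * np.log(alpha)
    eta_delta = delta * np.log(1.0 + eta)
    # Brownian increment over a period of length delta
    dW = np.sqrt(delta) * rng.standard_normal(n)
    # compound Poisson increment with Uniform(0, 1) jumps over the same period
    counts = rng.poisson(lam * delta, n)
    kmax = max(counts.max(), 1)
    jumps = rng.uniform(0.0, 1.0, size=(n, kmax))
    dL = (jumps * (np.arange(kmax)[None, :] < counts[:, None])).sum(axis=1)
    print(delta,
          (phi(dW, alpha_delta, eta_delta) - dW.mean()) / delta,   # shrinks towards 0
          (phi(dL, alpha_delta, eta_delta) - dL.mean()) / delta)   # stays near K_L
```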

4 Proofs

Proof of Theorem 1.

The statements in Theorem 1 are immediate consequences of [7, Proposition 1] once it is verified that VaR τ k , p (for an arbitrary p ( 0 , 1 ) ) is a mapping from L 1 ( τ k + δ k ) to L 1 ( τ k ) . However, the verification follows from [7, Proposition 4] upon choosing M R in [7, Proposition 4] as the probability measure assigning unit mass to the singleton { 1 - p } . For completeness, we present the argument showing this property of VaR τ k , p . Notice that VaR τ k , p ( | Y | ) VaR τ k , p ( Y ) VaR τ k , p ( - | Y | ) and that

p 𝔼 [ F τ k , | Y | - 1 ( 1 - p ) ] 𝔼 [ 1 - p 1 F τ k , | Y | - 1 ( u ) d u ] 𝔼 [ 0 1 F τ k , | Y | - 1 ( u ) d u ] = 𝔼 [ 𝔼 τ k [ | Y | ] ] = 𝔼 [ | Y | ] < .

Therefore,

𝔼 [ | VaR τ k , p ( - | Y | ) | ] = 𝔼 [ VaR τ k , p ( - | Y | ) ] = 𝔼 [ F τ k , | Y | - 1 ( 1 - p ) ] < ,
𝔼 [ | VaR τ k , p ( | Y | ) | ] = 𝔼 [ - F τ k , - | Y | - 1 ( 1 - p ) ] = 𝔼 [ F τ k , | Y | - 1 ( p + ) ] < .

Proof of Lemma 1.

To ease notation, let t :- τ i , δ :- δ i = τ i + 1 - τ i , A :- A τ i + 1 , U t :- 𝔼 t [ X T - X t ] . By Lemma 2,

V ^ t τ , ε ( X + L ) - 𝔼 t [ X T - X t ]
   = φ ^ t δ , ε ( Δ ( X + L ) t + δ + V ^ t + δ τ , ε ( X + L ) ) - U t
(4.1)    = 1 1 + η δ ( 𝔼 t [ Δ ( X + L ) t + δ + V ^ t + δ τ , ε ( X + L ) - U t ]
(4.2)           - ( 1 - α δ ) ES t , 1 - α δ ( - ( Δ ( X + L ) t + δ + V ^ t + δ τ , ε ( X + L ) - U t ) )
(4.3)           + ( 1 - α δ + η δ ) VaR t , 1 - α δ - δ 1 + ε ( - ( Δ ( X + L ) t + δ + V ^ t + δ τ , ε ( X + L ) - U t ) ) )

If we replace ^ by ˇ , term (4.3) is replaced by

(4.4) ( 1 - α δ + η δ ) VaR t , 1 - α δ + δ 1 + ε ( - ( Δ ( X + L ) t + δ + V ^ t + δ τ , ε ( X + L ) - U t ) ) .

We now bound (4.1)–(4.4) individually. We will use bounds (3.6) and (3.7) repeatedly throughout the arguments.

Term (4.1). An upper bound for (4.1) is constructed as follows:

𝔼 t [ Δ ( X t + δ + L t + δ ) + V ^ t + δ τ , ε ( X + L ) - U t ] 𝔼 t [ Δ L t + δ + V ^ t + δ τ , ε ′′ ( L ) ] + 𝔼 t [ Δ X t + δ + U t + δ - U t + A ( 1 + X t + δ 2 ) ] = 𝔼 t [ Δ L t + δ + V ^ t + δ τ , ε ′′ ( L ) ] + A ( 1 + X t 2 + 𝔼 t [ Δ X t + δ 2 ] ) 𝔼 t [ Δ L t + δ + V ^ t + δ τ , ε ′′ ( L ) ] + A ( 1 + X t 2 + C 2 δ ( 1 + X t 2 ) ) = 𝔼 t [ Δ L t + δ + V ^ t + δ τ , ε ′′ ( L ) ] + A ( 1 + C 2 δ ) ( 1 + X t 2 ) ,

where (3.6) was used for the first inequality, the tower property of conditional expectations was used for the equality and (3.5) was used for the second inequality. Similarly, a lower bound for (4.1) if we replace ^ with ˇ is

𝔼 t [ Δ L t + δ + V ˇ t + δ τ , ε ′′ ( L ) ] - A ( 1 + C 2 δ ) ( 1 + X t 2 ) .

Term (4.2). We first construct an upper bound for (4.2). Using (3.6) and monotonicity and subadditivity of expected shortfall,

ES t , 1 - α δ ( - Δ ( X t + δ + L t + δ ) - V ^ t + δ τ , ε ( X + L ) + U t ) ES t , 1 - α δ ( - Δ L t + δ - V ^ t + δ τ , ε ′′ ( L ) - Δ X t + δ - U t + δ + U t - A ( 1 + X t + δ 2 ) ) ES t , 1 - α δ ( - Δ L t + δ - V ^ t + δ τ , ε ′′ ( L ) ) + ES t , 1 - α δ ( - Δ X t + δ - U t + δ + U t - A ( 1 + X t + δ 2 ) ) ES t , 1 - α δ ( - Δ L t + δ - V ^ t + δ τ , ε ′′ ( L ) ) + ES t , 1 - α δ ( - Δ X t + δ ) + ES t , 1 - α δ ( - U t + δ + U t ) + ES t , 1 - α δ ( - A ( 1 + X t + δ 2 ) ) ES t , 1 - α δ ( - Δ L t + δ - V ^ t + δ τ , ε ′′ ( L ) ) + ES t , 1 - α δ ( - | Δ X t + δ | ) + ES t , 1 - α δ ( - | U t + δ - U t | ) + ES t , 1 - α δ ( - A ( 1 + X t + δ 2 ) )

Notice that

ES t , 1 - α δ ( - X t + δ 2 ) = ES t , 1 - α δ ( - Δ X t + δ 2 - X t 2 ) X t 2 + ES t , 1 - α δ ( - | Δ X t + δ 2 | ) .

Applying Lemma 3 gives, for some function g ( δ ) o ( 1 ) as δ 0 ,

ES t , 1 - α δ ( - Δ ( X t + δ + L t + δ ) - V ^ t + δ τ , ε ( X + L ) + U t ) ES t , 1 - α δ ( - Δ L t + δ - V ^ t + δ τ , ε ′′ ( L ) ) + 2 g ( δ ) ( 1 + X t 2 ) + A ( 1 + g ( δ ) ) ( 1 + X t 2 ) .

We now construct a lower bound for (4.2), replacing ^ with ˇ . Using (3.7) and monotonicity and subadditivity of expected shortfall,

ES t , 1 - α δ ( - Δ ( X t + δ + L t + δ ) - V ˇ t + δ τ , ε ( X + L ) + U t ) ES t , 1 - α δ ( - Δ L t + δ - V ˇ t + δ τ , ε ′′ ( L ) - Δ X t + δ - U t + δ + U t + A ( 1 + X t + δ 2 ) ) ES t , 1 - α δ ( - Δ L t + δ - V ˇ t + δ τ , ε ′′ ( L ) ) - ES t , 1 - α δ ( Δ X t + δ + U t + δ - U t - A ( 1 + X t + δ 2 ) )

An upper bound for ES t , 1 - α δ ( Δ X t + δ + U t + δ - U t - A ( 1 + X t + δ 2 ) ) , derived as above, gives the lower bound for (4.2):

ES t , 1 - α δ ( - Δ ( X t + δ + L t + δ ) - V ˇ t + δ τ , ε ( X + L ) + U t ) ES t , 1 - α δ ( - Δ L t + δ - V ˇ t + δ τ , ε ′′ ( L ) ) - 2 g ( δ ) ( 2 + X t 2 ) - A ( 1 + g ( δ ) ) ( 1 + X t 2 ) .

Notice that lim δ 0 δ - 1 ( 1 - α δ ) = - log α . Therefore, there exists a function f ( δ ) o ( δ ) such that, for all δ > 0 sufficiently small,

( 1 - α δ ) ES t , 1 - α δ ( - Δ ( X t + δ + L t + δ ) - V ^ t + δ τ , ε ( X + L ) + U t ) ( 1 - α δ ) ES t , 1 - α δ ( - Δ L t + δ - V ^ t + δ τ , ε ′′ ( L ) ) + ( ( 1 - log α ) δ A + f ( δ ) ) ( 1 + X t 2 ) ,
( 1 - α δ ) ES t , 1 - α δ ( - Δ ( X t + δ + L t + δ ) - V ˇ t + δ τ , ε ( X + L ) + U t ) ( 1 - α δ ) ES t , 1 - α δ ( - Δ L t + δ - V ˇ t + δ τ , ε ′′ ( L ) ) - ( ( 1 - log α ) δ A + f ( δ ) ) ( 1 + X t 2 ) .

Term (4.3). Now we construct an upper and a lower bound for (4.3). Using (3.6) and monotonicity of value-at-risk,

VaR t , 1 - α δ - δ 1 + ε ( - Δ ( X t + δ + L t + δ ) - V ^ t + δ τ , ε ( X + L ) + U t ) VaR t , 1 - α δ - δ 1 + ε ( - Δ L t + δ - V ^ t + δ τ , ε ′′ ( L ) - | U t + δ - U t | - A ( 1 + X t + δ 2 ) - | Δ X t + δ | ) VaR t , 1 - α δ - δ 1 + ε ( - Δ L t + δ - V ^ t + δ τ , ε ′′ ( L ) - | Δ X t + δ | - | U t + δ - U t | - A | Δ X t + δ 2 | ) + A ( 1 + X t 2 ) .

Similarly,

VaR t , 1 - α δ + δ 1 + ε ( - Δ ( X t + δ + L t + δ ) - V t + δ τ ( X + L ) + U t ) VaR t , 1 - α δ + δ 1 + ε ( - Δ L t + δ - V ˇ t + δ τ , ε ′′ ( L ) + | Δ X t + δ | + | U t + δ - U t | + A | Δ X t + δ 2 | ) - A ( 1 + X t 2 ) .

Applying Lemma 4, for δ sufficiently small, yields the upper bound

VaR t , 1 - α δ - δ 1 + ε ( - Δ L t + δ - V ^ t + δ τ , ε ′′ ( L ) - | Δ X t + δ | - | U t + δ - U t | - A | Δ X t + δ 2 | ) VaR t , 1 - α δ - δ 1 + ε ′′ ( - Δ L t + δ - V ^ t + δ τ , ε ′′ ( L ) ) + 5 δ ( u - ε u ) / 2 ( 1 + A ) ( 1 + X t 2 ) .

Similarly,

VaR t , 1 - α δ + δ 1 + ε ( - Δ L t + δ - V ˇ t + δ τ , ε ′′ ( L ) + | Δ X t + δ | + | U t + δ - U t | + A | Δ X t + δ 2 | ) VaR t , 1 - α δ + δ 1 + ε ′′ ( - Δ L t + δ - V ˇ t + δ τ , ε ′′ ( L ) ) - 5 δ ( u - ε u ) / 2 ( 1 + A ) ( 1 + X t 2 ) .

Notice that lim δ 0 δ - 1 ( 1 - α δ + η δ ) = log ( 1 + η ) - log ( α ) , and therefore, there exists a function f ( δ ) o ( δ ) such that, for all δ > 0 sufficiently small,

( 1 - α δ + η δ ) VaR t , 1 - α δ - δ 1 + ε ( - Δ ( X t + δ + L t + δ ) - V t + δ τ ( X + L ) - U t ) ( 1 - α δ + η δ ) VaR t , 1 - α δ - δ 1 + ε ′′ ( - Δ L t + δ - V ^ t + δ τ , ε ′′ ( L ) ) + A ( 1 + log ( 1 + η ) - log α ) δ ( 1 + X t 2 ) + f ( δ ) ( 1 + X t 2 ) ,
( 1 - α δ + η δ ) VaR t , 1 - α δ + δ 1 + ε ( - Δ ( X t + δ + L t + δ ) - V t + δ τ ( X + L ) - U t ) ( 1 - α δ + η δ ) VaR t , 1 - α δ + δ 1 + ε ′′ ( - Δ L t + δ - V ˇ t + δ τ , ε ′′ ( L ) ) - A ( 1 + log ( 1 + η ) - log α ) δ ( 1 + X t 2 ) - f ( δ ) ( 1 + X t 2 ) .

Summing up, there exists a function f ( δ ) o ( δ ) such that, for any sufficiently small δ > 0 , defining

B := C_2 + log(1 + η) + 2 − 2 log α,

we have

V ^ t τ , ε ( X + L ) 𝔼 t [ X T - X t ] + φ ^ t δ , ε ′′ ( Δ L t + δ + V ^ t + δ τ , ε ′′ ( L ) ) + A ( 1 + ( C 2 + log η + 2 - 2 log α ) δ ) + f ( δ ) = 𝔼 t [ X T - X t ] + V ^ t τ , ε ′′ ( L ) + A ( 1 + B δ ) + f ( δ ) ,
V t τ ( X + L ) 𝔼 t [ X T - X t ] + φ ˇ t δ , ε ′′ ( Δ L t + δ + V ˇ t + δ τ , ε ′′ ( L ) ) - A ( 1 + ( C 2 + log η + 2 - 2 log α ) δ ) + f ( δ ) = 𝔼 t [ X T - X t ] + V ˇ t τ , ε ′′ ( L ) - A ( 1 + B δ ) + f ( δ ) .

The proof is complete. ∎

Proof of Theorem 2.

Consider any sequence of partitions {τ_m}_{m=1}^∞. Take m sufficiently large so that mesh(τ_m) is small enough for the statements of Lemma 1 to hold for each t ∈ τ_m, with respect to the triple β_2 < β_1 < ε and a function f(δ) ∈ o(δ). We show via backward induction that

(4.5) V ^ τ m , i τ m , β 1 ( X + L ) 𝔼 τ m , i [ X T - X τ i ] + A τ m , i ( 1 + X τ m , i 2 ) + V ^ τ m , i τ m , β 2 ( L ) ,
(4.6) V ˇ τ i τ m , β 1 ( X + L ) 𝔼 τ m , i [ X T - X τ i ] - A τ m , i ( 1 + X τ m , i 2 ) + V ˇ τ m , i τ m , β 2 ( L ) .

Since the induction base i = m is trivial and the induction step follows immediately from Lemma 1, (4.5) and (4.6) hold. Now we note, by Lemma 5, that there exists h(δ) ∈ o(1) such that, for each m large enough and k = 0, …, m, A_{τ_{m,k}} ≤ h(mesh(τ_m)). Hence,

sup t τ m | V t τ m ( X + L ) - 𝔼 t [ X T - X t ] - V t ( L ) | sup t τ m max ( | V ^ t τ m , β 1 ( X + L ) - 𝔼 t [ X T - X t ] - V t ( L ) | , | V ˇ t τ m , β 1 ( X + L ) - 𝔼 t [ X T - X t ] - V t ( L ) | ) sup t τ m max ( | V ^ t τ m , β 2 ( L ) - V t ( L ) | , | V ˇ t τ m , β 2 ( L ) - V t ( L ) | ) + sup k { 0 , , m } A τ m , k ( 1 + X τ m , k 2 ) sup t τ m max ( | V ^ t τ m , β 2 ( L ) - V t ( L ) | , | V ˇ t τ m , β 2 ( L ) - V t ( L ) | ) + h ( mesh ( τ m ) ) ( 1 + sup t [ 0 , T ] X t 2 ) .

Similarly,

sup t τ m | V ^ t τ m , β 1 ( X + L ) - V ˇ t τ m , β 1 ( X + L ) | sup t τ m | V ^ t τ m , β 2 ( L ) - V ˇ t τ m , β 2 ( L ) | + 2 h ( mesh ( τ m ) ) ( 1 + sup t [ 0 , T ] X t 2 )

Now, if sup t [ 0 , T ] X t 2 < almost surely, then

sup t τ m max ( | V ^ t τ m , β 2 ( L ) - V t ( L ) | , | V ˇ t τ m , β 2 ( L ) - V t ( L ) | ) + h ( mesh ( τ m ) ) ( 1 + sup t [ 0 , T ] X t 2 ) 0 a.s. as m ,

and

sup t τ m | V ^ t τ m , β 2 ( L ) - V ˇ t τ m , β 2 ( L ) | + 2 h ( mesh ( τ m ) ) ( 1 + sup t [ 0 , T ] X t 2 ) 0 a.s. as m .

This completes the proof. ∎

Proof of Theorem 3.

Consider any ε ∈ (0, 1). By Lemma 6, for all y ≥ δ^{(1−ε)/4} and t ∈ [0, T − δ], for δ > 0 sufficiently small,

ℙ_t(sup_{s∈[t,t+δ]} |X_s − X_t| > y(1 + |X_t|)) ≤ C_1 exp{−y²/(C_2 δ)},
ℙ_t(sup_{s∈[t,t+δ]} |X_s² − X_t²| > y(1 + |X_t|)²) ≤ C_1 exp{−y²/(C_2 δ)}.

Notice that

(4.7) C_1 exp{−y²/(C_2 δ)} ≤ C_1 / (1 + y²/(C_2 δ) + y⁴/(2 C_2² δ²)) ≤ 2 C_1 C_2² δ² y^{−4},

from which it is clear that we may bound C_1 exp{−y²/(C_2 δ)} from above by C δ² y^{−2/γ} for all y ≥ δ^{γ(1−ε)/2}, for γ = 1/2 and C = 2 C_1 C_2². We have therefore verified that conditions (3.2) and (3.3) in Lemma 1 hold.

Suppose first that the function u satisfies (3.11). Then

t ( sup s [ t , t + δ ] | u ( s , X s ) - X s - u ( t , X t ) + X t | > y ( 1 + | X t | ) ) t ( sup s [ t , t + δ ] ( K 2 + 1 ) ( δ 1 / 2 ( 1 + | X t | ) + | X s - X t | ) > y ( 1 + | X t | ) ) = t ( sup s [ t , t + δ ] | X s - X t | > ( y K 2 + 1 - δ 1 / 2 ) ( 1 + | X t | ) ) .

For any β ( 0 , ε ) ,

δ ( 1 - ε ) / 4 K 2 + 1 - δ 1 / 2 > δ ( 1 - β ) / 4

for δ sufficiently small. Hence, by Lemma 6 and (4.7), condition (3.4) holds for y δ ( 1 - β ) / 4 . Since ε ( 0 , 1 ) was arbitrary, we have verified that (3.4) in Lemma 1 holds for y δ ( 1 - ε ) / 4 . Now suppose instead that the function u satisfies (3.12). Then the verification of (3.4) in Lemma 1 follows from combining Lemma 8 and (4.7). Finally, the verification of condition (3.5) in Lemma 1 follows from Lemma 7. ∎

Proof of Theorem 4.

First notice that, by (v), 𝔼 [ | L t | ] 𝔼 [ L t 2 ] 1 / 2 < , and therefore L L 2 ( 𝔽 ) L 1 ( 𝔽 ) . Write Δ L t + δ :- L t + δ - L t , and notice that Δ L t + δ is infinitely divisible with Lévy measure ν t + δ - ν t . We first show the statement

(4.8) lim δ 0 , s t | F Δ L s + δ - 1 ( α δ ) - q t | = 0 .

Write F ¯ Δ L s + δ ( x ) :- 1 - F Δ L s + δ ( x ) , and notice that

F Δ L s + δ - 1 ( α δ ) = min { x : F Δ L s + δ ( x ) α δ } = min { x : exp { 1 δ log ( 1 - F ¯ Δ L s + δ ( x ) ) } α δ 1 / δ } = min { x : 1 δ log ( 1 - F ¯ Δ L s + δ ( x ) ) 1 δ log ( 1 - ( 1 - α δ ) ) } .

Notice that - y - y 2 log ( 1 - y ) - y for y [ 0 , 1 2 ] , and recall that lim δ 0 1 - α δ δ = - log α . By Lemma 9, Corollary 3 and the continuity of x ν ˙ t ( x , ) for x > 0 ,

(4.9) lim δ 0 , s t 1 δ F ¯ Δ L s + δ ( x ) = ν ˙ t ( x , ) for every x > 0 .

Moreover, by combining assumptions (iii) and (iv), there exists a unique q t > 0 such that ν ˙ t ( q t , ) = - log α . Statement (4.8) now follows.
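As an illustration of the root q t appearing in (4.8), the sketch below computes it numerically under an assumed compound Poisson specification with tail ν ˙ t ( x , ) = λ ( t ) e - x (time-varying intensity λ ( t ) , standard exponential jump sizes), for which the root is also available in closed form; the model, the intensity function and all numerical values are assumptions made only for this example.

```python
import numpy as np
from scipy.optimize import brentq

# Illustration of the unique root q_t solving nu_dot_t(q_t, infinity) = -log(alpha).
# Assumed example: compound Poisson tail nu_dot_t(x, infinity) = lam(t)*exp(-x), so that
# q_t = log(lam(t)/(-log(alpha))) whenever lam(t) > -log(alpha).
alpha = 0.995
lam = lambda t: 2.0 + np.sin(t)                             # assumed intensity function

def q(t):
    tail = lambda x: lam(t) * np.exp(-x) + np.log(alpha)    # nu_dot_t(x, inf) + log(alpha)
    return brentq(tail, 1e-12, 50.0)                        # unique sign change on (0, 50)

t = 0.3
print(q(t), np.log(lam(t) / (-np.log(alpha))))              # numerical root vs closed form
```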

We next show the statement

(4.10) lim δ 0 , s t | 1 δ ( φ s δ ( Δ L s + δ ) - 𝔼 [ Δ L s + δ ] 1 + η δ ) - K L ( t ) | = 0 ,

where K L ( t ) is given in (3.14). Notice that, due to the independent increments of additive processes, φ s δ ( Δ L s + δ ) is non-random and

1 δ φ s δ ( Δ L s + δ ) = 1 δ F Δ L s + δ - 1 ( α δ ) - 1 ( 1 + η δ ) δ 𝔼 [ ( F Δ L s + δ - 1 ( α δ ) - Δ L s + δ ) + ] = ( 1 - F Δ L s + δ ( F Δ L s + δ - 1 ( α δ ) ) + η δ ) F Δ L s + δ - 1 ( α δ ) ( 1 + η δ ) δ + 𝔼 [ Δ L s + δ I { Δ L s + δ F Δ L s + δ - 1 ( α δ ) } ] ( 1 + η δ ) δ .

Notice that

1 δ 𝔼 [ Δ L s + δ I { Δ L s + δ F Δ L s + δ - 1 ( α δ ) } ] = 1 δ 𝔼 [ Δ L s + δ ] - 1 δ 𝔼 [ Δ L s + δ I { Δ L s + δ > F Δ L s + δ - 1 ( α δ ) } ] = 1 δ 𝔼 [ Δ L s + δ ] - F Δ L s + δ - 1 ( α δ ) δ ( 1 - F Δ L s + δ ( F Δ L s + δ - 1 ( α δ ) ) ) - 1 δ F Δ L s + δ - 1 ( α δ ) ( Δ L s + δ > x ) d x .

Hence,

1 δ ( φ s δ ( Δ L s + δ ) - 𝔼 [ Δ L s + δ ] 1 + η δ ) = η δ F Δ L s + δ - 1 ( α δ ) ( 1 + η δ ) δ - 1 ( 1 + η δ ) δ F Δ L s + δ - 1 ( α δ ) ( Δ L s + δ > x ) d x .

Combining (2.2) and (4.8) establishes the appropriate convergence to log ( 1 + η ) q t of the first term on the right-hand side as δ 0 . We now address the second term. First notice that, by (4.8), there exists c ( 0 , ) such that

(4.11) lim sup δ 0 , s t | F Δ L s + δ - 1 ( α δ ) 1 ( 1 + η δ ) δ ( Δ L s + δ > x ) d x - q t ν ˙ t ( x , ) d x | lim sup δ 0 , s t c | F Δ L s + δ - 1 ( α δ ) - q t | + lim sup δ 0 , s t | q t 1 ( 1 + η δ ) δ ( Δ L s + δ > x ) d x - q t ν ˙ t ( x , ) d x | = lim sup δ 0 , s t | q t 1 ( 1 + η δ ) δ ( Δ L s + δ > x ) d x - q t ν ˙ t ( x , ) d x | .

We will show that (4.11) equals 0. By continuity of x ν ˙ t ( x , ) for x > 0 and the fact that all functions in (4.9) are monotone, the pointwise convergence in (4.9) is in fact uniform on any interval [ a , b ] , 0 < a < b < . Hence, for any b ( q t , ) ,

lim sup δ 0 , s t | q t b 1 ( 1 + η δ ) δ ( Δ L s + δ > x ) d x - q t b ν ˙ t ( x , ) d x | = 0 ,

from which it follows that (4.11) is less than or equal to

lim sup δ 0 , s t b 1 ( 1 + η δ ) δ ( Δ L s + δ > x ) d x + b ν ˙ t ( x , ) d x .

Next we show that the above upper bound on (4.11) can be chosen arbitrarily small by choosing b sufficiently large. By Markov’s inequality, it follows that

1 δ ( Δ L s + δ > x ) 1 δ ( ( Δ L s + δ ) 2 > x 2 ) 𝔼 [ ( Δ L s + δ ) 2 ] δ x 2

and further that

b 1 δ ( Δ L s + δ > x ) d x 1 δ b 𝔼 [ ( Δ L s + δ ) 2 ] .

In particular, the assumed property (v) gives

lim b lim sup δ 0 , s t b 1 ( 1 + η δ ) δ ( Δ L s + δ > x ) d x = 0 .

By Fatou’s lemma, assumption (v) and (4.9), for any b ( 0 , ) ,

b ν ˙ t ( x , ) d x lim inf δ 0 , s t b 1 δ ( Δ L s + δ > x ) d x lim sup δ 0 , s t b 1 δ ( Δ L s + δ > x ) d x < .

In particular,

lim b lim sup δ 0 b ν ˙ t ( x , ) d x = 0 .

Summing up, we have shown that (4.11) equals 0, from which it follows that

lim δ 0 , s t F Δ L s + δ - 1 ( α δ ) 1 ( 1 + η δ ) δ ( Δ L s + δ > x ) d x = q t ν ˙ t ( x , ) d x

and further that (4.10) holds. By combining assumptions (iii) and (iv), there exists a unique q t > 0 such that ν ˙ t ( q t , ) = - log α . Moreover, by joint continuity of ( t , x ) ν ˙ t ( x , ) , t q t is continuous. Since also t q t is uniformly bounded away from 0, t K L ( t ) is continuous on [ 0 , T ] . Thus (4.10) and Lemma 10 together imply

(4.12) lim δ 0 sup t [ 0 , T - δ ] | 1 δ ( φ t δ ( Δ L t + δ ) - 𝔼 [ Δ L t + δ ] 1 + η δ ) - K L ( t ) | = 0 .

It remains to prove the convergence in (3.13). For any k { 0 , , m - 1 } ,

| V τ m , k τ m ( L ) - 𝔼 [ L T - L τ m , k ] - τ m , k T K L ( s ) d s |
(4.13)    | i = k m - 1 η δ m , i 1 + η δ m , i 𝔼 [ Δ L τ m , i + δ m , i ] |
       + | φ τ m , k δ m , k φ τ m , m - 1 δ m , m - 1 ( i = k m - 1 Δ L τ m , i + δ m , i )
(4.14)           - i = k m - 1 𝔼 [ Δ L τ m , i + δ m , i ] 1 + η δ m , i - i = k m - 1 K L ( τ m , i ) δ m , i |
(4.15)        + | i = k m - 1 K L ( τ m , i ) δ m , i - τ m , k T K L ( s ) d s | .

Term (4.15) tends to 0 as m from the definition of the Riemann integral. Moreover, (4.13) is less than or equal to

sup i m - 1 | 𝔼 [ Δ L τ m , i + δ m , i ] | i = k m - 1 η δ m , i 1 + η δ m , i 0 as m

since the sum is uniformly bounded in m and, by Hölder’s inequality and assumption (v),

lim sup m sup i m - 1 | 𝔼 [ Δ L τ m , i + δ m , i ] | lim sup m sup i m - 1 𝔼 [ | Δ L τ m , i + δ m , i | ] lim sup m sup i m - 1 𝔼 [ ( Δ L τ m , i + δ m , i ) 2 ] 1 / 2 = lim sup m sup i m - 1 δ m , i 1 / 2 ( 𝔼 [ ( Δ L τ m , i + δ m , i ) 2 ] δ m , i ) 1 / 2 lim sup m sup i m - 1 δ m , i 1 / 2 ( lim sup δ 0 sup t [ 0 , T - δ ] 1 δ 𝔼 [ ( Δ L t + δ ) 2 ] ) 1 / 2 = 0 .

Using the translation-invariance property in Theorem 1 (i) and (4.12), term (4.14) equals

| i = k m - 1 δ m , i ( 1 δ m , i ( φ τ m , i δ m , i ( Δ L τ m , i + δ m , i ) - 𝔼 [ Δ L τ m , i + δ m , i ] 1 + η δ m , i ) - K L ( τ m , i ) ) | T max 0 i m - 1 | 1 δ m , i ( φ τ m , i δ m , i ( Δ L τ m , i + δ m , i ) - 𝔼 [ Δ L τ m , i + δ m , i ] 1 + η δ m , i ) - K L ( τ m , i ) | 0 as m .

Hence, (4.12) implies (3.13).

Now it only remains to show that, for any β ( 0 , ) ,

sup t τ m | V ^ t τ m , β ( L ) - V ˇ t τ m , β ( L ) | 0 a.s. as m .

This is straightforward considering the fact that also the sequences { α δ m , k - δ m , k 1 + β } k = 0 m - 1 and { α δ m , k + δ m , k 1 + β } k = 0 m - 1 satisfy (2.2), yielding

lim m sup k m - 1 | F Δ L τ m , k + δ m , k - 1 ( α δ m , k ± δ m , k 1 + β ) - q τ m , k | = 0 .

Thus, the arguments in the above proof for V τ m hold for both V ^ τ m , β and V ˇ τ m , β . This concludes the proof. ∎

Proof of Corollary 2.

Notice that { L t } t [ 0 , T ] has the system of generating triplets

{ ( σ t 2 , ν t , γ t ) } = { ( μ ( t ) σ 2 , μ ( t ) ν , μ ( t ) γ ) } .

Now we verify requirements (i)–(v) in Theorem 4, noting that ( σ ˙ t 2 , ν ˙ t , γ ˙ t ) = λ ( t ) ( σ 2 , ν , γ ) .

(i) For each δ ( 0 , T ] and s [ 0 , T - δ ] , by the integral mean value theorem, there exists a θ s , δ [ s , s + δ ] such that

1 δ ( σ s + δ 2 - σ s 2 , ν s + δ - ν s , γ s + δ - γ s ) = λ ( θ s , δ ) ( σ 2 , ν , γ )

By the continuity of t λ ( t ) , we immediately get

lim δ 0 , s t 1 δ ( σ s + δ 2 - σ s 2 , ν s + δ - ν s , γ s + δ - γ s ) = lim δ 0 , s t λ ( θ s , δ ) ( σ 2 , ν , γ )
= λ ( t ) ( σ 2 , ν , γ ) = ( σ ˙ t 2 , ν ˙ t , γ ˙ t ) .

(ii) The statement follows from assumption (i) in Corollary 2 and the fact

[ - ε , ε ] x 2 1 δ ( ν t + δ - ν t ) ( d x ) = μ ( t + δ ) - μ ( t ) δ [ - ε , ε ] x 2 ν ( d x ) .

(iii) The statement follows from ν ˙ t ( x , ) = λ ( t ) ν ( x , ) , which is jointly continuous by assumption (ii) in Corollary 2 and the assumed continuity of t λ ( t ) .

(iv) Since ν ˙ t ( q t , ) = λ ( t ) ν ( q t , ) and, by assumption, there exists q t > 0 solving λ ( t ) ν ( q t , ) = - log α , the statement follows.

(v) We first note that, by assumption (ii) and [21, Corollary 25.8], L ~ t L 2 ( 𝔾 ) . For any s > 0 and δ [ 0 , s ] , by the stationary and independent increments property,

s 1 δ 𝔼 [ L ~ δ 2 ] - s δ 𝔼 [ L ~ 1 ] 2 = s 1 δ ( 𝔼 [ L ~ δ 2 ] - 𝔼 [ L ~ δ ] 2 ) = ( s δ ) Var ( L ~ δ ) = s Var ( L ~ 1 ) < ,

from which sup δ ( 0 , s ] δ - 1 𝔼 [ L ~ δ 2 ] < for any s > 0 follows. Notice that, for any δ ( 0 , T ] and t [ 0 , T - δ ] ,

Δ L t + δ = L ~ μ ( t + δ ) - L ~ μ ( t ) = 𝑑 L ~ μ ( t + δ ) - μ ( t )

and, from the mean value theorem,

λ̲ := min s [ 0 , T ] λ ( s ) ≤ μ ( t + δ ) - μ ( t ) δ ≤ max s [ 0 , T ] λ ( s ) =: λ ¯ ,

where 0 < λ̲ ≤ λ ¯ < ∞ . Hence,

sup δ ( 0 , T ] sup t [ 0 , T - δ ] 1 δ 𝔼 [ ( Δ L t + δ ) 2 ] sup δ ( 0 , T ] λ ¯ 1 λ ¯ δ 𝔼 [ ( Δ L ~ λ ¯ δ ) 2 ] = λ ¯ sup δ ( 0 , λ ¯ T ] 1 δ 𝔼 [ ( Δ L ~ δ ) 2 ] < ,

which verifies statement (v). ∎

5 Auxiliary results

Lemma 2.

For Y L 1 ( F t + δ ) ,

φ t δ ( Y ) = 1 1 + η δ ( 𝔼 t [ Y ] - ( 1 - α δ ) ES t , 1 - α δ ( - Y ) + ( 1 - α δ + η δ ) VaR t , 1 - α δ ( - Y ) ) .

Proof.

Notice that

- 𝔼 t [ ( VaR t , 1 - α δ ( - Y ) - Y ) + ] = - 𝔼 t [ ( VaR t , 1 - α δ ( - Y ) - Y ) I { Y VaR t , 1 - α δ ( - Y ) } ] = 𝔼 t [ ( Y - VaR t , 1 - α δ ( - Y ) ) ( 1 - I { Y > VaR t , 1 - α δ ( - Y ) } ) ] = 𝔼 t [ Y ] - VaR t , 1 - α δ ( - Y ) - 𝔼 t [ ( Y - VaR t , 1 - α δ ( - Y ) ) + ] .

Straightforward calculations, see [6, Lemma 2.2], yield

𝔼 t [ ( Y - VaR t , 1 - α δ ( - Y ) ) + ] = ( 1 - α δ ) ( ES t , 1 - α δ ( - Y ) - VaR t , 1 - α δ ( - Y ) ) ,

from which we conclude that

- 𝔼 t [ ( VaR t , 1 - α δ ( - Y ) - Y ) + ] = 𝔼 t [ Y ] - ( 1 - α δ ) ES t , 1 - α δ ( - Y ) - α δ VaR t , 1 - α δ ( - Y ) .

Hence,

φ t δ ( Y ) = VaR t , 1 - α δ ( - Y ) - 1 1 + η δ 𝔼 t [ ( VaR t , 1 - α δ ( - Y ) - Y ) + ] = 1 1 + η δ ( 𝔼 t [ Y ] - ( 1 - α δ ) ES t , 1 - α δ ( - Y ) + ( 1 - α δ + η δ ) VaR t , 1 - α δ ( - Y ) ) . ∎
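The identity of Lemma 2 is easy to check by Monte Carlo simulation. The sketch below does so for an unconditional normal random variable, assuming the conventions VaR t , 1 - u ( - Y ) = F Y - 1 ( u ) and, for a continuous distribution, ES t , 1 - u ( - Y ) = 𝔼 [ Y | Y > F Y - 1 ( u ) ] with u = α δ ; the distribution of Y and the parameters α , η , δ are arbitrary choices made only for the illustration.

```python
import numpy as np

# Monte Carlo check of the identity in Lemma 2 for an (unconditional) normal Y,
# assuming VaR_{1-u}(-Y) = F_Y^{-1}(u) and ES_{1-u}(-Y) = E[Y | Y > F_Y^{-1}(u)], u = alpha**delta.
# alpha, eta, delta and the law of Y are arbitrary illustrative choices.
rng = np.random.default_rng(0)
alpha, eta, delta = 0.995, 0.06, 0.1
Y = rng.normal(loc=1.0, scale=2.0, size=2_000_000)

u = alpha**delta
var = np.quantile(Y, u)                              # VaR_{t,1-alpha^delta}(-Y)
es = Y[Y > var].mean()                               # ES_{t,1-alpha^delta}(-Y) for a continuous law
lhs = var - np.mean(np.maximum(var - Y, 0.0)) / (1.0 + eta * delta)
rhs = (Y.mean() - (1.0 - u) * es + (1.0 - u + eta * delta) * var) / (1.0 + eta * delta)
print(lhs, rhs)                                      # the two sides agree up to Monte Carlo error
```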

Lemma 3.

Let { X t } t [ 0 , T ] and { U t } t [ 0 , T ] be adapted processes. Suppose that there exist constants δ 0 ( 0 , 1 2 ) , γ ( 0 , 2 ) , ε ( 0 , 1 ) and C 1 > 0 such that, for δ ( 0 , δ 0 ) and for any y δ ( γ - ε γ ) / 2 and any t [ 0 , T - δ ] ,

t ( | Δ X t + δ | > y ( 1 + | X t | ) ) C 1 δ 2 y - 2 / γ ,
t ( | Δ U t + δ | > y ( 1 + | X t | ) ) C 1 δ 2 y - 2 / γ ,
t ( | Δ X t + δ 2 | > y ( 1 + | X t | ) 2 ) C 1 δ 2 y - 2 / γ .

Then, for some g ( δ ) o ( 1 ) ,

ES t , 1 - α δ ( - | Δ X t + δ | ) g ( δ ) ( 1 + | X t | ) ,
ES t , 1 - α δ ( - | Δ U t + δ | ) g ( δ ) ( 1 + | X t | ) ,
ES t , 1 - α δ ( - | Δ X t + δ 2 | ) g ( δ ) ( 1 + X t 2 ) .

Proof.

By assumption,

t ( | Δ X t + δ | > y ) C 1 δ 2 ( y 1 + | X t | ) - 2 / γ

for y ( 1 + | X t | ) δ ( γ - ε γ ) / 2 . Solving C 1 δ 2 y - 2 / γ ( 1 + | X t | ) 2 / γ = p for y gives

y ( p ) = ( 1 + | X t | ) ( p C 1 δ 2 ) - γ / 2 .

Hence, for r ( 0 , 1 ) and δ small enough,

ES t , 1 - α δ ( - | Δ X t + δ | ) 1 δ 1 + r 0 δ 1 + r y ( p ) d p = ( 1 + | X t | ) C 1 γ / 2 δ γ 1 δ 1 + r 0 δ 1 + r p - γ / 2 d p = ( 1 + | X t | ) C 1 γ / 2 1 - γ / 2 δ γ ( 1 - ( 1 + r ) / 2 ) =: g ( δ ) ( 1 + | X t | ) .

The same argument shows the upper bound for ES t , 1 - α δ ( - | Δ U t + δ | ) . A similar argument applies for showing the upper bound for ES t , 1 - α δ ( - | Δ X t + δ 2 | ) : by assumption,

t ( | Δ X t + δ 2 | > y ) C 1 δ 2 ( y ( 1 + | X t | ) 2 ) - 2 / γ

for y ( 1 + | X t | ) 2 δ ( γ - ε γ ) / 2 . The same argument as above gives

ES t , 1 - α δ ( - | Δ X t + δ 2 | ) g ( δ ) ( 1 + | X t | ) 2 .

Noting that ( 1 + | X t | ) 2 2 ( 1 + X t 2 ) yields the upper bound in the statement. ∎

Lemma 4.

Let { X t } t [ 0 , T ] , { Y t } t [ 0 , T ] and { U t } t [ 0 , T ] be adapted processes. Suppose that there exist constants δ 0 ( 0 , 1 2 ) , γ ( 0 , 2 ) , ε ( 0 , 1 ) and C 1 > 0 such that, for δ ( 0 , δ 0 ) and for any y δ ( γ - ε γ ) / 2 and t [ 0 , T - δ ] ,

t ( | Δ X t + δ | > y ( 1 + | X t | ) ) C 1 δ 2 y - 2 / γ ,
t ( | Δ U t + δ | > y ( 1 + | X t | ) ) C 1 δ 2 y - 2 / γ ,
t ( | Δ X t + δ 2 | > y ( 1 + | X t | ) 2 ) C 1 δ 2 y - 2 / γ .

Then, for any K > 0 , β 1 , β 2 ( 0 , ε ) , with β 2 < β 1 and sufficiently small δ ( 0 , δ 0 ) ,

VaR t , 1 - α δ - δ 1 + β 1 ( - Y t + δ - K ( | Δ X t + δ | + | Δ U t + δ | + | Δ X t + δ 2 | ) ) VaR t , 1 - α δ - δ 1 + β 2 ( - Y t + δ ) + 5 K δ ( γ - ε γ ) / 2 ( 1 + X t 2 ) ,
VaR t , 1 - α δ + δ 1 + β 1 ( - Y t + δ + K ( | Δ X t + δ | + | Δ U t + δ | + | Δ X t + δ 2 | ) ) VaR t , 1 - α δ + δ 1 + β 2 ( - Y t + δ ) - 5 K δ ( γ - ε γ ) / 2 ( 1 + X t 2 ) .

Proof.

Let

E := { | Δ X t + δ | + | Δ U t + δ | + | Δ X t + δ 2 | δ ( γ - ε γ ) / 2 ( 3 + 4 | X t | + X t 2 ) } ,
E X := { | Δ X t + δ | δ ( γ - ε γ ) / 2 ( 1 + | X t | ) } ,
E U := { | Δ U t + δ | δ ( γ - ε γ ) / 2 ( 1 + | X t | ) } ,
E X 2 := { | Δ X t + δ 2 | δ ( γ - ε γ ) / 2 ( 1 + | X t | ) 2 } .

From t ( E ) t ( E X E U E X 2 ) , it follows that t ( E C ) t ( E X C ) + t ( E U C ) + t ( E X 2 C ) . Hence,

t ( E C ) 3 C 1 δ 2 ( δ ( γ - ε γ ) / 2 ) - 2 / γ = 3 C 1 δ 1 + ε .

Notice that

t ( Y t + δ - K ( | Δ X t + δ | + | Δ U t + δ | + | Δ X t + δ 2 | ) x ) = t ( E { Y t + δ - K ( | Δ X t + δ | + | Δ U t + δ | + | Δ X t + δ 2 | ) x } ) + t ( E C { Y t + δ - K ( | Δ X t + δ | + | Δ U t + δ | + | Δ X t + δ 2 | ) x } ) t ( Y t + δ x + K δ ( γ - ε γ ) / 2 ( 3 + 4 | X t | + X t 2 ) ) + t ( E C )

and similarly

t ( Y t + δ + K ( | Δ X t + δ | + | Δ U t + δ | + | Δ X t + δ 2 | ) x ) t ( Y t + δ + | K | ( | Δ X t + δ | + | Δ U t + δ | + | Δ X t + δ 2 | ) x ) t ( E { Y t + δ x - K δ ( γ - ε γ ) / 2 ( 3 + 4 | X t | + X t 2 ) } ) t ( Y t + δ x - K δ ( γ - ε γ ) / 2 ( 3 + 4 | X t | + X t 2 ) ) - t ( E C ) .

Hence, we conclude that, for δ small enough,

VaR t , 1 - α δ + δ 1 + β 1 ( - Y t + δ + K ( | Δ X t + δ | + | Δ U t + δ | + | Δ X t + δ 2 | ) ) VaR t , 1 - α δ + δ 1 + β 1 + t ( E C ) ( - Y t + δ ) - K δ ( γ - ε γ ) / 2 ( 3 + 4 | X t | + X t 2 ) VaR t , 1 - α δ + δ 1 + β 1 + 3 C 1 δ 1 + ε ( - Y t + δ ) - K δ ( γ - ε γ ) / 2 ( 3 + 4 | X t | + X t 2 )

and analogously that

VaR t , 1 - α δ - δ 1 + β 1 ( - Y t + δ - K ( | Δ X t + δ | + | Δ U t + δ | + | Δ X t + δ 2 | ) ) VaR t , 1 - α δ - δ 1 + β 1 - 3 C 1 δ 1 + ε ( - Y t + δ ) + K δ ( γ - ε γ ) / 2 ( 3 + 4 | X t | + X t 2 ) .

We note that 2 | X t | 1 + X t 2 , 3 + 4 | X t | + X t 2 5 + 3 X t 2 5 ( 1 + X t 2 ) . Moreover, δ 1 + β 1 + 3 C 1 δ 1 + ε < δ 1 + β 2 for δ sufficiently small. The proof is complete. ∎

Lemma 5.

Let { τ m } m = 1 be a sequence of partitions of [ 0 , T ] with 0 = τ m , 0 < < τ m , m = T , let g ( δ ) o ( δ ) , and let B > 0 be a constant. Define

A τ m , k τ m := A τ m , k + 1 τ m ( 1 + B ( τ m , k + 1 - τ m , k ) ) + g ( τ m , k + 1 - τ m , k ) , A τ m , m τ m := 0 .

Then there exists h ( δ ) o ( 1 ) such that A τ m , k τ m h ( mesh ( τ m ) ) for all m , k .

Proof.

Let δ m , k := τ m , k + 1 - τ m , k . Noticing that 1 + B δ m , k e B δ m , k gives

A τ m , k τ m j = k m - 1 g ( δ m , j ) exp { B j = k m - 1 δ m , j } e B T j = k m - 1 δ m , j max k j m - 1 g ( δ m , j ) δ m , j T e B T sup δ mesh ( τ m ) g ( δ ) δ . ∎

Lemma 6.

Let { Y t } t [ 0 , T ] be the strong solution to (3.10) with μ and σ satisfying (3.8) and (3.9). Then there are constants C 1 , C 2 ( 0 , ) such that, for δ ( 0 , 1 ) sufficiently small and y > δ β for any given β ( 0 , 1 2 ) ,

(5.1) t ( sup s [ t , t + δ ] | Y s - Y t | > y ( 1 + | Y t | ) ) C 1 exp { - y 2 C 2 δ } ,
(5.2) t ( sup s [ t , t + δ ] | Y s 2 - Y t 2 | > 3 y ( 1 + | Y t | ) 2 ) 2 C 1 exp { - y 2 C 2 δ } .

Proof.

We first prove (5.1). Let τ := inf { s [ t , t + δ ] : | Y s - Y t | > y ( 1 + | Y t | ) } , and notice that

t ( sup s [ t , t + δ ] | Y s - Y t | > y ( 1 + | Y t | ) ) = t ( τ t + δ )
(5.3) t ( τ t + δ , sup t s τ | t s μ ( u , Y u ) d u | > 1 2 y ( 1 + | Y t | ) )
(5.4) + t ( τ t + δ , sup t s τ | t s σ ( u , Y u ) d W u | > 1 2 y ( 1 + | Y t | ) ) .

Notice that s τ implies | Y s | | Y t | + y ( 1 + | Y t | ) which, in turn, by growth condition (3.8) for μ and σ, implies that there is some finite constant M such that

(5.5) max { sup s [ t , τ ] | μ ( s , Y s ) | , sup s [ t , τ ] | σ ( s , Y s ) | } M ( 1 + | Y t | ) .

Hence, for δ ( 0 , 1 ) sufficiently small, the probability in (5.3) is zero. The probability in (5.4) can be bounded from above as follows. Since s 0 s σ ( u , Y u ) d W u is a continuous local martingale, it may be expressed as a random time change s H ( s ) := 0 s σ ( u , Y u ) 2 d u of a Brownian motion. By (5.5) and [11, Theorem 18.4], there is a standard Brownian motion W ~ such that (5.4) is less than or equal to

t ( H ( t + δ ) - H ( t ) δ M 2 ( 1 + | Y t | ) 2 , sup s [ t , t + δ ] | W ~ H ( s ) - W ~ H ( t ) | > 1 2 y ( 1 + | Y t | ) ) t ( sup s [ 0 , δ ] | W ~ s | > y 2 M ) .

Applying [5, Lemma 5.2.1] to the last expression above gives that (5.4) is less than or equal to

4 exp { - ( y 2 M ) 2 1 2 δ } .

We have proved (5.1). We now prove (5.2). Noting that | Y s 2 - Y t 2 | = | Y s - Y t | | Y s + Y t | , we get

t ( sup s [ t , t + δ ] | Y s 2 - Y t 2 | > 3 y ( 1 + | Y t | ) 2 ) t ( { sup s [ t , t + δ ] | Y s - Y t | > y ( 1 + | Y t | ) } { sup s [ t , t + δ ] | Y s + Y t | > 3 ( 1 + | Y t | ) } ) t ( sup s [ t , t + δ ] | Y s - Y t | > y ( 1 + | Y t | ) ) + t ( sup s [ t , t + δ ] | Y s + Y t | > 3 ( 1 + | Y t | ) ) .

By (5.1), for δ > 0 sufficiently small,

t ( sup s [ t , t + δ ] | Y s - Y t | > y ( 1 + | Y t | ) ) C 1 exp { - y 2 C 2 δ } .

Moreover,

t ( sup s [ t , t + δ ] | Y s + Y t | > 3 ( 1 + | Y t | ) ) t ( sup s [ t , t + δ ] | Y s - Y t | + 2 | Y t | > 3 ( 1 + | Y t | ) ) t ( sup s [ t , t + δ ] | Y s - Y t | > 1 + | Y t | ) C 1 exp { - y 2 C 2 δ } .

This concludes the proof. ∎
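A small simulation can be used to illustrate the type of bound in (5.1). The sketch below simulates an Ornstein–Uhlenbeck process with an Euler–Maruyama scheme and compares empirical exceedance probabilities with a sub-Gaussian expression of the same form; the model, the constants C 1 , C 2 and all numerical choices are assumptions made only for this illustration and are not the constants produced by the proof.

```python
import numpy as np

# Monte Carlo illustration of a sub-Gaussian modulus bound of the form (5.1) for an SDE
# with Lipschitz coefficients. The OU model and the constants C1, C2 are assumptions.
rng = np.random.default_rng(1)
theta, sigma, y0 = 1.0, 0.5, 1.0             # dY = -theta*Y dt + sigma dW, Y_0 = y0
delta, n_steps, n_paths = 0.05, 200, 200_000
dt = delta / n_steps

Y = np.full(n_paths, y0)
max_dev = np.zeros(n_paths)
for _ in range(n_steps):                     # Euler-Maruyama over [0, delta]
    Y = Y - theta * Y * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    max_dev = np.maximum(max_dev, np.abs(Y - y0))

C1, C2 = 4.0, 2.0 * sigma**2                 # illustrative constants only
for y in (0.1, 0.2, 0.3, 0.4):
    emp = np.mean(max_dev > y * (1.0 + abs(y0)))
    print(y, emp, C1 * np.exp(-y**2 / (C2 * delta)))
```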

Lemma 7.

Let X be the solution to the stochastic differential equation (3.10) with coefficients μ and σ satisfying (3.8) and (3.9). Then there exists a constant C such that

(5.6) | 𝔼 t [ Δ X t + δ 2 ] | C δ ( 1 + X t 2 ) .

Proof.

Recall that Δ X t + δ 2 := X t + δ 2 - X t 2 . Itô's lemma yields

Δ X t + δ 2 = t t + δ ( 2 X s μ ( s , X s ) + σ ( s , X s ) 2 ) d s + 2 t t + δ X s σ ( s , X s ) d W s .

Hence,

| 𝔼 t [ Δ X t + δ 2 ] | = | 𝔼 t [ t t + δ ( 2 X s μ ( s , X s ) + σ ( s , X s ) 2 ) d s ] | 𝔼 t [ t t + δ | 2 X s μ ( s , X s ) + σ ( s , X s ) 2 | d s ] = t t + δ 𝔼 t [ | 2 X s μ ( s , X s ) + σ ( s , X s ) 2 | ] d s .

Since μ and σ satisfy (3.8),

μ ( s , x ) ( K 1 ( 1 + x 2 ) ) 1 / 2 K 1 1 / 2 ( 1 + | x | ) ,
x μ ( s , x ) K 1 1 / 2 ( | x | + x 2 ) 2 K 1 1 / 2 ( 1 + x 2 ) ,
2 x μ ( s , x ) + σ ( s , x ) 2 ( K 1 + 4 K 1 1 / 2 ) ( 1 + x 2 ) .

Hence, there is a constant C 1 such that

t t + δ 𝔼 t [ | 2 X s μ ( s , X s ) + σ ( s , X s ) 2 | ] d s t t + δ 𝔼 t [ C 1 ( 1 + X s 2 ) ] d s .

By [12, Theorem 4.5.4], there is a constant C 2 such that 𝔼 t [ X s 2 ] C 2 ( 1 + X t 2 ) , which implies the existence of a constant C such that (5.6) holds. ∎
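For a concrete instance of (5.6), the bound can be verified in closed form for an assumed Ornstein–Uhlenbeck example, whose coefficients satisfy the growth and Lipschitz conditions (3.8) and (3.9); the parameters θ , σ , the constant C = max ( 2 θ , σ 2 ) and the grid below are illustrative assumptions only.

```python
import numpy as np

# Closed-form check of (5.6) for an assumed Ornstein-Uhlenbeck example dX = -theta*X dt + sigma dW:
# E_t[X_{t+delta}^2 - X_t^2] = X_t^2*(exp(-2*theta*delta) - 1) + sigma^2*(1 - exp(-2*theta*delta))/(2*theta),
# and |...| <= C*delta*(1 + X_t^2) with C = max(2*theta, sigma**2). All values are illustrative.
theta, sigma = 1.3, 0.8
C = max(2.0 * theta, sigma**2)
x = np.linspace(-10.0, 10.0, 401)
delta = np.logspace(-4, 0, 40)
X, D = np.meshgrid(x, delta)
lhs = np.abs(X**2 * (np.exp(-2.0 * theta * D) - 1.0)
             + sigma**2 * (1.0 - np.exp(-2.0 * theta * D)) / (2.0 * theta))
assert np.all(lhs <= C * D * (1.0 + X**2))
```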

Lemma 8.

Let X be the solution to the stochastic differential equation (3.10) with coefficients μ and σ satisfying (3.8) and (3.9). Define u as in Theorem 3, and assume u satisfies (3.12). Then, for any β ( 0 , 1 2 ) , for δ > 0 sufficiently small and y > δ β , there exist constants C 1 , C 2 > 0 such that

(5.7) t ( sup s [ t , t + δ ] | u ( s , X s ) - X s - u ( t , X t ) + X t | > y ( 1 + | X t | ) ) C 1 exp { - y 2 C 2 δ } .

Proof.

We first notice that, by Itô’s lemma and by Feynman–Kac, we have

d u ( t , X t ) = u x ( t , X t ) σ ( t , X t ) d W t ,

where subscript x denotes partial derivative with respect to the second argument of u. Let

τ := inf { s [ t , t + δ ] : | X s - X t | > y ( 1 + | X t | ) } ,

and notice that

t ( sup s [ t , t + δ ] | u ( s , X s ) - u ( t , X t ) | > y ( 1 + | X t | ) ) t ( τ t + δ ) + t ( τ > t + δ , sup s [ t , t + δ ] | t s u x ( u , X u ) σ ( u , X u ) d W u | > y ( 1 + | X t | ) ) .

Since s 0 s u x ( u , X u ) σ ( u , X u ) d W u is a continuous local martingale, it may be expressed as a random time change s H ( s ) of a Brownian motion:

H ( s ) := 0 s u x ( u , X u ) 2 σ ( u , X u ) 2 d u .

Notice that if τ > t + δ , then H ( t + δ ) - H ( t ) δ K 1 K 2 2 ( 1 + | X t | ) 2 . By [11, Theorem 18.4], there is a standard Brownian motion W ~ such that

t ( τ > t + δ , sup s [ t , t + δ ] | t s u x ( u , X u ) σ ( u , X u ) d W u | > y ( 1 + | X t | ) ) ( τ > t + δ , sup s [ t , t + δ ] | W ~ H ( s ) - W ~ H ( t ) | > y ( 1 + | X t | ) ) ( sup s [ 0 , δ ] | W ~ s | > y K 1 1 / 2 K 2 ) 4 exp { - y 2 2 K 1 K 2 2 δ } ,

where the last inequality follows from applying [5, Lemma 5.2.1]. Noting that

t ( τ t + δ ) = t ( sup s [ t , t + δ ] | X s - X t | > y ( 1 + | X t | ) )

can be bounded by Lemma 6 yields the existence of constants C 3 and C 4 such that

t ( sup s [ t , t + δ ] | u ( s , X s ) - u ( t , X t ) | > y ( 1 + | X t | ) ) C 3 exp { - y 2 C 4 δ } ,
t ( sup s [ t , t + δ ] | X s - X t | > y ( 1 + | X t | ) ) C 3 exp { - y 2 C 4 δ }

for δ small enough. Noting that

t ( sup s [ t , t + δ ] | u ( s , X s ) - X s - u ( t , X t ) + X t | > y ( 1 + | X t | ) ) t ( sup s [ t , t + δ ] | u ( s , X s ) - u ( t , X t ) | > y 2 ( 1 + | X t | ) ) + t ( sup s [ t , t + δ ] | X s - X t | > y 2 ( 1 + | X t | ) )

completes the argument showing (5.7) for C 1 = 2 C 3 and C 2 = 4 C 4 . ∎

Lemma 9.

Let { L t } t [ 0 , T ] be an R -valued additive process with system of generating triplets { ( σ t 2 , ν t , γ t ) } t [ 0 , T ] . For each t [ 0 , T ] , let σ ˙ t 2 and γ ˙ t be constants, and let ν ˙ t be a measure on R { 0 } whose restrictions to sets bounded away from 0 are finite. Fix t [ 0 , T ] , and assume that

(5.8) 1 δ ( σ s + δ 2 - σ s 2 , ν s + δ - ν s , γ s + δ - γ s ) ( σ ˙ t 2 , ν ˙ t , γ ˙ t ) 𝑎𝑠 δ 0 , s t ,

where the convergence in the second component means that

lim δ 0 , s t 1 δ { 0 } f ( x ) ( ν s + δ - ν s ) ( d x ) = { 0 } f ( x ) ν ˙ t ( d x )

for bounded and continuous functions vanishing in a neighborhood of 0. Assume further that

(5.9) lim ε 0 lim sup δ 0 , s t [ - ε , ε ] x 2 1 δ ( ν s + δ - ν s ) ( d x ) = 0 .

Consider sequences δ n 0 , t n t , with t n [ 0 , T - δ n ] , and for every n, let { L s δ n , t n } s [ 0 , T ] be a Lévy process satisfying Δ L t + δ n δ n , t n = 𝑑 Δ L t n + δ n , and let μ δ n , t n be the probability distribution of L 1 δ n , t n . Then

lim δ n 0 1 δ n { 0 } f ( x ) μ δ n , t n δ n ( d x ) = { 0 } f ( x ) ν ˙ t ( d x )

for bounded and continuous functions vanishing in a neighborhood of 0.

Proof.

Notice that μ δ n , t n is infinitely divisible with Lévy triplet

1 δ n ( σ t n + δ n 2 - σ t n 2 , ν t n + δ n - ν t n , γ t n + δ n - γ t n ) .

By [21, Theorem 8.7], (5.8) and (5.9) together imply that μ δ n , t n converges weakly to an infinitely divisible distribution μ with Lévy triplet ( σ ˙ t 2 , ν ˙ t , γ ˙ t ) . In particular, the corresponding characteristic functions converge pointwise,

(5.10) lim δ n 0 μ ^ δ n , t n ( z ) = μ ^ ( z ) .

Define μ n via its characteristic function μ ^ n ( z ) as

μ ^ n ( z ) := exp { δ n - 1 ( μ ^ δ n , t n ( z ) δ n - 1 ) } = exp { δ n - 1 { 0 } ( e i z x - 1 ) μ δ n , t n δ n ( d x ) } .

From [21, pp. 38–39], in particular [21, (8.7)], it follows that μ n is infinitely divisible with Lévy triplet ( 0 , δ n - 1 μ δ n , t n δ n , 0 ) . Moreover,

μ ^ n ( z ) = exp { δ n - 1 ( μ ^ δ n , t n ( z ) δ n - 1 ) } = exp { δ n - 1 ( e δ n log ( μ ^ δ n , t n ( z ) ) - 1 ) } = exp { δ n - 1 ( δ n log ( μ ^ ( z ) ) + O ( δ n 2 ) ) } ,

where the last equality is due to (5.10) which, as in [21, proof of Theorem 8.7], implies that

lim δ n 0 log μ ^ δ n , t n ( z ) = log μ ^ ( z )

uniformly on any compact set. Hence, lim n μ ^ n ( z ) = μ ^ ( z ) for every z, implying μ n μ weakly. Now [21, Theorem 8.7] gives

lim δ n 0 1 δ n { 0 } f ( x ) μ δ n , t n δ n ( d x ) = { 0 } f ( x ) ν ˙ t ( d x )

for bounded and continuous functions vanishing in a neighborhood of 0. ∎

An important consequence of Lemma 9 is the following.

Corollary 3.

If the conditions of Lemma 9 hold, and if x > 0 is a continuity point of y ν ˙ t ( y , ) , then

lim δ 0 , s t 1 δ F ¯ Δ L s + δ ( x ) = ν ˙ t ( x , ) .

Proof.

Notice that f ( y ) := 1 ( x , ) ( y ) is bounded, vanishes in a neighborhood of 0 but is not continuous. For m > 0 , let f̄ and f̲ be polygon functions given by

f̄ ( y ) = { 0 , y x - 1 m , m ( y - x + 1 m ) , y ( x - 1 m , x ) , 1 , y x ,
f̲ ( y ) = { 0 , y x , m ( y - x ) , y ( x , x + 1 m ) , 1 , y x + 1 m .

Then f̲ ≤ f ≤ f̄ ,

lim δ n 0 1 δ n { 0 } g ( y ) μ δ n , t n δ n ( d y ) = { 0 } g ( y ) ν ˙ t ( d y ) , g { f̲ , f̄ } ,

and

lim δ n 0 1 δ n { 0 } ( f̄ - f̲ ) μ δ n , t n δ n ( d y ) ν ˙ t [ x - 1 m , x + 1 m ] ,

which tends to 0 as m . ∎
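The sandwich used in the proof is elementary to implement. The sketch below constructs the two polygon functions with numpy and checks the ordering f̲ ≤ 1 ( x , ) ≤ f̄ on a grid; the values of x and m are arbitrary illustrative choices.

```python
import numpy as np

# Sketch of the piecewise linear functions sandwiching the indicator 1_{(x, inf)}
# as in the proof of Corollary 3 (x and m are arbitrary illustrative choices).
x, m = 1.0, 4.0
f_upper = lambda y: np.clip(m * (y - x + 1.0 / m), 0.0, 1.0)   # = 1 for y >= x, 0 for y <= x - 1/m
f_lower = lambda y: np.clip(m * (y - x), 0.0, 1.0)             # = 0 for y <= x, 1 for y >= x + 1/m
grid = np.linspace(0.0, 2.0, 2001)
ind = (grid > x).astype(float)
assert np.all(f_lower(grid) <= ind) and np.all(ind <= f_upper(grid))
```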

Lemma 10.

Let f : { ( t , δ ) [ 0 , T ) × ( 0 , T ] : t + δ T } R , and suppose there exists a continuous function g : [ 0 , T ] R such that, for all t [ 0 , T ] , lim δ 0 , s t f ( s , δ ) = g ( t ) . Then lim δ 0 sup t [ 0 , T - δ ] | f ( t , δ ) - g ( t ) | = 0 .

Proof.

By assumption, lim δ 0 | f ( t , δ ) - g ( t ) | = 0 for every t [ 0 , T ) . Suppose that the convergence is not uniform in t. Then there exist ε > 0 and a sequence { ( t n , δ n ) } n 1 { ( t , δ ) [ 0 , T ) × ( 0 , T ] : t + δ T } with δ n 0 such that | f ( t n , δ n ) - g ( t n ) | > ε for all n. By the Bolzano–Weierstrass theorem, there exist t [ 0 , T ] and a subsequence { t n k } k 1 of { t n } n 1 such that lim k t n k = t . Hence,

ε < | f ( t n k , δ n k ) - g ( t n k ) | | f ( t n k , δ n k ) - g ( t ) | + | g ( t n k ) - g ( t ) | 0 as k .

From this contradiction, we conclude that the convergence is indeed uniform, thereby proving the statement. ∎

Lemma 11.

Let { α δ m , k } and { η δ m , k } satisfy (2.2). Then

lim m k = 0 m - 1 α δ m , k = α T , lim m k = 0 m - 1 ( 1 + η δ m , k ) = ( 1 + η ) T .

Proof.

We prove the first statement for α δ m , k . The proof of the second statement is completely analogous and omitted. Notice that

α δ m , k = ( ( 1 - δ m , k ( 1 - α δ m , k δ m , k ) ) 1 / δ m , k ) δ m , k .

We immediately use this to see that

log [ k = 0 m - 1 α δ m , k ] = k = 0 m - 1 δ m , k log [ ( 1 - δ m , k ( 1 - α δ m , k δ m , k ) ) 1 / δ m , k ] .

By (2.2) and the well-known convergence result

lim δ 0 ( 1 + δ a + o ( δ ) ) 1 / δ = e a ,

for any real a and any higher-order term o ( δ ) , we get

lim m sup k m - 1 | ( 1 - δ m , k ( 1 - α δ m , k δ m , k ) ) 1 / δ m , k - α | = 0 .

Hence, we conclude that

lim m log [ k = 0 m - 1 α δ m , k ] = T log ( α ) .

This proves the result for k = 0 m - 1 α δ m , k . ∎
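For a concrete illustration of Lemma 11, the sketch below evaluates the product over a uniform partition of [ 0 , T ] for the example choice α_{δ} = α^{δ} - δ^{2}, which satisfies ( 1 - α_{δ} ) / δ → - log α , consistent with the convergence used in the proof above; the values of α and T are arbitrary assumptions.

```python
import numpy as np

# Numerical illustration of Lemma 11 on a uniform partition of [0, T], using the
# example choice alpha_delta = alpha**delta - delta**2 (so (1 - alpha_delta)/delta -> -log(alpha)).
# alpha and T are arbitrary illustrative values.
alpha, T = 0.995, 1.0
for m in (10, 100, 1_000, 10_000):
    delta = T / m
    alpha_delta = alpha**delta - delta**2
    print(m, alpha_delta**m, alpha**T)      # the product approaches alpha**T
```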

Acknowledgements

The authors thank two anonymous reviewers for comments and suggestions that significantly improved the paper.

References

[1] B. Acciaio and I. Penner, Dynamic risk measures, Advanced Mathematical Methods for Finance, Springer, Heidelberg (2011), 1–34. DOI: 10.1007/978-3-642-18412-3_1.

[2] P. Artzner, F. Delbaen, J.-M. Eber, D. Heath and H. Ku, Coherent multiperiod risk adjusted values and Bellman's principle, Ann. Oper. Res. 152 (2007), 5–22. DOI: 10.1007/s10479-006-0132-6.

[3] P. Cheridito, F. Delbaen and M. Kupper, Coherent and convex monetary risk measures for unbounded càdlàg processes, Finance Stoch. 9 (2005), no. 3, 369–387. DOI: 10.1007/s00780-004-0150-7.

[4] P. Cheridito, F. Delbaen and M. Kupper, Dynamic monetary risk measures for bounded discrete-time processes, Electron. J. Probab. 11 (2006), no. 3, 57–106. DOI: 10.1214/EJP.v11-302.

[5] A. Dembo and O. Zeitouni, Large Deviations Techniques and Applications, 2nd ed., Appl. Math. (N. Y.) 38, Springer, New York, 1998. DOI: 10.1007/978-1-4612-5320-4.

[6] P. Embrechts and R. Wang, Seven proofs for the subadditivity of expected shortfall, Depend. Model. 3 (2015), no. 1, 126–140. DOI: 10.1515/demo-2015-0009.

[7] H. Engsner, M. Lindholm and F. Lindskog, Insurance valuation: A computable multi-period cost-of-capital approach, Insurance Math. Econom. 72 (2017), 250–264. DOI: 10.1016/j.insmatheco.2016.12.002.

[8] H. Föllmer and I. Penner, Convex risk measures and the dynamics of their penalty functions, Statist. Decisions 24 (2006), no. 1, 61–96. DOI: 10.1524/stnd.2006.24.1.61.

[9] H. Föllmer and A. Schied, Stochastic Finance. An Introduction in Discrete Time, 4th ed., De Gruyter, Berlin, 2016. DOI: 10.1515/9783110463453.

[10] A. Jobert and L. C. G. Rogers, Valuations and dynamic convex risk measures, Math. Finance 18 (2008), no. 1, 1–22. DOI: 10.1111/j.1467-9965.2007.00320.x.

[11] O. Kallenberg, Foundations of Modern Probability, 2nd ed., Springer, New York, 2002. DOI: 10.1007/978-1-4757-4015-8.

[12] P. E. Kloeden and E. Platen, Numerical Solution of Stochastic Differential Equations, Appl. Math. (N. Y.) 23, Springer, Berlin, 1992. DOI: 10.1007/978-3-662-12616-5.

[13] S. Kusuoka and Y. Morimoto, Homogeneous law-invariant coherent multiperiod value measures and their limits, J. Math. Sci. Univ. Tokyo 14 (2007), no. 2, 117–156.

[14] D. Madan, M. Pistorius and M. Stadje, On dynamic spectral risk measures, a limit theorem and optimal portfolio allocation, Finance Stoch. 21 (2017), no. 4, 1073–1102. DOI: 10.1007/s00780-017-0339-1.

[15] C. Möhr, Market-consistent valuation of insurance liabilities by cost of capital, Astin Bull. 41 (2011), no. 2, 315–341.

[16] A. Pelsser and A. Salahnejhad Ghalehjooghi, Time-consistent actuarial valuations, Insurance Math. Econom. 66 (2016), 97–112. DOI: 10.1016/j.insmatheco.2015.10.010.

[17] P. E. Protter, Stochastic Integration and Differential Equations, 2nd ed., Appl. Math. (N. Y.) 21, Springer, Berlin, 2004.

[18] F. Riedel, Dynamic coherent risk measures, Stochastic Process. Appl. 112 (2004), no. 2, 185–200. DOI: 10.1016/j.spa.2004.03.004.

[19] B. Roorda, J. M. Schumacher and J. Engwerda, Coherent acceptability measures in multiperiod models, Math. Finance 15 (2005), no. 4, 589–612. DOI: 10.1111/j.1467-9965.2005.00252.x.

[20] E. Rosazza Gianin, Risk measures via g-expectations, Insurance Math. Econom. 39 (2006), no. 1, 19–34. DOI: 10.1016/j.insmatheco.2006.01.002.

[21] K.-i. Sato, Lévy Processes and Infinitely Divisible Distributions, Cambridge University Press, Cambridge, 1999.

[22] M. Stadje, Extending dynamic convex risk measures from discrete time to continuous time: A convergence approach, Insurance Math. Econom. 47 (2010), no. 3, 391–404. DOI: 10.1016/j.insmatheco.2010.08.005.

Received: 2019-04-29
Revised: 2020-04-01
Accepted: 2020-04-01
Published Online: 2020-04-17
Published in Print: 2021-11-01

© 2020 Walter de Gruyter GmbH, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
