
Epsilon-Efficiency in a Dynamic Partnership with Adverse Selection and Moral Hazard

Published/Copyright: November 29, 2021

Abstract

For a dynamic partnership with adverse selection and moral hazard, we design a direct profit-division mechanism that satisfies ϵ-efficiency, periodic Bayesian incentive compatibility, interim individual rationality, and ex-post budget balance. In addition, we design a voting mechanism that implements the profit-division rule associated with this direct mechanism in perfect Bayesian equilibrium. To establish these possibility results, we assume that the partnership exhibits intertemporal complementarities instead of contemporaneous complementarities; equivalently, an agent's current effort affects other agents' future optimal efforts rather than their current optimal efforts. This modelling assumption fits a wide range of economic settings.


Corresponding author: Vi Cao, School of Economics, Sichuan University, Chengdu, China, E-mail:

Funding source: Sichuan University

Award Identifier / Grant number: Unassigned

Acknowledgments

I thank my advisors, Paulo Barelli and Srihari Govindan, for their guidance and encouragement throughout this research. I also thank the Editor-in-Chief, an anonymous referee, Yu Awaya, William Thomson, Narayana Kocherlakota, Asen Kochov, Zizhen Ma, Nicolas Riquelme, Can Urgun, Thomas Mariotti, Andrea Attar, and seminar participants at the University of Rochester, Sichuan University, the 2017 Fall Midwest Economic Theory Conference (Dallas), the 2018 Spring Midwest Economic Theory Conference (Philadelphia), the 2018 North American Summer Meeting of the Econometric Society (Davis), and the 29th International Conference on Game Theory (Stony Brook) for helpful discussions and comments. All errors are my own.

Appendix A: First-best Joint Effort

In this appendix, we show that the maximization problem (3.1) has a unique solution $(e_{i,t}^{*})_{i\in N'}$; in addition, $e_{i,t}^{*} = \theta_{0,t}\theta_{i,t}(1+\delta\gamma\beta_{N,t+1})$ for some $\beta_{N,t+1}\in\mathbb{R}_{+}$. The proof is by induction. By definition, $U_{N,T+1}(\theta_{T+1})=0$; hence, $\mathbb{E}_{\theta_{-0,T+1}}U_{N,T+1}(\theta_{T+1})=\theta_{0,T+1}\beta_{N,T+1}$ for $\beta_{N,T+1}=0$. Fix some period $t$. Assume that $\mathbb{E}_{\theta_{-0,t+1}}U_{N,t+1}(\theta_{t+1})=\theta_{0,t+1}\beta_{N,t+1}$ for some $\beta_{N,t+1}\in\mathbb{R}_{+}$. We have

$$\mathbb{E}\big[U_{N,t+1}(\theta_{t+1})\mid\theta_{t},(e_{i,t})_{i\in N'}\big] \overset{[21]}{=} \mathbb{E}_{\theta_{0,t+1}}\Big[\mathbb{E}_{\theta_{-0,t+1}}U_{N,t+1}(\theta_{t+1})\,\Big|\,\theta_{t},(e_{i,t})_{i\in N'}\Big]$$

$$= \mathbb{E}_{\theta_{0,t+1}}\big[\theta_{0,t+1}\beta_{N,t+1}\mid\theta_{t},(e_{i,t})_{i\in N'}\big] = \mathbb{E}\big[\theta_{0,t+1}\mid\theta_{t},(e_{i,t})_{i\in N'}\big]\,\beta_{N,t+1}$$

$$= \Big(\theta_{0,t}+\gamma\sum_{i\in N'}e_{i,t}\Big)\beta_{N,t+1}.$$

The first-order condition for maximization problem (3.1) is: for each $i\in N'$,

$$1+\delta\,\frac{\partial\,\mathbb{E}\big[U_{N,t+1}(\theta_{t+1})\mid\theta_{t},(e_{i,t})_{i\in N'}\big]}{\partial e_{i,t}} = \frac{e_{i,t}}{\theta_{0,t}\theta_{i,t}},$$

which holds if and only if $e_{i,t}=\theta_{0,t}\theta_{i,t}(1+\delta\gamma\beta_{N,t+1})$. Thus, this maximization problem has a unique solution. We have

$$\mathbb{E}_{\theta_{-0,t}}U_{N,t}(\theta_{t}) = \mathbb{E}_{\theta_{-0,t}}\bigg[\sum_{i\in N'}\Big(\theta_{0,t}\theta_{i,t}(1+\delta\gamma\beta_{N,t+1})-\frac{1}{2}\theta_{0,t}\theta_{i,t}(1+\delta\gamma\beta_{N,t+1})^{2}\Big)\bigg] + \mathbb{E}_{\theta_{-0,t}}\bigg[\delta\Big(\theta_{0,t}+\gamma\sum_{i\in N'}\theta_{0,t}\theta_{i,t}(1+\delta\gamma\beta_{N,t+1})\Big)\beta_{N,t+1}\bigg]$$

$$= \theta_{0,t}\beta_{N,t},\quad\text{where}$$

$$\beta_{N,t} = \mathbb{E}_{\theta_{-0,t}}\bigg[\frac{1}{2}(1+\delta\gamma\beta_{N,t+1})^{2}\sum_{i\in N'}\theta_{i,t}\bigg]+\delta\beta_{N,t+1} \;\ge\; 0.$$
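As a sanity check on this recursion, the following minimal Python sketch runs the backward induction; all parameter values ($\delta$, $\gamma$, $T$, $|N|$, and the type distribution) are illustrative placeholders, not values from the paper.

```python
# Backward induction for beta_{N,t}; all parameter values are illustrative.
delta, gamma = 0.9, 0.1          # discount factor and intertemporal spillover
T, n = 10, 3                     # horizon and partnership size |N|
types = [1.0, 2.0, 3.0]          # support of theta_{i,t}
p = [0.3, 0.4, 0.3]              # p(theta_k)
theta_bar = sum(pk * tk for pk, tk in zip(p, types))   # E[theta_i]

beta = [0.0] * (T + 2)           # beta[t] = beta_{N,t}, with beta_{N,T+1} = 0
for t in range(T, 0, -1):
    # beta_{N,t} = (n/2)(1 + delta*gamma*beta_{N,t+1})^2 * theta_bar + delta*beta_{N,t+1}
    beta[t] = (n / 2) * (1 + delta * gamma * beta[t + 1]) ** 2 * theta_bar \
              + delta * beta[t + 1]
print([round(b, 3) for b in beta[1:]])
```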

Lemma A.1

$\beta_{N,t}$ and $\beta_{N,t}-\beta_{\{i\},t}$ are strictly increasing in $T-t$.

Proof

Let $\bar\theta \equiv \mathbb{E}\,\theta_{i} = \sum_{k=1}^{K}p(\tilde\theta_{k})\,\tilde\theta_{k}$.

Step 1. To show that $\beta_{N,t}$ is strictly increasing in $T-t$, it suffices to show that $\beta_{N,t}>\beta_{N,t+1}$ for each $t\in\{1,\dots,T\}$. The proof is by induction. It is obvious that $\beta_{N,T}>\beta_{N,T+1}=0$. Suppose $\beta_{N,t}>\beta_{N,t+1}$ for some $t\in\{2,\dots,T\}$. Then

$$\beta_{N,t-1} = \frac{|N|}{2}(1+\delta\gamma\beta_{N,t})^{2}\,\bar\theta+\delta\beta_{N,t} > \frac{|N|}{2}(1+\delta\gamma\beta_{N,t+1})^{2}\,\bar\theta+\delta\beta_{N,t+1} = \beta_{N,t}.$$

Step 2. To show that $\beta_{N,t}-\beta_{\{i\},t}$ is strictly increasing in $T-t$, it suffices to show that $\beta_{N,t}-\beta_{\{i\},t}>\beta_{N,t+1}-\beta_{\{i\},t+1}$ for each $t\in\{1,\dots,T\}$. The proof is by induction. It is obvious that $\beta_{N,T}-\beta_{\{i\},T}>\beta_{N,T+1}-\beta_{\{i\},T+1}=0$. Suppose $\beta_{N,t}-\beta_{\{i\},t}>\beta_{N,t+1}-\beta_{\{i\},t+1}$ for some $t\in\{2,\dots,T\}$. Then $x_{t}\equiv n\beta_{N,t}-\beta_{\{i\},t}>n\beta_{N,t+1}-\beta_{\{i\},t+1}\equiv x_{t+1}$ and $y_{t}\equiv n\beta_{N,t}^{2}-\beta_{\{i\},t}^{2}>n\beta_{N,t+1}^{2}-\beta_{\{i\},t+1}^{2}\equiv y_{t+1}$, which implies

$$\beta_{N,t-1}-\beta_{\{i\},t-1} = \frac{1}{2}\big(n-1+2\delta\gamma x_{t}+\delta^{2}\gamma^{2}y_{t}\big)\bar\theta+\delta\big(\beta_{N,t}-\beta_{\{i\},t}\big) > \frac{1}{2}\big(n-1+2\delta\gamma x_{t+1}+\delta^{2}\gamma^{2}y_{t+1}\big)\bar\theta+\delta\big(\beta_{N,t+1}-\beta_{\{i\},t+1}\big) = \beta_{N,t}-\beta_{\{i\},t}. \;\Box$$
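Lemma A.1 can likewise be checked numerically. The sketch below reuses the recursion with $n=|N|=3$ against a sole member ($n=1$), again with placeholder parameters.

```python
# Check Lemma A.1 numerically: beta_{N,t} and beta_{N,t} - beta_{{i},t}
# should both be strictly increasing in T - t (i.e., decreasing in t).
def beta_path(n, T=10, delta=0.9, gamma=0.1, theta_bar=2.0):
    b = [0.0] * (T + 2)                       # b[T+1] = 0
    for t in range(T, 0, -1):
        b[t] = (n / 2) * (1 + delta * gamma * b[t + 1]) ** 2 * theta_bar \
               + delta * b[t + 1]
    return b

bN, bi = beta_path(3), beta_path(1)
assert all(bN[t] > bN[t + 1] for t in range(1, 11))                      # Step 1
assert all(bN[t] - bi[t] > bN[t + 1] - bi[t + 1] for t in range(1, 11))  # Step 2
```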

Appendix B: Design

B.1 Bonuses

To construct the bonus $b_{h_{0,t}}$, for each $e_{i,t}\in\mathbb{R}_{+}$, each $x\in\{0,1\}$, and each $\theta_{i,t}\in\Theta_{i,t}$, define

$$V(e_{i,t},x,\theta_{i,t}) = -c_{i,t}(e_{i,t},\theta_{0,t},\theta_{i,t}) + q_{h_{0,t}}\Big[s_{i,t}(h_{0,t})\,\pi\big(e_{i,t},e_{-i,t}^{*,r}\big)+\delta\,\xi_{i,t+1}(h_{t},e_{t}^{*,r},D,x)\big(\theta_{0,t}+\gamma\,\pi(e_{i,t},e_{-i,t}^{*,r})\big)\Big] + (1-q_{h_{0,t}})\Big[\pi\big(e_{i,t},e_{-i,t}^{-i,r}\big)+\delta\,\xi_{i,t+1}(h_{t},e_{t}^{-i,r},D,x)\big(\theta_{0,t}+\gamma\,\pi(e_{i,t},e_{-i,t}^{-i,r})\big)\Big].$$

Inequality (6.1) holds if $V(e_{i,t}^{*,r},1,\theta_{i,t})+(1-q_{h_{0,t}})\,b_{h_{0,t}} \ge \max_{e_{i,t}\in[0,e_{i,t}^{*,r}]}V(e_{i,t},0,\theta_{i,t})$. We solve the maximization problem on the right-hand side below. The marginal gain of $e_{i,t}$ is

$$MG \equiv q_{h_{0,t}}\Big[s_{i,t}(h_{0,t})+\delta\gamma\,\xi_{i,t+1}(h_{t},e_{t}^{*,r},D,0)\Big]+(1-q_{h_{0,t}})\Big[1+\delta\gamma\,\xi_{i,t+1}(h_{t},e_{t}^{-i,r},D,0)\Big],$$

whereas the marginal cost of $e_{i,t}$ is $MC(e_{i,t})\equiv e_{i,t}/(\theta_{0,t}\theta_{i,t})$. If $MG>MC(e_{i,t}^{*,r})$, then the solution is $e_{i,t}^{*,r}$; otherwise, the solution is $\theta_{0,t}\theta_{i,t}MG$. Finally, let

$$b_{h_{0,t}} = \frac{1}{1-q_{h_{0,t}}}\max\bigg\{0,\;\max_{(i,\theta_{i,t})}\Big[\max_{e_{i,t}\in[0,e_{i,t}^{*,r}]}V(e_{i,t},0,\theta_{i,t})-V(e_{i,t}^{*,r},1,\theta_{i,t})\Big]\bigg\}.$$
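The following small Python sketch mirrors the case logic of the deviation problem: the objective $MG\cdot e - e^{2}/(2\theta_{0,t}\theta_{i,t})$ (the $e$-dependent part of $V$) is concave, so the optimizer is the unconstrained point $\theta_{0,t}\theta_{i,t}MG$ truncated at the cap $e_{i,t}^{*,r}$. The function names and numeric inputs are illustrative, not part of the mechanism.

```python
# Deviation problem from B.1: maximize the e-dependent part of V, namely
# MG*e - e**2 / (2*theta0*thetai), over e in [0, e_cap]. Concavity implies
# the optimum is the stationary point theta0*thetai*MG, truncated at e_cap.
# All inputs are placeholder numbers standing in for the model objects.
def best_deviation(MG, theta0, thetai, e_cap):
    return min(e_cap, theta0 * thetai * MG)

def deviation_gain(MG, theta0, thetai, e_cap):
    e = best_deviation(MG, theta0, thetai, e_cap)
    return MG * e - e ** 2 / (2 * theta0 * thetai)

print(best_deviation(MG=0.8, theta0=1.5, thetai=2.0, e_cap=3.0))  # interior: 2.4
print(best_deviation(MG=1.2, theta0=1.5, thetai=2.0, e_cap=3.0))  # capped at 3.0
```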

B.2 Proof for Lemma 6.1

Fix a history $h_{0,t}\equiv(h_{t},\theta_{0,t},\hat\theta_{i,t},(\theta_{j,t})_{j\ne i})$. If $\pi_{\tau}<\pi(e_{\tau}^{r})$ for some $\tau<t$, then it is obviously optimal for each agent to exert the recommended effort. In the following, we assume either $t=1$ or $\pi_{\tau}\ge\pi(e_{\tau}^{r})$ for each $\tau<t$. If agent $i$ is recommended not to work, then she expects to receive no profit share from period $t$ onwards; thus, her optimal effort is $e_{i,t}^{*}=0$. Suppose agent $i$ receives recommended effort $e_{i,t}^{*,r}$. Due to the bonus $b_{h_{0,t}}$, she prefers exerting some $e_{i,t}\ge e_{i,t}^{*,r}$ to exerting any $e_{i,t}<e_{i,t}^{*,r}$. Thus, her optimal effort $e_{i,t}^{*}$ is the solution to $\max_{e_{i,t}\ge e_{i,t}^{*,r}}V(e_{i,t},1,\theta_{i,t})$, where $V$ is defined in Appendix B.1. The marginal gain of $e_{i,t}$ is

M G q h 0 , t s i , t ( h 0 , t ) + δ γ ξ i , t + 1 ( h t , e t * , r , D , 1 ) + ( 1 q h 0 , t ) 1 + δ γ ξ i , t + 1 ( h t , e t i , r , D , 1 ) ,

whereas the marginal cost of e i,t is MC(e i,t ) ≡ e i,t /(θ 0,t θ i,t ). If M G < M C ( e i , t * , r ) , then the solution is e i , t * , r ; otherwise, the solution is θ 0,t θ i,t MG′. Since the maximal aggregate payoff of a | N h 0 , t | –member partnership is β N h 0 , t , t + 1 θ 0 , t + 1 , we have ξ i , t + 1 ( h t , e t * , r , D , 1 ) β N h 0 , t , t + 1 . Since the maximal aggregate payoff of a | N h 0 , t | –member partnership is increasing in | N h 0 , t | , we have ξ i , t + 1 ( h t , e t i , r , D , 1 ) = β { i } , t + 1 β N h 0 , t , t + 1 . Hence, if θ ̂ i , t = θ i , t , then M G M C ( e i , t * , r ) = 1 + δ γ β N h 0 , t , t + 1 . It follows that agent i’s optimal effort is e i , t * = e i , t * , r . □

B.3 Increasing Differences

Fix some history $h_{0,t}\equiv(h_{t},\theta_{0,t},\hat\theta_{i,t},(\theta_{j,t})_{j\ne i})$, where $h_{t}\equiv(\theta_{0,\tau},\hat\theta_{-0,\tau},e_{\tau}^{r},\pi_{\tau},m_{\tau})_{\tau<t}$. Let $N_{h_{0,t}}$ be the set of agents that were recommended to work in period $t-1$. If either $|N_{h_{0,t}}|=1$ or $\pi_{\tau}<\pi(e_{\tau}^{r})$ for some $\tau<t$, then agent $i$'s expected lifetime payoff $\tilde W_{i,t}(h_{t},\theta_{t};\hat\theta_{i,t};D)$ is independent of her report $\hat\theta_{i,t}$; thus, $\tilde W_{i,t}(h_{t},\cdot,\theta_{-i,t};\cdot;D)$ satisfies increasing differences. In the following, we assume $|N_{h_{0,t}}|>1$, and either $t=1$ or $\pi_{\tau}\ge\pi(e_{\tau}^{r})$ for each $\tau<t$, and show that $\tilde W_{i,t}(h_{t},\cdot,\theta_{-i,t};\cdot;D)$ satisfies increasing differences.

Step 1. By construction, $s_{i,t}(h_{0,t}) = \dfrac{\hat\theta_{i,t}+\delta\gamma\beta_{N_{h_{0,t}},t+1}\Big(\hat\theta_{i,t}-\sum_{j\in N_{h_{0,t}}}\hat\theta_{j,t}/|N_{h_{0,t}}|\Big)}{\sum_{j\in N_{h_{0,t}}}\hat\theta_{j,t}}$.

It is easy to show that $s_{i,t}(h_{0,t})$ is strictly increasing in $\hat\theta_{i,t}$.
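A quick numerical check of this monotonicity claim, with placeholder reports and a placeholder coefficient $d=\delta\gamma\beta_{N_{h_{0,t}},t+1}$:

```python
# s_i = (th_i + d*(th_i - mean(th))) / sum(th) is strictly increasing in th_i.
def share(th, i, d):
    return (th[i] + d * (th[i] - sum(th) / len(th))) / sum(th)

th = [1.0, 2.0, 3.0]                                 # placeholder reports
grid = [1.0 + 0.1 * s for s in range(21)]            # vary agent 0's report
vals = [share([x, th[1], th[2]], 0, 0.5) for x in grid]
assert all(a < b for a, b in zip(vals, vals[1:]))    # monotone in own report
```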

Step 2. As shown in Appendix B.2, if agent $i$ is recommended not to work, then her optimal effort is $e_{i,t}^{*}=0$; if agent $i$ receives recommended effort $e_{i,t}^{*,r}$, then her optimal effort is either $e_{i,t}^{*}=e_{i,t}^{*,r}$ (corner solution) or $e_{i,t}^{*}=\theta_{0,t}\theta_{i,t}MG'$ (interior solution). Thus, agent $i$'s maximal payoff from reporting $\hat\theta_{i,t}$ is

$$\tilde W_{i,t}(h_{t},\theta_{t};\hat\theta_{i,t};D) = \Big(1-\epsilon^{*}+\frac{\epsilon^{*}}{|N_{h_{0,t}}|}\Big)V(e_{i,t}^{*},1,\theta_{i,t}),$$

where $V$ is defined in Appendix B.1. If $e_{i,t}^{*}$ is a corner solution, then $\partial V(e_{i,t}^{*},1,\theta_{i,t})/\partial\theta_{i,t} = -\partial c_{i,t}(e_{i,t}^{*},\theta_{0,t},\theta_{i,t})/\partial\theta_{i,t}$ (as all other terms of $V(e_{i,t}^{*},1,\theta_{i,t})$ are independent of $\theta_{i,t}$). If $e_{i,t}^{*}=\theta_{0,t}\theta_{i,t}MG'$, then

$$\frac{\partial V(e_{i,t}^{*},1,\theta_{i,t})}{\partial\theta_{i,t}} = -\frac{\partial c_{i,t}(e_{i,t}^{*},\theta_{0,t},\theta_{i,t})}{\partial\theta_{i,t}}+\frac{\partial e_{i,t}^{*}}{\partial\theta_{i,t}}\Big(MG'-\frac{e_{i,t}^{*}}{\theta_{0,t}\theta_{i,t}}\Big) = -\frac{\partial c_{i,t}(e_{i,t}^{*},\theta_{0,t},\theta_{i,t})}{\partial\theta_{i,t}}.$$

Consequently, $\dfrac{\partial\tilde W_{i,t}(h_{t},\theta_{t};\hat\theta_{i,t};D)}{\partial\theta_{i,t}} = \Big(1-\epsilon^{*}+\dfrac{\epsilon^{*}}{|N_{h_{0,t}}|}\Big)\dfrac{(e_{i,t}^{*})^{2}}{2\theta_{0,t}\theta_{i,t}^{2}}$.

Step 3. With a slight abuse of notation, we let $e_{i,t}^{*}(\hat\theta_{i,t})$ be agent $i$'s optimal effort conditional on reporting $\hat\theta_{i,t}$ and receiving recommended effort $e_{i,t}^{*,r}\equiv\theta_{0,t}\hat\theta_{i,t}(1+\delta\gamma\beta_{N_{h_{0,t}},t+1})$. We show that $e_{i,t}^{*}(\hat\theta_{i,t})$ is increasing in $\hat\theta_{i,t}$. Let $\hat\theta'_{i,t}>\hat\theta_{i,t}$. If both $e_{i,t}^{*}(\hat\theta'_{i,t})$ and $e_{i,t}^{*}(\hat\theta_{i,t})$ are corner solutions, then $e_{i,t}^{*}(\hat\theta'_{i,t})=\theta_{0,t}\hat\theta'_{i,t}(1+\delta\gamma\beta_{N_{h_{0,t}},t+1})>\theta_{0,t}\hat\theta_{i,t}(1+\delta\gamma\beta_{N_{h_{0,t}},t+1})=e_{i,t}^{*}(\hat\theta_{i,t})$. If both $e_{i,t}^{*}(\hat\theta'_{i,t})$ and $e_{i,t}^{*}(\hat\theta_{i,t})$ are interior solutions, then $e_{i,t}^{*}(\hat\theta'_{i,t})=\theta_{0,t}\theta_{i,t}MG'(\hat\theta'_{i,t})>\theta_{0,t}\theta_{i,t}MG'(\hat\theta_{i,t})=e_{i,t}^{*}(\hat\theta_{i,t})$ [the inequality is due to the fact that $s_{i,t}(h_{0,t})$ is strictly increasing in $\hat\theta_{i,t}$]. If $e_{i,t}^{*}(\hat\theta'_{i,t})$ is interior and $e_{i,t}^{*}(\hat\theta_{i,t})$ is corner, then $e_{i,t}^{*}(\hat\theta'_{i,t})\ge\theta_{0,t}\hat\theta'_{i,t}(1+\delta\gamma\beta_{N_{h_{0,t}},t+1})>\theta_{0,t}\hat\theta_{i,t}(1+\delta\gamma\beta_{N_{h_{0,t}},t+1})=e_{i,t}^{*}(\hat\theta_{i,t})$ [the first inequality is due to the fact that agent $i$'s optimal effort weakly exceeds her recommended effort]. If $e_{i,t}^{*}(\hat\theta'_{i,t})$ is corner and $e_{i,t}^{*}(\hat\theta_{i,t})$ is interior, then $e_{i,t}^{*}(\hat\theta'_{i,t})>\theta_{0,t}\theta_{i,t}MG'(\hat\theta'_{i,t})>\theta_{0,t}\theta_{i,t}MG'(\hat\theta_{i,t})=e_{i,t}^{*}(\hat\theta_{i,t})$ [the first inequality is due to the fact that $MG'(\hat\theta'_{i,t})<MC\big(e_{i,t}^{*}(\hat\theta'_{i,t})\big)$].

Step 4. Without loss of generality, assume $\theta'_{i,t}-\theta_{i,t}=\epsilon$ is sufficiently small. We have

$$\begin{aligned} &\tilde W_{i,t}(h_{t},\theta'_{i,t},\theta_{-i,t};\hat\theta'_{i,t};D)-\tilde W_{i,t}(h_{t},\theta_{i,t},\theta_{-i,t};\hat\theta'_{i,t};D)-\Big[\tilde W_{i,t}(h_{t},\theta'_{i,t},\theta_{-i,t};\hat\theta_{i,t};D)-\tilde W_{i,t}(h_{t},\theta_{i,t},\theta_{-i,t};\hat\theta_{i,t};D)\Big]\\ &= \bigg[\frac{\partial\tilde W_{i,t}(h_{t},\theta_{i,t},\theta_{-i,t};\hat\theta'_{i,t};D)}{\partial\theta_{i,t}}-\frac{\partial\tilde W_{i,t}(h_{t},\theta_{i,t},\theta_{-i,t};\hat\theta_{i,t};D)}{\partial\theta_{i,t}}\bigg]\,\epsilon\\ &= \Big(1-\epsilon^{*}+\frac{\epsilon^{*}}{|N_{h_{0,t}}|}\Big)\frac{1}{2\theta_{0,t}\theta_{i,t}^{2}}\Big(\big[e_{i,t}^{*}(\hat\theta'_{i,t})\big]^{2}-\big[e_{i,t}^{*}(\hat\theta_{i,t})\big]^{2}\Big)\,\epsilon \;\ge\; 0. \end{aligned}$$

The last inequality is due to Step 3.

B.4 Transfers

Let $M_{i,t}(h_{0,t},D)=0$ for each $i\notin N_{h_{0,t}}$. We compute $M_{i,t}(h_{0,t},D)$ for each $i\in N_{h_{0,t}}$ below. If $N_{h_{0,t}}=\{i\}$, let $M_{i,t}(h_{0,t},D)=0$. In the following, we assume $|N_{h_{0,t}}|>1$.

Step 1. For each $i\in N_{h_{0,t}}$ and each $k\in\{1,\dots,K-1\}$, let

$$E\Delta\tilde W_{i,t}(h_{t},\theta_{0,t},\tilde\theta_{k};D) \equiv \mathbb{E}_{(\theta_{j,t})_{j\ne i}}\Big[\tilde W_{i,t}\big(h_{t},\theta_{0,t},\tilde\theta_{k},(\theta_{j,t})_{j\ne i};\tilde\theta_{k+1};D\big)-\tilde W_{i,t}\big(h_{t},\theta_{0,t},\tilde\theta_{k},(\theta_{j,t})_{j\ne i};\tilde\theta_{k};D\big)\Big]$$

be the expected welfare difference for an agent who has type $\tilde\theta_{k}$ but raises her report from $\tilde\theta_{k}$ to $\tilde\theta_{k+1}$ (taking the expectation over other agents' current types).

Step 2. We let $M_{t}[h_{t},\theta_{0,t},\hat\theta_{-0,t},D]\ne(0,\dots,0)$ only if the reported type profile $\hat\theta_{-0,t}$ satisfies the following condition: for some $k$, one agent in $N_{h_{0,t}}$ reports $\tilde\theta_{k}$ and all other agents in $N_{h_{0,t}}$ report $\tilde\theta_{k+1}$. Fix some agent $i\in N_{h_{0,t}}$. Let $\chi_{i,t}^{+}(\tilde\theta_{1})$ be a type profile in $\prod_{j\in N}\Theta_{j,t}$ such that agent $i$ has type $\tilde\theta_{1}$ and all other agents in $N_{h_{0,t}}$ have type $\tilde\theta_{2}$. For each $k\in\{2,\dots,K-1\}$, let $\chi_{i,t}^{-}(\tilde\theta_{k})$ be a type profile in $\prod_{j\in N}\Theta_{j,t}$ such that some agent $j\in N_{h_{0,t}}\setminus\{i\}$ has type $\tilde\theta_{k-1}$ and all other agents in $N_{h_{0,t}}$ have type $\tilde\theta_{k}$; let $\chi_{i,t}^{+}(\tilde\theta_{k})$ be a type profile in $\prod_{j\in N}\Theta_{j,t}$ such that agent $i$ has type $\tilde\theta_{k}$ and all other agents in $N_{h_{0,t}}$ have type $\tilde\theta_{k+1}$. Let $\chi_{i,t}^{-}(\tilde\theta_{K})$ be a type profile in $\prod_{j\in N}\Theta_{j,t}$ such that some agent $j\in N_{h_{0,t}}\setminus\{i\}$ has type $\tilde\theta_{K-1}$ and all other agents in $N_{h_{0,t}}$ have type $\tilde\theta_{K}$.

Let $M_{i,t}^{+}(\tilde\theta_{1}) = p^{|N_{h_{0,t}}|-1}(\tilde\theta_{2})\,M_{i,t}\big[h_{t},\theta_{0,t},\chi_{i,t}^{+}(\tilde\theta_{1}),D\big]$.

For each $k\in\{2,\dots,K-1\}$, let

$$M_{i,t}^{-}(\tilde\theta_{k}) = p(\tilde\theta_{k-1})\,p^{|N_{h_{0,t}}|-2}(\tilde\theta_{k})\,M_{i,t}\big[h_{t},\theta_{0,t},\chi_{i,t}^{-}(\tilde\theta_{k}),D\big] \quad\text{and}$$

$$M_{i,t}^{+}(\tilde\theta_{k}) = p^{|N_{h_{0,t}}|-1}(\tilde\theta_{k+1})\,M_{i,t}\big[h_{t},\theta_{0,t},\chi_{i,t}^{+}(\tilde\theta_{k}),D\big].$$

Let $M_{i,t}^{-}(\tilde\theta_{K}) = p(\tilde\theta_{K-1})\,p^{|N_{h_{0,t}}|-2}(\tilde\theta_{K})\,M_{i,t}\big[h_{t},\theta_{0,t},\chi_{i,t}^{-}(\tilde\theta_{K}),D\big]$.

Fix some $k\in\{2,\dots,K-1\}$. If agent $i$ reports $\tilde\theta_{k}$, then $M_{t}[h_{t},\theta_{0,t},\hat\theta_{-0,t},D]\ne(0,\dots,0)$ only in the following cases:

  1. the reported type profile is $\chi_{i,t}^{-}(\tilde\theta_{k})$: some agent $j\in N_{h_{0,t}}\setminus\{i\}$ reports $\tilde\theta_{k-1}$ and all other agents in $N_{h_{0,t}}$ report $\tilde\theta_{k}$ [this case occurs with probability $p(\tilde\theta_{k-1})\,p^{|N_{h_{0,t}}|-2}(\tilde\theta_{k})$], or

  2. the reported type profile is $\chi_{i,t}^{+}(\tilde\theta_{k})$: each agent $j\in N_{h_{0,t}}\setminus\{i\}$ reports $\tilde\theta_{k+1}$ [this case occurs with probability $p^{|N_{h_{0,t}}|-1}(\tilde\theta_{k+1})$].

Thus, agent $i$'s expected transfer from reporting $\tilde\theta_{k}$ is $M_{i,t}^{-}(\tilde\theta_{k})+M_{i,t}^{+}(\tilde\theta_{k})$. Similar interpretations apply to $M_{i,t}^{+}(\tilde\theta_{1})$ and $M_{i,t}^{-}(\tilde\theta_{K})$.

Remark B.1

For each $k\in\{1,\dots,K-1\}$,

$$p(\tilde\theta_{k})\,M_{i,t}^{+}(\tilde\theta_{k})+(|N_{h_{0,t}}|-1)\,p(\tilde\theta_{k+1})\,M_{i,t}^{-}(\tilde\theta_{k+1}) = p(\tilde\theta_{k})\,p^{|N_{h_{0,t}}|-1}(\tilde\theta_{k+1})\sum_{j\in N}M_{j,t}\big[h_{t},\theta_{0,t},\chi_{i,t}^{+}(\tilde\theta_{k}),D\big] = 0.$$

Step 3. We construct $M_{i,t}^{-}$ and $M_{i,t}^{+}$ below.

For each $k\in\{1,\dots,K-1\}$, let $\rho_{k}\equiv p(\tilde\theta_{k+1})/p(\tilde\theta_{k})$.

Let $\tilde\rho_{1}\equiv(|N_{h_{0,t}}|-1)\rho_{1}$ and $\tilde\rho_{k}\equiv(|N_{h_{0,t}}|-1)\dfrac{\tilde\rho_{k-1}\,\rho_{k}}{1+\tilde\rho_{k-1}}$ for each $k\in\{2,\dots,K-1\}$.

Set $M_{i,t}^{-}(\tilde\theta_{K}) = -\sum_{k=1}^{K-1}\Big[\prod_{l=k}^{K-1}\dfrac{1}{1+\tilde\rho_{l}}\Big]E\Delta\tilde W_{i,t}(h_{t},\theta_{0,t},\tilde\theta_{k};D)$.

For each $k\in\{2,\dots,K-1\}$, set $M_{i,t}^{+}(\tilde\theta_{k}) = -(|N_{h_{0,t}}|-1)\rho_{k}\,M_{i,t}^{-}(\tilde\theta_{k+1})$ (see Remark B.1) and

$$M_{i,t}^{-}(\tilde\theta_{k}) = -\sum_{k'=1}^{k-1}\Big[\prod_{l=k'}^{k-1}\frac{1}{1+\tilde\rho_{l}}\Big]E\Delta\tilde W_{i,t}(h_{t},\theta_{0,t},\tilde\theta_{k'};D)-\frac{1}{1+\tilde\rho_{k-1}}\,M_{i,t}^{+}(\tilde\theta_{k}).$$

Set $M_{i,t}^{+}(\tilde\theta_{1}) = -(|N_{h_{0,t}}|-1)\rho_{1}\,M_{i,t}^{-}(\tilde\theta_{2})$ (see Remark B.1).

Given $M_{i,t}^{-}$ and $M_{i,t}^{+}$, computing $M_{t}$ is straightforward.
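To make the construction concrete, here is a self-contained numerical sketch of Step 3, assuming placeholder values for $K$, $n=|N_{h_{0,t}}|$, the type probabilities, and the expected welfare differences $E\Delta\tilde W_{i,t}$. It checks that the resulting transfers deliver local Bayesian incentive compatibility (used in Lemma B.1 below) and the expected-budget identity of Remark B.1.

```python
# Step 3 made concrete: a numerical sketch with placeholder inputs.
# p[k] are type probabilities, dW[k] stands for E[Delta W~](theta_k),
# n = |N_{h_{0,t}}|. Types are 0-indexed: index k <-> type theta_{k+1}.
K, n = 4, 3
p = [0.1, 0.2, 0.3, 0.4]
dW = [0.5, 0.8, 1.1]                       # placeholder E[Delta W~], k = 1..K-1

rho = [p[k + 1] / p[k] for k in range(K - 1)]       # rho_k
rho_t = [(n - 1) * rho[0]]                          # rho~_1
for k in range(1, K - 1):
    rho_t.append((n - 1) * rho_t[k - 1] * rho[k] / (1 + rho_t[k - 1]))

def S(k):
    # sum_{k'=1}^{k-1} [prod_{l=k'}^{k-1} 1/(1+rho~_l)] * dW_{k'}
    total = 0.0
    for kp in range(k):
        prod = 1.0
        for l in range(kp, k):
            prod /= 1 + rho_t[l]
        total += prod * dW[kp]
    return total

Mm = [0.0] * K                 # M^-(theta_k), with M^-(theta_1) = 0
Mp = [0.0] * K                 # M^+(theta_k), with M^+(theta_K) = 0
Mm[K - 1] = -S(K - 1)
for k in range(K - 2, 0, -1):
    Mp[k] = -(n - 1) * rho[k] * Mm[k + 1]           # Remark B.1
    Mm[k] = -S(k) - Mp[k] / (1 + rho_t[k - 1])
Mp[0] = -(n - 1) * rho[0] * Mm[1]                   # Remark B.1

for k in range(K - 1):
    # local IC: type theta_k indifferent between reporting theta_k, theta_{k+1}
    assert abs(dW[k] + Mm[k + 1] + Mp[k + 1] - Mm[k] - Mp[k]) < 1e-9
    # expected-budget identity of Remark B.1
    assert abs(p[k] * Mp[k] + (n - 1) * p[k + 1] * Mm[k + 1]) < 1e-9
```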

B.5 Linearity and Independence

In this appendix, we show that $(\sigma_{\tau}^{*},\mu_{\tau}^{*})_{\tau\ge t}$ induces linearity and independence. Let $D$ be an arbitrary direct mechanism that contains $(\sigma_{\tau}^{*},\mu_{\tau}^{*})_{\tau\ge t}$. Fix some history $h_{0,t}\equiv(h_{t},\theta_{0,t},\hat\theta_{-0,t})$, where $h_{t}\equiv(\theta_{0,\tau},\hat\theta_{-0,\tau},e_{\tau}^{r},\pi_{\tau},m_{\tau})_{\tau<t}$. Let $N_{h_{0,t}}$ be the set of agents that were recommended to work in period $t-1$. If $N_{h_{0,t}}=\{i\}$, then agent $i$ is the sole member from period $t$ onwards; thus $\mathbb{E}_{\theta_{-0,t}}\tilde W_{i,t}(h_{t},\theta_{t},D)=\theta_{0,t}\beta_{\{i\},t}$ and $\mathbb{E}_{\theta_{-0,t}}\tilde W_{j,t}(h_{t},\theta_{t},D)=0$ for each $j\ne i$. If $\pi_{\tau}<\pi(e_{\tau}^{r})$ for some $\tau<t$, then each agent $i\in N_{h_{0,t}}$ becomes the sole member from period $t$ onwards with probability $1/|N_{h_{0,t}}|$; thus, $\mathbb{E}_{\theta_{-0,t}}\tilde W_{i,t}(h_{t},\theta_{t},D)=\theta_{0,t}\beta_{\{i\},t}/|N_{h_{0,t}}|$ for each $i\in N_{h_{0,t}}$ and $\mathbb{E}_{\theta_{-0,t}}\tilde W_{i,t}(h_{t},\theta_{t},D)=0$ for each $i\notin N_{h_{0,t}}$. In the following, we assume $|N_{h_{0,t}}|>1$, and either $t=1$ or $\pi_{\tau}\ge\pi(e_{\tau}^{r})$ for each $\tau<t$. For each $i\notin N_{h_{0,t}}$, we have $\mathbb{E}_{\theta_{-0,t}}\tilde W_{i,t}(h_{t},\theta_{t},D)=0$. Fix some $i\in N_{h_{0,t}}$. We will show that $\mathbb{E}_{\theta_{-0,t}}\tilde W_{i,t}(h_{t},\theta_{t},D)=\tilde\xi_{i,t}(D)\,\theta_{0,t}$ for some $\tilde\xi_{i,t}(D)\in\mathbb{R}_{+}$ (see Lemma B.3).

Agent $i$'s maximal expected lifetime payoff from reporting some $\theta'_{i,t}\in\Theta_{i,t}$ (taking expectation over other agents' current types) is

$$\tilde Z_{i,t}(h_{t},\theta_{0,t},\theta_{i,t};\theta'_{i,t};D) \equiv \mathbb{E}_{(\theta_{j,t})_{j\ne i}}\tilde W_{i,t}\big(h_{t},\theta_{0,t},\theta_{i,t},(\theta_{j,t})_{j\ne i};\theta'_{i,t};D\big)+M_{i,t}^{-}(\theta'_{i,t})+M_{i,t}^{+}(\theta'_{i,t}),$$

where $M_{i,t}^{-}(\tilde\theta_{1})=M_{i,t}^{+}(\tilde\theta_{K})\equiv 0$.

Lemma B.1

$\tilde Z_{i,t}(h_{t},\theta_{0,t},\theta_{i,t};\theta_{i,t};D) \ge \tilde Z_{i,t}(h_{t},\theta_{0,t},\theta_{i,t};\theta'_{i,t};D)$ for each $\theta'_{i,t}\in\Theta_{i,t}$.

Proof

The transfer rule $M_{t}$ ensures local Bayesian incentive compatibility: for each $k\in\{1,\dots,K-1\}$, an agent who has type $\tilde\theta_{k}$ is indifferent between reporting $\tilde\theta_{k}$ and $\tilde\theta_{k+1}$; formally, $\tilde Z_{i,t}(h_{t},\theta_{0,t},\tilde\theta_{k};\tilde\theta_{k+1};D)=\tilde Z_{i,t}(h_{t},\theta_{0,t},\tilde\theta_{k};\tilde\theta_{k};D)$. For each $\theta_{-i,t}$, the payoff function $\tilde W_{i,t}(h_{t},\cdot,\theta_{-i,t};\cdot;D)$ satisfies increasing differences; thus, standard arguments apply to show that local Bayesian incentive compatibility implies global Bayesian incentive compatibility. □

Lemma B.2

$\tilde Z_{i,t}(h_{t},\theta_{0,t},\theta_{i,t};\theta_{i,t};D) \ge 0$.

Proof

Step 1. We show that there exists $k^{*}\in\{1,\dots,K\}$ such that $M_{i,t}^{-}(\tilde\theta_{k^{*}})+M_{i,t}^{+}(\tilde\theta_{k^{*}})\ge 0$.

The proof is by contradiction. Suppose that $M_{i,t}^{-}(\tilde\theta_{k})+M_{i,t}^{+}(\tilde\theta_{k})<0$ for each $k\in\{1,\dots,K\}$. Then $M_{i,t}^{+}(\tilde\theta_{1})<0$. Fix $k\in\{1,\dots,K-2\}$. If $M_{i,t}^{+}(\tilde\theta_{k})<0$, then $M_{i,t}^{-}(\tilde\theta_{k+1})>0$,[22] which implies $M_{i,t}^{+}(\tilde\theta_{k+1})<0$.[23] Hence, $M_{i,t}^{+}(\tilde\theta_{K-1})<0$. Since $M_{i,t}^{+}(\tilde\theta_{K-1})+(n-1)\rho_{K-1}M_{i,t}^{-}(\tilde\theta_{K})=0$, we have $M_{i,t}^{-}(\tilde\theta_{K})>0$, which is a contradiction.

Step 2. We show that $\tilde Z_{i,t}(h_{t},\theta_{0,t},\theta_{i,t};\tilde\theta_{k^{*}};D)\ge 0$.

Since $M_{i,t}^{-}(\tilde\theta_{k^{*}})+M_{i,t}^{+}(\tilde\theta_{k^{*}})\ge 0$, it suffices to show that $\tilde W_{i,t}\big(h_{t},\theta_{0,t},\theta_{i,t},(\theta_{j,t})_{j\ne i};\tilde\theta_{k^{*}};D\big)\ge 0$ for each $(\theta_{j,t})_{j\ne i}$. Let $h_{0,t}\equiv(h_{t},\theta_{0,t},\tilde\theta_{k^{*}},(\theta_{j,t})_{j\ne i})$. We have

$$\tilde W_{i,t}\big(h_{t},\theta_{0,t},\theta_{i,t},(\theta_{j,t})_{j\ne i};\tilde\theta_{k^{*}};D\big) \ge \Big(1-\epsilon^{*}+\frac{\epsilon^{*}}{|N_{h_{0,t}}|}\Big)V(e_{i,t}^{*,r},1,\theta_{i,t}),$$

where the right-hand side is agent $i$'s expected lifetime payoff from exerting $e_{i,t}^{*,r}$ conditional on receiving recommended effort $e_{i,t}^{*,r}=\theta_{0,t}\tilde\theta_{k^{*}}(1+\delta\gamma\beta_{N_{h_{0,t}},t+1})$ and exerting no effort conditional on being recommended not to work. Due to symmetry and $\epsilon$-efficiency, $|N_{h_{0,t}}|\,\xi_{i,t+1}(h_{t},e_{t}^{*,r},D,1)$ is arbitrarily close to $\beta_{N_{h_{0,t}},t+1}$. In addition, $q_{h_{0,t}}$ is arbitrarily close to 1. Hence, we can find sufficiently small $\epsilon$ such that

$$V(e_{i,t}^{*,r},1,\theta_{i,t}) > s_{i,t}(h_{0,t})\,\pi(e_{t}^{*,r})+\frac{\delta\beta_{N_{h_{0,t}},t+1}}{|N_{h_{0,t}}|}\big(\theta_{0,t}+\gamma\,\pi(e_{t}^{*,r})\big)-c_{i,t}(e_{i,t}^{*,r},\theta_{0,t},\theta_{i,t})-\epsilon. \tag{B.1}$$

Substituting $s_{i,t}(h_{0,t}) = \dfrac{e_{i,t}^{*,r}+\delta\gamma\beta_{N_{h_{0,t}},t+1}\big(e_{i,t}^{*,r}-\pi(e_{t}^{*,r})/|N_{h_{0,t}}|\big)}{\sum_{j\in N_{h_{0,t}}}\Big[e_{j,t}^{*,r}+\delta\gamma\beta_{N_{h_{0,t}},t+1}\big(e_{j,t}^{*,r}-\pi(e_{t}^{*,r})/|N_{h_{0,t}}|\big)\Big]}$ into (B.1) gives

$$V(e_{i,t}^{*,r},1,\theta_{i,t}) > \frac{1}{2}\theta_{0,t}\tilde\theta_{k^{*}}\big(1+\delta\gamma\beta_{N_{h_{0,t}},t+1}\big)^{2}+\frac{\delta\beta_{N_{h_{0,t}},t+1}}{|N_{h_{0,t}}|}\theta_{0,t}-\epsilon \;\ge\; 0.$$

This implies $\tilde W_{i,t}\big(h_{t},\theta_{0,t},\theta_{i,t},(\theta_{j,t})_{j\ne i};\tilde\theta_{k^{*}};D\big)\ge 0$.
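The substitution step from (B.1) to the last display can be verified numerically; in the sketch below, all numbers ($\delta$, $\gamma$, $\beta_{N_{h_{0,t}},t+1}$, $\theta_{0,t}$, the reports, and $\epsilon$) are arbitrary placeholders.

```python
# Check that substituting s_{i,t} into (B.1) yields the last display.
# All numbers below are arbitrary placeholders.
delta, gamma, beta, theta0, eps = 0.9, 0.1, 2.0, 1.5, 0.01
th = [1.0, 2.0, 3.0]               # reports; th[0] plays the role of theta_{k*}
n, d = len(th), 0.9 * 0.1 * 2.0    # d = delta*gamma*beta
e = [theta0 * t * (1 + d) for t in th]     # recommended efforts e_{j,t}^{*,r}
pi = sum(e)                                # pi(e_t^{*,r})
# the correction terms in the denominator of s_i sum to zero, so it equals pi
s0 = (e[0] + d * (e[0] - pi / n)) / pi
lhs = s0 * pi + delta * beta / n * (theta0 + gamma * pi) \
      - e[0] ** 2 / (2 * theta0 * th[0]) - eps
rhs = 0.5 * theta0 * th[0] * (1 + d) ** 2 + delta * beta * theta0 / n - eps
assert abs(lhs - rhs) < 1e-9
```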

Step 3. By Lemma B.1, $\tilde Z_{i,t}(h_{t},\theta_{0,t},\theta_{i,t};\theta_{i,t};D) \ge \tilde Z_{i,t}(h_{t},\theta_{0,t},\theta_{i,t};\tilde\theta_{k^{*}};D)\ge 0$. □

Lemma B.3

$\mathbb{E}_{\theta_{-0,t}}W_{i,t}(h_{t},\theta_{t},D)=\theta_{0,t}\,\tilde\xi_{i,t}(D)$, where $\tilde\xi_{i,t}(D)\in\mathbb{R}_{+}$.

Proof

The proof is by induction. Assume that for each $h_{t+1}\equiv(\theta_{0,\tau},\hat\theta_{-0,\tau},e_{\tau}^{r},\pi_{\tau},m_{\tau})_{\tau<t+1}$ such that $\pi_{\tau}\ge\pi(e_{\tau}^{r})$ for each $\tau<t+1$ and $e_{j,t}^{r}>0$ for each $j\in N_{h_{0,t}}$, we have $\mathbb{E}_{\theta_{-0,t+1}}\tilde W_{i,t+1}(h_{t+1},\theta_{t+1},D)=\tilde\xi_{i,t+1}(D)\,\theta_{0,t+1}$ for some $\tilde\xi_{i,t+1}(D)\in\mathbb{R}_{+}$.

Step 1. Let $h_{0,t}\equiv(h_{t},\theta_{0,t},\hat\theta_{i,t},\theta_{-i,t})$. Let $e_{i,t}^{*}$ be agent $i$'s optimal effort conditional on receiving recommended effort $e_{i,t}^{*,r}$.

If $e_{i,t}^{*}=e_{i,t}^{*,r}$, then

$$\tilde W_{i,t}(h_{t},\theta_{t};\hat\theta_{i,t};D) = -\frac{\theta_{0,t}\hat\theta_{i,t}^{2}}{2\theta_{i,t}}\big(1+\delta\gamma\beta_{N_{h_{0,t}},t+1}\big)^{2} + q_{h_{0,t}}\bigg[\big(s_{i,t}(h_{0,t})+\delta\gamma\,\tilde\xi_{i,t+1}(D)\big)\,\theta_{0,t}\big(1+\delta\gamma\beta_{N_{h_{0,t}},t+1}\big)\Big(\hat\theta_{i,t}+\sum_{j\in N_{h_{0,t}}\setminus\{i\}}\theta_{j,t}\Big)+\theta_{0,t}\,\delta\,\tilde\xi_{i,t+1}(D)\bigg] + (1-q_{h_{0,t}})\Big[\big(1+\delta\gamma\beta_{\{i\},t+1}\big)\,\theta_{0,t}\hat\theta_{i,t}\big(1+\delta\gamma\beta_{N_{h_{0,t}},t+1}\big)+\theta_{0,t}\,\delta\,\beta_{\{i\},t+1}\Big].$$

If $e_{i,t}^{*}=\theta_{0,t}\theta_{i,t}MG'$, where

$$MG' \equiv q_{h_{0,t}}\Big[s_{i,t}(h_{0,t})+\delta\gamma\,\tilde\xi_{i,t+1}(D)\Big]+(1-q_{h_{0,t}})\Big[1+\delta\gamma\beta_{\{i\},t+1}\Big],$$

then

$$\tilde W_{i,t}(h_{t},\theta_{t};\hat\theta_{i,t};D) = -\frac{1}{2}\theta_{0,t}\theta_{i,t}(MG')^{2} + q_{h_{0,t}}\bigg[\big(s_{i,t}(h_{0,t})+\delta\gamma\,\tilde\xi_{i,t+1}(D)\big)\Big(\theta_{0,t}\theta_{i,t}MG'+\theta_{0,t}\big(1+\delta\gamma\beta_{N_{h_{0,t}},t+1}\big)\sum_{j\in N_{h_{0,t}}\setminus\{i\}}\theta_{j,t}\Big)+\theta_{0,t}\,\delta\,\tilde\xi_{i,t+1}(D)\bigg] + (1-q_{h_{0,t}})\Big[\big(1+\delta\gamma\beta_{\{i\},t+1}\big)\theta_{0,t}\theta_{i,t}MG'+\theta_{0,t}\,\delta\,\beta_{\{i\},t+1}\Big].$$

By construction, $s_{i,t}(h_{0,t})$ depends only on $(\hat\theta_{i,t},\theta_{-i,t})$. Hence, we can write

$$\tilde W_{i,t}(h_{t},\theta_{t};\hat\theta_{i,t};D) = \theta_{0,t}\,\zeta_{i,t}(\theta_{-0,t};\hat\theta_{i,t};D), \quad\text{where } \zeta_{i,t}(\theta_{-0,t};\hat\theta_{i,t};D)<\infty.$$

Step 2. For each $k\in\{1,\dots,K-1\}$, define

$$\tilde\zeta_{i,t}(\tilde\theta_{k},D) \equiv \mathbb{E}_{(\theta_{j,t})_{j\ne i}}\Big[\zeta_{i,t}\big(\tilde\theta_{k},(\theta_{j,t})_{j\ne i};\tilde\theta_{k+1};D\big)-\zeta_{i,t}\big(\tilde\theta_{k},(\theta_{j,t})_{j\ne i};\tilde\theta_{k};D\big)\Big].$$

Then, $E\Delta\tilde W_{i,t}(h_{t},\theta_{0,t},\tilde\theta_{k};D) = \theta_{0,t}\,\tilde\zeta_{i,t}(\tilde\theta_{k},D)$. Thus, for each $k\in\{1,\dots,K\}$, we can write

$$M_{i,t}^{+}(\tilde\theta_{k}) \equiv \theta_{0,t}\,\mu_{i,t}^{+}(\tilde\theta_{k},D) \quad\text{and}\quad M_{i,t}^{-}(\tilde\theta_{k}) \equiv \theta_{0,t}\,\mu_{i,t}^{-}(\tilde\theta_{k},D)$$

for some $\mu_{i,t}^{-}(\tilde\theta_{k},D)<\infty$ and $\mu_{i,t}^{+}(\tilde\theta_{k},D)<\infty$.

Step 3. We have

$$\mathbb{E}_{(\theta_{j,t})_{j\ne i}}W_{i,t}(h_{t},\theta_{t},D) = \mathbb{E}_{(\theta_{j,t})_{j\ne i}}\tilde W_{i,t}(h_{t},\theta_{t};\theta_{i,t};D)+M_{i,t}^{-}(\theta_{i,t})+M_{i,t}^{+}(\theta_{i,t}) = \mathbb{E}_{(\theta_{j,t})_{j\ne i}}\Big[\theta_{0,t}\,\zeta_{i,t}\big(\theta_{i,t},(\theta_{j,t})_{j\ne i};\theta_{i,t};D\big)\Big]+\theta_{0,t}\,\mu_{i,t}^{-}(\theta_{i,t},D)+\theta_{0,t}\,\mu_{i,t}^{+}(\theta_{i,t},D) = \theta_{0,t}\,\hat\xi_{i,t}(\theta_{i,t},D) \quad\text{for some } \hat\xi_{i,t}(\theta_{i,t},D)<\infty.$$

By Lemma B.2, we have $\tilde Z_{i,t}(h_{t},\theta_{0,t},\theta_{i,t};\theta_{i,t};D) = \mathbb{E}_{(\theta_{j,t})_{j\ne i}}W_{i,t}(h_{t},\theta_{t},D)\ge 0$, which implies $\hat\xi_{i,t}(\theta_{i,t},D)\ge 0$. It follows that $\mathbb{E}_{\theta_{-0,t}}W_{i,t}(h_{t},\theta_{t},D)=\theta_{0,t}\,\tilde\xi_{i,t}(D)$ for some $\tilde\xi_{i,t}(D)\in\mathbb{R}_{+}$. □

B.6 Proof for Theorem 6.1

By construction, at each history $h_{0,t}\equiv(h_{t},\theta_{t})\in H_{0,t}^{*}$, the mechanism designer assigns probability $1-\epsilon^{*}$ to the first-best joint effort $\tilde\sigma_{t}^{F}(\theta_{t},N)$. With $\epsilon^{*}<\epsilon$, the direct mechanism $D^{*}$ satisfies $\epsilon$-efficiency. Condition (a) of periodic Bayesian incentive compatibility holds due to Lemma 6.1 and the fact that the introduction of $M_{t}$ does not affect optimal efforts. Condition (b) of periodic Bayesian incentive compatibility holds due to Lemma B.1 and the fact that the mechanism designer's decision is independent of reports at any history $h_{0,t}$ such that either $|N_{h_{0,t}}|=1$ or $\pi_{\tau}<\pi(e_{\tau}^{r})$ for some $\tau<t$. Finally, interim individual rationality holds due to Lemma B.2 and the fact that each agent either gets excluded permanently or becomes a permanent sole member at any history $h_{0,t}$ such that either $|N_{h_{0,t}}|=1$ or $\pi_{\tau}<\pi(e_{\tau}^{r})$ for some $\tau<t$ (it is easy to show that a sole-member partnership always earns a non-negative expected lifetime payoff). □

Appendix C: Proof for Theorem 7.1

Let $G_{V^{*}}$ be the partnership game associated with the voting mechanism $V^{*}$. For each agent $i$ and each period $t$, a stage-1 history is some $h_{i,t}^{1}\equiv\big((\theta_{i,\tau},e_{i,\tau})_{\tau<t},h_{t},\theta_{0,t},\theta_{i,t}\big)$ at which she votes; a stage-2 history is some $h_{i,t}^{2}\equiv(h_{i,t}^{1},v_{t},e_{i,t}^{r})$ at which she supplies effort. We construct a perfect Bayesian equilibrium of the partnership game $G_{V^{*}}$ below.

Beliefs. Fix agent $i\in N$ and agent $j\ne i$. At any stage-1 history $h_{i,t}^{1}$, if agent $i$ observes that agent $j$ has selected item $k_{\tau}$ in each period $\tau<t$, then agent $i$ assigns probability one to the event that agent $j$'s past types are $(\tilde\theta_{k_{\tau}})_{\tau<t}$; formally, $\beta_{ij}^{*}\big((\tilde\theta_{k_{\tau}})_{\tau<t}\mid h_{i,t}^{1}\big)=1$. Likewise, at any stage-2 history $h_{i,t}^{2}$, if agent $i$ observes that agent $j$ has selected item $k_{\tau}$ in each period $\tau\le t$, then agent $i$ assigns probability one to the event that agent $j$'s past types are $(\tilde\theta_{k_{\tau}})_{\tau\le t}$; formally, $\beta_{ij}^{*}\big((\tilde\theta_{k_{\tau}})_{\tau\le t}\mid h_{i,t}^{2}\big)=1$.

Strategies. Fix agent $i\in N$. A strategy $\lambda_{i}^{*}$ assigns a voting strategy to each stage-1 history and an effort to each stage-2 history: $\lambda_{i}^{*}(h_{i,t}^{1})\in V_{i,t}^{*}(h_{t},\theta_{0,t})$ and $\lambda_{i}^{*}(h_{i,t}^{2})\in E_{i,t}$. For each $h_{i,t}^{1}\equiv\big((\theta_{i,\tau},e_{i,\tau})_{\tau<t},h_{t},\theta_{0,t},\tilde\theta_{k}\big)$, let $\lambda_{i}^{*}(h_{i,t}^{1})$ be a voting strategy that selects item $k$. For each $h_{i,t}^{2}\equiv(h_{i,t}^{1},v_{t},e_{i,t}^{r})$ such that each agent $j\in N$ has selected item $k_{j}$ in period $t$, let $\lambda_{i}^{*}(h_{i,t}^{2})\in\arg\max_{e_{i,t}}\hat W_{i,t}(h_{t},\theta_{t},\tilde\theta_{k_{i}},e_{i,t}^{r},e_{i,t},D^{*})$, where $\theta_{j,t}=\tilde\theta_{k_{j}}$ for each $j\ne i$.

Fix period $t$ with history $h_{t}$, experience $\theta_{0,t}$, and true types $(\tilde\theta_{k_{1}},\dots,\tilde\theta_{k_{n}})$. By construction of $\lambda_{i}^{*}(h_{i,t}^{1})$, each agent $i$ selects item $k_{i}$ on her menu. Since $\alpha_{t}^{*}(h_{t},\theta_{0,t},\tilde\theta_{k_{1}},\dots,\tilde\theta_{k_{n}})$ is contained in item $k_{i}$ for each agent $i$, it gets $n$ votes. If an arrangement $\alpha_{t}^{*}(h_{t},\theta_{0,t},\theta_{-0,t})$ has $\theta_{i,t}\ne\tilde\theta_{k_{i}}$ for some agent $i$, then it is not contained in item $k_{i}$ for agent $i$ and gets at most $n-1$ votes. As a result, $\alpha_{t}^{*}(h_{t},\theta_{0,t},\tilde\theta_{k_{1}},\dots,\tilde\theta_{k_{n}})$ is the winning arrangement. By construction of $\lambda_{i}^{*}(h_{i,t}^{2})$, it is clear that after each agent $i$ selects item $k_{i}$ on her menu, agents supply recommended efforts. In the following, we show that $(\beta_{ij}^{*},\lambda_{i}^{*})_{i\in N}$ is a perfect Bayesian equilibrium.
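A toy illustration of this vote count, assuming (as in the construction above) that agent $i$'s selected item contains exactly the arrangements whose $i$-th coordinate matches her pick; the sizes and picks are placeholders.

```python
# Toy vote count: with truthful menu picks, the arrangement indexed by the
# true type profile is the unique one receiving all n votes.
from itertools import product

n, K = 3, 3
picks = (2, 0, 1)          # item k_i selected by each agent (true types)
votes = {a: sum(a[i] == picks[i] for i in range(n))
         for a in product(range(K), repeat=n)}
winner = max(votes, key=votes.get)
assert winner == picks and votes[winner] == n
assert all(v <= n - 1 for a, v in votes.items() if a != picks)
```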

Bayes' Rule. Without loss of generality, fix a stage-1 history $h_{i,t}^{1}$ at which agent $i$ observes that agent $j$ has selected item $k_{\tau}$ in each period $\tau<t$. By construction of $\beta_{ij}^{*}$, at $h_{i,t}^{1}$, agent $i$ assigns probability one to the event that agent $j$'s past types are $(\tilde\theta_{k_{\tau}})_{\tau<t}$. By construction of $\lambda_{j}^{*}$, agent $j$ has selected item $k_{\tau}$ in each period $\tau<t$ if and only if agent $j$'s past types are $(\tilde\theta_{k_{\tau}})_{\tau<t}$. Thus, $\beta_{ij}^{*}$ satisfies Bayes' rule given $\lambda_{j}^{*}$.

Sequential Rationality. Fix some period $t$ with history $h_{t}$ and experience $\theta_{0,t}$, and some agent $i$ with type $\theta_{i,t}=\tilde\theta_{k_{i}^{*}}$. Assume the following: (a) from period $t+1$ onwards, each agent $j\in N$ with type $\tilde\theta_{k_{j}}$ selects item $k_{j}$ and supplies recommended effort; (b) in period $t$, each agent $j\ne i$ with type $\tilde\theta_{k_{j}}$ selects item $k_{j}$ and exerts recommended effort.

Suppose agent $i$ selects item $k_{i}$ in period $t$. Then $\alpha_{t}^{*}(h_{t},\theta_{0,t},\tilde\theta_{k_{1}},\dots,\tilde\theta_{k_{n}})$ is the winning arrangement in period $t$. It is easy to see that if agent $i$ receives recommended effort $e_{i,t}^{r}$ and exerts effort $e_{i,t}$, then her expected payoff is $\hat W_{i,t}(h_{t},\theta_{t},\tilde\theta_{k_{i}},e_{i,t}^{r},e_{i,t},D^{*})$, where $\theta_{j,t}=\tilde\theta_{k_{j}}$ for each $j\ne i$. Thus, exerting effort $\lambda_{i}^{*}(h_{i,t}^{2})$ is sequentially rational.

It is easy to see that if agent $i$ selects item $k_{i}$ in period $t$, then her expected payoff is $\mathbb{E}_{(\theta_{j,t})_{j\ne i}}\tilde W_{i,t}(h_{t},\theta_{t};\tilde\theta_{k_{i}};D^{*})$. Due to periodic Bayesian incentive compatibility of $D^{*}$, selecting item $k_{i}^{*}$ is sequentially rational. □


Received: 2020-05-23
Accepted: 2021-11-12
Published Online: 2021-11-29

© 2021 Walter de Gruyter GmbH, Berlin/Boston
