Abstract

We study the strategic interaction between a decision maker who needs to take a binary decision but is uncertain about relevant facts, and an informed expert who can send a message to the decision maker but has a preference over the decision. We show that the probability that the expert can persuade the decision maker to take the expert's preferred decision is a hump-shaped function of his costs of sending dishonest messages.

JEL classification: C72, D72, D82
Keywords: Persuasion, Costly Signalling

We would like to thank an anonymous referee whose comments have helped us improve the paper. Financial support by the Faculty of Business and Economics at the University of Melbourne via a Visiting Scholar Grant is also gratefully acknowledged. This paper supersedes an earlier version titled "Biased Experts, Costly Lies, and Binary Decisions."
† Department of Economics, University of St. Gallen. Email: [email protected]
‡ Department of Economics, University of Melbourne. Email: [email protected]
§ Department of Economics, Faculty of Business and Economics (HEC Lausanne), University of Lausanne. Email: [email protected]

1 Introduction

Many decision problems are binary in nature and characterized by the uncertainty that the decision maker faces about crucial decision-relevant facts. Examples include a policy maker's decision whether or not to realize a given infrastructure project, a Board's decision whether or not to replace a company's CEO, the voters' decisions whether or not to reelect an incumbent government, or a judge's decision whether or not to convict a defendant. To reduce uncertainty, decision makers often consult experts who are better informed about the underlying facts. However, experts may themselves have a preference over the decision, and this preference may not be well aligned with the decision maker's preference. Examples include industry experts who are interested in benefiting from public investment, CEOs and incumbent governments who want to remain in power and who know more about their performance and competence than Boards and voters, respectively, and (expert) witnesses in trials who have private information about the level of fault of the defendant, but may be biased towards a specific outcome of the trial.

The question of how likely an expert is to persuade the decision maker is of obvious interest and relevance for public policy and the economics of organization. In this letter, we focus on one key aspect that may affect persuasion: the expert's costs of dishonesty, which can represent mental or moral costs of lying, reputational concerns, or expected punishment when misreporting is unlawful. Our main result is that experts will not be able to persuade a critical decision maker if costs of dishonesty are very low, in which case the decision maker rarely follows the expert's advice, or if costs of dishonesty are very high, in which case the expert rarely deviates from telling the truth. However, the expert frequently succeeds in persuading the decision maker to take the expert's preferred decision when the costs of dishonesty are intermediate.
Persuasion started to become an influential concept in economics with McCloskey and Klamer (1995). Recent theoretical contributions include Mullainathan et al. (2008), who focus on how a sender can persuade receivers who are coarse thinkers (rather than Bayesian), and Kamenica and Gentzkow (2011) and Kolotilin (2013), who study Bayesian

persuasion in a setting in which the sender can choose the signal, but cannot misreport its realization. In contrast, we focus on Bayesian persuasion through misreporting the truth.

The canonical model to study strategic interactions between a sender (or expert) and a receiver (or decision maker) is Crawford and Sobel (1982). We depart from their framework in two important ways. First, we assume that it is costly for the sender to misreport the state of the world. Second, the choice variable of the receiver is binary rather than continuous. This second difference implies that there is a conflict of interest between the sender and the receiver for some, but not all, states. While in Crawford and Sobel's model the sender and the receiver perpetually disagree about the optimal policy (even under complete information), their disagreement is only partial in our setup.

Banks (1990) introduced lying costs into the literature on strategic information transmission. Subsequent contributions include Callander and Wilkie (2007), Kartik et al. (2007) and Kartik (2009). Kartik et al. (2007) and Kartik (2009) add lying costs to the framework of Crawford and Sobel, but maintain the assumptions that the receiver's choice variable is continuous and that the sender's preferred action increases in the state of the world. Our analysis thus complements theirs with the key modification that the receiver's choice is binary, which makes our model suitable to study the real-world problems discussed above.

Our framework is relevant for various applications. First, it complements contributions showing how CEOs influence continuous outcome variables, such as the market price of the firm (Fischer and Verrecchia, 2000), their compensation (Goldman and Slezak, 2006), or the range of possible projects (Adams and Ferreira, 2007). Second, when applied to incumbent government behavior, our model is related to the aforementioned works by Banks (1990) and Callander and Wilkie (2007).
While they analyze how two symmetric candidates tell costly lies about future policies, our model applies to asymmetric elections in which an incumbent with an informational advantage about the state of the world runs for reelection. Hence, our model is also related to Rogoff and Sibert (1988) and Hodler et al. (2010), where an incumbent with private information about his competence or the state of the world may choose socially inefficient policies to improve his reelection prospects; and to Edmond (2013), where a dictator manipulates information to reduce the risk of an uprising.

This letter is organized as follows: Section 2 describes the setup, Section 3 provides the results, and Section 4 concludes.

2 The Setup

There are two strategic players: sender (or expert) S and receiver (or decision maker) R. The state of the world σ is a random draw from the distribution F(σ) with density f(σ) > 0 and support [0, 1], which is common knowledge. Timing and actions are as follows: First, S observes σ and sends message µ ∈ [0, 1].¹ Second, R observes µ (but not σ) and then has the binary choice of realizing or rejecting the project. We denote the probability with which she realizes the project by v. A strategy for R is thus a function v : [0, 1] → [0, 1], with v(µ) ∈ [0, 1] denoting the probability of accepting, given message µ. The joint assumption of binary choice and continuous states is a sensible approximation to many real-world problems, including shareholders' and voters' decisions (not) to re-elect an incumbent and a policy maker's decision to accept or reject an infrastructure project with uncertain returns.

Payoffs are as follows: S receives a benefit of 1 if and only if the project is realized, but has to bear costs whenever his message is not truthful. These costs of dishonesty are kc(d), where k ≥ 0, d ≡ |µ − σ|, c(0) = 0, c′(d) > 0 and c′′(d) ≥ 0. R's net utility from realizing the project is u_R(σ), which satisfies u′_R(σ) > 0 and u_R(0) < 0 < u_R(1). Let σ̂ be the unique number such that u_R(σ̂) = 0. To make the problem interesting, assume ∫_0^1 u_R(σ)f(σ)dσ ≤ 0. This ensures that R's expected net utility from realizing the project would be negative in absence of any information about σ other than F(σ).

The solution concept is perfect Bayesian equilibrium (PBE), and we focus on PBE that satisfy the restrictions on off-equilibrium beliefs proposed by Grossman and Perry's (1986) concept of Perfect Sequential Equilibria (PSE). Intuitively, the PSE concept agrees

¹ Assuming that S observes σ eases the exposition. All our results go through if S only observes a noisy signal of σ, provided that σ and the signal are affiliated random variables.


with the Intuitive Criterion that after receiving an off-equilibrium message µ̂, R should put zero probability on states at which S could not possibly benefit from a deviation, but adds the requirement that R should put probability that is proportional to the prior over σ on the possibility that S has deviated at any state σ at which the deviation µ̂ could potentially be profitable for him. To be precise, denote by µ(σ) and v(µ(σ)) the actions a PBE prescribes S and R to take after observing σ and µ, respectively. Fix an equilibrium, let u_S(v, µ|σ) be S's expected payoff given σ when he plays µ and R plays v, and let π(σ|µ) be R's posterior belief that the state is σ when the message is µ. Consequently, u_S(v(µ(σ)), µ(σ)|σ) is S's expected equilibrium payoff, given σ. Then:

Definition 1 A PSE is a perfect Bayesian equilibrium in which after observing some µ̂ that S does not play in equilibrium, R's beliefs satisfy (i) π(σ|µ̂) = 0 if u_S(1, µ̂|σ) < u_S(v(µ(σ)), µ(σ)|σ), and (ii) π(σ₁|µ̂)/π(σ₀|µ̂) = f(σ₁)/f(σ₀) if u_S(1, µ̂|σ_i) ≥ u_S(v(µ(σ_i)), µ(σ_i)|σ_i) for i = 0, 1.

In what follows we focus on monotone PSE, i.e., PSE in which µ(σ₀) ≥ µ(σ₁) and v(µ₀) ≥ v(µ₁) for σ₀ ≥ σ₁ and µ₀ ≥ µ₁, respectively.
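As a concrete illustration of these primitives, the sketch below instantiates the model with a uniform prior, linear lying costs, and u_R(σ) = σ − 0.6; all three functional forms are illustrative assumptions for exposition, not part of the model.

```python
# One concrete instantiation of the setup (illustrative assumptions, not from
# the paper): sigma ~ Uniform[0, 1], lying cost c(d) = d with weight k, and
# R's net utility u_R(s) = s - 0.6, so that sigma_hat = 0.6.

def u_R(s):
    return s - 0.6  # strictly increasing, u_R(0) < 0 < u_R(1)

def dishonesty_cost(k, mu, sigma):
    # k * c(d) with d = |mu - sigma| and linear c
    return k * abs(mu - sigma)

def sender_payoff(k, mu, sigma, realized):
    # S gets 1 iff the project is realized, minus his costs of dishonesty
    return (1.0 if realized else 0.0) - dishonesty_cost(k, mu, sigma)

def expected_u_R(n=10_000):
    # E[u_R(sigma)] under the uniform prior (midpoint rule)
    h = 1.0 / n
    return sum(u_R((i + 0.5) * h) for i in range(n)) * h

# the "interesting case" assumption holds: R rejects absent information
print(f"E[u_R] = {expected_u_R():.3f}")
```

Here E[u_R] = −0.1 ≤ 0, so under the prior alone R would reject the project, as the assumption on ∫_0^1 u_R(σ)f(σ)dσ requires.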

3 Results

3.1 Equilibrium

Let σ′ be the unique number such that ∫_{σ′}^1 u_R(σ)f(σ)dσ = 0. That is, if R's posterior satisfies π(σ|µ) ∝ f(σ) for all σ ∈ [σ′, 1] and π(σ|µ) = 0 otherwise, she is indifferent between accepting and rejecting the project. This is the case if some message µ is sent with equal probability in all states σ ∈ [σ′, 1] and with zero probability otherwise. Notice that σ′ < σ̂.

The equilibrium behavior depends on whether S would be willing to play µ(σ′) = 1 if v(1) = 1 but v(µ) = 0 for any µ < 1, i.e., on whether S would misreport state σ′ by d = 1 − σ′ if doing so helps to get the project realized. Hence, the equilibrium behavior


depends on whether or not the cost parameter k exceeds 1/c(1 − σ′). Let d̄ ≡ c⁻¹(1/k) > 0 and denote by σ′′ the unique number such that ∫_{σ′′−d̄}^{σ′′} u_R(σ)f(σ)dσ = 0. The following proposition describes the unique monotone PSE:

Proposition 1 There exists a unique monotone PSE. Given k ≤ 1/c(1 − σ′), S plays µ(σ) = σ if σ < σ′, and µ(σ) = 1 if σ ≥ σ′; and R plays v(µ) = 0 for all µ < 1, and v(1) = kc(1 − σ′) < 1. Given k > 1/c(1 − σ′), S plays µ(σ) = σ if σ < σ′′ − d̄ or σ ≥ σ′′, and µ(σ) = σ′′ if σ ∈ [σ′′ − d̄, σ′′); and R plays v(µ) = 0 for all µ < σ′′ and v(µ) = 1 for all µ ≥ σ′′.

The proof of Proposition 1 is provided in the Supplementary Material. Figure 1 provides the intuition for Proposition 1.
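The cutoffs in Proposition 1 can be computed by bisection on their defining integral conditions. The primitives below (uniform F, linear cost c(d) = d so that d̄ = 1/k, and u_R(σ) = σ − 0.6) are illustrative assumptions, not from the model:

```python
# Computing the cutoffs of Proposition 1 under illustrative assumptions (not
# from the paper): sigma ~ Uniform[0, 1], c(d) = d (hence d_bar = 1/k), and
# u_R(s) = s - 0.6.

def u_R(s):
    return s - 0.6

def integral(g, lo, hi, n=4_000):
    # midpoint rule; exact for the linear u_R used here
    h = (hi - lo) / n
    return sum(g(lo + (i + 0.5) * h) for i in range(n)) * h

def bisect_root(h, lo, hi, iters=60):
    # root of a function h that is increasing on [lo, hi] with h(lo) < 0 < h(hi)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if h(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# sigma' solves int_{sigma'}^{1} u_R(s) ds = 0 and lies below sigma_hat = 0.6
sigma_p = bisect_root(lambda c: integral(u_R, c, 1.0), 0.0, 0.6)
k_star = 1.0 / (1.0 - sigma_p)  # the threshold 1/c(1 - sigma') with c(d) = d

# for k > k_star, sigma'' solves int_{sigma''-d_bar}^{sigma''} u_R(s) ds = 0
k = 2.0
d_bar = 1.0 / k
sigma_pp = bisect_root(lambda m: integral(u_R, m - d_bar, m), d_bar, 1.0)
print(f"sigma' = {sigma_p:.3f}, k* = {k_star:.3f}, sigma''(k=2) = {sigma_pp:.3f}")
```

For these primitives the closed forms are σ′ = 0.2, k* = 1.25 and σ′′ = 0.6 + d̄/2 (so σ′′ = 0.85 at k = 2), which the bisection reproduces.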

[Figure 1: two panels plotting µ(σ) against σ, each with a 45° line. Panel (a): k ≤ 1/c(1 − σ′), with v(1) = kc(1 − σ′) < 1 and cutoff σ′. Panel (b): k > 1/c(1 − σ′), with cutoffs σ′′ − d̄ and σ′′.]

Figure 1: The monotone PSE.

The left-hand graph shows the equilibrium if k is small. In this case, R does not realize the project for any message µ < 1, and plays a mixed strategy when observing µ = 1. She thereby mixes in such a way that, at state σ′, S is indifferent between the truthful message µ(σ′) = σ′ and not having the project realized, and the distorted message µ(σ′) = 1 and having the project realized with probability v(1). If R does not realize the project for any off-equilibrium message µ ∈ [σ′, 1), which is consistent with the PSE requirements, S finds it indeed optimal to send the truthful message µ(σ) = σ for σ < σ′ and µ(σ) = 1 for σ ≥ σ′.

As k increases beyond the threshold 1/c(1 − σ′), the constraint v(µ) ≤ 1 becomes binding. That is, R cannot possibly give S more than acceptance with probability 1 upon a sufficiently high message µ. This reduces the size of the maximum distortion that can be supported in equilibrium, and thereby the size of the interval for which S does not send a truthful message. The right-hand graph in Figure 1 shows the resulting equilibrium when k is large. In this case, R does not realize the project for any message µ < σ′′, but realizes the project for µ ≥ σ′′. When observing message µ = σ′′, which is sent for all states σ ∈ [σ′′ − d̄, σ′′], R is indifferent and therefore willing to realize the project. It is consistent with the PSE requirements that R would not realize the project for any off-equilibrium message µ ∈ [σ′′ − d̄, σ′′). Given this strategy of R, S finds it indeed optimal to send message µ(σ) = σ′′ for all σ ∈ [σ′′ − d̄, σ′′), and the truthful message µ(σ) = σ for all other σ.
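At the lower end σ′′ − d̄ of the pooling interval, S must be exactly indifferent between the pooling message and telling the truth. This can be checked numerically under illustrative primitives (uniform prior, c(d) = d, u_R(σ) = σ − 0.6 — assumptions, not from the paper):

```python
# Incentive check for the large-k equilibrium under illustrative assumptions
# (not from the paper): c(d) = d gives d_bar = 1/k, and with a uniform prior
# and u_R(s) = s - 0.6 the cutoff is sigma'' = 0.6 + d_bar/2 in closed form.

def pooling_payoff(k, sigma):
    # S at state sigma sends mu = sigma'': the project is realized (payoff 1),
    # minus lying costs k * (sigma'' - sigma)
    d_bar = 1.0 / k
    sigma_pp = 0.6 + d_bar / 2.0
    return 1.0 - k * (sigma_pp - sigma)

k = 2.0
d_bar = 1.0 / k
sigma_pp = 0.6 + d_bar / 2.0
boundary = sigma_pp - d_bar

# the boundary state is indifferent (truth-telling also yields 0), while
# interior states of the pooling interval strictly prefer the distorted message
print(f"payoff at boundary state: {pooling_payoff(k, boundary):.3f}")
print(f"payoff at interior state 0.6: {pooling_payoff(k, 0.6):.3f}")
```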

3.2 Main Result

In this section we study the effect of changes in the costs of dishonesty on persuasion. We thereby define persuasion as the ex ante probability P(k) that the project is realized in the equilibrium of the game, and study how P(k) depends on the cost parameter k:

Proposition 2 P(k) increases in k for k ≤ 1/c(1 − σ′), and decreases in k for k > 1/c(1 − σ′).

Proof: Proposition 1 implies P(k) = [1 − F(σ′)]kc(1 − σ′) for k ≤ 1/c(1 − σ′), and P(k) = 1 − F(σ′′ − d̄) for k > 1/c(1 − σ′). Observe first that σ′ is independent of k. Hence, it directly follows that P′(k) > 0 for k ≤ 1/c(1 − σ′). Observe second that since d̄ decreases in k, it follows from ∫_{σ′′−d̄}^{σ′′} u_R(σ)f(σ)dσ = 0 that σ′′ − d̄ increases in k (while σ′′ decreases in k). Hence, it directly follows that P′(k) < 0 for k > 1/c(1 − σ′).
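A quick numerical trace of P(k) under illustrative primitives (uniform F so that F(s) = s, c(d) = d, u_R(σ) = σ − 0.6 — all assumptions, not from the paper) makes the hump shape visible:

```python
# The persuasion probability P(k) from the proof of Proposition 2, evaluated
# under illustrative assumptions (not from the paper): sigma ~ Uniform[0, 1]
# (so F(s) = s), c(d) = d, u_R(s) = s - 0.6; then sigma' = 0.2, the threshold
# is k* = 1/c(1 - sigma') = 1.25, and sigma'' = 0.6 + d_bar/2 in closed form.

def P(k):
    sigma_p = 0.2
    k_star = 1.0 / (1.0 - sigma_p)              # = 1.25 here
    if k <= k_star:
        # [1 - F(sigma')] * k * c(1 - sigma')
        return (1.0 - sigma_p) * k * (1.0 - sigma_p)
    d_bar = 1.0 / k                              # c^{-1}(1/k)
    sigma_pp = 0.6 + d_bar / 2.0
    return 1.0 - (sigma_pp - d_bar)              # 1 - F(sigma'' - d_bar)

# P rises linearly up to the peak at k* = 1.25 and falls thereafter
for k in [0.5, 1.0, 1.25, 2.0, 5.0]:
    print(f"P({k}) = {P(k):.3f}")
```

With these primitives P peaks at P(1.25) = 0.8 and declines towards 0.4 as k grows large, matching the hump shape of Proposition 2.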

The intuition for this hump-shaped relationship is as follows: If dishonesty is not very costly, i.e., if k is relatively small, S sends the message µ(σ) = 1 for all σ ≥ σ′, but R responds by only realizing the project with probability v(1) = kc(1 − σ′). As k and the associated costs of dishonesty increase, R realizes the project with ever higher probability v(1) in order to keep S indifferent between the truthful message µ(σ′) = σ′ and µ(σ′) = 1 at state σ′. Persuasion then peaks when k = 1/c(1 − σ′). In this case, S persuades R with probability one for all states σ ≥ σ′, as his costs kc(1 − σ′) = 1 of misreporting by d = 1 − σ′ at state σ′ are just equal to his benefit from the project's realization. As dishonesty becomes even costlier, S's willingness to misreport decreases and he only persuades R for ever fewer states of the world.²

4 Conclusions

We have shown that an expert is most persuasive if misreporting the truth is neither too cheap, nor too costly. Hence, one should expect experts to be most influential in circumstances in which their costs of dishonesty are intermediate.

² Related to Proposition 2, it holds that S's expected utility Eu_S is maximized by some k ≥ 1/c(1 − σ′). To see this, observe that Eu_S = kΔ if k ≤ 1/c(1 − σ′), where Δ ≡ [1 − F(σ′)]c(1 − σ′) − ∫_{σ′}^1 c(1 − σ)f(σ)dσ > 0.
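The footnote's claim about S's expected utility can likewise be checked under illustrative primitives (uniform F, c(d) = d, u_R(σ) = σ − 0.6 — all assumptions, not from the paper):

```python
# Checking footnote 2 under illustrative assumptions (not from the paper):
# sigma ~ Uniform[0, 1], c(d) = d, u_R(s) = s - 0.6, hence sigma' = 0.2,
# k* = 1.25, and Delta = [1 - F(sigma')]c(1 - sigma') - int_{sigma'}^1 c(1 - s) ds.

def Eu_S(k):
    sigma_p, k_star = 0.2, 1.25
    if k <= k_star:
        # Eu_S = k * Delta; with these primitives Delta = 0.64 - 0.32 = 0.32
        delta = (1 - sigma_p) ** 2 - (1 - sigma_p) ** 2 / 2.0
        return k * delta
    # large k: payoff 1 whenever sigma >= sigma'' - d_bar, minus the lying
    # costs k * (sigma'' - sigma) incurred on the pooling interval
    d_bar = 1.0 / k
    sigma_pp = 0.6 + d_bar / 2.0
    lying_costs = k * d_bar ** 2 / 2.0
    return (1.0 - (sigma_pp - d_bar)) - lying_costs

# Eu_S increases up to k* = 1.25 and is constant (at 0.4) thereafter,
# consistent with the footnote: some k >= 1/c(1 - sigma') maximizes it
print([round(Eu_S(k), 3) for k in (0.5, 1.0, 1.25, 2.0, 5.0)])
```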


References

[1] Adams, Renée, and Daniel Ferreira, 2007, A Theory of Friendly Boards, Journal of Finance 62: 217-250.
[2] Banks, Jeffrey S., 1990, A Model of Electoral Competition with Incomplete Information, Journal of Economic Theory 50: 309-325.
[3] Callander, Steven, and Simon Wilkie, 2007, Lies, Damned Lies and Political Campaigns, Games and Economic Behavior 60: 262-286.
[4] Crawford, Vincent, and Joel Sobel, 1982, Strategic Information Transmission, Econometrica 50: 1431-1451.
[5] Edmond, Chris, 2013, Information Manipulation, Coordination, and Regime Change, Review of Economic Studies 80: 1422-1458.
[6] Fischer, Paul, and Robert Verrecchia, 2000, Reporting Bias, Accounting Review 75: 229-245.
[7] Goldman, Eitan, and Steve Slezak, 2006, An Equilibrium Model of Incentive Contracts in the Presence of Information Manipulation, Journal of Financial Economics 80: 603-626.
[8] Grossman, Sanford J., and Motty Perry, 1986, Perfect Sequential Equilibrium, Journal of Economic Theory 39: 97-119.
[9] Hodler, Roland, Simon Loertscher, and Dominic Rohner, 2010, Inefficient Policies and Incumbency Advantage, Journal of Public Economics 94: 761-767.
[10] Kamenica, Emir, and Matthew Gentzkow, 2011, Bayesian Persuasion, American Economic Review 101: 2590-2615.
[11] Kartik, Navin, 2009, Strategic Communication with Lying Costs, Review of Economic Studies 76: 1359-1395.


[12] Kartik, Navin, Marco Ottaviani, and Francesco Squintani, 2007, Credulity, Lies and Costly Talk, Journal of Economic Theory 134: 93-116.
[13] Kolotilin, Anton, 2013, Experimental Design to Persuade, Working Paper, University of New South Wales.
[14] McCloskey, Donald, and Arjo Klamer, 1995, One Quarter of GDP is Persuasion, American Economic Review Papers and Proceedings 85: 191-195.
[15] Mullainathan, Sendhil, Joshua Schwartzstein, and Andrei Shleifer, 2008, Coarse Thinking and Persuasion, Quarterly Journal of Economics 123: 577-619.
[16] Rogoff, Kenneth, and Anne Sibert, 1988, Elections and Macroeconomic Policy Cycles, Review of Economic Studies 55: 1-16.
