
Econometrica, Vol. 81, No. 2 (March, 2013), 781–812

LANGUAGE BARRIERS

ANDREAS BLUME
University of Arizona, Tucson, AZ 85721, U.S.A.

OLIVER BOARD
New York University School of Law, New York, NY 10012, U.S.A.

The copyright to this Article is held by the Econometric Society. It may be downloaded, printed and reproduced only for educational or research purposes, including use in course packs. No downloading or copying may be done for any commercial purpose without the explicit permission of the Econometric Society. For such commercial purposes contact the Office of the Econometric Society (contact information may be found at the website http://www.econometricsociety.org or in the back cover of Econometrica). This statement must be included on all copies of this Article that are made available electronically or in any other format.


LANGUAGE BARRIERS

BY ANDREAS BLUME AND OLIVER BOARD1

Different people use language in different ways. We capture this by making language competence—the set of messages an agent can use and understand—private information. Our primary focus is on common-interest games. Communication generally remains possible; it may be severely impaired even with common knowledge that language competence is adequate; and indeterminacy of meaning, the confounding of payoff-relevant information with information about language competence, is optimal.

KEYWORDS: Communication games, language differences, indeterminate meaning, organizational codes.

1. INTRODUCTION

INDIVIDUALS DIFFER in their language use, express themselves more clearly in some domains than others, and do not always agree on the meanings of utterances. The very notion of an organizational code (Arrow (1974)) presumes a privileged understanding of the code by the members of the organization. If meanings were always clear, there would be no need for statutory interpretation of laws by courts (Posner (1987), Eskridge, Frickey, and Garrett (2006)), or to create trading zones to mediate communication across subcultures in science (Galison (1997)). Our aim here is to present a simple, portable, formal framework for expressing the idea that language is imperfectly shared, that some individuals are better equipped to use language, and that there can be disagreements about meaning. To this end, we introduce privately known language competence into standard communication games. Our approach lets us express that agents are language constrained, that their constraints differ, the different degrees to which

1 We are grateful for conversations with and comments received from Ying Chen, Wouter Dessein, Christopher Gertz, Maria Goltsman, Sergiu Hart, Navin Kartik, Frederic Koessler, Jiwoong Lee, Wei Li, John Moore, Stephen Morris, Gregory Pavlov, Joel Sobel and from seminar audiences at the Harvard/MIT Theory seminar, the Trimester Program on Mechanism Design at the Hausdorff Institute for Research in Mathematics (Universität Bonn), the European University Institute, the Institute for Advanced Study, Lehigh University, Maastricht University, Princeton University, Rice University, the University of Bielefeld, Boston College, the Paris School of Economics, the University of Pittsburgh, Polytechnic—HEC, Rutgers University, the University of Texas–Austin, the University of Toronto, the University of Western Ontario, the Econometric Society World Congress (Shanghai), the Workshop on Decentralized Mechanism Design, Distributed Computing, and Cryptography sponsored by the Institute for Advanced Study and DIMACS (Princeton), and the 2nd Brazilian Workshop of the Game Theory Society (São Paulo). Blume's stay at the Institute for Advanced Study was funded through a Roger W. Ferguson, Jr. and Annette L. Nazareth Membership.

© 2013 The Econometric Society

DOI: 10.3982/ECTA9183


agents know a language, and the different degrees to which language is shared among agents.2 In the examples that Lewis (1969) used to illustrate conventional meaning, meaning is clear: Each state of the world is indicated by one and only one message and each message induces one and only one action. Thus the meaning of a message can be equivalently expressed as the state in which that message is appropriate (its indicative meaning) or as the action which is appropriate for that message (its imperative meaning). In Crawford and Sobel's (1982; henceforth CS) setting, conflict between the communicating parties necessitates that meanings are coarse. The indicative meaning of a message is now a nontrivial set of states, and its imperative meaning the action that is induced by beliefs concentrated on that set of states. Meanings are also coarse in the common-interest case if the language is restricted by limiting the set of available messages, as is done by Crémer, Garicano, and Prat (2007) in their work on optimal organizational codes and by Jäger, Metzger, and Riedel (2011) on convex categories in optimal (natural) languages. In our model, players have private information about which messages they can send and understand—their language competence—in addition to their decision-relevant private information.3 An example of such language competence would be familiarity with an organization's code. The degree to which language is shared in organizations is an important determinant of their structure and performance. March and Simon (1958) noted the inadequacy of language for communicating about intangible and nonstandardized objects and expressed the belief that "language compatibility" shapes the usage of communication channels in organizations.
Consistent with this view, Bechky (2003) found misunderstandings due to occupational language differences in a study of a semiconductor equipment manufacturing company, and Zenger and Lawrence (1989) found that age and tenure distributions affect communication patterns in a U.S. electronics firm. Weber and Camerer (2003) demonstrated experimentally how organizations develop homegrown languages when natural language fails them and how post-merger performance suffers when different homegrown languages disagree. Another domain in which language competences differ and there is uncertainty about them is doctor-patient communication. Ong, de Haes, Hoos, and Lammes (1995) surveyed the literature and cited evidence for a tension between medical language (ML) and everyday language (EL); patients sometimes try to make use of basic familiarity with ML but doctors fail to recognize the change in register, and there are considerable gaps in the physician-patient understanding of common psychological terms (depression, migraine, eating disorder). In linguistics, Hymes (1972) has formulated the notion of "communicative competence" and stressed the need to acknowledge "differential competence in a heterogeneous speech community," contrasting his view with Chomsky's conception of linguistic theory as "concerned primarily with an ideal speaker-listener, in a completely homogeneous speech community" (Chomsky (1965), cited in Hymes (1972)).

In our setup, players' information about language competence is purely instrumental. This suggests that it is sensible to continue to think of the indicative meaning of a message as the decision-relevant information conveyed by the message, and the imperative meaning as the action induced by that message. With this interpretation, in addition to being coarse, meaning becomes indeterminate. The sets of decision-relevant states indicated by a message—its indicative meaning—may vary with the messages available to the player sending the message. The action induced by a message—its imperative meaning—may depend on whether or not the recipient of the message understands it. We are interested in how this uncertainty about message meaning affects the ability of players to communicate and the manner of their communication. Our main focus is on common-interest games.

2 Language constraints also appear in Crémer, Garicano, and Prat (2007) and Jäger, Metzger, and Riedel (2011), who limited players to finite numbers of messages; Crawford and Haller (1990) and Blume (2000), who imposed symmetry requirements on players' strategies; and Rubinstein (1996), who dealt with agents for whom some objects are nameless and who have access to a limited set of binary relations on the set of objects. Rubinstein (2000) studied how language constraints affect individual decision making. None of these focused on communication with an imperfectly shared language. Dewatripont and Tirole (2005) pointed out and investigated the moral-hazard-in-teams aspect of communication, that senders and receivers exert effort in clarification and comprehension. They modeled language constraints implicitly through a communication-success function with sender and receiver efforts as arguments and did not raise the issue of agents being uncertain about each other's language competence.

3 There are sensible ways of expressing imperfectly shared meanings other than through private information about language competence, for example, communication through noisy channels (Blume, Board, and Kawamura (2007)), correlated equilibria (De Jaegher (2003)), and local interaction (Zollman (2005)). We view these as complementary. What our approach adds is a natural way to express different degrees of knowing a language and reasoning about other players' knowledge of language in communication games.
When all players send messages, receive messages, and take actions, we find that communication generally remains possible, while efficiency losses from making language competence private information can be severe even when language competence itself is always sufficient for attaining efficiency. We then restrict attention to sender-receiver games so as to isolate the effects of uncertainty about the meaning of received messages from uncertainty about how sent messages will be interpreted. In optimal equilibria of common-interest sender-receiver games where only the sender's language competence is an issue, we show that the sender will always make effective use of all the messages available to her. In these equilibria, indicative meaning will generally be indeterminate in the sense that decision-relevant information gets confounded with instrumental information about language.4 Similarly, if only the receiver's competence is an issue, then the imperative meaning will be indeterminate, in the sense that the receiver's response to a message becomes stochastic from the sender's perspective due to its dependence on the receiver's privately known language competence. Private information about language competence thus drives a wedge between the indicative meanings of messages and their imperative meanings.

4 Similar difficulties of separating different dimensions of private information from each other arise when all of these dimensions directly impact payoffs. Morgan and Stocken (2003) showed, in a variant of the CS model where the sender is privately informed about her preferences in addition to the state, that, in equilibrium, the sender cannot fully reveal the state even if preferences are fully aligned at the interim stage. It is impossible completely to separate the two dimensions of private sender information. Unlike in our setup, both dimensions directly impact payoffs, and the confounding of information about the state with information about preferences is driven by ex ante conflict between sender and receiver. Morris (2001) demonstrated how a reputational dimension may prevent full revelation of the state even if interim preferences are fully aligned. Levy and Razin (2007) showed that communication in one (common-interest) dimension may be hindered by conflict in another dimension because they are linked through the prior. Sometimes there is a benefit from private information being multidimensional, either because it permits the expert to trade off incentives across dimensions (Chakraborty and Harbaugh (2010)) or because uncertainty about the expert's bias leads to a bias that is diminished in expectation (Li and Madarász (2008)).

2. PRIVATE KNOWLEDGE OF LANGUAGE COMPETENCE

We start with a framework that incorporates privately known language competence into a class of two-stage games in which players simultaneously and publicly communicate in the first stage and simultaneously take actions in the second stage.

Players i = 1, …, I interact in two stages (we use I to indicate both the player set and its cardinality). In the communication stage, each player i sends a message mi from a finite set M that is observed by all other players. In the action stage, each player i takes an action ai ∈ Ai. At the beginning of the game, each player i is privately informed of her decision type ti ∈ Ti and her language type λi ⊆ M. We assume that there is one message, m0, that is always available to all players, and define the set of player i's language types as Λi := {λ ∈ 2^M | m0 ∈ λ}. Each ti is drawn from a distribution Fi on Ti and the language type profile λ is drawn from a distribution π on Λ := ×i∈I Λi. The distributions F1, …, FI and π are independent and common knowledge. The profile of decision types t ∈ T = ×i∈I Ti and the profile of actions a ∈ A = ×i∈I Ai determine player i's payoff Ui(a, t).

Player i's language type λi is the set of messages that she can send and understand. Messages that she does not understand, she has to treat identically. To capture this, for any λi introduce an equivalence relation ∼λi on the set of all profiles of messages m ∈ M^I, with the interpretation that m ∼λi m′ if m and m′ differ only in messages that i does not understand. Formally, m ∼λi m′ if and only if, for all j ∈ I, it is the case that mj ≠ m′j ⇒ mj, m′j ∈ M \ λi.5 Each player i's strategy is a pair (σi, ρi) consisting of a signaling rule σi : Ti × Λi → Δ(M) at the communication stage and a decision rule ρi : Ti × Λi × M^I → Δ(Ai) at the action stage. The signaling rule σi must satisfy the condition that σi(ti, λi) ∈ Δ(λi) for all ti ∈ Ti and λi ∈ Λi, and the decision rule must satisfy the condition that ρi(ti, λi, m) = ρi(ti, λi, m′) for all ti ∈ Ti, λi ∈ Λi, and for all m, m′ ∈ M^I with m ∼λi m′. We refer to these two conditions as player i's language constraints.6

5 There are many natural ways to enrich this framework: (1) One can allow players to make some, albeit coarse, distinctions among messages that they do not understand by letting a language type be a pair (λi, Pi), where λi ⊆ M is the set of messages that player i can send and Pi is a partition of M that satisfies m ∈ λi ⇒ {m} ∈ Pi and indicates which distinctions the player can make among messages. This would allow one to capture, for instance, the phenomenon that the agent does not understand the difference between "metaphysics" and "dialectics" but places both in philosophy. (2) Instead of letting players respond to unknown messages strategically, as we do, one could introduce a nonstrategic default interpretation of unknown messages. (3) One could permit the sender to send some messages that she does not understand, by letting a language type be a pair (Qi, Pi) of (possibly identical) partitions of M, where, as before, Pi captures the distinctions she can make among received messages and where she has to treat messages in any element of Qi identically by randomizing uniformly over those messages. This would capture, for example, the player's being able to use the terms "dialectics" and "metaphysics," but without being able to differentiate their meanings.

6 The constraint on the signaling rule resembles that of the literature on verifiable information/hard evidence/disclosure, where permissible messages are subsets of the type space that include the true type (Grossman (1981), Milgrom (1981), Milgrom and Roberts (1986)), or, more generally, where each type is assigned a set of permissible messages (Glazer and Rubinstein (2001, 2006)). In our setting, a type is two-dimensional, composed of a language type and a decision type, and a (two-dimensional) type's set of permissible messages coincides with the second type component, the language type. Unlike in the disclosure literature, messages do not directly reveal information about the sender's decision type. In the future, it may be interesting to study environments where language types and decision types are correlated, for example, when mastery of an occupation-specific jargon correlates with skill in that occupation.

3. UNIVERSAL PRIVATE LANGUAGE CONSTRAINTS

Our initial focus is on the case where all players communicate, take actions, and face privately known language constraints. Here we make two observations: (1) There is generally a role for communication even with privately known language competence; that is, for language to be useful, it does not have to be common knowledge. (2) Universal private information about language constraints may imply a significant efficiency loss even when it is common knowledge that language constraints themselves impose no efficiency losses and when the loss from partial private information about language constraints is negligible.

3.1. A Role for Communication

Assume that Ai and Ti are finite for all i and that all players have common interests, that is, there is a function U : A × T → R such that Ui = U for all i ∈ I. All distributions F1, …, FI and π have full support. Suppose that U has a unique maximizer a(t) for every t and that M contains at least three messages,
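To make the framework of Section 2 concrete, the following sketch (our own illustration, not from the paper; the message names and the two helper functions are hypothetical) encodes language types as subsets of M containing m0, the equivalence relation ∼λi on message profiles, and the two language constraints on strategies:

```python
from itertools import product

M = ["m0", "m1", "m2", "m3"]        # finite message set; m0 is always available

def language_types(M):
    """All subsets of M that contain m0 (the sets Λi)."""
    rest = [m for m in M if m != "m0"]
    return [frozenset(["m0"] + [m for m, b in zip(rest, bits) if b])
            for bits in product([0, 1], repeat=len(rest))]

def equivalent(m, m_prime, lam):
    """m ~_lam m': the profiles differ only in messages outside lam."""
    return all(a == b or (a not in lam and b not in lam)
               for a, b in zip(m, m_prime))

lam1 = frozenset({"m0", "m1"})       # player 1 understands only m0 and m1

# Profiles that differ only in unknown messages are indistinguishable:
assert equivalent(("m2", "m0"), ("m3", "m0"), lam1)
# ...but a known message is always distinguished from an unknown one:
assert not equivalent(("m1", "m0"), ("m2", "m0"), lam1)

# Signaling constraint: sigma(t, lam) puts weight only on messages in lam.
def respects_signaling_constraint(sigma, lam):
    return all(m in lam for m, p in sigma.items() if p > 0)

assert respects_signaling_constraint({"m0": 0.5, "m1": 0.5}, lam1)
assert not respects_signaling_constraint({"m0": 0.5, "m2": 0.5}, lam1)

# Decision-rule constraint: rho must be constant on ~_lam equivalence classes.
def respects_decision_constraint(rho, lam, profiles):
    return all(rho[m] == rho[mp]
               for m in profiles for mp in profiles if equivalent(m, mp, lam))

profiles = list(product(M, repeat=2))
rho = {m: ("act_known" if m[0] in lam1 else "act_unknown") for m in profiles}
assert respects_decision_constraint(rho, lam1, profiles)

print(len(language_types(M)))        # 8 language types
```

Note that ∼λi is only nontrivial when at least two messages lie outside λi; with a single unknown message, no two distinct profiles can differ only in messages the player fails to understand.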


m0, m′, and m″. We call such a common-interest game information responsive if it satisfies the condition

∃ t′, t″ℓ such that aj(t′) ≠ aj(t′−ℓ, t″ℓ) ∀ j ∈ I.

Intuitively, a game is information responsive if there is some player whose information affects every component of the optimal action profile. In common-interest games, a strategy profile that maximizes ex ante expected payoffs must be a perfect Bayesian equilibrium profile. We refer to such an equilibrium as an optimal equilibrium and note that it solves the problem of a planner who can commit players to follow strategies of her choice.7

PROPOSITION 1: In information-responsive common-interest games, there is an optimal equilibrium with communication that is strictly superior to any equilibrium without communication.

PROOF: Since we have a finite game, the problem of finding a profile of strategies αi : Ti → Ai in the game without communication that maximizes joint payoffs,

max_{α1,…,αI} Σ_{t∈T} U(α1(t1), …, αI(tI), t)F(t),

has a solution, α̂. Evidently, there is no loss in restricting attention to pure strategies, and given that we have a common-interest game, the profile α̂ forms an equilibrium. For every player i, let λ̃i be a language type with m′, m″ ∈ λ̃i and recall that, by our full-support assumption on π, any such language type has positive probability. In the communication game, consider the strategy profile (σ, ρ) that prescribes, at the communication stage, for all players i ≠ ℓ the signaling rule σi(ti, λ̃i) = m′ and σi(ti, λi) = m0 for all (ti, λi) with λi ≠ λ̃i, and for player ℓ the signaling rule σℓ(t′ℓ, λ̃ℓ) = m′, σℓ(t″ℓ, λ̃ℓ) = m″, and σℓ(tℓ, λℓ) = m0 for all (tℓ, λℓ) ∉ {(t′ℓ, λ̃ℓ), (t″ℓ, λ̃ℓ)}. Define t″ := (t′−ℓ, t″ℓ), m′ := (m′, m′, …, m′), and m″ := (m′, m′, …, m′, m″, m′, …, m′) (with m″ in the ℓth component). At the action stage, let the strategy profile prescribe the action rule ρi(ti, λ̃i, m′) = ai(t′), ρi(ti, λ̃i, m″) = ai(t″), and ρi(ti, λi, m) = α̂i(ti) otherwise. Then, for any decision type profile t ≠ t′, t″, the ex post payoff in the communication game is the same as in the game without communication. For any decision type profile t = t′, t″, an ex post optimal action profile is chosen whenever the language state λ̃ is realized, which occurs with positive probability, and otherwise the ex post payoff is the same as in the game without communication. Therefore, the ex post payoff in the game with communication is never less than the ex post payoff in the game without communication. If there is i ∈ I with α̂i(t′i) ≠ ρi(t′i, λ̃i, m′), then, in state t′, the ex post payoff in the communication game strictly exceeds the ex post payoff in the no-communication game. If, however, α̂i(t′i) = ρi(t′i, λ̃i, m′) for all i ∈ I, then it must be the case that α̂i(t″i) ≠ ρi(t″i, λ̃i, m″) for all i ≠ ℓ, in which case, in state t″, the ex post payoff in the communication game strictly exceeds the no-communication payoff. Therefore, the ex ante payoff from the profile (σ, ρ) in the communication game strictly exceeds the payoff from any optimal profile α̂ in the game without communication. While (σ, ρ) itself need not be an optimal profile in the communication game, since the game is finite and has the common-interest property, an optimal profile (σ*, ρ*) exists. Using the fact that we have a common-interest game once more, it follows that (σ*, ρ*) is an equilibrium. Therefore, the communication game has an optimal equilibrium, and this equilibrium has a strictly higher payoff than the optimal equilibrium of the game without communication. Q.E.D.

7 The close connection between optimality and equilibrium in common-interest games has been fruitfully exploited by Crawford and Haller (1990), McLennan (1998), and Alpern (2002).

Note that this result tells us nothing about the form of the optimal equilibrium. It simply utilizes the fact that when the distribution of language types has full support, there will be instances in which it is possible accurately to reveal the profile of decision types, to signal universal comprehension of the relevant messages, and to take the corresponding optimal profile of actions. It may, however, not be optimal ever fully to reveal the state, and the manner in which decision types pool on messages may vary with their language types, thus confounding message meanings. Message meaning may be further confounded because players may be unable to signal message comprehension. We will investigate these questions, which are of central interest to us, later in environments with more structure.
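The construction in the proof of Proposition 1 can be replicated numerically in a minimal example of our own (not from the paper): two players, t1 ∈ {0, 1} uniform, common payoff U = −(a1 − t1)² − (a2 − t1)², so the unique optimal profile a(t) = (t1, t1) depends on player 1's type and the game is information responsive with ℓ = 1. Language types are drawn uniformly from the four subsets of M = {m0, m′, m″} that contain m0. The sketch computes the ex ante payoff of the profile built in the proof and checks that it strictly beats the best no-communication profile:

```python
from itertools import product

M = ("m0", "mp", "mpp")                      # mp stands for m', mpp for m''
# Language types: the four subsets of M containing m0, each with probability 1/4.
LTYPES = [frozenset({"m0"}), frozenset({"m0", "mp"}),
          frozenset({"m0", "mpp"}), frozenset(M)]
FULL = frozenset(M)                          # the type λ̃ used in the proof

# Common-interest payoff: both players want to match player 1's type t1.
def U(a1, a2, t1):
    return -((a1 - t1) ** 2 + (a2 - t1) ** 2)

# Best no-communication profile: player 1 plays a1 = t1; player 2, knowing
# nothing about t1, plays a constant guess (either guess yields -1/2).
no_comm = max(sum(0.5 * U(t1, g, t1) for t1 in (0, 1)) for g in (0, 1))

# Proof's profile: type λ̃ separates t1 with m', m''; all other types pool on m0.
def sigma1(t1, lam1):
    if lam1 == FULL:
        return "mp" if t1 == 0 else "mpp"
    return "m0"

def rho2(lam2, msg):
    if lam2 == FULL and msg == "mp":
        return 0
    if lam2 == FULL and msg == "mpp":
        return 1
    return 0                                  # default action α̂2 = 0 otherwise

comm = sum(0.5 * 0.25 * 0.25 * U(t1, rho2(lam2, sigma1(t1, lam1)), t1)
           for t1 in (0, 1) for lam1 in LTYPES for lam2 in LTYPES)

print(no_comm, comm)                          # -0.5 -0.46875
assert comm > no_comm                         # communication strictly helps
```

Coordination succeeds fully only when both players happen to draw λ̃ (probability 1/16 here), exactly the full-support event the proof exploits; every other language state reproduces the no-communication payoff.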
3.2. The Interaction of Uncertainties About Language Competence

Our purpose in this section is to show, via a two-player example, that universal uncertainty about language constraints can lead to severe efficiency losses even when the constraints themselves and one-sided uncertainty imply no substantial loss. For the next result, assume that there are two players. At the action stage, each player chooses one of 2n locations, that is, Ai = {1, 2, …, 2n}, with n ≥ 2. Locations are either good or bad. If both players choose the same location and that location is good, then both receive a payoff of 1. Otherwise their common payoff is 0. Exactly one of the first n locations is good and exactly one of the second n locations is good. Good locations are drawn independently from uniform distributions on the sets {1, …, n} and {n + 1, …, 2n}, respectively. Player 1 privately knows which of the first n locations is good: Her decision type set is T1 = {1, 2, …, n} with t1 ∈ T1 indicating the good location. Similarly, player 2 privately knows which of the second n locations is good: His decision type set is T2 = {n + 1, …, 2n} with t2 ∈ T2 indicating


the good location. Consider a class of games indexed by κ, where, for each κ = 1, 2, …, the finite message space M has at least 2 × κ² elements (for notational purposes, we do not explicitly index M by κ), #(λi) = κ² with probability 1 for both individuals, and #(λ1 ∩ λ2) = κ with probability 1. Pairs of language types (λ1, λ2) are drawn from a uniform distribution on the set {(λ1, λ2) ∈ 2^M × 2^M | #(λ1 ∩ λ2) = κ, #(λi) = κ², i = 1, 2}.8 To accommodate all cases of players having or not having access to information about their counterpart's language competence, we write strategies as functions of the entire language state and make the appropriate restrictions when part of the state is private information. Then a (behaviorally mixed) strategy for player i is a pair (σi, ρi) consisting of a signaling rule σi : Λi × Λ−i × Ti → ΔM (where σi(λi, λ−i, ti) = σi(λi, λ′−i, ti) for all λi ∈ Λi, λ−i, λ′−i ∈ Λ−i, ti ∈ Ti if i does not know −i's language competence), and an action rule ρi : Λi × Λ−i × Ti × M → ΔAi, so that ρi(λi, λ−i, ti, m−i) is player i's action as a function of player −i's message m−i ∈ M (where ρi(λi, λ−i, ti, m−i) = ρi(λi, λ′−i, ti, m−i) for all λi ∈ Λi, λ−i, λ′−i ∈ Λ−i, ti ∈ Ti, m−i ∈ M if i does not know −i's language competence), with the understanding that both message and action rules respect player i's language constraints. The following observation demonstrates that efficiency losses from private information about language competence can be severe, even when language competence itself is always sufficient for attaining efficiency.

OBSERVATION 1: In the two-player location choice game: (1) If the language competence of both players is private information, then the common efficient equilibrium payoff converges to the no-communication payoff, 1/n, as κ → ∞. (2) If at least one player knows the language competence of the other, the common efficient equilibrium payoff converges to the maximally feasible payoff, 1, as κ → ∞.
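Before turning to the proof, the two limits can be illustrated numerically. The proof below bounds the part (1) payoff above by 1 − (1 − 1/κ)²(1 − 1/n) and the part (2) coordination-failure probability by (1 − 1/κ)^⌊κ²/n⌋; this sketch (with arbitrary illustrative values of n and κ, our own choice) simply evaluates those expressions:

```python
n = 4   # number of candidate good locations per player (illustrative)

def part1_upper_bound(kappa):
    # Payoff bound when both language types are private: tends to 1/n.
    return 1 - (1 - 1 / kappa) ** 2 * (1 - 1 / n)

def part2_failure_prob(kappa):
    # Bound on the probability that player 1 has no usable coding message
    # when she knows lambda_2: tends to 0 (roughly like exp(-kappa/n)).
    return (1 - 1 / kappa) ** (kappa ** 2 // n)

for kappa in (10, 100, 1000):
    print(kappa, part1_upper_bound(kappa), part2_failure_prob(kappa))

# Part (1): the bound approaches the no-communication payoff 1/n from above.
assert abs(part1_upper_bound(10 ** 4) - 1 / n) < 1e-3
# Part (2): the failure probability vanishes rapidly.
assert part2_failure_prob(1000) < 1e-100
```

The contrast is stark: with two-sided uncertainty the chance that any message is mutually understood shrinks as 1/κ, while with one-sided knowledge the sender can spread ⌊κ²/n⌋ independent attempts across λ2, so failure becomes exponentially unlikely.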
8 Note that in this example the language types are not drawn independently from each other.

PROOF: For (1), observe that for any message in λi that player i may send, that message does not belong to λ−i with probability 1 − 1/κ, in which case player −i's action does not depend on the message sent by player i and the probability of being able to coordinate on location ti is at most 1/n. Since the probability that neither player sends a message that the other player understands is (1 − 1/κ)², the optimal payoff when both language competences are private information is bounded from above by 1 − (1 − 1/κ)²(1 − 1/n), which converges to 1/n as κ → ∞.

To show (2), without loss of generality, assume that it is player 1 who knows player 2's language competence λ2. For any κ² ≥ n, we can find ⌊κ²/n⌋ mutually exclusive subsets of λ2 of size n, S1(λ2), S2(λ2), …, S⌊κ²/n⌋(λ2). For any λ2, define a function φλ2 : λ2 → T1 with the property that the restriction of φλ2 to any set Si(λ2) is a bijection, denoted φi,λ2. At the communication stage, let player 1 use the message rule σ1 : Λ1 × Λ2 × T1 → M defined by

σ1(λ1, λ2, t1) = φ⁻¹_{i*,λ2}(t1) if i* = min{i | φ⁻¹_{i,λ2}(t1) ∈ λ1} exists, and σ1(λ1, λ2, t1) = m(λ1, λ2) otherwise,

where m(λ1, λ2) is an arbitrary element of λ1 ∩ λ2, and let player 2 use an arbitrary message rule σ2. At the action stage, let player 1 use the action rule ρ1 defined by ρ1(λ1, λ2, t1, m2) = t1 for all (λ1, λ2, t1, m2), and let player 2 use the action rule ρ2 defined by

ρ2(λ2, λ1, t2, m1) = φλ2(m1) if m1 ∈ λ2, and ρ2(λ2, λ1, t2, m1) = t2 otherwise,

for all (λ2, λ1, t2, m1). Notice that whenever the set {i | φ⁻¹_{i,λ2}(t1) ∈ λ1} is nonempty, the above strategy profile guarantees both players a payoff of 1. Conditional on λ2, the probability that the set {i | φ⁻¹_{i,λ2}(t1) ∈ λ1} is empty is no larger than (1 − 1/κ)^⌊κ²/n⌋, which converges to zero as κ → ∞. Therefore, as κ → ∞, we have a sequence of games and corresponding strategy profiles with payoffs converging to 1. The result follows from the fact that, in finite common-interest games, optimal profiles are equilibrium profiles, an optimal profile exists, and the payoff from an optimal profile is bounded below by the payoff from the profile we constructed. Q.E.D.

The reason for having to state the second part of Observation 1 as a convergence result is that knowing one's counterpart's language type is not enough to communicate in her language. Joe can know that Jill can name all the local bars without being able to name any himself. It is a strength of the framework that it permits one to make the distinction between knowing someone's language competence and having that competence oneself.

4. SENDER-RECEIVER GAMES

In this section, we restrict attention to sender-receiver games, and separately analyze the cases where only the language competence of the sender is the issue and where only the language competence of the receiver is the issue.
4.1. Language Competence of the Sender

A privately informed sender, S, communicates with a receiver, R, by sending one of a finite number of messages m ∈ M, where #(M) ≥ 2. The payoffs U^S(a, t) and U^R(a, t) of the sender and the receiver depend on the receiver's action, a ∈ A = R, and the sender's payoff-relevant information t ∈ T, her decision type; we assume that T is a convex and compact subset of R that has a


nonempty interior. It is common knowledge that the sender's decision type is drawn from a distribution F with density f that is everywhere positive on T. The function U^S is differentiable and strictly concave in a for every t ∈ T. Denote the set of distributions over T by Δ(T) and assume that the receiver has a unique best reply ρ̂(μ) to any belief μ ∈ Δ(T). Slightly abusing notation, for any measurable set Θ ⊂ T, denote by ρ̂(Θ) his optimal response to his prior belief concentrated on Θ and use ρ̂(t) as shorthand for ρ̂({t}). Assume that for all t′ ≠ t, ρ̂(t′) ≠ ρ̂(t). Note that for any set Θ ⊂ T that has positive probability and any set Θ0 that has zero probability,

ρ̂(Θ) = ρ̂(Θ \ Θ0).

For any Θ ⊂ T and any two actions a1 ∈ A and a2 ∈ A, define

Θ_{a1 ≽ a2} := {t ∈ Θ | U^S(t, a1) ≥ U^S(t, a2)},

the set of types in Θ who prefer action a1 to action a2. Similarly, define Θ_{a1 ≻ a2} for strict preference, and Θ_{a1 ∼ a2} for indifference. Note that for any measurable set Θ ⊂ T and for any pair a1, a2 ∈ A with a1 ≠ a2, the continuity of the sender's payoff function implies that the sets Θ_{a1 ≽ a2}, Θ_{a2 ≽ a1}, and Θ_{a1 ∼ a2} are measurable. Assume that for any two a1, a2 ∈ A with a1 ≠ a2, Prob(T_{a1 ∼ a2}) = 0. For any K-tuple of actions (a1, …, aK) with 2 ≤ K ≤ #(M), define

Θ_{a1 ≽ a2,…,aK} := ∩_{n=2}^{K} Θ_{a1 ≽ an},

the set of sender types who prefer action a1 over actions a2, …, aK, and use Ω to denote the collection of all such sets. Our next assumption ensures that agents would find it useful to have access to a fine-grained language.

ASSUMPTION 1: (A) For any Θ ∈ Ω and any pair of actions a1, a2 ∈ A such that Θ_{a1 ≻ a2} and Θ_{a2 ≻ a1} both have positive probability, ρ̂(Θ_{a1 ≽ a2}) ≠ ρ̂(Θ). (B) For any belief μ, there exists a type t(μ) such that ρ̂(μ) = ρ̂(t(μ)).

Part (A) of Assumption 1 formalizes the idea that the optimal receiver response is sufficiently sensitive to beliefs.
This is the key assumption that ensures that the receiver responds differently to a message, depending on whether he knows or does not know that the sender has alternative attractive messages available. Part (B) requires that any best response to some belief is also the receiver’s ideal point for some state of the world. In the common-interest case, which is our primary focus, part (A) comes essentially for free given the assumptions we made on the payoff function (strict concavity, unique maximizer for every state, and differentiability), while part (B) can be guaranteed by a collective nonsatiation property that requires that all actions that are not optimal for any type can be improved upon for all types. Note in particular that Assumption 1 is satisfied in the general version of the CS model and in a multidimensional environment where sender and receiver have convex loss functions (considered by Jäger, Metzger, and Riedel (2011)), both of which permit conflict of interest. We verify these claims in Appendix A.
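In the uniform-quadratic common-interest case (state t uniform on [0, 1], common payoff −(t − a)², so ρ̂(μ) is the posterior mean), these objects are easy to compute: Θ_{a1 ≽ a2} consists of the types at least as close to a1 as to a2, a half-interval, and part (A) can be checked directly for Θ = [0, 1]. A sketch (our own illustration; the function names are hypothetical):

```python
# State t ~ U[0,1]; common payoff -(t - a)^2, so the receiver's best reply to a
# belief concentrated on an interval is its conditional mean (the midpoint).
def rho_hat(lo, hi):
    return (lo + hi) / 2

def theta_weak_pref(a1, a2, lo=0.0, hi=1.0):
    """Theta_{a1 >= a2} for a1 < a2: types in [lo, hi] weakly preferring a1,
    i.e., those with |t - a1| <= |t - a2|, which means t <= (a1 + a2)/2."""
    cut = (a1 + a2) / 2
    return lo, min(hi, max(lo, cut))

a1, a2 = 0.2, 0.8
lo, hi = theta_weak_pref(a1, a2)            # the half-interval [0, 0.5]
# Both strict-preference sets have positive probability here, and the reply
# to the restricted set differs from the reply to Theta = [0, 1]:
assert rho_hat(lo, hi) != rho_hat(0.0, 1.0)   # 0.25 vs 0.5: Assumption 1(A)
# Part (B) also holds: any best reply is the ideal action of the type
# t(mu) = rho_hat itself, since the ideal action for type t is a = t.
print(rho_hat(lo, hi), rho_hat(0.0, 1.0))
```

The same logic extends to any Θ ∈ Ω built from finitely many weak-preference restrictions, since each restriction just intersects intervals.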


We assume that not every message m ∈ M may be available to the sender. Instead, the sender privately learns a set λ ⊂ M of available messages, her language type.9 One message, m0 ∈ M, is assumed to be always available. Thus the sender's language type λ is drawn, independently of her decision type t, from a commonly known distribution π on Λ = {λ ∈ 2^M | m0 ∈ λ}, the set of all subsets of M that contain the message m0. As usual, we assume that this entire structure is common knowledge. A sender strategy is a mapping σ : T × Λ → Δ(M) that satisfies the condition σ(t, λ) ∈ Δ(λ). A receiver strategy is a mapping ρ : M → A.10 We study perfect Bayesian Nash equilibria (σ, ρ, β), where β is a belief system that is derived from the sender's strategy σ by Bayes's rule whenever possible, the sender's strategy σ is a best reply to the receiver's strategy ρ, and ρ is a best reply after every message, given the belief system β.

4.1.1. Indeterminacy of Indicative Meaning

Inspired by Lewis (1969), we refer to the (decision-relevant) information about the sender that is conveyed by the message as the indicative meaning of the message. Indeterminacy of indicative meaning arises when the receiver's strategy is not optimal given the sender's language competence.

DEFINITION 1: There is indeterminacy of indicative meaning in equilibrium (σ, ρ, β) if there exists a language type λ and a message m ∈ λ that is used with positive probability by λ such that ρ(m) is not optimal for the receiver conditional on the language type λ being revealed (in other words, if the equilibrium is not ex post with respect to λ).

As an example, consider the case where the sender's decision type t is drawn from a uniform distribution on the interval [0, 1], the receiver takes actions a ∈ R, and sender and receiver have the common payoff function −(t − a)²; this corresponds to the uniform-quadratic CS environment with common interests.
If it is commonly known that the sender's set of available messages is λ0 = {m0}, the unique optimal action is 1/2; if it is commonly known that the set of available messages is λ1 = {m0, m1}, then it is optimal for the sender to send message m0 on the interval [0, 1/2) and message m1 otherwise, and for the receiver to respond to those messages with actions 1/4 and 3/4, respectively. In contrast, if

________
9 Our distinction between decision types and language types is a convenient terminological device. One could instead follow Harsanyi (1967, 1968a, 1968b) and express the inability of the sender to send a particular message by assigning an arbitrarily large negative payoff to doing so. This would not affect our results but would obscure the fact that ultimately both parties are interested in communicating information about t. Any information transmission about language competence is merely instrumental. We leave the analysis of a still more general model, in which different messages are available at different privately known costs, for later work.
10 The restriction to pure strategies for the receiver is without loss of generality because of our assumption that the receiver has a unique best reply given any belief.


it is commonly known that both language types λ0 and λ1 have probability 1/2, it is optimal for the sender with language type λ1 to send message m0 for t ∈ [0, (√5 − 1)/2), where (√5 − 1)/2 ≈ 0.618, and message m1 otherwise, and for the receiver to respond to message m0 with action a0 ≈ 0.427 and to message m1 with action a1 ≈ 0.809. Since, if it were revealed to him that the sender's language type is λ1, it would be optimal for the receiver to respond to message m0 with action 0.309 instead of a0 ≈ 0.427, there is indeterminacy of indicative meaning.

Indeterminacy of indicative meaning need not arise if only a few actions are induced in equilibrium and, given the equilibrium strategy of the receiver, the sender is never constrained by her language ability, so that, for every action that can be induced, she always has a message that induces that action. This is, trivially, the case in pooling equilibria.11 Intuitively, however, the more information is transmitted and the more actions are induced in equilibrium, the more likely it is that there will be indeterminacy of indicative meaning. Those language types of the sender who have access to fewer messages will sometimes find themselves language constrained and forced to send messages that they would prefer not to send if they had access to a larger set of messages. Thus different language types will pool on the same message for different sets of decision types. When receiving such messages, the receiver best responds by averaging over these sets of decision types, and will generally take an action that differs from the action he would take if he knew the sender's language type and therefore did not have to average. This is captured by the following preliminary observation.
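As a numerical sanity check of the example above (a sketch of ours, not part of the original analysis; the function name and the fixed-point method are our additions), the equilibrium cutoff and actions can be computed by iterating the receiver's best replies together with the cutoff type's indifference condition:

```python
def two_type_example(tol=1e-12):
    """Uniform-quadratic common-interest example with language types
    λ0 = {m0} and λ1 = {m0, m1}, each with probability 1/2.
    Iterate on the cutoff t at which a λ1-sender switches from m0 to m1."""
    t = 0.5  # initial guess for the cutoff
    for _ in range(10_000):
        # Best reply to m0: average over all λ0-types and λ1-types below t.
        a0 = (0.5 * 0.5 + 0.5 * t * (t / 2)) / (0.5 + 0.5 * t)
        # Best reply to m1: mean of [t, 1].
        a1 = (1 + t) / 2
        t_new = (a0 + a1) / 2  # cutoff type is indifferent between a0 and a1
        if abs(t_new - t) < tol:
            break
        t = t_new
    return t, a0, a1

t_star, a0, a1 = two_type_example()
# t_star ≈ 0.618 = (√5 − 1)/2, a0 ≈ 0.427, a1 ≈ 0.809; if λ1 were revealed,
# the best reply to m0 would instead be t_star / 2 ≈ 0.309.
```

The iteration is a contraction here (the slope of the update map at the fixed point is about 0.31), so it converges from any starting cutoff in (0, 1).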
LEMMA 1: There will be indeterminacy of indicative meaning in any equilibrium (σ, ρ, β) for which there is a message m* ∈ M and a pair of language types λ* ≠ λ̃ such that λ* = λ̃ ∪ {m*}, π(λ̃) ≠ 0, π(λ*) ≠ 0, λ* uses all of her available messages with positive probability, and all those messages induce distinct actions.12

________
11 It is also possible to find games with equilibria in which there is positive probability that the sender is unable to induce some of the equilibrium actions of the receiver, but there is no indeterminacy of indicative meaning. A simple example is this: Consider the uniform-quadratic CS environment with common interests. Let the sender's set of available messages be λ1 = {m0, m1, m2} with probability p and λ0 = {m0} otherwise. Regardless of the value of p, there is an equilibrium in which language type λ1 divides the decision type space into three equal-length intervals, sends message m1 for decision types in the interval (0, 1/3), sends message m0 for decision types in the interval (1/3, 2/3), and sends message m2 for decision types in the interval (2/3, 1). This equilibrium is optimal and there is no indeterminacy of indicative meaning: Conditional on observing either message m1 or m2, the receiver knows the sender's language type, and after message m0, the sender's language type is irrelevant to him. Note that this example is nongeneric because it depends on the fact that the receiver's pooling action coincides with one of the actions in a three-step equilibrium.
12 The result is stated in terms of a one-message difference between language types so as to avoid counterexamples like the one in footnote 11. While the one-message difference is sufficient


PROOF: Since m0 is always available, the set λ̃ is not empty. The fact that λ* uses all of her messages with positive probability and all of those messages induce distinct actions, together with the fact that, for any two a1, a2 ∈ A with a1 ≠ a2, we have Prob(T_{a1∼a2}) = 0, imply that language type λ̃ also uses all her messages with positive probability. Hence, there must be a set of decision types, with positive probability, who use m* when their language type is λ* and use a message m̃ ≠ m* when their language type is λ̃. Use a* to denote the action that is induced by m* and ã the action that is induced by m̃. Let Θ̃ denote the set of decision types who use message m̃ when their language type is λ̃. Since λ̃ uses all of her messages with positive probability, the set Θ̃ has positive probability. Similarly, since λ* uses all of her messages with positive probability, the set Θ̃_{ã→a*} of types who switch to message m* and the set Θ̃_{ã→ã} of types who continue to send m̃ both have positive probability. The set Θ̃_{ã→ã} differs at most by a set that has probability zero from the set of decision types who send message m̃ when their language type is λ*. Hence, if there is no indeterminacy in the equilibrium (σ, ρ, β), then ρ(m̃) = ρ̂(Θ̃_{ã→ã}). Also, in the equilibrium (σ, ρ, β), by assumption, Θ̃ is the set of decision types who send message m̃ when their language type is λ̃. Therefore, if there is no indeterminacy, then ρ(m̃) = ρ̂(Θ̃). By Assumption 1, however,

    ρ̂(Θ̃_{ã→ã}) ≠ ρ̂(Θ̃),

which is inconsistent with having no indeterminacy. Q.E.D.

4.1.2. Common-Interest Games

In this section, we consider the case where sender and receiver have identical preferences: U^S ≡ U^R ≡ U. We show that an optimal equilibrium exists. Furthermore, in any optimal equilibrium, all language types use all their messages with positive probability and all available messages induce distinct actions. It is interesting that this holds despite the fact that, as we showed above, different language types using all their messages may lead to indeterminacy of indicative meaning. First-order intuition for why every language type uses all of her messages is simple: An unused message can be introduced to refine the information that

________
(Footnote 12, continued) for our purposes and is always satisfied if we impose a full support condition on the distribution of language types, it is clearly not necessary for indeterminacy of indicative meaning to arise in equilibrium. After all, if different language types have access to a common message, they have different alternatives to using that message and therefore are likely to use that message for different sets of decision types. Only rarely will the receiver's best responses to beliefs concentrated on these sets of decision types coincide with each other and thus satisfy a necessary condition for absence of indeterminacy of indicative meaning. As an illustration, in the example of footnote 11, any small positive sender bias would resurrect indeterminacy of indicative meaning.
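As an arithmetic check of the footnote-11 example (our sketch, not from the paper), the receiver's best reply to m0 is 1/2 regardless of the probability p of the three-message type, because both language types' m0-senders have conditional mean 1/2:

```python
def reply_to_m0(p):
    """Best reply to m0 when λ1 = {m0, m1, m2} (probability p) sends m0 on
    (1/3, 2/3) and λ0 = {m0} (probability 1 - p) sends m0 for all t in [0, 1]."""
    mass_l1 = p * (1 / 3)        # λ1-types pooling on m0
    mass_l0 = (1 - p) * 1.0      # all λ0-types
    mean_l1, mean_l0 = 0.5, 0.5  # conditional means of the two pools
    return (mass_l1 * mean_l1 + mass_l0 * mean_l0) / (mass_l1 + mass_l0)

# The reply is 1/2 for every p, so m0 carries no decision-relevant information
# about the sender's language type: no indeterminacy in that example.
```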


the sender transmits. A complication arises because other language types may already use that message and may see their payoffs reduced as the action induced by that message changes. We will show, however, that the magnitude of such losses is of second order in comparison to the gains of the language type who begins using that message.

We proceed by first establishing existence of an optimal strategy profile. Here we argue in terms of the receiver's strategy ρ, which, as we will see, can be viewed as a point in the compact set T^M.13 We construct a function that assigns to each strategy of the receiver the payoff that results from the sender using a best response to that strategy. Under our assumptions, this function is continuous. The problem of maximizing a continuous function over a compact set has a solution, and therefore an optimal strategy profile exists. Since we have a common-interest game, this profile must be part of an equilibrium profile. Thus, the following holds.

LEMMA 2: With common interests, there exists an optimal strategy profile.

(The proofs of Lemmas 2, 3, and 4 can be found in Appendix B, along with other proofs that are omitted from the main text.)

For each language type and any optimal receiver strategy, one can partition the set of decision types into subsets for which the same message is optimal. Our assumptions imply that each language type induces every action that she can achieve with her repertoire of messages on a set of decision types that has positive probability.

LEMMA 3: In an optimal profile, each language type induces every action a′ for which she has a message m′ with ρ(m′) = a′ on a set of decision types that contains an open set and therefore has positive probability.

Hence, if a language type does not use one of her messages, it must be because one of her other messages induces the same action.
Then, if there is a language type who does not use all of her messages, we can take a pair of messages that induce the same action a, one of which is used by the language type under consideration and one of which is not. Split the subset of decision types who induce action a into two positive-probability subsets and have one of these subsets continue to use the message they used before, while the other subset switches to the formerly unused message m. Other language types may already have been using message m, but note that, since we are considering an optimal strategy profile, the receiver's response to message m was itself optimal. Therefore, an infinitesimal change in the response to m results in a first-order common loss that is zero when the expectation is taken over the types who used message m to begin with. At the same time, there is a positive first-order gain

________
13 This result generalizes the corresponding result of Jäger, Metzger, and Riedel (2011) to environments with private information about language competence.


for the language type who starts using message m because she transmits useful information to the receiver. This implies the following.

LEMMA 4: In an optimal profile, all messages of a language type λ with π(λ) > 0 induce distinct actions.

The following result summarizes our findings and connects them to indeterminacy of indicative meaning.

PROPOSITION 2: In any common-interest game, there exists an optimal equilibrium; in any such equilibrium, all messages of a language type that has positive probability induce distinct actions; all such language types use each of their messages with positive probability; and, if the language type distribution π has full support on Λ, there will be indeterminacy of indicative meaning.14

PROOF: The first three parts of the proposition summarize Lemmas 2–4. This sets the stage for invoking Lemma 1, which proves the fourth part of the proposition: If the language-type distribution π has full support on Λ, there will be pairs of language types both of which have positive probability and which differ only by one available message, and by Lemmas 2–4, all of these messages are used by both language types and induce distinct actions. Q.E.D.

Proposition 2 is one of our key results. It demonstrates the ubiquity of indeterminacy of indicative meaning that results from combining private information about language competence with closely aligned incentives. With congruence of incentives, optimality requires that a large variety of messages be used; private information about language competence then implies that the receiver cannot always be sure whether a message was sent out of necessity, because more preferable messages were not available, or out of a desire to communicate payoff-relevant information. Note that repeated talk by the sender alone, that is, replacing the set of messages M by the set M^T of strings of length T that can be formed with the elements of M, is no guarantee of the absence of meaning indeterminacies.
In particular, the intuition that it may be optimal to first talk about language and then about payoff states is frequently incorrect. This is easiest to see if the language-type distribution on the expanded message space M^T that results from letting the sender talk repeatedly is subject to the full support assumption that is used in Proposition 2, in which case this result implies that there is

________
14 While all messages that are in the repertoire of a language type induce distinct actions in an optimal equilibrium, it need not be the case that all messages in M induce distinct actions in an optimal equilibrium. For example, in the uniform-quadratic CS environment with common interests, with two language types {m0, m1} and {m0, m2}, in any optimal equilibrium, m1 and m2 are synonyms, while there is a non-optimal equilibrium with ρ(m0) = 1/2, ρ(m1) = 1/6, and ρ(m2) = 5/6.
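As a side calculation of ours (not in the text), the pull toward using as many messages as possible in the uniform-quadratic environment can be quantified: with n equal cells and midpoint replies, the common expected payoff is −1/(12n²), which is strictly increasing in n:

```python
from fractions import Fraction

def uniform_partition_payoff(n):
    """Expected payoff -E[(t - a)^2], t ~ U[0, 1], when [0, 1] is split into
    n equal cells and the receiver replies with each cell's midpoint.
    Each cell contributes ∫ (t - mid)^2 dt = width^3 / 12."""
    width = Fraction(1, n)
    return -n * width ** 3 / 12  # = -1/(12 n^2)

payoffs = [uniform_partition_payoff(n) for n in (1, 2, 3)]
# [-1/12, -1/48, -1/108]: finer use of the message space is strictly better.
```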


indeterminacy of indicative meaning. Even under the (perhaps more natural) assumption that there is a language-type distribution on the set of elementary messages M and any concatenation of a given length of available elements of M is itself available, the logic of Proposition 2 applies: It is generally optimal that all messages in the expanded message space induce distinct receiver replies and that language types use all messages in their repertoire. Then it is impossible for any language type λ0 that is a strict subset of a language type λ1 to send a message that identifies her language as λ0, because λ1 would want to send the same message for a positive-probability set of decision types.

Similarly, giving the sender the option to disclose her language type generally does not remove indeterminacies of indicative meaning. The reason is that, in an optimal equilibrium, the sender will disclose selectively. Here is a simple example: Consider the uniform-quadratic CS environment with common interests; let the sender's language type be λ0 = {m0} with probability 1/2 and λ1 = {m0, m1} otherwise; and, in addition, let a λ1 sender have the option to disclose her language type. It is easily seen that it is not part of an optimal equilibrium for λ1 always to disclose: Conditional on always disclosing, optimality would require that λ1 partition the unit interval into two equal-length subintervals and induce actions 1/4 and 3/4. There would then be a positive-measure subset of decision types who would find it optimal simply to send message m0, which in the putative equilibrium would be implicitly revealed to come from λ0 and would therefore induce action 1/2.
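The deviating set in this disclosure example is easy to compute explicitly (a sketch under the example's assumptions; the grid and the function name are ours): against the putative always-disclose equilibrium, a λ1-sender compares her best disclosed action (1/4 or 3/4) with the action 1/2 that sending m0 without disclosure would induce:

```python
def gains_by_not_disclosing(t):
    """True if decision type t of λ1 strictly prefers the action 1/2 (induced
    by sending m0 and being taken for λ0) to her best always-disclose action."""
    payoff_disclose = -min((t - 0.25) ** 2, (t - 0.75) ** 2)
    payoff_deviate = -(t - 0.5) ** 2
    return payoff_deviate > payoff_disclose

# Deviators form the open interval (3/8, 5/8): a positive-measure set,
# so always-disclose cannot be part of an optimal equilibrium.
deviators = [k / 1000 for k in range(1001) if gains_by_not_disclosing(k / 1000)]
```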
Another way to think about this is that the disclosure option effectively allows the λ1 sender to send four distinct messages, (m0, disclosure), (m1, disclosure), (m0, no disclosure), and (m1, no disclosure), and always disclosing would waste two of these messages.15

We conclude this section by exploring another natural way in which one might think that indeterminacies of meaning could be avoided. It is intuitive that, with language constraints, it may sometimes be optimal to opt out of communication altogether and let the status quo, in which the receiver takes a default action, prevail. If inarticulate agents choose not to communicate, we can imagine that the communication of articulate agents does not get compromised. Our requirement that all receiver best replies are optimal for some type (part (B) of Assumption 1) rules out this possibility. Our model does not admit a default action: Every candidate for an equilibrium action is the ideal point of some type and tempts nearby types to induce it if they can.

To get a sense of what will happen, however, when a default action is available, consider the uniform-quadratic CS environment with common interests,

________
15 Admittedly, there are frequent instances where a statement like "hablo español," especially when made with the right accent, communicates a great deal about one's language competence. But whenever differential competence in the sense of Hymes (1972) is not lumpy, when different fluencies do not have different names, when there is a broad spectrum of technical vocabularies among customers of a consumer electronics store, etc., it is hard to see how language competence could be easily communicated, and as we showed, it would generally not even be optimal always to disclose one's fluency.


where the action space ℝ ∪ {d} has been enlarged by adding a default action d that yields a common payoff, −ε, for some ε > 0. Suppose there are two language types, an articulate type λA and an inarticulate type λI, with λI ⊊ λA. Let nI = #(λI) and nA = #(λA). Assume that

    ε < nI ∫₀^{1/nI} (1/(2nI) − t)² dt (= 1/(12nI²)),

so that the inarticulate language type in expectation prefers the default action to the best use of the messages available to her. Call this the default-action game.

Now consider equilibrium behavior as the articulate type acquires an increasingly rich vocabulary, that is, as nA → ∞. In an optimal equilibrium, the payoff to λA converges to zero (> −ε) for every decision type, and therefore, for sufficiently large nA, the articulate language type strictly prefers not to induce the default action. Since the inarticulate language type strictly prefers the default action over an optimal (for that language type) use of her messages, for large enough nA it is optimal to use at least one message to induce the default action; there is a significant benefit to the inarticulate type from having one message induce the default action, while the potential loss to the articulate language type becomes negligible. The articulate type will not use any message that induces the default action. Hence, in an optimal equilibrium, there is at least one message that is not used by λA. There will be at most one message that induces the default action. Otherwise, λI could start using one of these messages to identify a nontrivial interval of types who currently induce the default action. If that interval is sufficiently small, all decision types in that interval would obtain a payoff in excess of −ε. Similar arguments can be used to establish that there cannot be any messages that both language types do not use and that there cannot be two messages that induce identical actions.
Since all messages other than the single message that induces the default action induce distinct actions in [0, 1], all these messages are used when available. Only the articulate type forgoes the use of one message, and this is the only message that is not used by one of the language types. Hence, we get the following.

OBSERVATION 2: In the default-action game, for sufficiently small ε and with a sufficiently expressive articulate type, the inarticulate type will use one of her messages to induce the default action. All other messages induce distinct actions and are used with positive probability by any language type to whom they are available.

It is not hard to see that the observation generalizes to the case where the language type distribution satisfies a full support condition, in which case it follows that there must be indeterminacy of indicative meaning.

4.2. Language Competence of the Receiver

In this section, we analyze a sender-receiver model in which the receiver's language competence is private information. We give conditions under which


there will be indeterminacy of the imperative meanings of messages in optimal equilibria: The sender cannot be sure how her message will be interpreted; messages induce nondegenerate distributions over receiver actions; and the sender's strategy is not optimal given the receiver's language competence. Parallel to the case of uncertainty about sender competence and indeterminacy of indicative meaning, we find that there is indeterminacy of imperative meaning in communication-rich equilibria (i.e., equilibria where there is sufficient variation in the receiver's response to messages). Interestingly, in contrast to the case of uncertainty about sender competence, with uncertainty about receiver competence it may be optimal not to use all messages.

For simplicity, in this section we focus exclusively on the receiver's language competence and assume that the sender's language competence is not an issue. Since it is costless to do so, we adopt a slightly more general model of limited understanding of messages than in Section 2: The receiver's language competence is a partition P of the message set M, with the interpretation that the receiver cannot distinguish messages that belong to the same partition element P ∈ P but can distinguish messages from different partition elements, as in the case of someone who can associate "impressionism" and "expressionism" with art but cannot distinguish the two. Formally, we require the receiver's strategy to be measurable with respect to P. The receiver's partition type P is private information and is drawn from a commonly known distribution πR on the set of partitions of M. Note that while we do not make the sender's language competence an issue, the sender is implicitly language constrained by the cardinality of the set of potential messages M, and therefore cannot communicate in the language that the receiver uses to describe his actions to himself.
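The measurability requirement can be encoded directly (an illustrative sketch of ours, not a construction from the paper): representing a partition type as a map from messages to cells forces identical responses to messages in the same cell:

```python
def make_receiver(partition, cell_actions):
    """A receiver strategy measurable w.r.t. a partition type: `partition` maps
    each message to its cell, `cell_actions` maps each cell to an action, so
    messages in the same cell necessarily get the same response."""
    return lambda m: cell_actions[partition[m]]

# A receiver who lumps "impressionism" and "expressionism" together:
P = {"impressionism": "art", "expressionism": "art", "cubism": "cubism"}
rho = make_receiver(P, {"art": 0.5, "cubism": 0.9})
# rho("impressionism") == rho("expressionism"): the two messages are
# indistinguishable to this partition type.
```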
One possible interpretation of this model is as a shorthand for a model with a richer space of possible messages M* ⊋ M, but with constraints on their use. The richer message space M* could be large enough to accommodate the messages that the receiver uses to describe the actions to himself; the constraints would make these messages unavailable to the sender. Formally, this can be accomplished by letting the sender have language type λ = M ⊊ M* with probability 1 in the model with the richer message space M*. A second possible interpretation is that the receiver does not have names for all his actions. This would be in the spirit of Polanyi's (1966) "tacit knowledge" as opposed to explicit knowledge. Formally, this could be accomplished by introducing a richer message space M* as above, but this time making the additional messages unavailable to the receiver by adding them to one of the partition elements P̄ ∈ P for every language type P of the receiver from the shorthand model, so that M* \ M ⊆ P̄.

We restrict attention to CS environments; that is, the sender's type t is drawn from a differentiable distribution F on [0, 1] with a density f that is everywhere positive on [0, 1]; the receiver takes an action a ∈ ℝ; U is twice continuously differentiable; and, using subscripts to denote partial derivatives, for each realization of t there exists an action a*_t such that U1(a*_t, t) = 0 and U11 < 0 < U12.


In this environment, a sender strategy is a mapping σ : T → Δ(M), and we can conveniently represent a receiver strategy as a mapping ρ : 2^M → A.16

DEFINITION 2: There is indeterminacy of imperative meaning in equilibrium (σ, ρ, β) if there exist a set of decision types Θ ⊆ T that has positive probability, a message m ∈ M with σ(m|t) > 0 for all t ∈ Θ, and a partition type P of the receiver that has positive probability such that message m fails to be optimal for decision types in Θ conditional on the receiver's partition type P.

For the case where the sender's language competence is privately known, we showed that for indeterminacy of indicative meaning to occur, it is sufficient that there is variety in the use of messages and in the support of the language type distribution, that is, when there are language types that differ in just one message, who use all their messages, and all of whose messages induce distinct actions. In Definition 3, we introduce an analogous condition that requires the existence of multiple receiver types, each of which responds differently to each of its partition elements, and that suffices for indeterminacy of imperative meaning when the receiver's language competence is the issue.

DEFINITION 3: There is a varied receiver response in equilibrium E = (σ, ρ, β) if (i) there is a pair of partition types P* ≠ P̃ of the receiver with a common element P0 such that πR(P̃) ≠ 0 and πR(P*) ≠ 0, and (ii) for every P ∈ P* ∪ P̃, the set {t ∈ T | U^S(ρ(P), t) > U^S(ρ(P′), t) for all P′ ≠ P, P′ ∈ P* ∪ P̃} has positive probability.

With a varied receiver response, it becomes important for the sender to know exactly what the partition type of the receiver is; this guarantees that there will be at least one pair of receiver types for which a positive-probability set of sender types would want to induce the action associated with a common partition element for one receiver type and another action for the other receiver type.
LEMMA 5: There will be indeterminacy of imperative meaning in any equilibrium E = (σ, ρ, β) with a varied receiver response.

PROOF: Call two elements Pi and Pj of the set P* ∪ P̃ adjacent for equilibrium E if ρ(Pi) ≠ ρ(Pj) and there does not exist Pk ∈ P* ∪ P̃ with ρ(Pk) ∈

________
16 With CS preferences, the receiver has a unique best reply given any belief. Hence, for partition elements P that have positive probability in equilibrium, our representation of receiver strategies is without loss of generality. For partition elements P that are off the equilibrium path, the receiver's beliefs and therefore actions could, in principle, depend not just on the partition element itself but also on the receiver's partition type P. In this case, our representation amounts to a consistency condition in the spirit of sequential equilibrium (Kreps and Wilson (1982)).


(ρ(Pi), ρ(Pj)). Since P* and P̃ have a common element and because P* ≠ P̃, there is (at least) one common element, PC, that is adjacent to a noncommon element, PNC. With CS preferences, the sender's single-crossing condition implies that there is a unique sender type who is indifferent between the actions ρ(PC) and ρ(PNC). Without loss of generality, let ρ(PC) < ρ(PNC) and PNC ∈ P̃. Define P+ := arg min{ρ(P) | P ∈ P* and ρ(P) > ρ(PC)} if there exists P ∈ P* with ρ(P) > ρ(PC), and define P+ := PC otherwise.

Suppose that P+ = PC. Since PC is common to both partitions, we have PC ∩ PNC = ∅. From the sender's single-crossing condition, it follows that those types who would want to induce ρ(PNC) when learning P̃ would want to induce ρ(PC) when learning P*. Since PC ∩ PNC = ∅, they would want to send different messages in the two cases. Thus in one of the cases, the message they would want to send differs from their equilibrium message, which establishes our claim.

Now consider the case where P+ ≠ PC. Since PC and PNC are adjacent, it must be the case that ρ(P+) > ρ(PNC). Since ρ(PC) < ρ(PNC) < ρ(P+), the sender's single-crossing condition implies that there is a positive-probability set of types (the interior of the interval of types between the type who is indifferent between ρ(PC) and ρ(PNC) and the type who is indifferent between ρ(PNC) and min{ρ(P) | P ∈ P* ∪ P̃, ρ(P) > ρ(PNC)}) who would want to induce ρ(PNC) when learning P̃ and would want to induce ρ(PC) when learning P*. Thus, as before, in one of these two cases, the message these types would want to send differs from their equilibrium message, which establishes our claim. Q.E.D.

The following example demonstrates that there is an interesting asymmetry between the effects of making the sender's language competence private information and of doing the same for the receiver.
It demonstrates that, in the latter case, optimality sometimes requires that there be messages that will never be used.

EXAMPLE 1: Consider the uniform-quadratic CS environment with common interests. Let M = {m0, m1, m2, m3}. For any ε ∈ [0, 1), define a game Γ^ε by the property that each of the receiver types {{m0, m3}, {m1}, {m2}}, {{m0}, {m1, m3}, {m2}}, and {{m0}, {m1}, {m2, m3}} has probability (1 − ε)/3 and the remaining receiver types are equally likely. Note that if ε ∈ (0, 1), the partition-type distribution πR has full support.

If ε = 0, then in any optimal equilibrium, the type space is partitioned into three equal-length intervals and the actions that are induced in equilibrium are 1/6, 1/2, and 5/6. To see this, observe first that this holds if, for the moment, we make the receiver type common knowledge. This provides an upper bound on the players' equilibrium payoffs. Then note that the same outcome that is optimal when the receiver type is common knowledge can be realized when the receiver type is private information. Denote the corresponding ex ante payoff by v^0_max.

With small positive ε, the messages m0, m1, and m2 must approximately induce the same set of actions in an optimal equilibrium as they do in an optimal


equilibrium for ε = 0. Otherwise, the ex ante payoff from optimal equilibria, v^ε_max, would remain bounded away from v^0_max, and we know that cannot be the case because the strategy profile that results in v^0_max when ε = 0 yields approximately v^0_max when ε > 0, and since we have a common-interest game, the optimal equilibrium strategy must do even better.

For any ε, let E(ε) be an optimal equilibrium for the game Γ^ε. We argue that, for sufficiently small ε > 0, no type t ∈ [0, 1] of the sender sends message m3 in the equilibrium E(ε). For any δ > 0, there exists ε(δ) > 0 such that, for all ε ∈ (0, ε(δ)), type t's payoff from sending message m3 is bounded from above by

    v̄^ε(t) = ((1 − ε)/3) · [−(t − 1/6)² − (t − 1/2)² − (t − 5/6)²] + ε · 0 + δ,

while at the same time the payoff to t from sending the optimal message from the set {m0, m1, m2} is bounded from below by

    v̲^ε(t) = (1 − ε) · [−min{(t − 1/6)², (t − 1/2)², (t − 5/6)²}] − ε · 1 − δ.

For sufficiently small ε and δ, we have v̲^ε(t) > v̄^ε(t) for all t ∈ [0, 1], which shows that there is no type of the sender who would be willing to send message m3 in any optimal equilibrium of the game Γ^ε for sufficiently small ε ∈ (0, 1).

The example shows that, unlike in the case where only sender competence is the issue, when there is uncertainty about receiver competence there may be instances when the sender may not want to use all messages in an optimal equilibrium. This will be the case when there are messages for which the probability is high that the receiver does not understand them. Then only a few of the receiver's partition types may be relevant, undermining the varied-response condition from Lemma 5. On the other hand, in an optimal equilibrium of a common-interest game, the sender will want to communicate some information. Thus, an optimal equilibrium will not be a pooling equilibrium, and for the communicated information to have an impact, there will be messages that induce distinct receiver actions.

For the following result, we adopt a slightly different perspective. Denote by P^f the finest partition of M, that is, the type of the receiver who understands all messages. We show that, in any optimal equilibrium of a game in which πR has full support but assigns almost probability 1 to P^f, there is indeterminacy of imperative meaning.

PROPOSITION 3: With common interests, an optimal equilibrium exists. For any class of games that differ only in the distributions πR, if there are finitely many


optimal equilibria in the game with πR(P^f) = 1 (e.g., if CS's condition M holds), then there exists an ε0 > 0 such that, for all ε ∈ (0, ε0) and for every πR that has full support and satisfies πR(P^f) = 1 − ε, there will be indeterminacy of imperative meaning in any optimal equilibrium.

5. CONCLUSION AND DISCUSSION

Lewis (1969) made common knowledge part of the definition of a (language) convention. In contrast, we show that there can be benefits from communication without common knowledge of language, enquire into the form that such communication takes, and explore the conditions under which lack of common knowledge of language will be more or less deleterious. We find that lack of common knowledge of language can generate substantial efficiency losses even when language competence would be adequate to achieve ex post efficiency, and that it is frequently optimal to operate with indeterminate meanings rather than limiting communication to only that part of language that is known to be shared.

We would like to comment briefly on some of the modeling choices we made, how we interpret them, and what alternatives one might consider. In our baseline model, there is a generic commonly known set of possible messages, and a player's language competence is the subset of that set which she can send and understand. As a consequence, players are aware of messages that do not belong to their repertoire and in equilibrium know their strategic use. Our interpretation of this feature is that a player has an internal private language that is complete, but does not have a complete translation of that private language into the language she could use in a given strategic interaction. This is plausible, for example, in the case of a native speaker of language A who is not proficient in foreign language B but knows that language B is functionally equivalent to language A, or in the case of a customer at a hardware store who knows what item she is looking for but cannot name it.
When there is only uncertainty about the sender’s language competence, the assumption that players know the use of messages that they themselves cannot use can easily be relaxed. For a sender with a given language competence, it suffices to know the receiver’s response to the messages that are available to her; the extra knowledge is a matter of analytical convenience. The assumption is more restrictive in the case of players having to interpret messages that do not belong to their repertoire. We adopt it to close the model without having to make an ad hoc assumption about the receiver’s response to messages he does not understand. Even when the receiver does not understand, he will generally be able to make inferences. Our assumption allows him to do that. The intuition underlying our result on the indeterminacy of imperative meaning (Proposition 3) appears robust to alternative assumptions, like assuming the receiver responds to unfamiliar messages with a default action; a restaurant patron is more likely to ask for “sour cream” when she really wants crème
fraîche the higher the probability that the waiter does not understand "crème fraîche" and responds with a puzzled look and (default) shrug.

In our baseline model, there are two categories of messages: available and unavailable messages. Additional categories can be added, with the interpretation that players can distinguish messages from different categories but not messages from the same category. This can be formalized by having a language type be any partition of the message space and requiring players to treat messages belonging to the same partition element identically. The identical treatment of undistinguished messages is similar to the representation of "absence of a common language" through symmetry constraints by Crawford and Haller (1990). Combining it with a nondegenerate partition of the message space adds some linkages between messages. More complex linkages could be expressed through a group structure on the message space, as in Blume (2000), sets of relations on the message space, as in Rubinstein (1996), or other privately known constraints on players' communication strategies and behavioral types embedded in the language type distribution. These additional linkages would become important with richer communication protocols because they facilitate communication about language and the learning and teaching of language, as in Blume (2000) and Rubinstein (1996).

In a very stimulating paper, Lipman (2009) has asked "Why Is Language Vague?" Perhaps privately known language competence is part of the answer. If we interpret the indicative meaning of a message as the decision-relevant information conveyed by that message, then we have shown that, in optimal equilibria of common-interest games, meaning will generally be confounded by auxiliary information about the speaker's language competence. In that sense, in our setting it can be optimal to be vague.
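The restaurant example above admits a one-line computation. In this sketch the payoff numbers are hypothetical, chosen only to exhibit the threshold probability at which the patron retreats to the safe, commonly understood term:

```python
# Hypothetical payoffs for the creme fraiche example: with probability eps
# the waiter does not understand the precise term and the patron gets the
# default (puzzled-shrug) outcome instead.

V_PRECISE = 1.0      # value of obtaining creme fraiche
V_SUBSTITUTE = 0.6   # value of the always-understood substitute (sour cream)
V_DEFAULT = 0.3      # value after the puzzled look and the (default) shrug

def expected_payoff(term, eps):
    """Expected payoff of a request when the waiter fails to understand
    the precise term with probability eps."""
    if term == "creme fraiche":
        return (1 - eps) * V_PRECISE + eps * V_DEFAULT
    return V_SUBSTITUTE  # "sour cream" is understood for sure

# The precise term is worth the risk iff
#   eps <= (V_PRECISE - V_SUBSTITUTE) / (V_PRECISE - V_DEFAULT).
threshold = (V_PRECISE - V_SUBSTITUTE) / (V_PRECISE - V_DEFAULT)
```

With these numbers the patron asks for crème fraîche exactly when the probability of a misunderstanding is below 4/7; the more likely the puzzled shrug, the more she confines herself to the shared part of the language.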
One may argue that vagueness of this form is a result of projecting precise two-dimensional private information into one dimension.17 Note, however, that the language dimension is purely a nuisance in common-interest games. There is no intrinsic value in conveying information about language competence. Furthermore, the "prior collateral information" that Quine (1960) cited in his discussion of the indeterminacy of translation resembles dimensions that are unaccounted for. In his well-known example of a linguist's effort to understand whether a native speaker's 'Gavagai' translates into an English speaker's 'Rabbit,' he described such "prior collateral information" as a source of "discrepancy between the present stimulus meaning of 'Gavagai' for the informant and that of 'Rabbit' for the linguist," a discrepancy not unlike that between the sender's rule for mapping decision-relevant information into messages given her language competence and the receiver's belief about that rule, which has to average across multiple possible language competences.

It is unlikely that an equilibrium approach in standard game theory can do more. After all, it is inherent in the notion of equilibrium that players know each other's strategies, which implies that, in a communication game, the receiver of a message always precisely knows the rule by which a message is generated. Our interest is in exploring the boundaries of what can be said about imprecise languages with the precise tools of game theory.

In our setting, players may disagree about meaning in the sense that an agent's use or interpretation of a message, given her actual language competence, differs from the use or interpretation that another agent expects, given his uncertainty about the former agent's language competence. This modeling strategy transfers to other environments. For example, it is tempting to speculate about the consequences of disagreements regarding the meaning of verbal agreements and contracts. If language is imperfectly shared, contracting parties may have different perceptions of which obligations are entailed by a contract and may be uncertain about a judge's interpretation of the contract, resulting in disputes.

We have followed the bulk of the literature on communication games in letting meaning be endogenous, equilibrium meaning. In the future, there may be some benefit in considering more explicitly the phenomenon of language-constrained players in the presence of a pre-existing focal language. One possibility would be to treat the focal language analogously to the behavior of nonstrategic players in models with behavioral types (e.g., Crawford (2003)). If players are language constrained and have access only to a subset of the focal meanings of a given language, mirroring our setup, then, in any given strategic situation, we expect agents to use a close available substitute for terms that they do not know. Therefore, in equilibrium we expect the kind of indeterminacies studied in this paper. The equilibrium meaning of commonly known expressions will depart from their focal meaning because agents will partly use them as substitutes for expressions that are unavailable to them.

17 We are grateful to Benny Moldovanu for reminding us of this point.
This much is captured by our present approach. What may be missing is the fact that the need to conform with focal meanings imposes additional constraints on the problem of finding an optimal language.

APPENDIX A: SUFFICIENT CONDITIONS FOR ASSUMPTION 1

With common interests, there is a common payoff function U : A × T → ℝ. If this function has a unique maximizer for every t ∈ T, denote it by a_t. In that case, define A(T) := {a ∈ A : ∃ t ∈ T for which a = a_t}. We will show that part (A) of Assumption 1 holds under the conditions stated in the text, and that for part (B) the following nonsatiation condition on actions outside of A(T) suffices.

COMMON IMPROVEMENT: For every a ∉ A(T), there exists a′ ∈ A such that U(a′, t) > U(a, t) for all t ∈ T.

OBSERVATION A1: With common interests, the following conditions on the payoff function U are sufficient to guarantee that part (A) of Assumption 1 is
satisfied: (a) U is strictly concave in a for all t ∈ T; (b) U has a unique maximizer a_t for all t; and (c) U ∈ C¹. If, in addition, the payoff function U satisfies the common-improvement condition, then part (B) of Assumption 1 holds.

PROOF: Suppose the set Θ ⊆ T has positive probability. Define a1 := arg max_{a∈A} E[U(a, t) | t ∈ Θ]; (a) and (b) ensure that a1 is well defined. Let a2 ∈ A be an action such that Θ_{a1a2} and Θ_{a2a1} both have positive probability. Define a3 := arg max_{a∈A} E[U(a, t) | t ∈ Θ_{a1a2}]. Suppose that a1 = a3. Consider an infinitesimal move from a3 in the direction of a2. Strict concavity, (a), implies that all types in Θ_{a2a1} gain strictly from such a move, and since Θ_{a2a1} has positive probability, there is a positive-probability subset of types who gain at a rate that is bounded away from zero. In contrast, from (c), the expected loss for types in Θ_{a1a2} is of second order. This implies that, contrary to what we assumed, we must have a1 ≠ arg max_{a∈A} E[U(a, t) | t ∈ Θ_{a1a2}]. The proof that a1 ≠ arg max_{a∈A} E[U(a, t) | t ∈ Θ_{a2a1}] is the same, modulo changes in subscripts. Hence, part (A) of Assumption 1 is satisfied.

To see that the common-improvement condition suffices for part (B), define, for every distribution μ and every set Θ ⊆ T that has positive μ-probability,

    a(μ) := arg max_a ∫_Θ U(a, t) dμ(t).

Then, since U is strictly concave in a and has a unique maximizer for all t, a(μ) exists. The common-improvement condition implies that, for all a ∉ A(T) and for all μ with support in T, a ≠ a(μ). Therefore, we must have a(μ) ∈ A(T). Q.E.D.

Recall that in the CS model, the sender's decision type t is drawn from a differentiable distribution F on [0, 1] with a density f that is everywhere positive on [0, 1]. The receiver takes an action a ∈ ℝ. It is assumed that the functions U^S and U^R are twice continuously differentiable and, using subscripts to denote partial derivatives, the remaining assumptions are that, for each realization of t, there exists an action a*_t such that U^S_1(a*_t, t) = 0; for each t, there exists an action a_t such that U^R_1(a_t, t) = 0; U^S_11(a, t) < 0 < U^S_12(a, t) for all a, t; and U^R_11(a, t) < 0 < U^R_12(a, t) for all a and t.

OBSERVATION A2: Assumption 1 is satisfied for the CS model.

PROOF: Part (A) of Assumption 1 is satisfied because sender and receiver preferences satisfy the single-crossing conditions U^S_12, U^R_12 > 0: Single-crossing for the sender implies that, for any positive-probability set Θ ⊂ T, the set Θ_{a1a2} is of the form Θ ∩ T_{a1a2}, where T_{a1a2} is an interval either of the form [0, t̄) or of the form (t̄, 1]. Hence, the distribution that is the prior probability concentrated on Θ ∩ T_{a1a2} either stochastically dominates or is
stochastically dominated by the distribution that is the prior probability concentrated on Θ. Therefore, the single-crossing condition for the receiver implies that ρ̂(Θ_{a1a2}) ≠ ρ̂(Θ). Part (B) of Assumption 1 follows from the fact that a_0 ≤ ρ̂(μ) ≤ a_1 for all beliefs μ, by the single-crossing condition of the receiver, continuity of a_t in t (which is implied by the maximum theorem), and the intermediate value theorem. Q.E.D.

Another environment in which Assumption 1 is satisfied is one where payoffs can be expressed in terms of convex loss functions and the sender's decision type space T is permitted to be multidimensional. Suppose the sender's and receiver's payoffs are given by U^S(a, t) = ν^S(‖t + b − a‖) and U^R(a, t) = ν^R(‖t − a‖), respectively, where ‖·‖ is the Euclidean norm and −ν^S and −ν^R are strictly increasing convex functions.18

OBSERVATION A3: Assumption 1 is satisfied when sender and receiver have convex loss functions.

PROOF: For any distribution μ with support in T and any convex subset K of T, let

    V(a, K, μ) := ∫_K ν^R(‖t − a‖) dμ(t).

Consider a point a ∉ T. By the separating hyperplane theorem, there exists a vector c ≠ 0 with c · t > c · a for all t ∈ T. The derivative of V(·, T, μ) at a in the direction c satisfies

    ∇V(a, T, μ) · c/‖c‖ = ∫_T ν^R′(‖t − a‖) ((a − t) · c)/(‖t − a‖ ‖c‖) dμ(t) > 0

because ν^R is decreasing and (a − t) · c/‖c‖ < 0 for all t ∈ T. This implies that part (B) of Assumption 1 is satisfied.

    18 Jäger, Metzger, and Riedel (2011) have examined the optimal equilibria of this environment, without uncertainty about language competence, for the common-interest case, where b = 0. There are well-defined indicative meanings ("categories" in their terminology). In any optimal equilibrium, categories are shown to be convex, giving rise to a Voronoi tessellation of the type space, and all messages are used with positive probability and induce distinct actions. In the present paper, the indicative meanings of messages become more fluid: While it is still the case that, in equilibrium, each language type partitions the set of decision types into convex sets, at the same time, for a given message, these sets will generally differ for different language types, and it is no longer the case that the set of decision types is partitioned into categories with fixed boundaries. The receiver's posterior distributions after different messages will generally have overlapping supports. For an extreme example, if, instead of always permitting silence, we required the language-type distribution to have full support on the power set of M, then trivially, in any equilibrium, the receiver's posterior would have full support on T after every message.

Furthermore, with convex loss functions, every set Θ in Ω will be convex. For any pair of distinct actions a1 and a2, the set T_{a1a2} is the intersection of T with a halfspace, and thus, if Θ_{a1a2} = Θ ∩ T_{a1a2} and Θ_{a2a1} = Θ ∩ T_{a2a1} have positive probability, they are convex and have a nonempty interior. If we denote the interior of a set X by int(X), then a variation of the above argument, using the supporting hyperplane theorem and the fact that F has full support on T, establishes that ρ̂(Θ_{a1a2}) ∈ int(Θ_{a1a2}) and ρ̂(Θ_{a2a1}) ∈ int(Θ_{a2a1}). Use a12 to denote ρ̂(Θ_{a1a2}) and a21 to denote ρ̂(Θ_{a2a1}). Since a12 ∉ Θ_{a2a1}, there exists a vector d ≠ 0 with d · t ≥ d · a12 for all t ∈ Θ_{a2a1} (and > for all t ∈ int(Θ_{a2a1})). Consider the derivative of V(·, Θ, F) at a12 in the direction d:

    ∇V(a12, Θ, F) · d/‖d‖ = ∇V(a12, Θ_{a1a2}, F) · d/‖d‖ + ∇V(a12, Θ_{a2a1}, F) · d/‖d‖
                          = ∇V(a12, Θ_{a2a1}, F) · d/‖d‖
                          = ∫_{Θ_{a2a1}} ν^R′(‖t − a12‖) ((a12 − t) · d)/(‖t − a12‖ ‖d‖) f(t) dt > 0

(the second equality holds because a12 is optimal for Θ_{a1a2}, so that ∇V(a12, Θ_{a1a2}, F) = 0),

which shows that ρ̂(Θ_{a1a2}) ≠ ρ̂(Θ) and, therefore, that part (A) of Assumption 1 is satisfied. Q.E.D.

APPENDIX B: PROOFS

PROOF OF LEMMA 2: Without loss of generality, we can confine attention to receiver strategies for which each action is a best response to some belief. Then, by Assumption 1, each receiver strategy prescribes only actions that are optimal for some type. Thus receiver strategies can be thought of as associating with each message m the type for whom the action ρ(m) is optimal; that is, it suffices to think of receiver strategies as elements of T^M. Suppose that, for any given strategy ρ of the receiver, the sender uses a best reply; that best reply exists because, given the receiver's strategy, each sender type maximizes her payoff over a finite set of alternatives. Then the resulting payoff for type (t, λ) equals

    max_{m∈λ} U(ρ(m), t).
Given this behavior of the sender, we can assign the following expected payoff to the receiver's strategy ρ:

    Q(ρ) = Σ_{λ∈Λ} π(λ) ∫_T max_{m∈λ} U(ρ(m), t) f(t) dt.

Since U and the max operator are continuous functions, the integrand is continuous and therefore, by the Lebesgue dominated convergence theorem, Q is continuous. Therefore, by Weierstrass's theorem, Q achieves a maximum on the compact set T^M. Q.E.D.

PROOF OF LEMMA 3: By Assumption 1 and common interest, ρ(m) is some type's ideal point for all m ∈ M. Hence, a is the ideal action of some type t′. Strict concavity implies that type t′ strictly prefers a to any of the finitely many other actions she can induce. By continuity, this remains true for an open set of types O(t′) containing t′, and since f is everywhere positive, the set O(t′) has positive probability. Q.E.D.

PROOF OF LEMMA 4: To derive a contradiction, suppose not; that is, there is a language type λ* with π(λ*) > 0 and two or more messages that induce the same action. It is without loss of generality to consider an optimal strategy profile in which the sender of any given language type uses only one out of any set of available messages that induce identical actions. Thus, suppose that m0, m1 ∈ λ*, ρ(m1) = ρ(m0), and λ* uses m0, but not m1. The common ex ante payoff from the optimal strategy profile (σ, ρ) equals

    Σ_{m∈M} Σ_{λ∈Λ} π(λ) ∫_T U(ρ(m), t) σ(m | t, λ) f(t) dt.

Since all messages that type λ* uses induce distinct actions, Lemma 3 implies that each of those messages is sent by an open set of types that has positive probability. Let Θ0 be the set of decision types for which language type λ* sends message m0. Recall that different types have different best replies. Therefore, we can find a type t1 that is an element of an open subset of Θ0 and that satisfies ρ̂(t1) ≠ ρ(m1). By continuity, for a sufficiently small open ball Θ1 containing t1 and satisfying Θ1 ⊂ Θ0, we have ρ̂(Θ1) ≠ ρ(m1). Now alter (only) type λ*'s behavior by having her split the set Θ0 on which she sends m0 into two subsets, so that she sends m1 on Θ1 and continues to send m0 on Θ0 \ Θ1. Denote the resulting sender strategy by σ̃ to distinguish it from the original strategy σ. Note that, as long as we do not also modify the receiver strategy, this change in the sender strategy has no effect on the common ex ante payoff. If we use a1 to denote the action that is induced by message m1, we can define the contribution
to the expected payoff from message m1 as

    W(m1, a1) := Σ_{λ∈Λ} π(λ) ∫_T U(a1, t) σ̃(m1 | t, λ) f(t) dt
               = π(λ*) ∫_T U(a1, t) σ̃(m1 | t, λ*) f(t) dt + Σ_{λ∈Λ\{λ*}} π(λ) ∫_T U(a1, t) σ(m1 | t, λ) f(t) dt
               = π(λ*) ∫_T U(a1, t) σ̃(m1 | t, λ*) f(t) dt + Σ_{λ∈Λ} π(λ) ∫_T U(a1, t) σ(m1 | t, λ) f(t) dt.

Observe that, when we change a1, we affect the contribution to the ex ante payoff from message m1 only. Also, since a1 was optimal for m1 given the original sender strategy, we have

    ∇_a W(m1, a1) = π(λ*) ∫_T ∇_a U(a1, t) σ̃(m1 | t, λ*) f(t) dt.

It follows from our choice of Θ1 that ∇_a W(m1, a1) ≠ 0. This implies that the original profile (σ, ρ) was not optimal. Q.E.D.

PROOF OF PROPOSITION 3: We begin by proving existence. Without loss of generality, we can confine attention to receiver strategies for which each action is a best response to some belief. Then, since we are in the common-interest CS environment, each receiver strategy prescribes only actions that are optimal for some type of the sender. Thus, receiver strategies can be thought of as associating with each receiver message P the type for whom the action ρ(P) is optimal; that is, it suffices to think of receiver strategies as elements of T^{2^M}, the set of functions from the power set of M into the sender's type space. Suppose that, for any given strategy ρ of the receiver, the sender uses a best reply; that best reply exists because, given the receiver's strategy, each sender type maximizes her payoff over a finite set of alternatives, the set of distributions over actions that are induced by each message. Then the resulting payoff for a sender of type t equals

    max_{m∈M} Σ_𝒫 π_R(𝒫) Σ_{P∈𝒫} U(ρ(P), t) 1{m∈P}.
Given this behavior of the sender, we can assign the following expected payoff to the receiver's strategy ρ:

    Q(ρ) = ∫_T max_{m∈M} Σ_𝒫 π_R(𝒫) Σ_{P∈𝒫} U(ρ(P), t) 1{m∈P} f(t) dt.

Since U and the max operator are continuous functions, the integrand is continuous and therefore, by the Lebesgue dominated convergence theorem, Q is continuous. Therefore, by Weierstrass's theorem, Q achieves a maximum on the compact set T^{2^M}.

It remains to show that there is indeterminacy of imperative meaning for sufficiently small positive ε. If the receiver's language competence is not an issue, which corresponds to ε = 0, then any optimal equilibrium partitions T into M nonempty intervals I_m, m ∈ M, with types belonging to the same interval sending the same message and the receiver's optimal actions following any two messages m ≠ m′ satisfying a_m ≠ a_{m′}. For sufficiently small positive ε, any optimal equilibrium E^ε of a game in which π_R has full support must approximately induce the same set of actions, in the event that messages are understood, as in one of the optimal equilibria E^0 of the game where messages are always understood. Without loss of generality, we can name the messages in ascending order of the actions they induce in E^0. Now consider two receiver types, P^f and P^p, who differ only in that the latter type cannot distinguish messages m1 and m2. With ε sufficiently small, the sets of types who send messages m1 and m2, respectively, are approximately the same in E^0 and E^ε, and the receiver responds in E^ε to {m1}, {m1, m2}, and {m2} with actions a1 < a12 < a2. Hence, the varied-response condition is satisfied. The result then follows from Lemma 5. Q.E.D.
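The existence argument can be mimicked numerically. The sketch below (illustrative grids, payoffs, and language-type probabilities, not taken from the paper) discretizes a common-interest game, treats a receiver strategy as an element of T^M, lets each sender type best-respond within her language type, and brute-forces the maximizer of Q(ρ):

```python
# Sketch of the existence argument (Lemma 2) in a discretized common-interest
# game. All numbers here are illustrative assumptions.

def U(a, t):
    return -(t - a) ** 2  # common quadratic payoff

types = [i / 100 for i in range(101)]   # decision types, uniform on [0, 1]
actions = [i / 20 for i in range(21)]   # candidate receiver actions
# Language types: with probability 1/2 the sender can only send m0.
lang_types = [(("m0",), 0.5), (("m0", "m1"), 0.5)]

def Q(rho):
    """Expected common payoff when every sender type best-responds to rho."""
    total = 0.0
    for messages, prob in lang_types:
        for t in types:
            total += prob * max(U(rho[m], t) for m in messages) / len(types)
    return total

# Brute-force search over receiver strategies rho in (actions)^M.
best_rho = max(
    ({"m0": a0, "m1": a1} for a0 in actions for a1 in actions),
    key=Q,
)
pooling = {"m0": 0.5, "m1": 0.5}  # both messages induce the prior mean
```

In the maximizing strategy the two messages induce distinct actions, and the resulting payoff strictly exceeds that of the pooling strategy in which both messages induce the prior mean, in line with Lemmas 3 and 4. The action induced by m0 confounds decision-relevant information with language competence: it must serve both the constrained senders and the low full-competence types.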

REFERENCES

ALPERN, S. (2002): "Rendezvous Search: A Personal Perspective," Operations Research, 50, 772–795. [786]
ARROW, K. J. (1974): The Limits of Organization. New York: Norton. [781]
BECHKY, B. (2003): "Sharing Meaning Across Occupational Communities: The Transformation of Understanding on the Production Floor," Organization Science, 14, 312–330. [782]
BLUME, A. (2000): "Coordination and Learning With a Partial Language," Journal of Economic Theory, 95, 1–36. [782,803]
BLUME, A., O. J. BOARD, AND K. KAWAMURA (2007): "Noisy Talk," Theoretical Economics, 2, 395–440. [782]
CHAKRABORTY, A., AND R. HARBAUGH (2010): "Persuasion by Cheap Talk," American Economic Review, 100, 2361–2382. [784]
CHOMSKY, N. (1965): Aspects of the Theory of Syntax. Cambridge, MA: MIT Press. [783]
CRAWFORD, V. P. (2003): "Lying for Strategic Advantage: Rational and Boundedly Rational Misrepresentation of Intentions," American Economic Review, 93, 133–149. [804]
CRAWFORD, V. P., AND H. HALLER (1990): "Learning How to Cooperate: Optimal Play in Repeated Coordination Games," Econometrica, 58, 571–595. [782,786,803]
CRAWFORD, V. P., AND J. SOBEL (1982): "Strategic Information Transmission," Econometrica, 50, 1431–1451. [782]
CRÉMER, J., L. GARICANO, AND A. PRAT (2007): "Language and the Theory of the Firm," Quarterly Journal of Economics, 122, 373–407. [782]
DE JAEGHER, K. (2003): "A Game-Theoretic Rationale for Vagueness," Linguistics and Philosophy, 26, 637–659. [782]
DEWATRIPONT, M., AND J. TIROLE (2005): "Modes of Communication," Journal of Political Economy, 113, 1217–1238. [782]
ESKRIDGE, W. N., P. P. FRICKEY, AND E. GARRETT (2006): Legislation and Statutory Interpretation. New York: Foundation Press. [781]
GALISON, P. (1997): Image & Logic: A Material Culture of Microphysics. Chicago, IL: The University of Chicago Press. [781]
GLAZER, J., AND A. RUBINSTEIN (2001): "Debates and Decisions: On a Rationale of Argumentation Rules," Games and Economic Behavior, 36, 158–173. [785]
——— (2006): "A Study in the Pragmatics of Persuasion: A Game Theoretical Approach," Theoretical Economics, 1, 395–410. [785]
GROSSMAN, S. (1981): "The Role of Warranties and Private Disclosure About Product Quality," Journal of Law and Economics, 24, 461–483. [785]
HARSANYI, J. C. (1967): "Games of Incomplete Information Played by Bayesian Players, I," Management Science, 14, 159–182. [791]
——— (1968a): "Games of Incomplete Information Played by Bayesian Players, II," Management Science, 14, 320–334. [791]
——— (1968b): "Games of Incomplete Information Played by Bayesian Players, III," Management Science, 14, 486–502. [791]
HYMES, D. (1972): "On Communicative Competence," in Sociolinguistics, ed. by J. Pride and J. Holmes. Harmondsworth: Penguin, 269–293. [783,796]
JÄGER, G., L. P. METZGER, AND F. RIEDEL (2011): "Voronoi Languages: Equilibria in Cheap-Talk Games With High-Dimensional Types and Few Signals," Games and Economic Behavior, 73, 517–537. [782,790,794,806]
KREPS, D., AND R. WILSON (1982): "Sequential Equilibrium," Econometrica, 50, 863–894. [799]
LEVY, G., AND R. RAZIN (2007): "On the Limits of Communication in Multidimensional Cheap Talk," Econometrica, 75, 885–893. [784]
LEWIS, D. (1969): Convention: A Philosophical Study. Cambridge, MA: Harvard University Press. [782,791,802]
LI, M., AND K. MADARÁSZ (2008): "When Mandatory Disclosure Hurts: Expert Advice and Conflicting Interests," Journal of Economic Theory, 139, 47–74. [784]
LIPMAN, B. L. (2009): "Why Is Language Vague?" Working Paper, Boston University. [803]
MARCH, J. G., AND H. A. SIMON (1958): Organizations. New York: Wiley. [782]
MCLENNAN, A. (1998): "Consequences of the Condorcet Jury Theorem for Beneficial Information Aggregation by Rational Agents," American Political Science Review, 92, 413–418. [786]
MILGROM, P. R. (1981): "Good News and Bad News: Representation Theorems and Applications," Bell Journal of Economics, 12, 380–391. [785]
MILGROM, P. R., AND J. ROBERTS (1986): "Relying on the Information of Interested Parties," RAND Journal of Economics, 17, 18–32. [785]
MORGAN, J., AND P. C. STOCKEN (2003): "An Analysis of Stock Recommendations," RAND Journal of Economics, 34, 183–203. [783]
MORRIS, S. (2001): "Political Correctness," Journal of Political Economy, 109, 231–265. [784]
ONG, L. M. L., J. C. J. M. DE HAES, A. M. HOOS, AND F. B. LAMMES (1995): "Doctor–Patient Communication: A Review of the Literature," Social Science & Medicine, 40, 903–918. [783]
POLANYI, M. (1966): The Tacit Dimension. Garden City, NY: Doubleday & Company, Inc. [798]
POSNER, R. A. (1987): "Legal Formalism, Legal Realism, and the Interpretation of Statutes and the Constitution," Case Western Reserve Law Review, 37, 179–217. [781]
QUINE, W. VAN O. (1960): Word and Object. Cambridge, MA: MIT Press. [803]
RUBINSTEIN, A. (1996): "Why Are Certain Properties of Binary Relations Relatively More Common in Natural Language?" Econometrica, 64, 343–355. [782,803]
——— (2000): Economics and Language. Cambridge, U.K.: Cambridge University Press. [782]
WEBER, R. A., AND C. F. CAMERER (2003): "Cultural Conflict and Merger Failure: An Experimental Approach," Management Science, 49, 400–415. [782]
ZENGER, T. R., AND B. S. LAWRENCE (1989): "Organizational Demography: The Differential Effects of Age and Tenure Distributions on Technical Communication," Academy of Management Journal, 32, 353–376. [782]
ZOLLMAN, K. J. S. (2005): "Talking to Neighbors: The Evolution of Regional Meaning," Philosophy of Science, 72, 69–85. [782]

Dept. of Economics, University of Arizona, Tucson, AZ 85721, U.S.A.; [email protected] and New York University School of Law, 40 Washington Square South, New York, NY 10012, U.S.A.; [email protected]. Manuscript received March, 2010; final revision received November, 2012.
