Games and Economic Behavior 49 (2004) 49–80 www.elsevier.com/locate/geb

Dynamic interactive epistemology

Oliver Board

Department of Economics, Amherst College, Amherst, MA 01002-5000, USA

Received 20 September 2002; available online 31 January 2004

Abstract

The epistemic program in game theory uses formal models of interactive reasoning to provide foundations for various game-theoretic solution concepts. Much of this work is based around the (static) Aumann structure model of interactive epistemology, but more recently dynamic models of interactive reasoning have been developed, most notably by Stalnaker [Econ. Philos. 12 (1996) 133–163] and Battigalli and Siniscalchi [J. Econ. Theory 88 (1999) 188–230], and used to analyze rational play in extensive form games. But while the properties of Aumann structures are well understood, without a formal language in which belief and belief revision statements can be expressed, it is unclear exactly what the properties of these dynamic models are. Here we investigate this question by defining such a language. A semantics and syntax are presented, with soundness and completeness theorems linking the two.
© 2003 Elsevier Inc. All rights reserved.

JEL classification: C72; D82

Keywords: Interactive epistemology; Beliefs; Belief revision; Semantic; Syntactic; Language; Canonical structure

1. Introduction

It is well established both theoretically and empirically that strategic reasoning requires agents to form conjectures not just about each other's actions, but also about each other's knowledge and beliefs, which can then be used to infer what actions they might take. In particular, the implications of common knowledge of rationality (all the agents are rational, all know they are all rational, all know that they know, and so on) have been extensively analyzed. More recently, epistemic foundations have been provided for game-theoretic solution concepts such as Nash equilibrium (Aumann and Brandenburger, 1995).

E-mail address: [email protected].
0899-8256/$ – see front matter © 2003 Elsevier Inc. All rights reserved.
doi:10.1016/j.geb.2003.10.006


O. Board / Games and Economic Behavior 49 (2004) 49–80

Comprehensive surveys of work in this area are provided by Dekel and Gul (1997) and Battigalli and Bonanno (1999). Much of this work is based around the Aumann structure model (see Aumann, 1976), in which each agent's knowledge is represented by an information partition over a set of states, or possible worlds. For the purposes of the game theorist, however, Aumann structures have several important limitations.

First, they describe a very strong concept of knowledge. An implication of modeling agents' epistemic states with information partitions is that everything they know is true, and that they have complete introspective access to this knowledge, i.e. they know everything they know (positive introspection), and they know everything they do not know (negative introspection). Negative introspection in particular has widely been considered inappropriate. More generally, it has been thought important to analyze agents' beliefs as well as their knowledge. And beliefs, unlike knowledge, can be false. These issues can be dealt with by replacing the information partitions with possibility correspondences (see e.g. Samet, 1990). Beliefs modeled by possibility correspondences at their most general do not satisfy any of the properties described above. By imposing certain restrictions on the correspondences we can recover these properties one by one.

The second problem with using Aumann structures to model rational play in games is that they are essentially static: the epistemic states that they model are fixed, while in dynamic games1 agents have a chance to change their beliefs as the game progresses. In particular, conjectures about what strategies one's opponents might be playing can be revised as moves are observed.
A stark illustration of the importance of such revisions is given by Reny (1993), who shows that once the possibility of belief change is taken into account, the game-theoretic wisdom that common knowledge of rationality implies backward induction in games of perfect information is undermined. As long as the information that an agent learns is consistent with what she already knew or believed, this problem can be handled in the existing framework. The agent's partition (or possibility correspondence) can be refined, in a manner analogous to Bayesian updating of probabilities, to take account of the new information. But, like Bayes' rule, this process is not well defined when the information learned is incompatible with the agent's previous beliefs, i.e. when she is surprised. And modeling the response to such surprises is crucial: to evaluate the rationality of strategies in a dynamic game, we must have a theory about what the players would believe at every node in the game, even though some of these nodes will typically be ruled out by the players on the basis of the information they possess at the beginning of the game.

Models of dynamic interactive reasoning have thus been developed. Stalnaker (1996) replaces the information partitions of the Aumann structure with plausibility orderings on the set of possible worlds, which encode information not just about each agent's current beliefs, but also about how these beliefs will be revised as new information is learned, even if this new information is a surprise (e.g. it takes the form of an unexpected move made by one's opponent). This seems to be a satisfactory resolution to the problem, and models of this kind have been used by Stalnaker and others to analyze rational play in dynamic games.

1 That is, games in which there is a flow of information as the game proceeds. These games are commonly represented by the extensive form.

From a philosophical point of view, however, there is something unsatisfactory about the Aumann structure model and all its extensions, as identified by Aumann (1999) himself: ". . . the whole idea of 'state of the world,' and of a partition structure that reflects the players' knowledge about the other players' knowledge, is not transparent. What are the states? Can they be explicitly described? Where do they come from?" (p. 264). Fagin et al. (1999) elaborate further: "If we think of a state as a complete description of the world, then it must capture all of the agents' knowledge. Since the agents' knowledge is defined in terms of the partitions, the state must include a description of the partitions. This seems to lead to circularity, since the partitions are defined over the states, but the states contain a description of the partitions" (p. 332).

Economists have developed an alternative model of interactive beliefs which seems to avoid this circularity. The hierarchical approach (Mertens and Zamir, 1985; Brandenburger and Dekel, 1993) takes as its starting point a set of states of nature, which describe facts of interest about the physical world, such as which strategy profile will be played. Each agent's beliefs about the state of nature are represented by a probability distribution over the set of states of nature; her beliefs about these beliefs are then represented by a probability distribution over these distributions and the set of states of nature; and so on. In this way, we build up an infinite hierarchy of beliefs for each player, called her type (after Harsanyi, 1968). In contrast to the Aumann structure approach, where the infinite hierarchy of beliefs is generated implicitly by partitions over obscure states of the world, here it is explicitly constructed from levels of probability distributions over clearly defined states of nature.
The question remains, however, as to whether a state of nature together with a description of each agent's type provides a satisfactory description of a state of the world. For it is not clear that an agent's type gives a complete description of her beliefs. Her type specifies what she believes about all the finite-level beliefs of her opponents, but does it actually describe what she believes about their types, what she believes about what they believe about her type, and so on? It turns out that as long as the types satisfy certain coherency conditions, we can answer this question in the affirmative. These coherency conditions amount to assuming that the agents satisfy positive and negative introspection, and guarantee that the belief hierarchies are closed.

Furthermore, the hierarchical model can be extended to deal with the problem of belief revision. Battigalli and Siniscalchi (1999) have shown how to construct hierarchies of conditional probability systems; the level-0 probability systems describe each agent's (probabilistic) beliefs about the physical world as before, but they also encode information about how these beliefs are revised. The level-1 systems represent the agents' beliefs over these level-0 systems, and so on. Again, as long as the appropriate coherency conditions are satisfied, these hierarchies are closed and each agent's type describes all of her beliefs.

Any extra clarity these hierarchical constructions might bring, however, comes at the price of greatly increased complexity. This complexity may well be self-defeating: Aumann (1999) describes such models as "cumbersome and far from transparent. . . In fact, the hierarchy construction is so convoluted that we present it here with some diffidence" (pp. 265, 295). In addition, two more specific problems arise. The first concerns the coherency conditions that are required for closure of the hierarchies. As we have already discussed, it may not always be appropriate to assume that agents have complete introspective access to their epistemic states; this remains true even if we are dealing with belief rather than knowledge. In the case of conditional probability systems, the coherency assumption becomes even stronger: here it is assumed that agents have complete introspective access to their belief revision schemes as well. Ideally we would like to have a system that is flexible enough to work with or without positive and negative introspection.

The second problem arises when we consider the non-probabilistic analogue of these belief hierarchies, where each level in the hierarchy describes simply which members of the previous level the agent considers possible, rather than assigning probabilities to each (the former is not generally derivable from the latter: a world may be considered possible even if it is assigned zero probability). In this case it turns out that, even with the appropriate coherency conditions, the infinite hierarchy does not in general provide a complete description of an agent's uncertainty; that is, it does not tell us which types of her opponents she considers possible (Fagin, 1994; Heifetz and Samet, 1998; Brandenburger and Keisler, 1999).

Thankfully there is a path between this Scylla and Charybdis, between the obscurity of Aumann structures and the complexity of belief hierarchies. Epistemic logic is based on a formal language which can express statements about the world and what agents believe about the world and about each other. The language is built up from a set of primitive formulas by means of an inductive rule. The primitive formulas and each step of the inductive process are entirely transparent. Hintikka (1962) showed how Kripke structures (Kripke, 1963) can be used to provide a semantics for this language, i.e. a set of rules for determining the truth or falsity of every sentence or formula in the language.
Hence there is no issue about whether or not these structures provide a complete description of the agents' uncertainty: the language itself defines the limits of what we can and cannot say about the agents' beliefs. There is a very close connection between Kripke structures and Aumann structures: the former are a general version of the latter, where the information partitions are replaced by possibility correspondences (traditionally referred to as accessibility relations), plus the addition of an interpretation which assigns truth values to the primitive formulas. Kripke structures are general enough to model knowledge or belief, with or without the introspection assumptions. Certain properties of Kripke structures correspond to various axioms and rules governing the behavior of formulas in the language: these axioms and rules, jointly referred to as an axiom system, give us a precise characterization of the sets of formulas which are true in different types of Kripke structure, and hence an elucidation of the particular concept of knowledge or belief that is being modeled. The axiom system and language form a syntax for the logic.

But there is a gap still to be filled. In order to extend the results just described to structures such as Stalnaker's, we must develop a language that is richer than that of epistemic logic. In Section 2 of this paper, we define such a language by adding revised belief operators to the standard language. Thus, if B_i ψ is a formula of the language, then so is B_i^φ ψ, to be interpreted "i believes that ψ on learning that φ." We then present a semantics for this language consisting of belief revision structures, which look much like a generalized version of Stalnaker's structures. A theorem links these structures to an axiom system which describes how these revised belief operators, and the rest of the language, behave. This axiom system is essentially the most basic axiom system of epistemic logic augmented by additional axioms and rules that correspond to some of the AGM axioms of belief revision (Alchourrón et al., 1985). These axioms are reproduced in Appendix A. Several extensions to the model, including the introduction of introspection and consistency axioms, and common belief operators, are developed in Section 3, and Section 4 comments on some issues which are not treated by our formalism.

Before we start, however, we should comment more carefully on the relevance of this work for game theory. The importance of higher-order beliefs in strategic reasoning is well understood,2 and in the dynamic setting it is essential to model how these beliefs change as agents learn new information. Battigalli and Siniscalchi (2002) have used their hierarchical models of belief revision to provide an analysis of forward-induction reasoning in its various guises (including the intuitive criterion of Cho and Kreps, 1987), as well as an epistemic characterization of backward induction. The logical approach adopted here, although less direct in application than the hierarchies of Battigalli and Siniscalchi, forms the basis of an alternative, complementary framework for analyzing dynamic games, and offers simplicity at the same time as transparency. The simplicity comes from the semantic structures that are used to provide truth conditions for formulas of the formal language: these structures are easily adapted to provide epistemic models of games (see Section 5 for an example). Unlike the constructions of Battigalli and Siniscalchi, which are infinite by definition, these models can be very small. And the axiom system and the language itself, which provide the syntax of the logic, are straightforward to interpret and give us transparency. This syntax lays out the "rules of argument" and allows nothing to be hidden in the formalism.
Soundness and completeness theorems establish equivalence between what is true in every structure and what can be proved in the axiom system: the notoriously tricky task of proving that a formula can be derived from a given set of axioms and rules is thus reduced to the mathematical problem of checking that our structures have a particular property. This methodology is adopted by Board (2002a) in a companion paper. Other papers which adopt the logical approach to analyze dynamic games include Clausing (2001) and Feinberg (2001); we discuss the logical components of those papers, along with other related literature, in Section 6.

2. Dynamic interactive epistemology

As discussed in the introduction, an important distinction is made in logic between a syntax and a semantics. A syntax consists of a formal language, defined by a set of formulas, and a proof procedure for generating theorems in that language. The proof procedure, usually expressed in the form of an axiom system, is often rather cumbersome: even basic theorems can be very tricky to prove. A semantics is made up of structures that give truth conditions for every formula in the language. A structure is a well-defined mathematical object, and usually very easy to work with, but hard to interpret. The task of the logician is to establish a connection between the syntax and the semantics. This can be done by means of soundness and completeness theorems, which link the theorems generated by the proof procedure with the truth conditions established by the structures. We start by describing the language we shall work with.

2 For a brief survey of the theoretical literature, see Morris and Shin (2003).

2.1. Language

Our language L_n(Φ) is built up from a nonempty set Φ of primitive formulas and an inductive rule. The primitive formulas stand for statements expressing basic facts about the world, such as "agent i plays action a_i." The inductive rule enables us to build up more complex formulas standing for statements such as "agent i plays action a_i and agent j plays action a_j," and "agent j believes that agent i plays action a_i." Formally, L_n(Φ) is defined as the smallest set which satisfies the following conditions:

(a) if φ ∈ Φ, then φ ∈ L_n(Φ);
(b) if φ, ψ ∈ L_n(Φ), then ¬φ ∈ L_n(Φ) and (φ ∧ ψ) ∈ L_n(Φ);
(c) if φ, ψ ∈ L_n(Φ), then B_i φ ∈ L_n(Φ) and B_i^φ ψ ∈ L_n(Φ) for i = 1, . . . , n.

For economy of notation, we take Φ and n to be fixed henceforth and omit them from the notation. We also omit parentheses whenever there is no risk of confusion, and use the following standard abbreviations: φ ∨ ψ for ¬(¬φ ∧ ¬ψ); φ ⇒ ψ for ¬φ ∨ ψ; and φ ⇔ ψ for (φ ⇒ ψ) ∧ (ψ ⇒ φ). As discussed in the introduction, L is the language of epistemic logic augmented by modal operators B_i^φ that tell us what the agents believe after receiving the information that φ. Notice that the language cannot express iterated belief revisions; that is, there are no formulas expressing statements such as "agent i believes that χ on learning that φ and then learning that ψ." We comment on this restriction in Section 4.2. We now present an axiom system and semantics for L.

2.2. Axiom system

An axiom system AX consists of a set of axioms and inference rules. An axiom is simply a formula or set of formulas, and an inference rule allows us to infer one formula from a set of other formulas.
A proof in AX is a finite sequence of formulas, each of which is either an instance of an axiom or follows from some of the preceding formulas by applying an inference rule. A proof of φ is a proof whose last formula is φ. We say that φ is provable in AX (or φ is a theorem of AX), and write AX ⊢ φ, if there is a proof of φ in AX. We shall consider the axiom system BRS for L, consisting of the following axioms and inference rules:

Taut:   true
Dist:   (B_i^φ ψ ∧ B_i^φ (ψ ⇒ χ)) ⇒ B_i^φ χ
Triv:   B_i φ ⇔ B_i^true φ
Succ:   B_i^φ φ
IE(a):  B_i^φ ψ ⇒ (B_i^{φ∧ψ} χ ⇔ B_i^φ χ)
IE(b):  ¬B_i^φ ¬ψ ⇒ (B_i^{φ∧ψ} χ ⇔ B_i^φ (ψ ⇒ χ))
MP:     from φ and φ ⇒ ψ infer ψ
RE:     from ψ infer B_i^φ ψ
LE:     from φ ⇔ ψ infer B_i^φ χ ⇔ B_i^ψ χ
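To see the proof procedure in action, here is a short derivation in BRS (a worked example of our own, not taken from the paper) showing that B_i^φ(φ ∨ ψ) is a theorem: after learning φ, the agent believes any weakening of it.

```latex
\begin{align*}
&1.\ B_i^{\varphi}\varphi
  && \text{Succ}\\
&2.\ \varphi \Rightarrow (\varphi \vee \psi)
  && \text{Taut}\\
&3.\ B_i^{\varphi}\bigl(\varphi \Rightarrow (\varphi \vee \psi)\bigr)
  && \text{RE, from 2}\\
&4.\ \bigl(B_i^{\varphi}\varphi \wedge B_i^{\varphi}(\varphi \Rightarrow (\varphi \vee \psi))\bigr)
      \Rightarrow B_i^{\varphi}(\varphi \vee \psi)
  && \text{Dist}\\
&5.\ B_i^{\varphi}\varphi \Rightarrow
      \bigl(B_i^{\varphi}(\varphi \Rightarrow (\varphi \vee \psi)) \Rightarrow
      (B_i^{\varphi}\varphi \wedge B_i^{\varphi}(\varphi \Rightarrow (\varphi \vee \psi)))\bigr)
  && \text{Taut}\\
&6.\ B_i^{\varphi}\varphi \wedge B_i^{\varphi}(\varphi \Rightarrow (\varphi \vee \psi))
  && \text{MP, 1 and 5, then 3}\\
&7.\ B_i^{\varphi}(\varphi \vee \psi)
  && \text{MP, 6 and 4}
\end{align*}
```

Even this trivial fact takes seven lines, which illustrates why the semantic route via the soundness and completeness theorem below is usually the easier way to establish theoremhood.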

Note that any formulas in L may be substituted for φ, ψ, χ; that i ∈ {1, . . . , n}; and that true stands for any propositional tautology, while false stands for ¬true. This system is close to the system K of epistemic logic, with extra axioms and rules, corresponding roughly to the AGM axioms of belief revision, that describe the behavior of the revised belief operators B_i^φ. Taut, Dist (the distribution axiom), MP (modus ponens), and RE (the rule of epistemization) are familiar from epistemic logic, and need no further comment (but note that Dist and RE apply only to the revised belief operators). Jointly, these correspond to AGM axiom (K*1).3 Triv, the triviality axiom, says that if the information received by an agent is trivial (i.e. if it is a propositional tautology), then she does not revise her beliefs;4 a corresponding condition is implied by the AGM axioms. This also ensures that ordinary belief satisfies the same properties as revised belief. Succ, the analogue of (K*2), is the success axiom, which guarantees that the information received is indeed believed in the revised belief state. IE(a) and IE(b) are the axioms of informational economy, motivated by the criterion of informational economy: our beliefs are not in general gratuitous, and so when we change them in response to new evidence, the change should be no greater than is necessary to incorporate that new evidence. More specifically, IE(a) says that if an agent learns something she already believed, she does not revise her beliefs at all; and IE(b) says that if she learns something consistent with her original beliefs, then her revised beliefs are formed simply by adding the new information to her existing stock of beliefs and closing under modus ponens. IE(b) corresponds directly to (K*7) and (K*8), and, in the presence of Triv, also to (K*3) and (K*4); IE(a) is implied by (K*7) and (K*8) in the presence of (K*5). Finally, LE, the rule of logical equivalence, corresponds to (K*6), and says that logically equivalent formulas should lead to identical belief revisions: it is only the content of the information, and not the way it is expressed, that determines how beliefs are revised.5

3 Under the numbering system of Gärdenfors (1988), reproduced in Appendix A.
4 A referee has pointed out that the Triv axiom could be replaced by the definition B_i(·) ≡ B_i^true(·) (for some instance of true), simplifying the language as well as the axiom system, since now only the revised belief operator would be a primitive of the system. With game-theoretic applications in mind, however, there may be some advantage in retaining a distinction between prior and revised belief operators, to be interpreted as pre-game and in-game beliefs respectively. In games with absent-mindedness, for example, pre-game beliefs and uninformed in-game beliefs may differ: in this case the triviality axiom must be rejected. See Board (2003) for a logical analysis of such games.
5 This rules out what psychologists call framing effects.

2.3. Semantics

The semantics for L is provided by a belief revision structure. This is based on a combination of Grove's (1988) spheres model and the Kripke structure framework. The Kripkean accessibility relations are replaced by plausibility orderings, one for each agent at each world, with the most plausible worlds for a given agent at a particular world taking the role of the accessible worlds for that agent at that world. But a plausibility ordering for an agent tells us not only her current epistemic state; it also encodes information about her belief revision policy. In turn, the structure generates the other agents' beliefs about this belief revision policy, and so on, thus providing truth conditions for each formula of L.

Formally, a belief revision structure M over Φ for n agents is an ordered triple ⟨W, π, ≼⟩, where W is a set of possible worlds; π : W × Φ → {true, false} is an interpretation; and ≼ is a vector of binary relations over W, giving the plausibility ordering of each agent at each world. We use ≼_i^w to denote the plausibility ordering of agent i at world w. Intuitively, x ≼_i^w y means "from the point of view of agent i at world w, world x is at least as plausible as world y."

Belief revision structures are used to give truth conditions to formulas. Formally, truth of formulas is characterized by the ⊨ relation: (M, w) ⊨ φ means that φ is true at world w in structure M. We use [φ]_M to denote the set of worlds in which φ is true (the truth set of φ), i.e. [φ]_M = {w | (M, w) ⊨ φ}. If φ is true at every world of a given structure, we say that φ is valid in M, and write M ⊨ φ. Finally, for a given class of structures C, we say that φ is valid with respect to C, and write C ⊨ φ, if M ⊨ φ for all M ∈ C.

Before giving the formal definition of ⊨, we impose several restrictions on the form of belief revision structures. Define W_i^w = {x | x ≼_i^w y for some y}, the set of worlds which are conceivable to agent i at world w, though not necessarily accessible. Then we assume that:

R1: for all i, w: ≼_i^w is complete and transitive on W_i^w;
R2: for all i, w: ≼_i^w is well-founded.

R1 ensures that each plausibility ordering divides all the worlds into ordered equivalence classes; the inconceivable worlds, i.e. those not in W_i^w, are a class unto themselves and are to be considered least plausible. If ≼_i^w is well-founded (R2), then there are no infinitely descending sequences of the form · · · w_n ≺_i^w w_{n−1} ≺_i^w · · · ≺_i^w w_0 (where x ≺_i^w y if and only if x ≼_i^w y and not y ≼_i^w x). This guarantees that for every set X ⊆ W, if X ∩ W_i^w ≠ ∅, then min_i^w{X ∩ W_i^w} ≠ ∅, where min_i^w is defined in the usual way (i.e. min_i^w(X) = {x ∈ X | for all y ∈ X, x ≼_i^w y}); intuitively, it says that if there are any conceivable worlds in a certain set, then there is a most plausible world in that set.6 Well-foundedness is satisfied automatically in the case where W is finite. We call a belief revision structure ordered if it satisfies R1, and focused if it satisfies R2. Let M denote the class of all belief revision structures that are ordered and focused.

We are now in a position to define ⊨. The definition proceeds by induction on the form of φ:

6 Well-foundedness of ≼_i^w is stronger than the limit assumption proposed by Lewis (1973): if [φ]_M ∩ W_i^w ≠ ∅, then min_i^w{[φ]_M ∩ W_i^w} ≠ ∅. Well-foundedness requires that every set which has a nonempty intersection with W_i^w has a least element, while the limit assumption applies this condition only to sets which represent formulas. We impose the stronger condition in order to preserve the clean cut between extra-linguistic reality (as represented by the frame (W and ≼)) and the semantics (which maps the language into the frame).
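On a finite structure, R1 and R2 are easy to operationalize: a plausibility ordering can be stored as a ranking of worlds (lower rank means more plausible), the conceivable worlds are exactly those that receive a rank, and min_i^w picks out the lowest-ranked worlds in a set. A minimal sketch, with invented world names:

```python
# Plausibility ordering of agent i at world w, stored as ranks (hypothetical
# worlds w1..w4): rank 0 = the most plausible class, higher = less plausible.
# Worlds absent from the dict are inconceivable, i.e. not in W_i^w.
rank = {"w1": 0, "w2": 0, "w3": 1, "w4": 2}

W_i_w = set(rank)   # the conceivable worlds W_i^w

def min_plaus(X):
    """min_i^w(X): the most plausible conceivable worlds in X. Nonempty
    whenever X meets W_i^w, as R2 requires (automatic here: W is finite)."""
    candidates = X & W_i_w
    if not candidates:
        return set()
    best = min(rank[x] for x in candidates)
    return {x for x in candidates if rank[x] == best}

print(min_plaus({"w2", "w3", "w4"}))   # {'w2'}
print(min_plaus({"w5"}))               # set(): no conceivable world in the set
```

The rank encoding builds R1 in by construction: completeness and transitivity on W_i^w follow from comparing integers.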

(M, w) ⊨ φ        iff  π(w)(φ) = true   (for φ ∈ Φ);
(M, w) ⊨ ¬φ       iff  not (M, w) ⊨ φ;
(M, w) ⊨ φ ∧ ψ    iff  (M, w) ⊨ φ and (M, w) ⊨ ψ;
(M, w) ⊨ B_i φ    iff  (M, x) ⊨ φ for all x ∈ min_i^w {W_i^w};
(M, w) ⊨ B_i^φ ψ  iff  (M, x) ⊨ ψ for all x ∈ min_i^w {[φ]_M ∩ W_i^w}.
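Read literally, these clauses give a model-checking procedure for finite structures. The sketch below is our own illustration (the tuple encoding of formulas and all world names are invented); it implements the five clauses over a toy two-world structure and checks an instance of the Succ axiom:

```python
# A small finite belief revision structure. Ranks encode the plausibility
# orderings: plaus[(i, w)][x] = rank of world x for agent i at w
# (0 = most plausible); worlds absent from the dict are inconceivable at w.
W = {"u", "v"}
pi = {"u": {"p": True, "q": False}, "v": {"p": True, "q": True}}
plaus = {(1, "u"): {"u": 1, "v": 0}, (1, "v"): {"u": 1, "v": 0}}

def most_plausible(i, w, X):
    """min_i^w(X): the most plausible conceivable worlds in X."""
    rank = plaus[(i, w)]
    cand = [x for x in X if x in rank]
    if not cand:
        return []
    best = min(rank[x] for x in cand)
    return [x for x in cand if rank[x] == best]

def holds(w, f):
    """(M, w) |= f, for formulas encoded as nested tuples."""
    tag = f[0]
    if tag == "prim":                      # clause 1: primitive formulas
        return pi[w][f[1]]
    if tag == "not":                       # clause 2: negation
        return not holds(w, f[1])
    if tag == "and":                       # clause 3: conjunction
        return holds(w, f[1]) and holds(w, f[2])
    if tag == "bel":                       # clause 4: B_i phi
        _, i, sub = f
        return all(holds(x, sub) for x in most_plausible(i, w, W))
    if tag == "rev":                       # clause 5: B_i^phi psi
        _, i, info, sub = f
        phi_worlds = {x for x in W if holds(x, info)}   # the truth set [phi]_M
        return all(holds(x, sub) for x in most_plausible(i, w, phi_worlds))
    raise ValueError(f"unknown formula tag: {tag}")

# At u, agent 1's most plausible world is v, where q holds:
print(holds("u", ("bel", 1, ("prim", "q"))))            # True
# A Succ instance: on learning not-q, agent 1 believes not-q.
not_q = ("not", ("prim", "q"))
print(holds("u", ("rev", 1, not_q, not_q)))             # True
```

Note how the fifth clause restricts attention to [φ]_M before minimizing: that is exactly how a surprise (here, learning ¬q when the agent believed q) is accommodated.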

The first three rules are straightforward. The fourth rule gives truth conditions for formulas of the form B_i φ in much the same way as the Kripke semantics, with the most plausible worlds playing the role of the accessible worlds: agent i believes that φ if and only if φ is true in all the most plausible worlds. The fifth rule operates similarly: the worlds accessible to the agent when she learns that φ are precisely the most plausible worlds that are consistent with φ; thus agent i believes that ψ on learning that φ if and only if ψ is true in all the most plausible worlds in which φ is true.7 The five rules provide truth conditions for every formula in L.

2.4. Soundness and completeness

Before stating the main theorem of the paper, we need to introduce some more terminology. An axiom system AX is said to be sound for a language L with respect to a class C of structures if every formula in L that is provable in AX is valid with respect to C. The system AX is said to be complete for L with respect to C if every formula in L that is valid with respect to C is provable in AX. We can think of AX as characterizing the class C if it provides a sound and complete axiomatization of that class, i.e. for all φ ∈ L, we have AX ⊢ φ if and only if C ⊨ φ. Soundness and completeness provide a tight connection between the syntactic notion of provability, which is hard to use but easy to understand, and the semantic notion of validity, which is easy to use but hard to understand.

It turns out that a precise connection can be made between the axiom system BRS and belief revision structures. The following theorem tells us that every theorem φ that is provable in BRS is also true in every world of every belief revision structure, and conversely:

Theorem 1. BRS is a sound and complete axiomatization w.r.t. M for formulas in L.

The proof of this and all other theorems is given in Appendix B. To get a more concrete feel for the formalism, the reader may at this point wish to skip forward to Section 5, where there is an example of the logic applied to game theory.

7 The purpose of the well-foundedness condition should now be clear: if it did not hold, B_i^φ ψ could be (vacuously) true even though ψ was not true in any sufficiently plausible φ-world, because there might be no most plausible φ-world. Thus we would clearly have the wrong truth conditions for sentences of this form, and in fact for sentences of the form B_i φ.


2.5. The canonical structure

Before moving on to discuss various extensions of the logic presented in this section, we show how to construct a particularly important belief revision structure M^c, called the canonical structure for BRS. To understand what the canonical structure is, we need some more definitions. For a given axiom system AX, we say that a formula φ is AX-consistent if ¬φ is not provable in AX. A finite set of formulas {φ_1, . . . , φ_k} is AX-consistent exactly if φ_1 ∧ · · · ∧ φ_k is AX-consistent, and an infinite set of formulas is AX-consistent exactly if all its finite subsets are AX-consistent. Finally, a set of formulas S ⊆ L is a maximal AX-consistent set if (a) it is AX-consistent, and (b) for all φ in L but not in S, the set S ∪ {φ} is not AX-consistent.

The canonical structure has a world w_S corresponding to every maximal BRS-consistent set S. This structure is analogous to the universal type space construction of Battigalli and Siniscalchi (1999), who extend the work of Mertens and Zamir (1985) and Brandenburger and Dekel (1993) to the dynamic setting. In both cases every allowable profile of epistemic types is represented: in the canonical structure by sets of formulas describing each agent's beliefs and how these beliefs will be revised; and in the universal type space by an infinite hierarchy of conditional probability systems for each agent. Both approaches rule out certain beliefs: according to the former, an epistemic type is allowable only if the formulas describing it are logically consistent according to the axiom system; in the hierarchical construction of Battigalli and Siniscalchi, the representation of beliefs by conditional probability systems allows only beliefs that satisfy an appropriate set of probability axioms, and additional coherency conditions are imposed on the hierarchies to ensure that the various levels of each hierarchy agree with each other. There are, however, important differences between the two approaches.
While the universal type space of Battigalli and Siniscalchi describes the probabilistic beliefs of each agent, the canonical structure presented here tells us what the agents consider possible. It is clear that probabilistic beliefs cannot be recovered from the canonical structure. Nor can possibility correspondences be recovered from the universal type space, unless possibility is identified with strictly positive probability. In addition, the conditional probability systems used by Battigalli and Siniscalchi specify beliefs conditional on observable events only; in our terminology, this means that information can take the form of propositional formulas only (i.e. primitive formulas and their conjunctions and negations), which describe the physical world. Our language places no restrictions on the kind of information that may be received; in particular, the possibility that one agent may learn another's beliefs is not ruled out. As far as we are aware, the canonical structure presented here is the first example of a belief-complete construction that is general in this sense.

For the construction of M^c, we introduce some new notation: let S/B_i^φ = {ψ | B_i^φ ψ ∈ S}, i.e. S/B_i^φ is the set of formulas believed by i when she learns that φ at world w_S. Let M^c = ⟨W, π, ≼⟩, where

W = {w_S : S is a maximal BRS-consistent set};

π(w_S)(φ) = true if φ ∈ S, and false if φ ∉ S, for φ ∈ Φ;

    wT ≾_i^{wS} wU if there is some φ ∈ T ∩ U such that S/Bi^φ ⊆ T.

Note that for each maximal BRS-consistent set of formulas S we have precisely one world wS. To show that wS really does correspond to S, we must prove the following proposition:

Proposition 1. (M^c, wS) ⊨ φ if and only if φ ∈ S.

Proposition 1 says that S contains exactly those formulas which are true at wS. The proof is given in Appendix B. Another soundness and completeness theorem emerges as a corollary of this proposition.

Corollary 1. BRS is a sound and complete axiomatization w.r.t. M^c for formulas in L.

For (soundness) if φ is provable in BRS, it must be contained in every maximal BRS-consistent set (see proof of Theorem 1), and by Proposition 1, it is therefore valid with respect to M^c. And (completeness) if φ is valid with respect to M^c, Proposition 1 tells us that it must be contained in every maximal BRS-consistent set. It follows that φ is provable in BRS: if not, ¬φ would be BRS-consistent and thus contained in some maximal BRS-consistent set (see again proof of Theorem 1); but {φ, ¬φ} is not BRS-consistent, and so φ and ¬φ cannot be contained in the same maximal BRS-consistent set.

Although it might seem that the soundness part of Corollary 1 follows from Theorem 1, and that the completeness part of Theorem 1 follows from Corollary 1, this is not the case: M^c ∉ M. The reason is that some of the ≾_i^{wS} relations are not well-founded (though they are complete and transitive on Wi^{wS} in each case). Construct a sequence of maximal BRS-consistent sets T1, T2, T3, . . . , T∞ containing the formulas in Table 1, and a maximal BRS-consistent set S such that S/Bi^{¬φ} = T1, S/Bi^{¬Bi φ} = T2, S/Bi^{¬Bi Bi φ} = T3, . . . , S/Bi = T∞. Then it follows from Proposition 1 and the definition of ≾ that · · · wT3 ≺_i^{wS} wT2 ≺_i^{wS} wT1, i.e. we have an infinitely descending sequence and ≾_i^{wS} is not well-founded.
Table 1

    T1              T2              T3              T4              · · ·   T∞
    ¬φ              φ               φ               φ                       φ
    ¬Bi φ           ¬Bi φ           Bi φ            Bi φ                    Bi φ
    ¬Bi Bi φ        ¬Bi Bi φ        ¬Bi Bi φ        Bi Bi φ                 Bi Bi φ
    ¬Bi Bi Bi φ     ¬Bi Bi Bi φ     ¬Bi Bi Bi φ     ¬Bi Bi Bi φ             Bi Bi Bi φ
    ...             ...             ...             ...                     ...

Nonetheless Corollary 1 tells us that the tight connection between valid formulas and formulas that are provable in BRS still holds.

The canonical structure is useful for certain game-theoretic applications. Both forward and backward induction are based on the premise that players try to interpret their opponents' strategy choices as rational whenever possible. But what it is rational for a player to do depends on her beliefs. Using a structure that rules out certain beliefs to analyze a game restricts the set of available explanations for a particular action. So if we are interested in which strategies are compatible with rationality and which are not, we must
work with a structure that includes all possible beliefs. The canonical structure does just this. See Battigalli and Siniscalchi (2002) and Board (2002a) for further elaboration of this point. We finish this section with a brief comment on the impossibility results of Fagin (1994), Heifetz and Samet (1998), Brandenburger and Keisler (1999) and others, which show that if epistemic types are represented by possibility sets (as they are here) rather than probability distributions, then a structure containing all epistemic types cannot exist. These results would seem to contradict our claim that the canonical structure does contain a representation of every epistemic type. Brandenburger (2002) explains how the two can be reconciled: “. . . completeness is impossible if literally all possibility sets are wanted. But if we make topological assumptions that serve to rule out certain kinds of possibility sets, then a (restrictedly) complete structure may exist” (p. 4). Working with a formal language has precisely this effect. It is formulas of this language and not arbitrary sets of worlds that are the content of beliefs (and of information), and in any given model there may be sets of worlds that do not represent any formula of the language. Similarly, in the context of a hierarchical framework, Mariotti et al. (2002) evade the impossibility result by assuming that the underlying space of uncertainty (the set of states of nature) is compact and Hausdorff and the possibility sets are compact. Like Battigalli and Siniscalchi, they consider beliefs conditional on non-epistemic events only.
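The correspondence between worlds and maximal consistent sets that underlies the canonical construction can be illustrated in the simplest possible setting. The sketch below is ours, not part of the paper's formal development: it works in the propositional fragment only, with semantic consistency (satisfiability over truth assignments) standing in for BRS-provability. Each truth assignment determines the set of formulas it makes true, and these sets turn out to be exactly the maximal consistent ones, mirroring Proposition 1's one-world-per-set correspondence.

```python
from itertools import product

ATOMS = ["p", "q"]

def holds(f, v):
    """Evaluate a formula (an atom, ('not', f) or ('and', f, g)) at assignment v."""
    if isinstance(f, str):
        return v[f]
    if f[0] == "not":
        return not holds(f[1], v)
    return holds(f[1], v) and holds(f[2], v)

def formulas(depth):
    """All formulas over ATOMS up to the given connective depth."""
    fs = list(ATOMS)
    for _ in range(depth):
        fs = fs + [("not", f) for f in fs] + [("and", f, g) for f in fs for g in fs]
        fs = list(dict.fromkeys(fs))  # drop duplicates, keep order
    return fs

assignments = [dict(zip(ATOMS, bits)) for bits in product([True, False], repeat=len(ATOMS))]

def consistent(S):
    """Semantic consistency: some assignment satisfies every formula in S."""
    return any(all(holds(f, v) for f in S) for v in assignments)

lang = formulas(2)
for v in assignments:
    S_v = [f for f in lang if holds(f, v)]
    assert consistent(S_v)                    # (a) S_v is consistent
    for f in lang:
        if f not in S_v:                      # (b) S_v is maximal: adding any
            assert not consistent(S_v + [f])  # other formula breaks consistency
print(len(assignments), "maximal consistent sets, one per world")
```

Because each S_v contains, for every atom, either the atom or its negation, only the assignment v itself satisfies S_v; this is the propositional analogue of a world being pinned down by its maximal consistent set.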

3. Extensions

3.1. Introspection

Consider the following additional axioms:

TPI    Bi^φ ψ ⇒ Bi^χ Bi^φ ψ,
TNI    ¬Bi^φ ψ ⇒ Bi^χ ¬Bi^φ ψ.
TPI and TNI are the axioms of total positive introspection and total negative introspection, and state that agents have complete introspective access to their own minds, including not only their current beliefs but also how these beliefs would be or would have been revised. Let BRSI be the axiom system formed by the addition of TPI and TNI to BRS. To illustrate the strength of these axioms, we consider three implications. The first is introspection of current beliefs: Bi^φ ψ ⇒ Bi^φ Bi^φ ψ and ¬Bi^φ ψ ⇒ Bi^φ ¬Bi^φ ψ. The knowledge analogues of these principles emerge as properties of the Aumann structure model discussed in the introduction and widely used in economic theory, but their universal applicability has been questioned by Geanakoplos (1992), among others. Second, TPI and TNI imply that agents have correct beliefs about their future beliefs, whatever information they receive: Bi^φ ψ ⇒ Bi Bi^φ ψ and ¬Bi^φ ψ ⇒ Bi ¬Bi^φ ψ. Finally, it is implied that agents can recall their prior beliefs: Bi ψ ⇒ Bi^φ Bi ψ and ¬Bi ψ ⇒ Bi^φ ¬Bi ψ. This assumption is inappropriate in certain games and decision problems, such as the absent-minded driver


paradox of Piccione and Rubinstein (1997). Bonanno (2003) provides a careful analysis of this and other memory axioms in the context of extensive form games.

Imposing an additional restriction on the form of the ≾_i^w relations provides a semantic characterization of TPI and TNI:

R3    for all i, w, x, y, z: if x ∈ Wi^w, then y ≾_i^x z if and only if y ≾_i^w z.

Intuitively, R3 says that an agent has the same plausibility ordering in every world that is conceivable to her. If a belief revision structure satisfies R3 we call it absolute, and let A be the set of belief revision structures which satisfy R1–R3. Then the following result formalizes the link between TPI and TNI, and absoluteness.

Theorem 2. BRSI is a sound and complete axiomatization w.r.t. A for formulas in L.

3.2. Consistency

The observant reader will have noticed that there is no axiom in BRS corresponding to the AGM consistency axiom (K*5). In the AGM system, this axiom ensures that agents' beliefs are logically consistent whenever possible, i.e. whenever the information learned is logically consistent (if the information is not consistent, then (K*2) forces inconsistent beliefs on the agent). But any attempt to axiomatize this in our logic will lead to circularity, since the notion of logical consistency presupposes a particular axiom system. The AGM system and Friedman and Halpern's (1994) more expressive logic of belief change avoid this problem by working with two languages, one for describing facts about the world which can be learned, and another for talking about beliefs. Their consistency axioms ((K*5) and PS respectively) apply to the second language, and make reference to the logical consistency only of formulas of the first. In addition to the analytical convenience of working with one language rather than two, an advantage of our approach is that no restrictions are imposed on the form that information may take. This issue is discussed in more detail in Section 6.

An independent reason to be suspicious of the AGM consistency axiom is that it affords no purely semantic representation. To guarantee the validity of this axiom, we would need to restrict our attention to belief revision structures which contain enough worlds: for each logically consistent formula, we need at least one world in which that formula is true. Hence in Grove's (1988) semantics for the AGM system, the set of worlds is identified with the set of maximal consistent sets of formulas of the object language, and Friedman and Halpern make the assumption that their structures are saturated, i.e. there is at least one minimal world for every consistent formula of the object language. But logicians tend to think of the frame (in this case the worlds and the plausibility orderings) as a representation of the extra-linguistic reality, which is mapped onto a formal language by an interpretation and semantic rules. A reality that can be described only in syntactic terms seems artificial.8 In the place of (K*5), we consider the weak consistency axiom:

WCon    φ ⇒ ¬Bi^φ false.

WCon says that as long as the information an agent receives is true, her revised beliefs are consistent,9 and is represented by the following assumption, which says that the actual world is always conceivable:

R4    for all i, w: w ∈ Wi^w.

Call a belief revision structure satisfying R4 inclusive. Let I be the class of all inclusive belief revision structures which also satisfy R1 and R2, and let BRSC be the axiom system consisting of BRS and WCon. Then:

Theorem 3. BRSC is a sound and complete axiomatization with respect to I for formulas in the language L.

3.3. A simplification

If we are willing to accept the introspection axioms, TPI and TNI, and the consistency axiom, WCon, discussed above, the belief revision structures which provide the semantics for L can be greatly simplified. Let BRSIC be the resulting axiom system (i.e. BRSIC = BRS + TPI + TNI + WCon). Theorem 4 says that R1–R4 give a semantic characterization of BRSIC:

Theorem 4. BRSIC is a sound and complete axiomatization with respect to A ∩ I for formulas in the language L.

It turns out that if R1–R4 are satisfied, the plausibility orderings of each agent can be replaced by a single binary relation, Pi, defined as follows:

    w Pi x    if and only if    w ≾_i^x x.

The intuition is as follows: recall that R3 says that each agent has the same plausibility ordering at every world conceivable to her, and R4 says that the actual world is always conceivable. If both conditions are satisfied, the plausibility orderings divide the worlds into distinct subsets, which can then be described by a single relation. Although some information is lost by this transformation, since many different families of plausibility orderings for a given agent map onto the same Pi relation, all of the semantically relevant information is preserved.

8 It may, however, be reasonable to impose linguistic constraints on belief revision structures for the sake of particular applications. For example, if we wish to model rational play in a game, we may wish to assume that there is at least one world corresponding to every strategy profile, or even that there is a world for every consistent set of beliefs the players might hold (as in the canonical structure of Section 2.5). These restrictions represent contingent facts about particular situations, not matters of logic alone.
9 For the purposes of modeling rational play in extensive games, replacing the AGM consistency axiom with WCon is without loss of generality: the information structure of extensive form games is such that the information received is always true, and so WCon guarantees that agents maintain consistency of beliefs. For more on this point, see Board (2002b).

Truth conditions can be given in terms of the Pi relations


which for every formula match the standard truth conditions. First observe that the set of conceivable worlds can be defined as follows: Wi^w = {x | x Pi w or w Pi x}. To show that this definition is correct, we must prove

Proposition 2. {x | x Pi w or w Pi x} = {x | x ≾_i^w y for some y}.

Next, we show how the Pi relation can be used to define truth of formulas. The truth conditions for primitive formulas, conjunctions and negations are the same as before; truth of formulas of the forms Bi φ and Bi^φ ψ is defined as follows:

    (M, w) ⊨ Bi φ      iff (M, x) ⊨ φ for all x ∈ min_i(Wi^w),
    (M, w) ⊨ Bi^φ ψ    iff (M, x) ⊨ ψ for all x ∈ min_i([φ]_M ∩ Wi^w),

where min_i(X) = {x ∈ X | for all y ∈ X, x Pi y}. The equivalence of these truth conditions and the standard conditions follows immediately from Proposition 3:

Proposition 3. For all X ⊆ W, min_i(X ∩ Wi^w) = min_i^w(X ∩ Wi^w).
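The simplified, Pi-based truth conditions are easy to make concrete on a finite structure. The sketch below is an illustration of ours, not part of the paper: the three worlds, the Pi relation and the valuation are invented, and Pi is chosen to be a total preorder so that R1–R4 hold. Belief is truth at all Pi-minimal conceivable worlds; revised belief is truth at all Pi-minimal conceivable worlds satisfying the information.

```python
# A toy one-agent belief revision structure; worlds, Pi and the valuation are invented.
W = {"w1", "w2", "w3"}
Pi = {(x, y) for x in W for y in W
      if int(x[1]) <= int(y[1])}               # w1 more plausible than w2, w2 than w3
val = {"w1": {"p"}, "w2": {"p"}, "w3": {"q"}}  # atoms true at each world

def conceivable(w):
    """Wi^w = {x | x Pi w or w Pi x}."""
    return {x for x in W if (x, w) in Pi or (w, x) in Pi}

def min_i(X):
    """min_i(X) = {x in X | x Pi y for all y in X}: the most plausible worlds of X."""
    return {x for x in X if all((x, y) in Pi for y in X)}

def believes(w, atom):
    """(M, w) |= Bi p  iff  p holds at every world in min_i(Wi^w)."""
    return all(atom in val[x] for x in min_i(conceivable(w)))

def believes_after(w, info, atom):
    """(M, w) |= Bi^phi psi  iff  psi holds at every world in min_i([phi] ∩ Wi^w)."""
    return all(atom in val[x]
               for x in min_i({x for x in conceivable(w) if info in val[x]}))

# Initially the agent believes p (w1 is the most plausible world), but on
# learning q the only q-world is w3, so the belief in p is given up:
print(believes("w1", "p"))             # True
print(believes_after("w1", "q", "p"))  # False
print(believes_after("w1", "q", "q"))  # True
```

The example also shows the non-monotonicity that the plausibility ordering buys: learning q overturns the prior belief in p, exactly the behavior the AGM-style revision rule is meant to capture.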

The structures resulting from this simplification bear a very close resemblance to the belief revision models developed by Stalnaker (1996). Our Pi relations work in exactly the same way as the reverse of his Qi relations: if we define w Qi x if and only if x Pi w, then the truth conditions for formulas expressing beliefs and revised beliefs are identical. Furthermore, it can be shown that our Pi relations satisfy the same properties he requires of his Qi relations (i.e. they are reflexive and transitive, and if two worlds are related (in either direction) to a third world, then those two worlds are related (in some direction) to each other). Thus it follows from Theorem 4 that the axiom system BRSIC provides a precise syntactic characterization of Stalnaker's purely semantic logic.

3.4. Common belief

One of the strengths of the logic we are developing here, and of epistemic logic in general, is that it gives us a complete description of agents' beliefs about agents' beliefs. As we discussed in the introduction, this is particularly useful for game theory, since such beliefs are often considered necessary for sophisticated strategic reasoning. In particular, we can give an account of the notion of common belief frequently used by economists. But common belief cannot be expressed in the language L defined above, since infinite conjunctions of formulas in L are not themselves formulas of L. To remedy this problem, we augment the language with the modal operators E ("everyone believes that. . . ") and C ("it is common belief that. . . ").10 Formally, LC is defined by adding the following condition to the definition of L in Section 2.1:

10 Operators representing mutual and common revised beliefs could be added in the same way, with a corresponding set of axioms. But if our interest is in providing epistemic characterization results for solution concepts in game theory, this is probably not necessary: these results typically consider restrictions on the prior

64

O. Board / Games and Economic Behavior 49 (2004) 49–80

(d) if φ ∈ LC, then Eφ ∈ LC and Cφ ∈ LC.

It is straightforward to extend our axiom system to incorporate the E operator:

E    Eφ ⇔ ⋀_{i∈{1,...,n}} Bi φ.

E says simply that everyone believes that φ if and only if every agent believes that φ. Unfortunately, the axiomatic characterization of common belief is trickier. The problem is that although common belief is an infinite concept, our axioms must be finite in length. It turns out that the following axiom-rule pair (familiar from epistemic logic) will serve our purpose:

FP    Cφ ⇒ E(φ ∧ Cφ),
IR    from φ ⇒ E(ψ ∧ φ) infer φ ⇒ Cψ.

FP and IR, which are known as the fixed-point axiom and the induction rule, are harder to interpret. We shall merely remark that jointly they imply that common belief has all the properties of (individual) belief. For example, if Bi satisfies TPI, so too does C. Let BRS^C (respectively, BRSI^C, BRSC^C, and BRSIC^C) denote the axiom system formed by adding E, FP, and IR to BRS (respectively, BRSI, BRSC, and BRSIC). The definition of truth for the augmented language LC is extended exactly as we would expect. Eφ is true just if everyone believes that φ:

    (M, w) ⊨ Eφ    iff (M, w) ⊨ Bi φ for all i ∈ {1, . . . , n};

and Cφ is true if everyone believes that φ, everyone believes that everyone believes that φ, and so on. So, letting E^0 φ be an abbreviation for φ, and E^{k+1} φ be an abbreviation for E E^k φ, we have:

    (M, w) ⊨ Cφ    iff (M, w) ⊨ E^k φ for k = 1, 2, . . . .
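On a finite structure the infinitary definition of common belief can be computed directly, since Cφ is equivalent to φ holding at every world reachable in one or more steps through any agent's possibility sets (the reachability characterization of Fagin et al., 1995). The sketch below is ours; the two-agent structure is invented, and is chosen so that "everyone believes" and "common belief" come apart.

```python
# poss[i][w]: the worlds agent i considers most plausible at w, so an event E is
# believed by i at w just when poss[i][w] ⊆ E. The structure is invented.
WORLDS = {"a", "b", "c"}
poss = {
    1: {"a": {"a"}, "b": {"b"}, "c": {"c"}},
    2: {"a": {"b"}, "b": {"c"}, "c": {"c"}},
}

def everyone(E):
    """[Eφ]: worlds at which every agent believes the event E."""
    return {w for w in WORLDS if all(poss[i][w] <= E for i in poss)}

def reachable(w):
    """Worlds reachable from w in one or more steps via any agent's possibility sets."""
    seen, frontier = set(), set().union(*(poss[i][w] for i in poss))
    while frontier:
        x = frontier.pop()
        if x not in seen:
            seen.add(x)
            frontier |= set().union(*(poss[i][x] for i in poss))
    return seen

def common(E):
    """[Cφ]: w satisfies E^k φ for every k, i.e. every reachable world lies in E."""
    return {w for w in WORLDS if reachable(w) <= E}

E = {"a", "b"}
print(sorted(everyone(E)))  # ['a']  — at a, both agents believe E ...
print(sorted(common(E)))    # []     — ... but E is not common belief anywhere
```

At world a everyone believes E, yet agent 2 considers b possible, from b she considers c possible, and c lies outside E; so some iterate E^k fails and common belief does not obtain.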

The following theorem confirms the equivalence of the syntactic and semantic characterizations of common belief:

Theorem 5.
(a) BRS^C is a sound and complete axiomatization w.r.t. M for formulas in LC.
(b) BRSI^C is a sound and complete axiomatization w.r.t. A for formulas in LC.
(c) BRSC^C is a sound and complete axiomatization w.r.t. I for formulas in LC.
(d) BRSIC^C is a sound and complete axiomatization w.r.t. A ∩ I for formulas in LC.

beliefs of the players (i.e. their beliefs before the game is played), and the way they are disposed to revise these beliefs.


4. Comments

4.1. Knowledge

While our language allows us to make statements about agents' beliefs (and how these beliefs are revised), economists often make assumptions about agents' knowledge. Knowledge could be modeled by adding another set of modal operators to our language: Ki ("i knows that. . . "). As we discussed in the introduction, in the economics literature knowledge is most commonly analyzed using Aumann's (1976) information partition model. The properties of this model are well understood. An Aumann structure can be provided with an interpretation and used to provide truth conditions for a language containing knowledge operators. The properties of the knowledge operators can then be precisely described by a set of axioms which are sound and complete with respect to the class of all (enriched) Aumann structures. In addition to the appropriate analogues of Taut, Dist, MP and RE, this axiom system contains:

T     Ki φ ⇒ φ,
PI    Ki φ ⇒ Ki Ki φ,
NI    ¬Ki φ ⇒ Ki ¬Ki φ.

T, the truth axiom, is uncontroversial: it says simply that what is known must be true; PI (positive introspection) and NI (negative introspection), on the other hand, are even more controversial in the context of knowledge than in the context of belief. They say respectively that an agent knows what she knows and knows what she does not know. The problem is the following: as long as we accept the truth axiom, the concept of knowledge imposes an external condition on the agent's cognitive state. Thus even if the agent has complete introspective access to what she believes and does not believe, the introspection axioms do not carry over to knowledge through logic alone. So it seems that we must reject PI and NI, and take T as a starting point in the analysis of knowledge. But philosophers have long argued that true belief, while necessary, is not a sufficient condition for knowledge. For a true belief to be classified as knowledge, it is required in addition that it be somehow justified in an appropriate manner. Reflecting this requirement, Stalnaker (1996) appeals to an analysis of knowledge called the defeasibility analysis. The idea behind this account is that "if a person has knowledge, then that person's justification must be sufficiently strong that it is not capable of being defeated by evidence that he does not possess" (Pappas and Swain, 1978). Stalnaker uses his (semantic) model of belief revision to formalize this idea, by defining knowledge as follows: an agent knows that φ if and only if φ is true, she believes that φ, and she continues to believe that φ if any true information is received. Truth conditions for formulas of the form Ki φ can be provided as follows:11

    (M, w) ⊨ Ki φ    iff (M, w) ⊨ φ and, for every formula ψ, if (M, w) ⊨ ψ then (M, w) ⊨ Bi^ψ φ.

11 If we assume that R4 holds, this can be replaced by the much simpler (M, w) ⊨ Ki φ iff (M, x) ⊨ φ for all x ≾_i^w w, which is precisely the condition given by Stalnaker.
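The defeasibility condition separates knowledge from mere true belief in a way a small example makes vivid. The sketch below is an invented one-agent illustration of ours, using the simpler condition of footnote 11 (under R4): φ is known at w just when φ holds at every world at least as plausible as w, so no true information can defeat the belief.

```python
# Plausibility given as ranks (lower = more plausible); ranks and valuation invented.
rank = {"w1": 0, "w2": 1, "w3": 2}
val = {"w1": {"p"}, "w2": set(), "w3": {"p"}}

def believes(atom):
    """Prior belief: truth at the most plausible worlds."""
    best = min(rank.values())
    return all(atom in val[w] for w in rank if rank[w] == best)

def knows(w, atom):
    """Footnote 11's condition under R4: Ki φ at w iff φ holds at every world
    at least as plausible as w, so no true information can defeat the belief."""
    return all(atom in val[x] for x in rank if rank[x] <= rank[w])

# At w3, p is true and believed (w1 is most plausible and satisfies p), yet it
# is not known: the information {w2, w3}, true at w3, would overturn the belief,
# since w2 is more plausible than w3 and fails p.
print(believes("p"), "p" in val["w3"])  # True True   (true belief at w3)
print(knows("w3", "p"))                 # False       (defeasible, so not knowledge)
print(knows("w1", "p"))                 # True
```

This is exactly the defeasibility analysis at work: at w3 the agent has a justified true belief in p that evidence she does not possess could defeat.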


But we have been unable to find a finite axiomatization: the direct translation of Stalnaker's definition involves a formula of infinite length. Let ψ1, ψ2, . . . be an enumeration of all the formulas of L. Then the appropriate axiom would be:

Know    Ki φ ⇔ φ ∧ (ψ1 ⇒ Bi^{ψ1} φ) ∧ (ψ2 ⇒ Bi^{ψ2} φ) ∧ · · · .

(Note that we do not need to include the formula Bi φ on the right-hand side, since, in the presence of Taut and Triv, it is implied by ψ ⇒ Bi^ψ φ when true is substituted for ψ.) An alternative approach would be to treat Stalnaker's definition as providing a necessary but not sufficient condition for knowledge:

Know    Ki φ ⇒ φ ∧ (ψ ⇒ Bi^ψ φ).

It is an open question whether the axiom system consisting of BRS and Know is sound and complete given the proposed semantics.

4.2. Iterated belief revision

In an extensive form game, of course, beliefs may need to be revised more than once: new information may be received at each round of the game. But the language L is not rich enough to express iterated belief revisions. Although it would be a simple task to augment the language to allow such iterations, it is not obvious what the axioms governing these iterations should be. Nor is it clear how the semantics should be extended: the rules for revision give us a new set of most plausible worlds after a formula φ is learned (that is, the lowest ranking members of [φ]), but we are not told the relative plausibilities of all the other worlds. So we need some way of preserving the relative plausibility data given by the ≾_i^w relations, while taking into account the new information φ. More precisely, we need to construct a new ordering that represents the revised epistemic state, so that we can re-apply the revision rule as more information is learned. Further discussion of these issues is beyond the scope of this paper. The interested reader is referred to Spohn (1988).

These issues can be avoided if we restrict our attention to extensive form games of perfect recall, which are the focus of much of modern game theory. Such games can be analyzed without loss of generality by considering only single revisions if we are willing to accept a plausible assumption.
In such games, the sequence of information that the players receive as the game progresses and they observe moves made by their opponents is of a very particular kind: each new piece of information logically implies every previous piece, since the set of strategies consistent with any given information set for a player is a subset of the set of strategies consistent with any of its predecessor information sets.12 If ψ logically implies φ, it seems reasonable to assume that learning φ and then ψ will generate the same beliefs as learning ψ outright: in both cases exactly the same information is learned. This assumption is adopted by Board (2002a).

12 Board (2002b) suggests that this information structure is unduly restrictive, even under the assumption of perfect recall, and shows that it rules out certain interesting situations.
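Under the minimal-worlds revision rule, this assumption has a simple mechanical reading, which the invented sketch below illustrates (it is ours, and the identification of "learning φ then ψ" with conditioning on φ ∧ ψ is precisely the assumption just discussed, not a theorem): when the later information ψ entails the earlier information φ, revising once by ψ already accounts for everything φ contributed.

```python
# Worlds ranked by plausibility (lower = more plausible); events are sets of
# worlds. The ranking and events are invented for illustration.
rank = {"a": 0, "b": 1, "c": 2, "d": 3}

def revise(event):
    """Beliefs after learning `event`: its most plausible members."""
    best = min(rank[w] for w in event)
    return {w for w in event if rank[w] == best}

phi = {"b", "c", "d"}   # information at an early information set
psi = {"c", "d"}        # later, stronger information: psi ⊆ phi

assert psi <= phi                        # nestedness, as in games of perfect recall
assert revise(phi & psi) == revise(psi)  # learning phi then psi = learning psi alone
assert revise(psi) <= phi                # and the result automatically respects phi
print(revise(phi), revise(psi))
```

Because ψ ⊆ φ forces φ ∧ ψ to coincide with ψ, the history of earlier revisions can be discarded: a single application of the revision rule to the latest information suffices.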


Fig. 1. Centipede game.

5. An example: the failure of backward induction

In this section we present a simple example to show how our logic can be applied to game theory. Consider the three-round centipede game in Fig. 1. We shall show that common belief in rationality at the beginning of the game does not imply that the backward induction outcome will result.13 Our language consists of only three primitive formulas, l1, l2, and l3, to be interpreted as "player 1 chooses L1," "player 2 chooses L2,"14 and "player 1 chooses L3," respectively. Statements about which strategy profile is chosen and what the outcome is can be built up from these three formulas using conjunctions and negations: for example, l1 ∧ ¬l2 ∧ ¬l3 represents the proposition "player 1 chooses L1T3 and player 2 chooses T2," and ¬l1 represents the proposition "the backward induction outcome is realized." Player i is said to be rational (Rati) if she believes her chosen strategy is certain to yield a higher payoff than any other, at every node at which she is on move.15 Note that we are not introducing any new formulas into the language: whether or not a player is rational is determined completely by her strategy choice and her first-order beliefs, i.e. her beliefs about which strategy profile will be chosen. Thus Rati is simply an abbreviation for a longer formula, as follows:

    Rat1 ≡ (l1 ∧ ¬l3 ∧ B1 l2) ∨ (¬l1 ∧ ¬l3 ∧ B1 ¬l2),
    Rat2 ≡ (l2 ∧ B2^{l1} l3) ∨ (¬l2 ∧ B2^{l1} ¬l3).

These equivalences state that player 1 is rational if and only if she chooses L1T3 and she believes (at node 1) that player 2 will choose L2, or if she chooses T1T3 and she believes that player 2 will choose T2; and player 2 is rational if and only if he chooses L2 and he believes (at node 2) that player 1 will choose L3, or if he chooses T2 and he believes that player 1 will choose T3. Finally, we shall use CBR as shorthand for Rat1 ∧ Rat2 ∧ C(Rat1 ∧ Rat2) ("everyone is rational and there is common belief in rationality").
13 Of course, this is not a new result (see, e.g., Reny, 1993 and Ben-Porath, 1997). The aim here is simply to show the logic at work.
14 Or "player 2 chose/will choose/would have chosen/would choose L2." There is no notion of time in our logic, so formulas should be interpreted as past, present or future, indicative or subjunctive, depending on the viewpoint. The primitive formulas play two roles: they describe the way the game is actually played, and they provide a set of counterfactuals for evaluating the payoffs if the action taken at any node deviates from the specified action.
15 In Board (2002a) we use the more standard notion of expected utility maximization. Adopting the current definition allows us to avoid the use of probabilities while strengthening the result.


A natural belief revision structure, M, for this game consists of one world for each strategy profile, labeled LLL, LLT, . . . , TTT, with π defined in the obvious way (i.e. π(LTT)(l3) = false, for example). Now suppose that LLT is the actual world, and that the plausibility ordering satisfies:

    LLT ≺_1^{LLT} w                     for all w ≠ LLT,
    TLT ≺_2^{LLT} LLL ≺_2^{LLT} w       for all w ≠ TLT, LLL,
    TTT ≺_1^{TLT} w                     for all w ≠ TTT,
    TLT ≺_2^{TLT} LLL ≺_2^{TLT} w       for all w ≠ TLT, LLL,
    TTT ≺_1^{TTT} w                     for all w ≠ TTT,
    TTT ≺_2^{TTT} LTT ≺_2^{TTT} w       for all w ≠ TTT, LTT.

The remainder of the ordering can be specified in any way that satisfies R1–R4. First observe that Rat1 and Rat2 hold at LLT, TLT, and TTT. At TLT, for instance, player 1 considers only world TTT to be possible: she believes that player 2 will choose T2, so T1T3 is indeed the rational choice. Player 2, on the other hand, believes only world TLT to be possible; but at node 2, when he is on move, he believes only world LLL to be possible: here he believes that player 1 will choose L3, and therefore L2 is rational. It follows that CBR holds at LLT.16 But the backward induction outcome is not chosen. Formally, we have shown that (M, LLT) ⊨ ¬(CBR ⇒ ¬l1). We can now appeal to Theorem 5(d) to show that BRSIC^C ⊬ CBR ⇒ ¬l1. We have proved that we cannot prove that common belief in rationality implies backward induction.

The intuition behind our counterexample is as follows: even though player 2 believes at the beginning of the game that player 1 is rational and therefore will choose T3 if node 3 is reached, he no longer believes this at node 2: if player 1 were rational, according to the beliefs player 2 believes she has, she should not have played L1, and node 2 should not have been reached. So he plays L2. Player 1 correctly believes that player 2 will reason this way, so she plays L1 after all, tricking player 2 into thinking that she is irrational. A more detailed account of how our logic of belief revision can be applied to games, along with an analysis of the positive implications of common belief in rationality, can be found in Board (2002a).
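The example can be checked mechanically. The sketch below is ours: it encodes the eight worlds, records the specified parts of the six plausibility orderings as tiers, and completes each ordering by placing all unmentioned worlds in a single bottom tier (one way, among many, of satisfying R1–R4's remaining requirements). It then verifies Rat1 and Rat2 at the three worlds reachable from LLT, which is what CBR at LLT requires.

```python
from itertools import product

# Worlds: one per strategy profile; "LLT" means player 1 chooses L1,
# player 2 chooses L2, and player 1 chooses T3.
WORLDS = ["".join(p) for p in product("LT", repeat=3)]

def l(k, w):
    """lk is true at w iff the kth move of the profile is L."""
    return w[k - 1] == "L"

# Specified parts of the orderings, most plausible tier first; unmentioned
# worlds share a single bottom tier (one admissible completion).
tiers = {
    (1, "LLT"): [["LLT"]],
    (2, "LLT"): [["TLT"], ["LLL"]],
    (1, "TLT"): [["TTT"]],
    (2, "TLT"): [["TLT"], ["LLL"]],
    (1, "TTT"): [["TTT"]],
    (2, "TTT"): [["TTT"], ["LTT"]],
}

def minimal(i, w, event):
    """Most plausible worlds of `event` under player i's ordering at w."""
    for tier in tiers[(i, w)]:
        hit = [x for x in tier if x in event]
        if hit:
            return hit
    return [x for x in event
            if not any(x in tier for tier in tiers[(i, w)])]  # bottom tier

def believes(i, w, prop, info=None):
    """Bi prop (or Bi^info prop when info is given) at world w."""
    event = [x for x in WORLDS if info is None or info(x)]
    return all(prop(x) for x in minimal(i, w, event))

def rat1(w):  # (l1 ∧ ¬l3 ∧ B1 l2) ∨ (¬l1 ∧ ¬l3 ∧ B1 ¬l2)
    return (l(1, w) and not l(3, w) and believes(1, w, lambda x: l(2, x))) or \
           (not l(1, w) and not l(3, w) and believes(1, w, lambda x: not l(2, x)))

def rat2(w):  # (l2 ∧ B2^l1 l3) ∨ (¬l2 ∧ B2^l1 ¬l3)
    on_move = lambda x: l(1, x)
    return (l(2, w) and believes(2, w, lambda x: l(3, x), info=on_move)) or \
           (not l(2, w) and believes(2, w, lambda x: not l(3, x), info=on_move))

# Rationality holds at all three worlds reachable from LLT (cf. footnote 16),
# so CBR holds at LLT; yet l1 is true there, so backward induction fails.
for w in ["LLT", "TLT", "TTT"]:
    assert rat1(w) and rat2(w)
assert l(1, "LLT")
print("CBR holds at LLT, but player 1 plays L1")
```

Note how player 2's two-tier orderings do the work: at LLT his prior belief is TLT, but conditioning on l1 (node 2 reached) leaves LLL as the most plausible world, which is why he believes l3 and rationally plays L2.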

6. Related literature

Much of the related literature has been discussed in the main text of this paper. Here we provide a summary and mention some important omissions.

16 This can be shown quickly using the notion of reachability (see Fagin et al., 1995, p. 23): only worlds LLT, TLT and TTT are reachable from world LLT.


The semantics of our logic bear a close resemblance to those of conditional logic (Lewis, 1973; Burgess, 1981). The belief revision structures considered above are essentially a multi-agent version of Burgess' models. R1 corresponds to his transitivity and connectivity requirements. The counterpart of R2 is Lewis' limit assumption (L), and R3 and R4 correspond to local absoluteness (A-) and total reflexivity (T), respectively. But their models are used to provide truth conditions for conditional formulas, while here we are interested in belief revision; the axioms of conditional logic are very different from the axioms of belief revision. Stalnaker (1996) also uses semantic structures very similar to those employed here; the results of Section 3.3 show that his models are essentially belief revision structures which satisfy R1–R4. But he provides no formal language and no syntax. Thus the results of this paper are complementary to his: our axiom system BRSIC can be thought of as characterizing his models. Friedman and Halpern's (1994) logic of belief change does provide a syntax as well as a semantics for characterizing the belief revision process, with soundness and completeness theorems linking the two. The key difference between their work and our own is that they use two distinct formal languages, one for describing facts about the physical world which can be learned, and another for talking about beliefs. Thus in their system agents cannot learn about each other's beliefs. Friedman and Halpern suggest that this restriction is necessary to avoid a triviality result established by Gärdenfors (1986), but the results of this paper show that this is not the case. Triviality can be avoided as long as the Ramsey test (Bi^φ Bi^ψ χ ⇔ Bi^{φ∧ψ} χ) is not adopted as an axiom.

It could be argued, however, that there is no advantage to this extra flexibility, since epistemic events are not observable and therefore cannot play the role of inputs in the belief revision process. Clearly this argument does not apply to beliefs about oneself, which can be acquired by reflection over the course of time. But whether or not we can have direct perceptual access to others' mental states is a philosophically controversial issue. McDowell (1982), for example, claims that we can. This is not a form of mind reading. Our perceptions of each other's minds can be epistemically direct and yet causally mediated, as indeed are our perceptions of physical objects. If we cannot learn directly about each other's beliefs, as Levi (1988) seems to argue, then we must explain where our beliefs about each other's mental states come from. Many of these beliefs are acquired through conversation. If the learning procedure is not direct, then presumably Alice acquires the belief that Bob believes that φ not by updating on "Bob believes that φ" but by updating on "Bob said 'φ'" and combining it with the prior belief "Bob said 'φ' ⇒ Bob believes that φ." But certainly we are not usually conscious of making such inferences; and if the way we learn about the beliefs of others when we converse with them seems non-inferential, then unless there is some positive reason for doubting the veracity of this seeming, it is rational to accept this as the real situation. Even if we reject the view that others' mental states are directly observable, however, we could accept that we sometimes behave as if they are, and treat what people say as a direct expression of their beliefs. Indeed, Lewis (1975) takes such behavior to be part of the definition of language use. The cheap talk literature (e.g.
Crawford and Sobel (1982)) attempts to explain language use as an equilibrium outcome, but elsewhere it has proved fruitful to adopt an assumption of sincerity (e.g. Geanakoplos and Polemarchakis (1982)). Less abstract applications where sincerity may be a reasonable assumption include

70

O. Board / Games and Economic Behavior 49 (2004) 49–80

consultancy advice, where agents pay to receive specialist opinions, and expert testimony in the courtroom, which yields information about the expert’s beliefs, and not hard facts about the physical world. Working with a language that is rich enough to describe a wide range of revised beliefs may be useful for another, theoretical, reason. The results of Battigalli (1996) and Kohlberg and Reny (1997) show that the consistency requirement of sequential equilibrium can be characterized by independence conditions imposed on the conditional beliefs of the players of a game. These results require that the conditional probability measures be defined for every nonempty set of strategy profiles; yet many of these subsets do not correspond to events that could be observed on any play path of the game. Allowing “virtual learning” is essential for their approach. It seems probable that extending the class of virtually learnable events to include epistemic events would make no difference in this case. But in psychological games (see Geanakoplos et al., 1989; Dufwenberg, 2002) the payoffs themselves depend on players’ beliefs. In this setting, it is quite plausible that interesting restrictions on the way players revise their beliefs in response to all types of information would generate restrictions on actual beliefs conditional on observable events that would not obtain if one considered only nonepistemic information. Clausing (2001) formulates a logic of belief revision which is general in the sense we have been discussing: no restrictions are imposed on what can be learned.17 In addition, Clausing’s language allows him to express conditional statements of the form “if φ were true, then it would be the case that ψ.” He states (but does not prove) soundness and completeness theorems for a weak and a strong version of his logic. 
There are close parallels between that work and the current paper; in particular, Clausing's axioms of belief revision (B1–B7) and the belief revision component of the axiom system BRS developed here are both based on the AGM axioms. Where the logics differ most is in the semantics. In place of the plausibility orderings employed here, Clausing uses state selection functions to provide truth conditions for formulas involving belief revision operators.18 For each world w and every formula φ, an agent's state selection function picks out a set of worlds f_i(w, φ) ⊆ W. f_i(w, φ) is interpreted as the set of worlds the agent considers possible at world w if she learns that φ, and plays precisely the same role as the set of most plausible worlds at which φ is true in our semantic framework. But plausibility orderings, unlike state selection functions, make no reference to the language. This allows us to establish results about games by purely mathematical reasoning, which can then be translated (by soundness and completeness) into theorems of logic, as in the example of Section 5.

Alternative models of interactive belief revision have been developed by Battigalli and Siniscalchi (1999) and Brandenburger and Keisler (2002), who show how the hierarchical approach of Mertens and Zamir (1985) can be extended to the dynamic setting. The differences between these models and the current work were discussed in the introduction and in Section 2.5.

17 We are grateful to an anonymous referee for pointing out this reference.
18 The results of Halpern (1998) suggest that it may be possible to replace the selection functions with plausibility orderings.
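The relationship between the two semantic devices is mechanical: a plausibility ordering induces a selection function by picking out the most plausible conceivable worlds satisfying what is learned. The sketch below illustrates this; it is not from the paper, and the world names, ranks, and truth sets are invented toy data.

```python
# Sketch (invented toy data, not from the paper): a plausibility
# ordering induces a Clausing-style state selection function
# f_i(w, phi) = the most plausible phi-worlds conceivable at w.

def make_selection_function(conceivable, rank, truth):
    """conceivable: w -> set of worlds (W_i^w);
    rank: w -> (world -> int), lower = more plausible at w;
    truth: formula name -> set of worlds where it holds ([phi]_M)."""
    def f(w, phi):
        candidates = truth[phi] & conceivable[w]
        if not candidates:
            return set()
        best = min(rank[w][x] for x in candidates)
        return {x for x in candidates if rank[w][x] == best}
    return f

worlds = {"w1", "w2", "w3"}
conceivable = {w: set(worlds) for w in worlds}
rank = {w: {"w1": 0, "w2": 1, "w3": 2} for w in worlds}
truth = {"p": {"w2", "w3"}, "q": {"w1", "w2"}}

f = make_selection_function(conceivable, rank, truth)
assert f("w1", "p") == {"w2"}   # most plausible p-world
assert f("w1", "q") == {"w1"}
```

Note that the derived f depends on a formula only through its extension truth[phi], which reflects the point made above: the ordering itself makes no reference to the language, so two formulas true at the same worlds are automatically treated alike.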

Before concluding this section we should mention the work of Feinberg (2002). Feinberg also develops a logic for reasoning about dynamic games, and uses it to establish various game-theoretic results. The language that Feinberg works with is richer than that considered here in two respects: it can express probabilistic beliefs, and it can express statements about utilities (though completeness of the logic is proved only for formulas that do not contain these operators). On the other hand, there is no belief revision component to the logic. Rather, each player is represented by a different agent at each node at which she is on move, and any links between the beliefs of these agents are expressed in a meta-language. Although the logic is dynamic in the sense that it describes players' beliefs throughout the game, it does not attempt to model the development of these beliefs as new information is learned.

7. Conclusion

The aim of this paper has been to develop a dynamic model of interactive reasoning which combines analytical simplicity with clarity of interpretation. Belief revision structures are similar to the models used very successfully by Stalnaker to analyze rational play in extensive form games (Stalnaker, 1996), and to shed light on the forward and backward induction procedures (Stalnaker, 1998). These structures provide truth conditions for a formal language. Soundness and completeness theorems establish tight connections between the formulas that are true in various classes of belief revision structure and those that are provable in certain axiom systems, thereby giving us a precise understanding of what the structures mean.

Acknowledgments

I thank Paolo Battigalli, Timothy Williamson, an associate editor, and two anonymous referees, whose detailed comments led to substantial improvements in the content and exposition of this paper. Helpful comments from Michael Bacharach, John Foster, Meg Gleason, Graham Hubbs, Mamoru Kaneko, Bob Stalnaker and Johan van Benthem are also gratefully acknowledged.

Appendix A. The AGM axioms for belief revision

This appendix gives a short description of the AGM belief revision theory. A more complete account is provided by Gärdenfors (1988). The AGM theory (so-called after Alchourrón et al., 1985) provides a set of axioms which, they argue, any reasonable belief revision system should satisfy. The axioms do not determine a unique belief revision function. An agent's epistemic state at any one point in time is represented by a set of formulas of propositional calculus, the agent's belief set K. This is interpreted as the set of all formulas the agent believes. Belief statements are thus expressed in a meta-language ("inclusion in K"). It is assumed that belief sets are closed under logical consequence. The AGM axioms impose restrictions on the form of K*_φ, the agent's revised belief set after information φ is learned. Although we normally assume that belief sets are logically consistent, it is convenient to define the absurd belief set K_false, in which everything is believed (i.e. K_false contains every formula of propositional calculus). Finally, K+_φ is used to denote the expansion of K by φ, formed by adding φ to K and closing under logical consequence. We can now state the axioms:

(K*1) K*_φ is a belief set;
(K*2) φ ∈ K*_φ;
(K*3) K*_φ ⊆ K+_φ;
(K*4) if ¬φ ∉ K, then K+_φ ⊆ K*_φ;
(K*5) K*_φ = K_false if and only if φ is logically inconsistent;
(K*6) if φ ⇔ ψ, then K*_φ = K*_ψ;
(K*7) K*_{φ∧ψ} ⊆ (K*_φ)+_ψ;
(K*8) if ¬ψ ∉ K*_φ, then (K*_φ)+_ψ ⊆ K*_{φ∧ψ}.
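On finite models, a plausibility ordering of the kind used in the body of the paper generates a revision operation of exactly this type: revising by φ retains the most plausible φ-worlds. The sketch below is not from the paper; the toy worlds, atoms ("rain", "wind"), and ranks are invented, but the success postulate (K*2) and the absurd belief set of (K*5) can be checked directly on it.

```python
# Sketch (invented toy model, not from the paper): Grove-style revision
# on a finite set of worlds.  A belief set is represented by the set of
# worlds compatible with it; revision by phi keeps the most plausible
# phi-worlds.

worlds = [
    {"rain": True,  "wind": True},
    {"rain": True,  "wind": False},
    {"rain": False, "wind": True},
    {"rain": False, "wind": False},
]
rank = {0: 2, 1: 2, 2: 1, 3: 0}   # lower = more plausible

def revise(phi):
    """K*_phi as a set of world indices: the most plausible phi-worlds."""
    candidates = [i for i, w in enumerate(worlds) if phi(w)]
    if not candidates:                 # phi holds at no world:
        return []                      # the absurd belief set K_false (K*5)
    best = min(rank[i] for i in candidates)
    return [i for i in candidates if rank[i] == best]

def believes(state, psi):
    """psi belongs to the belief set iff psi holds at every retained world."""
    return all(psi(worlds[i]) for i in state)

rain = lambda w: w["rain"]
assert believes(revise(rain), rain)    # K*2: success
assert revise(lambda w: True) == [3]   # prior beliefs: no rain, no wind
assert revise(rain) == [0, 1]          # revision abandons that view
```

In this world-based representation the remaining postulates become set-theoretic facts about minima of the ranking, which is the intuition behind the correspondence between AGM revision and plausibility orderings exploited in the text.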

Appendix B. Proofs

Proof of Theorem 1. Soundness. The proof of soundness is straightforward, and proceeds by induction on the length of a proof of φ. Every element of a proof is either an axiom or follows from previous elements by the application of a rule, so we must show that every axiom is valid with respect to M and that each rule is truth preserving. We consider the cases of Triv, IE(a) and LE, and leave the rest as an exercise.

Triv: we must show that M ⊨ B_i φ ⇔ B_i^true φ. Propositional reasoning and the definition of ⊨ imply that (M, w) ⊨ true, for all M and w. Thus [true]_M = W, and W_i^w = [true]_M ∩ W_i^w. From the definition of ⊨ again, it follows that (M, w) ⊨ B_i φ iff (M, w) ⊨ B_i^true φ.

IE(a): we must show that M ⊨ B_i^φ ψ ⇒ (B_i^{φ∧ψ} χ ⇔ B_i^φ χ). Suppose (M, w) ⊨ B_i^φ ψ; then (M, x) ⊨ ψ for all x ∈ min_i^w{[φ]_M ∩ W_i^w}, and min_i^w{[φ]_M ∩ W_i^w} = min_i^w{[φ]_M ∩ [ψ]_M ∩ W_i^w}. From the definition of ⊨, we have [φ]_M ∩ [ψ]_M = [φ∧ψ]_M; therefore min_i^w{[φ]_M ∩ [ψ]_M ∩ W_i^w} = min_i^w{[φ∧ψ]_M ∩ W_i^w}. It follows immediately that (M, w) ⊨ B_i^{φ∧ψ} χ iff (M, w) ⊨ B_i^φ χ, as required.

LE: we must show that if M ⊨ φ ⇔ ψ, then M ⊨ B_i^φ χ ⇔ B_i^ψ χ. Suppose that M ⊨ φ ⇔ ψ. Then by the definition of ⊨, [φ]_M = [ψ]_M, and so min_i^w{[φ]_M ∩ W_i^w} = min_i^w{[ψ]_M ∩ W_i^w}. It follows immediately that (M, w) ⊨ B_i^φ χ iff (M, w) ⊨ B_i^ψ χ, as required.

Completeness. We start with some definitions. For a given axiom system AX, we say that a formula φ is AX-consistent if ¬φ is not provable in AX. A finite set of formulas {φ_1, ..., φ_k} is AX-consistent exactly if φ_1 ∧ ··· ∧ φ_k is AX-consistent, and an infinite set of formulas is AX-consistent exactly if all its finite subsets are AX-consistent. Finally, given

two sets of formulas S, T with S ⊆ T ⊆ L, we say that S is a maximal AX-consistent subset of T if (a) it is AX-consistent, and (b) for all φ in T but not in S, the set S ∪ {φ} is not AX-consistent.

Now, to prove completeness, we must show that every formula in L that is valid with respect to M is provable in BRS. It is sufficient to prove that

(∗) Every BRS-consistent formula in L is satisfiable with respect to M.

For assume that we can prove (∗), and that φ is a valid formula in L. If φ is not provable in BRS, then neither is ¬¬φ, so, by definition, ¬φ is BRS-consistent. It follows from (∗) that ¬φ is satisfiable with respect to M, contradicting the validity of φ with respect to M.

Before proceeding, we need another round of definitions. Let Sub(φ) be the set of all subformulas of φ; formally, ψ ∈ Sub(φ) if either (a) ψ = φ, or (b) φ is of the form ¬φ', φ' ∧ φ'', B_i φ', or B_i^{φ'} φ'', and ψ ∈ Sub(φ') or ψ ∈ Sub(φ''). Let Sub+(φ) consist of all the formulas in Sub(φ) and their negations and conjunctions, i.e. Sub+(φ) is the smallest set such that (a) if ψ ∈ Sub(φ) then ψ ∈ Sub+(φ); and (b) if ψ, χ ∈ Sub+(φ), then ¬ψ, ψ ∧ χ ∈ Sub+(φ). Let Sub++(φ) consist of all formulas of Sub+(φ) together with all formulas of the form B_i ψ and B_i^χ ψ, where ψ, χ ∈ Sub+(φ); and let Sub++_neg(φ) consist of all the formulas in Sub++(φ) and their negations. Finally, let Con(φ) be the set of maximal BRS-consistent subsets of Sub++_neg(φ).19 It is easy to show that every BRS-consistent subset of Sub++_neg(φ) can be extended to an element of Con(φ) by addition of formulas; and if S is a member of Con(φ), it must satisfy the following properties:

• for every ψ ∈ Sub++(φ), exactly one of ψ and ¬ψ is in S;
• if ψ ∧ χ ∈ S then ψ ∈ S and χ ∈ S;
• if ψ ∨ χ ∈ S then ψ ∈ S or χ ∈ S;
• if ψ ∈ S and ψ ⇒ χ ∈ S, then χ ∈ S;
• if ψ ⇔ χ then ψ ∈ S if and only if χ ∈ S;
• if ψ ∈ Sub++_neg(φ) and BRS ⊢ ψ, then ψ ∈ S.
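The closure Sub(φ) just defined is entirely mechanical to compute. As a sketch (using an invented tuple encoding of formulas, not the paper's syntax, with ('B', i, χ, ψ) standing for B_i^χ ψ and χ = None for unconditional belief):

```python
# Sketch (invented tuple encoding): computing Sub(phi) recursively.

def sub(phi):
    """Sub(phi): phi together with the subformulas of its parts."""
    out = {phi}
    tag = phi[0]
    if tag == "not":                       # ('not', psi)
        out |= sub(phi[1])
    elif tag == "and":                     # ('and', psi, chi)
        out |= sub(phi[1]) | sub(phi[2])
    elif tag == "B":                       # ('B', i, chi, psi)
        if phi[2] is not None:
            out |= sub(phi[2])
        out |= sub(phi[3])
    return out                             # atoms like ('p',) are their own Sub

phi = ("and", ("p",), ("B", 1, ("p",), ("not", ("q",))))
assert len(sub(phi)) == 5
assert ("q",) in sub(phi)
```

Sub+(φ) and Sub++(φ) are infinite as literal sets of formulas, which is why the finiteness argument at the end of the proof counts logically distinct formulas rather than formulas themselves.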

To prove (∗), we construct a special structure M_φ ∈ M for each φ. M_φ has a world w_S corresponding to every S ∈ Con(φ); we show that for all ψ ∈ Sub(φ), we have

(∗∗) (M_φ, w_S) ⊨ ψ if and only if ψ ∈ S,

i.e. a formula in Sub(φ) is true at world w_S exactly if it is one of the formulas in S. This is sufficient to prove (∗), since if φ is BRS-consistent, it is contained in some set S ∈ Con(φ); it then follows from (∗∗) that (M_φ, w_S) ⊨ φ, and so φ is satisfiable with respect to M as required.
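The truth clause for B_i^χ ζ that drives (∗∗), namely that ζ holds at all most plausible conceivable χ-worlds, is easy to prototype. The sketch below uses an invented encoding and toy structure (one agent); it also illustrates why the axioms Triv and Succ are valid in belief revision structures.

```python
# Sketch (invented encoding and toy structure, one agent): the truth
# clause for conditional belief, and a check of Triv and Succ.

W = ["a", "b", "c"]
conceivable = {w: set(W) for w in W}              # W_i^w
rank = {w: {"a": 0, "b": 0, "c": 1} for w in W}   # lower = more plausible

def evaluate(w, f, val):
    """(M, w) |= f, with f a nested tuple and val: atom -> set of worlds."""
    tag = f[0]
    if tag == "atom":
        return w in val[f[1]]
    if tag == "not":
        return not evaluate(w, f[1], val)
    if tag == "and":
        return evaluate(w, f[1], val) and evaluate(w, f[2], val)
    if tag == "cb":  # ('cb', chi, zeta): conditional belief B^chi zeta
        chi, zeta = f[1], f[2]
        cand = {x for x in conceivable[w] if evaluate(x, chi, val)}
        if not cand:
            return True     # vacuous belief when no chi-world is conceivable
        best = min(rank[w][x] for x in cand)
        return all(evaluate(x, zeta, val) for x in cand if rank[w][x] == best)
    raise ValueError(tag)

val = {"p": {"a", "b"}, "true": set(W)}
top = ("atom", "true")
p = ("atom", "p")
# Triv: conditioning on a tautology selects the globally most plausible
# conceivable worlds, i.e. exactly unconditional belief
assert all(evaluate(w, ("cb", top, p), val) for w in W)
# Succ: B^chi chi holds at every world
assert all(evaluate(w, ("cb", p, p), val) for w in W)
```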

19 See, e.g., Fagin et al. (1995, pp. 52, 53).

For the construction of M_φ we introduce some new notation: let S/B_i^φ = {ψ | B_i^φ ψ ∈ S}, i.e. S/B_i^φ is the set of formulas believed by i when she learns that φ. We now define M_φ. Let M_φ = ⟨W, π, ≤⟩, where

W = {w_S | S ∈ Con(φ)};
π(w_S)(ψ) = true if ψ ∈ S, and false if ψ ∉ S, for all ψ ∈ Φ;
w_T ≤_i^{w_S} w_U if there is some ψ ∈ Sub+(φ) ∩ T ∩ U such that S/B_i^ψ ⊆ T.

We prove (∗∗) by induction on the structure of formulas: supposing that it holds for all subformulas of ψ ∈ Sub(φ), we show it holds for ψ. The cases where ψ is a primitive formula, a conjunction or a negation are straightforward. Suppose ψ is of the form B_i^χ ζ. We prove the "if" direction first, and assume that ψ ∈ S. This implies that ζ ∈ S/B_i^χ. Consider the set min_i^{w_S}{[χ]_{M_φ} ∩ W_i^{w_S}}. If this set is empty, then it follows immediately from the definition of ⊨ that (M_φ, w_S) ⊨ B_i^χ ζ.

Suppose then that there is some w_T ∈ min_i^{w_S}{[χ]_{M_φ} ∩ W_i^{w_S}}, i.e. w_T ≤_i^{w_S} w_U for all w_U ∈ [χ]_{M_φ} ∩ W_i^{w_S}. Then there is some ξ ∈ Sub+(φ) ∩ T such that S/B_i^ξ ⊆ T. We must show that ζ ∈ T. Since S/B_i^ξ ⊆ T, S/B_i^ξ must be a BRS-consistent set. It follows that S/B_i^χ is a BRS-consistent set too. Suppose not: then there is some finite set of formulas F = {φ_1, φ_2, ..., φ_k} ⊆ S/B_i^χ such that BRS ⊢ ¬(φ_1 ∧ φ_2 ∧ ··· ∧ φ_k). Letting η denote (φ_1 ∧ φ_2 ∧ ··· ∧ φ_k), we have:

1. BRS ⊢ ¬η [assumption]
2. BRS ⊢ η ⇒ ξ [1, Taut, MP]
3. BRS ⊢ B_i^χ η ⇒ B_i^χ ξ [2, RE, Dist, Taut, MP]
4. BRS ⊢ B_i^χ ξ ⇒ (B_i^{χ∧ξ} η ⇔ B_i^χ η) [IE(a)]
5. BRS ⊢ B_i^χ η ⇒ B_i^{χ∧ξ} η [3, 4, Taut, MP]
6. BRS ⊢ ¬B_i^ξ ¬χ ⇒ (B_i^{ξ∧χ} η ⇔ B_i^ξ (χ ⇒ η)) [IE(b)]
7. BRS ⊢ ¬B_i^ξ ¬χ ⇒ (B_i^{χ∧ξ} η ⇔ B_i^ξ (χ ⇒ η)) [6, LE, Taut, MP]
8. BRS ⊢ (χ ⇒ η) ⇒ ¬χ [1, Taut, MP]
9. BRS ⊢ B_i^ξ (χ ⇒ η) ⇒ B_i^ξ ¬χ [8, RE, Dist, Taut, MP]
10. BRS ⊢ (B_i^χ η ∧ ¬B_i^ξ ¬χ) ⇒ B_i^ξ η [5, 7, 9, Taut, MP]
11. BRS ⊢ (B_i^χ φ_1 ∧ ··· ∧ B_i^χ φ_k ∧ ¬B_i^ξ ¬χ) ⇒ (B_i^ξ φ_1 ∧ ··· ∧ B_i^ξ φ_k) [10, Dist, Taut, MP]

By the hypothesis of induction, we know that χ ∈ T, since w_T ∈ [χ]_{M_φ}. It follows that ¬B_i^ξ ¬χ ∈ S, since B_i^ξ ¬χ ∈ Sub++(φ). Since B_i^χ φ_1, ..., B_i^χ φ_k ∈ S, we have ¬B_i^ξ φ_1, ..., ¬B_i^ξ φ_k ∉ S, or else S would be inconsistent according to line 11 above. Since B_i^ξ φ_1, ..., B_i^ξ φ_k ∈ Sub++(φ), it follows that B_i^ξ φ_1, ..., B_i^ξ φ_k ∈ S. Thus F ⊆ S/B_i^ξ, i.e. S/B_i^ξ is not a BRS-consistent set, contradicting our original assumption.

So S/B_i^χ is a BRS-consistent set, and it therefore has a maximal BRS-consistent extension, U. And since B_i^χ χ ∈ S (an instance of Succ), we have χ ∈ (S/B_i^χ) ⊆ U.

Thus w_U ≤_i^{w_S} w_T, by construction; and since w_T ∈ min_i^{w_S}{[χ]_{M_φ} ∩ W_i^{w_S}}, w_T ≤_i^{w_S} w_U. So there is some ρ ∈ Sub+(φ) ∩ T ∩ U such that S/B_i^ρ ⊆ T.

We started off by assuming that B_i^χ ζ ∈ S; we also know that ¬B_i^χ ¬ρ ∈ S, since ρ ∈ U and S/B_i^χ ⊆ U; and that ¬B_i^ρ ¬χ ∈ S, since χ ∈ T and S/B_i^ρ ⊆ T. Furthermore,

12. BRS ⊢ ¬B_i^χ ¬ρ ⇒ (B_i^{χ∧ρ} ζ ⇔ (B_i^χ ζ ∨ B_i^χ (ρ ⇒ ζ))) [IE(b)]
13. BRS ⊢ (¬B_i^χ ¬ρ ∧ B_i^χ ζ) ⇒ B_i^{χ∧ρ} ζ [12, Taut, MP]
14. BRS ⊢ (¬B_i^χ ¬ρ ∧ B_i^χ ζ) ⇒ B_i^{ρ∧χ} ζ [13, LE, Taut, MP]
15. BRS ⊢ ¬B_i^ρ ¬χ ⇒ (B_i^{ρ∧χ} ζ ⇔ (B_i^ρ ζ ∨ B_i^ρ (χ ⇒ ζ))) [IE(b)]
16. BRS ⊢ (¬B_i^χ ¬ρ ∧ B_i^χ ζ ∧ ¬B_i^ρ ¬χ) ⇒ (B_i^ρ ζ ∨ B_i^ρ (χ ⇒ ζ)) [14, 15, Taut, MP]

Since χ, ζ, ρ ∈ Sub+(φ), B_i^ρ ζ and B_i^ρ (χ ⇒ ζ) are both in Sub++(φ). So either B_i^ρ ζ ∈ S or B_i^ρ (χ ⇒ ζ) ∈ S: if ¬B_i^ρ ζ ∈ S and ¬B_i^ρ (χ ⇒ ζ) ∈ S, S would be inconsistent according to line 16 above. But S/B_i^ρ ⊆ T, so either ζ ∈ T, or χ ⇒ ζ ∈ T, in which case ζ ∈ T again because χ ∈ T.

Thus for every w_T ∈ min_i^{w_S}{[χ]_{M_φ} ∩ W_i^{w_S}}, ζ ∈ T, and so (M_φ, w_T) ⊨ ζ by the hypothesis of induction. It follows from the definition of ⊨ that (M_φ, w_S) ⊨ B_i^χ ζ.

For the "only if" direction, assume (M_φ, w_S) ⊨ B_i^χ ζ. We claim that the set (S/B_i^χ) ∪ {¬ζ} is not BRS-consistent. If it were, it would have a maximal BRS-consistent extension T. Now, B_i^χ χ ∈ S (an instance of Succ), and so χ ∈ (S/B_i^χ) ⊆ T. Thus by construction, w_T ≤_i^{w_S} w_U for all U such that χ ∈ U, and (given the hypothesis of induction), w_T ∈ min_i^{w_S}{[χ]_{M_φ} ∩ W_i^{w_S}}. The hypothesis of induction also tells us that (M_φ, w_T) ⊨ ¬ζ, since ¬ζ ∈ T, and it follows immediately from the definition of ⊨ that (M_φ, w_S) ⊨ ¬B_i^χ ζ, contradicting our original assumption. So (S/B_i^χ) ∪ {¬ζ} is not BRS-consistent, and it follows from Taut, Dist, MP and RE that B_i^χ ζ ∈ S, as desired (see Fagin et al., 1995, p. 55).

So we have proved the inductive step for the case where ψ is of the form B_i^χ ζ. The case where ψ is of the form B_i χ follows quickly. Pick any instance of true ∈ Sub+(φ). Since BRS ⊢ B_i χ ⇔ B_i^true χ (an instance of Triv), B_i χ ∈ S if and only if B_i^true χ ∈ S. Furthermore, since true is a propositional tautology, it follows from the definition of ⊨ that (M_φ, w_T) ⊨ true for all w_T, i.e. [true] = W. Again from the definition of ⊨, we have (M_φ, w_T) ⊨ B_i χ if and only if (M_φ, w_T) ⊨ B_i^true χ.

We have shown that (∗∗) holds for all formulas ψ ∈ Sub(φ). To complete the proof of completeness, we need to show that M_φ ∈ M, i.e. that M_φ really is a belief revision structure. It is clear that W is a well-defined set of possible worlds, and π is an interpretation.
We need to show that for all S, ≤_i^{w_S} is complete and transitive on W_i^{w_S}, and is well-founded.

For completeness, assume that w_T, w_U ∈ W_i^{w_S}. We need to show that either w_T ≤_i^{w_S} w_U or w_U ≤_i^{w_S} w_T. From the definitions of W_i^{w_S} and ≤_i^{w_S}, there is some ψ ∈ Sub+(φ) ∩ T such that S/B_i^ψ ⊆ T, and some χ ∈ Sub+(φ) ∩ U such that S/B_i^χ ⊆ U. Since S is a maximal BRS-consistent set, either (a) B_i^{ψ∨χ} ¬ψ ∈ S or (b) ¬B_i^{ψ∨χ} ¬ψ ∈ S. We consider each case in turn. First (a) B_i^{ψ∨χ} ¬ψ ∈ S. We know that B_i^{ψ∨χ} (ψ ∨ χ) ∈ S (an instance of Succ) and

17. BRS ⊢ (B_i^{ψ∨χ} ¬ψ ∧ B_i^{ψ∨χ} (ψ ∨ χ)) ⇒ B_i^{ψ∨χ} χ [Dist]
18. BRS ⊢ B_i^{ψ∨χ} χ ⇒ (B_i^{(ψ∨χ)∧χ} ζ ⇔ B_i^{ψ∨χ} ζ) [IE(a)]
19. BRS ⊢ (ψ ∨ χ) ∧ χ ⇔ χ [Taut]
20. BRS ⊢ B_i^{(ψ∨χ)∧χ} ζ ⇔ B_i^χ ζ [19, LE]

Line 17 implies that B_i^{ψ∨χ} χ ∈ S. So it follows from line 18 that B_i^{(ψ∨χ)∧χ} ζ ∈ S if and only if B_i^{ψ∨χ} ζ ∈ S, i.e. S/B_i^{ψ∨χ} = S/B_i^{(ψ∨χ)∧χ}. Furthermore, line 20 implies that S/B_i^{(ψ∨χ)∧χ} = S/B_i^χ. So S/B_i^{ψ∨χ} = S/B_i^χ ⊆ U. But ψ ∨ χ ∈ T ∩ U, so w_U ≤_i^{w_S} w_T.

Next (b) ¬B_i^{ψ∨χ} ¬ψ ∈ S. Since

21. BRS ⊢ ¬B_i^{ψ∨χ} ¬ψ ⇒ (B_i^{(ψ∨χ)∧ψ} ζ ⇔ B_i^{ψ∨χ} (χ ⇒ ζ)) [IE(b)]
22. BRS ⊢ B_i^{ψ∨χ} ζ ⇒ B_i^{ψ∨χ} (χ ⇒ ζ) [RE, Dist, Taut, MP]

if B_i^{ψ∨χ} ζ ∈ S then B_i^{ψ∨χ} (χ ⇒ ζ) ∈ S (from line 22) and B_i^{(ψ∨χ)∧ψ} ζ ∈ S (from line 21), so S/B_i^{ψ∨χ} ⊆ S/B_i^{(ψ∨χ)∧ψ} = S/B_i^ψ ⊆ T. But ψ ∨ χ ∈ T ∩ U, so w_T ≤_i^{w_S} w_U.

To show transitivity, suppose that for some w_T, w_U, w_V ∈ W_i^{w_S}, w_T ≤_i^{w_S} w_U and w_U ≤_i^{w_S} w_V. Then there is some ψ ∈ T ∩ U such that S/B_i^ψ ⊆ T, and there is some χ ∈ U ∩ V such that S/B_i^χ ⊆ U. Since S is a maximal BRS-consistent set, either (a) B_i^{ψ∨χ} ¬ψ ∈ S or (b) ¬B_i^{ψ∨χ} ¬ψ ∈ S. We consider each case in turn.

(a) B_i^{ψ∨χ} ¬ψ ∈ S. We have just shown in part (a) above that this implies S/B_i^{ψ∨χ} = S/B_i^χ ⊆ U. But ψ ∈ U, contradicting the assumption that B_i^{ψ∨χ} ¬ψ ∈ S.

(b) ¬B_i^{ψ∨χ} ¬ψ ∈ S. We have just shown in part (b) above that this implies S/B_i^{ψ∨χ} ⊆ S/B_i^ψ ⊆ T. But ψ ∨ χ ∈ T ∩ U ∩ V, so w_T ≤_i^{w_S} w_V as required. Note that nothing in this proof makes use of the fact that w_V ∈ W_i^{w_S}: it follows that ≤_i^{w_S} is in fact transitive on the entire domain W.

Well-foundedness of ≤_i^{w_S} follows immediately from the finiteness of W. To show that W is finite, we must show that Con(φ) is finite, since |W| = |Con(φ)|. It is clear that there is a finite number of maximal BRS-consistent subsets of Sub(φ), since Sub(φ) is itself a finite set. And each maximal BRS-consistent subset of Sub(φ) has a unique extension to a maximal BRS-consistent subset of Sub+(φ): suppose S is a maximal BRS-consistent subset of Sub(φ), and S+ ⊇ S is a maximal BRS-consistent subset of Sub+(φ); then ψ ∈ S+ only if either (a) ψ ∈ S, or (b) ψ is of the form ¬χ and χ ∉ S+, or (c) ψ is of the form χ ∧ ζ and χ, ζ ∈ S+. And if there is no maximal BRS-consistent subset S of Sub(φ) such that S ⊆ S+, then S+ cannot be a maximal BRS-consistent subset of Sub+(φ). Thus there is a one-to-one correspondence between the maximal BRS-consistent subsets of Sub(φ) and the maximal BRS-consistent subsets of Sub+(φ).
Propositional reasoning tells us that there are only finitely many logically distinct formulas in Sub+(φ), corresponding to the combinations of truth-value assignments to the formulas in Sub(φ). And if ψ and χ are logically equivalent (i.e. BRS ⊢ ψ ⇔ χ), and S++_neg is a maximal BRS-consistent subset of Sub++_neg(φ), then B_i ψ ∈ S++_neg if and only if B_i χ ∈ S++_neg (Triv, RE and Dist); B_i^ζ ψ ∈ S++_neg if and only if B_i^ζ χ ∈ S++_neg (RE and Dist); and B_i^ψ ζ ∈ S++_neg if and only if B_i^χ ζ ∈ S++_neg. So each maximal BRS-consistent subset of Sub+(φ) has a finite number of extensions to maximal BRS-consistent subsets of Sub++_neg(φ). Thus Con(φ) is a finite set, as required. □

Proof of Proposition 1. The proof of Proposition 1 follows the same steps as the proof of (∗∗) in Theorem 1, and is not repeated here. □

Proof of Theorem 2. Soundness. We must check that TPI and TNI are valid with respect to A, i.e. A ⊨ B_i^φ ψ ⇒ B_i^χ B_i^φ ψ and A ⊨ ¬B_i^φ ψ ⇒ B_i^χ ¬B_i^φ ψ.

(TPI) Suppose that M ∈ A, and (M, w) ⊨ B_i^φ ψ. Then for every x ∈ min_i^w{[φ]_M ∩ W_i^w} we have (M, x) ⊨ ψ. Now for every y ∈ min_i^w{[χ]_M ∩ W_i^w}, z ≤_i^y u if and only if z ≤_i^w u, so min_i^w{[φ]_M ∩ W_i^w} = min_i^y{[φ]_M ∩ W_i^y}. It follows that (M, y) ⊨ B_i^φ ψ, and therefore (M, w) ⊨ B_i^χ B_i^φ ψ.

(TNI) Suppose that M ∈ A, and (M, w) ⊨ ¬B_i^φ ψ. Then there is some x ∈ min_i^w{[φ]_M ∩ W_i^w} such that (M, x) ⊭ ψ. Now for every y ∈ min_i^w{[χ]_M ∩ W_i^w}, z ≤_i^y u if and only if z ≤_i^w u, so min_i^w{[φ]_M ∩ W_i^w} = min_i^y{[φ]_M ∩ W_i^y}. It follows that (M, y) ⊨ ¬B_i^φ ψ, and therefore (M, w) ⊨ B_i^χ ¬B_i^φ ψ.

Completeness. To prove completeness, we must show that every BRSI-consistent formula in L is satisfiable with respect to A. We proceed in the same way as in the proof of Theorem 1, and construct a structure M_φ ∈ A for every formula φ ∈ L. The construction of M_φ is exactly the same as before, except that the set Sub++(φ) is enlarged: Sub++(φ) is now the smallest set of formulas such that (a) if ψ, χ ∈ Sub+(φ) then ψ, B_i ψ, B_i^χ ψ ∈ Sub++(φ); and (b) if ξ ∈ Sub+(φ) and B_i^χ ψ ∈ Sub++(φ), then B_i^ξ B_i^χ ψ ∈ Sub++(φ) and B_i^ξ ¬B_i^χ ψ ∈ Sub++(φ). The proof that (M_φ, w_S) ⊨ ψ if and only if ψ ∈ S is unaffected, as is the proof that ≤_i^{w_S} is complete and transitive on W_i^{w_S} for all S. Finiteness of W still holds as well, since B_i^ζ B_i^ξ ··· B_i^χ ψ ∈ S if and only if B_i^ξ ··· B_i^χ ψ ∈ S, given TPI and TNI. It remains to show that M_φ is absolute.

Suppose that w_T ∈ W_i^{w_S}.
Then there is some ψ ∈ Sub+(φ) ∩ T such that S/B_i^ψ ⊆ T. We must show that w_U ≤_i^{w_S} w_V if and only if w_U ≤_i^{w_T} w_V. If w_U ≤_i^{w_S} w_V, then there is some χ ∈ Sub+(φ) ∩ U ∩ V such that S/B_i^χ ⊆ U. Suppose that B_i^χ ζ ∉ S. Then ¬B_i^χ ζ ∈ S, and since BRSI ⊢ ¬B_i^χ ζ ⇒ B_i^ψ ¬B_i^χ ζ (an instance of TNI), we also have B_i^ψ ¬B_i^χ ζ ∈ S. But S/B_i^ψ ⊆ T, so ¬B_i^χ ζ ∈ T, and B_i^χ ζ ∉ T. Thus T/B_i^χ ⊆ S/B_i^χ ⊆ U, and w_U ≤_i^{w_T} w_V, as required. If w_U ≤_i^{w_T} w_V, then there is some χ ∈ Sub+(φ) ∩ U ∩ V such that T/B_i^χ ⊆ U. Suppose that B_i^χ ζ ∈ S. Since BRSI ⊢ B_i^χ ζ ⇒ B_i^ψ B_i^χ ζ (an instance of TPI), we also have B_i^ψ B_i^χ ζ ∈ S. But S/B_i^ψ ⊆ T, so B_i^χ ζ ∈ T. Thus S/B_i^χ ⊆ T/B_i^χ ⊆ U, and w_U ≤_i^{w_S} w_V, completing the proof. □

Proof of Theorem 3. Soundness. We must check that WCon is valid with respect to I, i.e. I ⊨ φ ⇒ ¬B_i^φ false. Suppose that M ∈ I, and (M, w) ⊨ φ, i.e. w ∈ [φ]_M. By the inclusion assumption, w ∈ W_i^w, so [φ]_M ∩ W_i^w ≠ ∅, and by well-foundedness of ≤_i^w, min_i^w{[φ]_M ∩ W_i^w} ≠ ∅. So there is some world x ∈ min_i^w{[φ]_M ∩ W_i^w}. Since it is not the case that (M, x) ⊨ false, it is not the case that (M, w) ⊨ B_i^φ false, and it follows from the definition of ⊨ that (M, w) ⊨ ¬B_i^φ false.

Completeness. The proof of completeness is the same as for Theorem 1, except that it must also be shown that M_φ ∈ I, i.e. that every plausibility ordering ≤_i^{w_S} in M_φ satisfies the inclusion assumption. Let φ_1, ..., φ_n be an enumeration of the formulas in Sub(φ), and for some S ∈ Con(φ) let φ'_k = φ_k if φ_k ∈ S, and φ'_k = ¬φ_k if φ_k ∉ S. Propositional reasoning implies that φ'_1 ∧ ··· ∧ φ'_n ∈ S, and since BRSC ⊢ (φ'_1 ∧ ··· ∧ φ'_n) ⇒ ¬B_i^{φ'_1∧···∧φ'_n} false (an instance of WCon), we have that ¬B_i^{φ'_1∧···∧φ'_n} false ∈ S for some instance of false in Sub+(φ). It follows that S/B_i^{φ'_1∧···∧φ'_n} is a BRSC-consistent set. Furthermore, since BRSC ⊢ B_i^{φ'_1∧···∧φ'_n} (φ'_1 ∧ ··· ∧ φ'_n) (an instance of Succ), φ'_1 ∧ ··· ∧ φ'_n ∈ S/B_i^{φ'_1∧···∧φ'_n}. Now suppose that B_i^{φ'_1∧···∧φ'_n} ψ ∈ S and ψ ∈ Sub+(φ). Then ψ ∈ S/B_i^{φ'_1∧···∧φ'_n}. Since Sub+(φ) consists only of formulas in Sub(φ) and their conjunctions and negations, it follows from propositional reasoning that either BRSC ⊢ (φ'_1 ∧ ··· ∧ φ'_n) ⇒ ψ or BRSC ⊢ (φ'_1 ∧ ··· ∧ φ'_n) ⇒ ¬ψ. But S/B_i^{φ'_1∧···∧φ'_n} is a BRSC-consistent set, so we must have BRSC ⊢ (φ'_1 ∧ ··· ∧ φ'_n) ⇒ ψ, and therefore ψ ∈ S.

We have shown that S/B_i^{φ'_1∧···∧φ'_n} ⊆ S. The definition of ≤_i^{w_S} implies that w_S ≤_i^{w_S} w_S, and so w_S ∈ W_i^{w_S} as required. □

Proof of Theorem 4. Soundness follows immediately from Theorem 2 and Theorem 3. To prove completeness, we follow the construction of M_φ described in the proof of Theorem 2, and the same steps imply that M_φ ∈ A. The completeness part of the proof of Theorem 3 can then be followed to show that M_φ ∈ I. □

Proof of Proposition 2. First suppose that x ∈ {x | x P_i w or w P_i x}. Then either (a) x P_i w, in which case it follows immediately from the definition of P_i that x ≤_i^w w, as required; or (b) w P_i x; from the definition of P_i we have w ≤_i^x x; x ≤_i^x y for some y from R4, and so x ≤_i^w y from R3, as required.

Now suppose that x ∈ {x | x ≤_i^w y for some y}. We know from R4 that w ≤_i^w z for some z. It follows from R1 that either x ≤_i^w w, in which case x P_i w by definition; or w ≤_i^w x, in which case R3 gives us w ≤_i^x x and so w P_i x. In both cases x ∈ {x | x P_i w or w P_i x} as required. □

Proof of Proposition 3. First suppose that x ∈ min_{P_i}(X ∩ W_i^w). Then x P_i y for all y ∈ X ∩ W_i^w, and from the definition of P_i, x ≤_i^y y for all y ∈ X ∩ W_i^w. It follows from R3 that x ≤_i^w y for all y ∈ X ∩ W_i^w, and so x ∈ min_i^w(X ∩ W_i^w).

Now suppose that x ∈ min_i^w(X ∩ W_i^w). Then x ≤_i^w y for all y ∈ X ∩ W_i^w, and from R3 we have x ≤_i^y y for all y ∈ X ∩ W_i^w. The definition of P_i gives us x P_i y for all y ∈ X ∩ W_i^w, and so x ∈ min_{P_i}(X ∩ W_i^w). □

Proof of Theorem 5. The proof of Theorem 5 follows the same steps as the proof of Theorem 3.3.1 in Fagin et al. (1995), and is omitted. □


References

Alchourrón, C.E., Gärdenfors, P., Makinson, D., 1985. On the logic of theory change: partial meet functions for contraction and revision. J. Symbolic Logic 50, 510–530.
Aumann, R.J., 1976. Agreeing to disagree. Ann. Statist. 4, 1236–1239.
Aumann, R.J., 1999. Interactive epistemology I: knowledge. Int. J. Game Theory 28, 263–300.
Aumann, R.J., Brandenburger, A., 1995. Epistemic conditions for Nash equilibrium. Econometrica 63, 1161–1180.
Battigalli, P., 1996. Strategic independence and perfect Bayesian equilibria. J. Econ. Theory 70, 201–234.
Battigalli, P., Bonanno, G., 1999. Recent results on belief, knowledge and the epistemic foundations of game theory. Res. Econ. 53, 149–225.
Battigalli, P., Siniscalchi, M., 1999. Hierarchies of conditional beliefs and interactive epistemology in dynamic games. J. Econ. Theory 88, 188–230.
Battigalli, P., Siniscalchi, M., 2002. Strong belief and forward induction reasoning. J. Econ. Theory 106, 356–391.
Ben-Porath, E., 1997. Rationality, Nash equilibrium and backwards induction in perfect-information games. Rev. Econ. Stud. 64, 23–46.
Board, O.J., 2002a. Algorithmic characterization of rationalizability in extensive form games. Working paper. Department of Economics, Oxford.
Board, O.J., 2002b. The deception of the Greeks: generalizing the information structure of extensive form games. Greek Econ. Rev. 22, 1–16.
Board, O.J., 2003. The not-so-absent-minded driver. Res. Econ. 57, 189–200.
Bonanno, G., 2003. Memory and perfect recall in extensive games. Games Econ. Behav. In press.
Brandenburger, A., 2002. On the existence of a 'complete' possibility structure. Working paper. Harvard Business School.
Brandenburger, A., Dekel, E., 1993. Hierarchies of beliefs and common knowledge. J. Econ. Theory 59, 189–198.
Brandenburger, A., Keisler, H.J., 1999. An impossibility theorem on beliefs in games. Working paper. Harvard Business School.
Brandenburger, A., Keisler, H.J., 2002. Epistemic conditions for iterated admissibility. Working paper. Harvard Business School.
Burgess, J.P., 1981. Quick completeness proofs for some logics of conditionals. Notre Dame J. Formal Logic 22, 76–84.
Cho, I.K., Kreps, D.M., 1987. Signalling games and stable equilibria. Quart. J. Econ. 102, 179–221.
Clausing, T., 2001. Belief revision in games of perfect information. Working paper. Univ. of Magdeburg.
Crawford, V., Sobel, J., 1982. Strategic information transmission. Econometrica 50, 1431–1451.
Dekel, E., Gul, F., 1997. Rationality and knowledge in game theory. In: Kreps, D.M., Wallis, K.W. (Eds.), Advances in Economics and Econometrics: Theory and Applications: Seventh World Congress, Vol. I. Cambridge Univ. Press, Cambridge, UK, pp. 87–172.
Dufwenberg, M., 2002. Marital investments, time consistency and emotions. J. Econ. Behav. Organ. 48, 57–69.
Fagin, R., 1994. A quantitative analysis of modal logic. J. Symbolic Logic 59, 209–252.
Fagin, R., Halpern, J.Y., Moses, Y., Vardi, M.Y., 1995. Reasoning About Knowledge. MIT Press, Cambridge, MA.
Fagin, R., Geanakoplos, J., Halpern, J.Y., Vardi, M.Y., 1999. The hierarchical approach to modeling knowledge and common knowledge. Int. J. Game Theory 28, 331–365.
Feinberg, Y., 2001. Subjective formulation and analysis of games and solutions—Part 2. Working paper. Stanford University.
Feinberg, Y., 2002. Subjective reasoning in dynamic games. Working paper. Stanford University.
Friedman, N., Halpern, J.Y., 1994. Conditional logics of belief change. In: Proceedings of the Twelfth National Conference on Artificial Intelligence. AAAI Press, Menlo Park, CA, pp. 915–921.
Gärdenfors, P., 1986. Belief revisions and the Ramsey test for conditionals. Philos. Rev. XCV, 81–93.
Gärdenfors, P., 1988. Knowledge in Flux. MIT Press, Cambridge, MA.
Geanakoplos, J.D., Pearce, D., Stacchetti, E., 1989. Psychological games and sequential rationality. Games Econ. Behav. 1, 60–79.
Geanakoplos, J.D., 1992. Common knowledge. J. Econ. Perspect. 6, 53–82.
Geanakoplos, J.D., Polemarchakis, H.M., 1982. We can't disagree forever. J. Econ. Theory 28, 192–200.


Grove, A., 1988. Two modellings for theory change. J. Philos. Logic 17, 157–170.
Halpern, J.Y., 1998. Set-theoretic completeness for epistemic and conditional logic. Ann. Math. Artificial Intelligence 26, 1–27.
Harsanyi, J., 1968. Games with incomplete information played by 'Bayesian' players, parts I–III. Manage. Sci. 14, 159–182, 320–334, 486–502.
Heifetz, A., Samet, D., 1998. Knowledge spaces with arbitrarily high rank. Games Econ. Behav. 22, 260–273.
Hintikka, J., 1962. Knowledge and Belief. Cornell Univ. Press, Ithaca, NY.
Kohlberg, E., Reny, P.J., 1997. Independence on relative probability spaces and consistent assessments in game trees. J. Econ. Theory 75, 280–313.
Kripke, S., 1963. A semantical analysis of modal logic I: normal modal propositional calculi. Z. Math. Logik Grund. Math. 9, 67–96.
Levi, I., 1988. Iterations of conditionals and the Ramsey test. Synthese 76, 49–81.
Lewis, D., 1973. Counterfactuals. Basil Blackwell, Oxford.
Lewis, D., 1975. Languages and language. In: Gunderson, K. (Ed.), Minnesota Studies in the Philosophy of Science, Vol. VII. Univ. of Minnesota Press, pp. 3–35.
Mariotti, T., Meier, M., Piccione, M., 2002. Hierarchies of beliefs for compact possibility models. Working paper. London School of Economics.
McDowell, J., 1982. Criteria, defeasibility, and knowledge. Proc. Br. Acad. 68, 455–479.
Mertens, J.-F., Zamir, S., 1985. Formulation of Bayesian analysis for games of incomplete information. Int. J. Game Theory 14, 1–29.
Morris, S., Shin, H.S., 2003. Global games: theory and applications. In: Dewatripont, M., Hansen, L.P., Turnovsky, S.J. (Eds.), Advances in Economics and Econometrics: Theory and Applications: Eighth World Congress, Vol. 1. Cambridge Univ. Press, Cambridge, UK, pp. 56–114.
Pappas, G., Swain, M. (Eds.), 1978. Essays on Knowledge and Justification. Cornell Univ. Press, Ithaca, NY.
Piccione, M., Rubinstein, A., 1997. On the interpretation of decision problems with imperfect recall. Games Econ. Behav. 20, 3–24.
Reny, P.J., 1993. Common belief and the theory of games with perfect information. J. Econ. Theory 59, 257–274.
Samet, D., 1990. Ignoring ignorance and agreeing to disagree. J. Econ. Theory 52, 190–207.
Spohn, W., 1988. Ordinal conditional functions: a dynamic theory of epistemic states. In: Harper, W.L., Skyrms, B. (Eds.), Causation in Decision, Belief Change, and Statistics, Vol. II. Kluwer, Dordrecht, pp. 105–134.
Stalnaker, R., 1996. Knowledge, belief and counterfactual reasoning in games. Econ. Philos. 12, 133–163.
Stalnaker, R., 1998. Belief revision in games: forward and backward induction. Math. Soc. Sci. 36, 31–56.