Chapter 13 (pp 213-227) in P. Bourgine & J-P. Nadal eds, Cognitive Economics: An Interdisciplinary Approach. Springer Verlag, 2003.

CONDITIONAL STATEMENTS AND DIRECTIVES

David Makinson

OUTLINE

In this brief overview, we describe some of the main kinds of conditional assertion to be found in common language and scientific discourse, and how logicians have sought to model them. In the first part, we recall the pure and simple notion of the truth-functional (alias material) conditional proposition, and then how it can occur in combination with other elements, creating more complex kinds of conditional assertion. We discuss the subtle distinction between conditional probability and the probability of a conditional. Finally, we explain the notion of a counterfactual conditional and sketch its links with operations of updating and revising our beliefs. In the second part, we draw attention to the difference between a conditional proposition and a conditional directive. The logical analysis of directives has lagged behind that of propositions, and we outline the recent concept of an input/output operation, designed for the task.

1 CONDITIONAL PROPOSITIONS

1.1 What is a Conditional Proposition?

Suppose that in your house the telephone and the internet are both accessed by the same line, but the telephone is in the living room on the ground floor, and the computer is upstairs. You explain to a guest: If the telephone is in use then internet is inaccessible. This is an example of a conditional proposition. Verbally, it is of the form if…then…. This is perhaps the most common and basic form for conditionals in English – although, as we will see, there are many others. Such conditional statements are familiar in daily life, as well as in mathematics and the sciences.

1.2 What is a Truth-functional Conditional?

The simplest of all models for conditionals is the truth-functional one, also often known as material implication. Its very simplicity, so much less subtle than ordinary language, hid it from view for a long time. The idea of a truth-table dates from the beginning of the twentieth century, more than two thousand years after Aristotle first began codifying the discipline called logic. After the event, it was realized that in antiquity certain Stoic philosopher/logicians had grasped the concept of material implication, although, as far as anybody knows, they did not use the graphic device of a table. But their ideas never became part of mainstream thinking in logic, which until the late nineteenth century was dominated by the Aristotelian tradition.

To make a truth-table, whether for conditionals or other particles such as and, or, and not, we need an idealizing assumption. We need to assume that the assertions that we are dealing with are always either true or false, but never both; as it is usually put, that they are two-valued. It is also assumed that if a proposition appears more than once in the context under study, its truth-value is the same in each instance. The truth-values are written as T, F or, more commonly, 1, 0.

Certain ways of forming complex propositions out of simpler ones are called truth-functional, in the sense that the truth-value of the complex item is a function (in the usual mathematical sense) of the truth-values of its components. In the case of the material conditional, this may be expressed by the following table, where the right-hand column is understood as listing the values of a function of two variables, whose values are listed in the left columns.

a   x   a→x
1   1   1
1   0   0
0   1   1
0   0   1

In other words: when a is true and x is false, the material conditional a→x is false, but in all the other three cases a→x is true. As simple as that.
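Since the conditional is truth-functional, the table can be reproduced mechanically. The sketch below (the function name is ours, chosen for illustration) prints the four rows:

```python
# Truth table for the material conditional a -> x,
# defined as "false only when a is true and x is false".
def material(a: bool, x: bool) -> bool:
    return (not a) or x

rows = [(a, x, material(a, x)) for a in (True, False) for x in (True, False)]
for a, x, v in rows:
    print(int(a), int(x), int(v))
```

Running it prints exactly the four rows of the table above, in the same order.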

1.3 Some Odd Properties of the Material Conditional

Most of the properties of the truth-functional conditional are very natural. For example, it is reflexive (the conditional proposition a→a is always true, for any proposition a) and also transitive (a→y is true whenever a→x and x→y are). But there are others, reflecting the simple definition, that are less natural. Among them are the following, often known as the paradoxes of material implication:

• Any conditional a→x with a false antecedent a is true, no matter what the consequent x is, and no matter whether there is any kind of link between the two. For example, the proposition 'If Sydney is the capital of Australia then Shakespeare wrote Hamlet' is true, simply because of the falsehood of its antecedent (the capital is in fact Canberra). We could replace the consequent by any other proposition, even its own negation, and the material conditional would remain true.

• Any conditional a→x with a true consequent x is true, no matter what the antecedent a is, whether or not there is any kind of link between the two. For example, the proposition 'If the average temperature of the earth's atmosphere is rising then wombats are marsupials' is true, simply because of the truth of its consequent. We could replace the antecedent by any other proposition, even its own negation, without affecting the truth of the entire material conditional.

• Given any two propositions a and x whatsoever, either the conditional a→x or its converse x→a is true. For example, either it is true that if my car is in the garage then your computer is turned on, or conversely it is true that if your computer is turned on then my car is in the garage – when these propositions are understood truth-functionally.
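All three paradoxes are finitary properties of the truth table, so they can be confirmed by brute enumeration of truth-values; the following sketch does exactly that:

```python
# Exhaustively verify the three "paradoxes of material implication"
# over all truth-value assignments.
def imp(a, x):  # the material conditional
    return (not a) or x

vals = (True, False)
# 1. A false antecedent makes the conditional true, whatever the consequent.
assert all(imp(False, x) for x in vals)
# 2. A true consequent makes the conditional true, whatever the antecedent.
assert all(imp(a, True) for a in vals)
# 3. For any a, x: either a->x or its converse x->a holds.
assert all(imp(a, x) or imp(x, a) for a in vals for x in vals)
print("all three properties confirmed")
```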


1.4 Implicit Generalization

You tell a student: 'if a relation is acyclic then it is irreflexive'. What kind of conditional is this? In effect, you are implicitly making a universal generalization. You are saying that: for every relation r, if r is acyclic, then it is irreflexive. There is an implicit claim of generality. For those who have already seen the notation of quantifiers in logic, the statement says that ∀r(A(r)→I(r)), where → is material implication, ∀ is the universal quantifier, and the letters A, I stand for the corresponding predicates: A(r) for 'r is acyclic' and I(r) for 'r is irreflexive'. The truth-functional connective is present, but it is not working alone.

To show that this proposition is false, you would have to find at least one relation that is acyclic but not irreflexive, i.e. that satisfies the antecedent A(r) but falsifies the consequent I(r) – briefly, that gives the combination (1,0) for antecedent and consequent. In fact, in this example, the combinations (1,1), (0,1), (0,0) may all be found for suitable relations r, but that does not detract from the truth of the conditional. There is no relation r for which the combination (1,0) exists, and that is enough to count the conditional as true.

We can generalize on the example. In pure mathematics, the if…then… construction is typically used as a universally quantified material conditional, with the universal quantification often left implicit. The same happens in daily language. In the example from the electronics shop, the conditional in the window is implicitly generalizing over all customers and sales. In the telephone/internet example, we are quantifying over times or occasions, saying something like 'whenever the telephone is in use, the computer cannot access internet', i.e. 'at any time t, if the house telephone is in use at time t then internet is inaccessible at t'. Again in the notation of logic, this may be written as ∀t(T(t)→¬A(t)), where the letters T, A serve as predicates, i.e. T(t) means 'the telephone is in use at time t', and A(t) means 'internet is accessible at time t'.

But for ordinary discourse, even this level of simplicity is exceptional. Suppose you are working on your computer without virus protection. Your friend advises: 'if you open an attachment with a virus, it will damage your computer'. In the spirit of the previous example, we can see this as a universal quantification over times or occasions with an embedded material implication. Thus as a first approximation we have: for every time t (within a certain range left implicit), if you open an attachment with a virus at time t, then your computer will be damaged, i.e. ∀t(A(t)→D(t)). But this representation sins by omission, for it leaves unmentioned two aspects of the advice:

• Futurity. Your hard disk will not necessarily be damaged immediately, even if it is immediately infected. There may be a time lapse.

• Causality. The introduction of the virus is causally responsible for the damage.

The representation also sins by commission, for it says more than we probably mean. It says always, when we may mean something a bit less. Ordinary language conditionals often have the property of:

• Defeasibility. Your friend may not wish to say that such an ill-advised action will always lead to damage, but that it will do so usually, probably, under natural assumptions, or barring exceptional circumstances.

All three dimensions are pervasive in the conditionals of everyday life. Logicians have attempted to tackle all of them. We say a few words on each.
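The claim ∀r(A(r)→I(r)) of section 1.4 can also be checked mechanically on a finite domain. The sketch below (the three-element domain is an arbitrary choice, for illustration only) enumerates all binary relations on a small set and confirms that the falsifying combination (1,0) – acyclic but not irreflexive – never occurs, while the other three combinations all do:

```python
from itertools import product

DOM = range(3)
PAIRS = [(i, j) for i in DOM for j in DOM]

def has_cycle(rel):
    # DFS-based cycle detection on the directed graph given by rel;
    # a self-loop counts as a cycle of length one.
    succ = {i: [j for (a, j) in rel if a == i] for i in DOM}
    colour = {i: 0 for i in DOM}  # 0 = unvisited, 1 = in progress, 2 = done
    def visit(u):
        colour[u] = 1
        for v in succ[u]:
            if colour[v] == 1 or (colour[v] == 0 and visit(v)):
                return True
        colour[u] = 2
        return False
    return any(colour[i] == 0 and visit(i) for i in DOM)

def acyclic(rel):
    return not has_cycle(rel)

def irreflexive(rel):
    return all((i, i) not in rel for i in DOM)

# Collect every (A(r), I(r)) combination realized by some relation r.
combos = set()
for bits in product((0, 1), repeat=len(PAIRS)):
    rel = {p for p, b in zip(PAIRS, bits) if b}
    combos.add((acyclic(rel), irreflexive(rel)))

assert (True, False) not in combos  # the conditional is never falsified
print(sorted(combos))
```

The other three combinations (1,1), (0,1), (0,0) all appear in `combos`, exactly as the text says.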

1.5 Futurity

Success is most apparent in analysing futurity and other forms of temporal cross-reference. Since the 1950s, logicians have developed a range of what are known as temporal logics. Roughly speaking, there are two main kinds of approach. One is to remain within the framework of material implication and quantification over moments of time, but recognize additional layers of quantificational complexity. In the example, the representation becomes something like ∀t(A(t)→∃t′(t′>t∧D(t′))), although this still omits any indication of the vague upper temporal bound on the range of the second, existential, quantifier. Another approach is to introduce non-truth-functional connectives on propositions to do the same job. These are called temporal operators, and belong to a broad class of non-truth-functional connectives called modal operators. Writing □x for 'it will always be the case that x', the representation becomes a→¬□¬d, where the predicates A(t), D(t) are replaced by propositions a, d, and appropriate principles are devised to govern the temporal propositional operator □. The study of such temporal logics is now a recognized and relatively stable affair.
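On the first, quantificational, approach, the formula ∀t(A(t)→∃t′(t′>t∧D(t′))) can be model-checked directly over a finite timeline. The sets of opening and damage times below are hypothetical, chosen only for illustration:

```python
def always_eventually(A, D, T):
    # ∀t (A(t) → ∃t' (t' > t ∧ D(t'))) over the finite timeline T:
    # every opening time is followed by some later damage time.
    return all(any(t2 > t and t2 in D for t2 in T) for t in T if t in A)

T = range(10)
ok = always_eventually({2, 5}, {4, 7}, T)   # damage follows each opening
bad = always_eventually({8}, {4, 7}, T)     # opening at 8, no later damage
print(ok, bad)
```

The first call succeeds and the second fails, showing how the time lapse between antecedent and consequent is captured.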

1.6 Causality

The treatment of causality as an element of conditionals has not met with the same success, despite some attempts that also date back to the middle of the twentieth century. The reason for this is that, to be honest, we do not have a satisfying idea of what causality is. Part of its meaning lies in the idea of regular or probable association, and for this reason it can be considered as a form of implicit quantification. But it is difficult to accept, as the eighteenth-century philosopher David Hume did, that this is all there is to the concept, and even more difficult to specify clearly what is missing. There is, as yet, no generally accepted way of representing causality within a proposition, although there are many suggestions.

1.7 Defeasibility and Probability

The analysis of defeasibility has reached a state somewhere between those of futurity and causality – not as settled as the former, but much more advanced than the latter. It is currently a very lively area of investigation. Two general lines of attack emerge: a quantitative one using probability, and a non-quantitative one using ideas that are not so familiar to the general scientific public. The non-quantitative approach, giving rise to what is known as nonmonotonic logic, is reviewed at length in a companion chapter in this volume ('Supraclassical inference without probability') and so we will leave it aside here, making only some remarks on the quantitative approach.

The use of probability to represent uncertainty is several centuries old. For a while, in the nineteenth century, leading logicians were actively involved in the enterprise – for example, Boole himself wrote extensively on logic and probability, as did Augustus de Morgan. But for most of the twentieth century the two communities tended to drift apart with little contact, until they were drawn together again in recent years, notably under the influence of Ernest Adams.

From the point of view of probability theory, there is a standard way of representing conditionality. Given any probability function p on a field of sets, and any element a of the field such that p(a) ≠ 0, the conditionalization of p on a is the function pa defined by the equality pa(x) = p(a∧x)/p(a), where the slash is ordinary division. In the limiting case that p(a) = 0, pa is left undefined.

This concept may be used to formulate probabilistic truth-conditions for defeasible conditionals. As a preliminary, a little rephrasing is needed in probability theory itself. Probability functions, as defined by the Kolmogorov axioms, usually take as their domain a field of subsets of an arbitrarily given set. But in the finite case we may equally well take the domain to be the set of Boolean propositional formulae that are generated by some finite set of elementary letters.

We may then introduce the notion of a threshold probabilistic implication relation. Let P be any non-empty family of probability distributions p on the finite propositional language under consideration, and let t be a fixed real in the interval [0,1]. Suppose a, x are formulae of the language. We say that a probabilistically implies x (under the set P of distributions, modulo the threshold t), and we write a |~P,t x, iff for all p ∈ P, if p(a) ≠ 0 then pa(x) ≥ t, i.e. iff p(a∧x)/p(a) ≥ t for all p ∈ P with p(a) ≠ 0.
Note that there is not one probabilistic conditional relation but a family of them, one for each choice of a set P of probability distributions and threshold value t. Note also that the definition gives probabilistic conditions for a certain kind of conditional to be true; not for determining a probability for that conditional.

From a logician's point of view, relations of threshold probabilistic implication are rather badly behaved. They are in general nonmonotonic, in the sense that we may have a |~P,t x but not a∧b |~P,t x. This is only to be expected, given that we are trying to represent a notion of uncertainty, but less pleasant is the fact that the relations are badly behaved with respect to conjunction of conclusions. That is, we may have a |~P,t x and a |~P,t y but not a |~P,t x∧y. Essentially for this reason, the relations may also fail a number of other properties, notably an important one known as cumulative transitivity, alias cut. But despite these inelegancies, from a practical point of view relations of threshold probabilistic implication have been used to represent uncertain conditionality in a variety of domains.
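For concreteness, the definitions above can be turned into a small computational sketch over a two-letter language. The two distributions and the thresholds below are hypothetical, chosen only to illustrate how a |~P,t x depends on both P and t:

```python
from itertools import product

# Worlds are the four truth-value assignments to the letters a, x;
# a distribution is a tuple of weights over those worlds.
ATOMS = ("a", "x")
WORLDS = [dict(zip(ATOMS, bits)) for bits in product((True, False), repeat=2)]

def prob(p, formula):
    return sum(weight for world, weight in zip(WORLDS, p) if formula(world))

def cond_prob(p, a, x):
    # pa(x) = p(a∧x)/p(a), undefined (None) when p(a) = 0.
    pa = prob(p, a)
    if pa == 0:
        return None
    return prob(p, lambda w: a(w) and x(w)) / pa

def threshold_implies(P, t, a, x):
    # a |~P,t x  iff  pa(x) >= t for every p in P with p(a) != 0.
    return all(
        cond_prob(p, a, x) is None or cond_prob(p, a, x) >= t
        for p in P
    )

a = lambda w: w["a"]
x = lambda w: w["x"]
# Two hypothetical distributions, as weights over (a∧x, a∧¬x, ¬a∧x, ¬a∧¬x):
P = [(0.5, 0.1, 0.2, 0.2), (0.45, 0.05, 0.3, 0.2)]
print(threshold_implies(P, 0.8, a, x))   # both conditional probabilities exceed 0.8
print(threshold_implies(P, 0.9, a, x))   # 0.5/0.6 ≈ 0.83 falls below 0.9
```

The same machinery can be used to exhibit the nonmonotonicity and the failure of conjunction of conclusions mentioned below, by choosing distributions where strengthening the antecedent, or conjoining two conclusions, pushes the conditional probability under the threshold.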

1.8 Conditional Probability versus Probability of a Conditional

Note again that the definition of a |~P,t x treats it as a conditional proposition that is true iff the conditional probabilities pa(x) for p ∈ P are all suitably high. It does not assign pa(x) as the probability of the conditional proposition. This is a subtle conceptual distinction, but an important one. We will discuss it further in this section.


It has long been recognized that pa(x) is distinct from the probability p(a→x) of the material implication a→x. For example, when p(a∧x) = 0 while 0 < p(a) < 1, we have pa(x) = p(a∧x)/p(a) = 0 while p(a→x) = p(¬a∨x) ≥ p(¬a) > 0.

This suggests the question whether there is any kind of conditional connective, call it ⇒, such that p(a⇒x) = pa(x) whenever the right-hand side is defined, i.e. whenever p(a) ≠ 0. This equation is sometimes known as the PCCP equality, where the acronym stands for 'probability of conditional as conditional probability'. For a long time it was vaguely presumed that some such conditional connective must exist, waiting to be identified; but in a celebrated paper of 1976 David Lewis showed that this cannot be the case.

The theorem is a little technical, but worth stating explicitly (without proof), even in a general review like the present one. Consider a finite propositional language L, i.e. one with only finitely many elementary letters and thus only finitely many non-equivalent formulae using the usual truth-functional connectives. Take any class P of probability distributions on L. Suppose that P is closed under conditionalization, that is, whenever p ∈ P and a ∈ L then pa ∈ P. Suppose finally that there are a,b,c ∈ L and p ∈ P such that a,b,c are pairwise inconsistent under classical logic, while each separately has non-zero probability under p. This is a condition that is satisfied in all but the most trivial of examples. Then, the theorem tells us, there is no function ⇒ from L² into L such that px(y) = p(x⇒y) for all x,y ∈ L with p(x) ≠ 0.

An impressive feature of this result is that it does not depend on making any apparently innocuous but ultimately questionable assumptions about properties of the connective ⇒. The only hypothesis made on it is that it is a function from L² into L. Indeed, an analysis of the proof shows that even this hypothesis can be weakened. Let L0 be the purely Boolean part of L, i.e. the part built up with just the truth-functional connectives from elementary letters. Then there is no function ⇒ even from L0² into L that satisfies the property that for all x,y ∈ L0 with p(x) ≠ 0, px(y) = p(x⇒y). In other words, the result does not depend on iteration of the conditional connective ⇒ in the language. Interestingly, however, it does depend on iteration of the operation of conditionalization in the sense of probability theory – the proof requires consideration of a conditionalization (px)y of certain conditionalizations px of a probability distribution p.

Lewis' impossibility theorem does not show us that there is anything wrong with the notion of conditional probability. Nor does it show us that there is anything incorrect about classical logic. It shows that there is a difference between conditional probability on the one hand and the probability of a conditional proposition on the other, no matter what kind of conditional proposition we have in mind. There is no way in which the latter concept may be reduced to the former. Some logicians have argued that if we are prepared to abandon certain features of classical logic and/or probability theory, then conditional probability can be identified with the probability of a suitable conditional. However, it is generally felt that the conceptual costs of mutilating the standard systems are too high for any advantages obtained. For an entry into the literature, see the 'Guide to further reading' at the end of this paper.
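The starting observation – that pa(x) and p(a→x) can come apart – is easy to check numerically. The weights below are hypothetical, chosen to realize the case p(a∧x) = 0 with 0 < p(a) < 1 mentioned above:

```python
# Hypothetical weights over the four cases (a∧x, a∧¬x, ¬a∧x, ¬a∧¬x),
# arranged so that p(a∧x) = 0 while 0 < p(a) < 1.
p_ax, p_anx, p_nax, p_nanx = 0.0, 0.4, 0.3, 0.3

p_a = p_ax + p_anx                  # p(a) = 0.4
cond = p_ax / p_a                   # conditional probability pa(x) = 0.0
p_material = p_nax + p_nanx + p_ax  # p(a→x) = p(¬a∨x) = 0.6

print(cond, p_material)             # the two quantities differ
```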


1.9 Counterfactual conditionals

from our main thread. Once again we refer to the ‘Guide’ at the end of the paper for entry points to the literature.

1.10 So Why Work with the Truth-Functional Conditional?

Given that the conditional statements of ordinary language usually say much more than is contained in the truth-functional analysis, why should we bother with it? There are several reasons. From a pragmatic point of view:

• If you are doing computer science, then the truth-functional conditional will give you a great deal of mileage. It forms the basis of any language that communicates with a machine.

• If you are doing pure mathematics, the truth-functional conditional is exactly what you need, provided that you recognize that it is usually used with an implicit universal quantification. Pure mathematics, communicated formally, never uses any other kind of conditional.

From a more theoretical point of view, there are even more important reasons:

• The truth-functional conditional is the simplest possible kind of conditional. And in general, it is a good strategy to begin any formal modelling in the simplest way, elaborating it later if needed.

• Experience has shown that when we do begin analysing any of the other kinds of conditional mentioned above (including even such a maverick one as the counterfactual), the truth-functional one turns up in one form or another, hidden inside. It is impossible even to begin an analysis of any of these more complex kinds of conditional unless you have a clear understanding of the truth-functional one.

Thus despite its eccentricities and limitations, the material conditional should not be thrown away. It gives us the kernel of conditionality, even if not the whole fruit.

2 CONDITIONAL DIRECTIVES

2.1 What is a conditional directive?

So far, we have been looking at conditional propositions, where a proposition is something that can be regarded as true or false. But when we come to consider conditional directives, we are faced with a new problem. By a conditional directive we mean a statement that tells us what to do in a given situation. It may be expressed in the imperative mood, as in 'if a major bank collapses, ease credit', or in the indicative mood, as in 'if a major bank collapses, credit should be eased'. The imperative formulation has a purely directive function, whereas the indicative one can be used to do several things: to issue a directive, report the fact that such a directive has been made (or is implicit in one made), or express acceptance of the directive – indeed, most commonly, a combination of all three. In what follows, we will focus on the purely directive function and ignore the elements of report and acquiescence, irrespective of the grammatical mood in which the directive is expressed.

Philosophically, it is widely accepted that directives and propositions differ in a fundamental respect. Propositions such as 'unemployment has reached the two-digit level' are capable of bearing truth-values. In other words, once vagueness and ambiguity are eliminated they are either true or false. But directives like 'increase the supply of money!' are items of another kind. They may be complied with (or not). They may also be assessed from the standpoint of other directives, as, for example, when a legal injunction is judged from a moral point of view (or vice versa). But it makes no sense to describe directives as true or as false.

If this is the case, how is it possible to construct a logic of directives, and in particular, of conditional directives? This question is important for any scientific discipline that deals with policy as well as fact and theory. Even if one prefers to separate as far as possible the scientific aspect from the policy one, the latter is not just a matter of 'anything goes'. Recommendations, advice, instructions, the formulation of goals and norms, and the inferences that one may make about them, all require rationality. There is some kind of logic involved. Conditional directives, whose antecedent is a proposition expressing a fact, play a pivotal role.

Classical truth-functional logic can appear powerless to deal with this question, for its fundamental concepts revolve around the contrast between truth and falsehood, which is absent from directives. We will sketch an approach to the problem, recently developed by the present author with Leendert van der Torre. It does not discard classical logic in favour of some supposed non-classical logic, but rather provides a more imaginative and structured way of applying classical consequence. Called input/output logic, it enables us to represent conditional directives and determine their consequences, without treating them as bearing truth-values.

2.2 Input/output operations

For simplicity, we write a conditional directive ‘in condition a, do x’ as a⇒x. To break old habits, we emphasise again that we are not trying to formulate conditions under which a⇒x is true, for as we have noted, directives are never true, nor false. Our task is a different one. Given a set G of conditional directives, which we call a code, we wish to formulate criteria under which a conditional directive a⇒x is implicit in G. We would like to define a two-place operation out, such that out(G,a) is the set of all propositions x such that a⇒x is implicit in G. The simplest kind of input/output operation, called simple-minded output, is depicted in Figure 1. It has three phases. First, the input a is expanded to its classical closure Cn(a), i.e. the set of all propositions y that are consequences of a under classical (truth-functional) logic. Next, this set Cn(a) is ‘passed through’ G, which delivers the corresponding immediate output G(Cn(a)). Here G(X) is defined in the standard set-theoretic manner as the image of a set under a relation, so that G(Cn(a)) = {x: for some b ∈ Cn(a), (b,x) ∈ G}. Finally, this is expanded by classical closure again to out1(G,a) = Cn(G(Cn(a))).


[Figure 1: Simple-minded output, out1(G,a) = Cn(G(Cn(a))): the input a is closed under Cn, passed through G, and the result closed under Cn again.]

Despite its simplicity, this is already an interesting operation. It gives us the implicit content of an explicitly given code of conditional directives, without treating the directives themselves as propositions: only the items serving as input and as output are so treated.

It should be noted that the operation out1(G,a) does not satisfy the principle of identity, which in this context is called throughput. That is, in general we do not have a ∈ out1(G,a). It also fails contraposition. That is, in general x ∈ out1(G,a) does not imply ¬a ∈ out1(G,¬x). Reflection on how we think of conditional directives in real life confirms that this is how it should be.

As an example, let the code G consist of just three conditional directives: (b,x), (c,y), (d,z). We call b,c,d the bodies of these directives, and x,y,z their respective heads. Let the input a be the conjunction b∧(c∨d)∧e∧¬d. Then the only bodies of elements of G that are classical consequences of a are b,c, so that G(Cn(a)) = {x,y} and thus out1(G,a) = Cn(G(Cn(a))) = Cn({x,y}).

It can easily be shown that simple-minded output is fully characterized by just three rules. When formulating these rules it is convenient to treat output as a one-place operation, writing a⇒x ∈ out1(G) instead of x ∈ out1(G,a). Further, to eliminate needless repetition, since the code G is held constant in all the rules, we abbreviate a⇒x ∈ out1(G) as just a⇒x. With these notational conventions, the rules characterizing simple-minded output are as follows:

Strengthening Input (SI): from a⇒x to b⇒x whenever a ∈ Cn(b)
Conjoining Output (AND): from a⇒x, a⇒y to a⇒x∧y
Weakening Output (WO): from a⇒x to a⇒y whenever y ∈ Cn(x)
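The worked example, and the failure of throughput, can be verified with a small sketch that represents formulae as Python predicates over valuations and tests classical consequence by exhaustive search over the finite language (the encoding itself is merely illustrative):

```python
from itertools import product

ATOMS = ("b", "c", "d", "e", "x", "y", "z")
WORLDS = [dict(zip(ATOMS, bits)) for bits in product((True, False), repeat=len(ATOMS))]

def entails(premises, goal):
    # goal ∈ Cn(premises): goal holds in every world satisfying all premises.
    return all(goal(w) for w in WORLDS if all(p(w) for p in premises))

def in_out1(G, inp, candidate):
    # candidate ∈ out1(G, inp) = Cn(G(Cn(inp)))?
    heads = [head for (body, head) in G if entails([inp], body)]  # G(Cn(inp))
    return entails(heads, candidate)

b, c, d, e, x, y, z = [(lambda n: lambda w: w[n])(n) for n in ATOMS]

G = [(b, x), (c, y), (d, z)]
a = lambda w: b(w) and (c(w) or d(w)) and e(w) and not d(w)

print(in_out1(G, a, x), in_out1(G, a, y))              # both heads are delivered
print(in_out1(G, a, lambda w: x(w) and y(w)))          # AND: x∧y as well
print(in_out1(G, a, z))                                # z is not delivered
print(in_out1(G, a, a))                                # throughput fails
```

The checks confirm that out1(G,a) = Cn({x,y}): x, y and their conjunction are implicit in the code given input a, while z and the input a itself are not.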

It can be shown that these three rules suffice to provide passage from a code G to any element a⇒x of out1(G), by means of a derivation tree with leaves in G∪{t⇒t}, where t is any classical tautology, and with root the desired element.

2.3 Stronger output operations

Simple-minded output lacks certain features that may be appropriate for some kinds of directive. In the first place, the treatment of disjunctive inputs is not very sophisticated. Consider two inputs a and b. By classical logic, we know that if x ∈ Cn(a) and x ∈ Cn(b) then x ∈ Cn(a∨b). But there is nothing to tell us that if x ∈ out1(G,a) = Cn(G(Cn(a))) and x ∈ out1(G,b) = Cn(G(Cn(b))) then x ∈ out1(G,a∨b) = Cn(G(Cn(a∨b))), essentially because G is an arbitrary set of ordered pairs of propositions. In the second place, even when we do not want inputs to be automatically carried through as outputs, we may still want outputs to be reusable as inputs – which is quite a different matter.

Explicit definitions can be given for operations satisfying each of these two features. They can be illustrated by diagrams in the same spirit as that for simple-minded output. They can also be characterized by straightforward rules. However, in this brief review we will go no further, directing the reader to the 'Guide to further reading'.

SUMMARY

There are many kinds of conditional in human discourse. They can be used to assert, and they can be used to direct. On the level of assertion, the simplest kind of conditional is the truth-functional, alias material, conditional. It almost never occurs pure in daily language, but provides the kernel for a range of more complex kinds of conditional assertion, involving such features as universal quantification, temporal cross-reference, causal attribution, and defeasibility. Unlike conditional assertions, conditional directives cannot be described as true or false, and their logic has to be approached in a more circumspect manner. Input/output logic does this by examining the notion of one conditional directive being implicit in a code of such directives, bringing the force of classical logic to play in the analysis without ever assuming that the directives themselves carry truth-values.

GUIDE TO FURTHER READING

General
The literature on conditional propositions is extensive. A good pit-stop after reading the present paper is the overview paper of Edgington (2001), which discusses many of the topics reviewed here, not always from the same perspective.

The truth-functional conditional
All textbooks of modern logic present and discuss the truth-functional conditional. A well-known elementary text that carries the discussion further than most is Quine (1982).

Temporal conditionals
The pioneering work on temporal logics was Prior (1957). For a short introductory review, see e.g. Venema (2001), and for a longer overview van Benthem (1995).


Defeasible conditionals: quantitative approaches
David Lewis' impossibility result was established in his paper (1976). Those wishing to follow further into the interconnections between logic and probability may begin with the short review paper of Hájek (2001) and then Part I of the book of Adams (1998). Part II of Adams (1998) continues, more controversially, with an attempt to save the PCCP equation from Lewis' theorem; this is done at the cost of denying truth-values to conditionals. Another attempt to save the PCCP equation may be found in Dubois and Prade (2001); they do it at the cost of falling back onto a three-valued logic and a modified probability theory. A more philosophical attempt may be found in Bradley (2002).

Defeasible conditionals: non-quantitative approaches
Three of the pioneering classics are Reiter (1980), Poole (1988) and Shoham (1988). For a more recent overview of the literature, see Makinson (1994). In the present volume, the chapter 'Supraclassical inference without probability' shows how these nonmonotonic relations emerge naturally from classical consequence.

Counterfactual conditionals
The best place to begin is the classic presentation Lewis (1973). For a comparative review of the different uses of minimalization in the semantics of counterfactuals, preferential conditionals, belief revision, update and deontic logic, see Makinson (1993). An introduction to the logic of belief revision may be found in the overview of Gärdenfors and Rott (1995). For a discussion of counterfactual conditionals and belief change in the context of game theory, see Stalnaker (1998).

Conditional directives
For an introduction, see Makinson and van der Torre (to appear), with more detail in the papers (2000, 2001) by the same authors.

REFERENCES

Adams, Ernest W., 1998. A Primer of Probability Logic. CSLI Publications: Stanford.
Bradley, Richard, 2002. 'Indicative conditionals', Erkenntnis 56: 345-378.
Dubois, Didier and Henri Prade, 2001. 'Possibility theory, probability theory and multiple-valued logics: a clarification', Annals of Mathematics and Artificial Intelligence 32: 35-66.
Edgington, Dorothy, 2001. 'Conditionals', pp 385-414 of The Blackwell Guide to Philosophical Logic, ed. Lou Goble. Blackwell: Oxford.
Gärdenfors, Peter and Hans Rott, 1995. 'Belief revision', pp 35-132 of Handbook of Logic in Artificial Intelligence and Logic Programming, vol. 4: Epistemic and Temporal Reasoning, ed. Gabbay, Hogger and Robinson. Oxford University Press.
Hájek, Alan, 2001. 'Probability, logic, and probability logic', pp 362-384 of The Blackwell Guide to Philosophical Logic, ed. Lou Goble. Blackwell: Oxford.
Lewis, David, 1973. Counterfactuals. Blackwell: Oxford.
Lewis, David, 1976. 'Probabilities of conditionals and conditional probabilities', The Philosophical Review 85: 297-315. Reprinted with a postscript as pp 133-156 of his Philosophical Papers, Oxford University Press 1987.
Makinson, David, 1993. 'Five faces of minimality', Studia Logica 52: 339-379.
Makinson, David, 1994. 'General patterns in nonmonotonic reasoning', pp 35-110 of Handbook of Logic in Artificial Intelligence and Logic Programming, vol. 3: Nonmonotonic Reasoning and Uncertain Reasoning, ed. Gabbay, Hogger and Robinson. Oxford University Press.
Makinson, David and Leendert van der Torre, 2000. 'Input/output logics', Journal of Philosophical Logic 29: 383-408.
Makinson, David and Leendert van der Torre, 2001. 'Constraints for input/output logics', Journal of Philosophical Logic 30: 155-185.
Makinson, David and Leendert van der Torre, to appear. 'What is input/output logic?', in Foundations of the Formal Sciences II: Applications of Mathematical Logic in Philosophy and Linguistics. Dordrecht: Kluwer, Trends in Logic Series.
Poole, David, 1988. 'A logical framework for default reasoning', Artificial Intelligence 36: 27-47.
Prior, A.N., 1957. Time and Modality. Oxford University Press.
Quine, W.V.O., 1982. Methods of Logic, fourth edition. Harvard University Press.
Reiter, Ray, 1980. 'A logic for default reasoning', Artificial Intelligence 13: 81-132. Reprinted pp 68-93 of M. Ginsberg ed., Readings in Nonmonotonic Reasoning. Morgan Kaufmann: Los Altos CA, 1987.
Shoham, Yoav, 1988. Reasoning About Change. MIT Press: Cambridge, MA.
Stalnaker, Robert, 1998. 'Belief revision in games: forward and backward induction', Mathematical Social Sciences 36: 31-56.
van Benthem, Johan, 1995. 'Temporal logic', pp 241-350 of Handbook of Logic in Artificial Intelligence and Logic Programming, vol. 4: Epistemic and Temporal Reasoning, ed. Gabbay, Hogger and Robinson. Oxford University Press.
Venema, Yde, 2001. 'Temporal logic', pp 203-223 of The Blackwell Guide to Philosophical Logic, ed. Lou Goble. Blackwell: Oxford.

ACKNOWLEDGEMENTS

The author wishes to thank Philippe Mongin and Bernard Walliser for valuable comments on a draft.

