Dissertation

Utility Maximization and Increasing Risk Aversion

ausgef¨ uhrt zum Zwecke der Erlangung des akademischen Grades eines Doktors der Naturwissenschaften unter der Leitung von O. Univ.Prof. Dr. Walter Schachermayer E 105 Institut f¨ ur Finanz– und Versicherungsmathematik eingereicht an der Technischen Universit¨at Wien Fakult¨at f¨ ur Technische Naturwissenschaften und Informatik

von

Christopher Summer Matrikelnummer 9301116 Badstraße 40a/5/4, 2340 M¨odling

Wien, Oktober 2002

Kurzfassung Wir betrachten einen Agenten in einem ¨okonomischen Modell, der ein zu einem Zeitpunkt T f¨alliges Finanzderivat C, z.B. eine Option, verkauft hat. Der Agent hat die M¨oglichkeit, in Aktien S i eines unvollst¨andigen Marktes zu investieren. Dazu w¨ahlt er RT eine selbstfinanzierende Handelsstrategie ϑ, sodaß das stochastische Integral 0 ϑdS den aus dieser Strategie resultierenden Gewinn beziehungsweise Verlust beschreibt. Die Variable c beschreibt das Startkapital des Agenten, und somit ist das Verm¨ogen des Agenten RT zum Zeitpunkt T durch Z(ϑ) = c + 0 ϑdS − C gegeben. Wir nehmen weiters an, daß die Pr¨aferenzen des Agenten durch eine Nutzenfunktion U (x) beschrieben werden, und daß es das Ziel des Agenten ist, durch Wahl einer optimalen Strategie ϑ? seinen erwarteten Endnutzen zu maximieren:    Z T E[U (Z(ϑ))] = E U c + ϑdS − C 7→ max . 0

ϑ

Die Thematik der Nutzenmaximierung ist eine grundlegende Problemstellung. Daher existiert diesbez¨ uglich eine sehr umfangreiche Literatur, sowohl mit wirtschaftlichem als auch mit mathematischem Schwerpunkt. Diese Arbeit untersucht das Verhalten der optimalen Strategie ϑ? und des Endverm¨ogens Z(ϑ? ), wenn der Agent risikoaverser wird: damit meinen wir, daß der Agent seine Einstellung gegen¨ uber finanziellen Risiken ¨andert und eine neue, ,,risikoaversere“ — dieser Begriff wird exakt definiert — Nutzenfunktion w¨ahlt. In diesem Zusammenhang ergeben sich die folgenden Fragen: konvergiert die optimale Strategie, wenn die Risikoaversion unendlich groß wird? Wenn ja, welche Aussagen kann man bez¨ uglich der Konvergenzgeschwindigkeit treffen? Kann man die Grenzstrategie (falls existierend) mit anderen Mitteln charakterisieren? Insbesondere: wie ist die Beziehung zwischen der Grenzstrategie und einer Superhedging-Strategie, d.h. einer Strategie ϑ, die nicht-negatives Verm¨ogen Z(ϑ) zum Endzeitpunkt T garantiert? Insbesondere, ist jeder Grenzstrategie eine Superhedging-Strategie und/oder gilt eine Inklusion in die andere Richtung? Es ist zumindest intuitiv plausibel, daß ein Zusammenhang zwischen Superhedging und steigender Risikoaversion besteht. Kapitel 1 beginnt mit einer kurzen Einf¨ uhrung in das Gebiet der Nutzenmaximierung und liefert die exakte mathematische Definition des von Arrow [1] und Pratt [18] eingef¨ uhrten Begriffs der Risikoaversion. Abschnitt 1.2 gibt zwei Beispiele, die den Begriff 3

4

Kurzfassung

der steigenden Risikoaversion — mit ,,steigende Risikoaversion“ ist immer gegen unendlich steigende Risikoaversion gemeint — illustrieren. Das zweite Beispiel beschreibt ein Modell, in dem es mehrere unterschiedliche Superhedging-Strategien gibt, wohingegen eine eindeutige Grenzstrategie, die auch eine Superhedging-Strategie ist, existiert. Daher kann man die Grenzstrategie als eine besondere Superhedging-Strategie interpretieren. Diese Thematik wird in Abschnitt 3.4 weiter behandelt. Die Arbeit [5] von Delbaen, Grandits, Rheinl¨ander, Samperi, Schweizer und Stricker besch¨aftigt sich mit der Maximierung von exponentiellem Nutzen in einem sehr allgemeinen Semimartingale-Modell. Die exponentielle Nutzenfunktion Uα (x) = −e−αx ist, f¨ ur einen gegen ∞ strebenden Risikoaversionsparameter α, das Standardbeispiel f¨ ur eine Familie von Nutzenfunktionen mit steigender Risikoaversion. Delbaen et al. zeigen das folgende Resultat: angenommen der Agent verf¨ ugt u ¨ber ein Startkapital c? in der minimal notwendigen H¨ohe um Superhedging zu erm¨oglichen, dann konvergiert der Negativeanteil (Z(ϑα ))− des optimalen Endverm¨ogens — ϑα ist die f¨ ur die Nutzenfunktion Uα (x) optimale Strategie — unter steigender Risikoaversion α → ∞ gegen 0. Diese Konvergenz findet in L1 (P) statt, wobei P das tats¨achliche, physikalische Wahrscheinlichkeitsmaß ist. In Abschnitt 2.2 wird dieses Resultat mit Hilfe von Orlicz-Raum Theorie erweitert. Es wird gezeigt, daß die Konvergenz auch in L1 (Q) h¨alt, wobei Q ein beliebiges, absolut stetiges lokales Martingalmaß Q mit endlicher relativer Entropie bez¨ uglich P ist. F¨ ur den Fall eines vollst¨andigen Marktes folgt daraus in weiterer Folge, daß nicht nur der Negativanteil (Z(ϑα ))− , sondern auch das Endverm¨ogen Z(ϑα ) gegen 0 konvergiert. Kapitel 3 besch¨aftigt sich auch mit der Handelsstrategie ϑα , nicht nur mit dem Endverm¨ogen Z(ϑα ). Wir betrachten ein endliches Modell mit exponentieller Nutzenfunktion Uα (x) = −e−αx . In Abschnitt 3.1 wird durch einen iterativen Algorithmus ein spezielles Endverm¨ogen Z ? definiert. Die entsprechende Strategie ϑ? , d.h. Z ? = Z(ϑ? ), ist eindeutig, vorausgesetzt man schließt triviale Mehrdeutigkeiten aus, die nur in degenerierten F¨allen auftreten bzw. leicht umgangen werden k¨onnen. Dar¨ uberhinaus ist die Strategie ϑ? unabh¨angig vom zugrundeliegenden Wahrscheinlichkeitsmaß und vom Startkapital. Wir zeigen, daß f¨ ur α → ∞ die optimale Strategie mit einer Konvergenzgeschwindigkeit von 1/α gegen die Strategie ϑ? strebt. In Abschnitt 3.2 wird das Endverm¨ogen als maximales Element einer vollst¨andigen Pr¨aordnung charakterisiert. Abschnitt 3.3 f¨ uhrt den Begriff einer balancierten Strategie beziehungsweise eines balancierten Endverm¨ogens ein und liefert somit die dritte Charakterisierung von ϑ? beziehungsweise Z ? . Dieser Begriff ist nach unserem Wissen neu, und die folgenden Abschnitte besch¨aftigen sich n¨aher mit diesem Konzept. Zuerst wird der Begriff einer balancierten Strategie dazu verwendet, um in Abschnitt 3.4 den Zusammenhang zwischen Superhedging und steigender Risikoaversion zu kl¨aren. Es wird gezeigt, daß jede balancierte Strategie nicht nur eine Superhedging-Strategie ist, sondern sogar zu der Teilmenge der sogenannten minimalen Superhedging-Strategien geh¨ort, ein von Kramkov in [14] eingef¨ uhrter Begriff. 
Der letzte Abschnitt in Kapitel 3 verwendet ebenfalls den Begriff der balancierten Strategie, um die Konvergenzresultate aus Abschnitt 3.1 f¨ ur die exponentielle Nutzenfunktion auf beliebige Nutzenfunktionen mit steigender

Kurzfassung

5

Risikoaversion zu verallgemeinern. Im letzten Kapitel wird der Begriff der balancierten Strategie auf ein unendliches Modell erweitert: wir betrachten ein Ein-Perioden-Modell, in dem der zugrundeliegende Wahrscheinlichkeitsraum unendlich ist. Die daf¨ ur notwendige Verallgemeinerung der Definition einer balancierten Strategie erfolgt in Abschnitt 4.1. Der folgende Abschnitt 4.2 illustriert diese Definition und zeigt die fundamentalen und u ¨berraschenden Unterschiede zwischen dem endlichen und dem unendlichen Fall. Der auff¨alligste Unterschied ist die Tatsache, daß es nicht l¨anger eine eindeutige balancierte Strategie gibt, sondern ¯ von balancierten Strategien. Da das Konzept der balancierten Strateeine Menge Θ gien zur Charakterisierung m¨oglicher Grenzstrategien dienen soll, scheint dies auf den ersten Blick unbefriedigend. Wir zeigen allerdings anhand eines expliziten Beispiels, daß es sich hierbei nicht um eine Unzul¨anglichkeit der Definition einer balancierten Strategie f¨ ur den unendlichen Fall handelt, sondern vielmehr um ein charakteristisches Merkmal: in diesem Beispiel konvergiert die optimale Strategie — im Gegensatz zu den endlichen Modellen in Kapitel 3 – nicht gegen eine fixe Strategie. Abschnitt 4.3 enth¨alt Konvergenzaussagen vergleichbar mit den Resultaten im endlichen Fall, allerdings mit der ¯ Die ¯ anstatt nur einer balancierten Strategie ϑ. Menge der balancierten Strategien Θ Resultate bez¨ uglich der Konvergenzgeschwindigkeit sind ebenfalls komplexer. In dem in Abschnitt 3.5 beschriebenen endlichen Modell ist die Konvergenzrate immer der Kehrwert der Risikoaversion. Im unendlichen Modell geben wir einerseits Bedingungen an, die diese Konvergenzrate garantieren, andererseits zeigen wir mit Hilfe eines konkreten Beispiels, daß im allgemeinen die Konvergenzgeschwindigkeit beliebig langsam werden kann. Abschnitt 4.4 besch¨aftigt sich eingehender mit dem Umstand, daß es im allgemeinen eine Menge von balancierten Strategien gibt. Aufbauend auf dem erw¨ahnten Beispiel in Ab¯ minimal ist. Das schnitt 4.2 zeigen wir, daß die Menge der balancierten Strategien Θ ¯ durch eine ,,echte“ Teilmenge zu ersetzen und trotzbedeutet, daß es nicht m¨oglich ist, Θ dem zu garantieren, daß die optimale Strategie gegen diese Teilmenge konvergiert. Den Kapiteln 2 und 3 liegt die Arbeit [8] von Grandits und Summer zugrunde. Die Hauptresultate von Kapitel 4 finden sich in Cheridito und Summer [3].

Abstract We consider an economic agent who has sold a contingent claim C, e.g., an option, maturing at time T . The agent has the possibility to invest in assets S i of an incomplete market. RT Choosing a trading strategy ϑ, her level of wealth at time T is Z(ϑ) = c + 0 ϑdS − C, RT where the stochastic integral 0 ϑdS represents the gains respectively losses arising from the trading in the assets and c denotes the initial endowment of the agent. We assume that the incentive of the agent, who values her level of wealth by means of the utility function U (x), is to maximize her expected utility of the outcome Z(ϑ) by choosing an optimal strategy ϑ? :    Z T E[U (Z(ϑ))] = E U c + ϑdS − C 7→ max . 0

ϑ

This is a very fundamental problem, and thus there exists a huge amount of literature, both economic and mathematical, on this subject. The topic we are going to investigate is the behavior of the agent’s optimal strategy ϑ? and outcome Z(ϑ? ) when the agent becomes more and more risk averse. By this we mean that the agent changes her attitude towards risk, thereby changing her utility function to a ‘more risk averse’ one, a notion which will be made precise. The following questions arise in this context: Does the optimal strategy converge when the risk aversion tends to ∞? If yes, can anything be said about the speed of convergence? Can the limiting strategy be characterized by other means? How are the relations between the limiting strategy and a superhedging strategy, i.e., a strategy that guarantees a non-negative level of wealth at terminal time T ? In particular: Is every limiting trading strategy a superhedging strategy and/or every superhedging strategy a limiting one? It is at least intuitively obvious that there should be a connection between superhedging and high risk aversion. Chapter 1 starts with a short introduction into the theory of utility maximization, focusing on the notion of risk aversion as introduced by Arrow [1] and Pratt [18]. Section 1.2 illustrates the idea of increasing risk aversion — by ‘increasing risk aversion’ we always mean risk aversion increasing to ∞ — with two simple, illustrative examples. The second of these examples provides us with a model for which more than one superhedging strategy exists, whereas the limit of the optimal strategy for increasing risk aversion — which is also a superhedging strategy — is unique. Thus the limiting strategy can be seen as a distinct superhedging strategy. We elaborate further on this topic in Section 3.4. 6

Abstract

7

The paper [5] by Delbaen, Grandits, Rheinl¨ander, Samperi, Schweizer, and Stricker deals with exponential utility maximization in a very general semimartingale setting. The family of exponential utility functions Uα (x) = −e−αx is, for α → ∞, the arch example for utility functions with increasing risk aversion. Delbaen et al. show the following result: Provided the agent starts with an appropriate initial endowment c? , the minimal endowment necessary to be able to afford superhedging, the negative part (Z(ϑα ))− of the optimal outcome — ϑα denotes the optimal strategy for the utility function Uα (x) — will, under increasing risk aversion α → ∞, tend to 0. The convergence takes place in L1 (P), where P is the physical, real world measure. In Section 2.2 we use elementary Orlicz space theory to extend this convergence result and show that convergence also takes place in L1 (Q), for all absolutely continuous local martingale measures Q with finite relative entropy with respect to P. Under the assumption of a complete market, we also show that the outcome Z(ϑα ) converges to 0, not only the negative part (Z(ϑα ))− . In Chapter 3 we investigate also the trading strategy ϑ itself, not only the behavior of the outcome Z(ϑ). We restrict ourselves to a finite setting and assume that the utility function is given by Uα (x) = −e−αx . In Section 3.1 we define the special outcome Z ? obtained by an iterative algorithm. The corresponding strategy ϑ? , i.e., Z ? = Z(ϑ? ), is unique, provided we exclude trivial ambiguities which arise in degenerate cases and can be easily remedied. We prove that the optimal trading strategy converges, as the risk parameter α tends to ∞, to this special strategy ϑ? at a convergence rate of 1/α. The strategy ϑ? is independent of the probability measure and the initial endowment. Section 3.2 characterizes the outcome Z ? as the unique maximal element of all possible outcomes Z(ϑ) with respect to a total preordering. In Section 3.3 we introduce the notion of a balanced strategy (respectively a balanced outcome) and thus provide a third characterization of ϑ? and Z ? . To our knowledge this notion is new in the literature, and a deeper investigation of this concept constitutes the main topic of the remaining sections. First, the notion of a balanced strategy is used in Section 3.4 to clarify the relationship between superhedging and increasing risk aversion: We show that every balanced strategy is not only a superhedging strategy, but belongs to the subset of the so called minimal superhedging strategies, introduced by Kramkov in [14]. We also give an example showing that not every minimal superhedging strategy is balanced. The last section of Chapter 3 uses again the notion of a balanced strategy to generalize the convergence results in Section 3.1, which hold for exponential utility functions, to an arbitrary family of utility functions with increasing risk aversion. The last chapter generalizes the notion of a balanced strategy to an infinite setting. We consider a one time-period model with an infinite underlying probability space. Section 4.1 adapts the definition of a balanced strategy to such a model. In Section 4.2 we give examples, which illustrate this definition and which point out the fundamental and surprising differences between the finite and the infinite setting. The most striking difference is the fact that in general there is no longer one unique balanced strategy, but a whole set ¯ of balanced strategies. 
This might seem unsatisfactory, since the concept of balanced Θ

8

Abstract

strategies is meant to characterize limiting strategies. We give however an explicit example which shows that this is not a shortcoming of the definition of balanced strategies in the infinite setting, but an intrinsic feature. In this example the optimal strategy does not converge to a single limiting strategy, as it does in the finite setting. The next section gives convergence results comparable to those in the finite setting, but with a limiting set instead of a unique limiting strategy. The situation is also more complex with respect to the convergence rate. We always have a convergence rate of the reciprocal of the risk aversion in the finite setting described in Section 3.5. In the infinite setting we give conditions which ensure that we still get this convergence rate, while in general — and we provide a concrete example — the speed of convergence gets arbitrarily slow. Section 4.4 investigates closer the fact that there might exist a whole set of balanced strategies. Refining the ideas of the above mentioned example in Section 4.2, we show that the set ¯ is minimal as the set of possible limiting strategies. This means of balanced strategies Θ that we can not replace it by a ‘true’ subset and still guarantee that the optimal strategy gets arbitrarily close to the subset. Chapter 2 and Chapter 3 are based on the paper [8] by Grandits and Summer. The main results from Chapter 4 can be found in Cheridito and Summer [3].

Contents Kurzfassung

3

Abstract

6

1 Introduction

11

1.1 Utility maximization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 1.2

Two introductory examples . . . . . . . . . . . . . . . . . . . . . . . . . . 14

2 The semimartingale model

16

2.1 Known results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 2.2

First extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

3 Finite setting

25

3.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 3.2

Total preordering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

3.3

Balanced strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

3.4

Different notions of superhedging . . . . . . . . . . . . . . . . . . . . . . . 38

3.5

General utility functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

4 Generalization of balanced strategies

47

4.1 The infinite setting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 4.2

Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51

4.3

Convergence results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 ¯ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67 Minimality of Θ

4.4

Bibliography

79

Curriculum Vitae

81 9

Chapter 1 Introduction 1.1

Utility maximization

Utility maximization, or to be more precise the maximization of expected utility, is a very basic topic in economics. A rigorous treatment of this subject within the framework of mathematical finance was initiated by the two seminal papers by Merton [16] and Samuelson [22]. Since then the problem of utility maximization has found its way into numerous textbooks in this area: from the more economic point of view, e.g., Huang and Litzenberger, [11], as well as from the more mathematical side, e.g., Karatzas and Shreve, [13] or F¨ollmer and Schied, [6]. Nevertheless it is still a very active area of research, a resent overview can, e.g., be found in Schachermayer [23]. Let us start by giving an introduction to the kind of questions we are going to treat in the following and describing the setting in which this will be done. We assume that there is an economic market described by a probability space (Ω, (Ft )0≤t≤T , P). Ω denotes the possible states of the world, the filtration Ft represents the information structure at time t, up to the finite time horizon T < ∞, and P is the probability measure describing the likelihood of certain events. Since the exact assumptions are different in the individual chapters, we refrain from giving more explicit assumptions now. In the market there are a certain number of assets (Sti )0≤t≤T . We take one asset as num´eraire, meaning that we assume that this special asset, called the bond or cash account, is constant equal to 1 and that the other assets are denoted in units of the bond. The other assets, in the following called stocks, are assumed to be risky, they are described by an adapted, i.e., Ft -measurable, stochastic processes. The market we consider is furthermore assumed to be frictionless. This means that the agents who trade in the market, i.e., buy and sell assets, do not face any transaction costs. Thus the gain or loss of an agent engaged in the market following a trading strategy ϑ is given by the R stochastic integral ϑdS — the precise definitions will be given later —, also meaning that the trading strategy is self-financing, there is no money exogenously added or withdrawn. The agents face no constraints like short-selling constraints prohibiting them from holding 11

12

Chapter 1. Introduction

a negative amount of a certain stock. In addition we always assume that the market is free of arbitrage, basically meaning that it is not possible to make money out of nothing. The precise no-arbitrage assumption will later be given for the distinct models. We now consider a special agent who has at time 0 sold a contingent claim C that matures at time T . Such a claim is modelled as an FT -measurable random variable. The agent will have to pay back the random amount of C units of the bond at final time T . Let us also assume that her current wealth level is c. Thus the agent will, if she chooses the trading strategy ϑ, at final time T hold Z T Z(ϑ)(ω) := c + ϑ(ω)dS(ω) − C(ω). 0

We call this random variable the outcome of the trading strategy ϑ. How does the agent choose her trading strategy? We assume that her preferences have an expected utility representation, following the theory introduced by von Neumann and Morgenstern, [26]. This means that she tries to maximize her expected utility over all admissible trading strategies ϑ: E[U (Z(ϑ))] 7→ max . ϑ

The von Neuman-Morgenstern utility function U (x) is assumed to be strictly increasing — corresponding to the idea ‘more is better’ — and strictly concave. The concavity reflects the issue of risk aversion: people are assumed to dislike risk and thus prefer for example to get 100 Euro for sure compared to having a 50–50 chance of getting 200 Euro or nothing. This is exactly what concavity implies: λU (x) + (1 − λ)U (y) < U (λx + (1 − λ)y),

∀λ ∈ (0, 1),

in our example 21 U (0) + 12 U (200) < U (100). We are going to focus on a special topic of utility maximization: the (absolute) risk aversion. The risk aversion r(x) of an agent with utility function U (x) is given by r(x) := −

U 00 (x) . U 0 (x)

Since U (x) is assumed to be strictly increasing, which means (provided U (x) is sufficiently differentiable, which we assume) U 0 (x) > 0, and strictly concave, i.e., U 00 (x) < 0, the risk aversion r(x) is greater than 0. The measure r(x) of risk aversion was introduced by Arrow [1] and Pratt [18]. As indicated above it is very reasonable to use the curvature, the ‘degree of concavity’ U 00 (x) as a measure for the risk aversion. The reason why U 00 (x) is divided by U 0 (x), which corresponds to a normalization, is the fact that von NeumannMorgenstein utility functions are only unique up to positive linear transformations. This means that U (x) and U˜ (x) := aU (x) + b, for a, b ∈ R, a > 0 correspond to the same preference relation of the agent. Dividing by the derivative U 0 (x) respectively U˜ 0 (x)

1.1. Utility maximization

13

ensures that the risk aversion rU (x) equals the risk aversion rU˜ (x). As an additional motivation for the connection between the intuitive idea of risk aversion and the definition of r(x) = −U 00 (x)/U 0 (x), consider, on a heuristic level, a one time-period model without contingent claim such that Z(ϑ) = c + ϑ4S, where 4S = S1 − S0 . Then we get from the first order condition that the optimal ϑ? has to satisfy E[U 0 (c + ϑ? 4S)4S] = 0. If we want ϑ? ≥ β > 0, i.e., the agent optimizing her utility would invest at least the positive amount β into the risky asset, we know that E[U 0 (c + β4S)4S] ≥ 0 has to hold. This follows since ϑ 7→ E[U 0 (c + ϑ4S)4S] is decreasing. If we now expand U 0 (x) into a Taylor series around c and neglect higher order terms (which we might, if E[4S], which can be interpreted as risk, is not too big), we get   U 0 (c)E[4S] + βU 00 (c)E (4S)2 ≥ 0. Thus, if we assume that the expected return E[4S] > 0 (otherwise a risk averse agent would never invest a positive amount), E[4S] U 00 (c) ≥ − β, E[(4S)2 ] U 0 (c) which states that, given the market parameters E[4S] and E[(4S)2 ], the agent will invest the positive amount β into the risk stock, provided her risk aversion is not too big. The question we consider now is the following: what happens if the agent becomes more and more risk averse. Suppose the agent changes her utility function, which might for example be due to exogenous events which alter her perception of risk. Let us assume that there is a whole family of utility functions (Uα (x))α∈A for a parameter set A ⊂ R. We say that this family of utility functions has increasing risk aversion 1 (increasing to ∞), if the corresponding functions rα (x), describing the risk aversion, tend uniformly to infinity on a certain interval I ⊂ R, i.e., lim inf rα (x) = ∞

α→α∞ x∈I

for α∞ ≤ ∞. We will be more explicit about the interval I in the corresponding chapters. The arch example for such a family of utility functions is the family of exponential utility functions Uα (x) := −e−αx for α > 0. There we have rα (x) = α which tends for α → ∞ uniformly on R to ∞. 1

Be careful not to confuse this kind of ‘increasing risk aversion’ with the one described for example in [11]. There it means that the function x 7→ r(x) is increasing.

14

Chapter 1. Introduction

The question about the behavior of the agent under increasing risk aversion has several aspects. What can be said about the outcome Z(ϑ), what about the trading strategy ϑ concerning convergence, rate of convergence, set of limiting outcomes/strategies? Since high risk aversion implies that the agent is very opposed to losses, there should obviously be connections with superhedging, where no losses are allowed. Is every limiting strategy a superhedging strategy, and/or the other way around? Before we answer these kind of questions in the following chapters, let us give two illustrating examples.

1.2

Two introductory examples

Will we will start with an easy, straight forward example. Example 1.2.1. Suppose we are in a complete one time-period model, S0 = 0, P[S1 = g] = p1 , P[S1 = −b] = p2 = 1 − p1 , 0 < g, 0 < b, 0 < p1 < 1. Let us further assume that there is no contingent claim, i.e., C = 0, and that the initial endowment c = 0. At time 0 the agent buys ϑ ∈ R stocks, so she has at time 1 the payoff ϑg in the good state and −ϑb in the bad one: ( ϑg if S1 (ω) = g, Z(ϑ)(ω) = −ϑb if S1 (ω) = −b. If her utility function is Uα (x) = −e−αx , α > 0, and she wants to maximize her expected utility, she has to minimize Pα (ϑ) := p1 e−αϑg + (1 − p1 )eαϑb . So we set the derivative Pα0 (ϑ) = −p1 αge−αϑg + (1 − p1 )αbeαϑb equal to 0 and get the optimal (Pα (x) is convex, therefore the first order condition is sufficient) 1 p1 g ϑα := log . α(g + b) (1 − p1 )b What happens if α → ∞? The agent becomes more and more risk averse, losses would get more and more costly. So one could expect, that ϑα converges to the only superhedging strategy for our example, the only strategy that guarantees Z(ϑ)(ω) ≥ 0 for all ω ∈ Ω, namely ϑ¯ = 0. Let us do the calculations. Our conjecture is indeed true: lim ϑα = lim

α→∞

α→∞

1 p1 g log =0 α(g + b) (1 − p1 )b

We will later show that under certain assumptions this conjecture is always true, actually ¯ < ∞. our result yields for this example lim supα→∞ α|ϑα − ϑ|

1.2. Two introductory examples

15

We will now slightly change the above, almost trivial example. This will have hardly any impact on the utility maximization problem by itself, but the connection to superhedging changes substantially. Example 1.2.2. Let us extend the above example by introducing an additional state and therefore making the market incomplete: S0 = 0, P[S1 = g] = p1 , P[S1 = −b] = p2 , P[S1 = 0] = p3 = 1 − p1 − p2 , 0 < g, 0 < b, 0 < p1 , p2 < 1. Let the claim C equal 0 if the stock moves, i.e., if S1 = g or S1 = −b, and C = 1 if S0 = S1 = 1. As initial endowment we take c = 1. Thus the outcome equals    1 + ϑg if S1 (ω) = g, Z(ϑ)(ω) =

0   1 − ϑb

if S1 (ω) = 0,

if S1 (ω) = −b.

If we look at this model from the utility maximizing point of view, again using Uα (x) = −e , not much changes. We have to minimize −αx

Pα (ϑ) :=p1 e−α(1+ϑg) + p2 e−α(1−ϑb) + (1 − p1 − p2 )e−α·0     p1 p1 −α −αϑg αϑb =e (p1 + p2 ) e + 1− e + (1 − p1 − p2 ). p1 + p2 p1 + p2 This is essentially the same problem as above, the additional state does, since 4S1 = 0 for this state, not really change the optimization problem. If we however look at this example from the superhedging point of view, this additional state changes the situation completely. In the complete case of Example 1.2.1 we had only one superhedging strategy, namely ϑ¯ = 0. But now any ϑ ∈ [−1/g, 1/b] is a superhedging strategy. If one wants to distinguish one special superhedging strategy, the strategy ϑ¯ = 0 is a very natural choice since it is the limit of the utility maximizing strategies ϑα .

Chapter 2 The semimartingale model The main topic of the paper [5] by Delbaen, Grandits, Rheinl¨ander, Samperi, Schweizer, and Stricker is the robustness of the duality relations for the problem of hedging a contingent claim by maximizing expected exponential utility. But in a subsection, [5, Subsection 5.2], the paper also deals with risk averse asymptotics. Since the results in that subsection served as a starting point for the research presented here and are the first to be extended, we start by describing the setting and the relevant results of that paper.

2.1

Known results

We are given a probability space (Ω, F, P), a time horizon T ∈ (0, ∞], and a filtration F = (Ft )0≤t≤T satisfying the usual conditions, i.e., right-continuity and completeness, so that we can and will choose right-continuous versions of all (P, F)-semimartingales with left-limits. The discounted price process of d risky assets is described by the Rd valued semimartingale S = (St )0≤t≤T . In the market there is also a riskless asset, the bank account, being equal to 1. We assume that the agent has a short position in a contingent claim1 C, i.e., an FT -measurable random variable describing the payoff at terminal time T . Furthermore the agent starts with an initial endowment c and can choose a trading strategy (ϑt )0≤t≤T ∈ L(S), i.e., ϑ is an F-predictable S-integrable Rd valued process and ϑit describes the number of shares of asset i held at time t ∈ [0, T ]. To avoid arbitrage opportunities, we have to impose some further restrictions on the possible trading strategies, but we need some more notation for that, so we will come back to this issue a little bit later. The gain of employing strategy ϑ is given by the stochastic integral R G(ϑ) := ϑdS. The wealth of the agent starting with initial endowment c and following strategy ϑ is denoted by Vt (ϑ) := Vt (c, ϑ) := c + Gt (ϑ), t ∈ [0, T ]. The final outcome, after paying the liabilities arising from her short position in the claim C, is Z(ϑ) := Z(c, ϑ) := VT (c, ϑ) − C. 1

In [5] the contingent claim is denoted by B, but in discussions this often caused confusion, since it was assumed that B denotes the bond. Therefore I took the liberty of using the symbol C.

16

2.1. Known results

17

The agent is assumed to be an expected utility maximizer with exponential utility function. By that we mean that she chooses a strategy ϑ that maximizes for fixed α > 0   E −e−αZ(ϑ) . We denote by Ma the set of absolutely continuous local martingale measures for S, i.e., Ma := {Q : Q probability measure, Q  P, S local martingale under (Q, F)} , and by Me the set of equivalent local martingale measures for S, i.e., Me := {Q : Q probability measure, Q ∼ P, S local martingale under (Q, F)} . The relative entropy of a probability measure Q with respect to P is given by    dQ if Q  P E dP log dQ dP H(Q|P) := +∞ otherwise, and Mf := Mf (P) := {Q ∈ Ma : H(Q|P) < ∞} is the set of absolutely continuous local martingale measures with finite entropy with respect to the physical measure P. Notice that by convexity of x log x and Jensen’s inequality H(Q|P) ≥ 0 and Q 7→ H(Q|P) is convex. Now we are ready to formulate the three basic assumptions from the paper [5]. Assumptions. S is locally bounded,

(2.1.1)

Mf (P) ∩ Me 6= ∅,

(2.1.2)

i.e., there exists some equivalent local martingale measure with finite entropy, and   C is bounded from below and E e(α+ε)C < ∞ for some ε > 0.

(2.1.3)

The weaker assumption of the existence of some absolutely continuous local martingale measure with finite entropy implies by Frittelli [7, Theorem 2.1] the existence of a unique probability measure QE in Mf that minimizes the relative entropy H(Q|P) over all Q ∈ Mf . Note that assumption (2.1.1) is required to apply [7, Remark 2.1] and extend the result from the bounded to the locally bounded case. If we require the stronger assumption (2.1.2), i.e., that there exists an equivalent, not only absolutely continuous local martingale measure with finite entropy, then [7, Theorem 2.2] implies that QE is in Mf ∩ Me . Both cited theorems go back to the work of Csisz´ar [4].

18

Chapter 2. The semimartingale model

We have to introduce some more notation. First we define a probability measure PC by dPC := KC eαC , dP   where KC = 1/E eαC ∈ (0, ∞) is the normalizing constant. Note that assumption (2.1.3) guarantees that KC ∈ (0, ∞) and therefore PC is well-defined. It is not hard to see and shown in [5] that assumption (2.1.3) implies Mf (P) = Mf (PC ) and that assumption (2.1.2) holds for P if and only if it holds for PC . Therefore we can under our three assumptions define the unique element QE,C of Mf ∪ Me that minimizes the relative entropy H(Q|PC ) with respect to PC over all Q ∈ Mf . We can now state the following duality result. Theorem 2.1.1 (Theorem 2.1 in [5]). Under the assumptions (2.1.1), (2.1.2) and (2.1.3) the following duality relation holds:    −αZ(c,ϑ)   sup E −e = − exp − inf H(Q|P) + αc − EQ[αC] , Q∈Mf

ϑ∈Θ1

where  Θ1 := ϑ ∈ L(S) : e−αGT (ϑ) ∈ L1 (PC ) and G(ϑ) is a QE,C -martingale , and the supremum respectively infimum are attained by elements ϑα ∈ Θ1 respectively QE,C ∈ Mf ∩ Me . Remark 2.1.2. Note that the paper [5] by Delbaen et al., being mainly concerned with the stability of this duality result for different sets of strategies, contains also more sophisticated results of this type for different sets Θ2 and Θ3 . In this connection also the papers of Kabanov and Stricker [12] and Schachermayer [24] should be mentioned. But for our purpose the above theorem suffices. Let us now proceed to Subsection 5.2, ”Risk-Averse Asymptotics” in the paper [5]. First the indifference price, a concept introduced by Hodges and Neuberger [10], is considered. The indifference price is the price of the contingent claim C, at which the agent is indifferent between having the claim in her portfolio or not. Mathematically let   Jα (c, C) := sup E −e−α(c+GT (ϑ)−C) ϑ∈Θ1

be the maximal expected utility the agent can obtain if she starts with initial capital c and has a short position in the contingent claim C. Then the indifference price pα (c, C) is defined by Jα (c, 0) = Jα (c + pα (c, C), C). Now it is a reasonable question to ask what happens to the indifference price pα (c, C) when α, the risk aversion parameter, tends to infinity. For the Brownian filtration case

2.2. First extensions

19

this question was answered by Rouge and El Karoui [21] using a dynamic programming approach. See also Becherer [2], who for example also looks at the case α → 0. Using the duality relation described by Theorem 2.1.1, it is not hard to answer this question in full generality. Theorem 2.1.3 (Corollary 5.1 in [5]). Given that the assumptions (2.1.1), (2.1.2) and (2.1.3) hold for all α > 0, the indifference price converges to the superreplication price c? : lim pα (c, C) = sup EQ[C] := c? .

α→∞

Q∈Me

So the behavior of the indifference price under increasing risk aversion is well understood. And it makes good sense from the economic viewpoint: we know, see, e.g., Kramkov [14], that the superreplication price c? is the minimal initial endowment required to be able to superhedge the contingent claim C, i.e., to generate a level of wealth VT (ϑ) that is greater or equal to the contingent claim C almost surely. An agent who is willing to bear some risk might not ask for the whole superreplication price c? , for her the possible surplus in certain states of the world offsets the possible losses in other states. But the more risk avers the agent becomes, the more she dislikes those potential losses, the harder it gets to make up for those losses by a surplus in other states. In the end, if her risk aversion gets infinite, she is not going to accept any losses at all and therefore requires the whole superreplication price c? . But what about the strategy ϑ and the outcome Z(ϑ)? Does the strategy converge? If yes, to which strategy? If no, does at least the outcome converge? Dealing with these questions, Delbaen et al. show the following theorem, where ϑα ∈ Θ1 denotes the optimal strategy, i.e.,    α  sup E −e−αZ(c,ϑ) = E −e−αZ(c,ϑ ) . ϑ∈Θ1

α

The existence of ϑ follows from Theorem 2.1.1. Theorem 2.1.4 (Theorem 5.2 in [5]). Given that the assumptions (2.1.1), (2.1.2) and (2.1.3) hold for all α > 0, then

lim Z(c? , ϑα )− L1 (P) = 0. α→∞

So this theorem tells us that, provided we start with enough initial endowment, the negative part of the outcome Z(c? , ϑα )− tends to zero. This means that the strategy ϑα behaves in the limit like a superhedging strategy, by not allowing any losses.

2.2

First extensions

We are now going to extend and generalize the result presented in the previous section. First we extend Theorem 2.1.4 to hold not only for the physical measure P but for all

20

Chapter 2. The semimartingale model

Q ∈ Mf . This is done in Theorem 2.2.4. Using this result, we then show that under the assumption of a complete market not only the negative part of the final outcome Z(c? , ϑα )− converges to 0, but also the positive part Z(c? , ϑα )+ and therefore the whole outcome Z(c? , ϑα ), see Corollary 2.2.5. We will use some basic definitions and results from Orlicz space theory. Any book on the subject, like Krasnoselskii and Rutickii [15] or Musielak [17], can be used as a reference. We will use Rao and Ren [19], since there the needed results are presented in a condensed, but readily accessible form. We set Φ(x) := e|x| − |x| − 1. This gives us an N -function (nice Young function, [19, p.1]) Φ : R → R+ , i.e., Φ is even, convex, Φ(x) = 0 if and only if x = 0, and Φ(x) = 0, x→0 x lim

Φ(x) = ∞. x→∞ x lim

We are only interested in the N -function Φ(x) = e|x| − |x| − 1, but all the following statements concerning general Orlicz space theory hold for arbitrary N -functions. The complementary N -function Ψ can be defined by Ψ(y) = sup {x|y| − Φ(x) : x ≥ 0} , y ∈ R. For our special function Φ(x) = e|x| − |x| − 1 this gives Ψ(y) = (1 + |y|) log(1 + |y|) − |y|, see e.g., [17, p. 83, Example 13.5(II)]. For any pair of complementary N -functions Φ, Ψ we can define two different norms. Definition 2.2.1. For a measure space (Ω, F, µ) the space   Z Φ L = f : Ω → R, measureable, Φ(af )dµ < ∞ for some a > 0 Ω

is called Orlicz space on (Ω, F, µ). For a pair of complementary N -functions Φ, Ψ and f ∈ LΦ let Z  Z  kf kΦ := sup f gdµ : Ψ(g)dµ ≤ 1, g : Ω → R, measurable Ω



denote the Orlicz norm, which is actually a semi-norm. Furthermore we define     Z f dµ ≤ 1 , kf k(Φ) := inf k > 0 : Φ k Ω the gauge norm or Luxemburg norm. With these definitions we have the following H¨older inequality.

2.2. First extensions

21

Theorem 2.2.2 (Theorem 8, p. 17, in [19]). For f ∈ LΦ and g ∈ LΨ kf gkL1 (µ) ≤ kf k(Φ) kgkΨ holds true. We are now ready to extend Theorem 2.1.4. The following lemma shows that the convergence result also holds with respect to the corresponding Luxemburg norm. Then we use this lemma to show convergence in L1 (Q) for Q ∈ Mf . Lemma 2.2.3. Given our usual setting and that the assumptions (2.1.1), (2.1.2) and (2.1.3) hold for all α > 0, let Φ(x) := e|x| − |x| − 1 and denote by kf k(Φ) the Luxemburg norm for the probability space (Ω, F, P). Then

lim Z(c? , ϑα )− (Φ) = 0. α→∞

Proof. We are actually going to show slightly more, namely that kZ(c? , ϑα )− k(Φ) ≤ 1/α. Since     Z f kf k(Φ) = inf k > 0 : Φ dP ≤ 1 , k Ω it suffices to show that Z Z   ? α − αZ(c? ,ϑα )− ? α − Φ(αZ(c , ϑ ) )dP = e − αZ(c , ϑ ) − 1 dP ≤ 1. Ω



This again will follow once we showed that h i Z   αZ(c? ,ϑα )− αZ(c? ,ϑα )− E e = e dP ≤ 2

(2.2.1)



holds. The following step is exactly as in the proof of Theorem 5.2 in [5]. By Theorem 2.1.1  ? α  log E e−αZ(c ,ϑ ) = − inf (H(Q|P) + αc? − EQ[αC]) =: RHS.

(2.2.2)

Q∈Mf

[5, Lemma 3.5] states that under our assumptions for Q ∈ Mf 1   EQ[C] ≤ H(Q|P) + E eC . e

(2.2.3)

So assumption (2.1.3) together with the fact that c? ∈ (−∞, ∞] guarantees that H(Q|P)+ αc? − EQ[αC] is well defined for any Q ∈ Mf . The assumptions (2.1.2) and (2.1.3) imply that supQe ∈Me EQe[C] ≥ EQ[C] for all Q ∈ Mf . To see this assume that there would exist an ε > 0 and a probability measure Q ∈ Mf such that EQ[C] − EQe[C] > ε ∀Qe ∈ Me .

22

Chapter 2. The semimartingale model

By assumption (2.1.2) there exists an Qe ∈ Mf ∩ Me . Thus the linear combination Qλ := λQe + (1 − λ)Q,

λ ∈ (0, 1]

is in Me and by our assumption      dQ dQλ dQ − dQe ε
(2.2.4)

Using (2.2.3) and assumption (2.1.3), namely C bounded from below, the last expression goes to 0 for λ → 0 and this gives the desired contradiction. Therefore supQe ∈Me EQe[C] − EQ[C] = c? − EQ[C] ≥ 0 for all Q ∈ Mf and we get RHS ≤ 0. Equation (2.2.2) now gives us  ? α  E e−αZ(c ,ϑ ) ∈ [0, 1] and therefore h i ? α − E eαZ(c ,ϑ ) χ{Z(c? ,ϑα )<0} ≤ 1, where χ{E} is the indicator function, equal to 1, if the event E is true, 0 otherwise. This implies i h i h ? α − ? α − E eαZ(c ,ϑ ) ≤ 1 + E eαZ(c ,ϑ ) χ{Z(c? ,ϑα )≥0} ≤ 2, i.e., we have shown (2.2.1) and finished the proof. Theorem 2.2.4. As usual we are given our standard setting and the assumptions (2.1.1), (2.1.2) and (2.1.3) hold for all α > 0. Then

lim Z(c? , ϑα )− L1 (Q) = 0

α→∞

for all Q ∈ Mf . Proof. We use Theorem 2.2.2 for Φ(x) = e|x| −|x|−1 and Ψ(y) = (1+|y|) log(1+|y|)−|y|.



dQ



? α −

Z(c? , ϑα )− 1 = Z(c? , ϑα )− dQ

≤ Z(c , ϑ ) (Φ)

dP . L (Q) dP L1 (P) Ψ From the above lemma we know that kZ(c? , ϑα )− k(Φ) tends to 0 for α → ∞, so it just remains to show that     dQ

dQ

sup E g : E[Φ(g)] ≤ 1 =

dP < ∞. dP Ψ To see this we use the definition of the complementary function to get   dQ dQ g≤Ψ + Φ(g), dP dP

2.2. First extensions

rewrite for y ≥ 0

23

    1+y 1+y Ψ(y) = 2 log + log 2 − y, 2 2

and use the convexity of the relative entropy Q 7→ H(Q|P) together with our assumption H(Q|P) < ∞ to get      dQ dQ E g ≤E Ψ + E[Φ(g)] dP dP " #      dQ dQ dP dP + + dQ dQ dP dP dP dP +E 1+ = 2E log log 2 − E + E[Φ(g)] 2 2 dP dP   Q + P ≤ 2H P + 2 log 2 < ∞ 2

for E[Φ(g)] ≤ 1.

If we now assume in addition that the underlying market is complete, i.e., that there exists only one equivalent local martingale measure, we get from the above theorem immediately L1 (Q) convergence not only of the negative part Z(c? , ϑα )− , but convergence of the outcome Z(c? , ϑα ) itself. Corollary 2.2.5. Given the setting of Theorem 2.2.4 we assume in addition that the market is complete, i.e., Me = {Q}. Then lim kZ(c? , ϑα )kL1 (Q) = 0

α→∞

Proof. Since the market is assumed to be complete c? = sup EQ[C] = EQ[C] . Q∈Me

Furthermore ϑα ∈ Θ1 by Theorem 2.1.1, implying that G(ϑα ) is a Q-martingale, i.e., EQ[GT (ϑα )] = 0, thus EQ[Z(c? , ϑα )] = EQ[c? + GT (ϑα ) − C] = 0. Therefore Theorem 2.2.4 implies also

lim Z(c? , ϑα )+ L1 (Q) = 0

α→∞

and lim kZ(c? , ϑα )kL1 (Q) = 0.

α→∞

24

Chapter 2. The semimartingale model

Remark 2.2.6. In the above results we always used as initial endowment of the utility maximizing agent the superreplication price c? . This admits a nice interpretation, but the results can easily be extended to an arbitrary initial endowment c ∈ R. This is due to the fact that for the exponential utility the optimal strategy ϑα = ϑα (c) does not depend on the initial endowment c. −e−Z(c,ϑ) = −e−Z(0,ϑ)−c = −e−c e−Z(0,ϑ) , i.e., the initial endowment c translates into the multiplicative constant e−c and has therefore no influence on the optimization. This is a special feature of the exponential utility function and not true for general utility functions. So we have Z(c? , ϑα ) = Z(c, ϑα ) + (c? − c) for any initial endowment c ∈ R, and we can in the above results always replace one expression by the other. Especially in the result concerning the complete market, Corollary 2.2.5, we get lim kZ(c, ϑα )kL1 (Q) = c − c? α→∞

for an arbitrary initial endowment c ∈ R, where the limit c − c? is just the amount being surplus to the superhedging price.

Chapter 3 Finite setting To get a better understanding of the situation and to investigate in an incomplete market the behavior of both the outcome Z(ϑ), not just Z(ϑ)− , and the strategy ϑ under increasing risk aversion, we start by considering the case of a finite underlying probability space and discrete time, i.e., a model, where the market can be represented by an event tree. We will start by describing an algorithm that determines a unique strategy. Theorem 3.1.9 shows that the optimal strategy ϑα , optimal for an agent maximizing expected exponential utility −e−αx , tends to this unique strategy as α → ∞. In Section 3.2 an other characterization of the special strategy is given as the maximal element of a total preordering. The notation of a balanced strategy, one further description of this special strategy, is given in Section 3.3. This notation is then used in the two following sections to investigate the connections between increasing risk aversion and superhedging, Section 3.4, and to generalize the previous convergence results to a family of arbitrary utility functions with risk aversion tending to ∞.

3.1

Algorithm

Let us assume that we are given a finite probability space Ω = {ω1 , . . . , ωN }, a finite time horizon T ∈ N, a filtration (Ft )t=0,1,...,T , and a probability measure P. For finite Ω we always assume that P [ωi ] > 0 for all i = 1, . . . N . The market consists of a stock S = (St )t=0,1,...,T , i.e., an R-valued adapted process, and a riskless bond B. We assume wlog that the stock S represents the discounted price process, i.e., that it is denoted in terms of units of the riskless bond, and that the bond is constant equal to 1. The stock process S can be represented by an event tree. Each node in the tree corresponds to an atom F ∈ Ft , t = 0, 1, . . . , T . By an atom F ∈ Ft we mean an element F ∈ Ft , F 6= ∅ such that G ∈ Ft with G ( F implies G = ∅. By slight abuse of notation we say node F at time t for an atom F ∈ Ft . G is an successor of F ∈ Ft , if G ∈ Ft+1 and G ⊂ F . 25

26

Chapter 3. Finite setting

In this finite setting a trading strategy ϑ, i.e., an Ft−1 -measurable process, can be ~ ∈ RM , where M is the number of nodes in the tree from time 0 to viewed as a vector ϑ time T − 1. These are the nodes at which one can make a decision on how to trade. By slight abuse of notation we will write ϑ also for the vector, not just for the process. We denote by ϑF ∈ R the value of the strategy ϑ at the node F ∈ Ft , t = 0, . . . , T . This means that ϑF denotes the number of shares the agent holds in the risky asset from time t up to time t + 1, if the state of the world ω satisfies ω ∈ F , which is, since F ∈ Ft , known at time t. Furthermore the final gain of a trading strategy ϑ, in general the stochastic R integral GT (ϑ) = ϑdS, can be written as an inner product ϑ · ki := GT (ϑ)(ωi ), where ki represents the development of the stock S in the state ωi of the world. Note that GT (ϑ)(ωi ) is linear in ϑ. We assume that our market is arbitrage free (see, e.g., [9]), this is a standing assumption which we always make. In the current finite setting this simply means that at each time step the stock is not allowed to move only up or only down. In mathematical terms this means that for each atom F ∈ Ft , t = 0, 1, . . . , T − 1 either P [4St+1 = 0; F ] = P [St+1 = St ; F ] = 1 (this corresponds to the case when the stock does not move at all), or P [4St+1 > 0; F ] > 0 and P [4St+1 < 0; F ] > 0. As before we assume that the agent has a short position in a contingent claim C ∈ FT , Ci := C(ωi ), and starts with initial endowment c ∈ R. So by employing strategy ϑ the agent achieves an output Zi (ϑ) := Z(ϑ)(ωi ) := c + GT (ϑ)(ωi ) − Ci . We define the set  Z := Z(ϑ)| ϑ ∈ RM of all possible outcomes. Now we are going to describe a special element Z ? ∈ Z. This element will turn out to be the limiting outcome if the risk aversion increases to infinity. We will also show that Z ? is the maximal element with respect to a preordering to be defined later. But let us now start by describing this special element Z ? using an inductive algorithm. The main idea is to look for each strategy at the worst state of the world and try to choose a strategy that makes the worst outcome as good as possible. Since this might not give a distinct strategy respectively outcome, one has to forget about certain states ωi and repeat the procedure again. We will afterwards give an example to illustrate this idea. But let us now give the precise mathematical description. Set I0 := {1, . . . , N } and Θ?0 := RM . Step 1. Define E1 (ϑ) := min Zi (ϑ). i∈I0

The function E1 (ϑ) is, as the minimum of a finite number of affine functions, continuous and concave. A no-arbitrage argument (see Remark 3.1.1) implies that E1 (ϑ)

3.1. Algorithm

27

is bounded above. We define ( Θ?1 :=

)

˜ ϑ ∈ Θ?0 : E1 (ϑ) = sup E1 (ϑ) ˜ ? ϑ∈Θ 0

⊂ Θ?0 .

˜ Since the supremum supϑ∈Θ ˜ ? E1 (ϑ) is a maximum (see Remark 3.1.2), the set of 0 maximizers Θ?1 is non-empty. Furthermore Θ?1 is convex (follows from concavity of E1 ) and closed (follows from continuity of E1 ). The definition of Θ?1 implies that E1 (ϑ) has the same value for all ϑ ∈ Θ?1 , and we can write E1 (Θ?1 ) ∈ R for this value. Let I1 := {i ∈ I0 | Zi (Θ?1 ) = E1 (Θ?1 )} , where Zi (Θ?1 ) = E1 (Θ?1 ) is shorthand for Zi (ϑ) = E1 (Θ?1 ) for all ϑ ∈ Θ?1 . Furthermore we define for all i ∈ I1 Zi? := E1 (Θ?1 ). Step 2. Define E2 (ϑ) := min Zi (ϑ). i∈I0 \I1

The function E2 (ϑ) is again continuous and concave. The fact that E2 (ϑ) is bounded from above follows by the same no-arbitrage argument, namely Remark 3.1.1, as before. So we get again a set ( ) ˜ ⊂ Θ? Θ? := ϑ ∈ Θ? : E2 (ϑ) = sup E2 (ϑ) 2

1

1

˜ ? ϑ∈Θ 1

which is non-empty, convex and closed. Define I2 := {i ∈ I0 \ I1 : Zi (Θ?2 ) = E2 (Θ?2 )} , and set for all i ∈ I2 Zi? := E2 (Θ?2 ). Step k. The function Ek (ϑ) :=

min

i∈I0 \(I1 ∪···∪Ik−1 )

Zi (ϑ).

is again continuous, concave, and bounded from above (Remark 3.1.1). So we can define the non-empty (Remark 3.1.2 guarantees the supremum is a maximum), convex, and closed set ( ) ˜ ⊂ Θ? , Θ? := ϑ ∈ Θ? : Ek (ϑ) = sup Ek (ϑ) k

k−1

˜ ? ϑ∈Θ k−1

k−1

the set of natural numbers Ik := {i ∈ I0 \ (I1 ∪ · · · ∪ Ik−1 ) : Zi (Θ?k ) = Ek (Θ?k )} , and we set for all i ∈ Ik Zi? := Ek (Θ?k ).

28

Chapter 3. Finite setting

Continue this way until I1 ∪ · · · ∪ IK = I0 . Note that the cardinality of Ik is greater than 1 for all k, meaning that there is at least one state of the world ωi for which Z(Θ?k )(ωi ) attains the supremum Ek (Θ?k ). Therefore the algorithm will terminate, and this way we have defined Z ? by Zi? = Zi (Θ?K ) for all i ∈ I0 = {1, 2, . . . , N }. We now have to make sure that the above algorithm really works. This is done by the following two remarks. They have to be applied iteratively. First we use Remark 3.1.1 for k = 1, then Remark 3.1.2. In the next turn we apply those remarks for k = 2, and so on. Remark 3.1.1. We want to show that Ek (ϑ) = mini∈I0 \(I1 ∪···∪Ik−1 ) Zi (ϑ) is bounded from above on Θ?k−1 for k = 1, . . . , K. For k = 1 we have E1 (ϑ) = mini∈I0 Zi (ϑ). Let us suppose the claim does not hold, i.e., for any constant K there exists a ϑ ∈ Θ?k−1 such that Zi (ϑ) > K for all i ∈ I0 \ (I1 ∪ · · · ∪ Ik−1 ). The set Θ?k−1 is well defined by remark 3.1.2 applied for k − 1 respectively by definition for k = 1, i.e., Θ?0 = RM . By construction of the set Θ?k−1 we have for all i ∈ I1 ∪ · · · ∪ Ik−1 — where I1 ∪ · · · ∪ Ik−1 = ∅ for k = 1 — that Zi (ϑ) = Zi? , independent of ϑ ∈ Θ?k−1 . Now take any ϑ1 ∈ Θ?k−1 and determine K such that Zi (ϑ1 ) < K − 1 for all i ∈ I0 . This is possible since there are only a finite number of i ∈ I0 . By our assumption there exists a ϑ2 ∈ Θ?k−1 such that Zi (ϑ2 ) > K for all i ∈ I0 \ (I1 ∪ · · · ∪ Ik−1 ). This gives us GT (ϑ2 − ϑ1 )(ωi ) = Zi (ϑ2 ) − Zi (ϑ1 ) > K − (K − 1) = 1,

∀i ∈ I0 \ (I1 ∪ · · · ∪ Ik−1 )

and GT (ϑ2 − ϑ1 )(ωi ) = Zi (ϑ2 ) − Zi (ϑ1 ) = 0,

∀i ∈ I1 ∪ · · · ∪ Ik−1 .

Therefore the trading strategy ϑ2 − ϑ1 provides a gain GT (ϑ2 − ϑ1 )(ωi ) = 0 for i ∈ I1 ∪ · · · ∪ Ik−1 and a gain GT (ϑ2 − ϑ1 )(ωi ) > 1 for i ∈ I0 \ (I1 ∪ · · · ∪ Ik−1 ) 6= ∅. But this is by definition an arbitrage strategy and does therefore by our no-arbitrage condition not exist, giving us the desired contradiction. ˜ where Ek (ϑ) ˜ = Remark 3.1.2. We want to show that the supremum supϑ∈Θ Ek (ϑ), ˜ ? k−1 ˜ is really attained for an ϑ ∈ Θ? , k = 1, . . . , K. We know for mini∈I0 \(I1 ∪···∪Ik−1 ) Zi (ϑ), k−1 k = 1 from the definition Θ?0 = RM and for k = 2, . . . , K from the definition of the algorithm that Θ?k−1 is closed and convex. Thus, since we are only considering finitely ˜ we are looking for the supremum of a polyhedral convex set many hyperplanes Zi (ϑ), over a closed, convex domain. That under this assumptions the supremum is a maximum, follows from the fact that a polyhedral convex set has at most a finite number of extreme points and extreme directions [20, Part IV, §19], and from the boundedness of Ek (ϑ) (Remark 3.1.1). Remark 3.1.3. Note that the probability measure had no influence on the definition of the special outcome Z ? and the special set Θ?K . The only important thing about the probability measure P is that is assigns positive probability to all states ω ∈ Ω. Therefore Z ? and Θ?K are the same for all probability measures that are equivalent to P. Also the initial endowment has no influence on the definition of Z ? and Θ?K .

3.1. Algorithm

29

 

   

 

 

   





   Figure 3.1.1: Zi (ϑ), i = 1, 2, 3, 4

Remark 3.1.4. It might very well happen that for example already |Θ?1 | = 1 (|A| denotes the cardinality of a set A), and therefore all the following maximizations supϑ∈Θ?1 E2 (ϑ), supϑ∈Θ?2 E3 (ϑ), . . . become trivial. We still state the algorithm this way so we do not have to include special cases. Example 3.1.5. Let us consider a one time-step model with 4 states, Ω = {ω1 , ω2 , ω3 , ω4 }. The stock starts at S0 = 3 and 4S1 = (3, 0, −1, −2). The contingent claim is given by C = (3, 3, 2, 0) and the initial endowment is assumed to be c = 3. So we get Z1 (ϑ) = 3ϑ, Z2 (ϑ) = 0, Z3 (ϑ) = −ϑ + 1, Z4 (ϑ) = −2ϑ + 3. From Figure 3.1.1 one readily sees that   3ϑ    0 E1 (ϑ) = min Zi (ϑ) = i=1,2,3,4  −ϑ + 1     −2ϑ + 3

if ϑ < 0, if 0 ≤ ϑ ≤ 1, if 1 < ϑ ≤ 2, if 2 < ϑ.

˜ = 0 and this supremum is attained for ϑ ∈ [0, 1], i.e., Θ? = [0, 1]. Therefore supϑ∈R E1 (ϑ) ˜ 1 The only state ωi of the world where the outcome Zi (Θ?1 ) = 0, i.e., Zi (ϑ) = 0 for all ϑ ∈ Θ?1 , is ω2 , thus I1 = {2}

and Z2? = 0.

30

Chapter 3. Finite setting

For the next step we forget about the state ω2 . This gives us   if ϑ ≤ 41 ,  3ϑ E2 (ϑ) = min Zi (ϑ) = i=1,3,4

and Θ?2

  1 = , 4

−ϑ + 1   −2ϑ + 3

I2 = {1, 3} ,

if

1 4

< ϑ ≤ 2,

if 2 < ϑ.

3 and Z1? = Z3? = . 4

Now we just have to consider the state ω_4, since I_0 \ (I_1 ∪ I_2) = {4}, and we therefore immediately get

E_3(ϑ) = min_{i=4} Z_i(ϑ) = −2ϑ + 3,   Θ⋆_3 = {1/4},   I_3 = {4},   and   Z⋆_4 = 5/2.

Now the algorithm stops, since I_1 ∪ I_2 ∪ I_3 = I_0, and we have defined the special outcome Z⋆ and the set of trading strategies Θ⋆_3:

Z⋆ = (3/4, 0, 3/4, 5/2),   Θ⋆_3 = {1/4}.

Remark 3.1.6. What do we know about the set Θ⋆_K? By construction, respectively by Remark 3.1.2, Θ⋆_K ≠ ∅. So the first possibility is |Θ⋆_K| = 1. Now assume |Θ⋆_K| > 1, say ν, η ∈ Θ⋆_K ⊂ · · · ⊂ Θ⋆_1. Then Z_i(ν) = Z_i(η) for all i ∈ I_1 ∪ · · · ∪ I_K = I_0, and since Z_i(ϑ) is a hyperplane, we conclude that Z_i is, for i ∈ I_0, constant on the affine subspace generated by ν and η. Thus Θ⋆_K includes this whole subspace, and any strategy within this subspace generates the same outcome Z⋆_i. Summing up, Θ⋆_K is either a single point (which can be interpreted as a 0-dimensional affine subspace) or an affine subspace.

Remark 3.1.7. It might seem a little unsatisfactory that the algorithm does not always give a set Θ⋆_K which is a singleton and thus does not always provide us with a unique strategy. But actually this non-uniqueness of Θ⋆_K has nothing to do with the special definition of Θ⋆_K. It is just the question of whether the mapping ϑ ↦ Z(ϑ) is injective or not. If the mapping is injective, Θ⋆_K is a singleton, since Z⋆ = Z(ϑ) for all ϑ ∈ Θ⋆_K. The next proposition will show that the mapping is not injective if and only if there exists a node at which the stock can only move to a single value, i.e., there exists a constant k such that ΔS_{t+1} = k conditional on being at this special node. If such a node does not exist, the tree is called non-degenerate, see, e.g., [25]. Note that with our standing assumption of no-arbitrage the constant k has to equal 0. Note further that no-arbitrage together with non-degeneracy implies that P[ΔS_{t+1} > 0; F] > 0 and P[ΔS_{t+1} < 0; F] > 0 for all nodes F ∈ F_t, t = 0, . . . , T − 1, i.e., the stock has to move up and down with positive probability. If however such a degenerate node exists, the set Θ⋆_K of trading strategies can obviously not be a singleton: whatever the agent decides to trade at this node, it has no influence on the outcome. But this is a kind of pathological ambiguity within the set of trading strategies. It can easily be overcome by assuming, e.g., that the agent does not trade at all in these situations, i.e., by fixing ϑ_F = 0 for each such node F ∈ F_t, t = 0, . . . , T − 1.

Proposition 3.1.8. There exists a one-to-one correspondence between trading strategies ϑ and outcomes Z(ϑ) if and only if the tree described by the stock process S is non-degenerate, meaning that there does not exist an atom F ∈ F_t, t = 0, . . . , T − 1, such that P[ΔS_{t+1} = 0; F] = 1.

Proof. As mentioned in the previous remark, if the tree is degenerate, there will obviously exist two different trading strategies ν ≠ η such that Z(ν) = Z(η). So it just remains to show that the existence of two different trading strategies ν ≠ η with Z(ν) = Z(η) implies the existence of a node F ∈ F_t, t = 0, . . . , T − 1, such that P[ΔS_{t+1} = 0; F] = 1, i.e., that the tree is degenerate. Let us consider the trading strategy ϑ = ν − η ≠ 0. Since by assumption Z(ν) = Z(η), it follows that G_T(ϑ) = 0. This means that there exists a strategy ϑ ≠ 0 that gives us, if we start with 0 initial endowment, a payoff of 0 at the end. Here we just look at the stock, we do not consider the claim. Let us now assume that the tree generated by the stock S is non-degenerate and derive a contradiction to the no-arbitrage condition. Since the tree is non-degenerate and we assume no-arbitrage, the stock moves up and down at each node with positive probability. Therefore ϑ ≠ 0 implies that at least at one node F ∈ F_t the strategy ϑ does worse than the strategy 0 of not investing at all, i.e., G(ϑ)_{t+1} < G(0)_{t+1} on a node H ∈ F_{t+1} succeeding the node F. If we therefore consider the strategy of not trading at all except on the subtree starting at the node H, where we use the strategy ϑ, we arrive at a strategy that has a strictly positive outcome with positive probability, i.e., an arbitrage strategy and the desired contradiction.

We now give the main result of this section. Denote by p_i = P[ω_i] the probability that ω_i turns out to be the true state of the world. Then the problem of maximizing expected exponential utility is equivalent to minimizing

P_α(ϑ) := sum_{i=1}^{N} p_i e^{−α(c + G_T(ϑ)(ω_i) − C_i)} = sum_{i=1}^{N} p_i e^{−α Z_i(ϑ)} > 0

over all strategies ϑ ∈ R^M. Let ϑ^α denote the minimizer of P_α and set Z_i^α := Z_i(ϑ^α). For considerations concerning the existence of ϑ^α see Proposition 3.5.3, respectively the remark preceding that proposition, where we treat the case of a general utility function, not just the exponential one. For η, ϑ ∈ R^M let d(η, ϑ) denote a distance induced by a norm ‖·‖, e.g., the Euclidean distance, and set d(η, Θ) = inf_{ϑ∈Θ} d(η, ϑ). Further let O(·) denote the Landau symbol, here always used for α → ∞; f(α) = O(1/α) means, for example, that lim sup_{α→∞} α f(α) < ∞.


Theorem 3.1.9. Given a finite probability space (Ω = {ω_1, . . . , ω_N}, (F_t)_{t=0,...,T}, P) with a finite time horizon T ∈ N, an R-valued, adapted stochastic process (S_t)_{t=0,...,T} representing the discounted stock price process, a riskless bond B = 1, and the contingent claim C ∈ F_T. Let ϑ^α be the optimal strategy for an expected utility maximizing agent with utility function U_α(x) = −e^{−αx} who starts with initial endowment c and holds a short position in the contingent claim C, i.e., whose outcome generated by the predictable strategy ϑ equals Z_i(ϑ) := Z(ϑ)(ω_i) := c + G_T(ϑ)(ω_i) − C_i. The set Θ⋆_K of trading strategies and the outcome Z⋆ are defined by the above algorithm. Then we have

d(ϑ^α, Θ⋆_K) = O(1/α),

therefore for all i ∈ I_0 = {1, . . . , N}

lim sup_{α→∞} |αZ_i^α − αZ⋆_i| < ∞,

and in particular

lim_{α→∞} Z_i^α = Z⋆_i.

Proof. We will show by induction over j = 1, . . . , K that d(ϑ^α, Θ⋆_j) = O(1/α) for α → ∞. The assertion lim sup_{α→∞} |αZ_i^α − αZ⋆_i| < ∞ then follows directly from the fact that Z_i(ϑ) is affine in ϑ; more explicitly, for ϑ⋆ ∈ Θ⋆_K we have αZ_i^α − αZ⋆_i = α(Z_i(ϑ^α) − Z_i(ϑ⋆)) = αG_T(ϑ^α − ϑ⋆)(ω_i). The last assertion is then obvious.

First notice that it 'does not matter for earlier Z_i' whether we 'move in a direction of Θ⋆_j'. By that we mean the following. Let us assume ϑ, ϑ + ν ∈ Θ⋆_j; then we have Z_i(η) = Z_i(η + ν) for all η ∈ R^M and i ∈ I_1 ∪ · · · ∪ I_j. This holds true since ϑ, ϑ + ν ∈ Θ⋆_j ⊂ Θ⋆_k for 1 ≤ k ≤ j, and so Z_i(ϑ) = Z_i(ϑ + ν) for i ∈ I_1 ∪ · · · ∪ I_j. By the definition of Z_i(ϑ) = c + G_T(ϑ)(ω_i) − C_i this gives G_T(ν)(ω_i) = 0, which implies Z_i(η) = Z_i(η + ν) for i ∈ I_1 ∪ · · · ∪ I_j.

Let us now assume that d(ϑ^α, Θ⋆_j) = O(1/α), but d(ϑ^α, Θ⋆_{j+1}) ≠ O(1/α) for j ≥ 1. The special case j = 0, an easy adaptation of this case, will be considered afterwards. Our assumption implies the existence of ν^α ≠ 0 such that d(ϑ^α + ν^α, Θ⋆_{j+1}) = O(1/α) and Z_i(ϑ^α + ν^α) = Z_i(ϑ^α) for i ∈ I_1 ∪ · · · ∪ I_j. To see this, notice that there exist a constant K_1 > 0 and η^α ∈ Θ⋆_j such that d(ϑ^α, η^α) < K_1/α. Furthermore there exists a ν^α such that η^α + ν^α ∈ Θ⋆_{j+1}. So we have d(ϑ^α + ν^α, η^α + ν^α) = d(ϑ^α, η^α) < K_1/α, i.e., d(ϑ^α + ν^α, Θ⋆_{j+1}) = O(1/α). Also η^α + ν^α ∈ Θ⋆_{j+1} ⊂ Θ⋆_j and η^α ∈ Θ⋆_j imply, by our considerations above, Z_i(ϑ^α + ν^α) = Z_i(ϑ^α) for all i ∈ I_1 ∪ · · · ∪ I_j.

What can we say about ν^α? First we know that ‖ν^α‖ ≠ O(1/α). Otherwise it would follow from d(ϑ^α, Θ⋆_{j+1}) ≤ d(ϑ^α + ν^α, Θ⋆_{j+1}) + ‖ν^α‖ that d(ϑ^α, Θ⋆_{j+1}) = O(1/α), a contradiction to our assumption d(ϑ^α, Θ⋆_{j+1}) ≠ O(1/α).


The second property of ν^α we need is that it is not possible that G_T(ν^α)(ω_i) = 0 for all i ∈ I_{j+1}. If we had equality for all i ∈ I_{j+1}, this would imply that for η ∈ Θ⋆_j from our construction of ν^α above Z_i(η + ν^α) = Z_i(η) holds. This would give us η ∈ Θ⋆_{j+1} and therefore again the contradiction d(ϑ^α, Θ⋆_{j+1}) = O(1/α).

It is also not possible that GT (−ν α )(ωi ) > 0 for all i ∈ Ij+1 . This can once more be seen by contradiction. Otherwise we could take any η ∈ Θ?j+1 and change it by a small multiple λ > 0 of −ν α . This would not change the value of Zi (η − λν α ) for i ∈ I1 ∪ · · · ∪ Ij , but it would give us Zi (η − λν α ) > Zi (η) for i ∈ Ij+1 . And for l ∈ Ij+2 ∪ · · · ∪ IK we have Zi (η) < Zl (η). Thus we get, if we take λ sufficiently small, Zi (η − λν α ) < Zl (η − λν α ) and so a contradiction to the construction of Θ?j+1 . Collecting the last two results, we can guarantee to find an i? ∈ Ij+1 such that GT (ν α )(ωi? ) > 0.

The fact that Z_i(η) ≥ Z_{i⋆}(η) for all i ∈ I_{j+1} ∪ · · · ∪ I_K and for all η ∈ Θ⋆_{j+1} ensures, together with d(ϑ^α + ν^α, Θ⋆_{j+1}) = O(1/α), that we can find a constant K_2 > 0 such that αZ_i(ϑ^α + ν^α) ≥ αZ_{i⋆}(ϑ^α + ν^α) − K_2 for all i ∈ I_{j+1} ∪ · · · ∪ I_K. We are now ready to show that P_α(ϑ^α) > P_α(ϑ^α + ν^α) for sufficiently large α, and this will give us the final desired contradiction.

P_α(ϑ^α + ν^α) − P_α(ϑ^α) = sum_{i∈I_1∪···∪I_j} p_i e^{−αZ_i(ϑ^α+ν^α)} + sum_{i∈I_{j+1}∪···∪I_K} p_i e^{−αZ_i(ϑ^α+ν^α)} − sum_{i∈I_1∪···∪I_j} p_i e^{−αZ_i(ϑ^α)} − sum_{i∈I_{j+1}∪···∪I_K} p_i e^{−αZ_i(ϑ^α)}.     (3.1.1)

Due to our first observation above, Z_i(ϑ^α + ν^α) = Z_i(ϑ^α) for i ∈ I_1 ∪ · · · ∪ I_j, therefore the first and the third term on the right hand side of (3.1.1) cancel, and we have

P_α(ϑ^α + ν^α) − P_α(ϑ^α) = ( sum_{i∈I_{j+1}∪···∪I_K} p_i e^{−αZ_i(ϑ^α+ν^α)} ) − ( sum_{i∈I_{j+1}∪···∪I_K} p_i e^{−αZ_i(ϑ^α)} )
    ≤ sum_{i∈I_{j+1}∪···∪I_K} p_i e^{−αZ_{i⋆}(ϑ^α+ν^α)+K_2} − p_{i⋆} e^{−αZ_{i⋆}(ϑ^α)}
    = e^{−αZ_{i⋆}(ϑ^α)} [ e^{−αG_T(ν^α)(ω_{i⋆})+K_2} ( sum_{i∈I_{j+1}∪···∪I_K} p_i ) − p_{i⋆} ].

It follows now from our above results, namely ‖ν^α‖ ≠ O(1/α) and G_T(ν^α)(ω_{i⋆}) > 0, that the expression in the square brackets becomes, for α large, less than 0 and therefore P_α(ϑ^α + ν^α) − P_α(ϑ^α) < 0. For the case j = 0, i.e., to show that d(ϑ^α, Θ⋆_1) = O(1/α), we do not have to bother about terms in P_α(ϑ^α + ν^α) − P_α(ϑ^α) cancelling, so everything works the same, just without those additional considerations.
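To make the algorithm and the convergence statement of Theorem 3.1.9 concrete, the following sketch (Python; an added illustration, not part of the original text) recomputes Example 3.1.5 numerically. The equal state probabilities are an arbitrary choice, which is harmless since the limit strategy does not depend on the probability measure; the golden-section search is just one convenient way to minimize the strictly convex function P_α. The printed quantity α·|ϑ^α − 1/4| stays bounded, in line with d(ϑ^α, Θ⋆_K) = O(1/α).

import numpy as np

p  = np.array([0.25, 0.25, 0.25, 0.25])   # any probabilities with p_i > 0 work here
dS = np.array([3.0, 0.0, -1.0, -2.0])     # Delta S_1 in the four states of Example 3.1.5
C  = np.array([3.0, 3.0, 2.0, 0.0])       # contingent claim
c  = 3.0                                  # initial endowment

def Z(theta):                             # outcome Z_i(theta) = c + theta*dS_i - C_i
    return c + theta * dS - C

def logP(alpha, theta):                   # log P_alpha(theta), computed in a stable way
    a = np.log(p) - alpha * Z(theta)
    m = a.max()
    return m + np.log(np.exp(a - m).sum())

def optimal_theta(alpha, lo=-5.0, hi=5.0, iters=200):
    # P_alpha is strictly convex in theta, so golden-section search finds its minimizer
    g = (np.sqrt(5.0) - 1.0) / 2.0
    for _ in range(iters):
        m1, m2 = hi - g * (hi - lo), lo + g * (hi - lo)
        if logP(alpha, m1) < logP(alpha, m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

for alpha in [1.0, 10.0, 100.0, 1000.0]:
    th = optimal_theta(alpha)
    # alpha * |theta_alpha - 1/4| should remain bounded (Theorem 3.1.9)
    print(f"alpha={alpha:7.1f}  theta_alpha={th:.6f}  alpha*|theta_alpha-1/4|={alpha*abs(th-0.25):.4f}")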


3.2    Total preordering

The intuitive idea behind the algorithm described in the previous section was to choose the strategy ϑ in such a way as to make the worst possible outcome min_ω Z(ϑ)(ω) as big as possible and thus the loss as small as possible. In this section we will use this idea to describe a very natural preordering and show that Z⋆ is the unique maximal element of the set of possible outcomes Z = {Z(ϑ) : ϑ ∈ R^M} with respect to this total preordering.

Definition 3.2.1. Given two elements Z and Z̃ in Z, let Z_{i_1} ≤ Z_{i_2} ≤ · · · ≤ Z_{i_N} and Z̃_{j_1} ≤ Z̃_{j_2} ≤ · · · ≤ Z̃_{j_N}. We define Z ≽ Z̃ if there exists an L ≥ 1 such that Z_{i_l} = Z̃_{j_l} for 1 ≤ l < L and either Z_{i_L} > Z̃_{j_L} or L = N + 1. Since Z ≽ Z̃ and Z̃ ≽ Z does not imply Z̃ = Z, ≽ is just a preordering, not an ordering. Any two elements are comparable, therefore ≽ describes a total preordering. We use Z ≻ Z̃ as shorthand for Z ≽ Z̃ and Z ≠ Z̃.

Example 3.2.2. Let us take a look at Example 3.1.5 and compare Z(0), Z(1/4) and Z(1). Recall that Z_1(ϑ) = 3ϑ, Z_2(ϑ) = 0, Z_3(ϑ) = −ϑ + 1, Z_4(ϑ) = −2ϑ + 3. Thus we get

Z(0) = (0, 0, 1, 3) → (0, 0, 1, 3),
Z(1/4) = (3/4, 0, 3/4, 5/2) → (0, 3/4, 3/4, 5/2),
Z(1) = (3, 0, 0, 1) → (0, 0, 1, 3),

where we sorted the entries to get Z_{i_1} ≤ Z_{i_2} ≤ Z_{i_3} ≤ Z_{i_4}. So we obtain Z(1/4) ≻ Z(0) and Z(1/4) ≻ Z(1). Since Z(0) ≽ Z(1), Z(1) ≽ Z(0), but Z(1) ≠ Z(0), we also see that ≽ is just a preordering, not an ordering.

Theorem 3.2.3. The element Z⋆ defined by the algorithm in the previous section is the unique maximal element of Z with respect to the total preordering ≽.


Proof. Let Z ≠ Z⋆ be an arbitrary element in Z. Then there exists a k, chosen minimal, such that for ϑ given by Z = Z(ϑ) we have ϑ ∈ Θ⋆_j for j < k and ϑ ∉ Θ⋆_k. Note that ϑ might not be unique, see Remark 3.1.7 and Proposition 3.1.8. But this poses no problem, we may just take any ϑ such that Z = Z(ϑ). If we assume w.l.o.g. that Z⋆_1 ≤ · · · ≤ Z⋆_N, then, by the construction of Z⋆ respectively Θ⋆_j, and since ϑ ∈ Θ⋆_j for j = 1, . . . , k − 1, we get Z⋆_j = Z_j for 1 ≤ j ≤ m := |I_1 ∪ · · · ∪ I_{k−1}|. If we let further Z_{i_1} ≤ · · · ≤ Z_{i_N}, it follows again from the construction of Z⋆ and Θ⋆_j that Z⋆_j ≥ Z_{i_j} for 1 ≤ j ≤ m. Since ϑ ∉ Θ⋆_k, there exists an n ∈ I_k such that Z_n < Z⋆_n. Thus Z_{i_{m+1}} ≤ Z_n < Z⋆_n = Z⋆_{m+1}, which shows that Z⋆ ≻ Z. Therefore we have shown that any arbitrary element Z ∈ Z, Z ≠ Z⋆, is smaller than Z⋆ with respect to the given total preordering, i.e., Z⋆ ≻ Z. Thus Z⋆ is the unique maximal element of Z with respect to the total preordering ≽.
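The comparison of Definition 3.2.1 amounts to a lexicographic comparison of the sorted outcome vectors. The following small sketch (Python; an added illustration, not part of the original text) implements exactly this and checks it against Example 3.2.2.

def ranks_at_least(Z, Z_tilde):
    # Z is ranked at least as high as Z_tilde iff, after sorting both vectors,
    # the first differing entry decides in favour of Z (Definition 3.2.1)
    a, b = sorted(Z), sorted(Z_tilde)
    for x, y in zip(a, b):
        if x != y:
            return x > y
    return True            # identical sorted vectors: both directions hold

# Example 3.2.2: the outcomes Z(0), Z(1/4), Z(1) from Example 3.1.5
Z0, Z14, Z1 = [0, 0, 1, 3], [0.75, 0, 0.75, 2.5], [3, 0, 0, 1]
print(ranks_at_least(Z14, Z0), ranks_at_least(Z14, Z1))   # True, True: Z(1/4) dominates strictly
print(ranks_at_least(Z0, Z1) and ranks_at_least(Z1, Z0))  # True although Z(0) != Z(1): only a preordering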

3.3    Balanced strategies

So far we have described the special element Z ? by two different means. First it was described by an algorithm in Section 3.1. This description was useful for showing the convergence result, namely that Z ? is the limit of the optimal outcome Z α , when the risk aversion parameter α tends to infinity. In Section 3.2 it was shown that Z ? can also be interpreted as the unique maximal element with respect to an intuitive total preordering. The merit of this approach is that it gives a very short, abstract definition. The disadvantage although is that it is in general not easy to tell from these two definitions what the maximal element is or decide, given an element Z ∈ Z, whether it is the maximal one or not. We will therefore give an other additional approach, introducing a balanced strategy and the corresponding balanced outcome. To our knowledge this is a new concept. One nice feature of the definition will be that it is very easy and straightforward to determine, given the stock price process and the contingent claim, the balanced strategy and the balanced outcome. But the definition will also turn out to be useful for proving statements. Before giving the formal definition, let us describe the idea behind a balanced strategy. Actually the basic idea is already somewhat present in the definition of Z ? using the algorithm in Section 3.1 and the total preordering in Section 3.2. At each time when the agent has to make a decision about her strategy, she considers two cases: the stock going up or down. Within each of these two cases, the agent looks at the worst possible outcome for her, taking into account her short position in the contingent claim C. She then chooses a strategy to balance those two worst case scenarios, i.e., she takes the strategy that makes the worst outcome in the case of a rising stock equal the worst outcome in the case of a falling stock. Definition 3.3.1. Given a finite probability space Ω, |Ω| = N , a finite time horizon T ∈ N, a filtration (Ft )t=0,...,T , an adapted, R valued stock process (St )t=0,...,T , and a


probability measure P, we define for an atom F ∈ F_t, t = 0, 1, . . . , T − 1,

F_+ := {ω ∈ F : ΔS_{t+1}(ω) > 0} ∈ F_{t+1}   and   F_− := {ω ∈ F : ΔS_{t+1}(ω) < 0} ∈ F_{t+1}.

Assume we are further given an initial endowment c ∈ R and a claim C ∈ F_T. We then call a strategy ϑ̄, respectively an outcome Z̄ := Z(ϑ̄) = c + G_T(ϑ̄) − C, balanced if

z_{F_+}(ϑ̄) := min_{ω∈F_+} Z(ϑ̄)(ω) = min_{ω∈F_−} Z(ϑ̄)(ω) =: z_{F_−}(ϑ̄)

for all atoms F ∈ F_t, t = 0, 1, . . . , T − 1.

Remark 3.3.2. As in the definition of the special outcome Z⋆ by the algorithm and as the unique maximal element with respect to the total preordering, the probability measure P does not enter into the definition. The only important fact about the probability measure is that P[ω] > 0 for all ω ∈ Ω, a condition we always impose on a probability measure on a finite probability space Ω. Also the initial endowment c ∈ R has no influence on the definition of a balanced strategy.

Remark 3.3.3. The first thing we have to check is whether a balanced strategy ϑ̄ really exists. For that we go backwards through the tree described by S. At an atom F ∈ F_{T−1}, i.e., a node at time T − 1, one has to find a ϑ̄_F ∈ R such that

min_{ω∈F_+} ( ϑ̄_F ΔS_T(ω) − C(ω) ) = min_{ω∈F_−} ( ϑ̄_F ΔS_T(ω) − C(ω) ).     (3.3.1)

If F_+ = ∅, i.e., the stock does not move up, then the no-arbitrage condition implies that F_− = ∅, and the above equation simplifies to 0 = 0, holding true for all ϑ̄_F ∈ R. If we assume that the tree is non-degenerate, see Remark 3.1.7, i.e., F_+ ≠ ∅, then

ϑ_F ↦ min_{ω∈F_+} ϑ_F ΔS_T(ω)

is the minimum over a strictly positive, finite number of affine, strictly increasing functions and is therefore continuous and strictly increasing, and the limits for ϑ_F to ∞ respectively −∞ are

lim_{ϑ_F→∞} min_{ω∈F_+} ϑ_F ΔS_T(ω) = ∞   and   lim_{ϑ_F→−∞} min_{ω∈F_+} ϑ_F ΔS_T(ω) = −∞.

Analogously, by no-arbitrage also F_− ≠ ∅, and ϑ_F ↦ min_{ω∈F_−} ϑ_F ΔS_T(ω) is continuous and strictly decreasing in ϑ_F, with limits

lim_{ϑ_F→∞} min_{ω∈F_−} ϑ_F ΔS_T(ω) = −∞   and   lim_{ϑ_F→−∞} min_{ω∈F_−} ϑ_F ΔS_T(ω) = ∞.

Therefore there exists a unique ϑ¯F ∈ R that gives us equality in (3.3.1).


So we get a unique ϑ̄_F for each atom F ∈ F_{T−1}. Having fixed the strategy at time T − 1, we can, following the same argument as above, determine ϑ̄_F for each atom F ∈ F_{T−2}. Continuing this way using backward induction we get the existence of a unique balanced strategy ϑ̄. The corresponding balanced outcome Z(ϑ̄) is also unique.

Assumption 3.3.4. From now on we will always impose the standing assumption that the tree generated by the stock process S is non-degenerate. Thus we assume, together with the standing assumption of no-arbitrage, that P[ΔS_{t+1} > 0; F] > 0 and P[ΔS_{t+1} < 0; F] > 0 for all atoms F in F_t, t = 0, . . . , T − 1.

Summing up the above considerations, we get the following proposition.

Proposition 3.3.5. Given a market as in Definition 3.3.1 that satisfies in addition Assumption 3.3.4. Then there exists a unique balanced strategy ϑ̄ and a unique balanced outcome Z̄ = Z(ϑ̄).

Example 3.3.6. Let us look at the setting from Example 1.2.2, i.e., S_0 = 0, S_1 = (g, 0, −b), c = 1, and C = (0, 1, 0). Recall that there does not exist a unique superhedging strategy; any ϑ ∈ [−1/g, 1/b] gives an outcome Z(ϑ) ≥ 0. To find the unique balanced strategy we just have to look at the only atom F = Ω ∈ F_0 = {∅, Ω}. We have

z_{F_+}(ϑ) = min_{ω∈F_+} Z(ϑ)(ω) = 1 + ϑg     (3.3.2)

and

z_{F_−}(ϑ) = 1 − ϑb,     (3.3.3)

so the unique balanced trading strategy, which makes (3.3.2) equal (3.3.3), is given by ϑ̄ = 0, and the unique balanced outcome is Z(ϑ̄) = (1, 0, 1).

Example 3.3.7. In Example 3.1.5 we had the outcome Z_1(ϑ) = 3ϑ, Z_2(ϑ) = 0, Z_3(ϑ) = −ϑ + 1, Z_4(ϑ) = −2ϑ + 3. The only atom F ∈ F_0 is again F = Ω, and we have F_+ = {ω_1}, F_− = {ω_3, ω_4}. So

z_{F_+}(ϑ) = 3ϑ


and

z_{F_−}(ϑ) = { −ϑ + 1 if ϑ ≤ 2;   −2ϑ + 3 if 2 < ϑ }.

Therefore, see also Figure 3.1.1, the unique balanced strategy is ϑ̄ = 1/4, which equals the strategy defined by the algorithm in Section 3.1.

Now we prove that the concept of a balanced outcome really coincides with the special outcome introduced so far by means of an algorithm and a total preordering, where we already know that the latter two concepts coincide.

Proposition 3.3.8. The balanced outcome Z̄ = Z(ϑ̄) equals the maximal element Z⋆ with respect to the total preordering described in Section 3.2.

Proof. Let us assume that the maximal element Z⋆ = Z(ϑ) is not balanced. Then there has to exist at least one atom F ∈ F_t such that

z_{F_+}(ϑ) = min_{ω∈F_+} Z(ϑ)(ω) ≠ min_{ω∈F_−} Z(ϑ)(ω) = z_{F_−}(ϑ).

If there are more such nodes, take the one for which min{z_{F_+}(ϑ), z_{F_−}(ϑ)} is minimal. If z_{F_+}(ϑ) < z_{F_−}(ϑ), respectively >, we could slightly increase, respectively decrease, the strategy ϑ to ϑ′ at this single node; this way the minimum outcome min_{ω∈F} Z(ϑ′)(ω) > min_{ω∈F} Z(ϑ)(ω), while we did not change the outcome for any ω in the complement of F. So we constructed an outcome Z(ϑ′) ≻ Z(ϑ), contradicting the maximality of Z(ϑ).

3.4    Different notions of superhedging

Now we are going to use the notion of a balanced strategy, introduced in the previous section, to show the connections to the concept of superhedging. As indicated in the introduction there are some intuitive connections between increasing risk aversion and superhedging. We saw however in the quite easy Example 1.2.2 that there might be more than one superhedging strategy, whereas the notation of a balanced strategy is, in the setting presented so far, unique. Therefore we look at a more restrictive notion of superhedging that is already known in the literature. In his paper [14] Kramkov considers two notions of superhedging (in fact he uses the word ‘hedging’, not ‘superhedging’), superhedging and minimal superhedging. The setting of the paper is a very general, continuous time model. But we are going to apply the results just to our usual finite Ω model, described by (Ω, (Ft )t=0,...,T , P), the stock process (St )t=0,...,T , the initial endowment c (which will, for consistency reasons, in this


section be denoted by V_0), the contingent claim C ∈ F_T, and the riskless bond B = 1. Let us consider a trading strategy ϑ, which in our finite setting can be interpreted as a vector, together with the value process V_t = V_0 + G_t(ϑ) − K_t, where K_t ≥ 0 is (in our finite setting) an adapted, non-decreasing process, which can be interpreted as cumulative consumption. We call ϑ_t and K_t a superhedging portfolio for the claim C if C ≤ V_T, i.e., if the strategy gives us, provided we start with initial endowment V_0, enough to satisfy the claim at final time T. We call a strategy ϑ̂_t and a non-decreasing consumption process K̂_t ≥ 0 with value process V̂_t = V̂_0 + G_t(ϑ̂) − K̂_t a minimal superhedging portfolio if it satisfies V̂_t ≤ V_t, t ∈ [0, T], for all superhedging portfolios (ϑ_t, K_t) with value processes V_t.

Kramkov shows that such a minimal superhedging portfolio exists, that its value process V̂_t is given by V̂_t = sup_{Q∈M_e} E_Q[C|F_t], and that it gives rise to the so-called optional decomposition

V̂_0 + G_t(ϑ̂) − K̂_t = V̂_t.

M_e denotes, see Section 2.1, the set of martingale measures equivalent to P. At final time T this gives

V̂_0 + G_T(ϑ̂) − K̂_T = V̂_T.

Note that

C = sup_{Q∈M_e(S)} E_Q[C|F_T] = V̂_T

holds, and therefore the cumulative consumption at time T,

K̂_T = V̂_0 + G_T(ϑ̂) − C,

can be interpreted as the outcome of an agent starting with an initial endowment of V̂_0 and a short position in the contingent claim C, using strategy ϑ̂. Since by assumption K̂_T ≥ 0, this is a superhedging strategy.

What are the relations between the minimal superhedging portfolio introduced by Kramkov and the concept of balanced strategies? Note that the balanced strategy is independent of the initial endowment. It becomes however a superhedging strategy provided the initial endowment is big enough to allow superhedging, i.e., if the endowment equals sup_{Q∈M_e(S)} E_Q[C]. This can be seen directly by using the characterization as the maximal element of the total preordering in Section 3.2. But is it also a minimal superhedging strategy? The minimal superhedging portfolio seems to be more restrictive, since it takes into account the whole time period [0, T], whereas the balanced strategy is only concerned with the outcome at the final time T. We will however show that the balanced outcome Z̄ corresponds to a minimal superhedging strategy. Before proving this, the following remark shows that an implication in the other direction, namely that every


minimal superhedging strategy is a balanced strategy, does for blatant reasons not hold true.

Remark 3.4.1. Note that the minimal superhedging strategy does not have to be unique. This can be seen from Example 1.2.2, i.e., S_0 = 0, S_1 = (g, 0, −b), c = 1, and C = (0, 1, 0). In this setting any ϑ ∈ [−1/g, 1/b] is a minimal superhedging strategy with V_0 = 1, K_0 = 0, V_1 = (0, 1, 0) = C, and K_1 = (1 + ϑg, 0, 1 − ϑb). So in this example the restriction to minimal superhedging strategies does not provide a superhedging agent with any help for deciding which strategy to choose, whereas the balanced strategy ϑ̄ = 0 is unique, see Example 3.3.6.

Let us now show that the balanced outcome Z̄ corresponds, as claimed above, indeed to a minimal superhedging strategy.

Theorem 3.4.2. Given a market as in Definition 3.3.1 that satisfies in addition Assumption 3.3.4. Then the balanced outcome Z̄ = Z(ϑ̄), defined in Section 3.3, equals the consumption K̂_T of a minimal superhedging portfolio.

Proof. A strategy ϑ̂ and a consumption process K̂ form a minimal superhedging portfolio if and only if they satisfy

V̂_t = sup_{Q∈M_e(S)} E_Q[C|F_t]

and K̂_t ≥ 0, determined by

V̂_0 + G_t(ϑ̂) − K̂_t = V̂_t,

is non-decreasing.

We start building up a strategy to get the balanced outcome Z¯ by working backwards through the tree induced by the stock process St . The final value V¯T = C = VbT is known. For each node F at time T − 1 we have to determine the corresponding strategy ϑ¯F to ¯ T ≥ 0. But this simply means that we have to take the get the balanced outcome Z¯ = K unique ϑ¯F to get ϑ¯F 4ST − V¯T balanced, i.e.,   min ϑ¯F 4ST (ω) − V¯T (ω) = min ϑ¯F 4ST (ω) − V¯T (ω) .

ω∈F+

ω∈F−

For a node F ∈ FT −1 we denote by V¯T −1 (F ) ∈ R the value of V¯T −1 (ω) for ω ∈ F , which is, since F ∈ FT −1 is an atom, equal for all ω ∈ F . Now we set for each such node F ∈ FT −1 the value V¯T −1 (F ) ∈ R minimal, such that  ¯ T (ω) − K ¯ T −1 (F ) = V¯T −1 (F ) − V¯T (ω) − ϑ¯F 4ST (ω) ≥ 0 K

(3.4.1)

for all ω ∈ F+ ∪ F− . Now we have to show that the so defined V¯T −1 (F ) equals VbT −1 (F ). If we do this, we just have to repeat the argument for all the previous time steps T − 2, T − ¯ t ≥ 0 and a 3, . . . , 0 to get a strategy that gives us a non-decreasing consumption process K


value process V¯t = Vbt = supQ∈Me (S) EQ[C|Ft ]. Thus we would have constructed a minimal ¯ T = Z, ¯ and the theorem would be superhedging portfolio that produces the outcome K proved. To show that V¯T −1 (F ) = VbT −1 (F ), first note that by the definition of a minimal superhedging strategy VbT −1 (F ) is the minimal amount needed such that there exists a strategy ϑˆF to be able to get VbT −1 (F ) + ϑˆF 4ST (ω) ≥ VbT (ω) = V¯T (ω)

(3.4.2)

for all ω ∈ F+ ∪ F− . Therefore we know from (3.4.1) that V¯T −1 (F ) ≥ VbT −1 (F ). If we assume that V¯T −1 (F ) > VbT −1 (F ), there exists by the choice of V¯T −1 (F ) an ω ∈ F+ ∪ F− such that VbT −1 (F ) − (V¯T (ω) − ϑ¯F 4ST (ω)) < 0, i.e., −VbT −1 (F ) > ϑ¯F 4ST (ω) − V¯T (ω) ≥

min

ω ˜ ∈F+ ∪F−

 ϑ¯F 4ST (˜ ω ) − V¯T (˜ ω) .

Since ϑ¯ was chosen to give a balanced outcome, this implies that there exist ω+ ∈ F+ and ω− ∈ F− for which the minimum on the right hand side of the last inequality is attained. If we now assume that ϑˆF ≤ ϑ¯F take ω = ω+ , otherwise ω = ω− to get, using V¯T = VbT and (3.4.2) for the last inequality −VbT −1 (F ) > ϑ¯F 4ST (ω) − VbT (ω) ≥ ϑˆF 4ST (ω) − VbT (ω) ≥ −VbT −1 (F ), i.e., VbT −1 (F ) < VbT −1 (F ), the desired contradiction.
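For intuition, the following sketch (Python; an added numerical illustration, not part of the original proof) works in the one-period market of Example 1.2.2 under the assumption g = b = 1. It approximates the minimal superhedging value V̂_0 = sup_{Q∈M_e} E_Q[C] by scanning the equivalent martingale measures on a grid, and shows that the resulting consumption K̂_1 for the balanced strategy ϑ̄ = 0 is (up to the grid error) the balanced outcome (1, 0, 1) of Example 3.3.6, as Theorem 3.4.2 asserts.

import numpy as np

dS = np.array([1.0, 0.0, -1.0])   # Delta S_1 for S_0 = 0, S_1 = (1, 0, -1)
C  = np.array([0.0, 1.0, 0.0])    # contingent claim

# equivalent martingale measures: q = (q1, q2, q3) > 0 with q1 - q3 = 0, i.e. q1 = q3
best = 0.0
for q1 in np.linspace(1e-6, 0.5 - 1e-6, 100000):
    q = np.array([q1, 1.0 - 2.0 * q1, q1])
    best = max(best, float(q @ C))     # E_Q[C] = q2, supremum approached as q2 -> 1
V0_hat = best
print(V0_hat)                          # numerically close to sup_Q E_Q[C] = 1

theta_bal = 0.0                        # balanced strategy from Example 3.3.6
K1 = V0_hat + theta_bal * dS - C       # consumption K_1 = V_0 + G_1(theta) - C
print(K1)                              # approximately (1, 0, 1), the balanced outcome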

3.5    General utility functions

In this section we will use the concept of a balanced strategy to extend the convergence results of Section 3.1 to general, not necessarily exponential utility functions. We will see how the notion of risk aversion as given in Chapter 1 comes into play, and many ideas presented in this section will be of importance in the succeeding chapter.

Definition 3.5.1. Let U(x) denote a general utility function on R, which is twice differentiable. By this we mean a function U : R → R which is strictly increasing and strictly concave, i.e., U′(x) > 0 and U″(x) < 0, and which satisfies the Inada condition U′(−∞) = ∞. (Normally the Inada conditions also include the assumption U′(∞) = 0, but we do not need this additional assumption in the following.) Then we denote by

r(x) := −U″(x)/U′(x) > 0

the (absolute) risk aversion. This notion, as mentioned in the introduction, goes back to Arrow [1] and Pratt [18].


We consider a family of utility functions (U_α)_{α∈A} for a parameter set A ⊂ R, whose risk aversion tends to infinity uniformly on R as α → α_∞ ≤ ∞. By that we mean that for all K > 0 there exists an ᾱ such that

r_α(x) = −U_α″(x)/U_α′(x) > K

holds for all x and for all α, where 0 < ᾱ < α < α_∞. This extends the previously treated case of the exponential utility function, since for U_α(x) = −e^{−αx} the risk aversion equals r_α(x) = α. We adopt the setting of Section 3.1 concerning the market including the stock, the bond, the contingent claim and the initial endowment. Being interested in utility maximization, we look at

E[U_α(Z(ϑ))] = E[U_α(c + G_T(ϑ) − C)] → max over ϑ ∈ R^M.

Let us denote by ϑα the optimal strategy, i.e., E[Uα (Z α )] = E[Uα (Z(ϑα ))] = sup E[Uα (Z(ϑ))] , ϑ∈RM

where Z α := Z(ϑα ) is the optimal outcome. Remark 3.5.2. How do we know that the optimal strategy ϑα really exists? One could deduce this fact from general results, e.g., Theorem 2.1.1, which is Theorem 2.1 in [5] for the exponential utility function. But for the finite setting we can use some tools from convex analysis to easily deduce the existence of a unique maximizing strategy ϑ? , since the function ϑ 7→ E[U (Z(ϑ)] is a continuous, concave function on RM . Proposition 3.5.3. In the finite setting described in Section 3.1 there exists a utility maximizing ϑ? ∈ RM , i.e., E[U (Z(ϑ? ))] = sup E[U (Z(ϑ))] ϑ∈RM

for a utility function U (x). Proof. −E[U (Z(ϑ))] is a continuous, thus closed, proper convex function. We can therefore use [20, Theorem VI.27.1(d)], which guarantees the existence of a minimizer provided −E[U (Z(ϑ))] has no direction of recession. To show that no direction of recession exists, we use [20, Proposition II.8.6 together with the definition following Corollary 8.6.2] and show that lim sup E[U (Z(λϑ))] = −∞ λ→∞

for any ϑ 6= 0. Since ϑ 6= 0 we know by the no-arbitrage condition together with the non-degeneracy condition, see Assumption 3.3.4, that there has to exist an ω1 ∈ Ω such that GT (ϑ)(ω1 ) < 0. We also get that maxω∈Ω GT (ϑ)(ω) > 0. So E[U (Z(λϑ))] ≤ P [ω1 ] U (c + λGT (ϑ)(ω1 ) − C(ω1 ))


+ (1 − P[ω_1]) U( c + λ max_{ω∈Ω} G_T(ϑ)(ω) − min_{ω∈Ω} C(ω) ),

and therefore there exist constants c_1, c_2, d_1, d_2, where d_1, d_2 > 0, such that

E[U(Z(λϑ))] ≤ P[ω_1] U(c_1 − λd_1) + (1 − P[ω_1]) U(c_2 + λd_2)
            = P[ω_1] U(c_1 − λd_1) [ 1 + ( (1 − P[ω_1]) U(c_2 + λd_2) ) / ( P[ω_1] U(c_1 − λd_1) ) ].     (3.5.1)

We know from concavity that U (−∞) = −∞. If U (∞) < ∞ it follows directly, otherwise we deduce from de L’Hospital and the Inada condition U 0 (−∞) = ∞ that the expression in the square bracket tends to 1 for λ → ∞. Therefore the right hand side in equation (3.5.1) goes to −∞. This proves that −E[U (Z(ϑ))] has no direction of recession. Our goal is to show that Z α converges to the balanced strategy Z¯ and determine the speed of convergence. First we will show that the optimal outcome Z α becomes ‘almost balanced’ as the risk aversion rα (x) tends to infinity. Proposition 3.5.4. Let Z α = c + GT (ϑα ) − C be the optimal outcome for the utility maximization problem E[Uα (c + GT (ϑ) − C)] 7→ max . ϑ

Let further r_α(x) denote the risk aversion of the utility function U_α(x), α ∈ A. Then there exists for every atom F ∈ F_t, t = 0, . . . , T − 1, a constant K(F) such that

| min_{ω∈F_+} Z^α(ω) − min_{ω∈F_−} Z^α(ω) | ≤ K(F) / inf_{x∈I} r_α(x)

holds true for all α ∈ A, where I := (−∞, min_{ω∈F_−} Z^α(ω)] ∪ (−∞, min_{ω∈F_+} Z^α(ω)].

Proof. Fix an atom F ∈ F_t, t = 0, . . . , T − 1, and let us denote by ϑ^α_F the entry of the optimal strategy ϑ^α that corresponds to the node given by F. Since ϑ^α is optimal, we get, if we differentiate E[U_α(Z(ϑ))] with respect to the entry ϑ_F,

0 = E[U_α′(Z^α(ω)) ΔS_{t+1}(ω)].

We can split this sum into a sum over ω ∈ F_+ and ω ∈ F_− (the terms for which ΔS_{t+1}(ω) = 0 vanish anyhow) to get, noting that U_α′(x) > 0 is strictly decreasing,

0 ≥ E[U_α′(Z^α(ω)) ΔS_{t+1}(ω); F_+] + E[ ( max_{ω̃∈F_−} U_α′(Z^α(ω̃)) ) ΔS_{t+1}(ω); F_− ]
  = U_α′( min_{ω̃∈F_−} Z^α(ω̃) ) ( E[ ( U_α′(Z^α(ω)) / U_α′(min_{ω̃∈F_−} Z^α(ω̃)) ) ΔS_{t+1}(ω); F_+ ] + E[ΔS_{t+1}; F_−] ).     (3.5.2)


For this inequality to hold true,

( U_α′(Z^α(ω)) / U_α′(min_{ω̃∈F_−} Z^α(ω̃)) ) ΔS_{t+1}(ω) > 0

must be bounded for ω ∈ F_+. It follows from the definition of risk aversion, i.e.,

r(x) = −U″(x)/U′(x) = −(d/dx) log U′(x),

that

U′(a)/U′(b) = e^{∫_a^b r(x) dx}.

Therefore we get, if min_{ω̃∈F_−} Z^α(ω̃) ≥ Z^α(ω),

U_α′(Z^α(ω)) / U_α′( min_{ω̃∈F_−} Z^α(ω̃) ) = exp{ ∫_{Z^α(ω)}^{min_{ω̃∈F_−} Z^α(ω̃)} r_α(x) dx } ≥ exp{ ( min_{ω̃∈F_−} Z^α(ω̃) − Z^α(ω) ) inf_{x∈I} r_α(x) }.

So we get from (3.5.2) the necessary condition that for each ω ∈ F_+ (for min_{ω̃∈F_−} Z^α(ω̃) < Z^α(ω) this holds trivially) and for all α ∈ A the quantity

( min_{ω̃∈F_−} Z^α(ω̃) − Z^α(ω) ) inf_{x∈I} r_α(x)

is bounded by a constant K_1(ω). So in particular, since |F_+| < ∞,

( min_{ω̃∈F_−} Z^α(ω̃) − min_{ω̃∈F_+} Z^α(ω̃) ) inf_{x∈I} r_α(x)

has to be bounded by a constant K_1(F_+), i.e.,

min_{ω̃∈F_−} Z^α(ω̃) − min_{ω̃∈F_+} Z^α(ω̃) ≤ K_1(F_+) / inf_{x∈I} r_α(x)   for all α.

If one uses

0 ≤ E[ ( max_{ω̃∈F_+} U_α′(Z^α(ω̃)) ) ΔS_{t+1}(ω); F_+ ] + E[U_α′(Z^α(ω)) ΔS_{t+1}; F_−]

instead of (3.5.2), one analogously gets

min_{ω̃∈F_+} Z^α(ω̃) − min_{ω̃∈F_−} Z^α(ω̃) ≤ K_1(F_−) / inf_{x∈I} r_α(x)   for all α,

thus completing the proof with the constant K(F) = max{K_1(F_+), K_1(F_−)}.


The previous proposition showed that the optimal outcome Z^α gets, for large risk aversion, 'almost balanced'. Now we have to investigate whether this property of the outcome also implies that the corresponding optimal trading strategy ϑ^α becomes 'almost balanced'. Actually we do not need the fact that the strategy ϑ^α is optimal; the following proposition can be formulated for any strategy ϑ^β and outcome Z^β = Z(ϑ^β).

Proposition 3.5.5. Let r(β) be a function defined on R such that r(β) tends to infinity as β tends to β_∞ ≤ ∞. If for all atoms F ∈ F_t, t = 0, 1, . . . , T − 1,

| min_{ω̃∈F_+} Z^β(ω̃) − min_{ω̃∈F_−} Z^β(ω̃) | = O(1/r(β))

for β → β_∞, then

‖ϑ^β − ϑ̄‖_∞ = O(1/r(β)),

where ‖·‖_∞ denotes the supremum norm on R^M.

Proof. Suppose the proposition did not hold true. Then for n = 1, 2, . . . there exist a β_n, where β_n → β_∞, and a node F_n such that |ϑ^{β_n}_{F_n} − ϑ̄_{F_n}| ≥ n/r(β_n). Since there are only finitely many nodes F_n, there have to exist one fixed node F and a subsequence n_k tending to ∞ such that

|ϑ^{β_{n_k}}_F − ϑ̄_F| ≥ n_k/r(β_{n_k}).     (3.5.3)

If there are more such atoms F_i, which correspond to times t_i, we choose a node F = F_i such that t = t_i is maximal. This way we get for nodes G following F, i.e., G ⊊ F, that |ϑ^{β_{n_k}}_G − ϑ̄_G| = O(1/r(β_{n_k})). We will consider the subtree starting at this specially chosen node F at time t, and we will deduce a contradiction by showing that for any given K_1 > 0 there exists a β_0 = β_0(K_1) such that

| min_{ω̃∈F_+} Z^{β_0}(ω̃) − min_{ω̃∈F_−} Z^{β_0}(ω̃) | > K_1/r(β_0).

To do this, we define the outcome Z̃^β = Z(ϑ̃^β), which we get if we follow the strategy ϑ̄ everywhere except on the subtree starting at F. There we use the balanced strategy ϑ̄ just for the first time step (t, t + 1] and after that we follow the strategy ϑ^β, i.e., ϑ̃^β = ϑ̄ everywhere except on nodes G ⊊ F, where ϑ̃^β_G = ϑ^β_G. We know that there exists a K_2 > 0 such that

| min_{ω̃∈F_+} Z̃^{β_{n_k}}(ω̃) − min_{ω̃∈F_−} Z̃^{β_{n_k}}(ω̃) | < K_2/r(β_{n_k})

holds for all β_{n_k}. This follows since the node F was chosen such that t is maximal and since the outcome Z̃^{β_{n_k}} = c + G_T(ϑ̃^{β_{n_k}}) − C depends affinely on the strategy. If we set

m = min{ |ΔS_{t+1}(ω)| : |ΔS_{t+1}(ω)| > 0 and ω ∈ F },


we know from the non-degeneracy condition that m > 0, and we can take n_k ≥ (K_1 + K_2)/(2m). Let us now suppose ϑ^{β_{n_k}}_F > ϑ̄_F; the other case works analogously, just the roles of F_+ and F_− have to be interchanged. By (3.5.3) this implies ϑ^{β_{n_k}}_F ≥ ϑ̄_F + n_k/r(β_{n_k}), and we get

min_{ω̃∈F_+} Z^{β_{n_k}}(ω̃) > min_{ω̃∈F_+} Z̃^{β_{n_k}}(ω̃) + m n_k/r(β_{n_k}),
min_{ω̃∈F_−} Z^{β_{n_k}}(ω̃) < min_{ω̃∈F_−} Z̃^{β_{n_k}}(ω̃) − m n_k/r(β_{n_k}).

Therefore

min_{ω̃∈F_+} Z^{β_{n_k}}(ω̃) − min_{ω̃∈F_−} Z^{β_{n_k}}(ω̃) > min_{ω̃∈F_+} Z̃^{β_{n_k}}(ω̃) − min_{ω̃∈F_−} Z̃^{β_{n_k}}(ω̃) + 2 m n_k/r(β_{n_k})
    > −K_2/r(β_{n_k}) + (K_1 + K_2)/r(β_{n_k}) = K_1/r(β_{n_k}),

the desired contradiction with β_0 = β_{n_k}.

We can now give the following theorem, which extends Theorem 3.1.9 for exponential utility functions to the case of general utility functions with increasing risk aversion. It is an immediate consequence of the two previous propositions: we just have to apply Proposition 3.5.5 to the optimal outcome Z^α, where Proposition 3.5.4 guarantees that the necessary assumptions are fulfilled. Since ϑ ↦ Z(ϑ)(ω) is affine in ϑ, convergence of the strategy ϑ^α implies convergence of the outcome Z^α at the same rate of convergence.

Theorem 3.5.6. Assume we are given a finite probability space (Ω, (F_t)_{t=0,...,T}, P) with a finite time horizon T ∈ N, an R-valued, adapted stochastic process (S_t)_{t=0,...,T} representing the discounted stock price process, a riskless bond B = 1, the contingent claim C ∈ F_T, and the initial endowment c for an agent holding a short position in the contingent claim C. Given a family of utility functions (U_α(x))_{α∈A} whose risk aversion tends to infinity uniformly as α tends to α_∞ ≤ ∞, let ϑ^α respectively Z^α denote the optimal strategy respectively outcome for the optimization problem

E[U_α(c + G_T(ϑ) − C)] → max over ϑ.

Then

‖ϑ^α − ϑ̄‖_∞ = O( 1 / inf_{x∈R} r_α(x) )

and

sup_{ω∈Ω} | Z^α(ω) − Z̄(ω) | = O( 1 / inf_{x∈R} r_α(x) ).
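The following sketch (Python; an added numerical illustration, not part of the original text) checks Theorem 3.5.6 on Example 3.1.5 for a non-exponential family. The family U_α(x) = −e^{−αx} − e^{−2αx} is a hypothetical choice made only for this illustration; its risk aversion lies between α and 2α and therefore tends to infinity uniformly. The maximizer ϑ^α approaches the balanced strategy ϑ̄ = 1/4, and α·|ϑ^α − 1/4| stays bounded, as the theorem predicts.

import numpy as np

p  = np.array([0.25, 0.25, 0.25, 0.25])   # state probabilities (any choice with p_i > 0)
dS = np.array([3.0, 0.0, -1.0, -2.0])
C  = np.array([3.0, 3.0, 2.0, 0.0])
c  = 3.0

def log_disutility(alpha, theta):
    # -E[U_alpha(Z(theta))] = E[exp(-alpha Z) + exp(-2 alpha Z)]; its log is compared instead,
    # which is numerically stable and preserves the ordering of the convex objective
    Z = c + theta * dS - C
    a = np.concatenate([np.log(p) - alpha * Z, np.log(p) - 2.0 * alpha * Z])
    m = a.max()
    return m + np.log(np.exp(a - m).sum())

def optimal_theta(alpha, lo=-5.0, hi=5.0, iters=200):
    g = (np.sqrt(5.0) - 1.0) / 2.0        # golden-section search for the unimodal objective
    for _ in range(iters):
        m1, m2 = hi - g * (hi - lo), lo + g * (hi - lo)
        if log_disutility(alpha, m1) < log_disutility(alpha, m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

for alpha in [1.0, 10.0, 100.0, 1000.0]:
    th = optimal_theta(alpha)
    print(f"alpha={alpha:7.1f}  theta_alpha={th:.6f}  alpha*|theta_alpha-1/4|={alpha*abs(th-0.25):.4f}")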

Chapter 4

Generalization of balanced strategies

In the previous chapter we used a finite model. The risky stock (S_t)_t and the contingent claim C were modelled as a stochastic process respectively a random variable over the finite underlying probability space (Ω, F, P). The current chapter deals with an infinite model. We choose a simple one time-period model, so the stock is given by (S_t)_{t=0,1} and a trading strategy is just a real number ϑ. This setting is described in Section 4.1, where also the necessary generalizations to adapt the notion of a balanced strategy are introduced. The step from a finite to an infinite setting has some major consequences: for example there might exist more than one balanced strategy. The corresponding examples are given in Section 4.2. Section 4.3 provides similar convergence results as in the previous sections, but also shows that the speed of convergence in the infinite setting is different. Finally Section 4.4 investigates the possible existence of more than one balanced strategy in more detail, showing that the set of balanced strategies is, as the set of possible limiting strategies when the risk aversion tends to ∞, minimal.

4.1    The infinite setting

We consider a potentially incomplete market with underlying probability space (Ω, F, P) that consists of two times, time 0 and time 1. The market has a bond B = 1 with zero interest rate and a risky asset (St )t=0,1 , i.e., S0 ∈ R and S1 is an F-measurable random variable. We write 4S(ω) = S1 (ω) − S0 . Let us assume that the utility maximizing agent has a short position in a contingent claim C, i.e., an F-measurable random variable, and that the agent is endowed with an initial endowment c. So by employing a self-financing strategy ϑ ∈ R the agent gets at time 1 the outcome Z(ϑ) := c + ϑ4S − C. As in the chapters before we are interested in utility maximization, i.e., sup E[U (Z(ϑ))] , ϑ∈R


and we denote by ϑ? ∈ R the optimal strategy, i.e., the strategy that maximizes the expected utility for the utility function U . Let us now give some definitions. The first one corresponds to the definition of F+ and F− in Section 3.3 for the current one time-period setting. We split the underlying probability space Ω into three disjoint sets Ω = Ω + ∪ Ω0 ∪ Ω− , where Ω+ := {ω ∈ Ω : 4S(ω) > 0} , Ω0 := {ω ∈ Ω : 4S(ω) = 0} , Ω− := {ω ∈ Ω : 4S(ω) < 0} . We assume that the market is not trivial, i.e., P [Ω0 ] < 1, and also that the market does not allow arbitrage, which in this setting can be written as P [Ω+ ] > 0, P [Ω− ] > 0. As in Section 3.5 we consider a family of utility functions (Uα (x))α∈A , whose risk aversion rα (x) tends to infinity as α tends to α∞ ≤ ∞. The convergence will be assumed to take place uniformly on an interval, see, e.g., Theorem 4.3.3 and Theorem 4.3.7. We impose the same assumptions on the utility functions as in Section 3.5. This means that Uα : R → R is twice differentiable, strictly increasing, strictly concave, and satisfies U 0 (−∞) = ∞. Since the underlying probability space (Ω, F, P) might be infinite, we have to make some integrability conditions on the stock (St )t and the contingent claim C. We use the following standing assumption throughout this chapter: Assumption 4.1.1. We assume that the stock (St )t=0,1 and the contingent claim C are P-integrable and that for all ϑ ∈ R the outcome Z(ϑ) = c + ϑ4S − C satisfies U (Z(ϑ)) ∈ L1 (P) and U 0 (Z(ϑ))4S ∈ L1 (P). Remark 4.1.2. In the case of a finite probability space Ω this assumption is obviously satisfied. Assumption 4.1.1 allows us to differentiate the expected utility E[U (Z(ϑ))] with respect to ϑ and gives us the existence of an optimal ϑ? . Lemma 4.1.3. Under the Assumption 4.1.1 the optimal strategy ϑ? for the utility maximization problem E[U (Z(ϑ))] 7→ max ϑ∈R

exists in R and is uniquely given by E[U 0 (Z(ϑ? ))4S] = 0.


Proof. We will use dominated convergence to be able to take the limit into the expectation in   d U (Z(ϑ + h)) − U (Z(ϑ)) E[U (Z(ϑ))] = lim E . h→0 dϑ h We have for each ω ∈ Ω U (Z(ϑ + h)(ω)) − U (Z(ϑ)(ω)) = U 0 (X(ω))4S(ω) h for X(ω) between Z(ϑ) and Z(ϑ + h). Since U (x) is continuous and concave, |U 0 (X(ω))4S(ω)| ≤ max {U 0 (Z(ϑ)(ω))4S(ω), U 0 (Z(ϑ + h)(ω))4S(ω)} ∈ L1 (P) holds. So we have an integrable upper bound and can therefore exchange limit and expectation to get d E[U (Z(ϑ))] = E[U 0 (Z(ϑ))4S] . dϑ Since analogously for |h| < 1 |U 0 (Z(ϑ + h))4S| ≤ max {U 0 (Z(ϑ + 1)(ω))4S(ω), U 0 (Z(ϑ − 1)(ω))4S(ω)} ∈ L1 (P), we can again make use of Lebesgue’s dominated convergence theorem to get lim E[U 0 (Z(ϑ + h))4S; A] = E[U 0 (Z(ϑ))4S; A] ,

h→0

i.e., continuity of ϑ 7→ E[U 0 (Z(ϑ))4S; A], for A ∈ {Ω+ , Ω− }. To find the optimal ϑ? we just have to look at the first order condition. Since U (x) is concave, this suffices to find the maximum. So we need to find a ϑ? such that E[U 0 (Z(ϑ? ))4S] = 0. We can rewrite this as E[U 0 (Z(ϑ? ))4S; Ω+ ] = E[−U 0 (Z(ϑ? ))4S; Ω− ] . Note that by the no-arbitrage condition P [Ω+ ] > 0 and P [Ω− ] > 0. Now the left hand side is strictly decreasing in ϑ? (ϑ 7→ Z(ϑ)(ω) is strictly increasing, x 7→ U 0 (x)4S(ω) is strictly decreasing for ω ∈ Ω+ ), continuous, and the limit for ϑ? to −∞ is ∞ (this follows from the Inada condition U 0 (−∞) = ∞ on the utility function). The right hand side is strictly increasing, continuous and the limit for ϑ? to ∞ is ∞. Therefore there has to exist a ϑ? such that the left hand side equals the right hand side. Now we want to extend the definition of a balanced strategy given in Section 3.3 to the infinite setting described above. There the first step was to consider the worst outcome achieved by employing strategy ϑ in two different cases, the stock going up or down. This has the following, obvious generalization to the infinite setting. z+ (ϑ) := ess inf Z(ϑ)(ω), ω∈Ω+


z_−(ϑ) := ess inf_{ω∈Ω_−} Z(ϑ)(ω).

For ω ∈ Ω_+ the map ϑ ↦ Z(ϑ)(ω) is strictly increasing, affine, and lim_{ϑ→−∞} Z(ϑ)(ω) = −∞.

For ω ∈ Ω− we get the corresponding properties, i.e., strictly decreasing, affine and the limit for ϑ → ∞ equals −∞. Therefore z+ (ϑ) is an non-decreasing, concave function with limϑ→−∞ z+ (ϑ) = −∞ and z− (ϑ) is an non-increasing, concave function with limϑ→∞ z− (ϑ) = −∞. Note that it might very well happen that z+ (ϑ) takes the value −∞. Let us look at the differences between the situation now and the finite setting in Section 3.3. There we took for the definition of z+ (ϑ) — the corresponding features hold also for z− (ϑ) — the infimum over a finite number of affine functions. Thus the infimum was again strictly increasing and continuous. Both of those two properties do not hold anymore in the infinite setting. z+ (ϑ) is only non-decreasing, not necessarily strictly increasing, and it might also be discontinuous. Note that also the set of ω ∈ Ω+ such that z+ (ϑ) = Z(ϑ)(ω) might have probability zero in the infinite setting, whereas it always has probability greater than zero in the finite setting, since the infimum is a minimum and therefore really attained. We will give examples for all these cases. In the finite case we could define the balanced strategy ϑ¯ simply as the solution of z+ (ϑ) = z− (ϑ). Due to the above mentioned facts this is not feasible in the general infinite setting. Therefore we have to proceed differently and start by looking at the sets of strategies ϑ where z+ (ϑ) is less than respectively greater than z− (ϑ). In the following it will turn out to be important to be able to distinguish between the case when the infimum is attained with positive probability or not. Thus the following definition gets a little bit more subtle and might look quite technical. But we will show how it simplifies for finite Ω, and we will provide examples showing the benefits of this definition. Definition 4.1.4. For a given market (Ω, F, P) we define the following two sets of strategies: Θ+ := {ϑ : P [ω ∈ Ω− : z+ (ϑ) ≥ Z(ϑ)(ω)] > 0} , Θ− := {ϑ : P [ω ∈ Ω+ : z− (ϑ) ≥ Z(ϑ)(ω)] > 0} . These two sets are, by the above discussed properties of z+ (ϑ) and z− (ϑ), intervals of the form (ϑ, ∞) or [ϑ, ∞) for Θ+ respectively (−∞, ϑ) or (−∞, ϑ] for Θ− , where ϑ might be ±∞ and (−∞, −∞) = ∅ = (∞, ∞). It might be easier to think about the sets A+ = {ϑ : z+ (ϑ) > z− (ϑ)} , A− = {ϑ : z+ (ϑ) < z− (ϑ)} ,


for which A+ ⊂ Θ+ ⊂ cl(A+ ) and A− ⊂ Θ− ⊂ cl(A− ) ˆ holds. h iA+ ⊂ Θ+ is obvious. If ϑ ∈ Θ+ — which implies the existence of a subset Ω− ⊂ Ω− , ˆ − > 0, such that z+ (ϑ) ≥ Z(ϑ)(ω) for all ω ∈ Ω ˆ − — then we may take any ϑˆ > ϑ P Ω and get ˆ ≥ z+ (ϑ) ≥ Z(ϑ)(ω) > Z(ϑ)(ω) ˆ z+ (ϑ) ˆ − . Thus ϑˆ ∈ A+ for all ϑˆ > ϑ and so ϑ ∈ cl A+ . The result for A− follows for all ω ∈ Ω analogously. This way we see that the sets Θ+ and Θ− correspond to the heuristic idea of sets of strategies where z+ (ϑ) is greater respectively less than z− (ϑ). The reason why we do not use A+ and A− is that they contain slightly less information than Θ+ and Θ− , information that we will need later on. Now we want to define the set reflecting the idea of z+ (ϑ) being equal to z− (ϑ). Definition 4.1.5. The set ¯ := {ϑ ∈ [−∞, ∞] : ϑ− ≤ ϑ ≤ ϑ+ , ∀ϑ− ∈ Θ− , ∀ϑ+ ∈ Θ+ } Θ ¯ is a balanced strategy, we also say that is called the set of balanced strategies. If ϑ¯ ∈ Θ ¯ is balanced. the outcome Z(ϑ) Remark 4.1.6. Because of the above described properties of Θ+ and Θ− , we know that ¯ always exists, i.e., Θ ¯ 6= ∅. Otherwise there would exist a balanced strategy ϑ¯ ∈ Θ − ϑ+ ∈ Θ+ and ϑ− ∈ Θ− such that ϑ− > ϑ+ . This implies ϑ+ < ϑ+ +ϑ < ϑ− and so 2 ϑ+ +ϑ− ¯ is a closed interval, probably equal to ∈ A− ∩ A+ = ∅. We further know that Θ 2 ¯ might be greater than 1, see {−∞} or {∞}. It does not have to be singleton, i.e., Θ Example 4.2.3 for a market, where more than one balanced strategy exists. If however  ¯ = ϑ¯ , see Corollary 4.3.6, and the the underlying probability space Ω is finite, than Θ definition of a balanced strategy given here coincides with the one given in Section 3.3. Remark 4.1.7. If C = 0, i.e., there is no contingent claim, or more general C(ω) = C0 is ¯ = {0}. See Corollary 4.3.5. constant for all ω ∈ Ω, then Θ Remark 4.1.8. Note that the notion ‘balanced’ is independent of a utility function. The set of balanced strategies also does not change, if we change the probability measure P to an other equivalent one. Furthermore the initial endowment is irrelevant, it can be easily incorporated in the contingent claim. In this chapter we are not interested in the connections to superhedging as in Section 3.4. Therefore we assume in the following always, if not explicitly stated differentially, that the initial endowment c equals 0.
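To see Lemma 4.1.3 and the sets of Definitions 4.1.4 and 4.1.5 in a case where everything can be computed explicitly, the following sketch (Python; an added illustration on a hypothetical finite two-state market, not part of the original text) solves the first-order condition E[U′(Z(ϑ))ΔS] = 0 by bisection for the exponential utility. The data ΔS = (2, −1), C = (1, 0), c = 0 are an arbitrary toy choice; for them the balanced strategy solves 2ϑ − 1 = −ϑ, i.e., ϑ̄ = 1/3, and the optimizers approach it as the risk aversion grows.

import numpy as np

p, dS, C, c = np.array([0.5, 0.5]), np.array([2.0, -1.0]), np.array([1.0, 0.0]), 0.0

def foc(alpha, theta):
    # E[U'(Z(theta)) Delta S] for U(x) = -exp(-alpha x), i.e. U'(x) = alpha exp(-alpha x);
    # this map is strictly decreasing in theta, so its unique zero is the optimizer (Lemma 4.1.3)
    Z = c + theta * dS - C
    return float(np.sum(p * alpha * np.exp(-alpha * Z) * dS))

def optimal_theta(alpha, lo=-10.0, hi=10.0, tol=1e-12):
    # bisection on the monotone first-order condition
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if foc(alpha, mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for alpha in [1.0, 10.0, 100.0]:
    print(alpha, optimal_theta(alpha))   # tends to the balanced strategy 1/3 as alpha grows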

4.2    Examples

Let us now give different examples, illustrating the above definitions, where we concentrate on the differences between finite and infinite setting.


 

  

      

      

 



 Figure 4.2.1: Zi (ϑ) for Example 4.2.1

Example 4.2.1. We take Ω = {ω_0, ω_1, . . .} and write X_n = X(ω_n) for a random variable X. Let ΔS_0 = 1 and ΔS_n = −1 for n ≥ 1. That means that Ω_+ = {ω_0} and Ω_− = {ω_1, ω_2, . . .}. For the contingent claim C we take C_0 = 0 and C_n = −1/n, n ≥ 1, and the initial endowment c equals 0. In this setting

z_+(ϑ) = Z(ϑ)(ω_0) = ϑ ΔS_0 − C_0 = ϑ

and

z_−(ϑ) = inf_{ω∈Ω_−} Z(ϑ)(ω) = inf_{n≥1} ( −ϑ + 1/n ) = −ϑ.

Therefore we get

Θ_+ = (0, ∞),   Θ_− = (−∞, 0],   Θ̄ = {0},

and we have the unique balanced strategy ϑ̄ = 0. Note that Θ_+ is open and that ϑ̄ ∉ Θ_+. We will later (Example 4.3.8) continue with this example, and then this fact will be important. Note also that we did not specify the probability measure so far. Since this is not really necessary for defining the balanced strategy, we just assumed that P[ω] > 0 for all ω ∈ Ω. But since we will later return to this example, let us fix the probabilities P[ω_n] = 2^{−n−1}, n ≥ 0.

Example 4.2.2. Let us now give an example where z_−(ϑ) and z_+(ϑ) do not intersect. This example shows that the 'easy', straightforward definition of a balanced strategy, given in Section 3.3 for a finite probability space Ω, does not work in this infinite setting. Let us take Ω_+ = {ω_1, ω_2, . . .}, ΔS_n = n, C_n = −1 for n ≥ 1, and c = 0. This gives

Z_n(ϑ) = nϑ + 1


  

 

  

 



       

   

 

Figure 4.2.2: Zi (ϑ) for Example 4.2.2

and

z_+(ϑ) = inf_{n≥1} (nϑ + 1) = { ϑ + 1 if ϑ ≥ 0;   −∞ if ϑ < 0 }.

We further take Ω_− = {ω_0}, ΔS_0 = −1, C_0 = 0, which gives z_−(ϑ) = Z_0(ϑ) = −ϑ. The function z_+(ϑ) is not continuous and does not intersect with z_−(ϑ). But the general definitions of the sets Θ_+, Θ_−, and Θ̄ do still apply, and we get

Θ_+ = [0, ∞),   Θ_− = (−∞, 0),   and   Θ̄ = {0}

in this example.

Example 4.2.3. Now we will give an example where the balanced strategy is not unique. For this we will construct a non-decreasing function z_+(ϑ) and a non-increasing function z_−(ϑ) that intersect at more than one point. This is only possible if both functions are constant over a certain interval. We take Ω_+ = {ω_1, ω_2, . . .}, ΔS_n = 3^{−2n+2}, C_n = −3^{−2n+2} for n ≥ 1. Thus we get for initial endowment c = 0

z_+(ϑ) = inf_{n≥1} (1 + ϑ) 3^{−2n+2} = (1 + ϑ) χ_{{ϑ<−1}},


  

  

    

    

   

  





 Figure 4.2.3: Zi (ϑ) for Example 4.2.3

Analogously we set Ω− = {ω−1 , ω−2 , . . .}, 4S−n = −3−2n+1 , C−n = −3−2n+1 for n ≥ 1 and get  1 z− (ϑ) = inf (1 − ϑ)3−2n+1 = (1 − ϑ)χ{ϑ>1} , n≥1 3 where χ{E} is the indicator function, equal to 1, if the event E is true, 0 otherwise. Now ¯ = [−1, 1], Θ+ = [1, ∞), Θ− = (−∞, −1], and Θ i.e., we get a whole interval of balanced strategies. Theorem 4.3.7, will show that the optimal strategy ϑα converges to the set of balanced strategies, if the risk aversion tends to infinity. In the current example this would not ¯ So the question arises whether give us that ϑα converges to a single, balanced strategy ϑ. ϑα still converges to a ϑ¯ and the theorem does just not cover this case, or whether in ¯ but only to the set of general ϑα does no longer converge to a single balanced strategy ϑ, ¯ strategies Θ. This example will show that the later is true. In general ϑα does not converge to a ¯ Before doing the not hard, but somewhat tedious calculations, let us single strategy ϑ. give the intuitive idea behind this example. If one just looks at the market consisting of {ω−1 , ω1 }, the unique balanced strategy would be ϑ¯ = −1/2. So if we manage to choose the probabilities such that the other states ω−n and ωn , n ≥ 2, get relatively little weight, we might expect to get ϑα1 close to −1/2 by choosing α1 large enough. For the market {ω−1 , ω1 , ω2 } the unique balanced strategy would be ϑ¯ = 1/2. Now we again try to choose the probabilities in such a way that the other states get relatively little weight, and that the optimal ϑβ1 gets close to 1/2 if β1 is large enough. Continuing this way one should get an oscillating sequence of optimal strategies ϑαn ≈ −1/2 and ϑβn ≈ 1/2 for αn , βn → ∞.


Let us show that this really works by specifying the probabilities pn := P[ωn ] and qn := P[ω−n ], n = 1, 2, . . . : p1 := q1 := K, pn := qn−1 33−3·2

2n−3

3−3·22n−2

qn := pn 3

, n > 1,

, n > 1.

This gives two decreasing, summable sequences pn and qn , and the constant K can be P P chosen such that n≥1 pn + n≥1 qn = 1. Next let us define 1 2n−2 αn := 32n log(2 · 3−2+3·2 ) and 2 1 2n−1 βn := 32n+1 log(2 · 3−2+3·2 ). 2 These two increasing sequences obviously tend to ∞ for n → ∞. We may use αn and βn to rewrite 4 pn = qn−1 exp(−4αn−1 3−2n+2 ) and 3 4 qn = pn exp(−4βn−1 3−2n+1 ) 3

(4.2.1) (4.2.2)

From differentiating E[U (Z(ϑ)], where we use the exponential utility U (x) = −e−αx , we know that the optimal ϑα must satisfy X X pn exp(−α(ϑα 4Sn − Cn ))4Sn = qn exp(−α(ϑα 4S−n − C−n ))(−4S−n ), n≥1

i.e., X n≥1

n≥1

X   pn exp −α(3ϑα + 3)3−2n+1 3−2n+2 = qn exp −α(3 − 3ϑα )3−2n 3−2n+1 . (4.2.3) n≥1

Let us denote the left hand side of the above equality by LHS(ϑα , α), the right hand side by RHS(ϑα , α). Note that LHS(ϑ, α) is strictly decreasing in ϑ, whereas RHS(ϑ, α) is strictly increasing in ϑ. Our goal is now the following: we will show that LHS(−1/3, αm ) ≤ RHS(−1/3, αm ) and LHS(1/3, βm ) ≥ RHS(1/3, βm ). This implies that 1 and ϑβm ≥ 3 Since we know that αm and βm tend to ∞ for m → converge to a strategy ϑ¯ as α → ∞. ϑα m ≤ −

1 . 3 ∞, this shows that ϑα can not


Let us start by looking at LHS(−1/3, αm ): X  LHS(−1/3, αm ) = pn exp −αm (−1 + 3) 3−2n+1 3−2n+2 ≤

n≥1 m X n=1

X  pn exp −6αm 3−2n 3−2n+2 + pm+1 3−2n+2 .

(4.2.4)

n>m

For n = 1, . . . , m we deduce from  1  1 1 2n−2 2n−2 exp 2αm 3−2n ≥ exp 2αn 3−2n = 2 · 3−2+3·2 = 3−3+3·2 6 6 6 and, using the definition of qn , pn 2n−2 = 3−3+3·2 qn that  pn 1 ≤ exp 2αm 3−2n qn 6 holds. This is equivalent to   1 pn exp −6αm 3−2n 3−2n+2 ≤ qn exp −4αm 3−2n 3−2n+1 . 2 If we now plug this into (4.2.4) and use for the second step (4.2.1), we get m

LHS(−1/3, αm ) ≤

 1X 1 qn exp −4αm 3−2n 3−2n+1 + pm+1 3−2m+2 2 n=1 8 m

 1X = qn exp −4αm 3−2n 3−2n+1 2 n=1  1 + qm exp −4αm 3−2m 3−2m+1 X2  ≤ qn exp −4αm 3−2n 3−2n+1 = RHS(−1/3, αm ), n≥1

the desired result. For the sake of completeness let us also show that RHS(1/3, βm ) ≤ LHS(1/3, βm ), which works similarly. For n = 1, . . . , m we get from  1 2n−1 exp 2βm 3−2n−1 ≥ 3−3+3·2 6 and qn 2n−1 = 3−3+3·2 pn+1 that  qn 1 ≤ exp 2βm 3−2n−1 , pn+1 6


which is equivalent to   1 qn exp −6βm 3−2n−1 3−2n+1 ≤ pn+1 exp −4βm 3−2n−1 3−2n . 2 Thus RHS(1/3, βm ) ≤ =

m X

n=1 m X

X  qn exp −2βm 3−2n 3−2n+1 + qm+1 3−2n+1 n>m

 1 qn exp −2βm 3−2n 3−2n+1 + qm+1 3−2m+1 8 n=1 m

 1X pn+1 exp −4βm 3−2n−1 3−2n 2 n=1  1 + pm+1 exp −4βm 3−2m−1 3−2m X2  ≤ pn+1 exp −4βm 3−2n−1 3−2n ≤

n≥1

=

X n≥2

 pn exp −4βm 3−2n+1 3−2n+2 ≤ LHS(1/3, βm ).

What can be said about the behavior of the optimal outcome Z(ϑα )? We know from the general Theorems 2.1.4 respectively 2.2.4 that the negative part Z(ϑα )− converges to 0 in the L1 (P)-norm respectively L1 (Q)-norm for all absolutely continuous martingale measures Q which have finite relative entropy with respect to the physical measure P. For the above example we can see this fact also directly, once we have shown Theorem 4.3.7, ¯ ¯ ¯ Since Z(ϑ)(ω) ¯ and ϑ¯ 7→ Z(ϑ)(ω) which states that ϑα converges to Θ. ≥ 0 for all ϑ¯ ∈ Θ is α affine, and thus in particular continuous, this implies immediately limα→∞ Z(ϑ )(ω) ≥ 0 for all ω ∈ Ω. But the example also shows that we can not hope to get convergence of the outcome Z(ϑα ). Just look at the state ω1 which has positive probability. ϑαm ≤ −1/3 and ϑ βm ≥ 1/3 imply that Z(ϑα m )(ω1 ) = (1 + ϑαm ) ≤ 2/3 whereas Z(ϑβm )(ω1 ) ≥ 4/3. Thus Z(ϑαm )(ω1 ) − Z(ϑβm )(ω1 ) ≥ 2/3 and the optimal strategy can not converge to a unique limiting strategy.

4.3    Convergence results

We will start this section by a proposition which gives a bound on the distance between the optimal strategy ϑ? for our maximization problem and a strategy ϑ+ in the set Θ+ respectively ϑ− ∈ Θ− . The important feature of this bound is that it depends on the utility function U (x) only through its risk aversion r(x). This way we can use this proposition later in the section to show convergence results for a family of utility functions with increasing risk aversion.


Proposition 4.3.1. Assume we are given a one time-period market consisting of a probability space (Ω, F, P), a stock S, a bond B = 1, and a contingent claim C satisfying Assumption 4.1.1. Let ϑ+ be any fixed strategy in Θ+ . For an arbitrary utility function U (x) let r(x) denote the risk aversion and ϑ? the optimal strategy for the maximization problem E[U (Z(ϑ)] 7→ max, ϑ∈R

where Z(ϑ) = ϑ4S − C denotes the outcome for an agent who has a short position in the contingent claim C and follows strategy ϑ. Then there exists a constant K > 0 such that simultaneously for all utility functions U (x) and corresponding strategies ϑ? ϑ? − ϑ+ ≤

K , inf x r(x)

where the infimum is taken over all x ∈ (−∞, z+ (ϑ+ )]. I.e., the constant K depends only on ϑ+ , not on the utility function U (x) and the corresponding optimal strategy ϑ? . The analogous result holds also for ϑ− ∈ Θ− , i.e., there exists a constant K 0 > 0 such that K0 , ϑ− − ϑ? ≤ inf x r(x) where the infimum is taken over all x ∈ (−∞, z− (ϑ− )]. Proof. We will just prove the assertion for ϑ+ , the case for ϑ− works analogously. We might assume that ϑ+ < ϑ? , otherwise the statement is trivial. ˆ − ⊂ Ω− such that P[Ω ˆ − ] > 0 and Since ϑ+ ∈ Θ+ , there exists a subset Ω ˆ −. z+ (ϑ+ ) ≥ Z(ϑ+ )(ω) ∀ω ∈ Ω

(4.3.1)

˜− ⊂ Ω ˆ − with P[Ω ˜ − ] > 0 for There also exists an ε > 0 such that there is a further subset Ω ˜ − . Otherwise the sets Aε := {ω ∈ Ω ˆ − : 4S(ω) < which 4S(ω) < −ε holds for all ω ∈ Ω −ε} would have probability 0 for all ε > 0, which would imply that the probability of ˆ − is also 0, a contradiction. A0 = Ω Let us now assume that the proposition would not hold true, i.e., for all constants K > 0 there exists a utility function U (x) with corresponding risk aversion r(x) and optimal strategy ϑ? such that ϑ? − ϑ+ >

K inf x∈I r(x)

˜ − ⊂ Ω− , that for I := (−∞, z+ (ϑ+ )]. This implies, since ϑ? > ϑ+ and 4S(ω) < −ε on Ω ˜− for all ω ∈ Ω Z(ϑ+ )(ω) ≥ Z(ϑ? )(ω) + ε(ϑ? − ϑ+ ) ≥ Z(ϑ? )(ω) + ε

K inf x∈I r(x)

.

4.3. Convergence results

59

˜− ⊂ Ω ˆ− Using (4.3.1) we further get for all ω ∈ Ω z+ (ϑ+ ) − Z(ϑ? )(ω) ≥ Z(ϑ+ )(ω) − Z(ϑ? )(ω) ≥ ε

K inf x∈I r(x)

.

(4.3.2)

Note also that, since ϑ? > ϑ+ , z+ (ϑ? ) ≥ z+ (ϑ+ ).

(4.3.3)

Now we are going to use the optimality of ϑ? . Using Lemma 4.1.3 we get E[U 0 (Z(ϑ? )4S] = 0 for the optimal ϑ? . The equation can be rewritten, using the introduced notation Ω+ and Ω− , noting that we have 4S = 0 on Ω0 and the fact that U 0 (x) is decreasing and continuous, in the following way: 0 = E[U 0 (Z(ϑ? ))4S; Ω+ ] + E[U 0 (Z(ϑ? ))4S; Ω− ]    0 ? ≤ E ess sup U (Z(ϑ )(˜ ω )) 4S; Ω+ + E[U 0 (Z(ϑ? ))4S; Ω− ] ω ˜ ∈Ω+

= E[U 0 (z+ (ϑ? ))4S; Ω+ ] + E[U 0 (Z(ϑ? ))4S; Ω− ]    0 U (Z(ϑ? )) 0 ? 4S; Ω− . = U (z+ (ϑ )) E[4S; Ω+ ] + E 0 U (z+ (ϑ? )) Since U 0 (x) > 0 we get  0  U (Z(ϑ? )) E[4S; Ω+ ] ≥ E 0 (−4S); Ω− . U (z+ (ϑ? )) 00

(x) d It follows from the definition of risk aversion, i.e., r(x) = − UU 0 (x) = − dx log U 0 (x), that U 0 (a) U 0 (b)

Rb

=e

a

r(x)dx

. So we rewrite the above inequality as " # ! Z z+ (ϑ? ) E[4S; Ω+ ] ≥ E exp r(x)dx (−4S(ω)); Ω− Z(ϑ? )(ω)

"

≥ E exp

Z

z+ (ϑ? )

Z(ϑ? )(ω)

!

#

˜− . r(x)dx ε; Ω

(4.3.4)

˜ − we can, using (4.3.3) and (4.3.2), find a lower bound for the above integral. For ω ∈ Ω Z z+ (ϑ? ) Z z+ (ϑ+ ) K r(x)dx ≥ r(x)dx ≥ ε inf r(x) ≥ εK (4.3.5) inf x∈I r(x) x∈J Z(ϑ? )(ω) Z(ϑ? )(ω) where J := [Z(ϑ? )(ω), z+ (ϑ+ )] ⊂ I. Thus we finally arrive, by putting (4.3.4) and (4.3.5) together, at h i εK ˜ ˜ − ]. E[4S; Ω+ ] ≥ E εe ; Ω− = εeεK P[Ω The left hand side does not depend on K, but this inequality would have to hold for all K. So we get the desired contradiction by letting K tend to ∞.

60

Chapter 4. Generalization of balanced strategies

Remark 4.3.2. In the above proposition the infimum is over x ∈ (−∞, z+ (ϑ+ )] respectively x ∈ (−∞, z− (ϑ− )]. If one assumes that x 7→ r(x) is decreasing, which follows for example, if U 0 (x) is convex, i.e., U 00 (x) is increasing, then the infimum is attained for x = z+ (ϑ+ ) respectively x = z− (ϑ− ). For the exponential utility function −e−αx the proposition simplifies further, since in this case r(x) = α, independent of x. This kind of remark also applies to all the following results. Let us now consider a family of utility functions (Uα (x))α∈A with increasing risk aversion. We will prove two theorems, both being relatively straightforward applications of the above proposition. The theorems will show convergence results for the optimal strategy. In the first one we will make special assumptions on the set of balanced strategies, therefore we will also get a result on the speed of convergence. Denote by d(η, Θ) := inf ϑ∈Θ |η − ϑ|, the distance between η ∈ R and Θ ⊂ R. Theorem 4.3.3. Assume we are given a one time-period market described by a probability space (Ω, F, P), a stock S, a bond B = 1, and an agent holding a short position in a contingent claim C satisfying Assumption 4.1.1. Thus the outcome for the agent following the trading strategy ϑ is Z(ϑ) = ϑ4S − C. Let us further assume that Θ+ = [ϑ+ , ∞) and ¯ = [ϑ− , ϑ+ ]. Set z := max {z+ (ϑ+ ), z− (ϑ− )}. Θ− = (−∞, ϑ− ], ϑ− ≤ ϑ+ . This implies Θ Moreover we are given a family of utility functions (Uα (x))α∈A , whose risk aversion on the interval (−∞, z] tends uniformly to infinity as α tends to α∞ ≤ ∞, i.e., rα :=

inf x∈(−∞,z]

rα (x) → ∞ for α → α∞ .

Let ϑα be the optimal strategy for the utility maximization problem E[Uα (Z(ϑ))] 7→ max . ϑ∈R

Then ¯ =O d(ϑ , Θ) α



1 rα



,

¯ of balanced strategies at a convergence rate of 1/rα . i.e., ϑα converges to the set Θ Proof. We just have to apply Proposition 4.3.1 to get ϑα − ϑ+ ≤

K(ϑ+ ) rα

ϑ− − ϑα ≤

K 0 (ϑ− ) . rα

and

¯ = [ϑ− , ϑ+ ] that It now follows from Θ ¯ ≤ max {K(ϑ+ ), K 0 (ϑ− )} 1 . d(ϑα , Θ) rα

4.3. Convergence results

61

We give three corollaries of the above proposition, assuming further special properties of the set of balanced strategies, the contingent claim C, respectively the underlying probability space. Corollary 4.3.4. We continue with the notation of the above theorem. Let us assume in  ¯ = z− (ϑ) ¯ = z. Furthermore ¯ = Θ+ ∩ Θ− . This implies Θ ¯ = ϑ¯ and z+ (ϑ) addition that Θ   α ϑ − ϑ¯ = O 1 , rα i.e., ϑα converges to the unique balanced strategy ϑ¯ at a convergence rate of 1/rα .  ¯ = ϑ¯ . Assume there would be two different balanced Proof. Let us first verify that Θ ¯ and the definition of Θ ¯ that strategies ϑ¯ < ϑ¯0 . Then we would get from ϑ¯ ∈ Θ+ , ϑ¯0 ∈ Θ 0 ¯ ¯ ϑ ≤ ϑ, a contradiction.  ¯ = z− (ϑ), ¯ since ϑ¯ ∈ Θ+ implies ¯ = ϑ¯ it follows directly that z = z+ (ϑ) From Θ ¯ ≥ z− (ϑ) ¯ and ϑ¯ ∈ Θ− gives the inequality in the other direction. z+ (ϑ) The rest is just a direct application of Theorem 4.3.3. Corollary 4.3.5. We use the notation of the Theorem 4.3.3 and assume in addition that ¯ = {0} and the the contingent claim is constant, i.e., C(ω) = C0 for all ω ∈ Ω. Then Θ assertions of Corollary 4.3.4 hold. Proof. If the claim C(ω) = C0 , then Z(0)(ω) = 0 · 4S − C0 for all ω ∈ Ω, which implies z+ (0) = 0 = z− (0). Therefore 0 ∈ Θ+

and 0 ∈ Θ− .

¯ = {0}, Thus Θ ¯ = Θ− ∩ Θ+ = (−∞, 0] ∩ [0, ∞) Θ and Corollary 4.3.4 applies. In the following corollary we assume that the underlying probability space is finite. The result therefore corresponds to Theorem 3.5.6. Corollary 4.3.6. We continue with the notation of Theorem 4.3.3 and assume in addition  ¯ = Θ+ ∩Θ− = ϑ¯ , that the underlying probability space Ω is finite. Then one always has Θ and therefore   α ϑ − ϑ¯ = O 1 . rα

62

Chapter 4. Generalization of balanced strategies

Proof. Since for a finite Ω the functions z+ (ϑ) respectively z− (ϑ) are the infimum of only finitely many, strictly increasing respectively strictly decreasing affine functions, those two functions are continuous and strictly increasing respectively strictly decreasing. Thus — the limit of z+ (ϑ) for ϑ → −∞ is −∞, whereas limϑ→∞ z− (ϑ) = −∞ — these two ¯ = z− (ϑ). ¯ It follows further that functions intersect for a unique strategy ϑ¯ solving z+ (ϑ)  ¯ −∞) and Θ− = (−∞, ϑ], ¯ implying ϑ¯ = Θ ¯ = Θ+ ∩ Θ− and allowing us to Θ+ = [ϑ, apply Corollary 4.3.4. ¯ = [ϑ− , ϑ+ ] is an element The fact that the boundary point ϑ− (respectively ϑ+ ) of Θ of Θ− (respectively Θ+ ) made it easy to apply Proposition 4.3.1 in the above theorem. We saw in Section 4.2 that this is in general not always the case. To get results for those other cases, we will need to approximate ϑ− and ϑ+ by elements in Θ− and Θ+ . This way we can still show convergence, but we will not get a convergence rate of 1/risk aversion. ¯ by an approximating sequences of strategies More formally we have to approximate Θ ϑ+ (n) and ϑ− (n). By this we mean sequences ϑ+ (n) ∈ Θ+ and ϑ− (n) ∈ Θ− such that ¯ and d(ϑ− (n), Θ) ¯ tend to zero for n → ∞. We assume wlog that ϑ+ (n) is d(ϑ+ (n), Θ) decreasing and that ϑ− (n) is increasing. Theorem 4.3.7. Let us consider a one time-period market which consists of a probability space (Ω, F, P), a stock (St )t=0,1 , a bond B = 1, and an agent with a short position in a contingent claim C such that Assumption 4.1.1 is satisfied. We assume that we are further given a family of utility functions (Uα (x))α∈A , whose risk aversion on the intervals (−∞, z+ (ϑ+ (1))] and (−∞, z− (ϑ− (1))] tends uniformly to infinity as α tends to α∞ ≤ ∞ for approximating sequences ϑ+ (n) and ϑ− (n). Let ϑα be the optimal strategy for the utility maximization problem E[Uα (Z(ϑ))] 7→ max . ϑ∈R

Then ¯ = 0. lim d(ϑα , Θ)

α→α∞

¯ < ε for α close to α∞ . Assume we Proof. Given any ε > 0, we will show that d(ϑα , Θ) are given approximating sequences ϑ+ (n) ∈ Θ+ and ϑ− (n) ∈ Θ− . Take a fixed n large enough such that ¯ < ε/2 and d(ϑ− (n), Θ) ¯ < ε/2. d(ϑ+ (n), Θ)

(4.3.6)

Now we can apply Proposition 4.3.1 to get constants K+ (ϑ+ (n)) and K− (ϑ− (n)) such that ϑα − ϑ+ (n) ≤ K+ (ϑ+ (n))

1

inf x∈In rα (x) 1 ϑ− (n) − ϑα ≤ K− (ϑ− (n)) inf x∈Jn rα (x)

,

4.3. Convergence results

63

for In = (−∞, z+ (ϑ+ (n))] and Jn = (−∞, z− (ϑ− (n))]. Since z+ (x) is non-decreasing, ϑ+ (n) decreasing and z− (x) non-increasing, ϑ− (n) increasing, we get In ⊂ I1 and Jn ⊂ J1 . Thus the above inequalities hold also for In and Jn replaced by I1 and J1 . By assumption the risk aversion on the intervals I1 , J1 tends to infinity for α → α∞ , and we get ϑα − ϑ+ (n) ≤ ε/2 and ϑ− (n) − ϑα ≤ ε/2 for α close enough to α∞ . Therefore by (4.3.6)  ¯ , ϑα ∈ [ϑ− (n) − ε/2, ϑ+ (n) + ε/2] ⊂ ϑ¯ + ε : ϑ¯ ∈ Θ ¯ ≤ ε holds. and so the desired d(ϑα , Θ) Example 4.3.8. (Continuing Example 4.2.1) In the above theorem we did not get a convergence rate of 1/risk aversion, as we did in Theorem 4.3.3. This example shows that this is not a shortcoming of the theorem: in general, i.e., without the assumptions in Theorem 4.3.3, you can not expect the convergence rate to be 1/risk aversion. We use the market described in example 4.2.1. If we take the utility function Uα (x) = −e , we get X α 1 2−n−1 eαϑ− n . E[Uα (Z(ϑ))] = − e−αϑ − 2 n≥1 −αx

If we differentiate the above expression with respect to ϑ and set it equal to 0, we get for the optimal ϑα X X α α α α α e−αϑ = 2−n e− n . 2−n eαϑ − n = eαϑ n≥1

n≥1

Suppose now that there would exist a K > 0 such that ϑα ≤ K for all α, i.e., that ϑα α converges to ϑ¯ = 0 with convergence rate 1/α. Then we would get X α α e−2K ≤ e−2αϑ = 2−n e− n . n≥1

We now choose m large enough such that X 1 2−n < e−2K 2 n>m and then α large enough such that m X

1 2−n < e−2K . 2 n=1

α −m

e With these choices we get −2K

e



m X

−n − α n

2

n=1

e

+

X

n>m

−n − α n

2

e

α −m

≤e

m X n=1

2−n +

X

2−n < e−2K ,

n>m

the desired contradiction. We slightly extend this example now and show that the convergence can be arbitrarily slow.

64

Chapter 4. Generalization of balanced strategies

Proposition 4.3.9. Given any function f (α) > 0, α = 0, 1, . . . that decreases to 0 for α → ∞. Then one can find a market — i.e., a probability space (Ω, F, P) together with a bond B = 1, a stock (St )t=0,1 and a contingent claim C — with a unique balanced strategy ϑ¯ such that for all α = 0, 1, . . . ϑ¯ − ϑα ≥ f (α),

where ϑα is the optimal strategy for an agent maximizing expected exponential utility Uα .

Proof. We basically take the market described in example 4.2.1 and 4.3.8 , i.e., 4S0 = 1 and 4Sn = -1 for n ≥ 1, P[ωn ] = 2−n−1 , n ≥ 0. For the contingent claim C we take C0 = 0, but we will have to choose Cn < 0, Cn increasing to 0 for n → ∞, dependent on the function f (α). We might wlog assume that αf (α) increases to ∞ for α → ∞, which just says that f (α) tends to 0 slower than 1/α. dxe denotes the ceiling of x, i.e., the smallest integer larger or equal to x. Define   2αf (α) m(α) := 1 + log 2 and

1 − 2f (α), α where α is maximal such that m ≥ m(α). Our assumption on f (α) ensures that m(α) tends to ∞ for α → ∞. Therefore Cm is well defined, Cm < 0 and, since f (α) decreases to 0, Cm increases to 0 for m → ∞. Therefore Cm := −

z+ (ϑ) = ϑ, z− (ϑ) = inf (−ϑ − Cm ) = −ϑ. m≥1

Thus −ϑ − Cm > −ϑ for all m ≥ 1 implies Θ+ = (0, ∞), Θ− = (−∞, 0], and our market has a unique balanced strategy ϑ¯ = 0. Note that 0 ∈ / Θ+ . This fact allows us to make the convergence as slow as we want. Otherwise Corollary 4.3.4 would give us convergence speed 1/α. By differentiating we get for the optimal ϑα the equation X α α e−αϑ = 2−n eα(ϑ +Cn ) n≥1

and from that 1=

X

−n α(2ϑα +Cn )

2

e



n=1

n≥1

α(2ϑα +Cm )

≤e for all m.

m X

−m 2αϑα

+2

e

2−n eα(2ϑ

α +C

n)

+ e2αϑ

α

X

n>m

2−n

4.3. Convergence results

65

Let us assume that there would exist an α0 such that ϑα0 = ϑα0 − ϑ¯ ≤ f (α0 ). Then we get from the above inequality by using m = m(α0 ) that 1 ≤ eα0 (2f (α0 )+Cm(α0 ) ) + 2−m(α0 ) e2α0 f (α0 ) would have to hold true. But by our choice of the function m(α) the second term on the right hand side of the above inequality is less than 1/2. And by the definition of Cm the first term equals 1/e < 1/2. This gives us the desired contradiction. In Example 4.3.8 and Proposition 4.3.9 we saw that we can not expect a convergence speed of 1/risk aversion if Θ+ and Θ− are not closed. In view of Proposition 4.3.1 we might still hope for a convergence rate of 1/risk aversion if for example Θ− = (−∞, ϑ− ], ¯ = [ϑ− , ϑ+ ] from left, i.e., ϑα ≤ ϑ− . In Example 4.3.8 Θ+ = (ϑ+ , ∞), and ϑα tends to Θ ϑα > ϑ¯ = 0, i.e., ϑα converges from the ‘bad’ side since Θ+ = (0, ∞) is the open interval. The following proposition shows that this is not bad luck but that, provided one of the sets Θ+ , Θ− is open and the other one is closed, ϑα will always converge from the ‘open side’. Proposition 4.3.10. Given the setting from Theorem 4.3.7 let us assume that in addition Θ− = (−∞, ϑ− ] and

Θ+ = (ϑ+ , ∞)

for ϑ− ≤ ϑ+ . Then ϑα ≥ ϑ− for α sufficiently close to α∞ . Proof. We will prove the proposition using an indirect approach. Assume that ϑα < ϑ− . This implies E[Uα0 (Z(ϑ− ))4S; Ω+ ] < E[Uα0 (Z(ϑ− ))(−4S); Ω− ] since for ϑα we would get equality in the above inequality and the left hand side is decreasRb 0 ing whereas the right hand side is increasing in ϑ. If we again use UU 0(a) = exp( r(x)dx), (b) a 0 this is, by dividing by Uα (z− (ϑ− )), equivalent to " ! # " ! # Z Z z− (ϑ− )

z− (ϑ− )

rα (x)dx 4S; Ω+ < E exp

E exp

rα (x)dx (−4S); Ω− . (4.3.7)

Z(ϑ− )

Z(ϑ− )

Now we are going to consider two cases. Let us first assume that z+ (ϑ− ) < z− (ϑ− ). This implies that there exists an ε > 0 such that z+ (ϑ− ) < z− (ϑ− ) − 2ε. Furthermore we get from hthe idefinition of z+ (ϑ) as the essential infimum that there exists a subset ˆ + ⊂ Ω+ , P Ω ˆ + > 0, such that Z(ϑ− )(ω) < z− (ϑ− ) − ε for all ω ∈ Ω ˆ + . Therefore the Ω left hand side of (4.3.7) is greater or equal to " ! # " ! # Z z− (ϑ− ) Z z− (ϑ− ) ˆ + ≥ E exp ˆ+ E exp rα (x)dx 4S; Ω rα (x)dx 4S; Ω Z(ϑ− )(ω)

z− (ϑ− )−ε



≥ exp ε

inf x∈(−∞,z− (ϑ− )]

 h i ˆ rα (x) E 4S; Ω+ .

66

Chapter 4. Generalization of balanced strategies

Thus the left hand side of (4.3.7) gets arbitrarily large for α → α∞ . Note that, since ϑ− ≥ ϑ− (1), (−∞, z− (ϑ− )] ⊂ (−∞, z− (ϑ− (1))]. On the other hand the right hand side of (4.3.7) is, since by definition z− (ϑ− ) ≤ Z(ϑ− )(ω) for ω ∈ Ω− , less or equal than ! " # Z z− (ϑ− )

rα (x)dx (−4S); Ω− = E[−4S; Ω− ] < ∞.

E exp

z− (ϑ− )

This gives us the desired contradiction, since (4.3.7) states that the left hand side is less than the right hand side. Now we have to consider the second case, namely h i z+ (ϑ− ) ≥ z− (ϑ− ). Here we use that, ˆ + > 0, such that z− (ϑ− ) ≥ Z(ϑ− )(ω) ˆ since ϑ− ∈ Θ− , there exists a set Ω+ ⊂ Ω+ , P Ω ˆ + . Therefore the left hand side of (4.3.7) is greater or equal than for all ω ∈ Ω "

E exp

Z

z− (ϑ− )

Z(ϑ− )(ω)

!

#

"

ˆ + ≥ E exp rα (x)dx 4S; Ω

z− (ϑ− )

Z

!

ˆ+ rα (x)dx 4S; Ω

#

z− (ϑ− )

h i ˆ + > 0. = E 4S; Ω We will now show that the right hand side of (4.3.7) tends to 0 for α → α∞ , which gives us again the desired contradiction. The right hand side of (4.3.7) equals " ! # Z z+ (ϑ− ) Uα0 (z+ (ϑ− )) E exp rα (x)dx (−4S); Ω− Uα0 (z− (ϑ− )) Z(ϑ− )(ω) " ! # Z z+ (ϑ− ) ≤ E exp rα (x)dx (−4S); Ω− , (4.3.8) Z(ϑ− )(ω)

since by assumption z+ (ϑ− ) ≥ z− (ϑ− ) and Uα0 (x) is decreasing. Now we use the fact that ϑ− ≤ ϑ+ . Therefore ϑ− ∈ / Θ+ and so P [ω ∈ Ω− : z+ (ϑ− ) ≥ Z(ϑ− )(ω)] = 0,

(4.3.9)

i.e., Z(ϑ− )(ω) − z+ (ϑ− ) > 0 a.s. on Ω− . If we define for ε ≥ 0 Ω− (ε) := {ω ∈ Ω− : 0 < Z(ϑ− )(ω) − z+ (ϑ− ) ≤ ε} , we know that Ω− (ε1 ) ⊂ Ω− (ε2 ) for 0 ≤ ε1 ≤ ε2 and P [Ω− (0)] = 0. Thus limε↓0 P [Ω− (ε)] = 0 and " ! # " ! # Z z+ (ϑ− ) Z z+ (ϑ− ) E exp rα (x)dx (−4S); Ω− (ε) ≤ E exp rα (x)dx (−4S); Ω− (ε) Z(ϑ− )(ω)

z+ (ϑ− )

= E[(−4S); Ω− (ε)] .

¯ 4.4. Minimality of Θ

67

Since S is integrable, E[(−4S); Ω− (ε)] gets arbitrarily small when ε tends to 0. So we ˆ − (ε) := Ω− \ Ω− (ε). can now focus on the complement Ω "

E exp

Z

z+ (ϑ− )

!

#

ˆ − (ε) rα (x)dx (−4S); Ω

Z(ϑ− )

"

≤ E exp

Z

z+ (ϑ− )

!

#

ˆ − (ε) rα (x)dx (−4S); Ω

z+ (ϑ− )+ε

h i ˆ ≤ exp (−ε inf rα (x)) E −4S; Ω− (ε) , (4.3.10) where the infimum is over x ∈ [z+ (ϑ− ), z+ (ϑ− ) + ε]. For ϑ+ (1) > ϑ+ , ϑ+ (1) ∈ Θ+ we have P [ω ∈ Ω− : z+ (ϑ+ (1)) > Z(ϑ+ (1))(ω)] > 0. Thus (4.3.9) implies z+ (ϑ− ) < z+ (ϑ+ (1)) and for ε > 0 small enough [z+ (ϑ− ), z+ (ϑ− ) + ε] ⊂ (−∞, z+ (ϑ+ (1))). Therefore the last term in (4.3.10) tends to 0 for α → α∞ by our assumption about the increasing risk aversion. Collecting the last two results we can make, by first choosing ε small enough and then α large enough, the right hand side of (4.3.8), and therefore the right hand side of (4.3.7), arbitrarily small and arrive so at the desired contradiction.

4.4

¯ Minimality of Θ

One of the main results of the previous section was that the optimal ϑα converges to the ¯ as the risk aversion tends to infinity. But now of course the question arises whether set Θ ¯ is minimal, or whether it is possible to find a set Φ ( Θ ¯ such that ϑα converges the set Θ to Φ. Example 4.2.3 shows that Φ can in general not be singleton. But if we look at the example thoroughly, we see that it does not exclude the possibility that Φ = [−1/3, 1/3], ¯ = [−1, 1]. which is a strict subset of Θ Until now the probability measure P did not play a very important role. As mentioned ¯ does not change, if we change the probability before the set of balanced strategies Θ measure P to an equivalent one. But for the actual optimal ϑα the probability measure P does of course matter. Consider for instance the setting of Example 4.2.3, but with probabilities qn := 3pn and pn := 2−2−n , P P such that n≥1 pn + n≥1 qn = 1. In this case the equation (4.2.3), which the optimal ϑα has to satisfy, simplifies to X X   pn exp −α(3ϑα + 3)3−2n+1 3−2n+2 = 3pn exp −α(3 − 3ϑα )3−2n 3−2n+1 . n≥1

n≥1

This equation holds for ϑα = −1/2, since ϑα = −1/2 solves 3ϑα + 3 = 1 − ϑα . So with these new probabilities the picture changes totally, ϑα does not depend on α, it is constant ¯ = [−1, 1], equal to -1/2, especially limα→∞ ϑα = −1/2. Note that -1/2 is of course in Θ

68

Chapter 4. Generalization of balanced strategies

¯ would be ‘much too big’. but for this special choice of the probability measure P the set Θ Since we can not expect to treat every probability measure separately, we are going to ¯ under the set of all possible probability measures, consider the question of minimality of Θ i.e., those probability measures that are equivalent to the physical measure P and that make Assumption 4.1.1 hold true. This makes also economically sense. In general an ˆ agent h i knows which values a stock might take, she knows for which Ω ⊂ Ω the probability ˆ is greater than 0. But normally she does not know the physical probability measure P Ω ¯ is minimal, see P. As the main result of this section we will show that in this sense Θ Corollary 4.4.11 for the precise statement. We will continue with the setting of the previous section, but we have to slightly extend some notations. ˆ ⊂Ω Definition 4.4.1. Define for Ω ˆ ϑ) := ess inf Z(ϑ)(ω), z(Ω, ˆ ω∈Ω

which extends our previous definition of z+ (ϑ) = z(Ω+ , ϑ) and z− (ϑ) = z(Ω− , ϑ). We will need to consider balanced sets for different underlying probability spaces, to be ˆ + ⊂ Ω+ and Ω ˆ − ⊂ Ω− more precise, for subsets of Ω. Thus we introduce for Ω n h i o ˆ ˆ ˆ ˆ Θ+ (Ω+ , Ω− ) := ϑ : P ω ∈ Ω− : z(Ω+ , ϑ) ≥ Z(ϑ)(ω) > 0 , n h i o ˆ +, Ω ˆ − ) := ϑ : P ω ∈ Ω ˆ + : z(Ω ˆ − , ϑ) ≥ Z(ϑ)(ω) > 0 , Θ− (Ω and n o ¯ Ω ˆ +, Ω ˆ − ) := ϑ ∈ [−∞, ∞] : ϑ− ≤ ϑ ≤ ϑ+ , ∀ϑ− ∈ Θ− (Ω ˆ +, Ω ˆ − ), ∀ϑ+ ∈ Θ+ (Ω ˆ +, Ω ˆ −) . Θ( As shorthand we still use Θ+ = Θ+ (Ω+ , Ω− ) and Θ− = Θ− (Ω+ , Ω− ). What we basically want to do is to imitate and extend Example 4.2.3. One feature of this example is the fact that Z(ϑ)(ωn ) and Z(ϑ)(ω−n ) get arbitrarily close to z+ (ϑ) = ¯ as n → ∞. The following two lemmata guarantee that this z− (ϑ) = 0 for any ϑ ∈ Θ principle holds true in the general case. Lemma 4.4.2. Assume that for ϑ− ≤ ϑ+ we have ϑ− 6∈ Θ− and ϑ+ 6∈ Θ+ . Then z+ (ϑ+ ) = z+ (ϑ− ) = z− (ϑ+ ) = z− (ϑ− ). Proof. ϑ− 6∈ Θ− means by definition that P [ω ∈ Ω+ : z− (ϑ− ) ≥ Z(ϑ− )(ω)] = 0,

¯ 4.4. Minimality of Θ

69

and so z− (ϑ− ) ≤ z+ (ϑ− ). Analogously we get z+ (ϑ+ ) ≤ z− (ϑ+ ). Since z+ (ϑ) is non-decreasing and z− (ϑ) nonincreasing we get z− (ϑ− ) ≤ z+ (ϑ− ) ≤ z+ (ϑ+ ) ≤ z− (ϑ+ ) ≤ z− (ϑ− ), which gives the desired equalities. ¯ implies Remark 4.4.3. In the case of a finite Ω it is by its mere definition trivial that ϑ ∈ Θ ¯ z+ (ϑ) = z− (ϑ). In the general infinite case it might however happen that although ϑ ∈ Θ, we get z+ (ϑ) 6= z− (ϑ). This can be seen in Example 4.2.2, where z+ (0) = 1 6= 0 = z− (0). So we really need the slightly stronger assumption of the lemma that ϑ− 6∈ Θ− and ϑ+ 6∈ Θ+ for ϑ− ≤ ϑ+ , which is in the above mentioned example, since 0 ∈ Θ+ = [0, ∞), not satisfied. Remark 4.4.4. The above lemma could also be stated slightly more general using the notation introduced in Definition 4.4.1, but since we are not going to need this lemma in such a more general form, we refrain from doing so. Lemma 4.4.5. Assume that ϑ 6∈ Θ+ ∪ Θ− . Given any 0 < p < P [Ω+ ] and y > z(Ω+ , ϑ), ˆ + ⊂ Ω+ such that there exists a set Ω h i ˆ+ > p P Ω and ˆ + , ϑ) < y. z(Ω+ , ϑ) < z(Ω The analogous result for 0 < p < P [Ω− ] and y > z(Ω− , ϑ) holds, i.e., there exists a set ˆ − ⊂ Ω− such that Ω h i ˆ − > p and z(Ω− , ϑ) < z(Ω ˆ − , ϑ) < y. P Ω ˆ + , ϑ) < y, the other result Proof. We are going the prove the case for z(Ω+ , ϑ) < z(Ω follows analogously. Let δn be a sequence decreasing to z(Ω+ , ϑ) as n goes to infinity and assume wlog that z(Ω+ , ϑ) < δn < y for all n ≥ 1. Since by definition z(Ω+ , ϑ) = z+ (ϑ) is the essential infimum of Z(ϑ)(ω), ω ∈ Ω+ , we know that, if we define Ω+ (δn ) := {ω ∈ Ω+ : Z(ϑ)(ω) ≤ δn } , we have for all n ≥ 1 P [Ω+ (δn )] > 0. By the previous lemma z(Ω+ , ϑ) = z(Ω− , ϑ), and we have Ω+ (z(Ω+ , ϑ)) = Ω+ (z(Ω− , ϑ)). Thus ϑ 6∈ Θ− implies by the very definition of Θ− P [Ω+ (z(Ω+ , ϑ))] = 0.

70

Chapter 4. Generalization of balanced strategies

Therefore Ω+ (δn ) ⊃ Ω+ (δn+1 ) ⊃ Ω+ (z(Ω+ , ϑ)) gives us the existence of an n0 such that P [Ω+ (δn0 )] < P [Ω+ ] − p and P [Ω+ (δn0 )] < P [Ω+ (δn0 −1 )] . If we then define ˆ + := Ω+ \ Ω+ (δn0 ), Ω we have the desired properties h i ˆ + > p and z(Ω+ , ϑ) < δn0 ≤ z(Ω ˆ + , ϑ) < δn0 −1 < y. P Ω

In proving the next proposition we will use the following lemma. h i ˆ + ⊂ Ω+ , P Ω ˆ + > 0, and ϑ 6∈ Θ+ ∪ Θ− such that Lemma 4.4.6. Assume we are given Ω ˆ + , ϑ) > z(Ω+ , ϑ). z(Ω Then for any ϑ0 6∈ Θ+ ∪ (cl Θ− ) ˆ + , ϑ0 ) > z(Ω+ , ϑ0 ) z(Ω ˆ − ⊂ Ω− the appropriate result holds. holds true. For Ω Proof. Let us first consider the case ϑ0 ≥ ϑ. Lemma 4.4.2 implies z(Ω+ , ϑ) = z(Ω+ , ϑ0 ). ˆ + , ·) is non-decreasing and ϑ0 ≥ ϑ, Since z(Ω ˆ + , ϑ) ≤ z(Ω ˆ + , ϑ0 ), z(Ω+ , ϑ0 ) = z(Ω+ , ϑ) < z(Ω i.e., the assertion follows. We now assume that ϑ0 < ϑ. Since ϑ0 6∈ cl Θ− , there exists a ϑ1 < ϑ0 , ϑ1 6∈ Θ− . So we have ϑ1 , ϑ0 , ϑ 6∈ Θ+ ∪ Θ− and Lemma 4.4.2 implies z(Ω+ , ϑ1 ) = z(Ω+ , ϑ0 ) = z(Ω+ , ϑ).

(4.4.1)

ˆ + , ·) is concave, we get for 0 < λ < 1 satisfying ϑ0 = λϑ1 + (1 − λ)ϑ Since z(Ω ˆ + , ϑ0 ) ≥ λz(Ω ˆ + , ϑ1 ) + (1 − λ)z(Ω ˆ + , ϑ) > λz(Ω+ , ϑ1 ) + (1 − λ)z(Ω+ , ϑ) = z(Ω+ , ϑ0 ), z(Ω ˆ + , ϑ1 ) ≥ z(Ω+ , ϑ1 ) and the assumption z(Ω ˆ + , ϑ) > z(Ω+ , ϑ) for the last where we used z(Ω inequality, and (4.4.1) for the last equality.

¯ 4.4. Minimality of Θ

71

Remark 4.4.7. The assumption ϑ0 6∈ cl Θ− respectively ϑ0 6∈ cl Θ+ might seem a little bit unnatural, but it is indeed necessary. This can be seen by combining Example 4.2.3 with Example 4.2.1, where we do exclude ω0 from Example 4.2.1. So we have Ω+ = {ω1 , ω2 , . . .}, 4S(ωn ) = 3−2n+2 , C(ωn ) = −3−2n+2 and Ω− = {ω−1 , ω−2 , . . .} ∪ {ˆ ω1 , ω ˆ 2 , . . .}, 4S(ω−n ) = −2n+1 −2n+1 −3 , C(ω−n ) = −3 respectively 4S(ˆ ωn ) = −1, C(ˆ ωn ) = −1/n for n ≥ 1, and c = 0. The probabilities are irrelevant. A moments reflection reveals that in this case Θ− = (−∞, −1] and Θ+ = (0, ∞). ˆ − = {ˆ If we now take Ω ω1 , ω ˆ 2 , . . .}, ϑ = −1/2, and ϑ0 = 0 ∈ cl Θ+ , we get 1 ˆ − , ϑ) > z(Ω− , ϑ) = 0, = z(Ω 2 but ˆ − , ϑ0 ) ≯ z(Ω− , ϑ0 ) = 0. 0 = z(Ω Recalling Example 4.2.3 the basic idea was that, if one considers the market described just by {ω−1 , ω1 }, the unique balanced strategy is −1/2. For the market {ω−1 , ω1 , ω2 } the unique balanced strategy is 1/2, for {ω−2 , ω−1 , ω1 , ω2 } it is again −1/2 and so on. This oscillating behavior was the key for proving that ϑα does not converge to a fixed real number. The following proposition extends this property to an arbitrary market and generalizes it slightly. Proposition 4.4.8. Given sequences ϑn− < ϑn+ for n ≥ 1 such that ϑn+ , ϑn− 6∈ Θ+ ∪ Θ− . There exist sequences Ω1+ ⊂ · · · ⊂ Ωn+ ⊂ · · · ⊂ Ω+ , Ω1− ⊂ · · · ⊂ Ωn− ⊂ · · · ⊂ Ω−     with P Ωn+ \ Ωn−1 > 0, P Ωn− \ Ωn−1 > 0 (set Ω0+ := ∅ =: Ω0− ), such that + − ϑn+ ∈ Θ+ (Ωn+ , Ωn− ), n+1 ϑn− ∈ Θ− (Ω+ , Ωn− )

for all n = 1, 2, . . . . Furthermore the sets Ωn+ and Ωn− can be chosen such that for arbitrarily given, increasing sequences pn+ < P [Ω+ ] and pn− < P [Ω− ]   P Ωn+ > pn+

and

  P Ωn− > pn−

holds true. By choosing pn+ such that limn→∞ pn+ = 1, we can get S analogously n≥1 Ωn− = Ω− a.s.

S

n≥1

Ωn+ = Ω+ a.s., and

72

Chapter 4. Generalization of balanced strategies

Proof. Step 1: We start by choosing any y > z(Ω+ , ϑ1+ ) and applying Lemma 4.4.5 to get   a set Ω1+ ⊂ Ω+ with P Ω1+ > p1+ and z(Ω+ , ϑ1+ ) < z(Ω1+ , ϑ1+ ) < y. Step 2: Now we set y := z(Ω1+ , ϑ1+ ) > z(Ω+ , ϑ1+ ) = z(Ω− , ϑ1+ ), where the last equality follows from Lemma 4.4.2. If we now apply Lemma 4.4.5 for this y and for p = p1− , we   get a set Ω1− ⊂ Ω− with P Ω1− > p1− and z(Ω− , ϑ1+ ) < z(Ω1− , ϑ1+ ) < y = z(Ω1+ , ϑ1+ ).

(4.4.2)

This immediately implies    ϑ1+ ∈ Θ+ (Ω1+ , Ω1− ) = ϑ : P ω ∈ Ω1− : z(Ω1+ , ϑ) ≥ Z(ϑ)(ω) > 0 . Step 3: For the next step we take y := z(Ω1− , ϑ1− ) ≥ z(Ω1− , ϑ1+ ) > z(Ω− , ϑ1+ ) = z(Ω+ , ϑ1− ). Here we used the fact that z(Ω1− , ·) is non-increasing together with ϑ1− < ϑ1+ , (4.4.2), and    Lemma 4.4.2. So we can apply Lemma 4.4.5 for p, where P [Ω+ ] > p > max p2+ , P Ω1+ ,   again, and we obtain a set Ω2+ ⊂ Ω+ , P Ω2+ > p ≥ p2+ such that z(Ω+ , ϑ1− ) < z(Ω2+ , ϑ1− ) < y = z(Ω1− , ϑ1− ).

(4.4.3)

So we have    ϑ1− ∈ Θ− (Ω2+ , Ω1− ) = ϑ : P ω ∈ Ω2+ : z(Ω1− , ϑ) ≥ Z(ϑ)(ω) > 0 . Note that we can assume wlog that Ω2+ ⊃ Ω1+ . Otherwise we could just replace Ω2+ by     Ω2+ ∪ Ω1+ for which the above statements also hold. Since P Ω2+ > P Ω1+ we also get   the required P Ω2+ \ Ω1+ > 0. Step 4: Now we want to take y := z(Ω2+ , ϑ2+ ). To be able to apply Lemma 4.4.5 once more we first have to check whether the inequality z(Ω2+ , ϑ2+ ) > z(Ω+ , ϑ2+ ) = z(Ω− , ϑ2+ )

(4.4.4)

holds. We know from (4.4.3) that z(Ω2+ , ϑ1− ) > z(Ω+ , ϑ1− ). Furthermore ϑ1− , ϑ2+ 6∈ Θ+ ∪ Θ− and, since ϑ2+ > ϑ2− for ϑ2− 6∈ Θ− , also ϑ2+ 6∈ cl Θ− holds true. So all the assumptions of Lemma 4.4.6 are fulfilled, and we get (4.4.4). So we have shown that y := z(Ω2+ , ϑ2+ ) > z(Ω− , ϑ2+ )

¯ 4.4. Minimality of Θ

73

   and we can apply Lemma 4.4.5 for a p satisfying P [Ω− ] > p > max p2− , P Ω1− to get a     set Ω2− , P Ω2− \ Ω1− > 0, P Ω2− > p2− such that z(Ω− , ϑ2+ ) < z(Ω2− , ϑ2+ ) < y = z(Ω2+ , ϑ2+ ). This implies    ϑ2+ ∈ Θ+ (Ω2+ , Ω2− ) = ϑ : P ω ∈ Ω2− : z(Ω2+ , ϑ) ≥ Z(ϑ)(ω) > 0 . Here we can as above assume wlog that Ω2− ⊃ Ω1− . We can now continue this way, repeatedly applying Lemmata 4.4.6 and 4.4.5. Thus   we get the desired sequences Ω1+ ⊂ Ω2+ ⊂ . . . and Ω1− ⊂ Ω2− ⊂ . . . with P Ωn+ > pn+ ,       n−1 P Ωn+ \ Ωn−1 > 0 respectively P Ωn− > pn− , P Ωn− \ Ω− > 0, and ϑn+ ∈ Θ+ (Ωn+ , Ωn− ) + n respectively ϑn− ∈ Θ− (Ωn+1 + , Ω− ). Remark 4.4.9. In the following we will need the seemingly stronger result n ϑn+ ∈ int Θ+ (Ωn+ , Ωn− ) and ϑn− ∈ int Θ− (Ωn+1 + , Ω− ).

(4.4.5)

But this can easily be obtained from the above proposition by taking a sequence ϑˆn+ and ϑˆn− such that ϑn− < ϑˆn− < ϑˆn+ < ϑn+ . Then obviously ϑˆn+ , ϑˆn− 6∈ Θ+ ∪ Θ− , and we can apply the proposition for the sequences ϑˆn+ and ϑˆn− , which gives us then the desired result (4.4.5). We are now ready to state and prove the main theorem of this section, which will give ¯ as a straight forward corollary. Actually the theorem us the indicated minimality of Θ ¯ = 1, then we know that ϑα converges to the unique balanced is more general. If Θ ¯ This is the somehow ‘uninteresting’ case. In the case where Θ ¯ > 1, the strategy ϑ¯ ∈ Θ. following theorem states that we can find an equivalent probability measure such that the sequence of optimal strategies ϑα behaves in basically any given way. More precisely one can choose any two sequences of elements ϑn− < ϑn+ such that ϑn− , ϑn+ 6∈ Θ+ ∪ Θ− . Then it is possible to find an equivalent probability measure Q such that the optimal ϑαn = ϑαQn n→∞ lies in the interval (ϑn− , ϑn+ ) for a sequence αn −−−→ ∞. This way we can for example achieve that the set of accumulation points of ϑα contains any given finite or countably ¯ infinite subset of Θ. Theorem 4.4.10. Assume we have a one time-period market (Ω, F, P) with stock S, bond B = 1, contingent claim C, satisfying Assumption 4.1.1. Furthermore we have a family of utility functions (Uα (x))α such that the risk aversion rα (x) tends to infinity. More precisely we assume that there exist ϑ˜+ ∈ Ω+ and ϑ˜− ∈ Ω− such that for I := (−∞, z+ (ϑ˜+ )) ∪ (−∞, z− (ϑ˜− )) and rα := inf x∈I rα (x) lim rα = ∞.

α→α∞

Assume further that we are given sequences ϑn− < ϑn+ such that ϑn+ , ϑn− 6∈ Θ+ ∪ Θ− for n ≥ 1.

74

Chapter 4. Generalization of balanced strategies

Then there exists an equivalent probability measure Q ∼ P for which Assumption 4.1.1 n→∞ are satisfied and a sequence αn −−−→ α∞ such that the optimal strategy ϑα = ϑαQ for the utility maximization problem EQ[Uα (Z(ϑ))] 7→ max ϑ∈R

satisfies ϑn− < ϑαn < ϑn+ for n = 1, 2, . . . . Proof. Let us start by introducing the shorthand Fαn (ϑ)(ω) := Uα0 n (Z(ϑ)(ω)) |4S(ω)| > 0 for ω ∈ Ω+ ∪ Ω− . Note that ϑ 7→ Fαn (ϑ)(ω) is strictly increasing for ω ∈ Ω− and strictly decreasing for ω ∈ Ω+ . This follows since Z(ϑ) is strictly decreasing on Ω− respectively strictly increasing on Ω+ and x 7→ Uα0 n (x) is strictly decreasing. This fact will be used repeatedly. We obviously want to apply Proposition 4.4.8 to get sequences Ω1+ ⊂ Ω2+ ⊂ . . . and Ω1− ⊂ Ω2− ⊂ . . . , but we have to be a little bit careful with the choice of these sequences of sets. But let us for the moment, just to explain how the equivalent probability measure Q ∼ P will be defined, assume that we already know those sets Ωn+ and Ωn− . We will give 2 3 2 3 constants k, k+ , k+ , . . . , k− , k− , · · · > 0 and then define dQ dQ = = k > 0, dP Ω1 dP Ω1 + − dQ 2 3 n = k · k+ k+ · · · k+ >0 dP Ωn \Ωn−1 +

+

and

dQ 2 3 n = k · k− k− · · · k− > 0. dP Ωn \Ωn−1 − − S n = 1. Since On Ω0 we do not change the measure, i.e., dQ n≥1 Ω+ = Ω+ a.s. and dP Ω 0 S n n≥1 Ω− = Ω− a.s. the probability measure Q on Ω = Ω+ ∪ Ω− ∪ Ω0 is well defined 2 3 2 3 once we specify the constants k, k+ , k+ , . . . , k− , k− , . . . . The constant k is used to ensure that we get a probability measure, i.e., to normalize to 1. We will see that the constants 2 2 k+ , . . . , k− , . . . can be chosen to be less or equal to 1, therefore the probability measure Q will satisfy Assumption 4.1.1. n n Let us now compute the appropriate sets Ωn+ , Ωn− , the constants k+ , k− and the right level of risk aversion rαn such that

ϑn− < ϑαn < ϑn+ holds. We will proceed inductively. So let us fix any n ≥ 1 and assume that Ω1+ , . . . , Ωn+ , 2 n 2 n Ω1− , . . . , Ωn− , k+ , . . . , k+ , k− , . . . , k− and α1 , . . . , αn−1 are already defined. Therefore the

¯ 4.4. Minimality of Θ

75

probability measure Q is, up to the negligible normalizing constant k, already defined on Ωn+ ∪ Ωn− . To be able to start with n = 1 we just have to take any Ω1+ ⊂ Ω+ and any Ω1− ⊂ Ω− such that ϑ1+ ∈ int Θ+ (Ω1+ , Ω1− ) following Proposition 4.4.8 and Remark 4.4.9. n+1 n+1 n+1 We are now going to determine Ωn+1 + , Ω− , k+ , k− , and αn , which will extend the definition of Q to Ωn+1 ∪ Ωn+1 + − . Step 1: Use Proposition 4.4.8 together withhRemark 4.4.9 i to get a preliminary — we n+1 n+1 ˆ+ , P Ω ˆ + \ Ωn > 0 such that might have to change it slightly — set Ω + ˆ n+1 , Ωn ). ϑn− ∈ int Θ− (Ω + − Choose now αn ≥ αn−1 , say α0 = 0, big enough such that     EQ Fαn (ϑn+ ); Ωn− > EQ Fαn (ϑn+ ); Ωn+

(4.4.6)

and h

ˆ n+1 EQˆ Fαn (ϑn− ); Ω +

i

  > EQˆ Fαn (ϑn− ); Ωn− ,

(4.4.7)

ˆ := Q on Ωn ∪ Ωn and dQˆ := k · k 2 · · · k n on Ω ˆ n+1 \ Ωn+ . By EQ˜ [.] we mean where Q + + − + + dP ˜ and we use E[.] = EP[.] as the expectation with respect to any probability measure Q, shorthand for the expectation with respect to the original probability measure P. Why can we find such an αn ? This is possible since we have n ˆ n+1 ϑn+ ∈ int Θ+ (Ωn+ , Ωn− ) and ϑn− ∈ int Θ− (Ω + , Ω− )

ˆ n+1 by the choice of Ωn− , Ωn+ , and Ω following Proposition 4.4.8 and Remark 4.4.9. Let us + show in detail why this implies for example (4.4.6). We know from Theorem 4.3.7 that the optimal ϑα(Ωn ,Ωn ) for the market consisting of Ωn+ ∪ Ωn− — the subscript in ϑα(Ωn ,Ωn ) + − + − is used to indicate, that it is the optimal strategy for this special market, not for the ¯ n , Ωn ) for any market consisting of Ω — converges to the set of balanced strategies Θ(Ω + − n n n probability measure. This implies, since ϑ+ ∈ int Θ+ (Ω+ , Ω− ), that for α large enough the optimal ϑα n n has to be less than ϑn . We take the strategy ϑ˜+ ∈ Θ+ from the + ϑn+

(Ω+ ,Ω− )

assumption of the theorem and get, since 6∈ Θ+ , ϑn+ < ϑ˜+ . Thus z+ (ϑn+ ) ≤ z+ (ϑ˜+ ) and the assumption that the risk aversion tends uniformly to ∞ on (−∞, z+ (ϑ˜+ )) allows us to apply Theorem 4.3.7. ϑα(Ωn ,Ωn ) < ϑn+ gives us now +



    EQ Fαn (ϑ); Ωn− > EQ Fαn (ϑ); Ωn+ for ϑ = ϑn+ . This holds since the left hand side is strictly increasing in ϑ — see the remark at the beginning of the proof — whereas the right hand side is strictly decreasing, and since for ϑα(Ωn ,Ωn ) the left hand side equals, due to the optimality of ϑα(Ωn ,Ωn ) , the right + − + − hand side. The same kind of reasoning shows (4.4.7). ˆ n+1 Step 2: As indicated above the choice of Ω was preliminary, we now might have to + change it slightly, more precisely we might have to enlarge it. Note that an enlargement

76

Chapter 4. Generalization of balanced strategies

poses no problem with respect to the assertions made in Step 1. (4.4.7) holds ‘even more’ ˆ n+1 ˆ n+1 ˆ n+1 if we replace Ω by any Ωn+1 ⊃Ω + + + . And the second statement which involves Ω+ , n+1 n n n ˆ n+1 namely ϑn− ∈ int Θ− (Ω + , Ω− ), also implies ϑ− ∈ int Θ− (Ω+ , Ω− ), since n h i o n n+1 n n+1 ˆ ˆ Θ− (Ωn+1 , Ω ) ⊃ Θ ( Ω , Ω ) := ϑ : P ω ∈ Ω : z(Ω , ϑ) ≥ Z(ϑ)(ω) > 0 . − − + − + − + n n But how do we have to choose the enlargement Ωn+1 + ? From Fα (ϑ− )(ω) > Fα (ϑ+ )(ω) for all ω ∈ Ω+ it follows that

    E Fαn (ϑn− ); Ωn+1 \ Ωn+ > E Fαn (ϑn+ ); Ωn+1 \ Ωn+ . + + In the course of the proof we will need the somewhat stronger result     E Fαn (ϑn− ); Ωn+1 \ Ωn+ > E Fαn (ϑn+ ); Ω+ \ Ωn+ . + A priori this inequality will not hold true. But we can make it hold true if we can     get E Fαn (ϑn+ ); Ω+ \ Ωn+1 small enough. This amounts to making P Ω+ \ Ωn+1 small + +  n+1 enough, i.e., making P Ω+ sufficiently close to P [Ω+ ]. But this is no problem since  n+1  Proposition 4.4.8 allows us to take an Ωn+1 with P Ω+ arbitrarily close to P [Ω+ ]. + n+1 n+1 ˆ Therefore we can replace Ω+ by Ω+ such that all the assertions from step 1 still hold true and that we have in addition     n n n E Fαn (ϑn− ); Ωn+1 \ Ω > E F (ϑ ); Ω \ Ω α + n + + + + .

(4.4.8)

Using Proposition 4.4.8 and Remark 4.4.9 we can now also find a set Ωn+1 ⊃ Ωn− , −  n+1  n+1 P Ω− \ Ωn− > 0 such that ϑn+1 ∈ int Θ− (Ωn+1 + + , Ω− ).     Step 3: Since ϑn− < ϑn+ , we know that EQ Fαn (ϑn+ ); Ωn− > EQ Fαn (ϑn− ); Ωn− and     EQ Fαn (ϑn+ ); Ωn+ < EQ Fαn (ϑn− ); Ωn+ . This gives us         EQ Fαn (ϑn+ ); Ωn− − EQ Fαn (ϑn+ ); Ωn+ > EQ Fαn (ϑn− ); Ωn− − EQ Fαn (ϑn− ); Ωn+ , where the left hand side is by (4.4.6) greater than 0. Therefore we can find a constant n+1 k+ > 0 such that       2 n+1 EQ Fαn (ϑn+ ); Ωn− − EQ Fαn (ϑn+ ); Ωn+ > k · k+ · · · k+ E Fαn (ϑn− ); Ωn+1 \ Ωn+ (4.4.9) +     n n n n > EQ Fαn (ϑ− ); Ω− − EQ Fαn (ϑ− ); Ω+ . (4.4.10) n+1 Let us first show that we can choose this constant k+ ≤ 1. Therefore we have to n+1 look at (4.4.10), which gives us a lower bound for k+ . But (4.4.7) implies that

      2 n k · k+ · · · k+ E Fαn (ϑn− ); Ωn+1 \ Ωn+ + EQ Fαn (ϑn− ); Ωn+ > EQ Fαn (ϑn− ); Ωn− , + n+1 n+1 which insures that (4.4.10) holds for k+ = 1, and therefore there exists an 0 < k+ ≤1 satisfying (4.4.9) and (4.4.10).

¯ 4.4. Minimality of Θ

77

Step 4: We show now that the optimal ϑαn = ϑαQn is greater than ϑn− . This will follow once we show EQ[Fαn (ϑ); Ω− ] < EQ[Fαn (ϑ); Ω+ ] for ϑ = ϑn− since, by the same reasoning as in step 1, the left hand side is increasing in ϑ, the right hand side decreasing, and for ϑ = ϑαn equality holds. We start with (4.4.10), which gives       n+1 2 \ Ωn+ E Fαn (ϑn− ); Ωn+1 · · · k+ EQ Fαn (ϑn− ); Ωn− < EQ Fαn (ϑn− ); Ωn+ + k · k+ +       = EQ Fαn (ϑn− ); Ωn+ + EQ Fαn (ϑn− ); Ωn+1 \ Ωn+ = EQ Fαn (ϑn− ); Ωn+1 + +   ≤ EQ Fαn (ϑn− ); Ω+ . n+1 ≤ 1 sufficiently small such that Thus we can take k−       2 n+1 EQ Fαn (ϑn− ); Ωn− + k · k− · · · k− E Fαn (ϑn− ); Ω− \ Ωn− < EQ Fαn (ϑn− ); Ω+ .

And since       EQ Fαn (ϑn− ); Ω− = EQ Fαn (ϑn− ); Ωn− + EQ Fαn (ϑn− ); Ω− \ Ωn−     2 n+1 E Fαn (ϑn− ); Ω− \ Ωn− , ≤ EQ Fαn (ϑn− ); Ωn− + k · k− · · · k−

(4.4.11)

(4.4.12)

m where we used the fact that k− ≤ 1 for all m ≥ 2, we get by combining (4.4.11) and (4.4.12) the desired     EQ Fαn (ϑn− ); Ω− < EQ Fαn (ϑn− ); Ω+ .

Step 5: The only thing which remains to show is that ϑαn < ϑn+ , which amounts to     showing EQ Fαn (ϑn+ ); Ω− > EQ Fαn (ϑn+ ); Ω+ . We are going to use (4.4.9), but we can   n \ Ω not use it directly, we first have to take a closer look at E Fαn (ϑn− ); Ωn+1 + + . Here comes the — until now unused — inequality (4.4.8) into play. It gives us together with (4.4.9)       2 n+1 \ Ωn+ · · · k+ E Fαn (ϑn− ); Ωn+1 EQ Fαn (ϑn+ ); Ωn− − EQ Fαn (ϑn+ ); Ωn+ > k · k+ +   2 n+1 > k · k+ · · · k+ E Fαn (ϑn+ ); Ω+ \ Ωn+   ≥ EQ Fαn (ϑn+ ); Ω+ \ Ωn+ , m using k+ ≤ 1 for all m ≥ 2 for the last inequality. This gives       EQ Fαn (ϑn+ ); Ω+ < EQ Fαn (ϑn+ ); Ωn− ≤ EQ Fαn (ϑn+ ); Ω− ,

thus completing the proof. ¯ We will assume Let us apply the above, general theorem to show the minimality of Θ. ¯ that is ‘really smaller’ than Θ. ¯ We are interested in that we are given any subset Φ ⊂ Θ the distance between the set Φ and the sequence of strategies ϑα . Thus it is obvious that ¯ since in this case d(ϑ, Φ) = d(ϑ, Θ) ¯ for all strategies Φ is not ‘really smaller’ if cl Φ = Θ, ¯ Φ 6= Θ ¯ to be closed. But the following ϑ ∈ R. Therefore we require the subset Φ ⊂ Θ, corollary shows that for such a subset Φ one can find an equivalent probability measure for which the sequence ϑα = ϑαQ does not converge to the set Φ.

78

Chapter 4. Generalization of balanced strategies

Corollary 4.4.11. We continue with the assumptions on the market and the utility function from Theorem 4.4.10. Assume in addition that we are further given a closed set ¯ Φ 6= Θ. ¯ Then we can find a probability measure Q ∼ P such that for the marΦ ⊂ Θ, ket described by (Ω, F, Q) instead of (Ω, F, P) the distance between the optimal strategy ϑα = ϑαQ and Φ does not tend to 0 as α → α∞ , i.e., lim sup d(ϑα , Φ) > 0. α→α∞

¯ ⊂ R for the interval of balanced strategies Θ, ¯ there has to exist an Proof. Since Φ ( Θ ¯ \ Φ, ϑ− < ϑ+ . If we now apply Theorem 4.4.10 for ϑn = 2ϑ+ + ϑ− interval [3ϑ− , 3ϑ+ ] ⊂ Θ + and ϑn− = ϑ+ + 2ϑ− , n ≥ 1, we get a probability measure Q ∼ P and a sequence αn tending to α∞ such that ϑ+ + 2ϑ− < ϑαn < 2ϑ+ + ϑ− . This implies d(ϑαn , Φ) ≥ ϑ+ − ϑ− > 0, and so lim supα→α∞ d(ϑα , Φ) > 0.

Bibliography [1] K. Arrow, Essays in the theory of risk-bearing, North-Holland Publishing Co., Amsterdam, 1970. [2] D. Becherer, Rational hedging and valuation with utility-based preferences, Ph.D. thesis, Technische Universit¨at Berlin, 2001. [3] P. Cheridito and C. Summer, Utility-maximizing strategies under increasing risk aversion, Preprint, 2002. [4] I. Csisz´ar, I-divergence geometry of probability distributions and minimization problems, Ann. Probability 3 (1975), 146–158. [5] F. Delbaen, P. Grandits, T. Rheinl¨ander, D. Samperi, M. Schweizer, and C. Stricker, Exponential hedging and entropic penalties, Mathematical Finance 12 (2002), no. 2, 99–123. [6] H. F¨ollmer and A. Schied, Stochastic finance; an introduction in discrete time, De Gruyter studies in mathematics, vol. 27, de Gruyter, Berlin; New York, 2002. [7] M. Frittelli, The minimal entropy martingale measure and the valuation problem in incomplete markets, Mathematical Finance 10 (2000), no. 1, 39–52. [8] P. Grandits and C. Summer, Risk averse asymptotics and the optional decomposition, submitted, 2002. [9] J.M. Harrison and S.R. Pliska, Martingales and stochastic integrals in the theory of continuous trading, Stochastic Process. Appl. 11 (1981), no. 3, 215–260. [10] S.D. Hodges and A. Neuberger, Optimal replication of contingent claims under transaction costs, Review of Futures Markets 8 (1989), 222–239. [11] C. Huang and R.H. Litzenberger, Foundations for financial economics, North-Holland Publishing Co., New York, 1988. [12] Y.M. Kabanov and C. Stricker, On the optimal portfolio for the exponential utility maximization: Remarks on the six-authos paper, Mathematical Finance 12 (2002), no. 2, 125–134. 79

80

Bibliography

[13] I. Karatzas and S.E. Shreve, Methods of mathematical finance, Springer-Verlag, New York, 1998. [14] D. O. Kramkov, Optional decomposition of supermartingales and hedging contingent claims in incomplete security markets, Probab. Theory Related Fields 105 (1996), no. 4, 459–479. [15] M.A. Krasnoselskii and Ya.B. Rutickii, Convex functions and orlicz spaces, P. Nordhoff Ltd., 1961. [16] R.C. Merton, Lifetime portfolio selection under uncertainty: the continuous-time model, Review of Economics and Statistics 51 (1969), 247–257. [17] J. Musielak, Orlicz spaces and modular spaces, Lecture Notes in Mathematics, Springer-Verlag, 1983. [18] J. Pratt, Risk aversion in the small and in the large, Econometrica 32 (1964), 122– 136. [19] M.M. Rao and Z.D. Ren, Applications of orlicz spaces, Pure and Applied Mathematics, A Series of Monographs and Textbooks, Marcel Dekker, Inc., 2002. [20] R.T. Rockafellar, Convex analysis, Princeton University Press, Princeton, N.J., 1970. [21] R. Rouge and N. El Karoui, Pricing via utility maximization and entropy, Mathematical Finance 10 (2000), no. 2, 259–276. [22] P.A. Samuelson, Lifetime portfolio selection by dynamic stochastic programming, Review of Economics and Statistics 51 (1969), 239–246. [23] W. Schachermayer, Optimal investment in incomplete financial markets, Mathematical Finance: Bachelier Congress 2000 (H. Geman, D. Madan, St.R. Pliska, and T. Vorst, eds.), 2000, pp. 427–462. [24] W. Schachermayer, A super-martingale property of the optimal portfolio process, submitted, 2002. [25] M. Schweizer, Variance-optimal hedging in discrete time, Math. Oper. Res. 20 (1995), no. 1, 1–32. [26] J. von Neumann and O. Morgenstern, Theory of games and economic behavior, Princeton University Press, Princeton, N.J., 1953.

Curriculum Vitae von Christopher Summer

Geboren am 21. September 1974, in Wien.

1981 – 1985

Karl Stingl Volksschule, M¨odling.

1985 – 1993

BG & BRG M¨odling, Franz-Keimgasse; Matura mit ausgezeichnetem Erfolg im Juni 1993.

Okt. 1993

Beginn des Studiums der Mathematik (Diplomstudium) an der Universit¨at Wien.

J¨an. 1995

1. Diplompr¨ ufung, mit ausgezeichnetem Erfolg.

Juli 1997

2. Diplompr¨ ufung, mit ausgezeichnetem Erfolg; Diplomarbeit bei Prof. Cigler u ¨ber Kombinatorik.

Sep. 1997 – Juni 1998

Studienaufenthalt an der Universit¨at von Kalifornien, San Diego.

Sep. 1998 – Aug. 2000 Vertragsassistent an der Technischen Universit¨at Wien, Institut f¨ ur Analysis. 1999 – 2001

Teilnahme am CCEFM (Center for Central European Financial Markets) Programm, Universit¨at Wien.

1999 – 2002

Dissertation unter der Betreuung von Prof. Walter Schachermayer; Titel: ,,Utility Maximization and Increasing Risk Aversion“.

Seit Aug. 2000

Forschungsassistent an der Technischen Universit¨at Wien, Institut f¨ ur Finanz- und Versicherungsmathematik.

Dissertation Utility Maximization and Increasing Risk ...

get 100 Euro for sure compared to having a 50–50 chance of getting 200 Euro or nothing. This is exactly .... bank account, being equal to 1. We assume that the ...

623KB Sizes 1 Downloads 244 Views

Recommend Documents

Utility maximization under increasing risk aversion in ...
Princeton University ... time 0 value of v = ξ + ϑS0 and a time T value of ξ + ϑST = v + ϑ∆S. In addition .... proved in a continuous-time setup (see e.g [3, 1, 2]).

Robust Utility Maximization with Unbounded Random ...
pirical Analysis in Social Sciences (G-COE Hi-Stat)” of Hitotsubashi University is greatly ... Graduate School of Economics, The University of Tokyo ...... Tech. Rep. 12, Dept. Matematica per le Decisioni,. University of Florence. 15. Goll, T., and

do arbitrage free prices come from utility maximization?
Bank account with no interest. Exist martingale measures. No constraints on strategy H. Agent. Maximal expected utility u(x,q) := sup. H. E[U(x + qf + ∫ T. 0.

A Suboptimal Network Utility Maximization Approach for ...
Department of Computer Science, Princeton University, ... admission control approach for such utilities, called “self- ..... metrics and rate requirements. Recall that ...

A Suboptimal Network Utility Maximization Approach for ...
Department of Computer Science, Princeton University,. Emails: {mstalebi, ak, hajiesamaili}@ipm.ir, [email protected]. Abstract—Wired and wireless data ...

Distributed Utility Maximization for Network Coding Based Multicasting ...
include for example prior works on Internet flow control [9] and cross-layer ...... wireless network using network coding have been formulated in [20], [21] ..... [3] T. Ho, R. Koetter, M. Médard, D. R. Karger, and M. Effros, “The benefits of codi

Distributed Utility Maximization for Network Coding ...
The obtained r∗ and g∗ will be used as the operating pa- rameters of the practical network coding system. Specifically, the source node will set the end-to-end ...

Maximization of Non-Concave Utility Functions in ...
... setting, a possibly non-concave utility function U is considered, with domain ... utility maximisation is carried out on the set of claims whose price is below a ...... We would like to check that Theorem 2.11 holds in a concrete, broad class of 

Distributed Utility Maximization for Network Coding Based Multicasting ...
wireless network using network coding have been formulated in [20], [21] ..... [3] T. Ho, R. Koetter, M. Médard, D. R. Karger, and M. Effros, “The benefits of coding ...

goldcore.com-Yahoo Hacking Highlights Cyber Risk and Increasing ...
3/5. Page 3 of 5. Main menu. Displaying goldcore.com-Yahoo Hacking Highlights Cyber Risk and Increasing Importance of Physical Gold.pdf. Page 1 of 5.

goldcore.com-Yahoo Hacking Highlights Cyber Risk and Increasing ...
goldcore.com-Yahoo Hacking Highlights Cyber Risk and Increasing Importance of Physical Gold.pdf. goldcore.com-Yahoo Hacking Highlights Cyber Risk and ...

Reducing Risk, Increasing Protective Factors: Findings ...
use, and alcohol use, associated with both reduction in risk factors and an increase in protective factors. Methods. Study Population. Data were obtained from ...

Reducing Risk, Increasing Protective Factors: Findings from the ...
predicting the outcomes given combinations of the risk and protective factors. Results: Rage was the strongest risk factor for every health-compromising behavior ...

Intermediate Microeconomics - Profit Maximization and ...
Variable factor: labor. Cobb-Douglas production function. Profit maximization problem becomes: max. L. pAKαL1−α − wL − rK. Profit maximizing condition: pA(1 − α)(K/L)α. ︸. ︷︷. ︸ marginal revenue. = w. ︸︷︷︸ .... (pt −ps)(

Indeterminacy and Increasing Returns
We investigate properties of the one-sector growth model with increasing returns under two organizational structures ... We thank the C.V. Starr Center at NYU and the Risk Project of the Department of. Applied Economics at ... finding is that there e

Increasing participation.pdf
critically judge the ideas as you approach a solution. 9) Clustering is an alternative to brainstorming. To do. this, begin with a word, name, or concept written in ...

Increasing Returns and Economic Geography
Aug 12, 2011 - We use information technology and tools to increase productivity and ... tion get a reasonable degree of attention in industrial organization.

Utility Fees, Rates and Collections.pdf
Plant investment. Whoops! There was a problem loading this page. Retrying... Utility Fees, Rates and Collections.pdf. Utility Fees, Rates and Collections.pdf.

Master dissertation
Aug 30, 2006 - The Master of Research dissertation intends to provide a background of the representative social and political discourses about identity and cultural violence that might anticipate the reproduction of those discourses in education poli

Ph.D Dissertation
me by running numerical simulations, modeling structures and providing .... Figure 2.9: Near-nozzle measurements for a helium MPD thruster [Inutake, 2002]. ...... engine produces a maximum Isp of 460 s [Humble 1995], electric thrusters such ...

Utility Rules and Regulations.pdf
Page 2 of 13. JAEN WATER DISTRICT 2. Utility Rules and Regulations. GENERAL POLICY ON WATER SERVICE. 1. It is the declared policy of Jaen Water ...

Repetition Maximization based Texture Rectification
Figure 1: The distorted texture (top) is automatically un- warped (bottom) using .... however, deals in world-space distorting and not with cam- era distortions as is ...

Repetition Maximization based Texture Rectification
images is an essential first step for many computer graph- ics and computer vision ... matrix based rectification [ZGLM10] can be very effective, most of our target ...

Dissertation
Deformation theory, homological algebra and mirror symmetry. In Geometry and physics of branes (Como, 2001), Ser. High Energy Phys. Cosmol. Gravit., pages 121–209. IOP, Bristol, 2003. [26] Kenji Fukaya and Yong-Geun Oh. Floer homology in symplectic