
Social Learning and the Shadow of the Past
Yuval Heller (Bar Ilan) and Erik Mohlin (Lund)
Technion, Game Theory Seminar, November 2017


Motivation 1: Social Learning

Agents often make decisions without fully knowing the costs and benefits. A new agent may learn from the experience of others by basing his decision on observations of actions taken by a few old agents. E.g., Arthur (1989, 1994); Young (1993); Kandori, Mailath & Rob (1993); Ellison & Fudenberg (1993, 1995); Banerjee & Fudenberg (2004); Acemoglu, Dahleh, Lobel & Ozdaglar (2011); Sorensen & Smith (2014).

Question: When does the initial behavior of the population have a lasting effect?


Motivation 2: Games with Random Matching

Agents are randomly matched to play a game. An agent may base his choice of action on a few observations of how his current opponent behaved in the past. E.g., community enforcement in the Prisoner's Dilemma (Rosenthal, 1979; Okuno-Fujiwara & Postlewaite, 1995; Nowak & Sigmund, 1998; Takahashi, 2010; Heller & Mohlin, 2017).

Question: When does the initial behavior of the population have a lasting effect?


Modeling Approach & Research Question

We divide the description of the interaction into two parts:
1. Learning rule: how new agents choose their actions (agents keep playing the same actions throughout their lifetimes).
2. Environment: all other aspects: what agents observe, the number of feasible actions, heterogeneity in the population, etc.

Question: In which environments can the initial behavior of the population have a lasting effect?


Brief Summary of Main Results

Average number of actions observed by a new agent:    ≤ 1                       ∈ (1, 2]     > 2
Is there a rule with multiple steady states?           No (global convergence)   Yes          Yes
Is there a rule with multiple locally stable states?   No (global convergence)   Sometimes    Yes


Example: Competing Technologies (Environment)
- Two competing technologies, a and b, with increasing returns.
- Uncertainty about the initial share of agents following technology a (e.g., uniformly distributed on [0, 1]).
- Agents have small symmetric idiosyncratic preferences.
- In each period some agents are replaced with new agents.
- 99% of the new agents observe the technology of one incumbent.
- The remaining 1%: in Case (I) they observe nothing; in Case (II) they observe three actions.


Example: Competing Technologies (Learning Rule)

Learning rule:
- An agent observing a single incumbent mimics his technology.
- An agent observing three incumbents mimics the majority.
- An agent observing nothing chooses a technology based on his own idiosyncratic preferences.

One can show that this learning rule is a Nash equilibrium, and that it is the unique equilibrium if agents are sufficiently impatient.


Example: Competing Technologies (Long-Run Behavior)

One can show that (see the simulation sketch below):
- Case (I): global convergence to 50%-50%. Mean sample size = 0.99 < 1.
- Case (II): convergence to everyone playing the action initially played by a majority of the agents. Mean sample size = 1.02 > 1.
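To make the contrast concrete, here is a minimal numerical sketch of the induced one-dimensional dynamics (our own illustration, not code from the paper; the turnover rate β = 0.1 and the starting shares are assumed). Here x is the share of agents using technology a; symmetric idiosyncratic preferences mean an agent who observes nothing picks a with probability 1/2.

```python
def step(x, beta, case):
    """One period of the learning process: survivors keep their technology,
    a mass beta of newcomers chooses according to the learning rule."""
    observers = x                    # 99% observe one incumbent and mimic him
    if case == 1:
        rest = 0.5                   # Case (I): 1% observe nothing -> 50/50
    else:
        rest = 3*x**2 - 2*x**3       # Case (II): 1% observe three actions and
                                     # follow the majority (binomial probability)
    newcomers = 0.99 * observers + 0.01 * rest
    return (1 - beta) * x + beta * newcomers

for case in (1, 2):
    for x0 in (0.3, 0.7):
        x = x0
        for _ in range(200_000):
            x = step(x, beta=0.1, case=case)
        print(f"Case ({'I' * case}), initial share {x0}: long-run share {x:.3f}")
# Case (I): both starting points end at 0.500 (global convergence).
# Case (II): 0.3 -> 0.000 and 0.7 -> 1.000 (the initial majority locks in).
```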


Example 2: Random Matching & Indirect Reciprocity

Agents are randomly matched to play the Prisoner's Dilemma. Learning rule: agents play uniformly when they observe no past actions of the partner; when an agent observes the partner's past actions, she plays the most frequently observed action in her sample. The rule might be consistent with reciprocal preferences.

Same learning dynamics as in the previous example...


Population State

- Infinite population (a continuum of agents with mass one).
- Time is discrete: t = 1, 2, 3, ...
- Each agent chooses an action a ∈ A.
- Population state γ ∈ ∆(A): the aggregate distribution of actions.


New Agents and Samples

At each period a fraction β ∈ (0, 1) of the agents die and are replaced with new agents (or, alternatively, reevaluate their actions). The remaining agents continue to play the same action as they played in the past (inertia).

Each new agent observes a sample of random size l. ν denotes the distribution of the sample size l. The sampled actions are i.i.d. draws from the population state.


Environment & Mean Sample Size

An environment is a tuple E = (A, β, ν) describing: A - a finite set of actions, β - the fraction of new agents, ν - the distribution of the sample size. Let µl denote the mean sample size, i.e., the expected number of actions observed by a random new agent in the population:

µl = Σ_l ν(l) · l.
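For instance (a trivial sketch with an assumed distribution ν):

```python
# Mean sample size for a hypothetical sampling distribution nu.
nu = {0: 0.2, 1: 0.3, 3: 0.5}            # nu[l] = probability of sample size l
mu_l = sum(l * p for l, p in nu.items())
print(mu_l)                               # 1.8 expected observed actions
```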


Learning Rule

A (stationary) learning rule σ : M → ∆(A) describes the behavior of a new agent who observes a sample m ∈ M. E.g., playing the most frequently observed action, as in the motivating example.

A learning process is a pair consisting of an environment and a learning rule: P = (E, σ).


Population Dynamics

- An initial state and a learning process determine a new state.
- Let f_P : Γ → Γ denote the mapping between states induced by a single step of the learning process P (see the sketch below).
- We say that γ∗ is a steady state if f_P(γ∗) = γ∗.
- For each t > 1, let f_P^t(γ̂) denote the state induced after t steps of the learning process P, given an initial state γ̂.
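For two actions the map f_P reduces to a map on x, the share playing a. A minimal implementation sketch (our own illustration; the rule σ, the distribution ν, and β = 0.1 are assumptions chosen for the example):

```python
from math import comb

def f_P(x, beta, nu, sigma):
    """One step of the learning process for A = {a, b}: a mass beta of
    newcomers samples i.i.d. from the state and applies the rule sigma,
    where sigma(k, l) = Pr(play a | a observed k times in a sample of l)."""
    newcomers = sum(
        p * comb(l, k) * x**k * (1 - x)**(l - k) * sigma(k, l)
        for l, p in nu.items() for k in range(l + 1)
    )
    return (1 - beta) * x + beta * newcomers

# Example rule: play a iff a was observed at least once.
sigma = lambda k, l: 1.0 if k >= 1 else 0.0
x = 0.2
for _ in range(2_000):
    x = f_P(x, beta=0.1, nu={1: 0.5, 2: 0.5}, sigma=sigma)
print(round(x, 3))   # converges to the steady state x = 1 (everyone plays a);
                     # x = 0 is another steady state of this rule
```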


Local Stability and Global Attractors

γ∗ is locally stable if a population beginning near γ∗ remains close to γ∗ and eventually converges to γ∗. (Formally: for every ε > 0 there exists δ > 0 such that ‖γ̂ − γ∗‖ < δ implies (1) ‖f_P^t(γ̂) − f_P^t(γ∗)‖ < ε for all t, and (2) lim_{t→∞} f_P^t(γ̂) = γ∗.)

γ∗ is an (almost-)global attractor if the population converges to γ∗ from any (interior) initial state. (Formally: lim_{t→∞} f_P^t(γ̂) = γ∗ for all γ̂ (for all totally mixed γ̂).)



Upper Bound on the Distance Between New States

Theorem. The distance between the new population states is at most (1 − β + β·µl) times the distance between the old population states:

‖f_P(γ) − f_P(γ′)‖₁ ≤ (1 − β + β·µl) · ‖γ − γ′‖₁

(strict inequality if agents may observe more than one action).

Corollary. If µl ≤ 1 and ν(1) < 1, then f_P is a weak contraction mapping (i.e., ‖f_P(γ) − f_P(γ′)‖₁ < ‖γ − γ′‖₁ for all γ ≠ γ′), so f_P admits a global attractor.


Upper Bound - Sketch of Proof

Sketch of Proof. Distance between the distributions of actions of new agents ≤ distance between the distributions of samples ≤ mean sample size × distance between the old population states. Formally:

‖f_P(γ) − f_P(γ′)‖₁
≤ β · Σ_l ν(l) · ‖ψ_{l,γ} − ψ_{l,γ′}‖₁ + (1 − β) · ‖γ − γ′‖₁   (Lemma 1)
≤ β · Σ_l ν(l) · l · ‖γ − γ′‖₁ + (1 − β) · ‖γ − γ′‖₁   (Lemma 3; strict if l > 1)
= (β · Σ_{l∈N} ν(l) · l + (1 − β)) · ‖γ − γ′‖₁
= (β·µl + 1 − β) · ‖γ − γ′‖₁,

where ψ_{l,γ} denotes the distribution of samples of length l, given state γ.
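A numerical spot-check of the bound under assumed parameters (our own sketch, not from the paper; with two actions the state is x ∈ [0, 1] and ‖γ − γ′‖₁ = 2|x − x′|, so the bound reads |f_P(x) − f_P(x′)| ≤ (1 − β + β·µl)·|x − x′|):

```python
import random
from math import comb

beta = 0.3
nu = {0: 0.2, 1: 0.3, 3: 0.5}                 # assumed: mu_l = 1.8
mu_l = sum(l * p for l, p in nu.items())
# Follow-the-majority rule (ties and empty samples resolved uniformly).
sigma = lambda k, l: 1.0 if 2*k > l else (0.5 if 2*k == l else 0.0)

def f_P(x):
    newcomers = sum(p * comb(l, k) * x**k * (1 - x)**(l - k) * sigma(k, l)
                    for l, p in nu.items() for k in range(l + 1))
    return (1 - beta) * x + beta * newcomers

worst = max(abs(f_P(x) - f_P(y)) / abs(x - y)
            for x, y in [(random.random(), random.random()) for _ in range(10_000)]
            if x != y)
print(f"worst ratio {worst:.3f} <= bound {1 - beta + beta * mu_l:.3f}")
```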


Characterizing Environments with Multiple Steady States

Theorem. Let E be an environment. The following conditions are equivalent:
1. µl > 1 or ν(1) = 1.
2. There exists a learning rule σ such that the learning process (E, σ) admits multiple steady states.


Multiple Steady States: Sketch of Proof (1)

If agents always observe a single action (i.e., ν(1) = 1):
- Learning rule: each agent plays the observed action.
- Then every initial state is a steady state.
- This rule is consistent with best-replying in the example of competing technologies.



Multiple Steady States: Sketch of Proof (2)

If µl > 1, consider the learning rule under which each agent plays a if he has observed action a at least once, and plays b otherwise. By Lemma 4, there are two steady states:
- Everyone plays b.
- A share x > 0 of the agents play a, and the rest play b.

Remark. This learning rule might be consistent with best-replying, e.g., when the initial state is either "no one plays a" or "90% play a".
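The interior steady state of Lemma 4 solves g(x) = Σ_l ν(l)·(1 − (1 − x)^l) = x. A small sketch locating it by bisection (our own illustration; the distribution ν is an assumption, chosen so that µl = 1.8 > 1):

```python
def g(x, nu):
    """Probability that a newcomer observes action a at least once."""
    return sum(p * (1 - (1 - x)**l) for l, p in nu.items())

nu = {0: 0.4, 3: 0.6}          # assumed sample-size distribution, mu_l = 1.8

# g(x) - x is positive near 0 (slope mu_l > 1 at x = 0) and g(1) - 1 <= 0,
# so bisection brackets the interior fixed point.
lo, hi = 1e-9, 1.0
for _ in range(100):
    mid = (lo + hi) / 2
    if g(mid, nu) > mid:
        lo = mid
    else:
        hi = mid
print(round(lo, 4))            # ~0.543: the steady state with x > 0
```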


Multiple Locally Stable States

In the construction used in the previous result, only one of the steady states is locally stable. Moreover, the state x > 0 is an almost-global attractor.

What is the minimal mean sample size that guarantees the existence of a rule with multiple locally stable states?



µl > 2 ⇒ Multiple Locally Stable States

Theorem. Let E be an environment satisfying µl > 2. There exists a rule σ such that the process (E, σ) admits multiple locally stable states.

Sketch of Proof (1): The Learning Rule.
- Observing action a twice or more ⇒ play a.
- Never observing action a ⇒ play b.
- Observing action a exactly once ⇒ play a with probability q and b with probability 1 − q, where q < 1/µl and 1/µl − q ≪ 1.

23 / 34

Introduction Model Main Results Extensions and Discussion

Upper Bound µl > 1 ⇔ Multiple Steady States µl > 2 ⇒ Multiple Locally stable States 1 ≤ µl ≤ 2: Sometimes Multiple Locally stable States

Sketch of Proof (2) (a numerical sketch follows).
- Assume that a is played with frequency x ≪ 1 ("state x"). The share of new agents who play a is f(x) = q·µl·x + (1 − 2·q)·O(x²).
- q < 1/µl ⇒ a state x very close to zero converges to zero (because q·µl·x + O(x²) < x).
- 1/µl − q ≪ 1 ⇒ a state in which a few more agents play a converges to even more agents playing a (because 1 − 2·q > 0).
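A numerical illustration of this construction (our own sketch, with assumed parameters): take ν(3) = 1, so µl = 3 > 2, and q = 1/3 − 0.01.

```python
beta, q = 0.1, 1/3 - 0.01      # assumed turnover rate; q slightly below 1/mu_l

def newcomer_share_a(x):
    at_least_twice = 3*x**2*(1 - x) + x**3   # a observed 2 or 3 times -> play a
    exactly_once = 3*x*(1 - x)**2            # a observed once -> play a w.p. q
    return at_least_twice + q * exactly_once

for x0 in (0.001, 0.2):
    x = x0
    for _ in range(50_000):
        x = (1 - beta) * x + beta * newcomer_share_a(x)
    print(f"start {x0}: long run {x:.3f}")
# start 0.001 -> 0.000: q < 1/mu_l extinguishes a small invasion of a.
# start 0.2   -> 1.000: above the small unstable threshold, a takes over.
# Both x = 0 and x = 1 are locally stable.
```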


1 ≤ µl ≤ 2: Sometimes Multiple Locally Stable States

We present two families of environments, each with mean sample size 1 ≤ µl ≤ 2:
- First family: every learning rule admits at most one locally stable state.
- Second family: there is a learning rule with two locally stable states.

Conclusion: some (but not all) environments with 1 ≤ µl ≤ 2 admit a learning rule with multiple locally stable states.

Social Learning and the Shadow of the Past

25 / 34

Introduction Model Main Results Extensions and Discussion

Upper Bound µl > 1 ⇔ Multiple Steady States µl > 2 ⇒ Multiple Locally stable States 1 ≤ µl ≤ 2: Sometimes Multiple Locally stable States


l ≤ 2 ⇒ At Most One Locally Stable State

Theorem. Let E = ({a, b}, β, ν) be an environment satisfying supp(ν) ⊆ {0, 1, 2}. Then for any rule σ, the process (E, σ) admits at most one locally stable state.

Sketch of Proof. The state is the frequency x ∈ [0, 1] of agents playing a. The maximal sample size is 2 ⇒ fσ(x) is a polynomial of degree two ⇒ there are at most two steady states solving fσ(x) = x. A simple geometric argument shows that at most one of these steady states can be locally stable (see the algebra below and the figures on the next slide).
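The geometric argument can also be written algebraically (our own sketch, assuming fσ(x) − x has two distinct roots x₁ < x₂ and using the slope criterion for stability):

```latex
f_\sigma(x) - x = c\,(x - x_1)(x - x_2)
\;\Longrightarrow\;
f_\sigma'(x_1) - 1 = c\,(x_1 - x_2), \qquad
f_\sigma'(x_2) - 1 = c\,(x_2 - x_1),
```

so fσ′(x₁) + fσ′(x₂) = 2: a slope below 1 at one fixed point forces a slope above 1 at the other, and a fixed point where the slope exceeds 1 repels nearby states.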

Social Learning and the Shadow of the Past

26 / 34

Introduction Model Main Results Extensions and Discussion

Upper Bound µl > 1 ⇔ Multiple Steady States µl > 2 ⇒ Multiple Locally stable States 1 ≤ µl ≤ 2: Sometimes Multiple Locally stable States

l ≤ 2 ⇒ At Most One Locally Stable State

[Figure slide: plots of fσ(x) against the 45° line; the figures did not survive extraction.]



l ∈ {1, 3} ⇒ "Follow the Majority" Has Two Locally Stable States

Theorem. Let E = ({a, b}, β, ν) be an environment with ν(1) < 1 and ν(1) + ν(3) = 1. Then there is a rule σ∗ such that the process (E, σ∗) admits two locally stable states.

Sketch of Proof. Learning rule: each agent plays the most frequently observed action in his sample (as in the motivating example). Two locally stable states: everyone playing a and everyone playing b (with an unstable steady state at 0.5·a + 0.5·b).
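A quick numerical sketch of the bistability (our own illustration; ν(1) = ν(3) = 0.5 and β = 0.1 are assumed values, giving µl = 2):

```python
beta, nu1, nu3 = 0.1, 0.5, 0.5

def step(x):
    majority_of_3 = 3*x**2 - 2*x**3          # Pr(a is the majority of 3 draws)
    newcomers = nu1 * x + nu3 * majority_of_3
    return (1 - beta) * x + beta * newcomers

for x0 in (0.45, 0.55):
    x = x0
    for _ in range(50_000):
        x = step(x)
    print(f"start {x0}: long run {x:.3f}")
# 0.45 -> 0.000 and 0.55 -> 1.000: the two monomorphic states are locally
# stable; x = 0.5 is an unstable steady state separating their basins.
```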

Social Learning and the Shadow of the Past

28 / 34

Introduction Model Main Results Extensions and Discussion

Extensions Discussion & Related Literature Conclusion Backup Slides

Responsiveness and Effective Sample Size

In the paper we present an upper bound on how responsive a learning rule is to a change in the sample. We use this bound to define an effective sample size, which is (weakly) smaller than the mean sample size, and we use it to obtain a (weakly) stronger result on learning processes that admit global attractors.


Heterogeneous Populations

In many applications the population may be heterogeneous, i.e., it includes various groups that differ in their sampling procedures and learning rules. E.g., Ellison & Fudenberg (1993), Young (1993), Munshi (2004).

In the paper, we formally extend our model and results to deal with heterogeneous populations.


Non-Stationary Environments and Common Shocks

In the paper we extend our model and results to deal with time-dependent learning rules, and we characterize when a non-stationary environment globally converges to a time-dependent sequence of states that is independent of the initial state.

We further extend the model to deal with stochastic shocks that influence the learning rules of all agents (at the aggregate level), and we characterize when the initial population state may have a lasting effect in such environments.


Repeated Interactions without a Global Calendar Time

Agents are randomly matched within a community, and these interactions have been going on since time immemorial. Arguably, such situations should be modeled as steady states of environments without a calendar time (e.g., Rosenthal, 1979; Okuno-Fujiwara & Postlewaite, 1995; Phelan & Skrzypacz, 2006; Heller & Mohlin, 2017).

Is the distribution of stationary strategies used by the players sufficient to uniquely determine the steady state? Our main results show that this is true if (1) µl < 1, or (2) l ≤ 2 and |A| = 2.

Social Learning and the Shadow of the Past

32 / 34

Introduction Model Main Results Extensions and Discussion

Extensions Discussion & Related Literature Conclusion Backup Slides

Related Literature

The existing literature focuses on a specific learning rule: myopic best reply.
- Arthur (1989) (and Arthur, 1994; Kaniovski & Young, 1995): social learning is path dependent when technologies have increasing returns.
- Young (1993) and Kandori et al. (1993): long-run stability is independent of the initial conditions.
- Sandholm (2001) (and Oyama et al., 2015): if agents observe k actions and the game admits a 1/k-dominant action a∗, then there is global convergence to a∗.

Our model differs from the existing literature by studying general environments with arbitrary learning rules (and arbitrary payoffs).

Social Learning and the Shadow of the Past

33 / 34

Introduction Model Main Results Extensions and Discussion

Extensions Discussion & Related Literature Conclusion Backup Slides

Conclusion

Mean sample size (µl):                                 µl ≤ 1                    µl ∈ (1, 2]   µl > 2
Is there a rule with multiple steady states?           No (global convergence)   Yes           Yes
Is there a rule with multiple locally stable states?   No (global convergence)   Sometimes     Yes

Extensions: heterogeneous populations, non-stationary environments, and common random shocks.


Backup Slides


L1-Norm - Definitions

The L1-distances between sample distributions ψ_{l,γ}, ψ_{l,γ′} ∈ ∆(A^l) and between states γ, γ′ ∈ ∆(A) are:

‖ψ_{l,γ} − ψ_{l,γ′}‖₁ = Σ_{m∈A^l} |ψ_{l,γ}(m) − ψ_{l,γ′}(m)|,
‖γ − γ′‖₁ = Σ_{a∈A} |γ(a) − γ′(a)|.

Proof of Lemma 1

‖f_P(γ) − f_P(γ′)‖₁ = Σ_{a∈A} |f_P(γ)(a) − f_P(γ′)(a)|   (definition of the L1 norm)
= Σ_{a∈A} | (β · Σ_l ν(l) · Σ_{m∈A^l} ψ_{l,γ}(m) · σ_m(a) + (1 − β) · γ(a))
  − (β · Σ_l ν(l) · Σ_{m∈A^l} ψ_{l,γ′}(m) · σ_m(a) + (1 − β) · γ′(a)) |
= Σ_{a∈A} | β · Σ_l ν(l) · Σ_{m∈A^l} (ψ_{l,γ}(m) − ψ_{l,γ′}(m)) · σ_m(a) + (1 − β) · (γ(a) − γ′(a)) |
≤ β · Σ_l ν(l) · Σ_{a∈A} Σ_{m∈A^l} |ψ_{l,γ}(m) − ψ_{l,γ′}(m)| · σ_m(a) + (1 − β) · Σ_{a∈A} |γ(a) − γ′(a)|   (triangle inequality)
≤ β · Σ_l ν(l) · ‖ψ_{l,γ} − ψ_{l,γ′}‖₁ + (1 − β) · ‖γ − γ′‖₁.   (Lemma 2)

Proof of Lemma 2 (the triangle-inequality step)

Σ_{a∈A} | Σ_{m∈A^l} (ψ_{l,γ}(m) − ψ_{l,γ′}(m)) · σ_m(a) |
≤ Σ_{a∈A} Σ_{m∈A^l} |ψ_{l,γ}(m) − ψ_{l,γ′}(m)| · σ_m(a)
= Σ_{m∈A^l} |ψ_{l,γ}(m) − ψ_{l,γ′}(m)| · Σ_{a∈A} σ_m(a)
= Σ_{m∈A^l} |ψ_{l,γ}(m) − ψ_{l,γ′}(m)| · 1 = ‖ψ_{l,γ} − ψ_{l,γ′}‖₁.

Proof of Lemma 3

‖ψ_{l,γ} − ψ_{l,γ′}‖₁ = Σ_{ā∈A^l} |ψ_{l,γ}(ā) − ψ_{l,γ′}(ā)|
= Σ_{ā∈A^l} | Π_{1≤i≤l} γ(a_i) − Π_{1≤i≤l} γ′(a_i) |   (sampled actions are i.i.d.)
= Σ_{ā∈A^l} | Σ_{1≤i≤l} (γ(a_i) − γ′(a_i)) · Π_{j<i} γ′(a_j) · Π_{j>i} γ(a_j) |   (telescoping series)
≤ Σ_{1≤i≤l} Σ_{a_i∈A} |γ(a_i) − γ′(a_i)| · 1 · 1   (the remaining coordinates each sum to 1)
= Σ_{1≤i≤l} ‖γ − γ′‖₁ = l · ‖γ − γ′‖₁.

Telescoping Series

Π_{1≤i≤l} a_i − Π_{1≤i≤l} b_i
= (a_1·…·a_l − b_1·a_2·…·a_l) + (b_1·a_2·…·a_l − b_1·b_2·a_3·…·a_l) + … + (b_1·…·b_{l−1}·a_l − b_1·…·b_l)
= (a_1 − b_1)·a_2·…·a_l + (a_2 − b_2)·b_1·a_3·…·a_l + … + (a_l − b_l)·b_1·…·b_{l−1}
= Σ_{1≤i≤l} (a_i − b_i) · Π_{j<i} b_j · Π_{j>i} a_j.

Proof of Lemma 4

A share x∗ of agents playing a∗ is a steady state iff

g(x) ≡ Σ_l ν(l) · Pr(sampling a∗ at least once) = Σ_l ν(l) · (1 − (1 − x)^l) = x.

g(x) is continuous and increasing, with g(1) ≤ 1. For x ≪ 1, 1 − (1 − x)^l can be Taylor-approximated:

1 − (1 − x)^l = 1 − (1 − l·x + O(x²)) = l·x + O(x²),

so g(x) = µl·x + O(x²) > x for small x > 0 (since µl > 1). Together with g(1) ≤ 1, the intermediate value theorem yields 0 < x∗ ≤ 1 with g(x∗) = x∗, so γ_{x∗} is a steady state.
