ECO421 Economics of Information, Winter 2020

Marcin Pęski

Contact information

Instructor: Marcin Pęski. Lecture: Monday, 2-5pm, WI 523. Office hours: Tuesday 2-6pm by appointment (email), Max Gluskin 207.
TA: Rami Abou-Seido. Office hours: TBA.

Course information

The course syllabus can be found under this link.
The (tentative) course schedule and the assigned readings (from the book) are listed below.
There is no required book for the class. For some topics, I recommend additional readings, which are listed below (Osborne refers to “An introduction to game theory” by M. Osborne).
06-01 — What is information? How to reason about information?
  Concepts: States of the world. Information. Information structure. Knowledge sets. Reasoning about knowledge. Common knowledge.
  Skills: Represent a story with asymmetric information with a state space and the players’ information. Explain how new information affects the information structure. Reason about knowledge and knowledge about knowledge.
  Examples: Two generals.
13-01 — How to evaluate evidence?
  Concepts: Type space. Bayes formula. Prior and interim beliefs. Bayesian Nash equilibrium.
  Skills: Represent a story with asymmetric information with a type space. Use the Bayes formula to find conditional probabilities. Derive interim beliefs from a prior. Find Bayesian Nash equilibria in games with incomplete information.
Why is it difficult to get flood insurance? Why are economy class seats so uncomfortable?
  Concepts: Market unraveling due to adverse selection. Screening with menus. Individual rationality and incentive compatibility.
  Skills: Find conditions for market unraveling due to adverse selection. Check whether a menu is incentive compatible and individually rational. Find optimal menus.
  Examples: Market for lemons. Health insurance with a pre-existing condition. Price-quantity discrimination.
When talk is cheap: why should you always bring your transcript to the interview? And why do top MBA schools not reveal grades?
  Concepts: Communication game. Equilibrium with beliefs. Incentives to communicate in cheap talk games. Communication with verifiable talk. Verifiable information disclosure. Good news and bad news.
  Skills: Find equilibria in simple cheap talk games. Explain when refusing to reveal information is informative.
  Examples: Doctor and patient with aligned and misaligned preferences. Grade revelation.
  Extra readings: Osborne 10.8 (wait with the reading until after the lecture).
03-02 — Learning from others’ behavior.
  Concepts: Extensive form games. Nash and subgame perfect equilibrium. Extensive form games with incomplete information. Weak perfect Bayesian equilibrium.
  Skills: Find equilibria of extensive form games. Use the Bayes formula and players’ strategies to update beliefs. Find equilibria of simple extensive form games with incomplete information.
  Examples: Entry game.
  Extra readings: Osborne 10.1-10.3.
10-02 — Midterm.
Why do we go to the university? Suits, wedding rings, and red frogs.
  Concepts: Signaling games. Pooling and separating equilibria. Intuitive criterion.
  Skills: Find equilibria of signaling games.
  Examples: Spence’s job market. Conspicuous consumption.
  Extra readings: Osborne 10.5-10.7.
02-03 — How to pay a worker.
  Concepts: Moral hazard. Linear contracts and bonuses. Optimal contracts.
09-03 — How to pay a worker, part II. TBA.
16-03 — “We never negotiate with terrorists.”
  Concepts: Modeling reputations. WPBE with mixed strategies. Conditions for successful reputation building (payoffs, length of the game, beliefs).
  Skills: Finding reputation equilibria. Computing the value of reputation.
  Examples: Chain store game. Centipede game. Reputation for promise-keeping.
Herds and fashions: how to (not) follow the others.
  Concepts: Informational cascades. Size of herd necessary for a cascade.
  Skills: Finding the probability of a bad cascade.
  Extra readings: TBA.

Past exams

Problem set: Knowledge

  1. Suppose that the true state of the world is a temperature ω ∈ Z (expressed in integer degrees of Celsius). I have a phone app that tells me whether the temperature is strictly larger than 0 or weakly smaller than 0. I have no other information. Thus, my information structure is given by partition {{...,  − 2,  − 1, 0}, {1, 2, 3, ...}}, or by the types
    TM(ω) = {...,  − 2,  − 1, 0} if ω ≤ 0, and TM(ω) = {1, 2, 3, ...} if ω > 0.
    (Here, M stands for Marcin)
    1. Show that KM{5} = ∅. In other words, in no state of the world do I know that the temperature is 5.
    2. Suppose that E = {ω:ω ≥ 3}. E is the event that corresponds to “The temperature is larger than or equal to 3.” Show that in no state of the world do I know that event E is true.
    3. Suppose that F = {ω:ω < 5}, i.e., F means “the temperature is below 5.” Explain that
      KMF = {...,  − 2,  − 1, 0}.
      In other words, I know that the temperature is below 5 whenever the true temperature is 0 or below.
  2. Convince yourself that for each information structure,
    KiT(ω) = T(ω).
    In other words, each agent knows their type. Also, convince yourself that
    KiΩ = Ω.
    In other words, each player always knows that the world exists.
  3. We can compare different information structures in many different ways. For example, there is a natural notion of “better information”. Suppose that there are two people, Alice and Bob. Recall that an information structure is a partition of the state space. We say that Bob has better information than Alice if Bob’s partition is finer than Alice’s. The latter notion should be intuitive: any element of Bob’s partition is contained in some element of Alice’s partition: for each ω ∈ Ω, TB(ω) ⊆ TA(ω).
    1. Explain that for any event E,
      KAE ⊆ KBE.
      In other words, if Alice knows E, then Bob knows E as well.
    2. The converse does not necessarily hold. In other words, give an example of information structures of Alice and Bob, where Bob has better information, and an event E such that KBE is not contained in KAE (that is, at some state Bob knows E but Alice does not).
  4. There are four roads leading to the capital city from the four cardinal directions: West, South, East, and North. Four invading armies approach the city, one along each of the roads. The commander of each army sees the state of the gate in front of her as well as the gates immediately to her left and right. Each gate can be either weakly or strongly defended.
    1. Describe the state space. Describe the information structure of general West.
    2. After arriving in front of the city, each commander raises a red flag if at least two of the gates she observes are strongly defended. Otherwise, she raises a green flag. The color of the flag is observed by the neighboring armies.
      Suppose that the general West observes general North raising the red flag and general South raising the green flag. Describe how this additional information changes the state space and the information structure of the commander West.
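The knowledge operators in problems 1 and 2 are mechanical enough to check by computer. Below is a minimal Python sketch (the names OMEGA, T_M, and K are illustrative, not part of the course materials) that truncates the state space to temperatures between −10 and 10 and verifies parts 1 and 3 of problem 1:

```python
# A finite truncation of the temperature example: states are integer
# temperatures from -10 to 10 (a stand-in for the full state space Z).
OMEGA = set(range(-10, 11))

def T_M(w):
    """Marcin's type: the phone app only reveals whether the temperature is > 0."""
    return {x for x in OMEGA if x <= 0} if w <= 0 else {x for x in OMEGA if x > 0}

def K(T, E):
    """Knowledge operator: the set of states where the type is contained in E."""
    return {w for w in OMEGA if T(w) <= E}  # <= is the subset test for Python sets

# Part 1: in no state does Marcin know that the temperature is exactly 5.
print(K(T_M, {5}) == set())                       # True
# Part 3: Marcin knows "the temperature is below 5" exactly when it is 0 or below.
F = {w for w in OMEGA if w < 5}
print(K(T_M, F) == {w for w in OMEGA if w <= 0})  # True
```

The same operator also illustrates problem 2: K(T_M, OMEGA) returns all of OMEGA, i.e., I always know that the world exists.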
Solutions to Problem set: Knowledge

Problem set: Beliefs

  1. (O.J. Simpson trial) The O.J. Simpson murder case was a famous criminal trial in which former football star O.J. Simpson was tried on two counts of murder for the June 12, 1994, deaths of his ex-wife Nicole Brown Simpson and Mezzaluna restaurant waiter Ronald Goldman. During the trial, the prosecution brought up the fact that O.J. had been investigated by police for domestic violence multiple times and pleaded no contest to spousal abuse in 1989. The defense argued that past domestic abuse is irrelevant because only 0.1% of the men who physically abuse their wives actually end up murdering them.
    It turns out that the defense argument is not valid, but to explain why, we need to make computations based on the Bayes formula. Let G be the event that O.J. Simpson was guilty of the murder, let A be the event that O.J. Simpson abused his wife, and let M be the event that his wife was murdered. We assume that
    (a) P(M|G) = P(M|G, A) = 1,  (b) P(M|Gc, A) = P(M|Gc) = 1/30000,  (c) P(G|A) = 1/1000,  (d) P(Gc|A) = 999/1000.
    Here, (a) is just logic. The second equality in (b) comes from the following data: in 1994, around 5000 women were murdered, 1500 by their husbands; given a population of 100 mln wives, the probability that a wife is killed given that her husband is innocent is (5000 − 1500)/(100 mln) ≈ 1/30000. The first equality in (b) comes from the fact that the information that the husband was abusive should not affect the murder rate if the husband was not the one who did it. Conditions (c) and (d) come from the defense argument: the defense claimed that A is irrelevant because P(G|A) is very small. However, that is not the relevant number, because in addition to the fact that O.J. abused his wife, we also know that she was murdered. Hence, the relevant number is
    P(G|A, M).
    Let us calculate.
    1. Assume that the probability that a wife is abused by her husband is μ (there are statistics about it, but we do not need to know μ to make our argument). Use the Bayes formula and the above data to compute
      P(G, A).
      (The answer should depend on μ.)
    2. Use the Bayes formula and the answer to the previous part to compute
      P(M, G, A).
      (Once more, the answer should depend on μ.)
    3. Use the Bayes formula and the above data to compute
      P(A, M) = P(M, G, A) + P(M, Gc, A).
      (The answer should depend on μ.)
    4. Finally, use the above answers to compute
      P(G|A, M).
      Does the answer depend on μ? Does the answer support or refute the defense argument?
  2. (Investigation) Consider the following version of the Investigation. Trump is guilty for sure, but there is uncertainty about whether Mueller has evidence and whether Trump knows about it. There are three states of the world:
    • n: no evidence,
    • eu: there is evidence, but it is unknown to Trump,
    • ek: there is evidence and it is known to Trump.
      The information structure is
      TMueller  = {{n}, {eu, ek}},  TTrump  = {{n, eu}, {ek}}.
    • Describe the information structure on a well-labeled picture.
    1. Suppose that both players have a prior belief that there is evidence with probability p and, if there is evidence, Trump will know about it with probability q. The following table describes the beliefs of each type
      Trump n eu ek
      U (1 − p)/(1 − p + (1 − q)p) ((1 − q)p)/(1 − p + (1 − q)p) 0
      K 0 0 1

      Mueller n eu ek
      N 1 0 0
      E 0 1 − q q
      Use the Bayes formula to explain Trump’s beliefs.
    2. Mueller needs to decide whether to rush and finish the investigation or to continue. Continuing means that he may get extra evidence, but he also risks that Trump closes off the investigation before it is finished. Trump needs to decide whether to wait or to start the procedure that leads to Mueller’s dismissal. The payoffs depend on whether Mueller has the evidence and are given in the tables below:
      ω = n:Mueller\Trump Wait Fire Mueller
      Finish 1, 1 1, 0
      Continue 0, 2 0,  − 5
      ω = eu, ek:Mueller\Trump Wait Fire Mueller
      Finish 2,  − 5 0, 0
      Continue 3,  − 10 1,  − 5

      Explain that Mueller has a strictly dominant action for each of his types.
    3. Given Mueller’s behavior, what is Trump’s best response? How does your answer depend on p and q?
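The chain of computations in the O.J. Simpson problem above can be verified numerically. Here is a Python sketch using exact fractions; posterior_guilt is an illustrative name, and the numbers come from assumptions (a)-(d) of that problem:

```python
from fractions import Fraction as F

def posterior_guilt(mu):
    """P(G | A, M) computed from assumptions (a)-(d); mu = P(A), the abuse rate."""
    p_G_A = F(1, 1000) * mu          # P(G, A)  = P(G|A) P(A)
    p_Gc_A = F(999, 1000) * mu       # P(Gc, A) = P(Gc|A) P(A)
    p_M_G_A = 1 * p_G_A              # P(M, G, A)  = P(M|G, A) P(G, A)
    p_M_Gc_A = F(1, 30000) * p_Gc_A  # P(M, Gc, A) = P(M|Gc, A) P(Gc, A)
    p_A_M = p_M_G_A + p_M_Gc_A       # P(A, M)
    return p_M_G_A / p_A_M           # P(G | A, M)

# The posterior is the same for every abuse rate mu:
print(posterior_guilt(F(1, 10)))     # 10000/10333, roughly 0.97
print(posterior_guilt(F(1, 100)) == posterior_guilt(F(1, 10)))  # True
```

The abuse rate μ cancels, and the posterior probability of guilt comes out close to 97%, which is the point of part 4.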
Solutions to Problem set: Beliefs

Problem set: Adverse selection

  1. (Screening) The monopolist sells goods (q, p), where q is the quality and p is the price. The profit from selling one unit of such a good is p − c(q), where c(q) = (1/2)q² is the cost of providing quality q. The monopolist wants to maximize profits. The consumer’s utility from having a good is equal to
    θ(1 + q) − p.
    Here, θ ≥ 0 is the taste for the quality. The consumer buys the good if the utility from ownership is positive.
    1. Find the optimal choice of p and q in the complete information case, i.e., when the monopolist knows θ. Be careful to state the individual rationality condition.
    2. Suppose that there are two types of consumers: θh with probability π and θl < θh with probability 1 − π. The monopolist wants to design the optimal menu of contracts. Describe the monopolist’s problem. Be careful to state the individual rationality and incentive compatibility conditions.
    3. Show that for any incentive compatible menu (i.e., a menu that satisfies the two IC constraints), qh > ql.
    4. Show that the IRh constraint is implied by the other constraints.
    5. Show that the ICh constraint is binding (i.e., it is satisfied with equality).
    6. Show that ICl constraint is implied by the other constraints and the previous observations.
    7. Show that the IRl constraint is binding.
    8. Use the above discussion to describe the simplified problem of the monopolist.
    9. Find the optimal menu.
  2. (Health insurance) Suppose that you live in one of the countries where you need to buy health insurance on a private market. Suppose that the insurance company charges a fee p and pays out the medical costs C = 1000 in case of need. An insured individual receives extra utility equal to Δ = 50 from having insurance. An individual with risk π = 10% buys the insurance if
     − p + πC + Δ > 0, 
    where Δ is the extra value of peace of mind for an insured individual.
    1. Derive a condition that guarantees that the insurance company makes non-negative profits. Show that there exists an interval of prices p such that individuals are happy to buy insurance and insurance companies are happy to sell.
    2. Suppose that there are two types of individuals: high risk, with probability of damage πh > π, and low risk, with probability of damage πl < π. We assume that the fraction ρ of high risk individuals is such that
      π = ρπh + (1 − ρ)πl.
      For what values of πl is it possible to have a market with insurance companies making non-negative profits and both types of individuals buying insurance?
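Part 1 of the health insurance problem reduces to an interval of admissible premiums. A quick Python check with the problem’s numbers (C = 1000, π = 10%, Δ = 50); the function names are illustrative, and exact fractions avoid floating-point noise at the boundary:

```python
from fractions import Fraction

C = 1000              # medical costs paid out in case of need
delta = 50            # extra utility ("peace of mind") from being insured
pi = Fraction(1, 10)  # probability of needing care

def buyer_accepts(p):
    # The individual buys the insurance if -p + pi*C + delta > 0.
    return -p + pi * C + delta > 0

def insurer_profit(p):
    # Expected profit per contract: premium minus expected payout.
    return p - pi * C

# Premiums at which both sides are willing to trade:
viable = [p for p in range(0, 201) if buyer_accepts(p) and insurer_profit(p) >= 0]
print(min(viable), max(viable))  # 100 149
```

With these numbers, trade is possible at any premium p with πC ≤ p < πC + Δ, i.e., between 100 and 150.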
Solutions to Problem set: Adverse Selection

Problem set: Communication

  1. (Cheap talk between the CEO and the Investor). The CEO observes the quality of the project ω ∈ {0, 1} and uses messages m ∈ {0, 1} to communicate the information to the Investor. The Investor chooses how much to invest, a ∈ [0, 1], into the project. The payoffs of the CEO and the Investor are
    UCEO(a, ω)  =  − (a − (ω + t))2,  UInvestor(a, ω)  =  − (a − ω)2.
    1. Suppose that the Investor’s belief is given by p(0), p(1) ≥ 0 such that p(0) + p(1) = 1. Find the best response action.
    2. For what values of t is the truthful communication strategy an equilibrium strategy?
  2. Ann discovers that one of her two employees, Bob or Celine, is leaking trade secrets to the competition. She estimates the probability that Bob is guilty at π ∈ (0, 1). She knows that another employee, Damian, knows who is leaking. She wants to know whether she can trust Damian’s report. The payoffs are in the table:
    Ann\Damian ω = Bob ω = Celine
    Fire Bob 1, t − vB 0,  − vB
    Fire Celine 0,  − vC 1, t − vC

    Here, vB, vC are measures of how much Damian likes, respectively, Bob and Celine, and t > 0 is the intrinsic value of truthtelling.
    1. What is Ann’s best response without any communication with Damian?
    2. Show that if |vB − vC| is small enough, then truthful communication is possible in equilibrium (wPBE). Is it the unique equilibrium?
    3. Find an equilibrium (wPBE) if vB − vC > t. Be careful about off-path beliefs.
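For the CEO-Investor game in problem 1, once the Investor best-responds to a degenerate belief on ω = m with a = m, truth-telling is an equilibrium exactly when neither CEO type gains by sending the other message. A small Python sketch of that check (function names are illustrative):

```python
def ceo_payoff(a, w, t):
    """CEO's payoff -(a - (w + t))^2 when the Investor invests a and quality is w."""
    return -(a - (w + t)) ** 2

def truthful_is_equilibrium(t):
    """Under truthful reporting, the Investor's best response to message m is
    a = m (beliefs put probability 1 on w = m). Truth-telling is an equilibrium
    iff no CEO type gains by sending the other message."""
    for w in (0, 1):
        truthful = ceo_payoff(w, w, t)        # report m = w, Investor plays a = w
        deviation = ceo_payoff(1 - w, w, t)   # report m = 1 - w instead
        if deviation > truthful:
            return False
    return True

print(truthful_is_equilibrium(0.3))  # True
print(truthful_is_equilibrium(0.7))  # False
```

The printed checks suggest a cutoff around t = 1/2, which is the kind of bound part 2 asks you to derive.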
Solutions to Problem set: Communication

Problem set: Extensive

  1. (Rothschild’s fortune) Before the battle of Waterloo, Nathan Rothschild sent messengers with pigeons to Belgium. Thanks to them, he was the first man in London to know the outcome of the battle with Napoleon. Armed with his information, he could decide whether to buy or to sell British war consols (government bonds). The rest of the Market knew about his private information. They could also observe his decision and react accordingly. The actions of Mr. Rothschild (who moves first) are in the rows; the actions of the rest of the Market are in the columns:
    Wellington wins BuyM SellM
    BuyR 1,1 3,-3
    SellR -3,3 -1,-1
    Napoleon wins BuyM SellM
    BuyR -1,-1 -3,3
    SellR 3,-3 1,1
    1. Is there an equilibrium in which Mr. Rothschild follows his knowledge (i.e., buys British consols in case of victory and sells them if Wellington loses)?
    2. Is there an equilibrium, where Mr. Rothschild always buys? Be careful about the off-path beliefs.
  2. (Entry) Consider the entry game from the class, but with (1/9)(a − c)² < fl < (1/4)(a − c)² < fh and such that Ef < (1/4)(a − c)².
    1. Can the Incumbent’s strategy “Always EnterI” ever be played in equilibrium?
    2. Check whether “Always OutI” and “Always EnterC” is a (wPBE) equilibrium. Carefully explain the off-path beliefs.
    3. Is there a separating equilibrium in this game?
Solutions to Problem set: Extensive games

Problem set: Signaling

  1. (Spence’s education) Consider Spence’s model of education with the cost function c(e, θ) = e(1 − θ), where the ability type θ ∈ {θh, θl} and 0 < θl < θh < 1.
    1. Check that the marginal cost of education is decreasing with the ability.
    2. Describe the set of all pooling equilibria.
    3. Describe all separating equilibria.
  2. (Gazelle stotting) A gazelle notices a tiger creeping in the bush. The tiger wonders whether the gazelle is fast (type f, with probability π) or slow (type s). The gazelle decides whether to start stotting or not. The tiger observes the gazelle’s behavior and chooses whether to chase the gazelle or not. The payoffs of the tiger depend on the type of the gazelle and they are equal to
    Tiger’s payoffs f s
    chase  − 1 2
    no chase 0 0

    (It is easier to catch the slow gazelle. The fast gazelle has a sufficiently high chance of running away, which ends with a waste of energy for the tiger.) The gazelle pays cost a > 0 if it is chased. Additionally, if the gazelle stots, it pays tθ (in the energy expenditure). We assume that tf < ts < a, or that the fast gazelle pays less in the cost of stotting.
    1. Does the game have pooling equilibria? Is there a pooling equilibrium, where both types of the gazelle are stotting?
    2. Does the game have fully separating equilibria?
    3. Suppose that π( − 1) + (1 − π)2 > 0. Does the game have a partially separating equilibrium? Carefully describe the strategies and beliefs.
  3. (Lark’s singing). A skylark notices a falcon hovering in the air. The falcon wonders whether the skylark is healthy (type h, with probability (2)/(3)) or sick (type s). The skylark decides whether to run away immediately or sing first. The falcon observes the skylark’s behavior and chooses whether to attack or not. If the falcon does not attack, it receives payoff 0. If it attacks, it receives payoff 3 if the skylark is sick and payoff -1 if the skylark is healthy. The skylark’s payoffs are described in the table.
    Payoffs not attacked attacked, type h attacked, type s
    singing 1  − dh  − ds
    running away 1 0 0

    where dh < ds is the decrease in the survival chance caused by not running away immediately.
    1. Does the game have pooling equilibria? Carefully describe all off-path beliefs.
    2. Does the game have fully separating equilibria?
    3. Show that there exists an equilibrium, in which the healthy skylark always sings and the falcon always attacks if the skylark runs away immediately. Carefully describe the strategies and beliefs.
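For Spence’s model in problem 1, separating equilibria are indexed by the high type’s education level e*. Assuming competitive wages equal to ability and the low type choosing e = 0 (the standard construction), the two incentive constraints pin down an interval for e*. A Python sketch with hypothetical abilities θl = 0.2 and θh = 0.6 (the numbers are illustrative, not from the problem):

```python
def separating_range(th_l, th_h):
    """Range of high-type education levels e* supporting a separating
    equilibrium, with competitive wages equal to ability and the low type
    choosing e = 0. Cost of education: c(e, theta) = e(1 - theta)."""
    # Low type must not want to mimic:  th_l >= th_h - e*(1 - th_l)
    e_min = (th_h - th_l) / (1 - th_l)
    # High type must prefer e* to 0:    th_h - e*(1 - th_h) >= th_l
    e_max = (th_h - th_l) / (1 - th_h)
    return e_min, e_max

# Hypothetical abilities th_l = 0.2, th_h = 0.6:
e_min, e_max = separating_range(0.2, 0.6)
print(e_min, e_max)  # approximately 0.5 and 1.0
```

Because 1 − θl > 1 − θh, the interval is always non-empty, which is why separating equilibria exist for every pair of abilities in (0, 1).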
Solutions to Problem set: Signaling

Problem set: Moral hazard

  1. (Multi-tasking) Consider a version of the multi-tasking model with a cost function c(e1, e2) = (1/2)e1² + (1/2)e2². A teacher chooses two types of effort e1, e2 ≥ 0. The teacher’s and the school district’s payoffs given effort choices and wages are equal to
    πteacher(e1, e2;w)  = w + a(e1 + e2) − c(e1, e2),  πdistrict(e1, e2, w)  =  − w + e1 + βe2.
    The school district pays teachers with linear contracts w(e1, e2) = w0 + αe1, where α ≥ 0.
    1. Solve the teacher’s problem.
    2. Explain that the IR constraints are binding.
    3. Solve the principal’s problem.
  2. (How to pay a worker) An employer sells widgets at price p = 6. The widgets are produced by a worker. The number of widgets produced, f(e) = 3e, is a function of the worker’s effort e ≥ 0. The cost of effort is c(e) = (1/6)e³. The worker’s outside option is U0 = 0.
    1. Find the first best (i.e., socially optimal) number of widgets produced.
    2. Show that there exists a contract for which the first-best level of effort is incentive compatible.
    3. State the principal’s problem.
    4. Explain that the contract you found in part (b) is a solution to the principal’s problem.
  3. (Partially observable effort) A security company offers to protect the data of its clients. The probability of a data breach, q(e) = 1 − e, depends on the level of effort e > 0 chosen by the company. The cost of effort is c(e) = ce², where c > 0 is a constant.
    An online retailer offers to hire the company with the following contract: If there is no data breach, the company will receive payment π0, and if there is a data breach, the company will receive π1. The cost of data breach for the online retailer is d > 0.
    1. Given π0, π1, find the optimal choice of effort by the security company.
    2. Assuming that the outside option for the security company is equal to U0, describe the retailer’s problem of finding the optimal contract.
    3. Solve the retailer’s problem. (Hint: Use (a) to eliminate the IC constraint and explain that the IR constraint is binding).
    4. Find the first-best (i.e., socially optimal) level of effort. Is the effort chosen under the optimal contract the same as the first-best?
    5. Explain that the optimal contract can be interpreted as selling the firm.
  4. (Multitasking) Consider a version of the multi-tasking model. A teacher chooses two types of effort e1, e2 ≥ 0. The teacher’s and the school district payoffs given effort choices and wages are equal to
    πteacher(e1, e2;w)  = w(e1, e2;α) − c(e1, e2),  πdistrict(e1, e2, w)  =  − w(e1, e2;α) + e2.
    Notice that the school district cares only about the second type of effort e2. The cost function is equal to
    c(e1, e2)  = (e1 + e2)² + 2(e1 − e2)².
    In particular, the teacher does not like effort, but also does not like to vary effort across activities. The school district observes the first type of effort e1 and pays the teachers with linear contracts w(e1, e2;α) = w0 + αe1, where α ≥ 0.
    1. Solve the teacher’s problem.
    2. Compute the worker’s utility from the contract. Explain that the IR constraints are binding.
    3. State the principal’s problem. Use the above observation to reduce the principal’s problem to the unconstrained version (you do not need to solve it). Will the district choose a flat-wage contract?
  5. (Moral hazard with partially observable effort) An IT security professional (agent) is hired to work for an online retailer (principal). The employer offers a contract: If there is no data breach, the agent receives wage w0, and if there is a data breach, the agent receives w1. The employer chooses the contract to maximize profits. The contract must satisfy the minimum wage constraint: namely, both wages must be at least the minimum wage, w1, w0 ≥ wmin, where wmin ≥ 0 is a constant. The probability of a data breach, q(e) = 1 − e, depends on the level of effort e ≥ 0 chosen by the agent. The cost of effort is c(e) = (1/2)e². The cost of a data breach for the employer is d > 0.
    (As you recall, the standard principal-agent’s problem assumes an IR constraint that ensures that the agent receives at least utility equal to her outside option. Here, we consider the alternative: minimum wage constraint. The goal of this question is to check whether the properties of the standard solution are preserved under a different type of constraint.)
    1. Given w0, w1, find the optimal choice of effort by the agent.
    2. State the principal’s problem. (Remember to use the minimum wage constraint instead of the IR constraint.)
    3. Explain that under the optimal contract, w0 ≥ w1.

    4. Solve the employer’s problem. (Hint: Using the fact that under the optimal contract w0 ≥ w1, explain that the minimum wage constraint is binding.) What is the effort level chosen under the principal-optimal contract?
    5. Find the first-best (i.e., socially optimal) level of effort. Is the effort chosen under the optimal contract the same as the first-best? Can the optimal contract be interpreted as selling the firm? Why or why not?
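As a numerical sanity check for problem 2 (How to pay a worker), the first-best effort equates the marginal revenue and marginal cost of effort: p·f′(e) = c′(e), i.e., 18 = e²/2, so e* = 6 and 18 widgets. A Python grid search confirming this (an illustrative sketch, not the course’s solution method):

```python
p = 6                      # widget price
f = lambda e: 3 * e        # widgets produced from effort e
c = lambda e: e ** 3 / 6   # the worker's cost of effort

def surplus(e):
    """Social surplus: revenue from the widgets minus the effort cost."""
    return p * f(e) - c(e)

# First-best effort solves p f'(e) = c'(e), i.e., 18 = e^2 / 2, so e* = 6.
grid = [i / 100 for i in range(0, 1001)]
e_star = max(grid, key=surplus)
print(e_star, f(e_star), surplus(e_star))  # 6.0 18.0 72.0
```

The grid search and the first-order condition agree: effort 6, output 18, and a social surplus of 72.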
Solutions to Problem set: Moral Hazard

Problem set: Reputation

  1. (Repeated Cournot duopoly). There are two firms i = 1, 2. Both firms choose quantities qi ≥ 0 and receive profits:
    πi(qi, q−i) = qi(a − c − (q1 + q2)),
    where a > 0 is the largest possible price on the market, c > 0 is the constant marginal cost of production, and we assume that a > c.
    1. Show that the best response function is
      bi(q−i) = max{0, (1/2)(a − c − q−i)}.
      Show that the game has a unique equilibrium in which both players choose qC = (1/3)(a − c) and that each player’s payoff is equal to πC = (1/9)(a − c)².
    2. Suppose now that player 1 sequentially (i.e., one after another) faces T different players 2. (You can think of a single large firm operating in different markets and competing with a sequence of smaller firms that each operate in a single market.) What are the equilibrium strategies and payoffs?
    3. From now on, suppose that there is a probability ε that player 1 is “crazy” and always chooses action q* ∈ (qC, qS), where qS = (1/2)(a − c) is the Stackelberg action. With the remaining probability, 1 − ε, player 1 is normal, which means that it has the payoffs described above. First, show that if ε = 1 (i.e., player 2 is sure that player 1 is crazy), then player 2’s unique SPE action is b(q*). Show that player 1’s payoff
      π*: = π(q*, b(q*))
      is larger than the payoff in the Cournot equilibrium πC.
    4. We consider pure-strategy equilibria of the repeated game with incomplete information about the type of player 1. Let p(h) be the probability that player 1 is “crazy”. Let q1(h) be the equilibrium strategy of the normal type. Explain that the equilibrium action of player 2 after history h is
      q2(h) = b(p(h)q* + (1 − p(h))q1(h)).
    5. Suppose that h is a history at the beginning of period t such that p(h) > 0. Show that, if q1(h) ≠ q*, then it must be that
      π(q*, q2(h)) + (T − t)π* ≤ π(q1(h), q2(h)) + (T − t)πC.
    6. Conclude that if T is large enough, then in all but a few of the last periods, player 1 plays q* in each equilibrium.
  2. Consider a (more general) version of the centipede game, where the player who stops in period t receives payoff t + a and the other player receives t − b. (In class, we assumed that a = 2 and b = 1). We assume that a > 1 − b so that each player has an incentive to stop now, rather than wait for the opponent to stop in the next period (notice that the assumption implies that t + a > (t + 1) − b). Assume that ε is the probability of the “crazy” type who always continues, and 1 − ε is the probability that the player is normal.
    1. Suppose that the t-period player is randomizing between stopping and continuing. Show that the probability that the next-period player stops is equal to qt+1 = 2/(1 + a + b).
    2. Suppose that the t-period player is randomizing between stopping and continuing. Show that if pt+1 is the (beginning of the period) probability that the (t + 1)-period player is crazy given that the game did not stop before period t + 1, then
      pt+3 = P((t + 1)-period player is crazy | the player continued in period t + 1) = A·pt+1,
      where A = (1 + a + b)/(a + b − 1) is a constant.
    3. Explain that in the last period of the game, Tmax, if the previous-period player is randomizing, then it must be that pTmax = 1/A.
    4. Put all of the above together to explain the structure of the equilibrium. For how long do players continue with certainty? (For simplicity, you can assume that ε = A^(−k) for some natural number k.)
  3. A consultant faces a sequence of T < ∞ clients. The clients approach the consultant one after another. Each client decides whether to hire the consultant or not; if the consultant is hired, she decides whether to put in effort. Each subsequent client observes the decisions made in the previous periods. The payoffs are
    Consultant, Client Hire No hire
    Effort a, 1 0,0
    No effort b,  − 1 0,0

    where we assume 0 < a < b. Each subsequent client observes the outcome of the previous game (in particular, whether the consultant was hired and whether she put in effort) before he makes his decision.
    1. Find the subgame perfect equilibrium of the T-times repeated game. Is the equilibrium unique?
    2. From now on, suppose that, with probability ε > 0, the consultant is an “honest” type, who always puts in effort. With probability 1 − ε, the consultant is a “normal” type who has the payoffs described above. Show that if T = 1 and ε is very small, then in the unique SPE, the client does not hire the consultant. Find a threshold ε* such that if ε ≥ ε*, the consultant is hired in the one-shot game.
    3. Suppose now that T > 1. Show that if a(T − 1) > b and ε > 0, then, in each pure strategy equilibrium, in the first period the client hires the consultant and both types of the consultant put in effort.
Solutions to Problem set: Reputation

Problem set: Social learning

  1. Consider a more general version of the social learning problem from the class. There are two states of the world ω ∈ {0, 1}, with a prior probability of state 1 equal to π. A sequence of agents observes signals s ∈ {0, 1}, with the probability of signal s given state ω equal to qsω. (We have q0ω + q1ω = 1 for each ω.) For future use, denote by fs = qs0/qs1 the likelihood ratio of signal s, and assume that f1 < f0.
    One after another, the agents choose actions a ∈ {0, 1} and receive payoff uω > 0 if the action matches the state, a = ω, and 0 otherwise. Each agent observes the actions (but not the payoffs) of the previous agents.
    (To focus attention, think about the agents as farmers who receive private information about the profitability of using fertilizer. The action is the choice of whether to use the fertilizer.)
    1. Let p(h) denote the probability that ω = 1 after observing history h (which includes the actions of the previous agents as well as the agent’s own signal s). Show that the agent chooses action 1 if p(h) > p* = u0/(u0 + u1), action 0 if p(h) < p*, and is indifferent between the two actions if p(h) = p*.
    2. Suppose that the first n agents act sincerely (i.e., their action was equal to their signal, al = sl, for each l ≤ n). Consider the (n+1)th agent with signal sn+1. Use the Bayes formula to show that his updated beliefs are equal to
      p(s1, ..., sn+1) = 1/(1 + ((1 − π)/π)·∏l=1,...,n+1 fsl).
    3. Explain that the (n + 1)th agent will act sincerely if
      (∏l=1,...,n fsl)·f1 < (π/(1 − π))·(1/p* − 1) < (∏l=1,...,n fsl)·f0.
  2. Consider a small generalization of the social learning model from the lecture. There are two states of the world ω ∈ {0, 1}, with a prior probability of state 1 equal to π = 1/2. A sequence of agents observes signals s ∈ {0, 1}, with probability q > 1/2 that signal s is equal to the state ω (and remaining probability 1 − q that s is equal to 1 − ω). The agents, one after another, choose actions a ∈ {0, 1}. Each agent observes the actions (but not the payoffs) of the previous agents. Each agent receives payoff uω if her action is equal to the state and 0 otherwise. We assume that u1 > u0 = 1, i.e., the payoff of choosing the correct action is greater in state ω = 1 than in state ω = 0.
    1. Suppose that p is some agent’s belief that the state is equal to ω = 1. Explain that there is a threshold p* such that the agent chooses a = 1 if p > p* and a = 0 if p < p*. Show that p* < 1/2.
    2. Derive a condition on q that ensures that the first agent acts sincerely (i.e., chooses his action equal to his signal).
    3. Does the second agent act sincerely? If so, explain. If not, what is the optimal decision of the second agent?
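The cascade questions above can also be explored by simulation. The Python sketch below uses the symmetric special case (π = 1/2 and u0 = u1, with indifferent agents following their own signal), which is an extra assumption relative to problem 2; it estimates how often society herds on the wrong action:

```python
import random

def simulate(q, n_agents, state, rng):
    """Symmetric cascade model (pi = 1/2, u0 = u1): agents act sincerely until
    the inferred signals differ by 2, after which everyone herds. An indifferent
    agent is assumed to follow her own signal."""
    lead = 0  # (# inferred 1-signals) - (# inferred 0-signals)
    for _ in range(n_agents):
        if abs(lead) >= 2:            # a cascade has started: signals are ignored
            return 1 if lead > 0 else 0
        s = state if rng.random() < q else 1 - state  # private signal
        lead += 1 if s == 1 else -1   # a sincere action reveals the signal
    return 1 if lead > 0 else 0

# Estimate the probability of a bad cascade (herding on 0 when the state is 1).
rng = random.Random(0)
q = 0.7
bad = sum(simulate(q, 100, 1, rng) == 0 for _ in range(10_000)) / 10_000
print(bad)  # well below 1/2, but strictly positive
```

With q = 0.7, a cascade forms almost surely within a few agents, and the estimated probability of a bad cascade is strictly positive, illustrating how herding can lock in the wrong action.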
Solutions to Problem set: Social learning