Approximate Aggregation Revisited: Higher Moments Do Matter

Andrea Giusto∗

November 2014

Abstract

This paper addresses the "approximate aggregation" result of Krusell and Smith (1998), who show that in a heterogeneous-agent model it is possible to obtain near-perfect forecasts while disregarding distributional information. While this fact is generally interpreted causally, the forecasting model is misspecified and thus unfit for inference. Approximate aggregation does not hold in the baseline economy of Krusell and Smith (1998) when inferences are drawn from an econometric model showing no evidence of misspecification: the higher moments of the wealth distribution are important for the aggregate dynamics.

Keywords: Approximate Aggregation, Incomplete Markets, Household Heterogeneity, Forecasting and Inference.
JEL Codes: E10, E25, C52.

∗ Dalhousie University, Department of Economics. Email: [email protected]. My gratitude to G. W. Evans.

1 Introduction

In heterogeneous-agents models, rational decision making requires that agents account for the savings of all the other agents in order to estimate future capital. Under rational expectations, such estimation is exact and is performed by integrating an individual's policy function. Krusell and Smith (1998) circumvent our inability to compute such integrals by assuming that the agents estimate future levels of aggregate capital through a simple log-linear function. One interpretation of this strategy views the log-linear forecasts simply as an approximation to the actual law of motion of aggregate capital, the goodness of which is evaluated by the precision of the forecasts. The equilibrium reached by the economy is then conceptualized as a restricted-perceptions equilibrium (RPE); see Sargent (1999) and Evans and Honkapohja (2001). This interpretation is useful for replicating several aspects of economic data; see, for example, Evans and McGough (2005), Milani (2007), Evans and Honkapohja (2009), Branch and Evans (2011), Eusepi and Preston (2011), Evans and Honkapohja (2013), and Giusto (2014). Under this interpretation, this paper revisits the validity of the approximate aggregation result of Krusell and Smith (1998) for a model calibrated on the US economy.

The Krusell and Smith procedure implies that it is possible to obtain very precise forecasts using exclusively aggregate data. Approximate aggregation argues that if agents can forecast almost perfectly without using any distributional data, then the model reduces to the representative-agent case. I show that approximate aggregation is rooted in weak inference: the agents' forecasting model has autocorrelated residuals, and the result does not hold once more robust inferences are drawn by approximating near-unit-root processes as unit roots; see Johansen (2006) and Juselius (2006). This is natural here: the forecasting process estimated by the agents in Krusell and Smith (1998) has an autoregressive coefficient of approximately 0.96, and identifying a unit root in such a process would be very difficult; see Evans (1991).
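The difficulty of distinguishing a coefficient of 0.96 from a unit root can be seen in a short simulation. The sketch below is purely illustrative and is not taken from the paper: the sample length, the unit-variance innovations, and the zero-mean specification are all assumptions made for the example.

```python
# Illustrative sketch: an AR(1) with coefficient 0.96 looks almost like a
# unit-root process in samples of moderate length.
import random

random.seed(0)

RHO = 0.96   # autoregressive coefficient, as in the agents' forecasting rule
T = 500      # hypothetical sample length, chosen for illustration
y = [0.0]
for _ in range(T):
    y.append(RHO * y[-1] + random.gauss(0.0, 1.0))

# OLS slope of y_{t+1} on y_t (no intercept, since the process is mean zero)
num = sum(a * b for a, b in zip(y[:-1], y[1:]))
den = sum(a * a for a in y[:-1])
rho_hat = num / den
print(round(rho_hat, 3))   # close to one: 0.96 is hard to tell from a unit root
```

With persistence this high, the sampling uncertainty around the OLS estimate overlaps the unit-root value, which is why treating the process as integrated, as done below, is a natural modeling choice.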


Approximate aggregation is based on high forecasting precision, for which one explanation is the near-linearity of the policy function. Linearity, however, cannot also account for the high accuracy of naive forecasts. A measure-zero set of agents adopting static expectations makes one-step-ahead errors for wages and interest rates that are on average only 0.0072% and 0.0128% higher than those of the rest of the economy. Such a decrease in precision is economically trivial: for a worker grossing $120,000 a year, the additional error implied by static expectations is worth 72¢ per month; for an investor worth $1,000,000 earning 4% a year on their assets, the additional error is worth 43¢ per month.¹ This happens because capital is very stable at the equilibrium (top panel of Figure 1). The bottom panel of Figure 1 plots the very same data on a different scale: the variation in the capital series is small compared to its average. This peculiarity of the data implies that, although the series is easy to forecast, appropriate differencing is needed for a valid structural analysis.

The approximate aggregation result is widely cited in the literature, but many studies question its generality. Carroll (2000) argues that it conflicts with the statistical evidence on US households' marginal propensities to consume. Acemoglu and Shimer (2000) find that it does not hold for low-income agents in partial equilibrium. Heathcote (2005) finds that changes to fiscal policy produce noticeably different responses in representative-agent and heterogeneous-agents economies. Townsend and Ueda (2006) show that approximate aggregation does not obtain in models with costly access to financial markets. Branch and McGough (2009) consider the case in which agents are heterogeneous in their expectations, and argue that expectational heterogeneity requires the introduction of risk-sharing mechanisms to avoid aggregation issues.
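The dollar figures above follow from simple arithmetic on the quoted error percentages; a back-of-the-envelope check for the benchmark worker and investor:

```python
# Back-of-the-envelope check of the benchmark monthly cost figures.
wage_error = 0.0072 / 100            # extra one-step wage forecast error
rate_error = 0.0128 / 100            # extra one-step interest-rate forecast error

worker_income = 120_000              # benchmark worker: $120,000 gross per year
investor_income = 1_000_000 * 0.04   # benchmark investor: 4% on $1,000,000

worker_monthly = worker_income * wage_error / 12
investor_monthly = investor_income * rate_error / 12
print(round(worker_monthly * 100), round(investor_monthly * 100))  # cents per month
```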
Conversely, Young (2010) finds that the result is robust to a variety of modifications, based on the same methodology of using the agents' forecasts to perform structural analyses. Other papers instead rely explicitly on the approximate aggregation result to simplify complex research questions of practical interest. For example, Davis and Heathcote's (2005) analysis of residential investment uses a representative agent because they consider market incompleteness irrelevant for aggregate dynamics. Another example is Shimer (2009), who argues that, in light of the approximate aggregation result, the heterogeneous-agent approach to macroeconomics is not promising for explaining the puzzling dynamics of the ratio between the labor-leisure rate of substitution and the marginal product of labor. Using a correctly specified model for inference is not a mere technicality, and dispensing with it leads to erroneous conclusions. To illustrate this point, I evaluate the significance of aggregate shocks for the aggregate dynamics.

¹ In the following I will refer to such worker and investor as the benchmark ones.

The methodology used to support approximate aggregation leads to the paradox that aggregate shocks do not matter for aggregate dynamics. The econometric specification preferred in this paper, instead, correctly shows that aggregate shocks are important, despite the fact that agents can forecast precisely without that information.²

2 Misspecification Analysis

What follows uses data generated by the baseline Krusell and Smith economy at its ergodic equilibrium. Let the dummy Dg,t be one if zt = zg at t and zero otherwise (zt = zb), and let mit, i = 1, 2, ..., denote the i-th moment of the distribution of wealth at time t. The agents forecast the aggregate capital series ($\{m^1_t\}_{t=0}^{\infty}$) using the following specification:

$$\log m^1_{t+1} = a_0 D_{g,t} + a_1 D_{g,t} \log m^1_t + c_0 (1 - D_{g,t}) + c_1 (1 - D_{g,t}) \log m^1_t + \epsilon_{t+1}, \qquad (1)$$

where a0, a1, c0, and c1 are parameters to be estimated. The first column of Table 1 reports the values yielding an RPE: when households inform their saving decisions with the predictions offered by model (1) parameterized as in Table 1, OLS estimation of model (1) on the resulting aggregate data produces the estimates in Table 1. The second column of Table 1 shows that including the second moment of the distribution of capital leads to substantially identical forecasts. In fact, (i) the R² statistic of the extended regression is only imperceptibly higher, and (ii) the coefficients on m2t are three orders of magnitude smaller than those on m1t. Hence the data on wealth dispersion make practically no difference for the optimal behavior of the agents.³ The third and fourth columns of Table 1 report two autoregressive coefficients on the residuals produced by the regressions in columns (1) and (2), respectively, showing clearly that these regressions violate the OLS assumption of IID residuals. This misspecification weakens the argument in favor of approximate aggregation, yet it gives the agents of the economy no incentive to adopt a different expectational model: model (1) is still an excellent source of forecasts.

² This is seen directly from Krusell and Smith (1998). The coefficients of the agents' forecasting specification differ between the two phases of the business cycle only by magnitudes between 10⁻² and 10⁻³. These differences imply approximately the same forecasts in both expansions and recessions; hence the approximate-aggregation logic would imply that aggregate shocks do not matter.
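The misspecification pattern just described can be reproduced in a stylized way. The sketch below is hypothetical: the data-generating process, the regime-switching probability, the size of the omitted persistent component, and the sample length are all invented for the illustration; only the intercepts and slopes are borrowed from Table 1. Because model (1) interacts every regressor with the regime dummies, fitting it by OLS is equivalent to running a separate intercept-and-slope regression within each regime.

```python
# Hypothetical illustration: fit a regime-dependent AR(1) like model (1)
# by OLS, then measure the first-order autocorrelation of its residuals.
import random

random.seed(1)
T = 2000
z, k = [1], [2.44]   # aggregate state (1 = good) and log aggregate capital
eps = 0.0            # persistent omitted component -> autocorrelated residuals
for t in range(T):
    eps = 0.9 * eps + random.gauss(0.0, 0.001)
    a, b = (0.0924, 0.9633) if z[t] else (0.0826, 0.9653)
    k.append(a + b * k[t] + eps)
    z.append(1 - z[t] if random.random() < 0.125 else z[t])

def fit(ts):
    """OLS of k_{t+1} on an intercept and k_t over the periods in ts."""
    xs, ys = [k[t] for t in ts], [k[t + 1] for t in ts]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - mx) * (yy - my) for x, yy in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

# full dummy interaction = separate regressions in each regime
good = [t for t in range(T) if z[t] == 1]
bad = [t for t in range(T) if z[t] == 0]
(ag, bg), (ab, bb) = fit(good), fit(bad)

resid = [k[t + 1] - ((ag + bg * k[t]) if z[t] else (ab + bb * k[t]))
         for t in range(T)]
rho = (sum(u * v for u, v in zip(resid[:-1], resid[1:]))
       / sum(u * u for u in resid[:-1]))
print(round(rho, 2))   # clearly positive: the IID-residual assumption fails
```

The regression fits extremely well, yet the omitted persistent component leaves a clear footprint in the residual autocorrelation, mirroring the estimates in columns (3) and (4) of Table 1.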

3 The Structural Analysis

Model (1) does not adequately describe the data. To find a specification that does, I consider the following class of models:

$$\Delta^2 \log m^1_{t+1} = \sum_{i=0}^{n} \left[ \phi_{0,i} D_{g,t} \log m^i_t + \phi_{1,i} (1 - D_{g,t}) \log m^i_t \right], \qquad (2)$$

where ∆² is the second-difference operator and φ0,i and φ1,i are coefficients to be estimated. Differencing the data is suggested by the near-unity estimates of the parameters a1 and c1.⁴ Table 2 shows that the specifications in second differences display no evidence of residual autocorrelation and therefore provide a better basis for inference. The approximate aggregation result is not robust to this modification: adding moments of the distribution of wealth increases the adjusted R², and their quantitative importance is non-negligible. For example, in specification (5) the average is as relevant as the skewness and kurtosis coefficients. While this is an interesting finding in itself, the most relevant implications here are (1) that the macroeconomic dynamics of the Krusell and Smith model are nonlinear in the distribution of wealth, and (2) that higher moments of the wealth distribution have first-order relevance. A representative-agent model may not be as good an approximation to a heterogeneous-agents one as approximate aggregation implies.

³ This conclusion also obtains if the full measure of agents forecasts according to the second specification of Table 1.
⁴ First differencing does not solve the problem of autocorrelated residuals.
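The serial-correlation diagnostics reported in Table 2 can be computed with a few lines of code. The sketch below is a hedged illustration rather than a replication: the residual series here is placeholder white noise, whereas in the paper the residuals come from the second-differenced regressions (and the table reports p-values for the tests, not the raw statistics).

```python
# Sketch of two serial-correlation diagnostics: the Durbin-Watson statistic
# and the first-order residual autocorrelation.
import random

random.seed(2)
resid = [random.gauss(0.0, 1.0) for _ in range(1000)]   # stand-in residuals

def durbin_watson(e):
    """DW is approximately 2(1 - rho1); values near 2 mean no first-order
    autocorrelation."""
    return (sum((a - b) ** 2 for a, b in zip(e[1:], e[:-1]))
            / sum(x * x for x in e))

def autocorr1(e):
    """Sample first-order autocorrelation of the series e."""
    m = sum(e) / len(e)
    return (sum((a - m) * (b - m) for a, b in zip(e[1:], e[:-1]))
            / sum((x - m) ** 2 for x in e))

dw = durbin_watson(resid)
r1 = autocorr1(resid)
print(round(dw, 2), round(r1, 3))
```

For well-specified residuals, as in Table 2, the DW statistic sits near 2 and the first-order autocorrelation near zero; for the levels regressions of Table 1 the analogous autocorrelation is above 0.9.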

4 The Importance of Strong Inferences

This section illustrates the necessity of solid inference through an example. Are aggregate shocks relevant for the dynamics of macroeconomic aggregates? This question fits the purpose at hand because (i) it has a known answer (yes, they are) and (ii) it can be answered through both of the methodologies considered here. While the regression model (2) provides the correct answer, the methodology used to support approximate aggregation leads to erroneous conclusions. Suppose first that agents forecast ignoring aggregate shocks, that is, according to

$$\log m^1_{t+1} = \varphi_0 + \varphi_1 \log m^1_t + \epsilon_{t+1}, \qquad (3)$$

where φ0 and φ1 are parameters to be determined by the Krusell and Smith algorithm. The first column of Table 3 reports the RPE for this case. The forecasts are very precise, though slightly less so than before. Nevertheless, the economic relevance of this deterioration is very small: in the RPE described in Table 3, the agents make forecast errors for the wage rate and the interest rate that are on average 0.025% and 0.04% higher, respectively, compared to the RPE described in Table 1. For the benchmark worker and investor, the additional errors are worth $1.875 and $1.33 per month, so it can be argued that the saving-consumption behavior of the agents is very similar in both cases. Furthermore,


including the aggregate shocks improves the forecasting precision only marginally, as shown in the second column of Table 3. The specification in column (2) of Table 3 improves the forecasts for wages and interest rates relative to the specification in column (1) by 0.0055% and 0.0063% of the average wage and interest rate, worth only 55 and 21 cents a month for the benchmark worker and investor. The RPE of Table 3 is approximately an equilibrium in the sense of Krusell and Smith (1998), given that the incentives to deviate from specification (3) are trivial. Nevertheless, the stochastic behavior of the economy is obviously not independent of the aggregate shocks, contrary to what would logically follow from the argument supporting approximate aggregation. This is instead easily shown by the econometric model in second differences: columns (3) and (4) of Table 3 show that aggregate productivity data not only increase the fit of the regression, but their coefficient is also large and significant.
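The mechanism behind this paradox can be illustrated with a toy process. All numbers below are invented for the sketch (a two-state shock, persistence 0.99, small Gaussian noise); nothing is taken from the paper's calibration. In levels, the lagged value of a highly persistent series soaks up nearly all the variance, so the R² barely moves when the shock is added; in differences, the shock's contribution to fit is plain.

```python
# Hedged illustration: for a highly persistent series, levels R-squared is
# near one with or without the shock, while in differences the shock matters.
import random

random.seed(3)
T = 5000
z = [random.choice([0.01, -0.01]) for _ in range(T)]   # two-state shock
y = [0.0]
for t in range(T):
    y.append(0.02 + 0.99 * y[-1] + z[t] + random.gauss(0.0, 0.002))

def r2(actual, fitted):
    m = sum(actual) / len(actual)
    ss_res = sum((a - f) ** 2 for a, f in zip(actual, fitted))
    ss_tot = sum((a - m) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

def simple_ols(pairs):
    xs, ys = [p[0] for p in pairs], [p[1] for p in pairs]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    b = (sum((x - mx) * (yy - my) for x, yy in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

target = [y[t + 1] for t in range(T)]

# levels, shock ignored: pooled AR(1)
a0, b0 = simple_ols([(y[t], y[t + 1]) for t in range(T)])
r2_levels_no_shock = r2(target, [a0 + b0 * y[t] for t in range(T)])

# levels, shock used: a separate AR(1) per shock state
coef = {s: simple_ols([(y[t], y[t + 1]) for t in range(T) if z[t] == s])
        for s in (0.01, -0.01)}
r2_levels_shock = r2(target, [coef[z[t]][0] + coef[z[t]][1] * y[t]
                              for t in range(T)])

# first differences: how much of the change does the shock alone explain?
d = [y[t + 1] - y[t] for t in range(T)]
mean_d = {}
for s in (0.01, -0.01):
    ds = [d[t] for t in range(T) if z[t] == s]
    mean_d[s] = sum(ds) / len(ds)
r2_diff_shock = r2(d, [mean_d[z[t]] for t in range(T)])

print(round(r2_levels_no_shock, 3), round(r2_levels_shock, 3),
      round(r2_diff_shock, 2))
```

Judging the shock's relevance by the change in the levels R² would dismiss it; the differenced regression reveals its true importance, which is the point of columns (3) and (4) of Table 3.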

5 Conclusion

The influential result known as approximate aggregation is supported by weak inference. More reliable inference suggests that the aggregate dynamics of the incomplete-markets model with aggregate fluctuations are highly nonlinear. The issue raised and addressed in this paper preserves the validity of the algorithm for finding RPEs of incomplete-markets models with aggregate fluctuations: the agents of the economy require forecasting precision, and if a simple econometric model provides it, there is no incentive to adopt a more complete description of the economy. In general, therefore, the agents' forecasting model does not have to be a good representation of the actual stochastic behavior of the macroeconomic aggregates. The stability of aggregate capital in the Krusell and Smith economy explains why high forecasting precision is easy to attain; future work should focus on matching the volatility of aggregate investment.


References

Acemoglu, D., & Shimer, R. (2000). Productivity gains from unemployment insurance. European Economic Review, 44(7), 1195–1224.

Branch, W., & Evans, G. (2011). Learning about risk and return: A simple model of bubbles and crashes. American Economic Journal: Macroeconomics, 3(3), 159–191.

Branch, W., & McGough, B. (2009). A New Keynesian model with heterogeneous expectations. Journal of Economic Dynamics and Control, 33(5), 1036–1051.

Carroll, C. D. (2000). Requiem for the representative consumer? Aggregate implications of microeconomic consumption behavior. American Economic Review, 110–115.

Davis, M. A., & Heathcote, J. (2005). Housing and the business cycle. International Economic Review, 46(3), 751–784.

Eusepi, S., & Preston, B. (2011). Expectations, learning, and business cycle fluctuations. American Economic Review, 101, 2844–2872.

Evans, G. W. (1991). Pitfalls in testing for explosive bubbles in asset prices. The American Economic Review, 922–930.

Evans, G. W., & Honkapohja, S. (2001). Learning and expectations in macroeconomics. Princeton University Press.

Evans, G. W., & Honkapohja, S. (2009). Learning and macroeconomics. Annual Review of Economics, 1(1), 421–449.

Evans, G. W., & Honkapohja, S. (2013). Learning as a rational foundation for macroeconomics and finance (R. Frydman & E. S. Phelps, Eds.). Princeton University Press.

Evans, G. W., & McGough, B. (2005). Monetary policy, indeterminacy and learning. Journal of Economic Dynamics and Control, 29(11), 1809–1840.

Giusto, A. (2014). Adaptive learning and distributional dynamics in an incomplete markets model. Journal of Economic Dynamics and Control, 40, 317–333.

Heathcote, J. (2005). Fiscal policy with heterogeneous agents and incomplete markets. The Review of Economic Studies, 72(1), 161–188.

Johansen, S. (2006). Confronting the economic model with the data. In Post Walrasian Macroeconomics (pp. 287–300). Cambridge University Press.

Juselius, K. (2006). The cointegrated VAR model: Methodology and applications. Oxford University Press.

Krusell, P., & Smith, A. (1998). Income and wealth heterogeneity in the macroeconomy. Journal of Political Economy, 106(5), 867–896.

Milani, F. (2007). Expectations, learning and macroeconomic persistence. Journal of Monetary Economics, 54(7), 2065–2082.

Sargent, T. J. (1999). The conquest of American inflation. Princeton University Press.

Shimer, R. (2009). Convergence in macroeconomics: The labor wedge. American Economic Journal: Macroeconomics, 280–297.

Townsend, R. M., & Ueda, K. (2006). Financial deepening, inequality, and growth: A model-based quantitative evaluation. The Review of Economic Studies, 73(1), 251–293.

Young, E. (2010). Solving the incomplete markets model with aggregate uncertainty using the Krusell-Smith algorithm and non-stochastic simulations. Journal of Economic Dynamics and Control, 34(1), 36–41.

Fig. 1: A 200-period example of the typical data on aggregate capital generated by the Krusell and Smith (1998) economy. (Both panels plot log k against t for t = 0, ..., 200; the bottom panel rescales the vertical axis, to roughly 2.42–2.46, to show how small the variation in the series is relative to its level.)

Dependent variable    (1) log m1t+1        (2) log m1t+1        (3) ε̂(1),t+1          (4) ε̂(2),t+1

Db,t                  0.0826*** (0.0002)   0.0823*** (0.0000)
Dg,t                  0.0924*** (0.0001)   0.0920*** (0.0000)
Db,t log m1t          0.9653*** (0.0001)   0.9652*** (0.0000)
Dg,t log m1t          0.9633*** (0.0001)   0.9633*** (0.0000)
Db,t log m2t                               0.0002*** (0.0000)
Dg,t log m2t                               0.0002*** (0.0000)
ε̂t                                                             0.9226*** (0.0010)    0.9171*** (0.0010)
ε̂t−1                                                           0.0410*** (0.0010)    0.04212*** (0.0010)

R²                    0.9999               1.0000               0.9261                0.9169

Tab. 1: Misspecification tests on the KS economy. Standard errors are reported in parentheses. Three stars indicate statistical significance at the 1% level.


Dependent variable: ∆² log m1t+1

                    (1)           (2)           (3)           (4)           (5)

Db                  0.0508***     0.052***      0.0561***     0.0607***     0.0711***
Dg                  0.0460***     0.0483***     0.0545***     0.0573***     0.0638***
Db × log m1t       −0.0210***    −0.0209***    −0.0212***    −0.0219***    −0.0233***
Dg × log m1t       −0.0185***    −0.0183***    −0.0188***    −0.0192***    −0.0199***
Db × log m2t                     −0.0008       −0.0029***    −0.0055***    −0.0117***
Dg × log m2t                     −0.0013**     −0.0048***    −0.0063***    −0.0104***
Db × log m3t                                    0.0005***     0.0051***     0.0217***
Dg × log m3t                                    0.0008***     0.0036***     0.0148***
Db × log m4t                                                 −0.0030***    −0.027***
Dg × log m4t                                                 −0.0019**     −0.018***
Db × log m5t                                                                0.0110***
Dg × log m5t                                                                0.0074***

Adj. R²             0.1556        0.1562        0.1587        0.16          0.1628
Res. autocorr.     −0.0052       −0.0048       −0.005        −0.0049       −0.004

p-values:
Durbin-Watson       0.66          0.656         0.658         0.706         0.798
Box-Pierce          0.6024        0.6298        0.6206        0.627         0.6903
Breusch-Godfrey     0.6023        0.6298        0.6205        0.6269        0.6902

Tab. 2: A set of models that passes the reported misspecification tests for serial correlation. The independent variables denoted with mit, i = 1, ..., 5, are the moments of the distribution of wealth at time t.


                    (1)           (2)           (3)            (4)
Dep. variable       log m1t+1     log m1t+1     ∆² log m1t+1   ∆² log m1t+1

Intercept           0.0272***     0.1793***     0.0543***     −0.0048
Dg                                −0.0055                     −0.0183***
log(m1t)            0.9889***                  −0.0222***      0.0023
Db × log(m1t)                     0.9259***
Dg × log(m1t)                     0.9298***
log µt                                                         0.0090***

R²                  0.9779        0.9883        0.0343         0.1864

Tab. 3: Assessment of the importance of TFP shocks for aggregate dynamics.
