Econometrics: A Predictive Modeling Approach

Francis X. Diebold
University of Pennsylvania

Edition 2018, version of Sunday 22nd April, 2018


Copyright © 2013–2018 by Francis X. Diebold. This work is freely available for your use, but be warned: it is preliminary, incomplete, and evolving. It is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. (Briefly: I retain copyright, but you can use, copy and distribute non-commercially, so long as you give me attribution and do not modify. To view a copy of the license, go to http://creativecommons.org/licenses/by-nc-nd/4.0/.) In return I ask that you please cite the book whenever appropriate, as: “Diebold, F.X. (2018), Econometrics, Department of Economics, University of Pennsylvania, http://www.ssc.upenn.edu/~fdiebold/Textbooks.html.”

To my undergraduates, who continually surprise and inspire me

Brief Table of Contents

About the Author
About the Cover
Guide to e-Features
Preface

I Beginnings
1 Introduction to Econometrics
2 Graphics and Graphical Style

II Cross-Sections
3 Regression Under the Ideal Conditions
4 Model Selection
5 Non-Normality
6 Group Heterogeneity and Indicator Variables
7 Nonlinearity
8 Heteroskedasticity

III Time-Series
9 Indicator Variables in Time Series: Trend and Seasonality
10 Non-Linearity and Structural Change in Time Series
11 Serial Correlation in Observed Time Series
12 Serial Correlation in Time Series Regression
13 Heteroskedasticity in Time Series
14 Multivariate: Vector Autoregression

IV Appendices
A Probability and Statistics Review
B Construction of the Wage Datasets
C Some Popular Books Worth Encountering

Detailed Table of Contents

About the Author
About the Cover
Guide to e-Features
Preface

I Beginnings

1 Introduction to Econometrics
  1.1 Welcome
    1.1.1 Who Uses Econometrics?
    1.1.2 What Distinguishes Econometrics?
  1.2 Types of Recorded Economic Data
  1.3 Online Information and Data
  1.4 Software
  1.5 Tips on How to use this book
  1.6 Exercises, Problems and Complements
  1.7 Notes

2 Graphics and Graphical Style
  2.1 Simple Techniques of Graphical Analysis
    2.1.1 Univariate Graphics
    2.1.2 Multivariate Graphics
    2.1.3 Summary and Extension
  2.2 Elements of Graphical Style
  2.3 U.S. Hourly Wages
  2.4 Concluding Remarks
  2.5 Exercises, Problems and Complements
  2.6 Notes
  2.7 Graphics Legend: Edward Tufte

II Cross-Sections

3 Regression Under the Ideal Conditions
  3.1 Preliminary Graphics
  3.2 Regression as Curve Fitting
    3.2.1 Bivariate, or Simple, Linear Regression
    3.2.2 Multiple Linear Regression
    3.2.3 Onward
  3.3 Regression as a Probability Model
    3.3.1 A Population Model and a Sample Estimator
    3.3.2 Notation, Assumptions and Results: The Ideal Conditions
      A Bit of Matrix Notation
      Assumptions: The Ideal Conditions (IC)
      Results
  3.4 A Wage Equation
    3.4.1 Mean dependent var 2.342
    3.4.2 S.D. dependent var .561
    3.4.3 Sum squared resid 319.938
    3.4.4 Log likelihood -938.236
    3.4.5 F statistic 199.626
    3.4.6 Prob(F statistic) 0.000000
    3.4.7 S.E. of regression .492
    3.4.8 R-squared .232
    3.4.9 Adjusted R-squared .231
    3.4.10 Akaike info criterion 1.423
    3.4.11 Schwarz criterion 1.435
    3.4.12 Hannan-Quinn criter. 1.427
    3.4.13 Durbin-Watson stat. 1.926
    3.4.14 The Residual Scatter
    3.4.15 The Residual Plot
  3.5 More on Prediction
    3.5.1 Regression Output from a Predictive Perspective
  3.6 Exercises, Problems and Complements
  3.7 Notes
  3.8 Regression’s Inventor: Carl Friedrich Gauss

4 Model Selection
  4.0.1 A Bit More on AIC and SIC
  4.1 All-Subsets Model Selection I: Information Criteria
  4.2 Exercises, Problems and Complements

5 Non-Normality
  5.0.1 Results
  5.1 Assessing Normality
    5.1.1 QQ Plots
    5.1.2 Residual Sample Skewness and Kurtosis
    5.1.3 The Jarque-Bera Test
  5.2 Outliers
    5.2.1 Outlier Detection
      Graphics
      Leave-One-Out and Leverage
  5.3 Robust Estimation
    5.3.1 Robustness Iteration
    5.3.2 Least Absolute Deviations
  5.4 Wage Determination
    5.4.1 WAGE
    5.4.2 LWAGE
  5.5 Exercises, Problems and Complements

6 Group Heterogeneity and Indicator Variables
  6.1 0-1 Dummy Variables
  6.2 Group Dummies in the Wage Regression
  6.3 Exercises, Problems and Complements
  6.4 Notes
  6.5 Dummy Variables, ANOVA, and Sir Ronald Fischer

7 Nonlinearity
  7.1 Models Linear in Transformed Variables
    7.1.1 Logarithms
    7.1.2 Box-Cox and GLM
  7.2 Intrinsically Non-Linear Models
    7.2.1 Nonlinear Least Squares
  7.3 Series Expansions
  7.4 A Final Word on Nonlinearity and the IC
  7.5 Selecting a Non-Linear Model
    7.5.1 t and F Tests, and Information Criteria
    7.5.2 The RESET Test
  7.6 Non-Linearity in Wage Determination
    7.6.1 Non-Linearity in Continuous and Discrete Variables Simultaneously
  7.7 Exercises, Problems and Complements
  7.8 Notes

8 Heteroskedasticity
  8.1 Consequences of Heteroskedasticity for Estimation, Inference, and Prediction
  8.2 Detecting Heteroskedasticity
    8.2.1 Graphical Diagnostics
    8.2.2 Formal Tests
      The Breusch-Pagan-Godfrey Test (BPG)
      White’s Test
  8.3 Dealing with Heteroskedasticity
    8.3.1 Adjusting Standard Errors
    8.3.2 Adjusting Density Forecasts
  8.4 Exercises, Problems and Complements

III Time-Series

9 Indicator Variables in Time Series: Trend and Seasonality
  9.1 Linear Trend
  9.2 Seasonality
    9.2.1 Seasonal Dummies
    9.2.2 More General Calendar Effects
  9.3 Trend and Seasonality in Liquor Sales
  9.4 Exercises, Problems and Complements
  9.5 Notes

10 Non-Linearity and Structural Change in Time Series
  10.1 Exponential Trend
  10.2 Quadratic Trend
  10.3 More on Non-Linear Trend
    10.3.1 Moving-Average Trend and De-Trending
    10.3.2 Hodrick-Prescott Trend and De-Trending
  10.4 Structural Change
    10.4.1 Gradual Parameter Evolution
    10.4.2 Abrupt Parameter Breaks
      Exogenously-Specified Breaks
      The Chow test with Endogenous Break Selection
  10.5 Dummy Variables and Omitted Variables, Again and Again
    10.5.1 Dummy Variables
    10.5.2 Omitted Variables
  10.6 Non-Linearity in Liquor Sales Trend
  10.7 Exercises, Problems and Complements
  10.8 Notes

11 Serial Correlation in Observed Time Series
  11.1 Characterizing Time-Series Dynamics
    11.1.1 Covariance Stationary Time Series
  11.2 White Noise
  11.3 Estimation and Inference for the Mean, Autocorrelation and Partial Autocorrelation Functions
    11.3.1 Sample Mean
    11.3.2 Sample Autocorrelations
    11.3.3 Sample Partial Autocorrelations
  11.4 Autoregressive Models for Serially-Correlated Time Series
    11.4.1 Some Preliminary Notation: The Lag Operator
    11.4.2 Autoregressions
      The AR(1) Process
    11.4.3 The AR(p) Process
    11.4.4 Alternative Approaches to Estimating Autoregressions
  11.5 Exercises, Problems and Complements
  11.6 Notes

12 Serial Correlation in Time Series Regression
  12.1 Testing for Serial Correlation
    12.1.1 The Durbin-Watson Test
    12.1.2 The Breusch-Godfrey Test
    12.1.3 The Residual Correlogram
  12.2 Estimation with Serial Correlation
    12.2.1 Regression with Serially-Correlated Disturbances
    12.2.2 Serially-Correlated Disturbances vs. Lagged Dependent Variables
    12.2.3 A Full Model of Liquor Sales
  12.3 Exercises, Problems and Complements
  12.4 Notes

13 Heteroskedasticity in Time Series
  13.1 The Basic ARCH Process
  13.2 The GARCH Process
  13.3 Extensions of ARCH and GARCH Models
    13.3.1 Asymmetric Response
    13.3.2 Exogenous Variables in the Volatility Function
    13.3.3 Regression with GARCH disturbances and GARCH-M
    13.3.4 Component GARCH
    13.3.5 Mixing and Matching
  13.4 Estimating, Forecasting and Diagnosing GARCH Models
  13.5 Exercises, Problems and Complements
  13.6 Notes

14 Multivariate: Vector Autoregression
  14.1 Distributed Lag Models
  14.2 Regressions with Lagged Dependent Variables, and Regressions with ARMA Disturbances
  14.3 Vector Autoregressions
  14.4 Predictive Causality
  14.5 Impulse-Response Functions
  14.6 Variance Decompositions
  14.7 Application: Housing Starts and Completions
  14.8 Exercises, Problems and Complements
  14.9 Notes

IV Appendices

A Probability and Statistics Review
  A.1 Populations: Random Variables, Distributions and Moments
    A.1.1 Univariate
    A.1.2 Multivariate
  A.2 Samples: Sample Moments
    A.2.1 Univariate
    A.2.2 Multivariate
  A.3 Finite-Sample and Asymptotic Sampling Distributions of the Sample Mean
    A.3.1 Exact Finite-Sample Results
    A.3.2 Approximate Asymptotic Results (Under Weaker Assumptions)
  A.4 Exercises, Problems and Complements
  A.5 Notes

B Construction of the Wage Datasets

C Some Popular Books Worth Encountering

About the Author

Francis X. Diebold is Professor of Economics, Finance and Statistics at the University of Pennsylvania. He has won both undergraduate and graduate economics “teacher of the year” awards, and his academic “family” includes thousands of undergraduate students and nearly 75 Ph.D. students. Diebold has published widely in econometrics, forecasting, finance, and macroeconomics. He is an NBER Faculty Research Associate, as well as an elected Fellow of the Econometric Society, the American Statistical Association, and the International Institute of Forecasters. He has also been the recipient of Sloan, Guggenheim, and Humboldt fellowships, Co-Director of the Wharton Financial Institutions Center, and President of the Society for Financial Econometrics. His academic research is firmly linked to practical matters: During 1986-1989 he served as an economist under both Paul Volcker and Alan Greenspan at the Board of Governors of the Federal Reserve System, during 2007-2008 he served as Executive Director of Morgan Stanley Investment Management, and during 2012-2013 he served as Chairman of the Federal Reserve System’s Model Validation Council.

About the Cover

The colorful painting is Enigma, by Glen Josselsohn, from Wikimedia Commons. As noted there: Glen Josselsohn was born in Johannesburg in 1971. His art has been exhibited in several art galleries around the country, with a number of sell-out exhibitions on the South African art scene ... Glen’s fascination with abstract art comes from the likes of Picasso, Pollock, Miro, and local African art. I used the painting mostly just because I like it. But econometrics is indeed something of an enigma, part economics and part statistics, part science and part art, hunting faint and fleeting signals buried in massive noise. Yet, perhaps somewhat miraculously, it often succeeds.


Guide to e-Features

• Hyperlinks to internal items (table of contents, index, footnotes, etc.) appear in red.

• Hyperlinks to bibliographic references appear in green.

• Hyperlinks to the web appear in cyan.

• Hyperlinks to external files (e.g., video) appear in blue.

• Many images are clickable to reach related material.

• Key concepts appear in bold, and they also appear in the book’s (hyperlinked) index.

• Additional related materials appear on the book’s web page. These may include book updates, presentation slides, datasets, and computer code.

• Facebook group: Diebold Econometrics.

• Additional relevant material sometimes appears on Facebook groups Diebold Forecasting and Diebold Time Series Econometrics, on Twitter @FrancisDiebold, and on the No Hesitations blog.


List of Figures

1.1 Resources for Economists Web Page
1.2 Eviews Homepage
1.3 Stata Homepage
1.4 R Homepage
1.5 Python Homepage

2.1 1-Year Government Bond Yield, Levels and Changes
2.2 Histogram of 1-Year Government Bond Yield
2.3 Bivariate Scatterplot, 1-Year and 10-Year Government Bond Yields
2.4 Scatterplot Matrix, 1-, 10-, 20- and 30-Year Government Bond Yields
2.5 Distributions of Wages and Log Wages
2.6 Tufte Teaching, with a First Edition Book by Galileo

3.1 Distributions of Log Wage, Education and Experience
3.2 (Log Wage, Education) Scatterplot
3.3 (Log Wage, Education) Scatterplot with Superimposed Regression Line
3.4 Regression Output
3.5 Wage Regression Residual Scatter
3.6 Wage Regression Residual Plot
3.7 Carl Friedrich Gauss

5.1 OLS Wage Regression
5.2 OLS Wage Regression: Residual Plot
5.3 OLS Wage Regression: Residual Histogram and Statistics
5.4 OLS Wage Regression: Residual Gaussian QQ Plot
5.5 OLS Wage Regression: Leave-One-Out Plot
5.6 LAD Wage Regression
5.7 OLS Log Wage Regression
5.8 OLS Log Wage Regression: Residual Plot
5.9 OLS Log Wage Regression: Residual Histogram and Statistics
5.10 OLS Log Wage Regression: Residual Gaussian QQ Plot
5.11 OLS Log Wage Regression: Leave-One-Out Plot
5.12 LAD Log Wage Regression

6.1 Histograms for Wage Covariates
6.2 Wage Regression on Education and Experience
6.3 Wage Regression on Education, Experience and Group Dummies
6.4 Residual Scatter from Wage Regression on Education, Experience and Group Dummies
6.5 Sir Ronald Fischer

7.1 Basic Linear Wage Regression
7.2 Quadratic Wage Regression
7.3 Wage Regression on Education, Experience, Group Dummies, and Interactions
7.4 Wage Regression with Continuous Non-Linearities and Interactions, and Discrete Interactions
7.5 Regression Output

8.1 Final Wage Regression
8.2 Squared Residuals vs. Years of Education
8.3 BPG Test Regression and Results
8.4 White Test Regression and Results
8.5 Wage Regression with Heteroskedasticity-Robust Standard Errors
8.6 Regression Weighted by Fit From White Test Regression

9.1 Various Linear Trends
9.2 Liquor Sales
9.3 Log Liquor Sales
9.4 Linear Trend Estimation
9.5 Residual Plot, Linear Trend Estimation
9.6 Estimation Results, Linear Trend with Seasonal Dummies
9.7 Residual Plot, Linear Trend with Seasonal Dummies
9.8 Seasonal Pattern

10.1 Various Exponential Trends
10.2 Various Quadratic Trends
10.3 Log-Quadratic Trend Estimation
10.4 Residual Plot, Log-Quadratic Trend Estimation
10.5 Liquor Sales Log-Quadratic Trend Estimation with Seasonal Dummies
10.6 Residual Plot, Liquor Sales Log-Quadratic Trend Estimation With Seasonal Dummies

12.1 ***
12.2 ***
12.3 ***
12.4 ***

14.1 Housing Starts and Completions, 1968-1996
14.2 Housing Starts Correlogram
14.3 Housing Starts Autocorrelations and Partial Autocorrelations
14.4 Housing Completions Correlogram
14.5 Housing Completions Autocorrelations and Partial Autocorrelations
14.6 Housing Starts and Completions Sample Cross Correlations
14.7 VAR Starts Model
14.8 VAR Starts Residual Correlogram
14.9 VAR Starts Equation - Sample Autocorrelation and Partial Autocorrelation
14.10 VAR Completions Model
14.11 VAR Completions Residual Correlogram
14.12 VAR Completions Equation - Sample Autocorrelation and Partial Autocorrelation
14.13 Housing Starts and Completions - Causality Tests
14.14 Housing Starts and Completions - VAR Impulse Response Functions. Response is to 1 SD innovation.
14.15 Housing Starts and Completions - VAR Variance Decompositions
14.16 Housing Starts Forecast
14.17 Housing Starts Forecast and Realization
14.18 Housing Completions Forecast
14.19 Housing Completions Forecast and Realization

List of Tables

2.1 Yield Statistics

Preface

Econometrics: A Predictive Modeling Approach should be useful to students in a variety of fields – in economics, of course, but also statistics, business, finance, public policy, and even engineering. The “predictive modeling” perspective, emphasized throughout, connects students to the modern perspectives of machine learning, data science, etc., in both causal and noncausal environments.

I have used the material successfully for many years in my undergraduate econometrics course at Penn, as background for various other undergraduate courses, and in master’s-level executive education courses given to professionals in economics, business, finance and government. It is directly accessible at the undergraduate and master’s levels, as the only prerequisite is an introductory probability and statistics course.

Many people have contributed to the development of this book. One way or another, all of the following deserve thanks: Xu Cheng, University of Pennsylvania; Barbara Chizzolini, Bocconi University; Frank Di Traglia, University of Pennsylvania; Carlo Favero, Bocconi University; Bruce Hansen, University of Wisconsin; Frank Schorfheide, University of Pennsylvania; Jim Stock, Harvard University; Mark Watson, Princeton University.

I am especially grateful to the University of Pennsylvania, which for many years has provided an unparalleled intellectual home, the perfect incubator for the ideas that have congealed here. Related, I am grateful to an army of


energetic and enthusiastic Penn graduate and undergraduate students, who read and improved much of the manuscript and code.

Finally, I apologize and accept full responsibility for the many errors and shortcomings that undoubtedly remain – minor and major – despite ongoing efforts to eliminate them.

Francis X. Diebold
Philadelphia
Sunday 22nd April, 2018


Part I Beginnings


Chapter 1

Introduction to Econometrics

1.1 Welcome

1.1.1 Who Uses Econometrics?

Econometrics is important — it is used constantly in business, finance, economics, government, consulting and many other fields. Econometric models are used routinely for tasks ranging from data description to policy analysis, and ultimately they guide many important decisions. To develop a feel for the tremendous diversity of econometric applications, let’s explore some of the areas where they feature prominently, and the corresponding diversity of decisions supported.

One key field is economics (of course), broadly defined. Governments, businesses, policy organizations, central banks, financial services firms, and economic consulting firms around the world routinely use econometrics. Governments, central banks and policy organizations use econometric models to guide monetary policy, fiscal policy, as well as education and training, health, and transfer policies.

Businesses use econometrics for strategic planning tasks. These include management strategy of all types including operations management and control (hiring, production, inventory, investment, ...), marketing (pricing, distributing, advertising, ...), accounting (budgeting revenues and expenditures),


and so on.

Sales modeling is a good example. Firms routinely use econometric models of sales to help guide management decisions in inventory management, sales force management, production planning, new market entry, and so on.

More generally, firms use econometric models to help decide what to produce (What product or mix of products should be produced?), when to produce (Should we build up inventories now in anticipation of high future demand? How many shifts should be run?), how much to produce and how much capacity to build (What are the trends in market size and market share? Are there cyclical or seasonal effects? How quickly and with what pattern will a newly-built plant or a newly-installed technology depreciate?), and where to produce (Should we have one plant or many? If many, where should we locate them?). Firms also use forecasts of future prices and availability of inputs to guide production decisions.

Econometric models are also crucial in financial services, including asset management, asset pricing, mergers and acquisitions, investment banking, and insurance. Portfolio managers, for example, have been interested in empirical modeling and understanding asset returns such as stock returns, interest rates, exchange rates, and commodity prices. Econometrics is similarly central to financial risk management. In recent decades, econometric methods for volatility modeling have been developed and widely applied to evaluate and insure risks associated with asset portfolios, and to price assets such as options and other derivatives.

Finally, econometrics is central to the work of a wide variety of consulting firms, many of which support the business functions already mentioned. Litigation support is also a very active area, in which econometric models are routinely used for damage assessment (e.g., lost earnings), “but for” analyses, and so on.

Indeed these examples are just the tip of the iceberg. Surely you can think


of many more situations in which econometrics is used.

1.1.2 What Distinguishes Econometrics?

Econometrics is much more than just “statistics using economic data,” although it is of course very closely related to statistics.

• Econometrics has special interest in prediction, causal estimation, and their interface.

• Econometrics must confront the special issues and features that arise routinely in economic data, such as trends, seasonality and cycles.

• Econometrics must confront the special problems arising due to its non-experimental nature: model mis-specification, structural change, etc.

With so many applications and issues in econometrics, you might fear that a huge variety of econometric techniques exists, and that you’ll have to master all of them. Fortunately, that’s not the case. Instead, a relatively small number of tools form the common core of much econometric modeling. We will focus on those underlying core principles.

1.2 Types of Recorded Economic Data

Several aspects of economic data will concern us frequently. One issue is whether the data are continuous or binary. Continuous data take values on a continuum, as for example with GDP growth, which in principle can take any value in the real numbers. Binary data, in contrast, take just two values, as with a 0-1 indicator for whether or not someone purchased a particular product during the last month.

Another issue is whether the data are recorded over time, over space, or some combination of the two. Time series data are recorded over time, as


for example with U.S. GDP, which is measured once per quarter. A GDP dataset might contain data for, say, 1960 to the present. Cross sectional data, in contrast, are recorded over space (at a point in time), as with yesterday’s closing stock price for each of the U.S. S&P 500 firms. The data structures can be blended, as for example with a time series of cross sections. If, moreover, the cross-sectional units are identical over time, we speak of panel data, or longitudinal data. An example would be the daily closing stock price for each of the U.S. S&P 500 firms, recorded over each of the last 30 days.
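To make these distinctions concrete, here is a minimal sketch in Python (using the pandas library) of how time-series, cross-section, and panel data might be stored. The numbers and ticker names are made up for illustration and are not taken from the book.

```python
import pandas as pd

# Time series: one variable for one unit, indexed by time
# (e.g., quarterly GDP growth in percent; values are made up).
gdp_growth = pd.Series(
    [2.1, 1.8, 2.4, 2.0],
    index=pd.period_range("2017Q1", periods=4, freq="Q"),
    name="gdp_growth",
)

# Cross section: many units at a single point in time
# (e.g., yesterday's closing price for a few firms; values are made up).
closing_prices = pd.Series({"AAA": 101.2, "BBB": 47.9, "CCC": 12.3}, name="close")

# Panel (longitudinal) data: the same units observed repeatedly over time,
# stored here in "long" form with a (firm, date) MultiIndex.
panel = pd.DataFrame(
    {
        "firm": ["AAA", "AAA", "BBB", "BBB"],
        "date": pd.to_datetime(["2018-01-02", "2018-01-03"] * 2),
        "close": [101.2, 102.0, 47.9, 48.3],
    }
).set_index(["firm", "date"])

print(gdp_growth, closing_prices, panel, sep="\n\n")
```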

1.3 Online Information and Data

Figure 1.1: Resources for Economists Web Page


Much useful information is available on the web. The best way to learn about what’s out there is to spend a few hours searching the web for whatever interests you. Here we mention just a few key “must-know” sites. Resources for Economists, maintained by the American Economic Association, is a fine portal to almost anything of interest to economists. (See Figure 1.1.) It contains hundreds of links to data sources, journals, professional organizations, and so on. FRED (Federal Reserve Economic Data) is a tremendously convenient source for economic data. The National Bureau of Economic Research site has data on U.S. business cycles, and the Real-Time Data Research Center at the Federal Reserve Bank of Philadelphia has real-time vintage macroeconomic data. Finally, check out Quandl, which provides access to millions of data series on the web.
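As a small illustration, FRED series can be pulled directly into Python with the pandas-datareader package. This is only a sketch under my own assumptions (the book's examples use Eviews, and the FRED series code "GS1" shown below, which I take to be the 1-year Treasury constant-maturity yield, should be verified on the FRED site).

```python
import datetime

import pandas_datareader.data as web

# Download a monthly interest-rate series from FRED.
# "GS1" is assumed here to be the 1-year Treasury constant-maturity yield.
start = datetime.datetime(1960, 1, 1)
end = datetime.datetime(2018, 4, 1)

gs1 = web.DataReader("GS1", "fred", start, end)

print(gs1.head())      # first few observations
print(gs1.describe())  # simple summary statistics
```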

Figure 1.2: Eviews Homepage


Figure 1.3: Stata Homepage

1.4 Software

Econometric software tools are widely available. One of the best high-level environments is Eviews, a modern object-oriented environment with extensive time series, modeling and forecasting capabilities. (See Figure 1.2.) It implements almost all of the methods described in this book, and many more. Eviews reflects a balance of generality and specialization that makes it ideal for the sorts of tasks that will concern us, and most of the examples in this book are done using it. If you feel more comfortable with another package, however, that’s fine – none of our discussion is wed to Eviews in any way, and most of our techniques can be implemented in a variety of packages.

Eviews has particular strength in time series environments. Stata is a similarly good package with particular strength in cross sections and panels. (See Figure 1.3.) Eviews and Stata are examples of very high-level modeling environments.


Figure 1.4: R Homepage

If you go on to more advanced econometrics, you’ll probably want also to have available lower-level (“mid-level”) environments in which you can quickly program, evaluate and apply new tools and techniques. R is one very powerful and popular such environment, with special strengths in modern statistical methods and graphical data analysis. (See Figure 1.4.) Other notable such environments include Python (see Figure 1.5) and Julia.

1.5 Tips on How to use this book

As you navigate through the book, keep the following in mind.

• Hyperlinks to internal items (table of contents, index, footnotes, etc.) appear in red.


Figure 1.5: Python Homepage

• Hyperlinks to references appear in green.

• Hyperlinks to external items (web pages, video, etc.) appear in cyan.

• Key concepts appear in bold, and they also appear in the (hyperlinked) index.

• Many figures are clickable to reach related material, as are, for example, all figures in this chapter.

• Most chapters contain at least one extensive empirical example in the “Econometrics in Action” section.

• The end-of-chapter “Exercises, Problems and Complements” sections are of central importance and should be studied carefully. Exercises are generally straightforward checks of your understanding. Problems,


in contrast, are generally significantly more involved, whether analytically or computationally. Complements generally introduce important auxiliary material not covered in the main text.

1.6 Exercises, Problems and Complements

1. (No empirical example is definitive)

Recall that, as mentioned in the text, most chapters contain at least one extensive empirical example. At the same time, those examples should not be taken as definitive or complete treatments – there is no such thing. A good idea is to think of the implicit “Problem 0” at the end of each chapter as “Critique the empirical modeling in this chapter, obtain the relevant data, and produce a superior analysis”.

2. (Nominal, ordinal, interval and ratio data)

We emphasized time series, cross-section and panel data, whether continuous or discrete, but there are other complementary categorizations. In particular, distinctions are often made among nominal data, ordinal data, interval data, and ratio data. Which are most common and useful in economics and related fields, and why?

3. (Software differences and bugs: caveat emptor)

Be warned: no software is perfect. In fact, all software is highly imperfect. The results obtained when modeling in different software environments may differ – sometimes a little and sometimes a lot – for a variety of reasons. The details of implementation may differ across packages, for example, and small differences in details can sometimes produce large differences in results. Hence, it is important that you understand precisely what your software is doing (insofar as possible, as some software documentation is more complete than others). And


of course, quite apart from correctly-implemented differences in details, deficient implementations can and do occur: there is no such thing as bug-free software.

1.7 Notes

For a compendium of econometric and statistical software, see the software links site, maintained by Marius Ooms at the Econometrics Journal. R is available for free as part of a massive and highly-successful opensource project. RStudio provides a fine R working environment, and, like R, it’s free. A good R tutorial, first given on Coursera and then moved to YouTube, is here. R-bloggers is a massive blog with all sorts of information about all things R. Finally, Quandl has a nice R interface. Python and Julia are also free.

Chapter 2

Graphics and Graphical Style

It’s almost always a good idea to begin an econometric analysis with graphical data analysis. When compared to the modern array of econometric methods, graphical analysis might seem trivially simple, perhaps even so simple as to be incapable of delivering serious insights. Such is not the case: in many respects the human eye is a far more sophisticated tool for data analysis and modeling than even the most sophisticated statistical techniques. Put differently, graphics is a sophisticated technique. That’s certainly not to say that graphical analysis alone will get the job done – certainly, graphical analysis has limitations of its own – but it’s usually the best place to start.

With that in mind, we introduce in this chapter some simple graphical techniques, and we consider some basic elements of graphical style.

2.1 Simple Techniques of Graphical Analysis

We will segment our discussion into two parts: univariate (one variable) and multivariate (more than one variable). Because graphical analysis “lets the data speak for themselves,” it is most useful when the dimensionality of the data is low; that is, when dealing with univariate or low-dimensional multivariate data.


2.1.1 Univariate Graphics

First consider time series data. Graphics is used to reveal patterns in time series data. The great workhorse of univariate time series graphics is the simple time series plot, in which the series of interest is graphed against time. In the top panel of Figure 2.1, for example, we present a time series plot of a 1-year Government bond yield over approximately 500 months. A number of important features of the series are apparent. Among other things, its movements appear sluggish and persistent, it appears to trend gently upward until roughly the middle of the sample, and it appears to trend gently downward thereafter.

The bottom panel of Figure 2.1 provides a different perspective; we plot the change in the 1-year bond yield, which highlights volatility fluctuations. Interest rate volatility is very high in mid-sample.

Univariate graphical techniques are also routinely used to assess distributional shape, whether in time series or cross sections. A histogram, for example, provides a simple estimate of the probability density of a random variable. The observed range of variation of the series is split into a number of segments of equal length, and the height of the bar placed at a segment is the percentage of observations falling in that segment.[1] In Figure 2.2 we show a histogram for the 1-year bond yield.
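As a concrete illustration of these two univariate tools, the sketch below draws a series in levels and changes and then its histogram. It is written in Python with matplotlib and assumes a hypothetical CSV file of monthly 1-year bond yields with a column named "y1"; the book's own figures were produced in Eviews.

```python
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical input file: one column "y1" of monthly 1-year bond yields,
# indexed by date. Replace with whatever data you actually have.
yields = pd.read_csv("bond_yield_1yr.csv", index_col=0, parse_dates=True)
y1 = yields["y1"]

fig, (ax_level, ax_change) = plt.subplots(2, 1, figsize=(8, 6), sharex=True)
ax_level.plot(y1.index, y1.values)          # time series plot: levels
ax_level.set_title("1-Year Bond Yield, Levels")
ax_change.plot(y1.index, y1.diff().values)  # time series plot: changes
ax_change.set_title("1-Year Bond Yield, Changes")

fig2, ax_hist = plt.subplots(figsize=(6, 4))
ax_hist.hist(y1.dropna(), bins=30, density=True)  # histogram as a density estimate
ax_hist.set_title("Histogram of 1-Year Bond Yield")

plt.show()
```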

2.1.2 Multivariate Graphics

When two or more variables are available, the possibility of relations between the variables becomes important, and we use graphics to uncover the existence and nature of such relationships. We use relational graphics to display relationships and flag anomalous observations.

[1] In some software packages (e.g., Eviews), the height of the bar placed at a segment is simply the number, not the percentage, of observations falling in that segment. Strictly speaking, such histograms are not density estimators, because the “area under the curve” doesn’t add to one, but they are equally useful for summarizing the shape of the density.


Figure 2.1: 1-Year Government Bond Yield, Levels and Changes


Figure 2.2: Histogram of 1-Year Government Bond Yield

You already understand the idea of a bivariate scatterplot.[2] In Figure 2.3, for example, we show a bivariate scatterplot of the 1-year U.S. Treasury bond rate vs. the 10-year U.S. Treasury bond rate, 1960.01-2005.03. The scatterplot indicates that the two move closely together; in particular, they are positively correlated.

[2] Note that “connecting the dots” is generally not useful in scatterplots. This contrasts to time series plots, for which connecting the dots is fine and is typically done.

Thus far all our discussion of multivariate graphics has been bivariate. That’s because graphical techniques are best-suited to low-dimensional data. Much recent research has been devoted to graphical techniques for high-dimensional data, but all such high-dimensional graphical analysis is subject to certain inherent limitations.

One simple and popular scatterplot technique for high-dimensional data – and one that’s been around for a long time – is the scatterplot matrix, or multiway scatterplot. The scatterplot matrix is just the set of all possible bivariate scatterplots, arranged in the upper right or lower left part of a matrix to facilitate comparisons. If we have data on N variables, there are $(N^2 - N)/2$ such pairwise scatterplots.


Figure 2.3: Bivariate Scatterplot, 1-Year and 10-Year Government Bond Yields

In Figure 2.4, for example, we show a

scatterplot matrix for the 1-year, 10-year, 20-year, and 30-year U.S. Treasury Bond rates, 1960.01-2005.03. There are a total of six pairwise scatterplots, and the multiple comparison makes clear that although the interest rates are closely related in each case, with a regression slope of approximately one, the relationship is more precise in some cases (e.g., 20- and 30-year rates) than in others (e.g., 1- and 30-year rates).
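A scatterplot matrix is a one-liner in most environments. The sketch below uses pandas' scatter_matrix on a hypothetical DataFrame of bond yields at four maturities (the file name and column names are assumptions for illustration), and it checks the (N² − N)/2 count of distinct pairwise plots.

```python
import matplotlib.pyplot as plt
import pandas as pd
from pandas.plotting import scatter_matrix

# Hypothetical input: columns "y1", "y10", "y20", "y30" holding yields at
# four maturities, indexed by date.
yields = pd.read_csv("bond_yields.csv", index_col=0, parse_dates=True)
cols = ["y1", "y10", "y20", "y30"]

n = len(cols)
print("distinct pairwise scatterplots:", (n**2 - n) // 2)  # (N^2 - N)/2 = 6

# All pairwise scatterplots, with histograms on the diagonal.
scatter_matrix(yields[cols], diagonal="hist", figsize=(8, 8))
plt.show()
```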

2.1.3 Summary and Extension

Let’s summarize and extend what we’ve learned about the power of graphics:

a. Graphics helps us summarize and reveal patterns in univariate time-series data. Time-series plots are helpful for learning about many features of time-series data, including trends, seasonality, cycles, the nature and location of any aberrant observations (“outliers”), structural breaks, etc.

b. Graphics helps us summarize and reveal patterns in univariate cross-section data. Histograms are helpful for learning about distributional shape.


Figure 2.4: Scatterplot Matrix, 1-, 10-, 20- and 30-Year Government Bond Yields


c. Graphics helps us identify relationships and understand their nature, in both multivariate time-series and multivariate cross-section environments. The key graphical device is the scatterplot, which can help us to begin answering many questions, including: Does a relationship exist? Is it linear or nonlinear? Are there outliers?

d. Graphics helps us identify relationships and understand their nature in panel data. One can, for example, examine cross-sectional histograms across time periods, or time series plots across cross-sectional units.

e. Graphics facilitates and encourages comparison of different pieces of data via multiple comparisons. The scatterplot matrix is a classic example of a multiple comparison graphic.

We might add to this list another item of tremendous relevance in our age of big data: Graphics enables us to summarize and learn from huge datasets.

2.2 Elements of Graphical Style

In the preceding sections we emphasized the power of graphics and introduced various graphical tools. As with all tools, however, graphical tools can be used effectively or ineffectively, and bad graphics can be far worse than no graphics. In this section you’ll learn what makes good graphics good and bad graphics bad. In doing so you’ll learn to use graphical tools effectively.

Bad graphics is like obscenity: it’s hard to define, but you know it when you see it. Conversely, producing good graphics is like good writing: it’s an iterative, trial-and-error procedure, and very much an art rather than a science. But that’s not to say that anything goes; as with good writing, good graphics requires discipline. There are at least three keys to good graphics:

a. Know your audience, and know your goals.


b. Show the data, and only the data, within the bounds of reason.

c. Revise and edit, again and again (and again). Graphics produced using software defaults are almost never satisfactory.

We can use a number of devices to show the data. First, avoid distorting the data or misleading the viewer, in order to reveal true data variation rather than spurious impressions created by design variation. Thus, for example, avoid changing scales in midstream, use common scales when performing multiple comparisons, and so on. The sizes of effects in graphics should match their size in the data.

Second, minimize, within reason, non-data ink (ink used to depict anything other than data points). Avoid chartjunk (elaborate shadings and grids that are hard to decode, superfluous decoration including spurious 3-D perspective, garish colors, etc.).

Third, choose a graph’s aspect ratio (the ratio of the graph’s height, h, to its width, w) to maximize pattern revelation. A good aspect ratio often makes the average absolute slope of line segments connecting the data points approximately equal 45 degrees. This procedure is called banking to 45 degrees.

Fourth, maximize graphical data density. Good graphs often display lots of data, indeed so much data that it would be impossible to learn from them in tabular form.[3] Good graphics can present a huge amount of data in a concise and digestible form, revealing facts and prompting new questions, at both “micro” and “macro” levels.[4] Graphs can often be shrunken greatly with no loss, as with sparklines (tiny graphics, typically time-series plots, meant to flow with text) and the sub-plots in multiple comparison graphs, increasing the amount of data ink per unit area.

[3] Conversely, for small amounts of data, a good table may be much more appropriate and informative than a graphic.

[4] Note how maximization of graphical data density complements our earlier prescription to maximize the ratio of data ink to non-data ink, which deals with maximizing the relative amount of data ink. High data density involves maximizing as well the absolute amount of data ink.
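As a small illustration of banking to 45 degrees, the function below computes the height-to-width ratio that makes the average absolute slope of the connecting segments equal to one (i.e., 45 degrees). This is a simple average-absolute-slope variant written in Python for this sketch, not a procedure taken from the book, and the example data are made up.

```python
import numpy as np

def banked_aspect_ratio(x, y):
    """Return height/width so the average absolute segment slope is ~45 degrees."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    dx, dy = np.diff(x), np.diff(y)
    mean_abs_slope = np.abs(dy / dx).mean()
    # Physical slope = data slope * (x-range / y-range) * (height / width);
    # choose height/width so the average physical slope equals 1.
    return (y.max() - y.min()) / ((x.max() - x.min()) * mean_abs_slope)

# Example with made-up monthly data: a suggested aspect ratio for a noisy trend.
t = np.arange(120.0)
series = 0.05 * t + np.random.default_rng(1).normal(scale=0.5, size=t.size)
print("suggested height/width:", banked_aspect_ratio(t, series))
```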

2.3 U.S. Hourly Wages

We now begin our examination of CPS wage data, which we will use extensively. Here we use the 1995 CPS hourly wage data; for a detailed description see Appendix B.

Figure 2.5 has four panels; consider first the left panels. In the upper left we show a histogram of hourly wage for the 1000+ people in the dataset. The distribution is clearly skewed right, with a mean around $12/hour. In the lower left panel we show a density estimate (basically just a smoothed histogram) together with the best fitting normal distribution (a normal with mean and variance equal to the sample mean and sample variance of the wage data). Clearly the normal fits poorly.

The right panels of Figure 2.5 have the same structure, except that we now work with (natural) logarithms of the wages rather than the original “raw” wage data.[5] The log is often used as a “symmetrizing” transformation for data with a right-skewed distribution, because the log transformation compresses things, pulling in long right tails. Sometimes taking logs can even produce approximate normality.[6] Inspection of the log wage histogram in the upper right panel reveals that the log wage does indeed appear more symmetrically distributed, and comparison of the density estimate to the best-fitting normal in the lower-right panel indicates approximate normality of the log wage.

[5] Whenever we say “log” in this book, we mean “natural log”.

[6] Recall the famous lognormal density: A random variable x is defined to be lognormal if log(x) is normal. Hence if the wage data is approximately lognormally distributed, then log(wage) will be approximately normal. Of course lognormality may or may not hold – whether data are lognormal is entirely an empirical matter.


Figure 2.5: Distributions of Wages and Log Wages
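A sketch of how one might reproduce the flavor of Figure 2.5 in Python appears below. It assumes a hypothetical CSV of CPS wages with a column named "wage"; as in the text, the normal overlay simply uses the sample mean and standard deviation of each series.

```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scipy.stats import norm

# Hypothetical input: a column "wage" of hourly wages, one row per person.
wage = pd.read_csv("cps_wages.csv")["wage"].dropna()
lwage = np.log(wage)

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax, series, label in [(axes[0], wage, "Wage"), (axes[1], lwage, "Log Wage")]:
    ax.hist(series, bins=40, density=True, alpha=0.6)
    # Best-fitting normal: mean and standard deviation taken from the data.
    grid = np.linspace(series.min(), series.max(), 200)
    ax.plot(grid, norm.pdf(grid, loc=series.mean(), scale=series.std()))
    ax.set_title(label)

plt.show()
```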

2.4 Concluding Remarks

Ultimately good graphics proceeds just like good writing, and if good writing is good thinking, then so too is good graphics good thinking. And good writing is just good thinking. So the next time you hear someone pronounce ignorantly along the lines of “I don’t like to write; I like to think,” rest assured, both his writing and his thinking are likely poor. Indeed many of the classic prose style references contain many insights that can be adapted to improve graphics (even if Strunk and White would view as worthless filler my use of “indeed” earlier in this sentence (“non-thought ink?”)). So when doing graphics, just as when writing, think. Then revise and edit, revise and edit, ...

2.5 Exercises, Problems and Complements

1. (NBER recession bars: A useful graphical device)

In U.S. time-series situations it’s often useful to superimpose “NBER Recession Bars” on time-series plots, to help put things in context. You can find the dates of NBER expansions and contractions at http://www.nber.org/cycles.html.

2. (Empirical warm-up)

(a) Obtain time series of quarterly real GDP and quarterly real consumption for a country of your choice. Provide details.
(b) Display time-series plots and a scatterplot (put consumption on the vertical axis).
(c) Convert your series to growth rates in percent, and again display time series plots.
(d) From now on use the growth rate series only.
(e) For each series, provide summary statistics (e.g., mean, standard deviation, range, skewness, kurtosis, ...).
(f) For each series, perform t-tests of the null hypothesis that the population mean growth rate is 2 percent.
(g) For each series, calculate 90 and 95 percent confidence intervals for the population mean growth rate. For each series, which interval is wider, and why?
(h) Regress consumption on GDP. Discuss.

3. (Simple vs. partial correlation)

The set of pairwise scatterplots that comprises a multiway scatterplot provides useful information about the joint distribution of the set of variables, but it’s incomplete information and should be interpreted with care. A pairwise scatterplot summarizes information regarding the simple correlation between, say, x and y. But x and y may appear highly related in a pairwise scatterplot even if they are in fact unrelated, if each depends on a third variable, say z. The crux of the problem is that there’s no way in a pairwise scatterplot to examine the correlation between x and y controlling for z, which we call partial correlation. When interpreting a scatterplot matrix, keep in mind that the pairwise scatterplots provide information only on simple correlation. (A small numerical illustration appears in the code sketch at the end of these exercises.)

4. (Graphics and Big Data)

Another aspect of the power of statistical graphics comes into play in the analysis of large datasets, so it’s increasingly more important in our era of “Big Data”: Graphics enables us to present a huge amount of data in a small space, and hence helps to make huge datasets coherent. We might, for example, have supermarket-scanner data, recorded in five-minute intervals for a year, on the quantities of goods sold in each of four food categories – dairy, meat, grains, and vegetables. Tabular or similar analysis of such data is simply out of the question, but graphics is still straightforward and can reveal important patterns.

5. (Color)

There is a temptation to believe that color graphics is always better than grayscale. That’s often far from the truth, and in any event, color is typically best used sparingly.

a. Color can be (and often is) chartjunk. How and why?
b. Color has no natural ordering, despite the evident belief in some quarters that it does. What are the implications for “heat map” graphics? Might shades of a single color (e.g., from white or light gray through black) be better?
c. On occasion, however, color can aid graphics both in showing the data and in appealing to the viewer. One key “show the data” use is in annotation. Can you think of others? What about uses in appealing to the viewer?
d. Keeping in mind the principles of graphical style, formulate as many guidelines for color graphics as you can.

Table 2.1: Yield Statistics

Maturity (Months)    ȳ      σ̂_y    ρ̂_y(1)    ρ̂_y(12)
6                    4.9    2.1    0.98      0.64
12                   5.1    2.1    0.98      0.65
24                   5.3    2.1    0.97      0.65
36                   5.6    2.0    0.97      0.65
60                   5.9    1.9    0.97      0.66
120                  6.5    1.8    0.97      0.68

Notes: We present descriptive statistics for end-of-month yields at various maturities. We show sample mean, sample standard deviation, and first- and twelfth-order sample autocorrelations. Data are from the Board of Governors of the Federal Reserve System. The sample period is January 1985 through December 2008.

6. (Principles of tabular style)

The power of tables for displaying data and revealing patterns is very limited compared to that of graphics, especially in this age of Big Data. Nevertheless, tables are of course sometimes helpful, and there are principles of tabular style, just as there are principles of graphical style. Compare, for example, the nicely-formatted Table 2.1 (no need to worry about what it is or from where it comes...) to what would be produced by a spreadsheet such as Excel. Try to formulate a set of principles of tabular style. (Hint: One principle is that vertical lines should almost never appear in tables, as in the table above.)

7. (More on graphical style: Appeal to the viewer)


Other graphical guidelines help us appeal to the viewer. First, use clear and modest type, avoid mnemonics and abbreviations, and use labels rather than legends when possible. Second, make graphics self-contained; a knowledgeable reader should be able to understand your graphics without reading pages of accompanying text. Third, as with our prescriptions for showing the data, avoid chartjunk.

8. (The “golden” aspect ratio, visual appeal, and showing the data)

A time-honored approach to visual graphical appeal is use of an aspect ratio such that height is to width as width is to the sum of height and width. This turns out to correspond to height approximately sixty percent of width, the so-called “golden ratio.” Graphics that conform to the golden ratio, with height a bit less than two thirds of width, are visually appealing. Other things the same, it’s a good idea to keep the golden ratio in mind when producing graphics. Other things are not always the same, however. In particular, the golden aspect ratio may not be the one that maximizes pattern revelation (e.g., by banking to 45 degrees).

9. (Graphics, non-profit and for-profit)

What’s good?

sparklines? Check out www.zevross.com.

What’s bad?

Can you use it to do

2.6 Notes

R implements a variety of sophisticated graphical techniques and in many respects represents the cutting edge of statistical graphics software. ggplot2 is a key R package that provides a broad catalog of graphics capabilities; see www.ggplot2.org. It implements the grammar of graphics developed by Leland Wilkinson, which allows you to produce highly customized graphics in a modular fashion. This grammar leads to a slightly unusual syntax, which must be learned, but once learned you can do almost anything. (The simple plot commands in R allow for some customization and have a shorter learning curve, but they're not as powerful.) ggplot2 documentation is at www.cran.r-project.org/web/packages/ggplot2/ggplot2.pdf. A helpful "cheatsheet" is at www.zevross.com/blog/2014/08/04/beautiful-plotting-in #change-the-grid-lines-panel.grid.major.
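For concreteness, here is a minimal ggplot2 sketch, assuming a data frame wages with columns LWAGE and EDUC (hypothetical names; any data frame with a numeric response and regressor would do), followed by the shorter but less modular base-R equivalent.

# Grammar-of-graphics sketch: scatter of LWAGE vs. EDUC with a fitted OLS line.
library(ggplot2)
p <- ggplot(wages, aes(x = EDUC, y = LWAGE)) +   # map data to aesthetics
  geom_point(alpha = 0.3) +                      # layer 1: the data
  geom_smooth(method = "lm", se = FALSE) +       # layer 2: fitted least-squares line
  labs(x = "Education (years)", y = "Log Wage")  # self-contained labels
print(p)

# Base-R equivalent:
plot(wages$EDUC, wages$LWAGE, xlab = "Education (years)", ylab = "Log Wage")
abline(lm(LWAGE ~ EDUC, data = wages))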


2.7 Graphics Legend: Edward Tufte

Figure 2.6: Tufte Teaching, with a First Edition Book by Galileo

This chapter has been heavily influenced by Tufte (1983), as are all modern discussions of statistical graphics.7 Tufte's book is an insightful and entertaining masterpiece on graphical style, and I recommend it enthusiastically. Be sure to check out his web page and other books, which go far beyond his 1983 work.

7

Photo details follow. Date: 7 February 2011. Source: http://www.flickr.com/photos/roebot/5429634725/in/set-72157625883623225. Author: Aaron Fulkerson. Originally posted to Flickr by Roebot at http://flickr.com/photos/40814689@N00/5429634725. Reviewed on 24 May 2011 by the FlickreviewR robot and confirmed to be licensed under the terms of the cc-by-sa-2.0. Licensed under the Creative Commons Attribution-Share Alike 2.0 Generic license.

Part II Cross-Sections


Chapter 3

Regression Under the Ideal Conditions

You have already been introduced to probability and statistics, but chances are that you could use a bit of review before plunging into regression, so begin by studying Appendix A. Be warned, however: it is no substitute for a full-course introduction to probability and statistics, which you should have had already. Instead it is intentionally much narrower, reviewing some material related to moments of random variables, which we will use repeatedly. It also introduces notation, and foreshadows certain ideas, that we develop subsequently in greater detail.

3.1 Preliminary Graphics

In this chapter we’ll be working with cross-sectional data on log wages, education and experience. We already examined the distribution of log wages. For convenience we reproduce it in Figure 3.1, together with the distributions of the new data on education and experience.


Figure 3.1: Distributions of Log Wage, Education and Experience

3.2 Regression as Curve Fitting

3.2.1 Bivariate, or Simple, Linear Regression

Suppose that we have data on two variables, y and x, as in Figure 3.2, and suppose that we want to find the linear function of x that best fits y, where "best fits" means that the sum of squared (vertical) deviations of the data points from the fitted line is as small as possible. When we "run a regression," or "fit a regression line," that's what we do. The estimation strategy is called least squares, or sometimes "ordinary least squares" to distinguish it from fancier versions that we'll introduce later.

The specific data that we show in Figure 3.2 are log wages (LWAGE, y) and education (EDUC, x) for a random sample of nearly 1500 people, as described in Appendix B.

Let us elaborate on the fitting of regression lines, and the reason for the name "least squares." When we run the regression, we use a computer to fit the line by solving the problem

$$\min_{\beta} \sum_{t=1}^{T} (y_t - \beta_1 - \beta_2 x_t)^2,$$

where $\beta$ is shorthand notation for the set of two parameters, $\beta_1$ and $\beta_2$. We denote the set of fitted parameters by $\hat{\beta}$, and its elements by $\hat{\beta}_1$ and $\hat{\beta}_2$.

It turns out that the $\beta_1$ and $\beta_2$ values that solve the least squares problem have well-known mathematical formulas. (More on that later.) We can use a computer to evaluate the formulas, simply, stably and instantaneously. The fitted values are $\hat{y}_t = \hat{\beta}_1 + \hat{\beta}_2 x_t$, $t = 1, ..., T$. The residuals are the difference between actual and fitted values, $e_t = y_t - \hat{y}_t$, $t = 1, ..., T$.


Figure 3.2: (Log Wage, Education) Scatterplot


Figure 3.3: (Log Wage, Education) Scatterplot with Superimposed Regression Line

In Figure 3.3, we illustrate graphically the results of regressing LWAGE on EDUC. The best-fitting line slopes upward, reflecting the positive correlation between LWAGE and EDUC.1 Note that the data points don't satisfy the fitted linear relationship exactly; rather, they satisfy it on average. To predict LWAGE for any given value of EDUC, we use the fitted line to find the value of LWAGE that corresponds to the given value of EDUC.

1 Note that use of log wage promotes several desiderata. First, it promotes normality, as we discussed in Chapter 2. Second, it enforces positivity of the fitted wage, because $\widehat{WAGE} = \exp(\widehat{LWAGE})$, and $\exp(x) > 0$ for any $x$.


Numerically, the fitted line is

$$\widehat{LWAGE} = 1.273 + .081\, EDUC.$$

We conclude with a brief comment on notation. A standard cross-section notation for indexing the cross-sectional units is $i = 1, ..., N$. A standard time-series notation for indexing time periods is $t = 1, ..., T$. Much of our discussion will be valid in both cross-section and time-series environments, but still we have to pick a notation. Without loss of generality, we will typically use $t = 1, ..., T$ regardless of whether we're in a cross-section or time-series environment.

3.2.2 Multiple Linear Regression

Everything generalizes to allow for more than one RHS variable. This is called multiple linear regression. Suppose, for example, that we have two RHS variables, $x_2$ and $x_3$. Before, we fit a least-squares line to a two-dimensional data cloud; now we fit a least-squares plane to a three-dimensional data cloud. We use the computer to find the values of $\beta_1$, $\beta_2$, and $\beta_3$ that solve the problem

$$\min_{\beta} \sum_{t=1}^{T} (y_t - \beta_1 - \beta_2 x_{2t} - \beta_3 x_{3t})^2,$$

where $\beta$ denotes the set of three model parameters. We denote the set of estimated parameters by $\hat{\beta}$, with elements $\hat{\beta}_1$, $\hat{\beta}_2$, and $\hat{\beta}_3$. The fitted values are $\hat{y}_t = \hat{\beta}_1 + \hat{\beta}_2 x_{2t} + \hat{\beta}_3 x_{3t}$, and the residuals are $e_t = y_t - \hat{y}_t$, $t = 1, ..., T$.


For our wage data, the fitted model is

$$\widehat{LWAGE} = .867 + .093\, EDUC + .013\, EXPER.$$

Extension to the general multiple linear regression model, with an arbitrary number of right-hand-side (RHS) variables ($K$, including the constant), is immediate. The computer again does all the work. The fitted line is

$$\hat{y}_t = \hat{\beta}_1 + \hat{\beta}_2 x_{2t} + \hat{\beta}_3 x_{3t} + ... + \hat{\beta}_K x_{Kt},$$

which we sometimes write more compactly as

$$\hat{y}_t = \sum_{k=1}^{K} \hat{\beta}_k x_{kt},$$

where $x_{1t} = 1$ for all $t$.

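A minimal R sketch of such a regression, assuming a data frame wages with columns LWAGE, EDUC and EXPER (hypothetical names; the lm command is discussed further in this chapter's Notes):

# Least-squares fit of LWAGE on EDUC and EXPER, assuming a hypothetical
# data frame "wages" with those columns.
fit <- lm(LWAGE ~ EDUC + EXPER, data = wages)
coef(fit)           # estimated intercept and slopes (the beta-hats)
head(fitted(fit))   # fitted values y-hat
head(resid(fit))    # residuals e = y - y-hat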
3.2.3 Onward

Before proceeding, two aspects of what we’ve done so far are worth noting. First, we now have two ways to analyze data and reveal its patterns. One is the graphical scatterplot of Figure 3.2, with which we started, which provides a visual view of the data. The other is the fitted regression line of Figure 3.3, which summarizes the data through the lens of a linear fit. Each approach has its merit, and the two are complements, not substitutes, but note that linear regression generalizes more easily to high dimensions. Second, least squares as introduced thus far has little to do with statistics or econometrics. Rather, it is simply a way of instructing a computer to fit a line to a scatterplot in a way that’s rigorous, replicable and arguably reasonable. We now turn to a probabilistic interpretation.


3.3 Regression as a Probability Model

We work with the full multiple regression model (simple regression is of course a special case). Collect the RHS variables into the vector $x_t$, where $x_t' = (1, x_{2t}, ..., x_{Kt})$.

3.3.1 A Population Model and a Sample Estimator

Thus far we have not postulated a probabilistic model that relates $y_t$ and $x_t$; instead, we simply ran a mechanical regression of $y_t$ on $x_t$ to find the best fit to $y_t$ formed as a linear function of $x_t$. It's easy, however, to construct a probabilistic framework that lets us make statistical assessments about the properties of the fitted line. We assume that $y_t$ is linearly related to an exogenously-determined $x_t$, and we add an independent and identically distributed (iid) zero-mean Gaussian disturbance:

$$y_t = \beta_1 + \beta_2 x_{2t} + ... + \beta_K x_{Kt} + \varepsilon_t$$
$$\varepsilon_t \sim iid\, N(0, \sigma^2),$$

$t = 1, ..., T$. The intercept of the line is $\beta_1$, the slope parameters are the $\beta_i$'s, and the variance of the disturbance is $\sigma^2$.2 Collectively, we call the $\beta$'s the model's parameters. The index $t$ keeps track of time; the data sample begins at some time we've called "1" and ends at some time we've called "$T$", so we write $t = 1, ..., T$. (Or, in cross sections, we index cross-section units by $i$ and write $i = 1, ..., N$.)

Note that in the linear regression model the expected value of $y_t$ conditional upon $x_t$ taking a particular value, say $x_t^*$, is

$$E(y_t | x_t = x_t^*) = \beta_1 + \beta_2 x_{2t}^* + ... + \beta_K x_{Kt}^*.$$

2 We speak of the regression intercept and the regression slope.


That is, the regression function is the conditional expectation of $y_t$. We assume that the linear model sketched is true in population; that is, it is the data-generating process (DGP). But in practice, of course, we don't know the values of the model's parameters, $\beta_1, \beta_2, ..., \beta_K$ and $\sigma^2$. Our job is to estimate them using a sample of data from the population. We estimate the $\beta$'s precisely as before, using the computer to solve $\min_{\beta} \sum_{t=1}^{T} \varepsilon_t^2$.

3.3.2 Notation, Assumptions and Results: The Ideal Conditions

The discussion thus far was intentionally a bit loose, focusing on motivation and intuition. Let us now be more precise about what we assume and what results we obtain.

A Bit of Matrix Notation

It will be useful to arrange all RHS variables into a matrix $X$. $X$ has $K$ columns, one for each regressor. Inclusion of a constant in a regression amounts to including a special RHS variable that is always 1. We put that in the leftmost column of the $X$ matrix, which is just ones. The other columns contain the data on the other RHS variables, over the cross section in the cross-sectional case, $i = 1, ..., N$, or over time in the time-series case, $t = 1, ..., T$. With no loss of generality, suppose that we're in a time-series situation; then notationally $X$ is a $T \times K$ matrix,

$$X = \begin{pmatrix} 1 & x_{21} & x_{31} & \cdots & x_{K1} \\ 1 & x_{22} & x_{32} & \cdots & x_{K2} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_{2T} & x_{3T} & \cdots & x_{KT} \end{pmatrix}.$$

One reason that the $X$ matrix is useful is because the regression model


can be written very compactly using it. We have written the model as

$$y_t = \beta_1 + \beta_2 x_{2t} + ... + \beta_K x_{Kt} + \varepsilon_t, \quad t = 1, ..., T.$$

Alternatively, stack $y_t$, $t = 1, ..., T$ into the vector $y$, where $y' = (y_1, y_2, ..., y_T)$, stack $\beta_j$, $j = 1, ..., K$ into the vector $\beta$, where $\beta' = (\beta_1, \beta_2, ..., \beta_K)$, and stack $\varepsilon_t$, $t = 1, ..., T$, into the vector $\varepsilon$, where $\varepsilon' = (\varepsilon_1, \varepsilon_2, ..., \varepsilon_T)$. Then we can write the complete model over all observations as

$$y = X\beta + \varepsilon. \tag{3.1}$$

Our requirement that $\varepsilon_t \sim iid\, N(0, \sigma^2)$ becomes

$$\varepsilon \sim N(0, \sigma^2 I). \tag{3.2}$$

This concise representation is very convenient. Indeed, representation (3.1)-(3.2) is crucially important, not simply because it is concise, but because the various assumptions that we need to make to get various statistical results are most naturally and simply stated on $X$ and $\varepsilon$ in equation (3.1). We now proceed to discuss such assumptions.

Assumptions: The Ideal Conditions (IC)

• The data-generating process (DGP) is

$$y_t = \beta_1 + \beta_2 x_{2t} + ... + \beta_K x_{Kt} + \varepsilon_t$$
$$\varepsilon_t \sim iid\, N(0, \sigma^2),$$

and the fitted model matches it exactly.

• $\varepsilon_t$ and $x_{it}$ are independent, for all $i, t$.


The IC of section 3.3.2 have many important sub-conditions embedded. For example:

1. The fitted model is correctly specified
2. The disturbances are Gaussian
3. The coefficients ($\beta$'s) are fixed (whether over space or time, depending on whether we're working in a time-series or cross-section environment)
4. The relationship is linear
5. The $\varepsilon_t$'s have constant variance $\sigma^2$
6. The $\varepsilon_t$'s are uncorrelated

The IC are surely heroic in many contexts, and much of econometrics is devoted to detecting and dealing with various IC failures. But before we worry about IC failures, it's invaluable first to understand what happens when they hold.

Results

The least squares estimator is

$$\hat{\beta}_{LS} = (X'X)^{-1} X'y,$$

and under the IC it is MVUE and normally distributed with covariance matrix $\sigma^2 (X'X)^{-1}$. We write

$$\hat{\beta}_{LS} \sim N\left(\beta, \sigma^2 (X'X)^{-1}\right).$$

We estimate the covariance matrix $\sigma^2 (X'X)^{-1}$ using $s^2 (X'X)^{-1}$, where $s^2 = \sum_{t=1}^{T} e_t^2 / (T - K)$.
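These formulas are easy to verify numerically. Below is a small R sketch that computes the least-squares estimator and its estimated covariance matrix directly from the matrix formulas, using simulated data (the data-generating values are illustrative assumptions, not anything from the wage dataset):

# Direct computation of beta-hat = (X'X)^{-1} X'y and its estimated covariance,
# using simulated data with illustrative (hypothetical) parameter values.
set.seed(1)
Tn   <- 200                                  # sample size T
X    <- cbind(1, rnorm(Tn), rnorm(Tn))       # intercept plus two regressors
beta <- c(1, 0.5, -0.3)
y    <- X %*% beta + rnorm(Tn)

beta_hat <- solve(t(X) %*% X) %*% t(X) %*% y # (X'X)^{-1} X'y
e        <- y - X %*% beta_hat               # residuals
s2       <- sum(e^2) / (Tn - ncol(X))        # s^2, degrees-of-freedom corrected
vcov_hat <- s2 * solve(t(X) %*% X)           # estimated covariance matrix
beta_hat
sqrt(diag(vcov_hat))                         # standard errors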


Figure 3.4: Regression Output

3.4 A Wage Equation

Now let's do more than a simple graphical analysis of the regression fit. Instead, let's look in detail at the computer output, which we show in Figure 3.4 for a regression of LWAGE on an intercept, EDUC and EXPER. We run regressions dozens of times in this book, and the output format and interpretation are always the same, so it's important to get comfortable with it quickly. The output is in EViews format. Other software will produce more-or-less the same information, which is fundamental and standard.

Before proceeding, note well that the IC may not be satisfied for this dataset, yet we will proceed assuming that they are satisfied. As we proceed through this book, we will confront violations of the various assumptions – indeed that's what econometrics is largely about – and we'll return repeatedly to this dataset and others. But we must begin at the beginning.

The printout begins by reminding us that we're running a least-squares (LS) regression, and that the left-hand-side (LHS) variable is the log wage


(LWAGE), using a total of 1323 observations. Next comes a table listing each RHS variable together with four statistics. The RHS variables EDUC and EXPER are education and experience, and the C variable refers to the earlier-mentioned intercept. The C variable always equals one, so the estimated coefficient on C is the estimated intercept of the regression line.3

The four statistics associated with each RHS variable are the estimated coefficient ("Coefficient"), its standard error ("Std. Error"), a t statistic, and a corresponding probability value ("Prob."). The standard errors of the estimated coefficients indicate their likely sampling variability, and hence their reliability. The estimated coefficient plus or minus one standard error is approximately a 68% confidence interval for the true but unknown population parameter, and the estimated coefficient plus or minus two standard errors is approximately a 95% confidence interval, assuming that the estimated coefficient is approximately normally distributed, which will be true if the regression disturbance is normally distributed or if the sample size is large. Thus large coefficient standard errors translate into wide confidence intervals.

Each t statistic provides a test of the hypothesis of variable irrelevance: that the true but unknown population parameter is zero, so that the corresponding variable contributes nothing to the forecasting regression and can therefore be dropped. One way to test variable irrelevance, with, say, a 5% probability of incorrect rejection, is to check whether zero is outside the 95% confidence interval for the parameter. If so, we reject irrelevance. The t statistic is just the ratio of the estimated coefficient to its standard error, so if zero is outside the 95% confidence interval, then the t statistic must be bigger than two in absolute value. Thus we can quickly test irrelevance at the 5% level by checking whether the t statistic is greater than two in absolute value.4

Finally, associated with each t statistic is a probability value, which is the probability of getting a value of the t statistic at least as large in absolute value as the one actually obtained, assuming that the irrelevance hypothesis is true. Hence if a t statistic were two, the corresponding probability value would be approximately .05. The smaller the probability value, the stronger the evidence against irrelevance. There's no magic cutoff, but typically probability values less than 0.1 are viewed as strong evidence against irrelevance, and probability values below 0.05 are viewed as very strong evidence against irrelevance. Probability values are useful because they eliminate the need for consulting tables of the t distribution. Effectively the computer does it for us and tells us the significance level at which the irrelevance hypothesis is just rejected.

Now let's interpret the actual estimated coefficients, standard errors, t statistics, and probability values. The estimated intercept is approximately .867, so that conditional on zero education and experience, our best forecast of the log wage would be .867. Moreover, the intercept is very precisely estimated, as evidenced by the small standard error of .08 relative to the estimated coefficient. An approximate 95% confidence interval for the true but unknown population intercept is .867 ± 2(.08), or [.71, 1.03]. Zero is far outside that interval, so the corresponding t statistic is huge, with a probability value that's zero to four decimal places.

The estimated coefficient on EDUC is .093, and the standard error is again small in relation to the size of the estimated coefficient, so the t statistic is large and its probability value small. The coefficient is positive, so that LWAGE tends to rise when EDUC rises. In fact, the interpretation of the estimated coefficient of .093 is that, holding everything else constant, a one-

3 Sometimes the population coefficient on C is called the constant term, and the regression estimate is called the estimated constant term.

4 If the sample size is small, or if we want a significance level other than 5%, we must refer to a table of critical values of the t distribution. We also note that use of the t distribution in small samples requires an assumption of normally distributed disturbances.


year increase in EDUC will produce a .093 increase in LWAGE.

The estimated coefficient on EXPER is .013. Its standard error is also small, and hence its t statistic is large, with a very small probability value. Hence we reject the hypothesis that EXPER contributes nothing to the forecasting regression. A one-year increase in EXPER tends to produce a .013 increase in LWAGE.

A variety of diagnostic statistics follow; they help us to evaluate the adequacy of the regression. We provide detailed discussions of many of them elsewhere. Here we introduce them very briefly:

3.4.1 Mean dependent var 2.342

The sample mean of the dependent variable is

$$\bar{y} = \frac{1}{T} \sum_{t=1}^{T} y_t.$$

It measures the central tendency, or location, of $y$.

3.4.2 S.D. dependent var .561

The sample standard deviation of the dependent variable is

$$SD = \sqrt{\frac{\sum_{t=1}^{T} (y_t - \bar{y})^2}{T - 1}}.$$

It measures the dispersion, or scale, of $y$.

3.4.3 Sum squared resid 319.938

Minimizing the sum of squared residuals is the objective of least squares estimation. It’s natural, then, to record the minimized value of the sum of squared residuals. In isolation it’s not of much value, but it serves as an


input to other diagnostics that we'll discuss shortly. Moreover, it's useful for comparing models and testing hypotheses. The formula is

$$SSR = \sum_{t=1}^{T} e_t^2.$$

3.4.4 Log likelihood -938.236

The likelihood function is the joint density function of the data, viewed as a function of the model parameters. Hence a natural estimation strategy, called maximum likelihood estimation, is to find (and use as estimates) the parameter values that maximize the likelihood function. After all, by construction, those parameter values maximize the likelihood of obtaining the data that were actually obtained. In the leading case of normally-distributed regression disturbances, maximizing the likelihood function (or equivalently, the log likelihood function, because the log is a monotonic transformation) turns out to be equivalent to minimizing the sum of squared residuals, hence the maximum-likelihood parameter estimates are identical to the least-squares parameter estimates. The number reported is the maximized value of the log of the likelihood function.5 Like the sum of squared residuals, it's not of direct use, but it's useful for comparing models and testing hypotheses. We will rarely use the log likelihood function directly; instead, we'll focus for the most part on the sum of squared residuals.

3.4.5 F statistic 199.626

We use the F statistic to test the hypothesis that the coefficients of all variables in the regression except the intercept are jointly zero.6 That is, we test whether, taken jointly as a set, the variables included in the forecasting

5 Throughout this book, "log" refers to a natural (base e) logarithm.

6 We don't want to restrict the intercept to be zero, because under the hypothesis that all the other coefficients are zero, the intercept would equal the mean of y, which in general is not zero. See Problem 6.


model have any predictive value. This contrasts with the t statistics, which we use to examine the predictive worth of the variables one at a time.7 If no variable has predictive value, the F statistic follows an F distribution with $K - 1$ and $T - K$ degrees of freedom. The formula is

$$F = \frac{(SSR_{res} - SSR)/(K - 1)}{SSR/(T - K)},$$

where $SSR_{res}$ is the sum of squared residuals from a restricted regression that contains only an intercept. Thus the test proceeds by examining how much the SSR increases when all the variables except the constant are dropped. If it increases by a great deal, there's evidence that at least one of the variables has predictive content.

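A minimal R sketch of the computation, assuming the hypothetical wages data frame from the earlier lm() sketch:

# F statistic from restricted and unrestricted sums of squared residuals,
# assuming a hypothetical "wages" data frame with LWAGE, EDUC, EXPER.
fit  <- lm(LWAGE ~ EDUC + EXPER, data = wages)   # unrestricted
fit0 <- lm(LWAGE ~ 1, data = wages)              # restricted: intercept only
SSR      <- sum(resid(fit)^2)
SSR_res  <- sum(resid(fit0)^2)
K <- length(coef(fit)); Tn <- nrow(wages)
F_stat <- ((SSR_res - SSR) / (K - 1)) / (SSR / (Tn - K))
F_stat
anova(fit0, fit)   # built-in equivalent, with its probability value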
3.4.6 Prob(F statistic) 0.000000

The probability value for the F statistic gives the significance level at which we can just reject the hypothesis that the set of RHS variables has no predictive value. Here, the value is indistinguishable from zero, so we reject the hypothesis overwhelmingly.

3.4.7 S.E. of regression .492

If we knew the elements of $\beta$ and forecasted $y_t$ using $x_t'\beta$, then our forecast errors would be the $\varepsilon_t$'s, with variance $\sigma^2$. We'd like an estimate of $\sigma^2$, because it tells us whether our forecast errors are likely to be large or small. The observed residuals, the $e_t$'s, are effectively estimates of the unobserved population disturbances, the $\varepsilon_t$'s. Thus the sample variance of the $e$'s, which

7 In the degenerate case of only one RHS variable, the t and F statistics contain exactly the same information, and $F = t^2$. When there are two or more RHS variables, however, the hypotheses tested differ, and $F \neq t^2$.


we denote $s^2$ (read "s-squared"), is a natural estimator of $\sigma^2$:

$$s^2 = \frac{\sum_{t=1}^{T} e_t^2}{T - K}.$$

$s^2$ is an estimate of the dispersion of the regression disturbance and hence is used to assess goodness of fit of the model, as well as the magnitude of forecast errors that we're likely to make. The larger is $s^2$, the worse the model's fit, and the larger the forecast errors we're likely to make. $s^2$ involves a degrees-of-freedom correction (division by $T - K$ rather than by $T - 1$, reflecting the fact that $K$ regression coefficients have been estimated), which is an attempt to get a good estimate of the out-of-sample forecast error variance on the basis of the in-sample residuals.

The standard error of the regression (SER) conveys the same information; it's an estimator of $\sigma$ rather than $\sigma^2$, so we simply use $s$ rather than $s^2$. The formula is

$$SER = \sqrt{s^2} = \sqrt{\frac{\sum_{t=1}^{T} e_t^2}{T - K}}.$$

The standard error of the regression is easier to interpret than s2 , because its units are the same as those of the e’s, whereas the units of s2 are not. If the e’s are in dollars, then the squared e’s are in dollars squared, so s2 is in dollars squared. By taking the square root at the end of it all, SER converts the units back to dollars. Sometimes it’s informative to compare the standard error of the regression (or a close relative) to the standard deviation of the dependent variable (or a close relative). The standard error of the regression is an estimate of the standard deviation of forecast errors from the regression model, and the standard deviation of the dependent variable is an estimate of the standard deviation of the forecast errors from a simpler forecasting model, in which the forecast each period is simply y¯ . If the ratio is small, the variables in the model appear very helpful in forecasting y. R-squared measures, to which we


now turn, are based on precisely that idea.

3.4.8 R-squared .232

If an intercept is included in the regression, as is almost always the case, R-squared must be between zero and one. In that case, R-squared, usually written $R^2$, is the percent of the variance of $y$ explained by the variables included in the regression. $R^2$ measures the in-sample success of the regression equation in forecasting $y$; hence it is widely used as a quick check of goodness of fit, or forecastability, of $y$ based on the variables included in the regression. Here the $R^2$ is about 23% – well above zero but not great. The formula is

$$R^2 = 1 - \frac{\sum_{t=1}^{T} e_t^2}{\sum_{t=1}^{T} (y_t - \bar{y})^2}.$$

We can write $R^2$ in a more roundabout way as

$$R^2 = 1 - \frac{\frac{1}{T}\sum_{t=1}^{T} e_t^2}{\frac{1}{T}\sum_{t=1}^{T} (y_t - \bar{y})^2},$$

which makes clear that the numerator in the large fraction is very close to $s^2$, and the denominator is very close to the sample variance of $y$.

3.4.9 Adjusted R-squared .231

The interpretation is the same as that of $R^2$, but the formula is a bit different. Adjusted $R^2$ incorporates adjustments for degrees of freedom used in fitting the model, in an attempt to offset the inflated appearance of good fit if many RHS variables are tried and the "best model" selected. Hence adjusted $R^2$ is a more trustworthy goodness-of-fit measure than $R^2$. As long as there is more than one RHS variable in the model fitted, adjusted $R^2$ is smaller than $R^2$; here, however, the two are extremely close (23.1% vs. 23.2%). Adjusted $R^2$ is often denoted $\bar{R}^2$; the formula is

$$\bar{R}^2 = 1 - \frac{\frac{1}{T-K}\sum_{t=1}^{T} e_t^2}{\frac{1}{T-1}\sum_{t=1}^{T} (y_t - \bar{y})^2},$$

where $K$ is the number of RHS variables, including the constant term. Here the numerator in the large fraction is precisely $s^2$, and the denominator is precisely the sample variance of $y$.

3.4.10 Akaike info criterion 1.423

The Akaike information criterion, or AIC, is effectively an estimate of the out-of-sample forecast error variance, as is $s^2$, but it penalizes degrees of freedom more harshly. It is used to select among competing forecasting models. The formula is

$$AIC = e^{\left(\frac{2K}{T}\right)} \frac{\sum_{t=1}^{T} e_t^2}{T},$$

and "smaller is better". That is, we select the model with smallest AIC. We will discuss AIC in greater depth in Chapter 4.

3.4.11 Schwarz criterion 1.435

The Schwarz information criterion, or SIC, is an alternative to the AIC with the same interpretation, but a still harsher degrees-of-freedom penalty. The formula is

$$SIC = T^{\left(\frac{K}{T}\right)} \frac{\sum_{t=1}^{T} e_t^2}{T},$$

and "smaller is better". That is, we select the model with smallest SIC. We will discuss SIC in greater depth in Chapter 4.

3.4.12 Hannan-Quinn criter. 1.427

Hannan-Quinn is yet another information criterion for use in model selection. We will not use it in this book.

3.4.13 Durbin-Watson stat. 1.926

The Durbin-Watson statistic is useful in time series environments for assessing whether the $\varepsilon_t$'s are correlated over time; that is, whether the iid assumption (part of the ideal conditions) is violated. It is irrelevant in the present application to wages, which uses cross-section data. We nevertheless introduce it briefly here. Later in the book we will discuss time series extensively.

The Durbin-Watson statistic tests for correlation over time, called serial correlation, in regression disturbances. It works within the context of a regression model with disturbances

$$\varepsilon_t = \phi \varepsilon_{t-1} + v_t$$
$$v_t \sim iid\, N(0, \sigma^2).$$

The regression disturbance is serially correlated when $\phi \neq 0$. The hypothesis of interest is that $\phi = 0$. When $\phi = 0$, the ideal conditions hold, but when $\phi \neq 0$, the disturbance is serially correlated. More specifically, when $\phi \neq 0$, we say that $\varepsilon_t$ follows an autoregressive process of order one, or AR(1) for short.8 If $\phi > 0$ the disturbance is positively serially correlated, and if $\phi < 0$ the disturbance is negatively serially correlated. Positive serial correlation is typically the relevant alternative in the applications that will concern us. The formula for the Durbin-Watson (DW) statistic is

$$DW = \frac{\sum_{t=2}^{T} (e_t - e_{t-1})^2}{\sum_{t=1}^{T} e_t^2}.$$

DW takes values in the interval [0, 4], and if all is well, DW should be around 2. If DW is substantially less than 2, there is evidence of positive serial correlation. As a rough rule of thumb, if DW is less than 1.5, there may be cause for alarm, and we should consult the tables of the DW statistic,8

8 The Durbin-Watson test is designed to be very good at detecting serial correlation of the AR(1) type. Many other types of serial correlation are possible, however; we'll discuss them extensively in Chapter 11.
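The DW statistic is also easy to compute directly from regression residuals. A minimal R sketch, assuming a fitted lm object from some time-series regression (hypothetical; the wage regression itself is cross-sectional, so DW would not be meaningful there):

# Durbin-Watson statistic computed from residuals of a hypothetical
# time-series regression object "fit_ts".
e  <- resid(fit_ts)
DW <- sum(diff(e)^2) / sum(e^2)   # sum of squared changes over sum of squares
DW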


Figure 3.5: Wage Regression Residual Scatter

available in many statistics and econometrics texts.

3.4.14 The Residual Scatter

The residual scatter is often useful in both cross-section and time-series situations. It is a plot of $y$ vs. $\hat{y}$. A perfect fit ($R^2 = 1$) corresponds to all points on the 45 degree line, and no fit ($R^2 = 0$) corresponds to all points on a vertical line corresponding to $\hat{y} = \bar{y}$. In Figure 3.5 we show the residual scatter for the wage regression. It is not a vertical line, but certainly also not the 45 degree line, corresponding to the positive but relatively low $R^2$ of .23.

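A minimal R sketch of such a plot, assuming the hypothetical wages data frame used in the earlier sketches:

# Residual scatter (actual y versus fitted y-hat) for the wage regression,
# assuming a hypothetical "wages" data frame.
fit <- lm(LWAGE ~ EDUC + EXPER, data = wages)
plot(fitted(fit), wages$LWAGE, xlab = "fitted LWAGE", ylab = "actual LWAGE")
abline(a = 0, b = 1)   # the 45-degree line corresponds to a perfect fit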
3.4.15 The Residual Plot

In time-series settings, it’s always a good idea to assess visually the adequacy of the model via time series plots of the actual data (yt ’s), the fitted values


($\hat{y}_t$'s), and the residuals ($e_t$'s). Often we'll refer to such plots, shown together in a single graph, as a residual plot.9 We'll make use of residual plots throughout this book. Note that even with many RHS variables in the regression model, both the actual and fitted values of $y$, and hence the residuals, are simple univariate series that can be plotted easily.

The reason we examine the residual plot is that patterns would indicate violation of our iid assumption. In time series situations, we are particularly interested in inspecting the residual plot for evidence of serial correlation in the $e_t$'s, which would indicate failure of the assumption of iid regression disturbances. More generally, residual plots can also help assess the overall performance of a model by flagging anomalous residuals, due for example to outliers, neglected variables, or structural breaks.

Our wage regression is cross-sectional, so there is no natural ordering of the observations, and the residual plot is of limited value. But we can still use it, for example, to check for outliers. In Figure 3.6, we show the residual plot for the regression of LWAGE on EDUC and EXPER. The actual and fitted values appear at the top of the graph; their scale is on the right. The fitted values track the actual values fairly well. The residuals appear at the bottom of the graph; their scale is on the left. It's important to note that the scales differ; the $e_t$'s are in fact substantially smaller and less variable than either the $y_t$'s or the $\hat{y}_t$'s. We draw the zero line through the residuals for visual comparison. No outliers are apparent.

9 Sometimes, however, we'll use "residual plot" to refer to a plot of the residuals alone. The intended meaning should be clear from context.
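A minimal R sketch of a residual plot, assuming the hypothetical wages data frame from the earlier sketches (a simple two-panel version rather than the dual-scale layout shown in Figure 3.6):

# Actual, fitted, and residual plots for the wage regression, assuming a
# hypothetical "wages" data frame.
fit <- lm(LWAGE ~ EDUC + EXPER, data = wages)
par(mfrow = c(2, 1))
plot(wages$LWAGE, type = "l", ylab = "actual / fitted")  # actual values
lines(fitted(fit), col = "red")                          # fitted values
plot(resid(fit), type = "h", ylab = "residual")          # residuals
abline(h = 0)                                            # zero line for comparison
par(mfrow = c(1, 1))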


Figure 3.6: Wage Regression Residual Plot

3.5 More on Prediction

The linear regression DGP under the ideal conditions implies the conditional mean function

$$E(y_t \,|\, x_{1t} = 1, x_{2t} = x_{2t}^*, ..., x_{Kt} = x_{Kt}^*) = \beta_1 + \beta_2 x_{2t}^* + ... + \beta_K x_{Kt}^*,$$

or $E(y_t \,|\, x_t = x_t^*) = x_t^{*\prime} \beta$. A major goal in econometrics is predicting $y$. The question is "If a new person arrives with characteristics $x^*$, what is my minimum-MSE prediction of her $y$?" The answer under quadratic loss is $E(y|x = x^*) = x_t^{*\prime}\beta$. That is, "the conditional mean is the minimum-MSE (point) predictor". The non-operational version (we don't know $\beta$) is $E(y_t|x_t = x_t^*) = x_t^{*\prime}\beta$, and the operational version (using $\hat{\beta}_{LS}$) is $\widehat{E(y_t|x_t = x_t^*)} = x_t^{*\prime}\hat{\beta}_{LS}$ (the regression fitted value at $x_t = x_t^*$).

It is important to notice that LS delivers an operational MSE-optimal pre-


dictor with great generality (i.e., even when the IC fail), simply due to the MSE-optimization problem that it solves.

Interval forecasts are also of interest. The linear regression DGP implies the conditional variance function $var(y_t | x_t = x_t^*) = \sigma^2$, which we can use to form interval forecasts. The non-operational version is $y_t \in [x_t^{*\prime}\beta \pm 1.96\,\sigma]$ w.p. 0.95, and the operational version is $y_t \in [x_t^{*\prime}\hat{\beta}_{LS} \pm 1.96\, s]$ w.p. 0.95.

Finally, full density forecasts are of interest. The linear regression DGP implies the conditional density function $y_t | x_t = x_t^* \sim N(x_t^{*\prime}\beta, \sigma^2)$. Hence a non-operational density forecast is $y_t | x_t = x_t^* \sim N(x_t^{*\prime}\beta, \sigma^2)$, with operational version $y_t | x_t = x_t^* \sim N(x_t^{*\prime}\hat{\beta}_{LS}, s^2)$.

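Operational point, interval, and density forecasts are easy to produce in R. A minimal sketch, assuming the hypothetical wages data frame from the earlier sketches and an illustrative new observation:

# Point, interval, and density forecasts for a new person, assuming a
# hypothetical "wages" data frame; EDUC = 12, EXPER = 10 are illustrative.
fit  <- lm(LWAGE ~ EDUC + EXPER, data = wages)
newx <- data.frame(EDUC = 12, EXPER = 10)
predict(fit, newdata = newx)                            # point forecast x*'beta-hat
predict(fit, newdata = newx, interval = "prediction")   # 95% interval forecast
s <- summary(fit)$sigma                                 # standard error of regression
m <- as.numeric(predict(fit, newdata = newx))
curve(dnorm(x, mean = m, sd = s), from = m - 4 * s, to = m + 4 * s,
      xlab = "LWAGE", ylab = "forecast density")        # N(x*'beta-hat, s^2)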
3.5.1 Regression Output from a Predictive Perspective

Here we offer predictive perspectives on regression objects. • The sample, or historical, mean of the dependent variable, y¯, an estimate of the unconditional mean of y, is a benchmark forecast. It is obtained by


regressing y on an intercept alone – no conditioning on other regressors.

• The sample standard deviation of $y$ is a measure of the in-sample accuracy of the unconditional mean forecast $\bar{y}$.

• The OLS fitted values, $\hat{y}_t = x_t'\hat{\beta}$, are effectively in-sample regression predictions.

• The OLS residuals, $e_t = y_t - \hat{y}_t$, are effectively in-sample prediction errors corresponding to use of the regression predictions.

• OLS coefficient signs and sizes give the weights put on the various $x$ variables in forming the best in-sample prediction of $y$.

• The standard errors, t statistics, and p-values let us do statistical inference as to which regressors are most relevant for predicting $y$.

• SSR measures "total" in-sample accuracy of the regression predictions. It is closely related to in-sample MSE ("average" in-sample accuracy of the regression predictions):

$$MSE = \frac{1}{T} \sum_{t=1}^{T} e_t^2 = \frac{1}{T} SSR.$$

• The F statistic effectively compares the accuracy of the regression-based forecast to that of the unconditional-mean forecast. It helps us assess whether the $x$ variables, taken as a set, have predictive value for $y$. That contrasts with the t statistics, which assess predictive value of the $x$ variables one at a time.

• $s^2$ is just SSR divided by $T - K$, so again, it's a measure of the in-sample accuracy of the regression-based forecast. It's like MSE, but corrected for degrees of freedom.


• $R^2$ and $\bar{R}^2$ effectively compare the in-sample accuracy of conditional-mean and unconditional-mean forecasts. $R^2$ is not corrected for d.f. and has MSE on top:

$$R^2 = 1 - \frac{\frac{1}{T}\sum_{t=1}^{T} e_t^2}{\frac{1}{T}\sum_{t=1}^{T} (y_t - \bar{y})^2}.$$

In contrast, $\bar{R}^2$ is corrected for d.f. and has $s^2$ on top:

$$\bar{R}^2 = 1 - \frac{\frac{1}{T-K}\sum_{t=1}^{T} e_t^2}{\frac{1}{T-1}\sum_{t=1}^{T} (y_t - \bar{y})^2}.$$

• Residual plots are useful for visually flagging neglected things that impact forecasting. Residual serial correlation indicates that point forecasts could be improved. Residual volatility clustering indicates that interval and density forecasts could be improved.

3.6 Exercises, Problems and Complements

1. (Regression with and without a constant term) Consider Figure 3.3, in which we showed a scatterplot of y vs. x with a fitted regression line superimposed.

a. In fitting that regression line, we included a constant term. How can you tell?
b. Suppose that we had not included a constant term. How would the figure look?
c. We almost always include a constant term when estimating regressions. Why?
d. When, if ever, might you explicitly want to exclude the constant term?

2. (Interpreting coefficients and variables)


Let $y_t = \beta_1 + \beta_2 x_t + \beta_3 z_t + \varepsilon_t$, where $y_t$ is the number of hot dogs sold at an amusement park on a given day, $x_t$ is the number of admission tickets sold that day, $z_t$ is the daily maximum temperature, and $\varepsilon_t$ is a random error. Assume the IC.

a. State whether each of $y_t$, $x_t$, $z_t$, $\beta_1$, $\beta_2$ and $\beta_3$ is a coefficient or a variable.
b. Determine the units of $\beta_1$, $\beta_2$ and $\beta_3$, and describe the physical meaning of each.
c. What do the signs of the coefficients tell you about how the various variables affect the number of hot dogs sold? What are your expectations for the signs of the various coefficients (negative, zero, positive or unsure)?
d. Is it sensible to entertain the possibility of a non-zero intercept (i.e., $\beta_1 \neq 0$)? $\beta_2 > 0$? $\beta_3 < 0$?

3. (Scatter plots and regression lines) Draw qualitative scatter plots and regression lines for each of the following two-variable datasets, and state the $R^2$ in each case:

a. Data set 1: y and x have correlation 1
b. Data set 2: y and x have correlation -1
c. Data set 3: y and x have correlation 0.

4. (Desired values of regression diagnostic statistics) For each of the diagnostic statistics listed below, indicate whether, other things the same, "bigger is better," "smaller is better," or neither. Explain your reasoning. (Hint: Be careful, think before you answer, and be sure to qualify your answers as appropriate.)


a. Coefficient
b. Standard error
c. t statistic
d. Probability value of the t statistic
e. R-squared
f. Adjusted R-squared
g. Standard error of the regression
h. Sum of squared residuals
i. Log likelihood
j. Durbin-Watson statistic
k. Mean of the dependent variable
l. Standard deviation of the dependent variable
m. Akaike information criterion
n. Schwarz information criterion
o. F statistic
p. Probability value of the F statistic

5. (Regression semantics) Regression analysis is so important, and used so often by so many people, that a variety of associated terms have evolved over the years, all of which are the same for our purposes. You may encounter them in your reading, so it's important to be aware of them. Some examples:

a. Ordinary least squares, least squares, OLS, LS.
b. y, LHS variable, regressand, dependent variable, endogenous variable
c. x's, RHS variables, regressors, independent variables, exogenous variables, predictors, covariates


d. probability value, prob-value, p-value, marginal significance level
e. Schwarz criterion, Schwarz information criterion, SIC, Bayes information criterion, BIC

6. (Regression when X Contains Only an Intercept) Consider the regression model (3.1)-(3.2), but where X contains only an intercept.

a. What is the OLS estimator of the intercept?
b. What is the distribution of the OLS estimator under the ideal conditions?
c. Does the variance-covariance matrix of the OLS estimator under the ideal conditions depend on any unknown parameters, and if so, how would you estimate them?

7. (Dimensionality) We have emphasized, particularly in Chapter 2, that graphics is a powerful tool with a variety of uses in the construction and evaluation of econometric models. We hasten to add, however, that graphics has its limitations. In particular, graphics loses much of its power as the dimension of the data grows. If we have data in ten dimensions, and we try to squash it into two or three dimensions to make graphs, there's bound to be some information loss. Thus, in contrast to the analysis of data in two or three dimensions (where learning about data by fitting models involves a loss of information whereas graphical analysis does not), graphical methods lose their comparative advantage in higher dimensions. In higher dimensions, graphical analysis can become comparatively laborious and less insightful.


8. (Wage regressions) The relationship between wages and their determinants is one of the most important in all of economics. In the text we have examined, and will continue to examine, the relationship for 1995 using a CPS subsample. Here you will thoroughly analyze the relationship for 2004 and 2012, compare your results to those for 1995, and think hard about the meaning and legitimacy of your results.

(a) Obtain the relevant 1995, 2004 and 2012 CPS subsamples.
(b) Discuss any differences in the datasets. Are the same people in each dataset?
(c) For now, assume the validity of the ideal conditions. Using each dataset, run the OLS regression WAGE → c, EDUC, EXPER. (Note that the LHS variable is WAGE, not LWAGE.) Discuss and compare the results in detail.
(d) Now think of as many reasons as possible to be skeptical of your results. (This largely means think of as many reasons as possible why the IC might fail.) Which of the IC might fail? One? A few? All? Why? Insofar as possible, discuss the IC, one-by-one, how/why failure could happen here, the implications of failure, how you might detect failure, what you might do if failure is detected, etc.
(e) Repeat all of the above using LWAGE as the LHS variable.

9. (Beyond OLS: Quantile regression) Recall that the OLS estimator, $\hat{\beta}_{OLS}$, solves

$$\min_{\beta} \sum_{t=1}^{T} (y_t - \beta_1 - \beta_2 x_{2t} - ... - \beta_K x_{Kt})^2 = \min_{\beta} \sum_{t=1}^{T} \varepsilon_t^2.$$

As you know, the solution has a simple analytic closed-form expression,


$(X'X)^{-1}X'y$, with wonderful properties under the IC (unbiased, consistent, Gaussian, MVUE). But other objectives are possible and sometimes useful. So-called quantile regression (QR) involves an objective function linear on each side of 0 but with (generally) unequal slopes. The QR estimator $\hat{\beta}_{QR}$ minimizes "linlin loss," or "check function loss":

$$\min_{\beta} \sum_{t=1}^{T} linlin(\varepsilon_t),$$

where

$$linlin(e) = \begin{cases} a|e|, & \text{if } e \le 0 \\ b|e|, & \text{if } e > 0 \end{cases} = a|e|\, I(e \le 0) + b|e|\, I(e > 0).$$

$I(x) = 1$ if $x$ is true, and $I(x) = 0$ otherwise; "$I(\cdot)$" stands for "indicator" variable. "linlin" refers to linearity on each side of the origin. QR is not as simple as OLS, but it is still simple (it solves a linear pro-


gramming problem).

10. (What does quantile regression fit?) QR fits the $d \cdot 100\%$ quantile,

$$quantile_d(y|X) = x\beta,$$

where

$$d = \frac{b}{a + b} = \frac{1}{1 + a/b}.$$

This is an important generalization of regression. (e.g., How do the wages of people in the far left tail of the wage distribution vary with education and experience, and how does that compare to those in the center of the wage distribution?)

11. (Quantile regression empirics) For the 1995 CPS subsample (see EPC 8) re-do the regression LWAGE → c, EDUC, EXPER using 20%, 50% and 80% quantile regression instead of OLS regression.

12. (Parallels between the sampling distribution of the sample mean under simple random sampling, and the sampling distribution of the OLS estimator under the IC) Consider first the sample mean under Gaussian simple random sampling.

(a) What is a Gaussian simple random sample?
(b) What is the sample mean, and what finite-sample properties does it have under Gaussian simple random sampling?
(c) Display and discuss the exact distribution of the sample mean.
(d) How would you estimate and plot the exact distribution of the sample mean?


Now consider the OLS regression estimator under the IC.

(a) What are the IC?
(b) What is the OLS estimator, and what finite-sample properties does it enjoy?
(c) Display and discuss the exact distribution of the OLS estimator.

Under what conditions, if any, do your "sample mean answers" and "OLS answers" precisely coincide?

13. (OLS regression residuals sum to zero) Assertion: As long as an intercept is included in a linear regression, the OLS residuals must sum to precisely zero. The intuition is simply that a non-zero residual mean would automatically be pulled into the estimated constant term, hence guaranteeing a zero residual mean.

(a) Prove the assertion precisely.
(b) Evaluate the claim that the assertion implies the regression fits perfectly "on average," despite the fact that it fits imperfectly point-by-point.

14. (Maximum-likelihood estimation and likelihood-ratio tests) A natural estimation strategy with wonderful asymptotic properties, called maximum likelihood estimation, is to find (and use as estimates) the parameter values that maximize the likelihood function. After all, by construction, those parameter values maximize the likelihood of obtaining the data that were actually obtained. In the leading case of normally-distributed regression disturbances, maximizing the likelihood function turns out to be equivalent to minimizing


the sum of squared residuals, hence the maximum-likelihood parameter estimates are identical to the least-squares parameter estimates.

To see why maximizing the Gaussian log likelihood gives the same parameter estimate as minimizing the sum of squared residuals, let us derive the likelihood for the Gaussian linear regression model with nonstochastic regressors,

$$y_t = x_t'\beta + \varepsilon_t$$
$$\varepsilon_t \sim iid\, N(0, \sigma^2).$$

The model implies that $y_t \sim iid\, N(x_t'\beta, \sigma^2)$, so that

$$f(y_t) = (2\pi\sigma^2)^{-\frac{1}{2}} e^{-\frac{1}{2\sigma^2}(y_t - x_t'\beta)^2}.$$

Hence $f(y_1, ..., y_T) = f(y_1) f(y_2) \cdots f(y_T)$ (by independence of the $y_t$'s). In particular,

$$L = \prod_{t=1}^{T} (2\pi\sigma^2)^{-\frac{1}{2}} e^{-\frac{1}{2\sigma^2}(y_t - x_t'\beta)^2},$$

so

$$\ln L = \ln\left[(2\pi\sigma^2)^{-\frac{T}{2}}\right] - \frac{1}{2\sigma^2}\sum_{t=1}^{T} (y_t - x_t'\beta)^2 = \frac{-T}{2}\ln(2\pi) - \frac{T}{2}\ln\sigma^2 - \frac{1}{2\sigma^2}\sum_{t=1}^{T}(y_t - x_t'\beta)^2.$$

Note in particular that the $\beta$ vector that maximizes the likelihood (or log likelihood – the optimizers must be identical because the log is a positive monotonic transformation) is the $\beta$ vector that minimizes the sum of squared residuals. The log likelihood is also useful for hypothesis testing via likelihood-ratio


tests. Under very general conditions we have asymptotically that

$$-2(\ln L_0 - \ln L_1) \sim \chi^2_d,$$

where $\ln L_0$ is the maximized log likelihood under the restrictions implied by the null hypothesis, $\ln L_1$ is the unrestricted log likelihood, and $d$ is the number of restrictions imposed under the null hypothesis. t and F tests are likelihood ratio tests under a normality assumption. That's why they can be written in terms of minimized SSR's rather than maximized $\ln L$'s.

15. (Simulation algorithm for density prediction)

(a) Take $R$ draws from $N(0, \hat{\sigma}^2)$.
(b) Add $x^{*\prime}\hat{\beta}$ to each disturbance draw.
(c) Form a density forecast by fitting a density to the output from step 15b.
(d) Form an interval forecast (95%, say) by sorting the output from step 15b to get the empirical cdf, and taking the left and right interval endpoints as the .025 and .975 quantiles, respectively.

As $R \to \infty$, the algorithmic and analytic results coincide. A sketch of the algorithm in R appears below. Note: This simulation algorithm may seem roundabout, but later we will drop normality.
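A minimal R sketch of the simulation algorithm, assuming the hypothetical wages data frame and illustrative new observation used in the earlier sketches:

# Simulation-based density and interval forecast, assuming a hypothetical
# "wages" data frame; the new-observation values are illustrative.
fit     <- lm(LWAGE ~ EDUC + EXPER, data = wages)
newx    <- data.frame(EDUC = 12, EXPER = 10)
R_draws <- 10000
s   <- summary(fit)$sigma                        # sigma-hat
m   <- as.numeric(predict(fit, newdata = newx))  # x*'beta-hat
sim <- m + rnorm(R_draws, mean = 0, sd = s)      # steps (a) and (b)
plot(density(sim))                               # step (c): density forecast
quantile(sim, c(0.025, 0.975))                   # step (d): 95% interval forecast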

3.7 Notes

Dozens of software packages implement linear regression analysis. Most automatically include an intercept in linear regressions unless explicitly instructed otherwise. That is, they automatically create and include a C variable.


The R command for ordinary least squares regression is lm, which is part of the stats package pre-loaded into R and is the default tool for estimating linear models. It uses standard R formula syntax, where you specify the formula, the data, and various estimation options. It returns a model estimated by OLS, including coefficients, residuals, and fitted values, and you can easily calculate summary statistics using the summary function.

The standard R quantile regression package is quantreg, written by Roger Koenker, the inventor of quantile regression. Its command rq functions similarly to lm. It takes as input a formula, data, the quantile to be estimated, and various estimation options.
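A minimal sketch of both commands, assuming the hypothetical wages data frame used in the earlier sketches:

# OLS and quantile regressions of LWAGE on EDUC and EXPER, assuming a
# hypothetical "wages" data frame with those columns.
ols_fit <- lm(LWAGE ~ EDUC + EXPER, data = wages)
summary(ols_fit)

# install.packages("quantreg")   # if the package is not already installed
library(quantreg)
qr_fit <- rq(LWAGE ~ EDUC + EXPER, tau = c(0.2, 0.5, 0.8), data = wages)
summary(qr_fit)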

3.8 Regression's Inventor: Carl Friedrich Gauss

Figure 3.7: Carl Friedrich Gauss

This is a photographic reproduction of a public-domain artwork, an oil painting of German mathematician and philosopher Carl Friedrich Gauss by G. Biermann (1824-1908). Date: 1887 (painting). Source: Gauß-Gesellschaft Göttingen e.V. (Foto: A. Wittmann).

Chapter 4

Model Selection

Recall that the Akaike information criterion, or AIC, is effectively an estimate of the out-of-sample forecast error variance, as is $s^2$, but it penalizes degrees of freedom more harshly. It is used to select among competing forecasting models. The formula is

$$AIC = e^{\left(\frac{2K}{T}\right)} \frac{\sum_{t=1}^{T} e_t^2}{T}.$$

Also recall that the Schwarz information criterion, or SIC, is an alternative to the AIC with the same interpretation, but a still harsher degrees-of-freedom penalty. The formula is

$$SIC = T^{\left(\frac{K}{T}\right)} \frac{\sum_{t=1}^{T} e_t^2}{T}.$$

4.0.1 A Bit More on AIC and SIC

The AIC and SIC are tremendously important for guiding model selection in ways that avoid data mining and in-sample overfitting. You will want to start using AIC and SIC immediately, so we provide a bit more information here.

Model selection by maximizing $R^2$, or equivalently minimizing the residual SSR, is ill-advised, because they don't penalize for degrees of freedom and therefore tend to prefer models that are "too big."


Model selection by maximizing $\bar{R}^2$, or equivalently minimizing residual $s^2$, is still ill-advised, even though $\bar{R}^2$ and $s^2$ penalize somewhat for degrees of freedom, because they don't penalize harshly enough and therefore still tend to prefer models that are too big. In contrast, AIC and SIC get things just right. SIC has a wonderful asymptotic optimality property when the set of candidate models is viewed as fixed: basically SIC "gets it right" asymptotically, selecting either the DGP (if the DGP is among the models considered) or the best predictive approximation to the DGP (if the DGP is not among the models considered). AIC has a different and also-wonderful asymptotic optimality property, known as "efficiency," when the set of candidate models is viewed as expanding as the sample size grows. In practice, the models selected by AIC and SIC rarely disagree. A small sketch of computing such criteria in R appears below.
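A minimal R sketch comparing two candidate wage models by information criteria, assuming the hypothetical wages data frame from Chapter 3's sketches. R's AIC() and BIC() are computed from the log likelihood rather than from the formulas above, but they rank models the same way; in both cases smaller is better.

# Information-criterion comparison of two candidate models, assuming a
# hypothetical "wages" data frame with LWAGE, EDUC, EXPER.
fit1 <- lm(LWAGE ~ EDUC, data = wages)
fit2 <- lm(LWAGE ~ EDUC + EXPER, data = wages)
AIC(fit1, fit2)   # Akaike criterion for each model
BIC(fit1, fit2)   # Schwarz (Bayes) criterion for each model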

4.1 All-Subsets Model Selection I: Information Criteria

All-subsets model selection means that we examine every possible combination of K regressors and select the best. Examples include SIC and AIC. Let us now discuss SIC and AIC in greater depth, as they are tremendously important tools for building forecasting models. We often could fit a wide variety of forecasting models, but how do we select among them? What are the consequences, for example, of fitting a number of models and selecting the model with highest R2 ? Is there a better way? This issue of model selection is of tremendous importance in all of forecasting. It turns out that model-selection strategies such as selecting the model with highest R2 do not produce good out-of-sample forecasting models. Fortunately, however, a number of powerful modern tools exist to assist with model selection. Most model selection criteria attempt to find the model with the smallest out-of-sample 1-step-ahead mean squared prediction error.


The criteria we examine fit this general approach; the differences among criteria amount to different penalties for the number of degrees of freedom used in estimating the model (that is, the number of parameters estimated). Because all of the criteria are effectively estimates of out-of-sample mean square prediction error, they have a negative orientation – the smaller the better.

First consider the mean squared error,

$$MSE = \frac{\sum_{t=1}^{T} e_t^2}{T},$$

where $T$ is the sample size and $e_t = y_t - \hat{y}_t$. MSE is intimately related to two other diagnostic statistics routinely computed by regression software, the sum of squared residuals and $R^2$. Looking at the MSE formula reveals that the model with the smallest MSE is also the model with smallest sum of squared residuals, because scaling the sum of squared residuals by $1/T$ doesn't change the ranking. So selecting the model with the smallest MSE is equivalent to selecting the model with the smallest sum of squared residuals. Similarly, recall the formula for $R^2$,

$$R^2 = 1 - \frac{\sum_{t=1}^{T} e_t^2}{\sum_{t=1}^{T} (y_t - \bar{y})^2} = 1 - \frac{MSE}{\frac{1}{T}\sum_{t=1}^{T} (y_t - \bar{y})^2}.$$

The denominator of the ratio that appears in the formula is just the sum of squared deviations of y from its sample mean (the so-called “total sum of squares”), which depends only on the data, not on the particular model fit. Thus, selecting the model that minimizes the sum of squared residuals – which as we saw is equivalent to selecting the model that minimizes MSE – is also equivalent to selecting the model that maximizes R2 . Selecting forecasting models on the basis of MSE or any of the equivalent forms discussed above – that is, using in-sample MSE to estimate the out-of-sample 1-step-ahead MSE – turns out to be a bad idea. In-sample MSE can’t rise when more variables are added to a model, and typically


it will fall continuously as more variables are added, because the estimated parameters are explicitly chosen to minimize the sum of squared residuals. Newly-included variables could get estimated coefficients of zero, but that's a probability-zero event, and to the extent that the estimate is anything else, the sum of squared residuals must fall. Thus, the more variables we include in a forecasting model, the lower the sum of squared residuals will be, and therefore the lower MSE will be, and the higher $R^2$ will be. Again, the sum of squared residuals can't rise, and due to sampling error it's very unlikely that we'd get a coefficient of exactly zero on a newly-included variable even if the coefficient is zero in population.

The effects described above go under various names, including in-sample overfitting, reflecting the idea that including more variables in a forecasting model won't necessarily improve its out-of-sample forecasting performance, although it will improve the model's "fit" on historical data. The upshot is that in-sample MSE is a downward-biased estimator of out-of-sample MSE, and the size of the bias increases with the number of variables included in the model. In-sample MSE provides an overly-optimistic (that is, too small) assessment of out-of-sample MSE.

To reduce the bias associated with MSE and its relatives, we need to penalize for degrees of freedom used. Thus let's consider the mean squared error corrected for degrees of freedom,

$$s^2 = \frac{\sum_{t=1}^{T} e_t^2}{T - K},$$

where $K$ is the number of degrees of freedom used in model fitting.1 $s^2$ is just the usual unbiased estimate of the regression disturbance variance. That is, it is the square of the usual standard error of the regression. So selecting the model that minimizes $s^2$ is equivalent to selecting the model that minimizes the standard error of the regression. $s^2$ is also intimately connected to

1 The degrees of freedom used in model fitting is simply the number of parameters estimated.


$R^2$ adjusted for degrees of freedom (the "adjusted $R^2$," or $\bar{R}^2$). Recall that

$$\bar{R}^2 = 1 - \frac{\sum_{t=1}^{T} e_t^2/(T - K)}{\sum_{t=1}^{T} (y_t - \bar{y})^2/(T - 1)} = 1 - \frac{s^2}{\sum_{t=1}^{T} (y_t - \bar{y})^2/(T - 1)}.$$

The denominator of the R̄² expression depends only on the data, not the particular model fit, so the model that minimizes s² is also the model that maximizes R̄². In short, the strategies of selecting the model that minimizes s², or the model that minimizes the standard error of the regression, or the model that maximizes R̄², are equivalent, and they do penalize for degrees of freedom used. To highlight the degree-of-freedom penalty, let's rewrite s² as a penalty factor times the MSE,

$$ s^2 = \left( \frac{T}{T-K} \right) \frac{\sum_{t=1}^T e_t^2}{T}. $$

Note in particular that including more variables in a regression will not necessarily lower s² or raise R̄² – the MSE will fall, but the degrees-of-freedom penalty will rise, so the product could go either way. As with s², many of the most important forecast model selection criteria are of the form "penalty factor times MSE." The idea is simply that if we want to get an accurate estimate of the 1-step-ahead out-of-sample forecast MSE, we need to penalize the in-sample residual MSE to reflect the degrees of freedom used. Two very important such criteria are the Akaike Information Criterion (AIC) and the Schwarz Information Criterion (SIC). Their formulas are:

$$ AIC = e^{\left(\frac{2K}{T}\right)} \frac{\sum_{t=1}^T e_t^2}{T} $$

and

$$ SIC = T^{\left(\frac{K}{T}\right)} \frac{\sum_{t=1}^T e_t^2}{T}. $$

How do the penalty factors associated with MSE, s², AIC and SIC compare in terms of severity? All of the penalty factors are functions of K/T, the number of parameters estimated per sample observation, and we can compare the penalty factors graphically as K/T varies. In Figure *** we show the penalties as K/T moves from 0 to .25, for a sample size of T = 100. The s² penalty is small and rises slowly with K/T; the AIC penalty is a bit larger and still rises only slowly with K/T. The SIC penalty, on the other hand, is substantially larger and rises much more quickly with K/T.

It's clear that the different criteria penalize degrees of freedom differently. In addition, we could propose many other criteria by altering the penalty. How, then, do we select among the criteria? More generally, what properties might we expect a "good" model selection criterion to have? Are s², AIC and SIC "good" model selection criteria?

We evaluate model selection criteria in terms of a key property called consistency, also known as the oracle property. A model selection criterion is consistent if:

a. when the true model (that is, the data-generating process, or DGP) is among a fixed set of models considered, the probability of selecting the true DGP approaches one as the sample size gets large, and

b. when the true model is not among a fixed set of models considered, so that it's impossible to select the true DGP, the probability of selecting the best approximation to the true DGP approaches one as the sample size gets large.

We must of course define what we mean by "best approximation" above. Most model selection criteria – including all of those discussed here – assess goodness of approximation in terms of out-of-sample mean squared forecast error. Consistency is of course desirable. If the DGP is among those considered, then we'd hope that as the sample size gets large we'd eventually select it. Of course, all of our models are false – they're intentional simplifications of a much more complex reality. Thus the second notion of consistency is the more compelling.

MSE is inconsistent, because it doesn't penalize for degrees of freedom; that's why it's unattractive. s² does penalize for degrees of freedom, but as it turns out, not enough to render it a consistent model selection procedure. The AIC penalizes degrees of freedom more heavily than s², but it too remains inconsistent; even as the sample size gets large, the AIC selects models that are too large ("overparameterized"). The SIC, which penalizes degrees of freedom most heavily, is consistent.

The discussion thus far conveys the impression that SIC is unambiguously superior to AIC for selecting forecasting models, but such is not the case. Until now, we've implicitly assumed a fixed set of models. In that case, SIC is a superior model selection criterion.


However, a potentially more compelling thought experiment for forecasting is that we may want to expand the set of models we entertain as the sample size grows, to get progressively better approximations to the elusive DGP. We're then led to a different optimality property, called asymptotic efficiency. An asymptotically efficient model selection criterion chooses a sequence of models, as the sample size gets large, whose out-of-sample forecast MSE approaches the one that would be obtained using the DGP at a rate at least as fast as that of any other model selection criterion. The AIC, although inconsistent, is asymptotically efficient, whereas the SIC is not.

In practical forecasting we usually report and examine both AIC and SIC. Most often they select the same model. When they don't, and despite the theoretical asymptotic efficiency property of AIC, this author recommends use of the more parsimonious model selected by the SIC, other things equal. This accords with the parsimony principle of Chapter ?? and with the results of studies comparing out-of-sample forecasting performance of models selected by various criteria.

The AIC and SIC have enjoyed widespread popularity, but they are not universally applicable, and we're still learning about their performance in specific situations. However, the general principle that we need somehow to inflate in-sample loss estimates to get good out-of-sample loss estimates is universally applicable. The versions of AIC and SIC introduced above – and the claimed optimality properties in terms of out-of-sample forecast MSE – are actually specialized to the Gaussian case, which is why they are written in terms of minimized SSR's rather than maximized lnL's. (Recall that in the Gaussian case SSR minimization and lnL maximization are equivalent.) More generally, AIC and SIC are written not in terms of minimized SSR's, but rather in terms of maximized lnL's. We have

$$ AIC = -2 \ln L + 2K $$

and

$$ SIC = -2 \ln L + K \ln T. $$

These are useful for any model estimated by maximum likelihood, Gaussian or non-Gaussian.
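As a concrete illustration, the following minimal Python sketch computes the lnL-based AIC and SIC for a Gaussian linear regression, together with the SSR-based versions given earlier. The data are simulated and purely illustrative (not the wage data used in the text); the lnL-based values should agree, up to how the parameters are counted, with what statsmodels reports as .aic and .bic.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical simulated data standing in for a real regression.
rng = np.random.default_rng(1)
T = 200
X = sm.add_constant(rng.normal(size=(T, 2)))     # intercept + two regressors
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(size=T)

res = sm.OLS(y, X).fit()
K = X.shape[1]                                   # number of parameters estimated
lnL = res.llf                                    # maximized Gaussian log likelihood

aic_lnl = -2 * lnL + 2 * K                       # AIC = -2 lnL + 2K
sic_lnl = -2 * lnL + K * np.log(T)               # SIC = -2 lnL + K lnT

# SSR-based (Gaussian) versions from earlier in the chapter:
ssr = res.ssr
aic_ssr = np.exp(2 * K / T) * ssr / T
sic_ssr = T ** (K / T) * ssr / T

print(aic_lnl, sic_lnl)    # comparable to res.aic, res.bic
print(aic_ssr, sic_ssr)    # different scale, but the same model rankings
```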

4.2 Exercises, Problems and Complements

1. (The sum of squared residuals, SSR)
(a) What is SSR and why is it reported?
(b) Do you agree with "bigger is better," "smaller is better," or neither? Be careful.
(c) Describe in detail and discuss the use of the regression statistics R², R̄², F, SER, and SIC. What role does SSR play in each of the test statistics?
(d) Under the IC, is the maximized log likelihood related to the SSR? If so, how? Would your answer change if we dropped normality?

2. (The variety of "information criteria" reported across software packages)
Some authors, and software packages, examine and report the logarithms of the AIC and SIC,

$$ \ln(AIC) = \ln\left(\frac{\sum_{t=1}^T e_t^2}{T}\right) + \frac{2K}{T} $$

$$ \ln(SIC) = \ln\left(\frac{\sum_{t=1}^T e_t^2}{T}\right) + \frac{K}{T}\ln(T). $$


The practice is so common that log(AIC) and log(SIC) are often simply called the "AIC" and "SIC." AIC and SIC must be greater than zero, so log(AIC) and log(SIC) are always well-defined and can take on any real value. The important insight, however, is that although these variations will of course change the numerical values of AIC and SIC produced by your computer, they will not change the rankings of models under the various criteria. Consider, for example, selecting among three models. If AIC1 < AIC2 < AIC3, then it must be true as well that ln(AIC1) < ln(AIC2) < ln(AIC3), so we would select model 1 regardless of the "definition" of the information criterion used.

Chapter 5

Non-Normality

Here we consider a violation of the full ideal conditions: non-normal disturbances. Non-normality and outliers, which we introduce in this chapter, are closely related, because deviations from Gaussian behavior are often characterized by fatter tails than the Gaussian, which produce outliers. It is important to note that outliers are not necessarily "bad," or requiring "treatment." Every data set must have some most extreme observation, by definition! Statistical estimation efficiency, moreover, increases with data variability. The most extreme observations can be the most informative about the phenomena of interest. "Bad" outliers, in contrast, are those associated with things like data recording errors (e.g., you enter .753 when you mean to enter 75.3) or one-off events (e.g., a strike or natural disaster).

5.0.1 Results

To understand the properties of OLS without normality, it is helpful first to consider the properties of the sample mean without normality. As reviewed in Appendix A, for a non-Gaussian simple random sample,

$$ y_t \sim iid(\mu, \sigma^2), \quad t = 1, ..., T, $$


we have that the sample mean is best linear unbiased (BLUE), with

$$ \bar{y} \overset{a}{\sim} N\left(\mu, \frac{\sigma^2}{T}\right). $$

This result forms the basis for asymptotic inference. It is a Gaussian central limit theorem. We consistently estimate the large-sample variance of the sample mean using s²/T. Now consider the full linear regression model without normal disturbances. We have that the linear regression estimator is BLUE, with

$$ \hat{\beta}_{OLS} \overset{a}{\sim} N\left(\beta, \sigma^2 (X'X)^{-1}\right). $$

We consistently estimate the large-sample variance of β̂_OLS using s²(X'X)⁻¹. Clearly the linear regression results for Gaussian vs. non-Gaussian situations precisely parallel those for the sample mean in Gaussian vs. non-Gaussian situations. (Indeed they must, as the sample mean corresponds to regression on an intercept; see EPC 6 of Chapter 3.) In each case the original result obtains even when we drop normality, except that it becomes a large-sample rather than an exact finite-sample result. The central limit theorem is a wonderful thing!

5.1 Assessing Normality

There are many methods, ranging from graphics to formal tests.

5.1.1 QQ Plots

We introduced histograms earlier in Chapter 2 as a graphical device for learning about distributional shape. If, however, interest centers on the tails of distributions, QQ plots often provide sharper insight as to the agreement or divergence between the actual and reference distributions. The QQ plot is simply a plot of the quantiles of the standardized data against the quantiles of a standardized reference distribution (e.g., normal). If the distributions match, the QQ plot is the 45 degree line. To the extent that the QQ plot does not match the 45 degree line, the nature of the divergence can be very informative, as for example in indicating fat tails.

5.1.2 Residual Sample Skewness and Kurtosis

Recall skewness and kurtosis, which we reproduce here for convenience:

$$ S = \frac{E(y-\mu)^3}{\sigma^3} $$

$$ K = \frac{E(y-\mu)^4}{\sigma^4}. $$

Obviously, each tells about a different aspect of non-normality. Kurtosis, in particular, tells about fatness of distributional tails relative to the normal. A simple strategy is to check various implications of residual normality, such as S = 0 and K = 3, via informal examination of Ŝ and K̂.

5.1.3 The Jarque-Bera Test

The Jarque-Bera test (JB) effectively aggregates the information in the data about both skewness and kurtosis to produce an overall test of the joint hypothesis that S = 0 and K = 3, based upon Ŝ and K̂. The test statistic is

$$ JB = \frac{T}{6}\left[ \hat{S}^2 + \frac{1}{4}(\hat{K}-3)^2 \right]. $$


Under the null hypothesis of independent normally-distributed observations (S = 0, K = 3), JB is distributed in large samples as a χ² random variable with two degrees of freedom. (If the series being tested for normality is the residual from a model, then T can be replaced with T − K, where K is the number of parameters estimated, although the distinction is inconsequential asymptotically.)
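To make the calculation concrete, here is a minimal Python sketch of the JB statistic and its asymptotic χ²(2) p-value; the residual series below is simulated (fat-tailed by construction) and purely illustrative.

```python
import numpy as np
from scipy import stats

def jarque_bera(e):
    """Compute the JB statistic and its asymptotic chi-squared(2) p-value."""
    e = np.asarray(e, dtype=float)
    T = e.size
    S = stats.skew(e)                      # sample skewness, S-hat
    K = stats.kurtosis(e, fisher=False)    # sample kurtosis, K-hat (normal => 3)
    jb = (T / 6.0) * (S**2 + 0.25 * (K - 3.0)**2)
    p_value = stats.chi2.sf(jb, df=2)      # large JB => reject normality
    return jb, p_value

# Example with simulated fat-tailed residuals (hypothetical data):
rng = np.random.default_rng(0)
e = rng.standard_t(df=3, size=1000)        # heavier tails than the normal
print(jarque_bera(e))
```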

5.2 Outliers

Outliers refer to big disturbances (in population) or residuals (in sample). Outliers may emerge for a variety of reasons, and they may require special attention because they can have substantial influence on the fitted regression line. On the one hand, OLS retains its magic in such outlier situations – it is BLUE regardless of the disturbance distribution. On the other hand, the fully-optimal (MVUE) estimator may be highly non-linear, so the fact that OLS remains BLUE is less than fully comforting. Indeed OLS parameter estimates are particularly susceptible to distortions from outliers, because the quadratic least-squares objective really hates big errors (due to the squaring) and so goes out of its way to tilt the fitted surface in a way that minimizes them. How to identify and treat outliers is a time-honored problem in data analysis, and there's no easy answer. If an outlier is simply a data-recording mistake, then it may well be best to discard it if you can't obtain the correct data. On the other hand, every dataset, even a perfectly "clean" one, has a "most extreme observation," but it doesn't follow that it should be discarded. Indeed the most extreme observations are often the most informative – precise estimation requires data variation.


5.2.1 Outlier Detection

Graphics

One obvious way to identify outliers in bivariate regression situations is via graphics: one xy scatterplot can be worth a thousand words. In higher dimensions, the residual scatter (the plot of y against the fitted values ŷ) remains invaluable, as does the residual plot of y − ŷ.

Leave-One-Out and Leverage

Another way to identify outliers is a "leave-one-out" coefficient plot, where we use the computer to sweep through the sample, leaving out successive observations, and examining differences in parameter estimates with observation t in vs. out. That is, in an obvious notation, we examine and plot

$$ \hat{\beta}_{OLS}^{(-t)} - \hat{\beta}_{OLS}, \quad t = 1, ..., T. $$

It can be shown, however, that the change in β̂_OLS is

$$ \hat{\beta}_{OLS}^{(-t)} - \hat{\beta}_{OLS} = -\frac{1}{1-h_t}(X'X)^{-1} x_t' e_t, $$

where h_t is the t-th diagonal element of the "hat matrix," X(X'X)⁻¹X'. Hence the estimated coefficient change β̂_OLS^(−t) − β̂_OLS is driven by 1/(1 − h_t). h_t is called the time-t leverage. h_t can be shown to be in [0, 1], so that the larger is h_t, the larger is β̂_OLS^(−t) − β̂_OLS. Hence one really just needs to examine the leverage sequence, and scrutinize carefully observations with high leverage.
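The formula above is easy to compute directly. The following minimal Python sketch (with simulated, purely illustrative data) returns the leverage sequence h_t and the full set of leave-one-out coefficient changes.

```python
import numpy as np

def leverage_and_loo_changes(X, y):
    """Leverage h_t and leave-one-out coefficient changes, via the closed-form formula."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y                          # OLS estimate
    e = y - X @ beta                                  # residuals
    h = np.einsum('ti,ij,tj->t', X, XtX_inv, X)       # diagonal of X (X'X)^{-1} X'
    # column t of delta is beta^(-t) - beta = -(X'X)^{-1} x_t e_t / (1 - h_t)
    delta = -(XtX_inv @ X.T) * (e / (1.0 - h))
    return beta, h, delta.T                           # one row per left-out observation

# Hypothetical example data:
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(100), rng.normal(size=100)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=100)
beta, h, delta = leverage_and_loo_changes(X, y)
print(h.max(), delta[np.argmax(h)])                   # inspect the high-leverage observation
```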

5.3 Robust Estimation

Robust estimation provides a useful middle ground between completely discarding allegedly-outlying observations ("dummying them out") and doing nothing. Here we introduce two outlier-robust approaches to regression. The first involves OLS regression, but on weighted data, and the second involves switching from OLS to a different estimator.

5.3.1 Robustness Iteration

Fit at robustness iteration 0:

$$ \hat{y}^{(0)} = X\hat{\beta}^{(0)}, $$

where

$$ \hat{\beta}^{(0)} = \underset{\beta}{\text{argmin}} \left[ \sum_{t=1}^T (y_t - x_t'\beta)^2 \right]. $$

Robustness weight at iteration 1:

$$ \rho_t^{(1)} = S\left( \frac{e_t^{(0)}}{6 \, \text{med}|e_t^{(0)}|} \right), $$

where e_t^(0) = y_t − ŷ_t^(0), and S(z) is a function such that S(z) = 1 for z ∈ [−1, 1] but downweights outside that interval. Fit at robustness iteration 1:

$$ \hat{y}^{(1)} = X\hat{\beta}^{(1)}, $$

where

$$ \hat{\beta}^{(1)} = \underset{\beta}{\text{argmin}} \left[ \sum_{t=1}^T \rho_t^{(1)} (y_t - x_t'\beta)^2 \right]. $$

Continue as desired.
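The scheme is straightforward to code. The sketch below is one possible Python implementation; because the text leaves the downweighting function S unspecified beyond S(z) = 1 on [−1, 1], the particular S used here is an illustrative assumption, and the simulated data are hypothetical.

```python
import numpy as np

def S(z):
    """Illustrative downweighting function (an assumption): weight 1 on [-1, 1],
    decaying to 0 outside that interval."""
    z = np.abs(z)
    w = np.ones_like(z)
    outside = z > 1.0
    w[outside] = np.clip(1.0 - (z[outside] - 1.0), 0.0, None) ** 2
    return w

def robust_ols(X, y, n_iter=3):
    """Iteratively reweighted least squares following the scheme sketched above."""
    w = np.ones(len(y))
    for _ in range(n_iter + 1):
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)   # weighted LS fit
        e = y - X @ beta
        scale = 6.0 * np.median(np.abs(e))                 # 6 * med|e|
        w = S(e / scale)                                   # next-iteration weights
    return beta

# Hypothetical example with a few gross outliers:
rng = np.random.default_rng(5)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=200)
y[:5] += 25.0                                              # contaminate a few observations
print(robust_ols(X, y))                                    # closer to (1, 2) than plain OLS
```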

5.3.2 Least Absolute Deviations

Recall that the OLS estimator solves

$$ \min_{\beta} \sum_{t=1}^T (y_t - \beta_1 - \beta_2 x_{2t} - ... - \beta_K x_{Kt})^2. $$

Now we simply change the objective to

$$ \min_{\beta} \sum_{t=1}^T |y_t - \beta_1 - \beta_2 x_{2t} - ... - \beta_K x_{Kt}|, $$

or

$$ \min_{\beta} \sum_{t=1}^T |\varepsilon_t|. $$

That is, we change from squared-error loss to absolute-error loss. We call the new estimator "least absolute deviations" (LAD) and we write β̂_LAD. (Note that LAD regression is just quantile regression for d = .50.) By construction, β̂_LAD is not influenced by outliers as much as β̂_OLS. Put differently, LAD is more robust to outliers than is OLS. Of course nothing is free, and the price of LAD is a bit of extra computational complexity relative to OLS. In particular, the LAD estimator does not have a tidy closed-form analytical expression like OLS, so we can't just plug into a simple formula to obtain it. Instead we need to use the computer to find the optimal β directly. If that sounds complicated, rest assured that it's largely trivial using modern numerical methods, as embedded in modern software. (Indeed computation of the LAD estimator turns out to be a linear programming problem, which is well-studied and simple.) It is important to note that whereas OLS fits the conditional mean function,

$$ \text{mean}(y|X) = x\beta, $$

LAD fits the conditional median function (50% quantile),

$$ \text{median}(y|X) = x\beta. $$

The conditional mean and median are equal under symmetry and hence under normality, but not under asymmetry, in which case the median is a better measure of central tendency. Hence LAD delivers two kinds of robustness to non-normality: it is robust to outliers and robust to asymmetry.
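Because LAD is just quantile regression at the median, it is readily available in standard software. The following minimal Python sketch (simulated, right-skewed data; purely illustrative) contrasts the OLS conditional-mean fit with the LAD conditional-median fit.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

# Hypothetical right-skewed data, loosely in the spirit of the WAGE example.
rng = np.random.default_rng(3)
x = rng.uniform(0, 16, size=500)
y = 1.0 + 0.1 * x + rng.lognormal(mean=0.0, sigma=0.6, size=500)   # skewed errors

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()            # conditional-mean fit
lad = QuantReg(y, X).fit(q=0.5)     # conditional-median (LAD) fit

print(ols.params)   # pulled toward the long right tail
print(lad.params)   # more robust to the outlying observations
```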

5.4 Wage Determination

Here we show some empirical results that make use of the ideas sketched above. There are many tables and figures appearing at the end of the chapter. We do not refer to them explicitly, but all will be clear upon examination.

5.4.1 WAGE

We run WAGE → c, EDUC, EXPER. We show the regression results, the residual plot, the residual histogram and statistics, the residual Gaussian QQ plot, the leave-one-out plot, and the results of LAD estimation. The residual plot shows lots of positive outliers, and the residual histogram and Gaussian QQ plot indicate right-skewed residuals.

5.4.2 LWAGE

Now we run LWAGE → c, EDUC, EXPER. Again we show the regression results, the residual plot, the residual histogram and statistics, the residual Gaussian QQ plot, the leave-one-out plot, and the results of LAD estimation. Among other things, and in sharp contrast to the results for WAGE, the residual histogram and Gaussian QQ plot for LWAGE indicate approximate residual normality.

5.5 Exercises, Problems and Complements

1. (Taleb's The Black Swan)
Nassim Taleb is a financial markets trader turned pop author. His book, The Black Swan (Taleb (2007)), deals with many of the issues raised in this chapter. "Black swans" are seemingly impossible or very low-probability events – after all, swans are supposed to be white – that occur with annoying regularity in reality. Read his book. Where does your reaction fall on the spectrum from A to B below?
A. Taleb offers crucial lessons for econometricians, heightening awareness in ways otherwise difficult to achieve. After reading Taleb, it's hard to stop worrying about non-normality, model uncertainty, etc.
B. Taleb belabors the obvious for hundreds of pages, arrogantly "informing" us that non-normality is prevalent, that all models are misspecified, and so on. Moreover, it takes a model to beat a model, and Taleb offers little.

2. (Additional ways of quantifying "outliers")
(a) Consider the outlier probability, P(|y − µ| > 5σ) (there is of course nothing magical about our choice of 5). In practice we use a sample version of the population object.
(b) Consider the "tail index" γ such that P(y > y*) = k y*^(−γ). In practice we use a sample version of the population object.

3. ("Leave-one-out" coefficient plots)


Leave-one-out coefficient plots are more appropriate for cross-section data than for time-series data. Why? How might you adapt them to handle time-series data?

Figure 5.1: OLS Wage Regression

Figure 5.2: OLS Wage Regression: Residual Plot

Figure 5.3: OLS Wage Regression: Residual Histogram and Statistics

Figure 5.4: OLS Wage Regression: Residual Gaussian QQ Plot

Figure 5.5: OLS Wage Regression: Leave-One-Out Plot (Education coefficient vs. left-out observation t)

Figure 5.6: LAD Wage Regression

Figure 5.7: OLS Log Wage Regression

Figure 5.8: OLS Log Wage Regression: Residual Plot

Figure 5.9: OLS Log Wage Regression: Residual Histogram and Statistics

Figure 5.10: OLS Log Wage Regression: Residual Gaussian QQ Plot

Figure 5.11: OLS Log Wage Regression: Leave-One-Out Plot

Figure 5.12: LAD Log Wage Regression

Chapter 6

Group Heterogeneity and Indicator Variables

From one perspective we continue working under the full ideal conditions (FIC). From another, we now begin relaxing the FIC, effectively by recognizing RHS variables that were omitted from, but should not have been omitted from, our original wage regression.

6.1 0-1 Dummy Variables

A dummy variable, or indicator variable, is just a 0-1 variable that indicates something, such as whether a person is female, non-white, or a union member. We use dummy variables to account for such "group effects," if any. We might define the dummy UNION, for example, to be 1 if a person is a union member, and 0 otherwise. That is,

$$ UNION_t = \begin{cases} 1, & \text{if observation } t \text{ corresponds to a union member} \\ 0, & \text{otherwise.} \end{cases} $$

Figure 6.1: Histograms for Wage Covariates

In Figure 6.1 we show histograms and statistics for all potential determinants of wages. Education (EDUC) and experience (EXPER) are standard continuous variables, although we measure them only discretely (in years);

we have examined them before and there is nothing new to say. The new variables are 0-1 dummies, UNION (already defined) and NONWHITE, where

$$ NONWHITE_t = \begin{cases} 1, & \text{if observation } t \text{ corresponds to a non-white person} \\ 0, & \text{otherwise.} \end{cases} $$

Note that the sample mean of a dummy variable is the fraction of the sample with the indicated attribute. The histograms indicate that roughly one-fifth of people in our sample are union members, and roughly one-fifth are non-white. We also have a third dummy, FEMALE, where

$$ FEMALE_t = \begin{cases} 1, & \text{if observation } t \text{ corresponds to a female} \\ 0, & \text{otherwise.} \end{cases} $$

We don't show its histogram because it's obvious that FEMALE should be approximately 0 w.p. 1/2 and 1 w.p. 1/2, which it is.


Sometimes dummies like UNION, NONWHITE and FEMALE are called intercept dummies, because they effectively allow for a different intercept for each group (union vs. non-union, non-white vs. white, female vs. male). The regression intercept corresponds to the “base case” (zero values for all dummies) and the dummy coefficients give the extra effects when the respective dummies equal one. For example, in a wage regression with an intercept and a single dummy (UNION, say), the intercept corresponds to non-union members, and the estimated coefficient on UNION is the extra effect (up or down) on LWAGE accruing to union members. Alternatively, we could define and use a full set of dummies for each category (e.g., include both a union dummy and a non-union dummy) and drop the intercept, reading off the union and non-union effects directly. In any event, never include a full set of dummies and an intercept. Doing so would be redundant because the sum of a full set of dummies is just a unit vector, but that’s what the intercept is. If an intercept is included, one of the dummy categories must be dropped.
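A minimal Python sketch of an intercept-dummy regression follows; the data are simulated stand-ins for the wage data (the numerical values are hypothetical), and the key point is that we include the UNION dummy plus an intercept, rather than a full set of dummies plus an intercept.

```python
import numpy as np
import statsmodels.api as sm

# Simulated, hypothetical stand-in for the wage data (for illustration only).
rng = np.random.default_rng(4)
n = 1000
educ = rng.integers(8, 18, size=n)
union = rng.binomial(1, 0.2, size=n)
lwage = 1.0 + 0.10 * educ + 0.15 * union + rng.normal(scale=0.4, size=n)

# Intercept dummy: include UNION and an intercept, but NOT a non-union dummy too
# (a full set of dummies plus an intercept would be perfectly collinear).
X = sm.add_constant(np.column_stack([educ, union]))
res = sm.OLS(lwage, X).fit()
print(res.params)   # const = base case (non-union); last coefficient = union effect
```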

6.2 Group Dummies in the Wage Regression

Recall our basic wage regression, LWAGE → c, EDUC, EXPER, shown in Figure 6.2. Both explanatory variables are highly significant, with expected signs. Now consider the same regression, but with our three group dummies added, as shown in Figure 6.3. All dummies are significant with the expected signs, and R² is higher. Both SIC and AIC favor including the group dummies. We show the residual scatter in Figure 6.4. Of course it's hardly the forty-five degree line (the regression R² is higher but still only .31), but it's getting closer.

Figure 6.2: Wage Regression on Education and Experience

Figure 6.3: Wage Regression on Education, Experience and Group Dummies

Figure 6.4: Residual Scatter from Wage Regression on Education, Experience and Group Dummies


6.3 Exercises, Problems and Complements

1. (Slope dummies)
Consider the regression y_t = β1 + β2 x_t + ε_t. The dummy variable model as introduced in the text generalizes the intercept term such that it can change across groups. Instead of writing the intercept as β1, we write it as β1 + δD_t. We can also allow slope coefficients to vary with groups. Instead of writing the slope as β2, we write it as β2 + γD_t. Hence to capture slope variation across groups we regress not only on an intercept and x, but also on D*x. Allowing for both intercept and slope variation across groups corresponds to regressing on an intercept, D, x, and D*x.

2. (Dummies vs. separate regression)
Consider the simple regression, y_t → c, x_t.
(a) How is inclusion of a group G intercept dummy related to the idea of running separate regressions, one for G and one for non-G? Are the two strategies equivalent? Why or why not?
(b) How is inclusion of group G intercept and slope dummies related to the idea of running separate regressions, one for G and one for non-G? Are the two strategies equivalent? Why or why not?

3. (Analysis of variance (ANOVA) and dummy variable regression)
[You should have learned about analysis of variance (ANOVA) in your earlier statistics course. In any event there's good news: If you understand regression on dummy variables, you understand analysis of variance (ANOVA), as any ANOVA analysis can be done via regression on dummies. So here we go.]
You treat each of 1000 randomly-selected farms that presently use no fertilizer. You either do nothing, or you apply one of four experimental fertilizers, A, B, C or D. Using a dummy variable regression setup:
(a) How would you test the hypothesis that none of the four new fertilizers is effective?


(b) Assuming that you reject the null, how would you estimate the improvement (or worsening) due to using fertilizer A, B, C or D?

6.4 Notes

ANOVA traces to Sir Ronald Fisher's 1918 article, "The Correlation Between Relatives on the Supposition of Mendelian Inheritance," and it was featured prominently in his classic 1925 book, Statistical Methods for Research Workers. Fisher is in many ways the "father" of much of modern statistics.

6.5 Dummy Variables, ANOVA, and Sir Ronald Fisher

Figure 6.5: Sir Ronald Fisher

Photo credit: From Wikimedia Commons. Source: http://www.swlearning.com/quant/kohler/stat/biographical_sketches/Fisher_3.jpeg. Rationale: Photographer died >70 yrs ago => PD. Date: 2008-05-30 (original upload date). Source: Transferred from en.wikipedia. Author: Original uploader was Bletchley at en.wikipedia. Permission (Reusing this file): Released under the GNU Free Documentation License; PD-OLD-70. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled GNU Free Documentation License.

Chapter 7

Nonlinearity

In general there is no reason why the conditional mean function should be linear. That is, the appropriate functional form may not be linear. Whether linearity provides an adequate approximation is an empirical matter. Non-linearity is related to non-normality, which we studied in Chapter 5. In particular, in the multivariate normal case, the conditional mean function is linear in the conditioning variables. But once we leave the terra firma of multivariate normality, anything goes. The conditional mean function and disturbances may be linear and Gaussian, non-linear and Gaussian, linear and non-Gaussian, or non-linear and non-Gaussian. In the Gaussian case, because the conditional mean is a linear function of the conditioning variable(s), it coincides with the linear projection. In non-Gaussian cases, however, linear projections are best viewed as approximations to generally non-linear conditional mean functions. That is, we can view the linear regression model as a linear approximation to a generally non-linear conditional mean function. Sometimes the linear approximation may be adequate, and sometimes not.

7.1 Models Linear in Transformed Variables

Models can be non-linear but nevertheless linear in non-linearly-transformed variables. A leading example involves logarithms, to which we now turn. This can be very convenient. Moreover, coefficient interpretations are special, and similarly convenient.

7.1.1 Logarithms

Logs turn multiplicative models additive, and they neutralize exponentials. Logarithmic models, although non-linear, are nevertheless "linear in logs." In addition to turning certain non-linear models linear, they can be used to enforce non-negativity of a left-hand-side variable and to stabilize a disturbance variance. (More on that later.)

Log-Log Regression

First, consider log-log regression. We write it out for the simple regression case, but of course we could have more than one regressor. We have

$$ \ln y_t = \beta_1 + \beta_2 \ln x_t + \varepsilon_t. $$

y_t is a non-linear function of x_t, but the function is linear in logarithms, so that ordinary least squares may be applied. To take a simple example, consider a Cobb-Douglas production function with output a function of labor and capital,

$$ y_t = A L_t^{\alpha} K_t^{\beta} \exp(\varepsilon_t). $$

Direct estimation of the parameters A, α, β would require special techniques. Taking logs, however, yields

$$ \ln y_t = \ln A + \alpha \ln L_t + \beta \ln K_t + \varepsilon_t. $$


This transformed model can be immediately estimated by ordinary least squares. We simply regress ln y_t on an intercept, ln L_t and ln K_t. Such log-log regressions often capture relevant non-linearities, while nevertheless maintaining the convenience of ordinary least squares. Note that the estimated intercept is an estimate of ln A (not A, so if you want an estimate of A you must exponentiate the estimated intercept), and the other estimated parameters are estimates of α and β, as desired. Recall that for close y_t and x_t, (ln y_t − ln x_t) is approximately the percent difference between y_t and x_t. Hence the coefficients in log-log regressions give the expected percent change in E(y_t|x_t) for a one-percent change in x_t, the so-called elasticity of y_t with respect to x_t.

Log-Lin Regression

Second, consider log-lin regression, in which ln y_t = β x_t + ε. We have a log on the left but not on the right. The classic example involves the workhorse model of exponential growth:

$$ y_t = A e^{rt}. $$

It's non-linear due to the exponential, but taking logs yields ln y_t = ln A + r t, which is linear. The growth rate r gives the approximate percent change in E(y_t|t) for a one-unit change in time (because logs appear only on the left).

Lin-Log Regression

Finally, consider lin-log regression: y_t = β ln x_t + ε. It's a bit exotic but it sometimes arises. β gives the effect on E(y_t|x_t) of a one-percent change in x_t, because logs appear only on the right.
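As an illustration of the log-log case, the following minimal Python sketch estimates a Cobb-Douglas production function by OLS on simulated data (the "true" parameter values are hypothetical), recovering the elasticities directly and A by exponentiating the estimated intercept.

```python
import numpy as np
import statsmodels.api as sm

# Simulated Cobb-Douglas data (hypothetical parameter values A=2, alpha=0.6, beta=0.3).
rng = np.random.default_rng(5)
T = 300
L = rng.lognormal(size=T)
K = rng.lognormal(size=T)
y = 2.0 * L**0.6 * K**0.3 * np.exp(rng.normal(scale=0.1, size=T))

# Log-log regression: ln y on an intercept, ln L and ln K.
X = sm.add_constant(np.column_stack([np.log(L), np.log(K)]))
res = sm.OLS(np.log(y), X).fit()

alpha_hat, beta_hat = res.params[1], res.params[2]   # estimated elasticities
A_hat = np.exp(res.params[0])                        # exponentiate the intercept to recover A
print(A_hat, alpha_hat, beta_hat)
```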

7.1.2 Box-Cox and GLM

Box-Cox

The Box-Cox transformation generalizes log-lin regression. We have

$$ B(y_t) = \beta_1 + \beta_2 x_t + \varepsilon_t, $$

where

$$ B(y_t) = \frac{y_t^{\lambda} - 1}{\lambda}. $$

Hence E(y_t|x_t) = B⁻¹(β1 + β2 x_t). Because

$$ \lim_{\lambda \to 0} \left( \frac{y_t^{\lambda} - 1}{\lambda} \right) = \ln(y_t), $$

the Box-Cox model corresponds to the log-lin model in the special case of λ = 0.

GLM

The so-called "generalized linear model" (GLM) provides an even more flexible framework. Almost all models with left-hand-side variable transformations are special cases of those allowed in the generalized linear model (GLM). In the GLM, we have

$$ G(y_t) = \beta_1 + \beta_2 x_t + \varepsilon_t, $$

so that

$$ E(y_t|x_t) = G^{-1}(\beta_1 + \beta_2 x_t). $$

Wide classes of "link functions" G can be entertained. Log-lin regression, for example, emerges when G(y_t) = ln(y_t), and Box-Cox regression emerges when G(y_t) = (y_t^λ − 1)/λ.
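A quick numerical check of the λ → 0 limit is easy; the following minimal Python sketch implements the Box-Cox transformation and verifies that a small λ is numerically close to the logarithm.

```python
import numpy as np

def box_cox(y, lam):
    """Box-Cox transformation B(y) = (y**lam - 1) / lam, with the log limit at lam = 0."""
    y = np.asarray(y, dtype=float)
    if lam == 0.0:
        return np.log(y)
    return (y**lam - 1.0) / lam

y = np.array([0.5, 1.0, 2.0, 4.0])
print(box_cox(y, 0.001))   # numerically close to...
print(np.log(y))           # ...the log-lin (lambda -> 0) special case
```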

7.2 Intrinsically Non-Linear Models

Sometimes we encounter intrinsically non-linear models. That is, there is no way to transform them to linearity, so that they can then be estimated simply by least squares, as we have always done so far. As an example, consider the logistic model,

$$ y = \frac{1}{a + b r^x}, $$

with 0 < r < 1. The precise shape of the logistic curve of course depends on the precise values of a, b and r, but its "S-shape" is often useful. The key point for our present purposes is that there is no simple transformation of y that produces a model linear in the transformed variables.

7.2.1 Nonlinear Least Squares

The least squares estimator is often called “ordinary” least squares, or OLS. As we saw earlier, the OLS estimator has a simple closed-form analytic expression, which makes it trivial to implement on modern computers. Its computation is fast and reliable. The adjective “ordinary” distinguishes ordinary least squares from more laborious strategies for finding the parameter configuration that minimizes the sum of squared residuals, such as the non-linear least squares (NLS) estimator. When we estimate by non-linear least squares, we use a computer to find the minimum of the sum of squared residual function directly, using numerical methods, by literally trying many (perhaps hundreds or even thousands) of different β values until we find those that appear to minimize the sum of squared residuals. This is not only more laborious (and hence slow), but also less reliable, as, for example, one may arrive at a minimum that is local but not global. Why then would anyone ever use non-linear least squares as opposed to


OLS? Indeed, when OLS is feasible, we generally do prefer it. For example, in all regression models discussed thus far OLS is applicable, so we prefer it. Intrinsically non-linear models can't be estimated using OLS, but they can be estimated using non-linear least squares, so we resort to non-linear least squares in such cases. Intrinsically non-linear models obviously violate the linearity assumption of the IC. But the violation is not a big deal. Under the remaining IC (that is, dropping only linearity), β̂_NLS has a sampling distribution similar to that under the IC.
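As a concrete illustration of the numerical search that NLS entails, the following minimal Python sketch fits the logistic model above by non-linear least squares on simulated data (the "true" parameter values and starting values are illustrative assumptions).

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, a, b, r):
    """The intrinsically non-linear logistic model y = 1 / (a + b * r**x)."""
    return 1.0 / (a + b * r**x)

# Simulated data from the model (hypothetical true values a=1, b=8, r=0.5).
rng = np.random.default_rng(6)
x = np.linspace(0, 10, 200)
y = logistic(x, 1.0, 8.0, 0.5) + rng.normal(scale=0.02, size=x.size)

# Non-linear least squares: numerical search for (a, b, r) minimizing the SSR.
params, cov = curve_fit(logistic, x, y, p0=[1.0, 5.0, 0.6])
print(params)   # starting values matter; a poor p0 can land in a local minimum
```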

7.3 Series Expansions

There is really no such thing as an intrinsically non-linear model. In the bivariate case we can think of the relationship as y_t = g(x_t, ε_t), or slightly less generally as y_t = f(x_t) + ε_t. First consider Taylor series expansions of f(x_t). The linear (first-order) approximation is

$$ f(x_t) \approx \beta_1 + \beta_2 x_t, $$

and the quadratic (second-order) approximation is

$$ f(x_t) \approx \beta_1 + \beta_2 x_t + \beta_3 x_t^2. $$

In the multiple regression case, Taylor approximations also involve interaction terms.


Consider, for example, f(x_t, z_t):

$$ f(x_t, z_t) \approx \beta_1 + \beta_2 x_t + \beta_3 z_t + \beta_4 x_t^2 + \beta_5 z_t^2 + \beta_6 x_t z_t + .... $$

Such interaction effects are also relevant in situations involving dummy variables. There we capture interactions by including products of dummies. (Notice that a product of dummies is one if and only if both individual dummies are one.) The ultimate point is that so-called "intrinsically non-linear" models are themselves linear when viewed from the series-expansion perspective. In principle, of course, an infinite number of series terms are required, but in practice nonlinearity is often quite gentle (e.g., quadratic) so that only a few series terms are required. From this viewpoint non-linearity is in some sense really an omitted-variables problem. One can also use Fourier series approximations:

$$ f(x_t) \approx \beta_1 + \beta_2 \sin(x_t) + \beta_3 \cos(x_t) + \beta_4 \sin(2x_t) + \beta_5 \cos(2x_t) + ..., $$

and one can also mix Taylor and Fourier approximations by regressing not only on powers and cross products ("Taylor terms"), but also on various sines and cosines ("Fourier terms"). Mixing may facilitate parsimony.
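The series-expansion idea amounts to constructing additional regressors and then running OLS, as in the following minimal Python sketch (simulated data; the particular DGP and the chosen expansion terms are illustrative assumptions).

```python
import numpy as np
import statsmodels.api as sm

# Build second-order "Taylor terms" (powers and a cross product) plus a couple of
# "Fourier terms", then let OLS fit the expanded, still-linear-in-parameters model.
rng = np.random.default_rng(7)
T = 400
x, z = rng.normal(size=T), rng.normal(size=T)
y = np.sin(x) + 0.5 * x * z + rng.normal(scale=0.1, size=T)   # a non-linear DGP

terms = np.column_stack([
    x, z,                    # linear terms
    x**2, z**2, x * z,       # quadratic and interaction (Taylor) terms
    np.sin(x), np.cos(x),    # low-order Fourier terms
])
res = sm.OLS(y, sm.add_constant(terms)).fit()
print(res.rsquared)          # the expansion approximates the non-linear mean well
```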

7.4 A Final Word on Nonlinearity and the IC

It is of interest to step back and ask what parts of the IC are violated in our various non-linear models. Models linear in transformed variables (e.g., log-log regression) actually don't violate the IC, after transformation. Neither do series expansion models, if the adopted expansion order is deemed correct, because they too are linear in transformed variables. The series approach to handling non-linearity is actually very general and handles intrinsically non-linear models as well, and low-ordered expansions are often adequate in practice, even if an infinite expansion is required in theory. If series terms are needed, a purely linear model would suffer from misspecification of the X matrix (a violation of the IC) due to the omitted higher-order expansion terms. Hence the failure of the IC discussed in this chapter can be viewed either as:

1. The linearity assumption (E(y|X) = X'β) is incorrect, or

2. The linearity assumption (E(y|X) = X'β) is correct, but the assumption that X is correctly specified (i.e., no omitted variables) is incorrect, due to the omitted higher-order expansion terms.

7.5 Selecting a Non-Linear Model

7.5.1 t and F Tests, and Information Criteria

One can use the usual t and F tests for testing linear models against non-linear alternatives in nested cases, and information criteria (AIC and SIC) for testing against non-linear alternatives in non-nested cases. To test linearity against a quadratic alternative in a simple regression case, for example, we can simply run y → c, x, x² and perform a t-test for the relevance of x². And of course, use AIC and SIC as always.

7.5.2 The RESET Test

Direct inclusion of powers and cross products of the various X variables in the regression can be wasteful of degrees of freedom, however, particularly if there are more than just one or two right-hand-side variables in the regression and/or if the non-linearity is severe, so that fairly high powers and interactions would be necessary to capture it. In light of this, a useful strategy is first to fit a linear regression y_t → c, X_t and obtain the fitted values ŷ_t. Then, to test for non-linearity, we run the regression again with various powers of ŷ_t included,

$$ y_t \rightarrow c, X_t, \hat{y}_t^2, ..., \hat{y}_t^m. $$

Note that the powers of ŷ_t are linear combinations of powers and cross products of the X variables – just what the doctor ordered. There is no need to include the first power of ŷ_t, because that would be redundant with the included X variables. Instead we include powers ŷ_t², ŷ_t³, .... Typically a small m is adequate. Significance of the included set of powers of ŷ_t can be checked using an F test. This procedure is called RESET (Regression Specification Error Test).
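The RESET procedure is easy to code directly, as in the following minimal Python sketch (simulated data; the quadratic DGP is an illustrative assumption): fit the linear model, add powers of the fitted values, and F-test their joint significance.

```python
import numpy as np
import statsmodels.api as sm

def reset_test(y, X, max_power=3):
    """RESET: add yhat^2, ..., yhat^m to the regression and F-test their joint significance."""
    restricted = sm.OLS(y, X).fit()
    yhat = restricted.fittedvalues
    powers = np.column_stack([yhat**p for p in range(2, max_power + 1)])
    unrestricted = sm.OLS(y, np.column_stack([X, powers])).fit()
    f_stat, p_value, df_diff = unrestricted.compare_f_test(restricted)
    return f_stat, p_value

# Hypothetical example: a truly quadratic relationship fit by a linear model.
rng = np.random.default_rng(8)
x = rng.normal(size=300)
y = 1.0 + x + 0.5 * x**2 + rng.normal(scale=0.5, size=300)
X = sm.add_constant(x)
print(reset_test(y, X))   # a small p-value flags the neglected non-linearity
```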

7.6 Non-Linearity in Wage Determination

For convenience we reproduce in Figure 7.1 the results of our current linear wage regression, LWAGE → c, EDUC, EXPER, FEMALE, UNION, NONWHITE. The RESET test from that regression suggests neglected non-linearity; the p-value is .03 when using ŷ_t² and ŷ_t³ in the RESET test regression.

Non-Linearity in EDUC and EXPER: Powers and Interactions

Given the results of the RESET test, we proceed to allow for non-linearity. In Figure 7.2 we show the results of the quadratic regression LWAGE → EDUC, EXPER, EDUC², EXPER², EDUC*EXPER, FEMALE, UNION, NONWHITE. Two of the non-linear effects are significant. The impact of experience is decreasing, and experience seems to trade off with education, insofar as the interaction is negative.

Figure 7.1: Basic Linear Wage Regression

Non-Linearity in FEMALE, UNION and NONWHITE: Interactions

Just as continuous variables like EDUC and EXPER may interact (and we found that they do), so too may discrete dummy variables. For example, the wage effect of being female and non-white might not simply be the sum of the individual effects. We would estimate it as the sum of coefficients on the individual dummies FEMALE and NONWHITE plus the coefficient on the interaction dummy FEMALE*NONWHITE. In Figure 7.3 we show results for LWAGE → EDUC, EXPER, FEMALE, UNION, NONWHITE, FEMALE*UNION, FEMALE*NONWHITE, UNION*NONWHITE. The dummy interactions are insignificant.

Figure 7.2: Quadratic Wage Regression

7.6.1 Non-Linearity in Continuous and Discrete Variables Simultaneously

Now let's incorporate powers and interactions in EDUC and EXPER, and interactions in FEMALE, UNION and NONWHITE. In Figure 7.4 we show results for LWAGE → EDUC, EXPER, EDUC², EXPER², EDUC*EXPER, FEMALE, UNION, NONWHITE, FEMALE*UNION, FEMALE*NONWHITE, UNION*NONWHITE. The dummy interactions remain insignificant. Note that we could explore additional interactions among EDUC, EXPER and the various dummies. We leave that to the reader. Assembling all the results, our tentative "best" model thus far is that of section 7.6, LWAGE → EDUC, EXPER, EDUC², EXPER², EDUC*EXPER, FEMALE, UNION, NONWHITE.

Figure 7.3: Wage Regression on Education, Experience, Group Dummies, and Interactions

The RESET statistic has a p-value of .19, so we would not reject adequacy of functional form at conventional levels.

Figure 7.4: Wage Regression with Continuous Non-Linearities and Interactions, and Discrete Interactions

7.7 Exercises, Problems and Complements

1. (Tax revenue and the tax rate)
The U.S. Congressional Budget Office (CBO) is helping the president to set tax policy. In particular, the president has asked for advice on where to set the average tax rate to maximize the tax revenue collected per taxpayer. For each of 65 countries the CBO has obtained data on the tax revenue collected per taxpayer and the average tax rate.
(a) Is tax revenue likely related to the tax rate? (That is, do you think that the mean of tax revenue conditional on the tax rate actually is a function of the tax rate?)
(b) Is the relationship likely linear? (Hint: how much revenue would be collected at tax rates of zero or one hundred percent?)
(c) If not, is a linear regression nevertheless likely to produce a good approximation to the true relationship?

2. (Graphical regression diagnostic: scatterplot of e_t vs. x_t)
This plot helps us assess whether the relationship between y and the set of x's is truly linear, as assumed in linear regression analysis. If not, the linear regression residuals will depend on x. In the case where there is only one right-hand side variable, as above, we can simply make a scatterplot of e_t vs. x_t. When there is more than one right-hand side variable, we can make separate plots for each, although the procedure loses some of its simplicity and transparency.


3. (Difficulties with non-linear optimization)
Non-linear optimization can be a tricky business, fraught with problems. Some problems are generic. It's relatively easy to find a local optimum, for example, but much harder to be confident that the local optimum is global. Simple checks such as trying a variety of startup values and checking the optimum to which convergence occurs are used routinely, but the problem nevertheless remains. Other problems may be software specific. For example, some software may use highly accurate analytic derivatives whereas other software uses approximate numerical derivatives. Even the same software package may change algorithms or details of implementation across versions, leading to different results.

4. (Conditional mean functions)
Consider the regression model,

$$ y_t = \beta_1 + \beta_2 x_t + \beta_3 x_t^2 + \beta_4 z_t + \varepsilon_t, $$

under the full ideal conditions. Find the mean of y_t conditional upon x_t = x_t* and z_t = z_t*. Is the conditional mean linear in (x_t*, z_t*)?

5. (OLS vs. NLS)
Consider the following three regression models:

$$ y_t = \beta_1 + \beta_2 x_t + \varepsilon_t $$

$$ y_t = \beta_1 e^{\beta_2 x_t} \varepsilon_t $$

$$ y_t = \beta_1 + e^{\beta_2 x_t} + \varepsilon_t. $$

a. For each model, determine whether OLS may be used for estimation (perhaps after transforming the data), or whether NLS is required.


b. For those models for which OLS is feasible, do you expect NLS and OLS estimation results to agree precisely? Why or why not?
c. For those models for which NLS is "required," show how to avoid it using series expansions.

6. (What is linear regression really estimating?)
It is important to note the distinction between a conditional mean and a linear projection. The conditional mean is not necessarily a linear function of the conditioning variable(s). The linear projection is of course a linear function of the conditioning variable(s), by construction. Linear projections are best viewed as approximations to generally non-linear conditional mean functions. That is, we can view an empirical linear regression as estimating the population linear projection, which in turn is an approximation to the population conditional expectation. Sometimes the linear projection may be an adequate approximation, and sometimes not.

7. (Putting lots of things together)
Consider the cross-sectional (log) wage equation that we studied extensively, which appears again in Figure 7.5 for your reference.


(a) The model was estimated using ordinary least squares (OLS). What loss function is optimized in calculating the OLS estimate? (Give a formula and a graph.) What is the formula (if any) for the OLS estimator?
(b) Consider instead estimating the same model numerically (i.e., by NLS) rather than analytically (i.e., by OLS). What loss function is optimized in calculating the NLS estimate? (Give a formula and a graph.) What is the formula (if any) for the NLS estimator?
(c) Does the estimated equation indicate a statistically significant effect of union status on log wage? An economically important effect? What is the precise interpretation of the estimated coefficient on UNION? How would the interpretation change if the wage were not logged?
(d) Precisely what hypothesis does the F-statistic test? What are the restricted and unrestricted sums of squared residuals to which it is related, and what are the two OLS regressions to which they correspond?
(e) Consider an additional regressor, AGE, where AGE = 6 + EDUC + EXPER. (The idea is that 6 years of early childhood, followed by EDUC years of education, followed by EXPER years of work experience should, under certain assumptions, sum to a person's age.) Discuss the likely effects, if any, of adding AGE to the regression.
(f) The log wage may of course not be linear in EDUC and EXPER. How would you assess the possibility of quadratic nonlinear effects using t-tests? An F-test? The Schwarz criterion (SIC)? R²?
(g) Suppose you find that the log wage relationship is indeed non-linear but still very simple, with only EXPER² entering in addition to the variables in Figure 7.5. What is ∂E(LWAGE|X)/∂EXPER in the expanded model? How does it compare to ∂E(LWAGE|X)/∂EXPER in the original model of Figure 7.5? What are the economic interpretations of the two derivatives? (X refers to the full set of included right-hand-side variables in a regression.)
(h) Return to the original model of Figure 7.5. How would you assess the overall adequacy of the fitted model using the Durbin-Watson statistic? The standard error of the regression? The model residuals? Which is likely to be most useful/informative?
(i) Consider estimating the model not by OLS or NLS, but rather by quantile regression (QR). What loss function is optimized in calculating the QR estimate? (Give a formula and a graph.) What is the formula (if any) for the QR estimator? How is the least absolute deviations (LAD) estimator related to the QR estimator? Under the IC, are the OLS and LAD estimates likely very close? Why or why not?
(j) Discuss whether and how you would incorporate trend and seasonality by using a linear time trend variable and a set of seasonal dummy variables.

Figure 7.5: Regression Output

7.8 Notes

Chapter 8

Heteroskedasticity

We continue exploring issues associated with possible failure of the ideal conditions. This chapter's issue is "Do we really believe that disturbance variances are constant?" As always, consider ε ∼ N(0, Ω). Heteroskedasticity corresponds to Ω diagonal but Ω ≠ σ²I:

$$ \Omega = \begin{pmatrix} \sigma_1^2 & 0 & \cdots & 0 \\ 0 & \sigma_2^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_T^2 \end{pmatrix}. $$

Heteroskedasticity can arise for many reasons. A leading cause is that σ_t² may depend on one or more of the x_t's. A classic example is an "Engel curve", a regression relating food expenditure to income. Wealthy people have much more discretion in deciding how much of their income to spend on food, so their disturbances should be more variable, as routinely found.

8.1 Consequences of Heteroskedasticity for Estimation, Inference, and Prediction

As regards point estimation, OLS remains largely OK, insofar as parameter estimates remain consistent and asymptotically normal. They are, however, rendered inefficient. But consistency is key. Inefficiency is typically inconsequential in large samples, as long as we have consistency. As regards inference, however, heteroskedasticity wreaks significant havoc. Standard errors are biased and inconsistent. Hence t statistics do not have the t distribution in finite samples and do not even have the N(0, 1) distribution asymptotically. Finally, as regards prediction, results vary depending on whether we're talking about point or density prediction. Our earlier feasible point forecasts constructed under homoskedasticity remain useful under heteroskedasticity. Because parameter estimates remain consistent, we still have

$$ \widehat{E(y_t \mid x_t = x_t^*)} \rightarrow_p E(y_t \mid x_t = x_t^*). $$

In contrast, our earlier feasible density forecasts do not remain useful, because under heteroskedasticity it is no longer appropriate to base them on "identical σ's for different people". Now we need to base them on "different σ's for different people".

8.2 Detecting Heteroskedasticity

We will consider both graphical heteroskedasticity diagnostics and formal heteroskedasticity tests. The two approaches are complements, not substitutes.

8.2.1 Graphical Diagnostics

The first thing we can do is graph e_t² against x_t, for various regressors, looking for relationships. This makes sense because e_t² is effectively a proxy for σ_t². Recall, for example, our "Final" wage regression, shown in Figure 8.1.

Figure 8.1: Final Wage Regression

In Figure 8.2 we graph the squared residuals against EDUC. There is apparently a positive relationship, although it is noisy. This makes sense, because very low education almost always leads to very low wage, whereas high education can produce a larger variety of wages (e.g., both neurosurgeons and college professors are highly educated, but neurosurgeons typically earn much more).

Figure 8.2: Squared Residuals vs. Years of Education

8.2.2 Formal Tests

The Breusch-Pagan-Godfrey Test (BPG)

An important limitation of the graphical method for heteroskedasticity detection is that it is purely pairwise (we can only examine one x at a time), whereas the disturbance variance might actually depend on more than one x. Formal tests let us blend the information from multiple x's, and they also let us assess statistical significance. The BPG test proceeds as follows:

1. Estimate the OLS regression, and obtain the squared residuals.

2. Regress the squared residuals on all regressors.

3. To test the null hypothesis of no relationship, examine T·R² from this regression. It can be shown that in large samples T·R² ∼ χ²_{K−1} under the null of homoskedasticity, where K is the number of regressors in the test regression.

We show the BPG test results in Figure 8.3.

Figure 8.3: BPG Test Regression and Results
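The three steps translate directly into code, as in the following minimal Python sketch (simulated heteroskedastic data; purely illustrative). The White test differs only in the regressors used in step 2.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

def bpg_test(y, X):
    """Breusch-Pagan-Godfrey: regress squared OLS residuals on the regressors, use T*R^2."""
    e2 = sm.OLS(y, X).fit().resid ** 2       # step 1: squared residuals
    aux = sm.OLS(e2, X).fit()                # step 2: auxiliary regression on all regressors
    T = len(y)
    lm = T * aux.rsquared                    # step 3: T * R^2
    df = X.shape[1] - 1                      # K - 1 (excluding the intercept)
    return lm, stats.chi2.sf(lm, df)

# Hypothetical heteroskedastic data: the disturbance variance grows with x.
rng = np.random.default_rng(9)
x = rng.uniform(1, 10, size=500)
y = 1.0 + 0.5 * x + rng.normal(scale=0.2 * x)    # sigma_t depends on x_t
X = sm.add_constant(x)
print(bpg_test(y, X))    # a small p-value rejects homoskedasticity
```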

White's Test

White's test is a simple extension of BPG, replacing the linear BPG test regression with a more flexible (quadratic) regression:

1. Estimate the OLS regression, and obtain the squared residuals.

2. Regress the squared residuals on all regressors, squared regressors, and pairwise regressor cross products.


3. To test the null hypothesis of no relationship, examine T·R² from this regression. It can be shown that in large samples T·R² ∼ χ²_{K−1} under the null of homoskedasticity, where K is the number of regressors in the test regression.

We show the White test results in Figure 8.4.

Figure 8.4: White Test Regression and Results

8.3 Dealing with Heteroskedasticity

We will consider both adjusting standard errors and adjusting density forecasts.

8.3.1 Adjusting Standard Errors

Using advanced methods, one can obtain consistent standard errors even when heteroskedasticity is present. Mechanically, it's just a simple regression option; in EViews, for example, instead of "ls y,c,x", use "ls(cov=white) y,c,x". Even if you're only interested in prediction, you still might want to use robust standard errors, in order to do credible inference regarding the contributions of the various x variables to the point prediction. In Figure 8.5 we show the final wage regression with robust standard errors. Although the exact values of the standard errors change, it happens in this case that significance of all coefficients is preserved.
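In Python, robust standard errors are likewise just an estimation option, as in the following minimal sketch (simulated heteroskedastic data; purely illustrative). The point estimates are unchanged; only the standard errors differ.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical heteroskedastic data, as in the BPG sketch above.
rng = np.random.default_rng(10)
x = rng.uniform(1, 10, size=500)
y = 1.0 + 0.5 * x + rng.normal(scale=0.2 * x)
X = sm.add_constant(x)

ordinary = sm.OLS(y, X).fit()                 # conventional standard errors
robust = sm.OLS(y, X).fit(cov_type="HC1")     # White-style robust standard errors

print(ordinary.bse)   # biased/inconsistent under heteroskedasticity
print(robust.bse)     # consistent; coefficient estimates identical in both fits
```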

8.3.2 Adjusting Density Forecasts

Recall the operational density forecast under the ideal conditions (which include, among other things, Gaussian homoskedastic disturbances):

$$ y_t \mid x_t = x^* \sim N\left(x^{*\prime}\hat{\beta}_{LS}, \, s^2\right). $$

Now, under heteroskedasticity (but maintaining normality), we have the natural extension,

$$ y_t \mid x_t = x^* \sim N\left(x^{*\prime}\hat{\beta}_{LS}, \, \hat{\sigma}_*^2\right), $$

where σ̂*² is the fitted value from the BPG or White test regression evaluated at x*.
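The following minimal Python sketch (continuing the simulated heteroskedastic example; purely illustrative) forms such a density forecast: the point forecast comes from the main regression, and the forecast variance is the fitted value from the auxiliary squared-residual regression evaluated at x*.

```python
import numpy as np
import statsmodels.api as sm

# Simulated heteroskedastic data, as before (hypothetical).
rng = np.random.default_rng(11)
x = rng.uniform(1, 10, size=500)
y = 1.0 + 0.5 * x + rng.normal(scale=0.2 * x)
X = sm.add_constant(x)

main = sm.OLS(y, X).fit()
varreg = sm.OLS(main.resid ** 2, X).fit()          # BPG-style regression of e^2 on X

x_star = np.array([1.0, 8.0])                      # intercept and x* = 8
point = x_star @ main.params                       # conditional-mean point forecast
sigma2_star = max(x_star @ varreg.params, 1e-8)    # fitted variance at x*, truncated at 0
half_width = 1.96 * np.sqrt(sigma2_star)           # 95% Gaussian interval

print(point, (point - half_width, point + half_width))
```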


Figure 8.5: Wage Regression with Heteroskedasticity-Robust Standard Errors

8.4 Exercises, Problems and Complements

1. (Vocabulary) All of these have the same meaning:

(a) "heteroskedasticity-robust standard errors"
(b) "heteroskedasticity-consistent standard errors"
(c) "robust standard errors"
(d) "White standard errors"
(e) "White-washed" standard errors

2. (Generalized Least Squares (GLS)) For an arbitrary $\Omega$ matrix, it can be shown that full estimation efficiency requires "generalized least squares" (GLS) estimation. The GLS estimator is
$$\hat{\beta}_{GLS} = (X'\Omega^{-1}X)^{-1}X'\Omega^{-1}y.$$
Under the ideal conditions (but allowing for $\Omega \neq \sigma^2 I$) it is consistent, MVUE, and normally distributed with covariance matrix $(X'\Omega^{-1}X)^{-1}$:
$$\hat{\beta}_{GLS} \sim N\left(\beta, (X'\Omega^{-1}X)^{-1}\right).$$

(a) Show that when $\Omega = \sigma^2 I$ the GLS estimator is just the standard OLS estimator: $\hat{\beta}_{GLS} = \hat{\beta}_{OLS} = (X'X)^{-1}X'y$.

(b) Show that when $\Omega = \sigma^2 I$ the covariance matrix of the GLS estimator is just that of the standard OLS estimator: $cov(\hat{\beta}_{GLS}) = cov(\hat{\beta}_{OLS}) = \sigma^2 (X'X)^{-1}$.

3. (GLS for Heteroskedasticity)

(a) Show that GLS for heteroskedasticity amounts to OLS on data weighted by the inverse disturbance standard deviation ($1/\sigma_t$), often called "weighted least squares" (WLS). This is "infeasible" WLS, since in general we don't know the $\sigma_t$'s.

(b) To see why WLS works, consider the heteroskedastic DGP
$$y_t = x_t'\beta + \varepsilon_t, \quad \varepsilon_t \sim idN(0, \sigma_t^2).$$
Now weight the data $(y_t, x_t)$ by $1/\sigma_t$:
$$\frac{y_t}{\sigma_t} = \frac{x_t'\beta}{\sigma_t} + \frac{\varepsilon_t}{\sigma_t}.$$
The transformed (but equivalent) DGP is then
$$y_t^* = x_t^{*\prime}\beta + \varepsilon_t^*,$$


$$\varepsilon_t^* \sim iidN(0, 1).$$
The weighted data satisfy the IC, and so OLS is MVUE! So GLS is just OLS on appropriately transformed data. In the heteroskedasticity case the appropriate transformation is weighting: we downweight high-variance observations, as is totally natural.

4. (Details of Weighted Least Squares) Note that weighting the data by $1/\sigma_t$ is the same as weighting the squared residuals by $1/\sigma_t^2$:

$$\min_\beta \sum_{t=1}^{T} \left( \frac{y_t - x_t'\beta}{\sigma_t} \right)^2 = \min_\beta \sum_{t=1}^{T} \frac{1}{\sigma_t^2}\left( y_t - x_t'\beta \right)^2.$$

5. (Feasible Weighted Least Squares) To make WLS feasible, we need to replace the unknown $\sigma_t^2$'s with estimates (see the code sketch after these exercises).

• Good idea: use weights $w_t = 1/\hat{e}_t^2$, where the $\hat{e}_t^2$ are the fitted values from the BPG test regression.

• Good idea: use weights $w_t = 1/\hat{e}_t^2$, where the $\hat{e}_t^2$ are the fitted values from the White test regression.

• Bad idea: use weights $w_t = 1/e_t^2$. The raw $e_t^2$ is too noisy; we'd like to use not $e_t^2$ but rather $E(e_t^2 | x_t)$. So we use an estimate of $E(e_t^2 | x_t)$, namely the fitted value $\hat{e}_t^2$ from the regression $e_t^2 \to X$.

In Figure 8.6 we show regression results with weighting based on the fit from the White test regression.

Figure 8.6: Regression Weighted by Fit From White Test Regression

6. (Robustness iteration) Sometimes, after an OLS regression, people do a second-stage WLS with weights $1/|e_t|$, or something similar. This is not a heteroskedasticity correction, but rather a strategy to downweight outliers. But notice that the two are closely related.

7. (Spatial Correlation) So far we have studied a heteroskedastic situation ($\varepsilon_t$ independent across t but not identically distributed across t). But do we really believe that the disturbances are uncorrelated over space (t)? Spatial correlation in cross sections is another type of violation of the IC. (This time it's "nonzero disturbance covariances" as opposed to "non-constant disturbance variances.") As always, consider $\varepsilon \sim N(0, \Omega)$. Spatial correlation (with possible heteroskedasticity as well) corresponds to
$$\Omega = \begin{pmatrix} \sigma_1^2 & \sigma_{12} & \cdots & \sigma_{1T} \\ \sigma_{21} & \sigma_2^2 & \cdots & \sigma_{2T} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{T1} & \sigma_{T2} & \cdots & \sigma_T^2 \end{pmatrix}.$$

8. (“Clustering” in spatial correlation)


Ω could be non-diagonal in cross sections but still sparse in certain ways. A key case is block-diagonal Ω, in which there is nonzero covariance within certain sets of disturbances, but not across sets (“clustering”).
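Returning to complement 5, here is a minimal feasible-WLS sketch in Python, again using synthetic wage-style variables as in the earlier sketches; the clipping of fitted variances is an added practical safeguard, not something from the text.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic wage-style data for illustration (as in the earlier sketches)
rng = np.random.default_rng(0)
educ = rng.integers(8, 21, 500)
exper = rng.integers(0, 41, 500)
lwage = 1.0 + 0.08 * educ + 0.01 * exper + rng.normal(0, 0.5, 500)
X = sm.add_constant(pd.DataFrame({"EDUC": educ, "EXPER": exper}))

# Step 1: OLS, then the BPG auxiliary regression e^2 -> X gives fitted variances E(e^2 | x)
ols_fit = sm.OLS(lwage, X).fit()
e2_hat = sm.OLS(ols_fit.resid ** 2, X).fit().fittedvalues

# Step 2: feasible WLS with weights 1 / fitted variance.
# Fitted variances from a linear auxiliary regression can be tiny or negative,
# so clip them at a small positive value (a practical safeguard, not from the text).
weights = 1.0 / np.clip(e2_hat, 1e-6, None)
wls_fit = sm.WLS(lwage, X, weights=weights).fit()

print(ols_fit.params)
print(wls_fit.params)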


Part III Time-Series


Chapter 9 Indicator Variables in Time Series: Trend and Seasonality

The time series that we want to model vary over time, and we often mentally attribute that variation to unobserved underlying components related to trend and seasonality.

9.1 Linear Trend

Trend involves slow, long-run evolution in the variables that we want to model and forecast. In business, finance, and economics, for example, trend is produced by slowly evolving preferences, technologies, institutions, and demographics. We'll focus here on models of deterministic trend, in which the trend evolves in a perfectly predictable way. Deterministic trend models are tremendously useful in practice.1 Linear trend is a simple linear function of time,
$$Trend_t = \beta_1 + \beta_2 TIME_t.$$
The indicator variable TIME is constructed artificially and is called a "time trend" or "time dummy." TIME equals 1 in the first period of the sample,

1 Later we'll broaden our discussion to allow for stochastic trend.


Figure 9.1: Various Linear Trends

2 in the second period, and so on. Thus, for a sample of size T, $TIME = (1, 2, 3, ..., T-1, T)$. Put differently, $TIME_t = t$, so that the TIME variable simply indicates the time. $\beta_1$ is the intercept; it's the value of the trend at time t = 0. $\beta_2$ is the slope; it's positive if the trend is increasing and negative if the trend is decreasing. The larger the absolute value of $\beta_2$, the steeper the trend's slope. In Figure 9.1, for example, we show two linear trends, one increasing and one decreasing. The increasing trend has an intercept of $\beta_1 = -50$ and a slope of $\beta_2 = .8$, whereas the decreasing trend has an intercept of $\beta_1 = 10$ and a gentler absolute slope of $\beta_2 = -.25$. In business, finance, and economics, linear trends are typically increasing, corresponding to growth, but such need not be the case. In recent decades, for example, male labor force participation rates have been falling, as have the times between trades on stock exchanges. In other cases, such as records (e.g., world records in the marathon), trends are decreasing by definition. Estimation of a linear trend model (for a series y, say) is easy. First we need to create and store on the computer the variable TIME. Fortunately we don't have to type the TIME values (1, 2, 3, 4, ...) in by hand; in most good software environments, a command exists to create the trend automatically.


Then we simply run the least squares regression $y \to c, TIME$.
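A minimal Python sketch of creating TIME and running the trend regression might look as follows; the simulated series and its parameter values are illustrative assumptions, not the text's data.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical trending series (the text's applications use real data, e.g. liquor sales)
T = 200
rng = np.random.default_rng(0)
y = 10 + 0.25 * np.arange(1, T + 1) + rng.normal(0, 3, T)

# Create the artificial TIME variable: 1, 2, ..., T
X = sm.add_constant(pd.DataFrame({"TIME": np.arange(1, T + 1)}))

# Linear trend regression: y -> c, TIME
trend_fit = sm.OLS(y, X).fit()
print(trend_fit.params)   # intercept (beta_1) and slope (beta_2)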

9.2 Seasonality

In the last section we focused on the trends; now we’ll focus on seasonality. A seasonal pattern is one that repeats itself every year.2 The annual repetition can be exact, in which case we speak of deterministic seasonality, or approximate, in which case we speak of stochastic seasonality. Here we focus exclusively on deterministic seasonality models. Seasonality arises from links of technologies, preferences and institutions to the calendar. The weather (e.g., daily high temperature) is a trivial but very important seasonal series, as it’s always hotter in the summer than in the winter. Any technology that involves the weather, such as production of agricultural commodities, is likely to be seasonal as well. Preferences may also be linked to the calendar. Consider, for example, gasoline sales. People want to do more vacation travel in the summer, which tends to increase both the price and quantity of summertime gasoline sales, both of which feed into higher current-dollar sales. Finally, social institutions that are linked to the calendar, such as holidays, are responsible for seasonal variation in a variety of series. In Western countries, for example, sales of retail goods skyrocket every December, Christmas season. In contrast, sales of durable goods fall in December, as Christmas purchases tend to be nondurables. (You don’t buy someone a refrigerator for Christmas.) You might imagine that, although certain series are seasonal for the reasons described above, seasonality is nevertheless uncommon. On the contrary, and perhaps surprisingly, seasonality is pervasive in business and economics. Many industrialized economies, for example, expand briskly every 2

Note therefore that seasonality is impossible, and therefore not an issue, in data recorded once per year, or less often than once per year.


fourth quarter and contract every first quarter.

9.2.1 Seasonal Dummies

A key technique for modeling seasonality is regression on seasonal dummies. Let s be the number of seasons in a year. Normally we'd think of four seasons in a year, but that notion is too restrictive for our purposes. Instead, think of s as the number of observations on a series in each year. Thus s = 4 if we have quarterly data, s = 12 if we have monthly data, s = 52 if we have weekly data, and so forth. The pure seasonal dummy model is
$$Seasonal_t = \sum_{i=1}^{s} \gamma_i SEAS_{it}, \quad \text{where } SEAS_{it} = \begin{cases} 1 & \text{if observation } t \text{ falls in season } i \\ 0 & \text{otherwise.} \end{cases}$$
The $SEAS_{it}$ variables are called seasonal dummy variables. They simply indicate which season we're in. Operationalizing the model is simple. Suppose, for example, that we have quarterly data, so that s = 4. Then we create four variables:3
$$SEAS_1 = (1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, ..., 0)'$$
$$SEAS_2 = (0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, ..., 0)'$$
$$SEAS_3 = (0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, ..., 0)'$$
$$SEAS_4 = (0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, ..., 1)'.$$
$SEAS_1$ indicates whether we're in the first quarter (it's 1 in the first quarter and zero otherwise), $SEAS_2$ indicates whether we're in the second quarter (it's 1 in the second quarter and zero otherwise), and so on. At any given time, we can be in only one of the four quarters, so one seasonal dummy is 1, and all others are zero.

For illustrative purposes, assume that the data sample begins in Q1 and ends in Q4.


To estimate the model for a series y, we simply run the least squares regression, y → SEAS1 , ..., SEASs . Effectively, we’re just regressing on an intercept, but we allow for a different intercept in each season. Those different intercepts (that is γi ’s) are called the seasonal factors; they summarize the seasonal pattern over the year, and we often may want to examine them and plot them. In the absence of seasonality, those intercepts are all the same, so we can drop all the seasonal dummies and instead simply include an intercept in the usual way. In time-series contexts it’s often most natural to include a full set of seasonal dummies, without an intercept. But of course we could instead include any s − 1 seasonal dummies and an intercept. Then the constant term is the intercept for the omitted season, and the coefficients on the seasonal dummies give the seasonal increase or decrease relative to the omitted season. In no case, however, should we include s seasonal dummies and an intercept. Including an intercept is equivalent to including a variable in the regression whose value is always one, but note that the full set of s seasonal dummies sums to a variable whose value is always one, so it is completely redundant. Trend may be included as well. For example, we can account for seasonality and linear trend by running4 y → T IM E, SEAS1 , ..., SEASs . In fact, you can think of what we’re doing in this section as a generalization of what we did in the last, in which we focused exclusively on trend. We still want to account for trend, if it’s present, but we want to expand the model so that we can account for seasonality as well. 4

Note well that we drop the intercept! (Why?)
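As a minimal illustration, the following Python sketch constructs a full set of quarterly seasonal dummies plus a linear trend and runs the regression without an intercept; the simulated data and seasonal factors are assumptions for illustration only.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical quarterly series with trend and seasonality (s = 4)
T = 120
rng = np.random.default_rng(0)
time = np.arange(1, T + 1)
season = (np.arange(T) % 4) + 1                 # 1, 2, 3, 4, 1, 2, 3, 4, ...
gamma = np.array([10.0, 12.0, 9.0, 15.0])       # true seasonal factors
y = gamma[season - 1] + 0.05 * time + rng.normal(0, 1, T)

# Full set of seasonal dummies (note: no intercept!) plus linear trend
X = pd.get_dummies(season, prefix="SEAS").astype(float)
X["TIME"] = time

fit = sm.OLS(y, X).fit()
print(fit.params)   # estimated seasonal factors and trend slope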


9.2.2 More General Calendar Effects

The idea of seasonality may be extended to allow for more general calendar effects. “Standard” seasonality is just one type of calendar effect. Two additional important calendar effects are holiday variation and tradingday variation. Holiday variation refers to the fact that some holidays’ dates change over time. That is, although they arrive at approximately the same time each year, the exact dates differ. Easter is a common example. Because the behavior of many series, such as sales, shipments, inventories, hours worked, and so on, depends in part on the timing of such holidays, we may want to keep track of them in our forecasting models. As with seasonality, holiday effects may be handled with dummy variables. In a monthly model, for example, in addition to a full set of seasonal dummies, we might include an “Easter dummy,” which is 1 if the month contains Easter and 0 otherwise. Trading-day variation refers to the fact that different months contain different numbers of trading days or business days, which is an important consideration when modeling and forecasting certain series. For example, in a monthly forecasting model of volume traded on the London Stock Exchange, in addition to a full set of seasonal dummies, we might include a trading day variable, whose value each month is the number of trading days that month. More generally, you can model any type of calendar effect that may arise, by constructing and including one or more appropriate dummy variables.

9.3 Trend and Seasonality in Liquor Sales

We'll illustrate trend and seasonal modeling with an application to liquor sales. The data are measured monthly. We show the time series of liquor sales in Figure 9.2, which displays clear trend (sales are increasing) and seasonality (sales skyrocket during the Christmas season, among other things).

Figure 9.2: Liquor Sales

We show log liquor sales in Figure 9.3; we take logs to stabilize the variance, which grows over time.5 Log liquor sales has a more stable variance, and it's the series for which we'll build models.6 Linear trend estimation results appear in Figure 9.4. The trend is increasing and highly significant. The adjusted R2 is 84%, reflecting the fact that trend is responsible for a large part of the variation in liquor sales. The residual plot (Figure 9.5) suggests, however, that linear trend is inadequate. Instead, the trend in log liquor sales appears nonlinear, and the neglected nonlinearity gets dumped in the residual. (We'll introduce nonlinear trend later.) The residual plot also reveals obvious residual seasonality. The Durbin-Watson statistic missed it, evidently because it's not designed to have power against seasonal dynamics.7 In Figure 9.6 we show estimation results for a model with linear trend

5 The nature of the logarithmic transformation is such that it "compresses" an increasing variance. Make a graph of log(x) as a function of x, and you'll see why.

6 From this point onward, for brevity we'll simply refer to "liquor sales," but remember that we've taken logs.

7 Recall that the Durbin-Watson test is designed to detect simple AR(1) dynamics. It also has the ability to detect other sorts of dynamics, but evidently not those relevant to the present application, which are very different from a simple AR(1).


Figure 9.3: Log Liquor Sales

and seasonal dummies. (Note that we dropped the intercept!) The seasonal dummies are highly significant, and in many cases significantly different from each other. R2 is higher. In Figure 9.7 we show the corresponding residual plot. The model now picks up much of the seasonality, as reflected in the seasonal fitted series and the non-seasonal residuals. In Figure 9.8 we plot the estimated seasonal pattern, which peaks during the winter holidays. All of these results are crude approximations, because the linear trend is clearly inadequate. We will subsequently allow for more sophisticated (nonlinear) trends.

9.4 Exercises, Problems and Complements

1. (Mechanics of trend estimation and detrending) Obtain from the web a quarterly time series of U.S. real GDP in levels, spanning the last forty years, and ending in Q4. a. Produce a time series plot and discuss.


Dependent Variable: LSALES
Method: Least Squares
Date: 08/08/13   Time: 08:53
Sample: 1987M01 2014M12
Included observations: 336

Variable      Coefficient    Std. Error    t-Statistic    Prob.
C                6.454290      0.017468      369.4834     0.0000
TIME             0.003809      8.98E-05       42.39935    0.0000

R-squared             0.843318    Mean dependent var      7.096188
Adjusted R-squared    0.842849    S.D. dependent var      0.402962
S.E. of regression    0.159743    Akaike info criterion  -0.824561
Sum squared resid     8.523001    Schwarz criterion      -0.801840
Log likelihood        140.5262    Hannan-Quinn criter.   -0.815504
F-statistic           1797.705    Durbin-Watson stat      1.078573
Prob(F-statistic)     0.000000

Figure 9.4: Linear Trend Estimation

b. Fit a linear trend. Discuss both the estimation results and the residual plot. c. Is there any evidence of seasonality in the residuals? Why or why not? d. The residuals from your fitted model are effectively a linearly detrended version of your original series. Why? Discuss. 2. (Using model selection criteria to select a trend model) You are tracking and forecasting the earnings of a new company developing and applying proprietary nano-technology. The earnings are trending upward. You fit linear, quadratic, and exponential trend models, yielding sums of squared residuals of 4352, 2791, and 2749, respectively. Which trend model would you select, and why? 3. (Seasonal adjustment) Just as we sometimes want to remove the trend from a series, sometimes


Figure 9.5: Residual Plot, Linear Trend Estimation

we want to seasonally adjust a series before modeling it. Seasonal adjustment may be done using a variety of methods. a. Discuss in detail how you’d use a linear trend plus seasonal dummies model to seasonally adjust a series. b. Seasonally adjust the log liquor sales data using a linear trend plus seasonal dummy model. Discuss the patterns present and absent from the seasonally adjusted series. c. Search the Web (or the library) for information on the latest U.S. Census Bureau seasonal adjustment procedure, and report what you learned. 4. (Handling sophisticated calendar effects) Describe how you would construct a purely seasonal model for the following monthly series. In particular, what dummy variable(s) would you use to capture the relevant effects?


Figure 9.6: Estimation Results, Linear Trend with Seasonal Dummies

a. A sporting goods store suspects that detrended monthly sales are roughly the same for each month in a given three-month season. For example, sales are similar in the winter months of January, February and March, in the spring months of April, May and June, and so on. b. A campus bookstore suspects that detrended sales are roughly the same for all first, all second, all third, and all fourth months of each trimester. For example, sales are similar in January, May, and September, the first months of the first, second, and third trimesters, respectively. c. (Trading-day effects) A financial-markets trader suspects that de-


Figure 9.7: Residual Plot, Linear Trend with Seasonal Dummies

trended trading volume depends on the number of trading days in the month, which differs across months. d. (Time-varying holiday effects) A candy manufacturer suspects that detrended candy sales tend to rise at Easter. 5. (Testing for seasonality) Using the log liquor sales data: a. As in the chapter, construct and estimate a model with a full set of seasonal dummies. b. Test the hypothesis of no seasonal variation. Discuss. c. Test for the equality of the January through April seasonal factors. Discuss. d. Test for equality of the May through November seasonal factors. Discuss.


Figure 9.8: Seasonal Pattern

e. Estimate a suitable “pruned” model with fewer than twelve seasonal dummies that nevertheless adequately captures the seasonal pattern.

9.5 Notes

Nerlove et al. (1979) and Harvey (1991) discuss a variety of models of trend and seasonality. The two most common and important “official” seasonal adjustment methods are X-12-ARIMA from the U.S. Census Bureau, and TRAMO-SEATS from the Bank of Spain.


Chapter 10 Non-Linearity and Structural Change in Time Series

In time series a central issue is nonlinear trend. Here we focus on it.

10.1 Exponential Trend

The insight that exponential growth is non-linear in levels but linear in logarithms takes us to the idea of exponential trend, or log-linear trend, which is very common in business, finance and economics.1 Exponential trend is common because economic variables often display roughly constant real growth rates (e.g., two percent per year). If trend is characterized by constant growth at rate $\beta_2$, then we can write
$$Trend_t = \beta_1 e^{\beta_2 TIME_t}.$$
The trend is a non-linear (exponential) function of time in levels, but in logarithms we have
$$\ln(Trend_t) = \ln(\beta_1) + \beta_2 TIME_t. \tag{10.1}$$
Thus, $\ln(Trend_t)$ is a linear function of time.

1 Throughout this book, logarithms are natural (base e) logarithms.


Figure 10.1: Various Exponential Trends

In Figure 10.1 we show the variety of exponential trend shapes that can be obtained depending on the parameters. Depending on the signs and sizes of the parameter values, exponential trend can achieve a variety of patterns, increasing or decreasing at increasing or decreasing rates. Although the exponential trend model is non-linear, we can estimate it by simple least squares regression, because it is linear in logs. We simply run the least squares regression, ln y → c, T IM E. Note that because the intercept in equation (10.1) is not β1 , but rather ln(β1 ), we need to exponentiate the estimated intercept to get an estimate of β1 . Similarly, the fitted values from this regression are the fitted values of lny, so they must be exponentiated to get the fitted values of y. This is necessary, for example, for appropriately comparing fitted values or residuals (or statistics based on residuals, like AIC and SIC) from estimated exponential trend models to those from other trend models.
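A minimal Python sketch of this log-linear estimation strategy, including the exponentiation of the intercept and of the fitted values, appears below; the simulated levels series is an illustrative assumption, not the text's data.

import numpy as np
import statsmodels.api as sm

# Hypothetical exponentially trending series in levels
T = 300
rng = np.random.default_rng(1)
time = np.arange(1, T + 1)
y = 500 * np.exp(0.004 * time) * (1 + rng.normal(0, 0.05, T))

# Log-linear (exponential) trend: regress ln(y) on c and TIME
X = sm.add_constant(time)
log_fit = sm.OLS(np.log(y), X).fit()

# Recover the levels quantities by exponentiating
b1_hat = np.exp(log_fit.params[0])            # estimated intercept is ln(beta_1), so exponentiate
b2_hat = log_fit.params[1]                    # growth rate beta_2
fitted_levels = np.exp(log_fit.fittedvalues)  # needed to compare residuals, AIC, SIC with levels models
print(b1_hat, b2_hat)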


It’s important to note that, although the same sorts of qualitative trend shapes can be achieved with quadratic and exponential trend, there are subtle differences between them. The non-linear trends in some series are well approximated by quadratic trend, while the trends in other series are better approximated by exponential trend. Ultimately it’s an empirical matter as to which is best in any particular application.

10.2 Quadratic Trend

Sometimes trend appears non-linear, or curved, as for example when a variable increases at an increasing or decreasing rate. Ultimately, we don't require that trends be linear, only that they be smooth. We can allow for gentle curvature by including not only TIME, but also $TIME^2$:
$$Trend_t = \beta_1 + \beta_2 TIME_t + \beta_3 TIME_t^2.$$
This is called quadratic trend, because the trend is a quadratic function of TIME.2 Linear trend emerges as a special (and potentially restrictive) case when $\beta_3 = 0$. A variety of different non-linear quadratic trend shapes are possible, depending on the signs and sizes of the coefficients; we show several in Figure 10.2. In particular, if $\beta_2 > 0$ and $\beta_3 > 0$ as in the upper-left panel, the trend is monotonically, but non-linearly, increasing. Conversely, if $\beta_2 < 0$ and $\beta_3 < 0$, the trend is monotonically decreasing. If $\beta_2 < 0$ and $\beta_3 > 0$ the trend has a U shape, and if $\beta_2 > 0$ and $\beta_3 < 0$ the trend has an inverted U shape. Keep in mind that quadratic trends are used to provide local approximations; one rarely has a "U-shaped" trend, for example. Instead, all of the data may lie on one or the other side of the "U". Estimating quadratic trend models is no harder than estimating linear

Higher-order polynomial trends are sometimes entertained, but it’s important to use low-order polynomials to maintain smoothness.


Figure 10.2: Various Quadratic Trends

trend models. We first create TIME and its square; call it TIME2, where $TIME2_t = TIME_t^2$. Because $TIME = (1, 2, ..., T)$, $TIME2 = (1, 4, ..., T^2)$. Then we simply run the least squares regression $y \to c, TIME, TIME2$. Note in particular that although the quadratic is a non-linear function, it is linear in the variables TIME and TIME2.

10.3 More on Non-Linear Trend

The trend regression technique is one way to estimate trend. Two additional ways involve model-free smoothing techniques. They are moving-average smoothers and Hodrick-Prescott smoothers. We briefly introduce them here.

10.3.1 Moving-Average Trend and De-Trending

We'll focus on three: two-sided moving averages, one-sided moving averages, and one-sided weighted moving averages. Denote the original data by $\{y_t\}_{t=1}^{T}$ and the smoothed data by $\{s_t\}_{t=1}^{T}$. Then the two-sided moving average is
$$s_t = (2m+1)^{-1} \sum_{i=-m}^{m} y_{t-i},$$
the one-sided moving average is
$$s_t = (m+1)^{-1} \sum_{i=0}^{m} y_{t-i},$$
and the one-sided weighted moving average is
$$s_t = \sum_{i=0}^{m} w_i y_{t-i},$$
where the $w_i$ are weights and m is an integer chosen by the user. The "standard" one-sided moving average corresponds to a one-sided weighted moving average with all weights equal to $(m+1)^{-1}$.

a. For each of the smoothing techniques, discuss the role played by m. What happens as m gets very large? Very small? In what sense does m play a role similar to p, the order of a polynomial trend?

b. If the original data run from time 1 to time T, over what range can smoothed values be produced using each of the three smoothing methods? What are the implications for "real-time" or "on-line" smoothing versus "ex post" or "off-line" smoothing?
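The following minimal Python sketch implements the two-sided and one-sided moving-average smoothers defined above; the simulated series and the choice m = 6 are illustrative assumptions.

import numpy as np

def two_sided_ma(y, m):
    """Two-sided moving average: s_t = (2m+1)^{-1} * sum_{i=-m}^{m} y_{t-i}.
    Returns NaN where the window extends beyond the sample."""
    y = np.asarray(y, dtype=float)
    s = np.full(len(y), np.nan)
    for t in range(m, len(y) - m):
        s[t] = y[t - m:t + m + 1].mean()
    return s

def one_sided_ma(y, m):
    """One-sided moving average: s_t = (m+1)^{-1} * sum_{i=0}^{m} y_{t-i} (usable in real time)."""
    y = np.asarray(y, dtype=float)
    s = np.full(len(y), np.nan)
    for t in range(m, len(y)):
        s[t] = y[t - m:t + 1].mean()
    return s

# Example: smooth a noisy trending series with m = 6
rng = np.random.default_rng(0)
y = 0.1 * np.arange(200) + rng.normal(0, 2, 200)
trend_two_sided = two_sided_ma(y, 6)
trend_one_sided = one_sided_ma(y, 6)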


10.3.2 Hodrick-Prescott Trend and De-Trending

A final approach to trend fitting and de-trending is known as Hodrick-Prescott filtering. The "HP trend" solves
$$\min_{\{s_t\}_{t=1}^{T}} \; \sum_{t=1}^{T} (y_t - s_t)^2 + \lambda \sum_{t=2}^{T-1} \left( (s_{t+1} - s_t) - (s_t - s_{t-1}) \right)^2.$$

a. $\lambda$ is often called the "penalty parameter." What does $\lambda$ govern?

b. What happens as $\lambda \to 0$?

c. What happens as $\lambda \to \infty$?

d. People routinely use bigger $\lambda$ for higher-frequency data. Why? (Common values are $\lambda$ = 100, 1600 and 14,400 for annual, quarterly, and monthly data, respectively.)
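As a sketch of what the HP minimization amounts to, the following Python code solves the first-order condition $s = (I + \lambda D'D)^{-1} y$ with a dense second-difference matrix D; this is illustrative only (production code would use sparse matrices or a canned filter routine), and the simulated series is an assumption.

import numpy as np

def hp_trend(y, lam=1600.0):
    """Hodrick-Prescott trend: minimize sum (y-s)^2 + lam * sum (second differences of s)^2.
    Closed form: s = (I + lam * D'D)^{-1} y, where D is the (T-2) x T second-difference matrix."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    D = np.zeros((T - 2, T))
    for t in range(T - 2):
        D[t, t:t + 3] = [1.0, -2.0, 1.0]
    return np.linalg.solve(np.eye(T) + lam * D.T @ D, y)

# Example: smooth a simulated quarterly-style series; lam = 1600 is the conventional quarterly choice
rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(0.2, 1.0, 200))
trend = hp_trend(y, lam=1600.0)
cycle = y - trend          # HP "cycle" = deviation from the HP trend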

10.4 Structural Change

Recall the full ideal conditions. Here we deal with violation of the assumption that the coefficients, β, are fixed. The cross-section dummy variables that we already studied effectively allow for structural change in the cross section (heterogeneity across groups). But structural change is of special relevance in time series. It can be gradual (Lucas critique, learning, evolution of tastes, ...) or abrupt (e.g., new legislation). Structural change is related to nonlinearity, because abrupt structural change is actually a type of nonlinearity. Structural change is also related to outliers, because outliers can sometimes be viewed as a kind of structural change – a quick intercept break and return. For notational simplicity we consider the case of simple regression throughout, but the ideas extend immediately to multiple regression.

10.4.1 Gradual Parameter Evolution

In many cases, parameters may evolve gradually rather than breaking abruptly. Suppose, for example, that
$$y_t = \beta_{1t} + \beta_{2t} x_t + \varepsilon_t,$$
where
$$\beta_{1t} = \gamma_1 + \gamma_2 TIME_t, \qquad \beta_{2t} = \delta_1 + \delta_2 TIME_t.$$
Then we have
$$y_t = (\gamma_1 + \gamma_2 TIME_t) + (\delta_1 + \delta_2 TIME_t) x_t + \varepsilon_t.$$
We simply run the regression $y_t \to c, TIME_t, x_t, TIME_t \cdot x_t$. This is yet another important use of dummies. The regression can be used both to test for structural change (an F test of $\gamma_2 = \delta_2 = 0$) and to accommodate it if present.
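A minimal Python sketch of this test-and-accommodate strategy follows; the simulated drifting-slope data and the variable names are illustrative assumptions.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data in which the slope on x drifts upward over time
T = 300
rng = np.random.default_rng(0)
time = np.arange(1, T + 1)
x = rng.normal(0, 1, T)
y = 1.0 + (0.5 + 0.004 * time) * x + rng.normal(0, 1, T)

# Regression with time interactions: y -> c, TIME, x, TIME*x
X = sm.add_constant(pd.DataFrame({"TIME": time, "x": x, "TIMEx": time * x}))
fit = sm.OLS(y, X).fit()

# F test of no parameter evolution: coefficients on TIME and TIME*x jointly zero
print(fit.f_test("TIME = 0, TIMEx = 0"))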

10.4.2 Abrupt Parameter Breaks

Exogenously-Specified Breaks

Suppose that we don't know whether a break occurred, but we know that if it did occur, it occurred at time $T^*$.

A Dummy-Variable Approach

That is, we entertain the possibility that
$$y_t = \begin{cases} \beta_{11} + \beta_{21} x_t + \varepsilon_t, & t = 1, ..., T^* \\ \beta_{12} + \beta_{22} x_t + \varepsilon_t, & t = T^* + 1, ..., T. \end{cases}$$


Let
$$D_t = \begin{cases} 0, & t = 1, ..., T^* \\ 1, & t = T^* + 1, ..., T. \end{cases}$$

Then we can write the model as
$$y_t = (\beta_{11} + (\beta_{12} - \beta_{11}) D_t) + (\beta_{21} + (\beta_{22} - \beta_{21}) D_t) x_t + \varepsilon_t.$$
We simply run the regression $y_t \to c, D_t, x_t, D_t \cdot x_t$. The regression can be used both to test for structural change and to accommodate it if present. It represents yet another use of dummies. The no-break null corresponds to the joint hypothesis of zero coefficients on $D_t$ and $D_t \cdot x_t$, for which an F test is appropriate.

The Chow Test

The dummy-variable setup and associated F test above is actually just a laborious way of calculating the so-called Chow breakpoint test statistic,
$$Chow = \frac{(SSR_{res} - SSR)/K}{SSR/(T - 2K)},$$
where $SSR_{res}$ is from the regression using the full sample $t = 1, ..., T$ and $SSR = SSR_1 + SSR_2$, where $SSR_1$ is from the regression using sample $t = 1, ..., T^*$ and $SSR_2$ is from the regression using sample $t = T^* + 1, ..., T$. Under the FIC, Chow is distributed F, with K and T − 2K degrees of freedom.

The Chow Test with Endogenous Break Selection

Thus far we have (unrealistically) assumed that the potential break date is known. In practice, potential break dates are often unknown and are identified by “peeking” at the data. We can capture this phenomenon in stylized fashion by imagining splitting the sample sequentially at each possible break date, and picking the split at which the Chow breakpoint test statistic is maximized. Implicitly, that’s what people often do in practice, even if they


don't always realize or admit it. The distribution of such a test statistic is not F, as for the traditional Chow breakpoint test statistic. Rather, the distribution is that of the maximum of many draws from an F, which will be pushed far to the right of the distribution of a single F draw. The test statistic is
$$MaxChow = \max_{\tau_1 \le \tau \le \tau_2} Chow(\tau),$$

where $\tau$ denotes the sample fraction at which the split occurs (typically we take $\tau_1 = .15$ and $\tau_2 = .85$). The distribution of MaxChow has been tabulated.
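The following minimal Python sketch computes the Chow statistic for a given break date and then the MaxChow statistic over the central 70% of candidate dates; the simulated broken-slope data are an illustrative assumption, and remember that MaxChow must be compared with its own tabulated critical values, not the ordinary F table.

import numpy as np
import statsmodels.api as sm
from scipy import stats

def chow_stat(y, X, t_star):
    """Chow breakpoint statistic for a split after observation t_star."""
    T, K = X.shape
    ssr_res = sm.OLS(y, X).fit().ssr                 # restricted: full sample, no break
    ssr1 = sm.OLS(y[:t_star], X[:t_star]).fit().ssr  # subsample 1, ..., t_star
    ssr2 = sm.OLS(y[t_star:], X[t_star:]).fit().ssr  # subsample t_star+1, ..., T
    ssr = ssr1 + ssr2
    return ((ssr_res - ssr) / K) / (ssr / (T - 2 * K))

# Hypothetical data with a slope break at t = 120
T = 200
rng = np.random.default_rng(0)
x = rng.normal(0, 1, T)
beta = np.where(np.arange(T) < 120, 1.0, 2.0)
y = 0.5 + beta * x + rng.normal(0, 1, T)
X = sm.add_constant(x)

# Exogenously specified break date, with the usual F p-value (K = 2 here)
c = chow_stat(y, X, 120)
print("Chow:", c, "p-value:", stats.f.sf(c, 2, T - 4))

# Endogenously selected break: maximize over the central 70% of candidate dates
candidates = range(int(0.15 * T), int(0.85 * T))
print("MaxChow:", max(chow_stat(y, X, t) for t in candidates))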

10.5 Dummy Variables and Omitted Variables, Again and Again

10.5.1 Dummy Variables

Notice that dummy (indicator) variables have arisen repeatedly in our discussions. We used 0-1 dummies to handle group heterogeneity in cross sections. We used time dummies to indicate the date in time series. We used 0-1 seasonal dummies to indicate the season in time series. Now, in this chapter, we used both (1) time dummies to allow for gradual parameter evolution, and (2) 0-1 dummies to indicate a sharp break date, in time series.

10.5.2 Omitted Variables

Notice that omitted variables have also arisen repeatedly in our discussions.

1. If there are neglected group effects in cross-section regression, we fix the problem (of omitted group dummies) by including the requisite group dummies.


2. If there is neglected trend or seasonality in time-series regression, we fix the problem (of omitted trend or seasonal dummies) by including the requisite trend or seasonal dummies.

3. If there is neglected non-linearity, we fix the problem (effectively one of omitted Taylor series terms) by including the requisite Taylor series terms.

4. If there is neglected structural change in time-series regression, we fix the problem (effectively one of omitted parameter trend dummies or break dummies) by including the requisite trend dummies or break dummies.

You can think of the basic "uber-strategy" as: "If some systematic feature of the DGP is missing from the model, then include it." That is, if something is missing, then model what's missing, and then the new uber-model won't have anything missing, and all will be well (i.e., the FIC will be satisfied). This is an important recognition. In subsequent chapters, for example, we'll study violations of the FIC known as heteroskedasticity (Chapters 8 and 13) and serial correlation (Chapter 12). In each case the problem amounts to a feature of the DGP neglected by the initially-fitted model, and we address the problem by incorporating the neglected feature into the model.

10.6 Non-Linearity in Liquor Sales Trend

We already fit a non-linear (exponential) trend to liquor sales, when we fit a linear trend to log liquor sales. But it still didn't fit so well. We now examine a quadratic trend model (again in logs). The log-quadratic trend estimation results appear in Figure 10.3. Both TIME and TIME2 are highly significant. The adjusted R2 for the log-quadratic trend model is 89%, higher than for the log-linear trend model. As with the log-linear trend model, the Durbin-Watson statistic provides no evidence against


Dependent Variable: LSALES
Method: Least Squares
Date: 08/08/13   Time: 08:53
Sample: 1987M01 2014M12
Included observations: 336

Variable      Coefficient    Std. Error    t-Statistic    Prob.
C                6.231269      0.020653      301.7187     0.0000
TIME             0.007768      0.000283       27.44987    0.0000
TIME2           -1.17E-05      8.13E-07      -14.44511    0.0000

R-squared             0.903676    Mean dependent var      7.096188
Adjusted R-squared    0.903097    S.D. dependent var      0.402962
S.E. of regression    0.125439    Akaike info criterion  -1.305106
Sum squared resid     5.239733    Schwarz criterion      -1.271025
Log likelihood        222.2579    Hannan-Quinn criter.   -1.291521
F-statistic           1562.036    Durbin-Watson stat      1.754412
Prob(F-statistic)     0.000000

Figure 10.3: Log-Quadratic Trend Estimation

the hypothesis that the regression disturbance is white noise. The residual plot (Figure 10.4) shows that the fitted quadratic trend appears adequate, and that it increases at a decreasing rate. The residual plot also continues to indicate obvious residual seasonality. (Why does the Durbin-Watson statistic not detect it?) In Figure 10.5 we show the results of regression on quadratic trend and a full set of seasonal dummies. The trend remains highly significant, and the coefficients on the seasonal dummies vary significantly. The adjusted R2 rises to 99%. The Durbin-Watson statistic, moreover, has greater ability to detect residual serial correlation now that we have accounted for seasonality, and it sounds a loud alarm. The residual plot of Figure 10.6 shows no seasonality, as the model now accounts for seasonality, but it confirms the Durbin-Watson statistic's warning of serial correlation. The residuals appear highly persistent. There remains one model as yet unexplored: exponential trend fit to LSALES. We do it by NLS (why?) and present the results in Figure ***.


Figure 10.4: Residual Plot, Log-Quadratic Trend Estimation

Among the linear, quadratic and exponential trend models for LSALES, both SIC and AIC clearly favor the quadratic.

– Exogenously-specified break in log-linear trend model
– Endogenously-selected break in log-linear trend model
– SIC for best broken log-linear trend model vs. log-quadratic trend model
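For the NLS fit mentioned above, a minimal sketch using scipy's curve_fit might look as follows; the simulated levels series and the starting values are illustrative assumptions, not the liquor-sales data.

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical exponentially trending series in levels
T = 300
rng = np.random.default_rng(0)
time = np.arange(1, T + 1)
sales = 500 * np.exp(0.004 * time) * (1 + rng.normal(0, 0.05, T))

def exp_trend(t, b1, b2):
    return b1 * np.exp(b2 * t)

# NLS: minimize sum (y_t - b1 * exp(b2 * TIME_t))^2; starting values matter for convergence
(b1_hat, b2_hat), _ = curve_fit(exp_trend, time, sales, p0=[sales[0], 0.01])
print(b1_hat, b2_hat)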

10.7 Exercises, Problems and Complements

1. Specifying and testing nonlinear trend models. In 1965, Intel co-founder Gordon Moore predicted that the number of transistors that one could place on a square-inch integrated circuit would double every twelve months. a. What sort of trend is this? b. Given a monthly series containing the number of transistors per square inch for the latest integrated circuit, how would you test Moore’s prediction? How would you test the currently accepted form of “Moore’s


Dependent Variable: LSALES
Method: Least Squares
Date: 08/08/13   Time: 08:53
Sample: 1987M01 2014M12
Included observations: 336

Variable      Coefficient    Std. Error    t-Statistic    Prob.
TIME             0.007739      0.000104       74.49828    0.0000
TIME2           -1.18E-05      2.98E-07      -39.36756    0.0000
D1               6.138362      0.011207      547.7315     0.0000
D2               6.081424      0.011218      542.1044     0.0000
D3               6.168571      0.011229      549.3318     0.0000
D4               6.169584      0.011240      548.8944     0.0000
D5               6.238568      0.011251      554.5117     0.0000
D6               6.243596      0.011261      554.4513     0.0000
D7               6.287566      0.011271      557.8584     0.0000
D8               6.259257      0.011281      554.8647     0.0000
D9               6.199399      0.011290      549.0938     0.0000
D10              6.221507      0.011300      550.5987     0.0000
D11              6.253515      0.011309      552.9885     0.0000
D12              6.575648      0.011317      581.0220     0.0000

R-squared             0.987452    Mean dependent var      7.096188
Adjusted R-squared    0.986946    S.D. dependent var      0.402962
S.E. of regression    0.046041    Akaike info criterion  -3.277812
Sum squared resid     0.682555    Schwarz criterion      -3.118766
Log likelihood        564.6725    Hannan-Quinn criter.   -3.214412
Durbin-Watson stat    0.581383

Figure 10.5: Liquor Sales Log-Quadratic Trend Estimation with Seasonal Dummies

Law,” namely that the number of transistors actually doubles every eighteen months? 2. (Properties of polynomial trends) Consider a sixth-order deterministic polynomial trend: Tt = β1 + β2 T IM Et + β3 T IM Et2 + ... + β7 T IM Et6 . a. How many local maxima or minima may such a trend display? b. Plot the trend for various values of the parameters to reveal some of the different possible trend shapes. c. Is this an attractive trend model in general? Why or why not?


Figure 10.6: Residual Plot, Liquor Sales Log-Quadratic Trend Estimation With Seasonal Dummies

d. Fit the sixth-order polynomial trend model to a trending series that interests you, and discuss your results. 3. (Selecting non-linear trend models) Using AIC and SIC, perform a detailed comparison of polynomial vs. exponential trend in LSALES. Do you agree with our use of quadratic trend in the text? 4. (Difficulties with non-linear optimization) Non-linear optimization can be a tricky business, fraught with problems. Some problems are generic. It’s relatively easy to find a local optimum, for example, but much harder to be confident that the local optimum is global. Simple checks such as trying a variety of startup values and checking the optimum to which convergence occurs are used routinely, but the problem nevertheless remains. Other problems may be software specific. For example, some software may use highly accurate analytic derivatives whereas other software uses approximate numerical deriva-


tives. Even the same software package may change algorithms or details of implementation across versions, leading to different results.

5. (Direct estimation of exponential trend in levels) We can estimate an exponential trend in two ways. First, as we have emphasized, we can take logs and then use OLS to fit a linear trend. Alternatively we can use NLS, proceeding directly from the exponential representation and letting the computer find
$$(\hat{\beta}_1, \hat{\beta}_2) = \mathop{\mathrm{argmin}}_{\beta_1, \beta_2} \sum_{t=1}^{T} \left( y_t - \beta_1 e^{\beta_2 TIME_t} \right)^2.$$

a. The NLS approach is more tedious. Why?

b. The NLS approach is less thoroughly numerically trustworthy. Why?

c. Nevertheless the NLS approach can be very useful. Why? (Hint: Consider comparing SIC values for quadratic vs. exponential trend.)

6. (Logistic trend) In the main text we introduced the logistic functional form. A key example is logistic trend, which is
$$Trend_t = \frac{1}{a + b\, r^{TIME_t}},$$
with $0 < r < 1$.

Consider the liquor sales data. Never include an intercept. Discuss all results in detail. (a) Fit a linear trend plus seasonal dummy model to log liquor sales (LSALES), using a full set of seasonal dummies. (b) Find a “best” linear trend plus seasonal dummy LSALES model. That is, consider tightening the seasonal specification to include fewer than 12 seasonal dummies, and decide what’s best. (c) Keeping the same seasonality specification as in (7b), re-estimate the model in levels (that is, the LHS variable is now SALES rather than LSALES) using exponential trend and nonlinear least squares. Do your coefficient estimates match those from (7b)? Does the SIC match that from (7b)? (d) Repeat (7c), again using SALES and again leaving intact your seasonal specification from (7b), but try linear and quadratic trend instead of the exponential trend in (7c). What is your “final” SALES model? (e) Critique your final SALES model from (7d). In what ways is it likely still deficient? You will of course want to discuss its residual plot (actual values, fitted values, residuals), as well as any other diagnostic plots or statistics that you deem relevant. (f) Take your final estimated SALES model from (7d), and include as regressors three lags of SALES (i.e., SALESt−1 , SALESt−2 and SALESt−3 ). What role do the lags of SALES play? Consider this new model to be your “final, final” SALES model, and repeat (7e). 8. Regime Switching I: Observed-Regime Threshold Model


$$y_t = \begin{cases} c^{(u)} + \phi^{(u)} y_{t-1} + \varepsilon_t^{(u)}, & \theta^{(u)} < y_{t-d} \\ c^{(m)} + \phi^{(m)} y_{t-1} + \varepsilon_t^{(m)}, & \theta^{(l)} < y_{t-d} < \theta^{(u)} \\ c^{(l)} + \phi^{(l)} y_{t-1} + \varepsilon_t^{(l)}, & \theta^{(l)} > y_{t-d} \end{cases}$$

9. Regime Switching II: Markov-Switching Model. The regime is governed by a latent 2-state Markov process with transition matrix
$$M = \begin{pmatrix} p_{00} & 1 - p_{00} \\ 1 - p_{11} & p_{11} \end{pmatrix}.$$
Switching mean:
$$f(y_t | s_t) = \frac{1}{\sqrt{2\pi}\sigma} \exp\left( \frac{-(y_t - \mu_{s_t})^2}{2\sigma^2} \right).$$
Switching regression:
$$f(y_t | s_t) = \frac{1}{\sqrt{2\pi}\sigma} \exp\left( \frac{-(y_t - x_t'\beta_{s_t})^2}{2\sigma^2} \right).$$

10.8 Notes


Chapter 11 Serial Correlation in Observed Time Series

11.1 Characterizing Time-Series Dynamics

We’ve already considered models with trend and seasonal components. In this chapter we consider a crucial third component, cycles. When you think of a “cycle,” you probably think of the sort of rigid up-and-down pattern depicted in Figure ??. Such cycles can sometimes arise, but cyclical fluctuations in business, finance, economics and government are typically much less rigid. In fact, when we speak of cycles, we have in mind a much more general, all-encompassing, notion of cyclicality: any sort of dynamics not captured by trends or seasonals. Cycles, according to our broad interpretation, may display the sort of back-and-forth movement characterized in Figure ??, but they don’t have to. All we require is that there be some dynamics, some persistence, some way in which the present is linked to the past, and the future to the present. Cycles are present in most of the series that concern us, and it’s crucial that we know how to model and forecast them, because their history conveys information


regarding their future. Trend and seasonal dynamics are simple, so we can capture them with simple models. Cyclical dynamics, however, are more complicated. Because of the wide variety of cyclical patterns, the sorts of models we need are substantially more involved. Thus we split the discussion into two parts: first we develop methods for characterizing cycles, and then we introduce models of cycles. All of the material is crucial, and it's also a bit difficult the first time around because it's unavoidably rather mathematical, so careful, systematic study is required.

11.1.1 Covariance Stationary Time Series

A realization of a time series is an ordered set, {..., y−2 , y−1 , y0 , y1 , y2 , ...}. Typically the observations are ordered in time – hence the name time series – but they don’t have to be. We could, for example, examine a spatial series, such as office space rental rates as we move along a line from a point in midtown Manhattan to a point in the New York suburbs thirty miles away. But the most important case, by far, involves observations ordered in time, so that’s what we’ll stress. In theory, a time series realization begins in the infinite past and continues into the infinite future. This perspective may seem abstract and of limited practical applicability, but it will be useful in deriving certain very important properties of the models we’ll be using shortly. In practice, of course, the data we observe is just a finite subset of a realization, {y1 , ..., yT }, called a sample path. Shortly we’ll be building models for cyclical time series. If the underlying probabilistic structure of the series were changing over time, we’d be doomed – there would be no way to relate the future to the past, because the laws gov-


erning the future would differ from those governing the past. At a minimum we’d like a series’ mean and its covariance structure (that is, the covariances between current and past values) to be stable over time, in which case we say that the series is covariance stationary. Let’s discuss covariance stationarity in greater depth. The first requirement for a series to be covariance stationary is that the mean of the series be stable over time. The mean of the series at time t is Eyt = µt . If the mean is stable over time, as required by covariance stationarity, then we can write Eyt = µ, for all t. Because the mean is constant over time, there’s no need to put a time subscript on it. The second requirement for a series to be covariance stationary is that its covariance structure be stable over time. Quantifying stability of the covariance structure is a bit tricky, but tremendously important, and we do it using the autocovariance function. The autocovariance at displacement τ is just the covariance between yt and yt−τ . It will of course depend on τ , and it may also depend on t, so in general we write γ(t, τ ) = cov(yt , yt−τ ) = E(yt − µ)(yt−τ − µ). If the covariance structure is stable over time, as required by covariance stationarity, then the autocovariances depend only on displacement, τ , not on time, t, and we write γ(t, τ ) = γ(τ ), for all t. The autocovariance function is important because it provides a basic summary of cyclical dynamics in a covariance stationary series. By examining the autocovariance structure of a series, we learn about its dynamic behavior. We graph and examine the autocovariances as a function of τ . Note that the autocovariance function is symmetric; that is, γ(τ ) = γ(−τ ), for all τ . Typically, we’ll consider only non-negative values of τ . Symmetry reflects the fact that the autocovariance of a covariance stationary series depends only on displacement; it doesn’t matter whether we go forward or backward. Note also that γ(0) = cov(yt , yt ) = var(yt ).


There is one more technical requirement of covariance stationarity: we require that the variance of the series – the autocovariance at displacement 0, γ(0), be finite. It can be shown that no autocovariance can be larger in absolute value than γ(0), so if γ(0) < ∞, then so too are all the other autocovariances. It may seem that the requirements for covariance stationarity are quite stringent, which would bode poorly for our models, almost all of which invoke covariance stationarity in one way or another. It is certainly true that many economic, business, financial and government series are not covariance stationary. An upward trend, for example, corresponds to a steadily increasing mean, and seasonality corresponds to means that vary with the season, both of which are violations of covariance stationarity. But appearances can be deceptive. Although many series are not covariance stationary, it is frequently possible to work with models that give special treatment to nonstationary components such as trend and seasonality, so that the cyclical component that’s left over is likely to be covariance stationary. We’ll often adopt that strategy. Alternatively, simple transformations often appear to transform nonstationary series to covariance stationarity. For example, many series that are clearly nonstationary in levels appear covariance stationary in growth rates. In addition, note that although covariance stationarity requires means and covariances to be stable and finite, it places no restrictions on other aspects of the distribution of the series, such as skewness and kurtosis.1 The upshot is simple: whether we work directly in levels and include special components for the nonstationary elements of our models, or we work on transformed data such as growth rates, the covariance stationarity assumption is not as unrealistic as it may seem. Recall that the correlation between two random variables x and y is defined 1

For that reason, covariance stationarity is sometimes called second-order stationarity or weak stationarity.


by corr(x, y) =

$$\frac{cov(x, y)}{\sigma_x \sigma_y}.$$

That is, the correlation is simply the covariance, “normalized,” or “standardized,” by the product of the standard deviations of x and y. Both the correlation and the covariance are measures of linear association between two random variables. The correlation is often more informative and easily interpreted, however, because the construction of the correlation coefficient guarantees that corr(x, y) ∈ [−1, 1], whereas the covariance between the same two random variables may take any value. The correlation, moreover, does not depend on the units in which x and y are measured, whereas the covariance does. Thus, for example, if x and y have a covariance of ten million, they’re not necessarily very strongly associated, whereas if they have a correlation of .95, it is unambiguously clear that they are very strongly associated. In light of the superior interpretability of correlations as compared to covariances, we often work with the correlation, rather than the covariance, between yt and yt−τ . That is, we work with the autocorrelation function, ρ(τ ), rather than the autocovariance function, γ(τ ). The autocorrelation function is obtained by dividing the autocovariance function by the variance, ρ(τ ) =

$$\frac{\gamma(\tau)}{\gamma(0)}, \quad \tau = 0, 1, 2, ....$$

The formula for the autocorrelation is just the usual correlation formula, specialized to the correlation between $y_t$ and $y_{t-\tau}$. To see why, note that the variance of $y_t$ is $\gamma(0)$, and by covariance stationarity, the variance of y at any other time $y_{t-\tau}$ is also $\gamma(0)$. Thus,
$$\rho(\tau) = \frac{cov(y_t, y_{t-\tau})}{\sqrt{var(y_t)}\sqrt{var(y_{t-\tau})}} = \frac{\gamma(\tau)}{\sqrt{\gamma(0)}\sqrt{\gamma(0)}} = \frac{\gamma(\tau)}{\gamma(0)},$$
as claimed. Note that we always have $\rho(0) = \gamma(0)/\gamma(0) = 1$, because any series


is perfectly correlated with itself. Thus the autocorrelation at displacement 0 isn’t of interest; rather, only the autocorrelations beyond displacement 0 inform us about a series’ dynamic structure. Finally, the partial autocorrelation function, p(τ ), is sometimes useful. p(τ ) is just the coefficient of yt−τ in a population linear regression of yt on yt−1 , ..., yt−τ .2 We call such regressions autoregressions, because the variable is regressed on lagged values of itself. It’s easy to see that the autocorrelations and partial autocorrelations, although related, differ in an important way. The autocorrelations are just the “simple” or “regular” correlations between yt and yt−τ . The partial autocorrelations, on the other hand, measure the association between yt and yt−τ after controlling for the effects of yt−1 , ..., yt−τ +1 ; that is, they measure the partial correlation between yt and yt−τ . As with the autocorrelations, we often graph the partial autocorrelations as a function of τ and examine their qualitative shape, which we’ll do soon. Like the autocorrelation function, the partial autocorrelation function provides a summary of a series’ dynamics, but as we’ll see, it does so in a different way.3 All of the covariance stationary processes that we will study subsequently have autocorrelation and partial autocorrelation functions that approach zero, one way or another, as the displacement gets large. In Figure *** we show an autocorrelation function that displays gradual one-sided damping, and in Figure *** we show a constant autocorrelation function; the latter could not be the autocorrelation function of a stationary process, whose autocorrelation function must eventually decay. The precise decay patterns 2 To get a feel for what we mean by “population regression,” imagine that we have an infinite sample of data at our disposal, so that the parameter estimates in the regression are not contaminated by sampling variation – that is, they’re the true population values. The thought experiment just described is a population regression. 3 Also in parallel to the autocorrelation function, the partial autocorrelation at displacement 0 is always one and is therefore uninformative and uninteresting. Thus, when we graph the autocorrelation and partial autocorrelation functions, we’ll begin at displacement 1 rather than displacement 0.


of autocorrelations and partial autocorrelations of a covariance stationary series, however, depend on the specifics of the series. In Figure ***, for example, we show an autocorrelation function that displays damped oscillation – the autocorrelations are positive at first, then become negative for a while, then positive again, and so on, while continuously getting smaller in absolute value. Finally, in Figure *** we show an autocorrelation function that differs in the way it approaches zero – the autocorrelations drop abruptly to zero beyond a certain displacement.
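To connect these population concepts to data, here is a minimal Python sketch that computes sample autocorrelations and partial autocorrelations for a simulated persistent series; the simulated AR(1) process and the use of statsmodels' acf and pacf functions are illustrative assumptions, not part of the text.

import numpy as np
from statsmodels.tsa.stattools import acf, pacf

# Simulate a persistent covariance stationary series: y_t = 0.8 y_{t-1} + eps_t
rng = np.random.default_rng(0)
T = 500
eps = rng.normal(0, 1, T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.8 * y[t - 1] + eps[t]

# Sample autocorrelation and partial autocorrelation functions, displacements 1..10
print("ACF: ", acf(y, nlags=10)[1:])    # damps gradually toward zero
print("PACF:", pacf(y, nlags=10)[1:])   # cuts off after displacement 1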

11.2 White Noise

In this section we'll study the population properties of certain important time series models, or time series processes. Before we estimate time series models, we need to understand their population properties, assuming that the postulated model is true. The simplest of all such time series processes is the fundamental building block from which all others are constructed. In fact, it's so important that we introduce it now. We use y to denote the observed series of interest. Suppose that
$$y_t = \varepsilon_t, \quad \varepsilon_t \sim (0, \sigma^2),$$
where the "shock," $\varepsilon_t$, is uncorrelated over time. We say that $\varepsilon_t$, and hence $y_t$, is serially uncorrelated. Throughout, unless explicitly stated otherwise, we assume that $\sigma^2 < \infty$. Such a process, with zero mean, constant variance, and no serial correlation, is called zero-mean white noise, or simply white


noise.4 Sometimes for short we write εt ∼ W N (0, σ 2 ) and hence yt ∼ W N (0, σ 2 ). Note that, although εt and hence yt are serially uncorrelated, they are not necessarily serially independent, because they are not necessarily normally distributed.5 If in addition to being serially uncorrelated, y is serially independent, then we say that y is independent white noise.6 We write yt ∼ iid(0, σ 2 ), and we say that “y is independently and identically distributed with zero mean and constant variance.” If y is serially uncorrelated and normally distributed, then it follows that y is also serially independent, and we say that y is normal white noise, or Gaussian white noise.7 We write yt ∼ iidN (0, σ 2 ). We read “y is independently and identically distributed as normal, with zero mean and constant variance,” or simply “y is Gaussian white noise.” In Figure *** we show a sample path of Gaussian white noise, of length T = 150, simulated on a computer. There are no patterns of any kind in the series due to the independence over time. You’re already familiar with white noise, although you may not realize it. 4

It’s called white noise by analogy with white light, which is composed of all colors of the spectrum, in equal amounts. We can think of white noise as being composed of a wide variety of cycles of differing periodicities, in equal amounts. 5 Recall that zero correlation implies independence only in the normal case. 6 Another name for independent white noise is strong white noise, in contrast to standard serially uncorrelated weak white noise. 7 Carl Friedrich Gauss, one of the greatest mathematicians of all time, discovered the normal distribution some 200 years ago; hence the adjective “Gaussian.”
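A minimal sketch of the simulation described in the text (a Gaussian white noise sample path of length T = 150) might look as follows; the seed and the use of statsmodels' acf function are illustrative assumptions, and plotting is omitted.

import numpy as np
from statsmodels.tsa.stattools import acf

# Gaussian white noise: y_t ~ iid N(0, sigma^2), here with sigma = 1 and T = 150
rng = np.random.default_rng(42)
y = rng.normal(loc=0.0, scale=1.0, size=150)

# Sample mean and variance should be near 0 and 1, and the sample
# autocorrelations beyond displacement 0 should all be near zero.
print(y.mean(), y.var())
print(acf(y, nlags=5)[1:])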


Recall that the disturbance in a regression model is typically assumed to be white noise of one sort or another. There’s a subtle difference here, however. Regression disturbances are not observable, whereas we’re working with an observed series. Later, however, we’ll see how all of our models for observed series can be used to model unobserved variables such as regression disturbances. Let’s characterize the dynamic stochastic structure of white noise, yt ∼ W N (0, σ 2 ). By construction the unconditional mean of y is E(yt ) = 0, and the unconditional variance of y is var(yt ) = σ 2 . Note that the unconditional mean and variance are constant. In fact, the unconditional mean and variance must be constant for any covariance stationary process. The reason is that constancy of the unconditional mean was our first explicit requirement of covariance stationarity, and that constancy of the unconditional variance follows implicitly from the second requirement of covariance stationarity, that the autocovariances depend only on displacement, not on time.8 To understand fully the linear dynamic structure of a covariance stationary time series process, we need to compute and examine its mean and its autocovariance function. For white noise, we’ve already computed the mean and the variance, which is the autocovariance at displacement 0. We have yet to compute the rest of the autocovariance function; fortunately, however, it’s very simple. Because white noise is, by definition, uncorrelated over time, all the autocovariances, and hence all the autocorrelations, are zero beyond displacement 0.9 Formally, then, the autocovariance function for a white noise process is γ(τ ) =

σ²,  τ = 0
0,   τ ≥ 1,

[8] Recall that σ² = γ(0).
[9] If the autocovariances are all zero, so are the autocorrelations, because the autocorrelations are proportional to the autocovariances.


and the autocorrelation function for a white noise process is

ρ(τ) = 1,  τ = 0
       0,  τ ≥ 1.

In Figure *** we plot the white noise autocorrelation function. Finally, consider the partial autocorrelation function for a white noise series. For the same reason that the autocorrelation at displacement 0 is always one, so too is the partial autocorrelation at displacement 0. For a white noise process, all partial autocorrelations beyond displacement 0 are zero, which again follows from the fact that white noise, by construction, is serially uncorrelated. Population regressions of yt on yt−1, or on yt−1 and yt−2, or on any other lags, produce nothing but zero coefficients, because the process is serially uncorrelated. Formally, the partial autocorrelation function of a white noise process is

p(τ) = 1,  τ = 0
       0,  τ ≥ 1.

We show the partial autocorrelation function of a white noise process in Figure ***. Again, it’s degenerate, and exactly the same as the autocorrelation function! White noise is very special, indeed degenerate in a sense, as what happens to a white noise series at any time is uncorrelated with anything in the past, and similarly, what happens in the future is uncorrelated with anything in the present or past. But understanding white noise is tremendously important for at least two reasons. First, as already mentioned, processes with much richer dynamics are built up by taking simple transformations of white noise. Second, the goal of all time series modeling (and 1-step-ahead forecasting)


is to reduce the data (or 1-step-ahead forecast errors) to white noise. After all, if such forecast errors aren’t white noise, then they’re serially correlated, which means that they’re forecastable, and if forecast errors are forecastable then the forecast can’t be very good. Thus it’s important that we understand and be able to recognize white noise. Thus far we’ve characterized white noise in terms of its mean, variance, autocorrelation function and partial autocorrelation function. Another characterization of dynamics involves the mean and variance of a process, conditional upon its past. In particular, we often gain insight into the dynamics in a process by examining its conditional mean.10 In fact, throughout our study of time series, we’ll be interested in computing and contrasting the unconditional mean and variance and the conditional mean and variance of various processes of interest. Means and variances, which convey information about location and scale of random variables, are examples of what statisticians call moments. For the most part, our comparisons of the conditional and unconditional moment structure of time series processes will focus on means and variances (they’re the most important moments), but sometimes we’ll be interested in higher-order moments, which are related to properties such as skewness and kurtosis. For comparing conditional and unconditional means and variances, it will simplify our story to consider independent white noise, yt ∼ iid(0, σ 2 ). By the same arguments as before, the unconditional mean of y is 0 and the unconditional variance is σ 2 . Now consider the conditional mean and variance, where the information set Ωt−1 upon which we condition contains either the past history of the observed series, Ωt−1 = yt−1 , yt−2 , ..., or the past history of the shocks, Ωt−1 = εt−1 , εt−2 .... (They’re the same in the white noise case.) In contrast to the unconditional mean and variance, which must be constant by covariance stationarity, the conditional mean and variance need not be 10

If you need to refresh your memory on conditional means, consult any good introductory statistics book, such as Wonnacott and Wonnacott (1990).


constant, and in general we’d expect them not to be constant. The unconditionally expected growth of laptop computer sales next quarter may be ten percent, but expected sales growth may be much higher, conditional upon knowledge that sales grew this quarter by twenty percent. For the independent white noise process, the conditional mean is E(yt |Ωt−1 ) = 0, and the conditional variance is var(yt |Ωt−1 ) = E[(yt − E(yt |Ωt−1 ))2 |Ωt−1 ] = σ 2 . Conditional and unconditional means and variances are identical for an independent white noise series; there are no dynamics in the process, and hence no dynamics in the conditional moments.

11.3 Estimation and Inference for the Mean, Autocorrelation and Partial Autocorrelation Functions

Now suppose we have a sample of data on a time series, and we don't know the true model that generated the data, or the mean, autocorrelation function or partial autocorrelation function associated with that true model. Instead, we want to use the data to estimate the mean, autocorrelation function, and partial autocorrelation function, which we might then use to help us learn about the underlying dynamics, and to decide upon a suitable model or set of models to fit to the data.

11.3.1 Sample Mean

The mean of a covariance stationary series is µ = Eyt .


A fundamental principle of estimation, called the analog principle, suggests that we develop estimators by replacing expectations with sample averages. Thus our estimator for the population mean, given a sample of size T, is the sample mean,

ȳ = (1/T) Σ_{t=1}^{T} yt.

Typically we're not directly interested in the estimate of the mean, but it's needed for estimation of the autocorrelation function.

11.3.2 Sample Autocorrelations

The autocorrelation at displacement τ for the covariance stationary series y is

ρ(τ) = E[(yt − µ)(yt−τ − µ)] / E[(yt − µ)²].

Application of the analog principle yields a natural estimator,

ρ̂(τ) = [(1/T) Σ_{t=τ+1}^{T} (yt − ȳ)(yt−τ − ȳ)] / [(1/T) Σ_{t=1}^{T} (yt − ȳ)²]
      = [Σ_{t=τ+1}^{T} (yt − ȳ)(yt−τ − ȳ)] / [Σ_{t=1}^{T} (yt − ȳ)²].

This estimator, viewed as a function of τ , is called the sample autocorrelation function, or correlogram. Note that some of the summations begin at t = τ + 1, not at t = 1; this is necessary because of the appearance of yt−τ in the sum. Note that we divide those same sums by T , even though only T − τ terms appear in the sum. When T is large relative to τ (which is the relevant case), division by T or by T − τ will yield approximately the same result, so it won’t make much difference for practical purposes, and moreover there are good mathematical reasons for preferring division by T . It’s often of interest to assess whether a series is reasonably approximated as white noise, which is to say whether all its autocorrelations are zero in population. A key result, which we simply assert, is that if a series is white noise, then the distribution of the sample autocorrelations in large samples

is

ρ̂(τ) ∼ N(0, 1/T).

Note how simple the result is. The sample autocorrelations of a white noise series are approximately normally distributed, and the normal is always a convenient distribution to work with. Their mean is zero, which is to say the sample autocorrelations are unbiased estimators of the true autocorrelations, which are in fact zero. Finally, the variance of the sample autocorrelations is approximately 1/T (equivalently, the standard deviation is 1/√T), which is easy to construct and remember. Under normality, taking plus or minus two standard errors yields an approximate 95% confidence interval. Thus, if the series is white noise, approximately 95% of the sample autocorrelations should fall in the interval 0 ± 2/√T. In practice, when we plot the sample autocorrelations for a sample of data, we typically include the "two standard error bands," which are useful for making informal graphical assessments of whether and how the series deviates from white noise. The two-standard-error bands, although very useful, only provide 95% bounds for the sample autocorrelations taken one at a time. Ultimately, we're often interested in whether a series is white noise, that is, whether all its autocorrelations are jointly zero. A simple extension lets us test that hypothesis. Rewrite the expression ρ̂(τ) ∼ N(0, 1/T) as

√T ρ̂(τ) ∼ N(0, 1).


Squaring both sides yields[11]

T ρ̂²(τ) ∼ χ²₁.

It can be shown that, in addition to being approximately normally distributed, the sample autocorrelations at various displacements are approximately independent of one another. Recalling that the sum of independent χ² variables is also χ² with degrees of freedom equal to the sum of the degrees of freedom of the variables summed, we have shown that the Box-Pierce Q-statistic,

QBP = T Σ_{τ=1}^{m} ρ̂²(τ),

is approximately distributed as a χ²m random variable under the null hypothesis that y is white noise.[12] A slight modification of this, designed to follow more closely the χ² distribution in small samples, is

QLB = T(T + 2) Σ_{τ=1}^{m} (1/(T − τ)) ρ̂²(τ).

Under the null hypothesis that y is white noise, QLB is approximately distributed as a χ²m random variable. Note that the Ljung-Box Q-statistic is the same as the Box-Pierce Q-statistic, except that the sum of squared autocorrelations is replaced by a weighted sum of squared autocorrelations, where the weights are (T + 2)/(T − τ). For moderate and large T, the weights are approximately 1, so that the Ljung-Box statistic differs little from the Box-Pierce statistic. Selection of m is done to balance competing criteria. On one hand, we don't want m too small, because after all, we're trying to do a joint test on a large part of the autocorrelation function. On the other hand, as m grows relative to T, the quality of the distributional approximations we've invoked deteriorates. In practice, focusing on m in the neighborhood of √T is often reasonable.

[11] Recall that the square of a standard normal random variable is a χ² random variable with one degree of freedom. We square the sample autocorrelations ρ̂(τ) so that positive and negative values don't cancel when we sum across various values of τ, as we will soon do.
[12] m is a maximum displacement selected by the user. Shortly we'll discuss how to choose it.
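The sample autocorrelations and the Q statistics above are simple enough to compute by hand. The sketch below, which is illustrative only and not from the text, assumes Python with numpy and statsmodels; it computes the correlogram with the T-divisor convention described above, the two-standard-error bands, and the Box-Pierce and Ljung-Box statistics, and compares them with statsmodels' built-in acorr_ljungbox.

```python
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(1)
T = 150
y = rng.normal(size=T)                        # white noise under the null

def sample_acf(y, max_lag):
    """Sample autocorrelations, dividing both sums by T as in the text."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    ybar = y.mean()
    denom = np.sum((y - ybar) ** 2)           # the 1/T factors cancel in the ratio
    return np.array([np.sum((y[tau:] - ybar) * (y[:T - tau] - ybar)) / denom
                     for tau in range(1, max_lag + 1)])

m = int(np.sqrt(T))                           # m in the neighborhood of sqrt(T)
rho_hat = sample_acf(y, m)
band = 2.0 / np.sqrt(T)                       # two-standard-error bands
print("sample acf outside bands:", int(np.sum(np.abs(rho_hat) > band)), "of", m)

taus = np.arange(1, m + 1)
Q_BP = T * np.sum(rho_hat ** 2)                       # Box-Pierce
Q_LB = T * (T + 2) * np.sum(rho_hat ** 2 / (T - taus))  # Ljung-Box
print("Q_BP:", round(Q_BP, 2), "  Q_LB:", round(Q_LB, 2))

# statsmodels computes the same statistics and their chi-square p-values.
print(acorr_ljungbox(y, lags=m, boxpierce=True))
```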

11.3.3 Sample Partial Autocorrelations

Recall that the partial autocorrelations are obtained from population linear regressions, which correspond to a thought experiment involving linear regression using an infinite sample of data. The sample partial autocorrelations correspond to the same thought experiment, except that the linear regression is now done on the (feasible) sample of size T . If the fitted regression is yˆt = cˆ + βˆ1 yt−1 + ... + βˆτ yt−τ , then the sample partial autocorrelation at displacement τ is pˆ(τ ) ≡ βˆτ . Distributional results identical to those we discussed for the sample autocorrelations hold as well for the sample partial autocorrelations. That is, if the series is white noise, approximately 95% of the sample partial autocorre√ lations should fall in the interval ±2/ T . As with the sample autocorrelations, we typically plot the sample partial autocorrelations along with their two-standard-error bands. A “correlogram analysis” simply means examination of the sample autocorrelation and partial autocorrelation functions (with two standard error bands), together with related diagnostics, such as Q statistics. We don’t show the sample autocorrelation or partial autocorrelation at displacement 0, because as we mentioned earlier, they equal 1.0, by construction, and therefore convey no useful information. We’ll adopt this convention


throughout. Note that the sample autocorrelation and partial autocorrelation are identical at displacement 1. That’s because at displacement 1, there are no earlier lags to control for when computing the sample partial autocorrelation, so it equals the sample autocorrelation. At higher displacements, of course, the two diverge.
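Because the sample partial autocorrelation at displacement τ is defined as the last coefficient in a fitted autoregression on τ lags, it is easy to compute directly. The sketch below is illustrative only and not from the text; it assumes Python with numpy, and the function name sample_pacf is made up for the illustration.

```python
import numpy as np

def sample_pacf(y, tau):
    """p_hat(tau): the coefficient on y_{t-tau} in an OLS regression of y_t
    on a constant and its first tau lags (the regression-based definition)."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    Y = y[tau:]                                      # y_t for t = tau+1, ..., T
    X = np.column_stack([np.ones(T - tau)] +
                        [y[tau - j:T - j] for j in range(1, tau + 1)])  # lags 1..tau
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return beta[-1]                                  # coefficient on the deepest lag

rng = np.random.default_rng(2)
y = rng.normal(size=200)
# At displacement 1 there are no earlier lags to control for, so the sample
# partial autocorrelation matches the sample autocorrelation (up to minor
# differences in finite-sample conventions); at higher displacements they diverge.
print("pacf(1):", round(sample_pacf(y, 1), 3))
print("pacf(2):", round(sample_pacf(y, 2), 3))
```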

11.4 Autoregressive Models for Serially-Correlated Time Series

11.4.1 Some Preliminary Notation: The Lag Operator

The lag operator and related constructs are the natural language in which time series models are expressed. If you want to understand and manipulate time series models – indeed, even if you simply want to be able to read the software manuals – you have to be comfortable with the lag operator. The lag operator, L, is very simple: it “operates” on a series by lagging it. Hence Lyt = yt−1 . Similarly, L2 yt = L(L(yt )) = L(yt−1 ) = yt−2 , and so on. Typically we’ll operate on a series not with the lag operator but with a polynomial in the lag operator. A lag operator polynomial of degree m is just a linear function of powers of L, up through the m-th power, B(L) = b0 + b1 L + b2 L2 + ...bm Lm . To take a very simple example of a lag operator polynomial operating on a series, consider the m-th order lag operator polynomial Lm , for which Lm yt = yt−m . A well-known operator, the first-difference operator ∆, is actually a first-order


polynomial in the lag operator; you can readily verify that ∆yt = (1 − L)yt = yt − yt−1 . As a final example, consider the second-order lag operator polynomial 1 + .9L + .6L2 operating on yt . We have (1 + .9L + .6L2 )yt = yt + .9yt−1 + .6yt−2 , which is a weighted sum, or distributed lag, of current and past values. All time-series models, one way or another, must contain such distributed lags, because they’ve got to quantify how the past evolves into the present and future; hence lag operator notation is a useful shorthand for stating and manipulating time-series models. Thus far we’ve considered only finite-order polynomials in the lag operator; it turns out that infinite-order polynomials are also of great interest. We write the infinite-order lag operator polynomial as 2

B(L) = b0 + b1 L + b2 L² + ... = Σ_{i=0}^{∞} bi L^i.

Thus, for example, to denote an infinite distributed lag of current and past shocks we might write

B(L)εt = b0 εt + b1 εt−1 + b2 εt−2 + ... = Σ_{i=0}^{∞} bi εt−i.

At first sight, infinite distributed lags may seem esoteric and of limited practical interest, because models with infinite distributed lags have infinitely many parameters (b0 , b1 , b2 , ...) and therefore can’t be estimated with a finite sample of data. On the contrary, and surprisingly, it turns out that models involving infinite distributed lags are central to time series modeling. Wold’s theorem, to which we now turn, establishes that centrality.
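Applying a finite lag operator polynomial to a series is nothing more than forming the corresponding distributed lag. The short sketch below is illustrative only and not from the text; it assumes Python with numpy, and the helper name apply_lag_polynomial is made up for the illustration.

```python
import numpy as np

def apply_lag_polynomial(coefs, y):
    """Apply B(L) = b0 + b1*L + ... + bm*L^m to a series, returning
    sum_j b_j * y_{t-j}. The first m observations are lost because their
    required lags are unavailable."""
    m = len(coefs) - 1
    y = np.asarray(y, dtype=float)
    out = np.zeros(len(y) - m)
    for j, b in enumerate(coefs):
        out += b * y[m - j:len(y) - j]
    return out

rng = np.random.default_rng(3)
y = rng.normal(size=10)
# The second-order example from the text:
# (1 + .9L + .6L^2) y_t = y_t + .9 y_{t-1} + .6 y_{t-2}
dl = apply_lag_polynomial([1.0, 0.9, 0.6], y)
print(dl[0], "should equal", y[2] + 0.9 * y[1] + 0.6 * y[0])
```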


11.4.2 Autoregressions

When building models, we don’t want to pretend that the model we fit is true. Instead, we want to be aware that we’re approximating a more complex reality. That’s the modern view, and it has important implications for timeseries modeling. In particular, the key to successful time series modeling is parsimonious, yet accurate, approximations. Here we emphasize a very important class of approximations, the autoregressive (AR) model. We begin by characterizing the autocorrelation function and related quantities under the assumption that the AR model is “true.”13 These characterizations have nothing to do with data or estimation, but they’re crucial for developing a basic understanding of the properties of the models, which is necessary to perform intelligent modeling. They enable us to make statements such as “If the data were really generated by an autoregressive process, then we’d expect its autocorrelation function to have property x.” Armed with that knowledge, we use the sample autocorrelations and partial autocorrelations, in conjunction with the AIC and the SIC, to suggest candidate models, which we then estimate. The autoregressive process is a natural approximation to time-series dynamics. It’s simply a stochastic difference equation, a simple mathematical model in which the current value of a series is linearly related to its past values, plus an additive stochastic shock. Stochastic difference equations are a natural vehicle for discrete-time stochastic dynamic modeling. The AR(1) Process

The first-order autoregressive process, AR(1) for short, is

yt = φyt−1 + εt

[13] Sometimes, especially when characterizing population properties under the assumption that the models are correct, we refer to them as processes, which is short for stochastic processes.

εt ∼ WN(0, σ²).

In lag operator form, we write (1 − φL)yt = εt. In Figure *** we show simulated realizations of length 150 of two AR(1) processes; the first is yt = .4yt−1 + εt, and the second is yt = .95yt−1 + εt, where in each case εt ∼ iidN(0, 1), and the same innovation sequence underlies each realization. The fluctuations in the AR(1) with parameter φ = .95 appear much more persistent than those of the AR(1) with parameter φ = .4. Thus the AR(1) model is capable of capturing highly persistent dynamics. Certain conditions must be satisfied for an autoregressive process to be covariance stationary. If we begin with the AR(1) process, yt = φyt−1 + εt, and substitute backward for lagged y's on the right side, we obtain

yt = εt + φεt−1 + φ²εt−2 + ...

In lag operator form we write

yt = (1/(1 − φL)) εt.

This moving average representation for y is convergent if and only if |φ| < 1


Thus |φ| < 1 is the condition for covariance stationarity in the AR(1) case. Equivalently, the condition for covariance stationarity is that the inverse of the root of the autoregressive lag operator polynomial be less than one in absolute value. From the moving average representation of the covariance stationary AR(1) process, we can compute the unconditional mean and variance,

E(yt) = E(εt + φεt−1 + φ²εt−2 + ...)
      = E(εt) + φE(εt−1) + φ²E(εt−2) + ...
      = 0

and

var(yt) = var(εt + φεt−1 + φ²εt−2 + ...)
        = σ² + φ²σ² + φ⁴σ² + ...
        = σ² Σ_{i=0}^{∞} φ^{2i}
        = σ²/(1 − φ²).

The conditional moments, in contrast, are E(yt |yt−1 ) = E(φyt−1 + εt |yt−1 ) = φE(yt−1 |yt−1 ) + E(εt |yt−1 ) = φyt−1 + 0 = φyt−1


and

var(yt|yt−1) = var((φyt−1 + εt)|yt−1) = φ² var(yt−1|yt−1) + var(εt|yt−1) = 0 + σ² = σ².

Note in particular the simple way that the conditional mean adapts to the changing information set as the process evolves. To find the autocovariances, we proceed as follows. The process is yt = φyt−1 + εt, so that multiplying both sides of the equation by yt−τ we obtain

yt yt−τ = φyt−1 yt−τ + εt yt−τ.

For τ ≥ 1, taking expectations of both sides gives γ(τ) = φγ(τ − 1). This is called the Yule-Walker equation. It is a recursive equation; that is, given γ(τ), for any τ, the Yule-Walker equation immediately tells us how to get γ(τ + 1). If we knew γ(0) to start things off (an "initial condition"), we could use the Yule-Walker equation to determine the entire autocovariance sequence. And we do know γ(0); it's just the variance of the process, which we already showed to be γ(0) = σ²/(1 − φ²). Thus we have

γ(0) = σ²/(1 − φ²)
γ(1) = φ σ²/(1 − φ²)
γ(2) = φ² σ²/(1 − φ²),

and so on. In general, then,

γ(τ) = φ^τ σ²/(1 − φ²),  τ = 0, 1, 2, ....

Dividing through by γ(0) gives the autocorrelations,

ρ(τ) = φ^τ,  τ = 0, 1, 2, ....

Note the gradual autocorrelation decay, which is typical of autoregressive processes. The autocorrelations approach zero, but only in the limit as the displacement approaches infinity. In particular, they don't cut off to zero, as is the case for moving average processes. If φ is positive, the autocorrelation decay is one-sided. If φ is negative, the decay involves back-and-forth oscillations. The relevant case in business and economics is φ > 0, but either way, the autocorrelations damp gradually, not abruptly. In Figures *** and *** we show the autocorrelation functions for AR(1) processes with parameters φ = .4 and φ = .95. The persistence is much stronger when φ = .95. Finally, the partial autocorrelation function for the AR(1) process cuts off abruptly; specifically,

p(τ) = φ,  τ = 1
       0,  τ > 1.

It’s easy to see why. The partial autocorrelations are just the last coefficients in a sequence of successively longer population autoregressions. If the true process is in fact an AR(1), the first partial autocorrelation is just the autoregressive coefficient, and coefficients on all longer lags are zero. In Figures *** and *** we show the partial autocorrelation functions for our two AR(1) processes. At displacement 1, the partial autocorrelations are


simply the parameters of the process (.4 and .95, respectively), and at longer displacements, the partial autocorrelations are zero.

More on the Stability Condition in AR(1)

The key stability condition is |φ| < 1. Recall yt = Σ_{j=0}^{∞} φ^j εt−j, which implies var(yt) = Σ_{j=0}^{∞} φ^{2j} σ². This is the sum of a geometric series. Hence:

var(yt) = σ²/(1 − φ²) if |φ| < 1
var(yt) = ∞ otherwise.

A More Complete Picture of AR(1) Stability (On Your Own)

– Series yt is persistent but eventually reverts to a fixed mean
– Shocks εt have persistent effects but eventually die out
  Hint: Consider yt = µ + Σ_{j=0}^{∞} φ^j εt−j, |φ| < 1
– Autocorrelations ρ(τ) nonzero but decay to zero
– Autocorrelations ρ(τ) depend on τ (of course) but not on time
  Hint: Use back substitution to relate yt and yt−2. How does it compare to the relation between yt and yt−1 when |φ| < 1?
– Series yt varies but not too extremely
  Hint: Consider var(yt) = σ²/(1 − φ²), |φ| < 1

All of this makes for a nice, stable environment: "covariance stationarity."
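The AR(1) properties just derived are easy to confirm by simulation. The sketch below is illustrative only and not from the text; it assumes Python with numpy and statsmodels, uses the same innovation sequence for both realizations as in the text's figures, and compares the sample autocorrelations with the theoretical φ^τ decay and the sample partial autocorrelations with the cutoff at displacement 1.

```python
import numpy as np
from statsmodels.tsa.stattools import acf, pacf

rng = np.random.default_rng(4)
T = 150
eps = rng.normal(size=T)               # one innovation sequence for both processes

def simulate_ar1(phi, eps):
    """Simulate y_t = phi*y_{t-1} + eps_t, starting from y_0 = eps_0."""
    y = np.empty(len(eps))
    y[0] = eps[0]
    for t in range(1, len(eps)):
        y[t] = phi * y[t - 1] + eps[t]
    return y

for phi in (0.4, 0.95):
    y = simulate_ar1(phi, eps)
    print(f"phi = {phi}")
    print("  sample acf(1..4): ", np.round(acf(y, nlags=4)[1:], 2))
    print("  theory phi^tau:   ", np.round(phi ** np.arange(1, 5), 2))
    print("  sample pacf(1..4):", np.round(pacf(y, nlags=4)[1:], 2))
```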


11.4.3 The AR(p) Process

The general p-th order autoregressive process, or AR(p) for short, is yt = φ1 yt−1 + φ2 yt−2 + ... + φp yt−p + εt εt ∼ W N (0, σ 2 ). In lag operator form we write Φ(L)yt = (1 − φ1 L − φ2 L2 − ... − φp Lp )yt = εt. In our discussion of the AR(p) process we dispense with mathematical derivations and instead rely on parallels with the AR(1) case to establish intuition for its key properties. An AR(p) process is covariance stationary if and only if the inverses of all roots of the autoregressive lag operator polynomial Φ(L) are inside the unit circle.14 In the covariance stationary case we can write the process in the convergent infinite moving average form yt =

(1/Φ(L)) εt.

The autocorrelation function for the general AR(p) process, as with that of the AR(1) process, decays gradually with displacement. Finally, the AR(p) partial autocorrelation function has a sharp cutoff at displacement p, for the same reason that the AR(1) partial autocorrelation function has a sharp cutoff at displacement 1. Let's discuss the AR(p) autocorrelation function in a bit greater depth. The key insight is that, in spite of the fact that its qualitative behavior (gradual damping) matches that of the AR(1) autocorrelation function,

[14] A necessary condition for covariance stationarity, which is often useful as a quick check, is Σ_{i=1}^{p} φi < 1. If the condition is satisfied, the process may or may not be stationary, but if the condition is violated, the process can't be stationary.


it can nevertheless display a richer variety of patterns, depending on the order and parameters of the process. It can, for example, have damped monotonic decay, as in the AR(1) case with a positive coefficient, but it can also have damped oscillation in ways that AR(1) can't have. In the AR(1) case, the only possible oscillation occurs when the coefficient is negative, in which case the autocorrelations switch signs at each successively longer displacement. In higher-order autoregressive models, however, the autocorrelations can oscillate with much richer patterns reminiscent of cycles in the more traditional sense. This occurs when some roots of the autoregressive lag operator polynomial are complex.[15] Consider, for example, the AR(2) process,

yt = 1.5yt−1 − .9yt−2 + εt.

The corresponding lag operator polynomial is 1 − 1.5L + .9L², with two complex conjugate roots, .83 ± .65i. The inverse roots are .75 ± .58i, both of which are close to, but inside, the unit circle; thus the process is covariance stationary. It can be shown that the autocorrelation function for an AR(2) process is

ρ(0) = 1
ρ(1) = φ1/(1 − φ2)
ρ(τ) = φ1 ρ(τ − 1) + φ2 ρ(τ − 2),  τ = 2, 3, ....
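The roots of the lag operator polynomial and the autocorrelation recursion above can be evaluated numerically. The sketch below is illustrative only and not from the text; it assumes Python with numpy and uses the text's AR(2) example.

```python
import numpy as np

phi1, phi2 = 1.5, -0.9                     # y_t = 1.5 y_{t-1} - .9 y_{t-2} + eps_t

# Roots of 1 - 1.5 L + .9 L^2 (numpy expects coefficients from highest power down).
roots = np.roots([0.9, -1.5, 1.0])
print("roots:        ", np.round(roots, 2))
print("inverse roots:", np.round(1.0 / roots, 2))
print("|inverse root|:", np.round(np.abs(1.0 / roots), 3))  # < 1 => covariance stationary

# Autocorrelation function via the recursion given in the text.
nlags = 24
rho = np.empty(nlags + 1)
rho[0] = 1.0
rho[1] = phi1 / (1.0 - phi2)
for tau in range(2, nlags + 1):
    rho[tau] = phi1 * rho[tau - 1] + phi2 * rho[tau - 2]
print(np.round(rho[:13], 2))               # damped oscillation, as described
```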

Using this formula, we can evaluate the autocorrelation function for the process at hand; we plot it in Figure ***. Because the roots are complex, the autocorrelation function oscillates, and because the roots are close to the unit circle, the oscillation damps slowly. Finally, let's step back once again to consider in greater detail the precise way that finite-order autoregressive processes approximate the Wold representation. As always, the Wold representation is yt = B(L)εt, where B(L) is of infinite order. The moving average representation associated with the AR(1) process is

yt = (1/(1 − φL)) εt.

[15] Note that complex roots can't occur in the AR(1) case.

Thus, when we fit an AR(1) model, we're using 1/(1 − φL), a rational polynomial with degenerate numerator polynomial (degree zero) and denominator polynomial of degree one, to approximate B(L). The moving average representation associated with the AR(1) process is of infinite order, as is the Wold representation, but it does not have infinitely many free coefficients. In fact, only one parameter, φ, underlies it. The AR(p) is an obvious generalization of the AR(1) strategy for approximating the Wold representation. The moving average representation associated with the AR(p) process is

yt = (1/Φ(L)) εt.

When we fit an AR(p) model to approximate the Wold representation we're still using a rational polynomial with degenerate numerator polynomial (degree zero), but the denominator polynomial is of higher degree.

11.4.4 Alternative Approaches to Estimating Autoregressions

We can estimate autoregressions directly by OLS. Alternatively, we can write the AR model as a regression on an intercept, with a serially correlated disturbance. We have yt = µ + ε t


Φ(L)εt = vt
vt ∼ WN(0, σ²).

We can estimate each model in identical fashion using nonlinear least squares. Eviews and other packages proceed in precisely that way.[16] This framework – regression on a constant with serially correlated disturbances – has a number of attractive features. First, the mean of the process is the regression constant term.[17] Second, it leads us naturally toward regression on more than just a constant, as other right-hand side variables can be added as desired.

Non-Zero Mean I (AR(1) Example): Regression on an Intercept and yt−1, with White Noise Disturbances

(yt − µ) = φ(yt−1 − µ) + εt
εt ∼ iidN(0, σ²), |φ| < 1
=⇒ yt = c + φyt−1 + εt, where c = µ(1 − φ)

Back-substitution reveals that

yt = µ + Σ_{j=0}^{∞} φ^j εt−j
=⇒ E(yt) = µ

Non-Zero Mean II (AR(1) Example, Cont'd): Regression on an Intercept Alone, with AR(1) Disturbances

[16] That's why, for example, information on the number of iterations required for convergence is presented even for estimation of the autoregressive model.
[17] Hence the notation "µ" for the intercept.


yt = µ + εt
εt = φεt−1 + vt
vt ∼ iidN(0, σ²), |φ| < 1
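The "regression on an intercept and yt−1" form of the AR(1) with a nonzero mean is straightforward to estimate by OLS. The sketch below is illustrative only and not from the text; it assumes Python with numpy and statsmodels, the parameter values are made up, and the ordering of the estimated parameters (intercept first, then the AR coefficient) follows statsmodels' convention for AutoReg with a constant.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

# Simulate (y_t - mu) = phi*(y_{t-1} - mu) + eps_t with mu = 10, phi = 0.8,
# i.e., y_t = c + phi*y_{t-1} + eps_t with c = mu*(1 - phi).
rng = np.random.default_rng(5)
mu, phi, T = 10.0, 0.8, 2000
y = np.empty(T)
y[0] = mu
for t in range(1, T):
    y[t] = mu * (1 - phi) + phi * y[t - 1] + rng.normal()

res = AutoReg(y, lags=1, trend="c").fit()     # OLS on an intercept and y_{t-1}
c_hat, phi_hat = res.params[0], res.params[1]
print("phi_hat:", round(phi_hat, 3))
print("implied mean c_hat/(1 - phi_hat):", round(c_hat / (1 - phi_hat), 3))  # near mu
```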

11.5 Exercises, Problems and Complements

1. (Autocorrelation functions of covariance stationary series) While interviewing at a top investment bank, your interviewer is impressed by the fact that you have taken a course on time series. She decides to test your knowledge of the autocovariance structure of covariance stationary series and lists the following autocovariance functions:
a. γ(t, τ) = α
b. γ(t, τ) = e^(−ατ)
c. γ(t, τ) = ατ
d. γ(t, τ) = α/τ,
where α is a positive constant. Which autocovariance function(s) are consistent with covariance stationarity, and which are not? Why?

2. (Autocorrelation vs. partial autocorrelation) Describe the difference between autocorrelations and partial autocorrelations. How can autocorrelations at certain displacements be positive while the partial autocorrelations at those same displacements are negative?

3. (Simulating time series processes)


Many cutting-edge estimation techniques involve simulation. Moreover, simulation is often a good way to get a feel for a model and its behavior. White noise can be simulated on a computer using random number generators, which are available in most statistics, econometrics and forecasting packages. a. Simulate a Gaussian white noise realization of length 200. Call the white noise εt . Compute the correlogram. Discuss. b. Form the distributed lag yt = εt + .9εt−1 , t = 2, 3, ..., 200. Compute the sample autocorrelations and partial autocorrelations. Discuss. c. Let y1 = 1 and yt = .9yt−1 + εt , t = 2, 3, ..., 200. Compute the sample autocorrelations and partial autocorrelations. Discuss. 4. (Sample autocorrelation functions for trending series) A tell-tale sign of the slowly-evolving nonstationarity associated with trend is a sample autocorrelation function that damps extremely slowly. a. Find three trending series, compute their sample autocorrelation functions, and report your results. Discuss. b. Fit appropriate trend models, obtain the model residuals, compute their sample autocorrelation functions, and report your results. Discuss. 5. (Sample autocorrelation functions for seasonal series) A tell-tale sign of seasonality is a sample autocorrelation function with sharp peaks at the seasonal displacements (4, 8, 12, etc. for quarterly data, 12, 24, 36, etc. for monthly data, and so on). a. Find a series with both trend and seasonal variation. Compute its sample autocorrelation function. Discuss.


b. Detrend the series. Discuss. c. Compute the sample autocorrelation function of the detrended series. Discuss. d. Seasonally adjust the detrended series. Discuss. e. Compute the sample autocorrelation function of the detrended, seasonallyadjusted series. Discuss. 6. (Outliers in Time Series) Outliers can arise for a number of reasons. Perhaps the outlier is simply a mistake due to a clerical recording error, in which case you’d want to replace the incorrect data with the correct data. We’ll call such outliers measurement outliers, because they simply reflect measurement errors. In a time-series context, if a particular value of a recorded series is plagued by a measurement outlier, there’s no reason why observations at other times should necessarily be affected. Alternatively, outliers in time series may be associated with large unanticipated shocks, the effects of which may certainly linger. If, for example, an adverse shock hits the U.S. economy this quarter (e.g., the price of oil on the world market triples) and the U.S. plunges into a severe depression, then it’s likely that the depression will persist for some time. Such outliers are called innovation outliers, because they’re driven by shocks, or “innovations,” whose effects naturally last more than one period due to the dynamics operative in business, economic, and financial series.

11.6 Notes


Chapter 12

Serial Correlation in Time Series Regression

Recall the full ideal conditions. Here we deal with violation of the assumption that ***. Consider:

ε ∼ N(0, Ω).

The FIC case is Ω = σ²I. When is Ω ≠ σ²I? We've already seen heteroskedasticity. Now we consider "serial correlation," or "autocorrelation": εt is correlated with εt−τ. It can arise for many reasons, but they all boil down to the same thing: the included X variables fail to capture all the dynamics in y. No additional explanation needed!

On Ω with Heteroskedasticity vs. Serial Correlation


With heteroskedasticity, εi is independent across i but not identically distributed across i (the variance of εi varies with i):

Ω = [ σ1²   0     ...   0
      0     σ2²   ...   0
      ...   ...   ...   ...
      0     0     ...   σN² ]

With serial correlation, εt is correlated across t but unconditionally identically distributed across t:

Ω = [ σ²        γ(1)      ...   γ(T − 1)
      γ(1)      σ²        ...   γ(T − 2)
      ...       ...       ...   ...
      γ(T − 1)  γ(T − 2)  ...   σ²       ]

Consequences of Serial Correlation

OLS is inefficient (no longer BLUE), in finite samples and asymptotically. Standard errors are biased and inconsistent; hence t-ratios do not have the t distribution in finite samples and do not have the N(0, 1) distribution asymptotically. Does this sound familiar?

Detection

• Graphical autocorrelation diagnostics
  – Residual plot


– Scatterplot of et against et−τ

12.1 Testing for Serial Correlation

If a model has extracted all the systematic information from the data, then what's left – the residual – should be iid random noise. Hence the usefulness of various residual-based tests of the hypothesis that regression disturbances are white noise.

• Formal autocorrelation tests and analyses
  – Durbin-Watson
  – Breusch-Godfrey
  – Residual correlogram

Liquor Sales Regression on Trend and Seasonals
Graphical Diagnostics - Residual Plot
Graphical Diagnostics - Scatterplot of et against et−1

12.1.1 The Durbin-Watson Test

Formal Tests and Analyses: Durbin-Watson (0.59!)

The Durbin-Watson test (discussed in Chapter 3) is the most popular. Simple paradigm (AR(1)):

yt = x′t β + εt
εt = φεt−1 + vt


vt ∼ iid N(0, σ²)

We want to test H0: φ = 0 against H1: φ ≠ 0. Regress y → X and obtain the residuals et. Then

DW = Σ_{t=2}^{T} (et − et−1)² / Σ_{t=1}^{T} et².

Understanding the Durbin-Watson Statistic

DW = [Σ_{t=2}^{T} (et − et−1)²] / [Σ_{t=1}^{T} et²]
   = [(1/T) Σ_{t=2}^{T} et² + (1/T) Σ_{t=2}^{T} et−1² − 2(1/T) Σ_{t=2}^{T} et et−1] / [(1/T) Σ_{t=1}^{T} et²]

Hence as T → ∞,

DW ≈ (σ² + σ² − 2 cov(et, et−1)) / σ² = 2(1 − corr(et, et−1)) = 2(1 − ρe(1)),

so that DW ∈ [0, 4], DW → 2 as φ → 0, and DW → 0 as φ → 1.
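The Durbin-Watson statistic is simple to compute directly from a set of residuals, and the approximation DW ≈ 2(1 − ρe(1)) is easy to verify. The sketch below is illustrative only and not from the text; it assumes Python with numpy and statsmodels, and the AR(1) "residuals" are simulated purely for demonstration.

```python
import numpy as np
from statsmodels.stats.stattools import durbin_watson

def dw(e):
    """Durbin-Watson statistic computed directly from its definition."""
    e = np.asarray(e, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

rng = np.random.default_rng(6)
T, phi = 500, 0.6
e = np.empty(T)
e[0] = rng.normal()
for t in range(1, T):                          # serially correlated residuals
    e[t] = phi * e[t - 1] + rng.normal()

rho1 = np.sum(e[1:] * e[:-1]) / np.sum(e ** 2)  # first sample autocorrelation
print("DW (by hand):    ", round(dw(e), 3))
print("DW (statsmodels):", round(durbin_watson(e), 3))
print("2*(1 - rho_e(1)):", round(2 * (1 - rho1), 3))   # close for large T
```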

Note that the Durbin-Watson test is effectively based only on the first sample autocorrelation and really only tests whether the first autocorrelation is zero. We say therefore that the Durbin-Watson is a test for first-order serial correlation. In addition, the Durbin-Watson test is not valid if the regressors include lagged dependent variables.[1] (See EPC ***.) On both counts, we'd like more general and flexible approaches for diagnosing serial correlation.

12.1.2 The Breusch-Godfrey Test

The Breusch-Godfrey test is an alternative to the Durbin-Watson test. It's designed to detect pth-order serial correlation, where p is selected by the user, and is also valid in the presence of lagged dependent variables. General AR(p) environment:

yt = x′t β + εt
εt = φ1 εt−1 + ... + φp εt−p + vt
vt ∼ iidN(0, σ²)

We want to test H0: (φ1, ..., φp) = 0 against H1: (φ1, ..., φp) ≠ 0.

• Regress yt → xt and obtain the residuals et
• Regress et → xt, et−1, ..., et−p
• Examine TR². In large samples TR² ∼ χ²p under the null.

Does this sound familiar?

BG for AR(1) Disturbances (TR² = 168.5, p = 0.0000)
BG for AR(4) Disturbances (TR² = 216.7, p = 0.0000)
BG for AR(8) Disturbances (TR² = 219.0, p = 0.0000)

[1] Following standard, if not strictly appropriate, practice, in this book we often report and examine the Durbin-Watson statistic even when lagged dependent variables are included. We always supplement the Durbin-Watson statistic, however, with other diagnostics such as the residual correlogram, which remain valid in the presence of lagged dependent variables, and which almost always produce the same inference as the Durbin-Watson statistic.
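The Breusch-Godfrey auxiliary-regression test is available directly in standard software. The sketch below is illustrative only and not from the text (the text's numbers come from the liquor sales application in EViews); it assumes Python with numpy and statsmodels, and the data are simulated purely for demonstration.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

rng = np.random.default_rng(7)
T = 300
x = rng.normal(size=T)
eps = np.empty(T)
eps[0] = rng.normal()
for t in range(1, T):
    eps[t] = 0.7 * eps[t - 1] + rng.normal()     # AR(1) disturbances
y = 1.0 + 2.0 * x + eps

ols_res = sm.OLS(y, sm.add_constant(x)).fit()

# Breusch-Godfrey LM test against AR(p) serial correlation, p = 1, 4, 8.
for p in (1, 4, 8):
    lm_stat, lm_pval, f_stat, f_pval = acorr_breusch_godfrey(ols_res, nlags=p)
    print(f"BG, AR({p}) alternative: TR^2 = {lm_stat:.1f}, p-value = {lm_pval:.4f}")
```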

12.1.3 The Residual Correlogram

When we earlier introduced the correlogram in Chapter ***, we focused on the case of an observed time series, in which case we showed that the Q statistics are distributed as χ2m . Now, however, we want to assess whether unobserved model disturbances are white noise. To do so, we use the model


residuals, which are estimates of the unobserved disturbances. Because we fit a model to get the residuals, we need to account for the degrees of freedom used. The upshot is that the distribution of the Q statistics under the white noise hypothesis is better approximated by a χ2m−k random variable, where k is the number of parameters estimated.

ρ̂e(τ) = cov̂(et, et−τ) / var̂(et) = [(1/T) Σt et et−τ] / [(1/T) Σt et²]

p̂e(τ) is the coefficient on et−τ in the regression et → c, et−1, ..., et−(τ−1), et−τ.

Approximate 95% "Bartlett bands" under the iid N null: 0 ± 2/√T.

QBP = T Σ_{τ=1}^{m} ρ̂e²(τ) ∼ χ²(m−K) under iid N

QLB = T(T + 2) Σ_{τ=1}^{m} (1/(T − τ)) ρ̂e²(τ) ∼ χ²(m−K)

Residual Correlogram for Trend + Seasonal Model
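A residual correlogram analysis of this sort, including the χ²(m−K) degrees-of-freedom correction, is easy to produce. The sketch below is illustrative only and not from the text; it assumes Python with numpy and statsmodels, the "trend-only" model and data are made up for the illustration, and the model_df argument of acorr_ljungbox (available in recent statsmodels versions) is what implements the degrees-of-freedom correction.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(8)
T = 200
t = np.arange(T, dtype=float)
# Series with trend and seasonality; the fitted model deliberately omits seasonality.
y = 0.5 + 0.01 * t + np.sin(2 * np.pi * t / 12) + rng.normal(scale=0.3, size=T)

X = sm.add_constant(t)                       # trend-only regression
res = sm.OLS(y, X).fit()
k = X.shape[1]                               # number of estimated parameters

# Residual Q statistics, compared with chi^2_{m-k} via model_df = k.
# The neglected seasonality shows up as strong rejections at seasonal lags.
print(acorr_ljungbox(res.resid, lags=24, boxpierce=True, model_df=k))
```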

12.2 Estimation with Serial Correlation

"Correcting for Autocorrelation"

12.2.1 Regression with Serially-Correlated Disturbances

GLS quasi-differencing, Cochrane-Orcutt iteration



• Generalized least squares
  – Transform the data such that the classical conditions hold
• Heteroskedasticity and autocorrelation consistent (HAC) s.e.'s
  – Use OLS, but calculate standard errors robustly

Heteroskedasticity and Autocorrelation Robust Standard Errors

– Perhaps you're ultimately interested in making forecasts, but as a preliminary step you want to do credible inference regarding the contributions of the various x variables to a point prediction based only on the x's. Then use "heteroskedasticity and autocorrelation robust standard errors," also called "HAC standard errors" or "Newey-West standard errors."
– Mechanically, this is just a simple regression option: e.g., in EViews, instead of "ls y,c,x", use "ls(cov=hac) y,c,x". (A sketch of the analogous option in other software follows below.)
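The same one-option mechanics are available outside EViews. The sketch below is illustrative only and not from the text; it assumes Python with numpy and statsmodels, the data are simulated, and the truncation lag for the Newey-West estimator (maxlags=4) is an arbitrary illustrative choice.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
T = 300
x = rng.normal(size=T)
eps = np.empty(T)
eps[0] = rng.normal()
for t in range(1, T):
    eps[t] = 0.6 * eps[t - 1] + rng.normal()   # serially correlated disturbances
y = 1.0 + 2.0 * x + eps
X = sm.add_constant(x)

ols = sm.OLS(y, X).fit()                                         # ordinary s.e.'s
hac = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 4})  # Newey-West s.e.'s

# Same coefficient estimates, different standard errors.
print("beta_hat:     ", np.round(ols.params, 3))
print("OLS s.e.:     ", np.round(ols.bse, 3))
print("HAC (NW) s.e.:", np.round(hac.bse, 3))
```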


Trend + Seasonal Liquor Sales Regression with HAC Standard Errors

Recall Generalized Least Squares (GLS)

Consider the FIC except that we now let ε ∼ N(0, Ω). The GLS estimator is

β̂GLS = (X′Ω⁻¹X)⁻¹X′Ω⁻¹y.

Under the remaining full ideal conditions it is consistent, normally distributed with covariance matrix (X′Ω⁻¹X)⁻¹, and MVUE:

β̂GLS ∼ N(β, (X′Ω⁻¹X)⁻¹)

Infeasible GLS (Illustrated in the Durbin-Watson AR(1) Environment)


yt = x′t β + εt          (1a)
εt = φεt−1 + vt          (1b)
vt ∼ iid N(0, σ²)        (1c)

Suppose that you know φ. Then you could form:

φyt−1 = φx′t−1 β + φεt−1                                 (1a*)
=⇒ (yt − φyt−1) = (x′t − φx′t−1)β + (εt − φεt−1)         (just (1a) − (1a*))
=⇒ yt = φyt−1 + x′t β − x′t−1(φβ) + vt

– Satisfies the classical conditions! Note the restriction.

12.2.2 Serially-Correlated Disturbances vs. Lagged Dependent Variables

The two approaches are closely related. Inclusion of lagged dependent variables is the more general (and simple!) approach, estimated by OLS. So we have two key, closely-related regressions:

yt → xt (with AR(1) disturbances)
yt → yt−1, xt, xt−1 (with WN disturbances and a coefficient restriction)

Feasible GLS

(1) Replace the unknown φ value with an estimate φ̂ and run the OLS regression

(yt − φ̂yt−1) → (x′t − φ̂x′t−1).

(A code sketch of this step appears below.)


Figure 12.1: ***. ***.
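As promised above, here is a sketch of the feasible-GLS idea, iterated in the spirit of Cochrane-Orcutt. It is illustrative only and not the exact routine used by EViews or by the text; it assumes Python with numpy and statsmodels, and the data are simulated purely for demonstration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
T = 400
x = rng.normal(size=T)
eps = np.empty(T)
eps[0] = rng.normal()
for t in range(1, T):
    eps[t] = 0.7 * eps[t - 1] + rng.normal()      # AR(1) disturbances, phi = 0.7
y = 1.0 + 2.0 * x + eps
X = sm.add_constant(x)

beta = sm.OLS(y, X).fit().params                  # step 0: plain OLS estimate of beta
for _ in range(5):                                # iterate beta_hat, phi_hat, beta_hat, ...
    e = y - X @ beta                              # residuals in the ORIGINAL equation
    phi_hat = np.sum(e[1:] * e[:-1]) / np.sum(e[:-1] ** 2)   # AR(1) coefficient of residuals
    y_star = y[1:] - phi_hat * y[:-1]             # quasi-differenced data
    X_star = X[1:] - phi_hat * X[:-1]             # constant column becomes (1 - phi_hat),
    beta = sm.OLS(y_star, X_star).fit().params    # so the reported coefficients stay in
                                                  # the original (untransformed) units
print("phi_hat:", round(phi_hat, 3), " beta_hat:", np.round(beta, 3))
```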

– Iterate if desired: β̂1, φ̂1, β̂2, φ̂2, ...

(2) Run the OLS regression yt → yt−1, xt, xt−1, subject to the constraint noted earlier (or not).
– Generalizes trivially to AR(p): yt → yt−1, ..., yt−p, xt, xt−1, ..., xt−p (select p using the usual AIC, SIC, etc.)

Trend + Seasonal Model with AR(4) Disturbances
Trend + Seasonal Model with AR(4) Disturbances, Residual Plot
Trend + Seasonal Model with AR(4) Disturbances, Residual Correlogram
Trend + Seasonal Model with Four Lags of Dep. Var.

How Did We Arrive at AR(4) Dynamics?


Figure 12.2: ***. ***.

Figure 12.3: ***. ***.


Everything points there:
– Supported by original trend + seasonal residual correlogram
– Supported by DW
– Supported by BG
– Supported by SIC pattern:
  AR(1) = −3.797
  AR(2) = −3.941
  AR(3) = −4.080
  AR(4) = −4.086
  AR(5) = −4.071
  AR(6) = −4.058
  AR(7) = −4.057
  AR(8) = −4.040

Heteroskedasticity-and-Autocorrelation Consistent (HAC) Standard Errors

Using advanced methods, one can obtain consistent standard errors (if not an efficient β̂) under minimal assumptions:
• "HAC standard errors"


Figure 12.4: ***. ***.

• "Robust standard errors"
• "Newey-West standard errors"
• β̂ remains unchanged at its OLS value. Is that a problem?

Trend + Seasonal Model with HAC Standard Errors

12.2.3 A Full Model of Liquor Sales

We’ll model monthly U.S. liquor sales. We graphed a short span of the series in Chapter *** and noted its pronounced seasonality – sales skyrocket during the Christmas season. In Figure ***, we show a longer history of liquor sales, 1968.01 - 1993.12. In Figure *** we show log liquor sales; we take logs to stabilize the variance, which grows over time.2 The variance of log liquor sales is more stable, and it’s the series for which we’ll build models.3 2

[2] The nature of the logarithmic transformation is such that it "compresses" an increasing variance. Make a graph of log(x) as a function of x, and you'll see why.
[3] From this point onward, for brevity we'll simply refer to "liquor sales," but remember that we've taken logs.


Liquor sales dynamics also feature prominent trend and cyclical effects. Liquor sales trend upward, and the trend appears nonlinear in spite of the fact that we’re working in logs. To handle the nonlinear trend, we adopt a quadratic trend model (in logs). The estimation results are in Table 1. The residual plot (Figure ***) shows that the fitted trend increases at a decreasing rate; both the linear and quadratic terms are highly significant. The adjusted R2 is 89%, reflecting the fact that trend is responsible for a large part of the variation in liquor sales. The standard error of the regression is .125; it’s an estimate of the standard deviation of the error we’d expect to make in forecasting liquor sales if we accounted for trend but ignored seasonality and serial correlation. The Durbin-Watson statistic provides no evidence against the hypothesis that the regression disturbance is white noise. The residual plot, however, shows obvious residual seasonality. The DurbinWatson statistic missed it, evidently because it’s not designed to have power against seasonal dynamics.4 The residual plot also suggests that there may be a cycle in the residual, although it’s hard to tell (hard for the Durbin-Watson statistic as well), because the pervasive seasonality swamps the picture and makes it hard to infer much of anything. The residual correlogram (Table 2) and its graph (Figure ***) confirm the importance of the neglected seasonality. The residual sample autocorrelation function has large spikes, far exceeding the Bartlett bands, at the seasonal displacements, 12, 24, and 36. It indicates some cyclical dynamics as well; apart from the seasonal spikes, the residual sample autocorrelation and partial autocorrelation functions oscillate, and the Ljung-Box statistic rejects the white noise null hypothesis even at very small, non-seasonal, displacements. In Table 3 we show the results of regression on quadratic trend and a full set of seasonal dummies. The quadratic trend remains highly significant. The 4

Recall that the Durbin-Watson test is designed to detect simple AR(1) dynamics. It also has the ability to detect other sorts of dynamics, but evidently not those relevant to the present application, which are very different from a simple AR(1).


adjusted R2 rises to 99%, and the standard error of the regression falls to .046, which is an estimate of the standard deviation of the forecast error we expect to make if we account for trend and seasonality but ignore serial correlation. The Durbin-Watson statistic, however, has greater ability to detect serial correlation now that the residual seasonality has been accounted for, and it sounds a loud alarm. The residual plot of Figure *** shows no seasonality, as that’s now picked up by the model, but it confirms the Durbin-Watson’s warning of serial correlation. The residuals are highly persistent, and hence predictable. We show the residual correlogram in tabular and graphical form in Table *** and Figure ***. The residual sample autocorrelations oscillate and decay slowly, and they exceed the Bartlett standard errors throughout. The LjungBox test strongly rejects the white noise null at all displacements. Finally, the residual sample partial autocorrelations cut off at displacement 3. All of this suggests that an AR(3) would provide a good approximation to the disturbance’s Wold representation. In Table 5, then, we report the results of estimating a liquor sales model with quadratic trend, seasonal dummies, and AR(3) disturbances. The R2 is now 100%, and the Durbin-Watson is fine. One inverse root of the AR(3) disturbance process is estimated to be real and close to the unit circle (.95), and the other two inverse roots are a complex conjugate pair farther from the unit circle. The standard error of this regression is an estimate of the standard deviation of the forecast error we’d expect to make after modeling the residual serial correlation, as we’ve now done; that is, it’s an estimate of the standard deviation of v.5 It’s a very small .027, roughly half that obtained when we ignored serial correlation. We show the residual plot in Figure *** and the residual correlogram in Table *** and Figure ***. The residual plot reveals no patterns; instead, the 5

Recall that v is the innovation that drives the AR process for the regression disturbance, ε.


residuals look like white noise, as they should. The residual sample autocorrelations and partial autocorrelations display no patterns and are mostly inside the Bartlett bands. The Ljung-Box statistics also look good for small and moderate displacements, although their p-values decrease for longer displacements. All things considered, the quadratic trend, seasonal dummy, AR(3) specification seems tentatively adequate. We also perform a number of additional checks. In Figure ***, we show a histogram and normality test applied to the residuals. The histogram looks symmetric, as confirmed by the skewness near zero. The residual kurtosis is a bit higher than three and causes the Jarque-Bera test to reject the normality hypothesis with a p-value of .02, but the residuals nevertheless appear to be fairly well approximated by a normal distribution, even if they may have slightly fatter tails.
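The liquor sales data themselves are not reproduced here, so the sketch below builds a synthetic stand-in series with the same ingredients (quadratic trend, monthly seasonality, and an AR(3) disturbance) and shows one way to estimate such a specification jointly in Python via statsmodels' SARIMAX. It is an illustration of the model structure only, not a replication of the text's EViews results; all parameter values are made up, and the final parameter entries are assumed to follow statsmodels' ordering (AR coefficients, then the innovation variance).

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic stand-in for log liquor sales: quadratic trend + monthly seasonality
# + stationary AR(3) disturbance (coefficients chosen to be stationary).
rng = np.random.default_rng(11)
T = 312                                       # 26 years of monthly data, as in 1968.01-1993.12
t = np.arange(T, dtype=float)
month = np.arange(T) % 12
eps = np.zeros(T)
for i in range(3, T):
    eps[i] = 0.5 * eps[i-1] + 0.2 * eps[i-2] + 0.1 * eps[i-3] + rng.normal(scale=0.03)
y = 6 + 0.004 * t - 0.000004 * t**2 + 0.10 * np.sin(2 * np.pi * month / 12) + eps

# Regressors: quadratic trend plus a full set of monthly dummies (no separate
# constant, so the dummies act as the seasonal intercepts).
dummies = (month[:, None] == np.arange(12)).astype(float)
exog = np.column_stack([t, t**2, dummies])

# Quadratic trend + seasonals with AR(3) disturbances, estimated jointly.
res = SARIMAX(y, exog=exog, order=(3, 0, 0), trend="n").fit(disp=False)
print(np.round(res.params[-4:], 3))           # AR(3) coefficients and innovation variance
```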

12.3 Exercises, Problems and Complements

1. (Serially correlated disturbances vs. lagged dependent variables) Estimate the quadratic trend model for log liquor sales with seasonal dummies and three lags of the dependent variable included directly. Discuss your results and compare them to those we obtained when we instead allowed for AR(3) disturbances in the regression. 2. (Liquor sales model selection using AIC and SIC) Use the AIC and SIC to assess the necessity and desirability of including trend and seasonal components in the liquor sales model. a. Display the AIC and SIC for a variety of specifications of trend and seasonality. Which would you select using the AIC? SIC? Do the AIC and SIC select the same model? If not, which do you prefer?


b. Discuss the estimation results and residual plot from your preferred model, and perform a correlogram analysis of the residuals. Discuss, in particular, the patterns of the sample autocorrelations and partial autocorrelations, and their statistical significance. c. How, if at all, are your results different from those reported in the text? Are the differences important? Why or why not? 3. (Diagnostic checking of model residuals) The Durbin-Watson test is invalid in the presence of lagged dependent variables. Breusch-Godfrey remains valid. a. Durbin’s h test is an alternative to the Durbin-Watson test. As with the Durbin-Watson test, it’s designed to detect first-order serial correlation, but it’s valid in the presence of lagged dependent variables. Do some background reading as well on Durbin’s h test and report what you learned. b. Which do you think is likely to be most useful to you in assessing the properties of residuals from time-series models: the residual correlogram, Durbin’s h test, or the Breusch-Godfrey test? Why? 4. (Assessing the adequacy of the liquor sales model trend specification) Critique the liquor sales model that we adopted (log liquor sales with quadratic trend, seasonal dummies, and AR(3) disturbances). a. If the trend is not a good approximation to the actual trend in the series, would it greatly affect short-run forecasts? Long-run forecasts? b. Fit and assess the adequacy of a model with log-linear trend. c. How might you fit and assess the adequacy of a broken linear trend? How might you decide on the location of the break point?


d. Recall our assertion that best practice requires using a χ2m−k distribution rather than a χ2m distribution to assess the significance of Q-statistics for model residuals, where m is the number of autocorrelations included in the Box-Pierce statistic and k is the number of parameters estimated. In several places in this chapter, we failed to heed this advice when evaluating the liquor sales model. If we were instead to compare the residual Q-statistic p-values to a χ2m−k distribution, how, if at all, would our assessment of the model’s adequacy change? e. Return to the log-quadratic trend model with seasonal dummies, allow for AR(p) disturbances, and do a systematic selection of p and q using the AIC and SIC. Do AIC and SIC select the same model? If not, which do you prefer? If your preferred model differs from the AR(3) that we used, replicate the analysis in the text using your preferred model, and discuss your results. 5. Fixed X and lagged dependent variables. In section ?? we claimed that it’s logically impossible to maintain the assumption of fixed X in the presence of a lagged dependent variable. Why?

12.4 Notes

The idea that regression models with serially correlated disturbances are more restrictive than other sorts of transfer function models has a long history in econometrics and engineering and is highlighted in a memorably-titled paper, ”Serial Correlation as a Convenient Simplification, not a Nuisance,” by Hendry and Mizon (1978)***.


Chapter 13

Heteroskedasticity in Time Series

Recall the full ideal conditions. The celebrated Wold decomposition makes clear that every covariance stationary series may be viewed as ultimately driven by underlying weak white noise innovations. Hence it is no surprise that every model discussed in this book is driven by underlying white noise. To take a simple example, if the series yt follows an AR(1) process, then yt = φyt−1 + εt, where εt is white noise. In some situations it is inconsequential whether εt is weak or strong white noise, that is, whether εt is independent, as opposed to merely serially uncorrelated. Hence, to simplify matters we sometimes assume strong white noise, εt ∼ iid(0, σ²). Throughout this book, we have thus far taken that approach, sometimes explicitly and sometimes implicitly. When εt is independent, there is no distinction between the unconditional distribution of εt and the distribution of εt conditional upon its past, by definition of independence. Hence σ² is both the unconditional and conditional variance of εt. The Wold decomposition, however, does not require that εt be serially independent; rather it requires only that εt be serially uncorrelated. If εt is dependent, then its unconditional and conditional distributions will differ. We denote the unconditional innovation distribution by εt ∼ (0, σ²). We are particularly interested in conditional dynamics characterized by heteroskedasticity, or time-varying volatility. Hence we denote the conditional


distribution by εt |Ωt−1 ∼ (0, σt2 ), where Ωt−1 = εt−1 , εt−2 , .... The conditional variance σt2 will in general evolve as Ωt−1 evolves, which focuses attention on the possibility of time-varying innovation volatility.1 Allowing for time-varying volatility is crucially important in certain economic and financial contexts. The volatility of financial asset returns, for example, is often time-varying. That is, markets are sometimes tranquil and sometimes turbulent, as can readily be seen by examining the time series of stock market returns in Figure 1, to which we shall return in detail. Timevarying volatility has important implications for financial risk management, asset allocation and asset pricing, and it has therefore become a central part of the emerging field of financial econometrics. Quite apart from financial applications, however, time-varying volatility also has direct implications for interval and density forecasting in a wide variety of applications: correct confidence intervals and density forecasts in the presence of volatility fluctuations require time-varying confidence interval widths and time-varying density forecast spreads. The models that we have considered thus far, however, do not allow for that possibility. In this chapter we do so.

13.1 The Basic ARCH Process

Consider the general linear process,

yt = B(L)εt
B(L) = Σ_{i=0}^{∞} bi L^i
Σ_{i=0}^{∞} bi² < ∞

In principle, aspects of the conditional distribution other than the variance, such as conditional skewness, could also fluctuate. Conditional variance fluctuations are by far the most important in practice, however, so we assume that fluctuations in the conditional distribution of ε are due exclusively to fluctuations in σt2 .


b0 = 1
εt ∼ WN(0, σ²).

We will work with various cases of this process. Suppose first that εt is strong white noise, εt ∼ iid(0, σ²). Let us review some results already discussed for the general linear process, which will prove useful in what follows. The unconditional mean and variance of y are

E(yt) = 0 and E(yt²) = σ² Σ_{i=0}^{∞} bi²,

which are both time-invariant, as must be the case under covariance stationarity. However, the conditional mean of y is time-varying:

E(yt|Ωt−1) = Σ_{i=1}^{∞} bi εt−i,

where the information set is Ωt−1 = εt−1 , εt−2 , .... The ability of the general linear process to capture covariance stationary conditional mean dynamics is the source of its power. Because the volatility of many economic time series varies, one would hope that the general linear process could capture conditional variance dynamics as well, but such is not the case for the model as presently specified: the conditional variance of y is constant at  E (yt − E(yt |Ωt−1 ))2 |Ωt−1 = σ 2 .


This potentially unfortunate restriction manifests itself in the properties of the h-step-ahead conditional prediction error variance. The minimum mean squared error forecast is the conditional mean,

E(y_{t+h} | Ω_t) = \sum_{i=0}^{\infty} b_{h+i} ε_{t−i},

and so the associated prediction error is

y_{t+h} − E(y_{t+h} | Ω_t) = \sum_{i=0}^{h−1} b_i ε_{t+h−i},

which has a conditional prediction error variance of

E[(y_{t+h} − E(y_{t+h} | Ω_t))^2 | Ω_t] = σ^2 \sum_{i=0}^{h−1} b_i^2.

The conditional prediction error variance is different from the unconditional variance, but it is not time-varying: it depends only on h, not on the conditioning information Ω_t. In the process as presently specified, the conditional variance is not allowed to adapt to readily available and potentially useful conditioning information.

So much for the general linear process with iid innovations. Now we extend it by allowing ε_t to be weak rather than strong white noise, with a particular nonlinear dependence structure. In particular, suppose that, as before,

y_t = B(L) ε_t

B(L) = \sum_{i=0}^{\infty} b_i L^i, \quad \sum_{i=0}^{\infty} b_i^2 < \infty, \quad b_0 = 1,

but now suppose as well that

ε_t | Ω_{t−1} ∼ N(0, σ_t^2)

σ_t^2 = ω + γ(L) ε_t^2

ω > 0, \quad γ(L) = \sum_{i=1}^{p} γ_i L^i, \quad γ_i ≥ 0 \text{ for all } i, \quad \sum_{i=1}^{p} γ_i < 1.

Note that we parameterize the innovation process in terms of its conditional density, ε_t | Ω_{t−1}, which we assume to be normal with a zero conditional mean and a conditional variance that depends linearly on p past squared innovations. ε_t is serially uncorrelated but not serially independent, because the current conditional variance σ_t^2 depends on the history of ε_t; in particular, σ_t^2 depends on the previous p values of ε_t via the distributed lag γ(L) ε_t^2. The stated regularity conditions are sufficient to ensure that the conditional and unconditional variances are positive and finite, and that y_t is covariance stationary.

The unconditional moments of ε_t are constant and are given by

E(ε_t) = 0 and E(ε_t − E(ε_t))^2 = ω / (1 − \sum_{i=1}^{p} γ_i).

The important result is not the particular formulae for the unconditional mean and variance, but the fact that they are fixed, as required for covariance stationarity. As for the conditional moments of ε_t, its conditional variance


is time-varying,

E[(ε_t − E(ε_t | Ω_{t−1}))^2 | Ω_{t−1}] = ω + γ(L) ε_t^2,

and of course its conditional mean is zero by construction. Assembling the results to move to the unconditional and conditional moments of y as opposed to ε_t, it is easy to see that both the unconditional mean and variance of y are constant (again, as required by covariance stationarity), but that both the conditional mean and variance are time-varying:

E(y_t | Ω_{t−1}) = \sum_{i=1}^{\infty} b_i ε_{t−i}

E[(y_t − E(y_t | Ω_{t−1}))^2 | Ω_{t−1}] = ω + γ(L) ε_t^2.

Thus, we now treat conditional mean and variance dynamics in a symmetric fashion by allowing for movement in each, as determined by the evolving information set Ω_{t−1}.

In the above development, ε_t is called an ARCH(p) process, and the full model sketched is an infinite-ordered moving average with ARCH(p) innovations, where ARCH stands for autoregressive conditional heteroskedasticity. Clearly ε_t is conditionally heteroskedastic, because its conditional variance fluctuates. There are many models of conditional heteroskedasticity, but most are designed for cross-sectional contexts, such as when the variance of a cross-sectional regression disturbance depends on one or more of the regressors. (The variance of the disturbance in a model of household expenditure, for example, may depend on income.) However, heteroskedasticity is often present as well in the time-series contexts relevant for forecasting, particularly in financial markets. The particular conditional variance function associated with the ARCH process,

σ_t^2 = ω + γ(L) ε_t^2,


is tailor-made for time-series environments, in which one often sees volatility clustering, such that large changes tend to be followed by large changes, and small by small, of either sign. That is, one may see persistence, or serial correlation, in volatility dynamics (conditional variance dynamics), quite apart from persistence (or lack thereof) in conditional mean dynamics. The ARCH process approximates volatility dynamics in an autoregressive fashion; hence the name autoregressive conditional heteroskedasticity. To understand why, note that the ARCH conditional variance function links today's conditional variance positively to earlier lagged ε_t^2's, so that large ε_t^2's in the recent past produce a large conditional variance today, thereby increasing the likelihood of a large ε_t^2 today. Hence ARCH processes are to conditional variance dynamics precisely as standard autoregressive processes are to conditional mean dynamics.

The ARCH process may be viewed as a model for the disturbance in a broader model, as was the case when we introduced it above as a model for the innovation in a general linear process. Alternatively, if there are no conditional mean dynamics of interest, the ARCH process may be used for an observed series. It turns out that financial asset returns often have negligible conditional mean dynamics but strong conditional variance dynamics; hence in much of what follows we will view the ARCH process as a model for an observed series, which for convenience we will sometimes call a “return.”
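To make the mechanics concrete, here is a minimal simulation sketch in Python (with illustrative parameter values, not estimates from the text's data) of an ARCH(1) "return": the conditional variance responds to the lagged squared return, so large shocks cluster, while the unconditional variance stays fixed at ω/(1 − γ).

```python
import numpy as np

# Minimal sketch: simulate an ARCH(1) "return", r_t = sigma_t * v_t,
# with sigma_t^2 = omega + gamma * r_{t-1}^2 and v_t ~ iid N(0,1).
# Parameter values are illustrative only.
rng = np.random.default_rng(0)
omega, gamma = 0.2, 0.7            # omega > 0, 0 <= gamma < 1 for covariance stationarity
T = 1000
r = np.zeros(T)
sigma2 = np.zeros(T)
sigma2[0] = omega / (1.0 - gamma)  # start at the unconditional variance
for t in range(1, T):
    sigma2[t] = omega + gamma * r[t - 1] ** 2        # conditional variance update
    r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

# The unconditional variance is fixed, but volatility clusters and tails are fat:
print("sample variance:", r.var(), "theoretical:", omega / (1 - gamma))
print("sample kurtosis:", ((r - r.mean()) ** 4).mean() / r.var() ** 2)  # > 3: leptokurtic
```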

13.2 The GARCH Process

Thus far we have used an ARCH(p) process to model conditional variance dynamics. We now introduce the GARCH(p,q) process (GARCH stands for generalized ARCH), which we shall subsequently use almost exclusively. As we shall see, GARCH is to ARCH (for conditional variance dynamics) as ARMA is to AR (for conditional mean dynamics).


The pure GARCH(p,q) process is given by

y_t = ε_t

ε_t | Ω_{t−1} ∼ N(0, σ_t^2)

σ_t^2 = ω + α(L) ε_t^2 + β(L) σ_t^2

α(L) = \sum_{i=1}^{p} α_i L^i, \quad β(L) = \sum_{i=1}^{q} β_i L^i

ω > 0, \quad α_i ≥ 0, \quad β_i ≥ 0, \quad \sum α_i + \sum β_i < 1.

(By “pure” we mean that we have allowed only for conditional variance dynamics, by setting y_t = ε_t. We could of course also introduce conditional mean dynamics, but doing so would only clutter the discussion while adding nothing new.)

The stated conditions ensure that the conditional variance is positive and that y_t is covariance stationary. Back substitution on σ_t^2 reveals that the GARCH(p,q) process can be represented as a restricted infinite-ordered ARCH process,

σ_t^2 = ω / (1 − \sum β_i) + [α(L) / (1 − β(L))] ε_t^2 = ω / (1 − \sum β_i) + \sum_{i=1}^{\infty} δ_i ε_{t−i}^2,

which precisely parallels writing an ARMA process as a restricted infinite-ordered AR. Hence the GARCH(p,q) process is a parsimonious approximation to what may truly be infinite-ordered ARCH volatility dynamics.

It is important to note a number of special cases of the GARCH(p,q) process. First, of course, the ARCH(p) process emerges when β(L) = 0. Second, if both α(L) and β(L) are zero, then the process is simply iid Gaussian noise with variance ω. Hence, although ARCH and GARCH processes may at first appear unfamiliar and potentially ad hoc, they are in fact much more general than standard iid white noise, which emerges as a potentially highly-restrictive special case.

Here we highlight some important properties of GARCH processes. All of the discussion of course applies as well to ARCH processes, which are special cases of GARCH processes.

First, consider the second-order moment structure of GARCH processes. The first two unconditional moments of the pure GARCH process are constant and given by

E(ε_t) = 0 and E(ε_t − E(ε_t))^2 = ω / (1 − \sum α_i − \sum β_i),

while the conditional moments are

E(ε_t | Ω_{t−1}) = 0

and of course

E[(ε_t − E(ε_t | Ω_{t−1}))^2 | Ω_{t−1}] = ω + α(L) ε_t^2 + β(L) σ_t^2.

In particular, the unconditional variance is fixed, as must be the case under covariance stationarity, while the conditional variance is time-varying. It is no surprise that the conditional variance is time-varying – the GARCH process was of course designed to allow for a time-varying conditional variance – but it is certainly worth emphasizing: the conditional variance is itself a serially correlated time series process.

Second, consider the unconditional higher-order (third and fourth) moment structure of GARCH processes. Real-world financial asset returns, which are often modeled as GARCH processes, are typically unconditionally symmetric but leptokurtic (that is, more peaked in the center and with fatter tails than a normal distribution). It turns out that the implied unconditional distribution of the conditionally Gaussian GARCH process introduced above is also symmetric and leptokurtic. The unconditional leptokurtosis of GARCH processes follows from the persistence in conditional variance, which produces clusters of “low volatility” and “high volatility” episodes associated with observations in the center and in the tails of the unconditional distribution, respectively. Both the unconditional symmetry and unconditional leptokurtosis agree nicely with a variety of financial market data.

Third, consider the conditional prediction error variance of a GARCH process, and its dependence on the conditioning information set. Because the conditional variance of a GARCH process is a serially correlated random variable, it is of interest to examine the optimal h-step-ahead prediction, prediction error, and conditional prediction error variance. Immediately, the h-step-ahead prediction is E(ε_{t+h} | Ω_t) = 0, and the corresponding prediction error is ε_{t+h} − E(ε_{t+h} | Ω_t) = ε_{t+h}. This implies that the conditional variance of the prediction error,

E[(ε_{t+h} − E(ε_{t+h} | Ω_t))^2 | Ω_t] = E(ε_{t+h}^2 | Ω_t),

depends on both h and Ω_t, because of the dynamics in the conditional variance. Simple calculations reveal that the expression for the GARCH(p,q) process is given by

E(ε_{t+h}^2 | Ω_t) = ω \sum_{i=0}^{h−2} (α(1) + β(1))^i + (α(1) + β(1))^{h−1} σ_{t+1}^2.

In the limit, this conditional variance reduces to the unconditional variance of the process,

lim_{h→∞} E(ε_{t+h}^2 | Ω_t) = ω / (1 − α(1) − β(1)).

For finite h, the dependence of the prediction error variance on the current information set Ω_t can be exploited to improve interval and density forecasts.

Fourth, consider the relationship between ε_t^2 and σ_t^2. The relationship is important: GARCH dynamics in σ_t^2 turn out to introduce ARMA dynamics in ε_t^2. (Put differently, the GARCH process approximates conditional variance dynamics in the same way that an ARMA process approximates conditional mean dynamics.) More precisely, if ε_t is a GARCH(p,q) process, then ε_t^2 has the ARMA representation

ε_t^2 = ω + (α(L) + β(L)) ε_t^2 − β(L) ν_t + ν_t,

where ν_t = ε_t^2 − σ_t^2 is the difference between the squared innovation and the conditional variance at time t. To see this, note that if ε_t is GARCH(p,q), then

σ_t^2 = ω + α(L) ε_t^2 + β(L) σ_t^2.

Adding and subtracting β(L) ε_t^2 from the right side gives

σ_t^2 = ω + α(L) ε_t^2 + β(L) ε_t^2 − β(L) ε_t^2 + β(L) σ_t^2 = ω + (α(L) + β(L)) ε_t^2 − β(L)(ε_t^2 − σ_t^2).

Adding ε_t^2 to each side then gives

σ_t^2 + ε_t^2 = ω + (α(L) + β(L)) ε_t^2 − β(L)(ε_t^2 − σ_t^2) + ε_t^2,

so that

ε_t^2 = ω + (α(L) + β(L)) ε_t^2 − β(L)(ε_t^2 − σ_t^2) + (ε_t^2 − σ_t^2) = ω + (α(L) + β(L)) ε_t^2 − β(L) ν_t + ν_t.

Thus, ε_t^2 is an ARMA(max(p,q), p) process with innovation ν_t, where ν_t ∈ [−σ_t^2, ∞). ε_t^2 is covariance stationary if the roots of α(L) + β(L) = 1 are outside the unit circle.

Fifth, consider in greater depth the similarities and differences between σ_t^2 and ε_t^2. It is worth studying closely the key expression,

ν_t = ε_t^2 − σ_t^2,

which makes clear that ε_t^2 is effectively a “proxy” for σ_t^2, behaving similarly but not identically, with ν_t being the difference, or error. In particular, ε_t^2 is a noisy proxy: ε_t^2 is an unbiased estimator of σ_t^2, but it is more volatile. It seems reasonable, then, that reconciling the noisy proxy ε_t^2 and the true underlying σ_t^2 should involve some sort of smoothing of ε_t^2. Indeed, in the GARCH(1,1) case σ_t^2 is precisely obtained by exponentially smoothing ε_t^2. To see why, consider the exponential smoothing recursion, which gives the current smoothed value as a convex combination of the current unsmoothed value and the lagged smoothed value,

\bar{ε}_t^2 = γ ε_t^2 + (1 − γ) \bar{ε}_{t−1}^2.

Back substitution yields an expression for the current smoothed value as an exponentially weighted moving average of past actual values:

\bar{ε}_t^2 = \sum_j w_j ε_{t−j}^2,

where w_j = γ(1 − γ)^j. Now compare this result to the GARCH(1,1) model, which gives the current volatility as a linear combination of lagged volatility and the lagged squared return,

σ_t^2 = ω + α ε_{t−1}^2 + β σ_{t−1}^2.

Back substitution yields

σ_t^2 = ω / (1 − β) + α \sum_{j=1}^{\infty} β^{j−1} ε_{t−j}^2,

so that the GARCH(1,1) process gives current volatility as an exponentially weighted moving average of past squared returns.

Sixth, consider the temporal aggregation of GARCH processes. By temporal aggregation we mean aggregation over time, as for example when we convert a series of daily returns to weekly returns, and then to monthly returns, then quarterly, and so on. It turns out that convergence toward normality under temporal aggregation is a feature of real-world financial asset returns. That is, although high-frequency (e.g., daily) returns tend to be fat-tailed relative to the normal, the fat tails tend to get thinner under temporal aggregation, and normality is approached. Convergence to normality under temporal aggregation is also a property of covariance stationary GARCH processes. The key insight is that a low-frequency change is simply the sum of the corresponding high-frequency changes; for example, an annual change is the sum of the internal quarterly changes, each of which is the sum of its internal monthly changes, and so on. Thus, if a Gaussian central limit theorem can be invoked for sums of GARCH processes, convergence to normality under temporal aggregation is assured. Such theorems can be invoked if the process is covariance stationary.

In closing this section, it is worth noting that the symmetry and leptokurtosis of the unconditional distribution of the GARCH process, as well as the disappearance of the leptokurtosis under temporal aggregation, provide nice independent confirmation of the accuracy of GARCH approximations to asset return volatility dynamics, insofar as GARCH was certainly not invented with the intent of explaining those features of financial asset return data. On the contrary, the unconditional distributional results emerged as unanticipated byproducts of allowing for conditional variance dynamics, thereby providing a unified explanation of phenomena that were previously believed unrelated.
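The following minimal sketch (Python, illustrative parameters, not estimates) simulates a pure conditionally Gaussian GARCH(1,1) process and checks several of the properties just discussed: the unconditional variance is ω/(1 − α − β), the simulated returns are unconditionally leptokurtic, and temporally aggregated returns are closer to normal. The aggregation into 20-period sums is an arbitrary choice made only for illustration.

```python
import numpy as np

# Minimal sketch: simulate a pure conditionally Gaussian GARCH(1,1) return,
# r_t = sigma_t v_t, v_t ~ iid N(0,1), sigma_t^2 = omega + alpha r_{t-1}^2 + beta sigma_{t-1}^2.
rng = np.random.default_rng(1)
omega, alpha, beta = 0.05, 0.10, 0.85             # alpha + beta < 1: covariance stationary
T = 20000
r = np.zeros(T)
sigma2 = np.full(T, omega / (1 - alpha - beta))   # start at the unconditional variance
for t in range(1, T):
    sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

kurt = lambda x: ((x - x.mean()) ** 4).mean() / x.var() ** 2
print("unconditional variance:", r.var(), "vs", omega / (1 - alpha - beta))
print("kurtosis, high-frequency returns:", kurt(r))                       # > 3: leptokurtic
print("kurtosis, 20-period aggregates:", kurt(r.reshape(-1, 20).sum(axis=1)))  # closer to 3
```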

13.3 Extensions of ARCH and GARCH Models

There are numerous extensions of the basic GARCH model. In this section, we highlight several of the most important. One important class of extensions allows for asymmetric response; that is, it allows for last period's squared return to have different effects on today's volatility, depending on its sign. (In the GARCH model studied thus far, only the square of last period's return affects the current conditional variance; hence its sign is irrelevant.) Asymmetric response is often present, for example, in stock returns.

13.3.1 Asymmetric Response

The simplest GARCH model allowing for asymmetric response is the threshold GARCH, or TGARCH, model. (For expositional convenience, we will introduce all GARCH extensions in the context of GARCH(1,1), which is by far the most important case for practical applications. Extensions to the GARCH(p,q) case are immediate but notationally cumbersome.) We replace the standard GARCH conditional variance function,

σ_t^2 = ω + α ε_{t−1}^2 + β σ_{t−1}^2,

with

σ_t^2 = ω + α ε_{t−1}^2 + γ ε_{t−1}^2 D_{t−1} + β σ_{t−1}^2,

where D_t = 1 if ε_t < 0, and D_t = 0 otherwise.

The dummy variable D keeps track of whether the lagged return is positive or negative. When the lagged return is positive (good news yesterday), D = 0, so the effect of the lagged squared return on the current conditional variance is simply α. In contrast, when the lagged return is negative (bad news yesterday), D = 1, so the effect of the lagged squared return on the current conditional variance is α + γ. If γ = 0, the response is symmetric and we have a standard GARCH model, but if γ ≠ 0 we have asymmetric response of volatility to news. Allowance for asymmetric response has proved useful for modeling “leverage effects” in stock returns, which occur when γ > 0. (Negative shocks appear to contribute more to stock market volatility than do positive shocks. This is called the leverage effect, because a negative shock to the market value of equity increases the aggregate debt/equity ratio (other things the same), thereby increasing leverage.)

Asymmetric response may also be introduced via the exponential GARCH (EGARCH) model,

ln(σ_t^2) = ω + α |ε_{t−1}/σ_{t−1}| + γ (ε_{t−1}/σ_{t−1}) + β ln(σ_{t−1}^2).

Note that volatility is driven by both size and sign of shocks; hence the model allows for an asymmetric response depending on the sign of news. (The absolute “size” of news is captured by |ε_{t−1}/σ_{t−1}|, and the sign is captured by ε_{t−1}/σ_{t−1}.) The log specification also ensures that the conditional variance is automatically positive, because σ_t^2 is obtained by exponentiating ln(σ_t^2); hence the name “exponential GARCH.”
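As a concrete illustration of the threshold idea, the following sketch (Python, with assumed, purely illustrative parameter values) filters a return series through the TGARCH(1,1) recursion; with γ > 0, a negative lagged return raises the current conditional variance by more than a positive one of the same size.

```python
import numpy as np

# Minimal sketch: TGARCH(1,1) volatility recursion,
# sigma_t^2 = omega + alpha*r_{t-1}^2 + gamma*r_{t-1}^2*D_{t-1} + beta*sigma_{t-1}^2,
# where D_{t-1} = 1 if r_{t-1} < 0 (bad news yesterday).
def tgarch_filter(r, omega=0.05, alpha=0.05, gamma=0.10, beta=0.85):
    """Given a return series r, return the implied conditional variance path."""
    sigma2 = np.empty(len(r))
    sigma2[0] = r.var()                          # crude initialization
    for t in range(1, len(r)):
        d = 1.0 if r[t - 1] < 0 else 0.0         # sign dummy
        sigma2[t] = omega + (alpha + gamma * d) * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

# Usage: feed in observed returns; the variance path reacts more to negative returns.
rng = np.random.default_rng(2)
r = 0.01 * rng.standard_normal(500)              # placeholder returns
print(tgarch_filter(r)[:5])
```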

13.3.2 Exogenous Variables in the Volatility Function

Just as ARMA models of conditional mean dynamics can be augmented to include the effects of exogenous variables, so too can GARCH models of conditional variance dynamics. We simply modify the standard GARCH volatility function in the obvious way, writing

σ_t^2 = ω + α ε_{t−1}^2 + β σ_{t−1}^2 + γ x_t,

where γ is a parameter and x is a positive exogenous variable. (Extension to allow multiple exogenous variables is straightforward.) Allowance for exogenous variables in the conditional variance function is sometimes useful. Financial market volume, for example, often helps to explain market volatility.

13.3.3 Regression with GARCH Disturbances and GARCH-M

Just as ARMA models may be viewed as models for disturbances in regressions, so too may GARCH models. We write

y_t = β_0 + β_1 x_t + ε_t

ε_t | Ω_{t−1} ∼ N(0, σ_t^2)

σ_t^2 = ω + α ε_{t−1}^2 + β σ_{t−1}^2.

Consider now a regression model with GARCH disturbances of the usual sort, with one additional twist: the conditional variance enters as a regressor, thereby affecting the conditional mean. We write

y_t = β_0 + β_1 x_t + γ σ_t^2 + ε_t

ε_t | Ω_{t−1} ∼ N(0, σ_t^2)

σ_t^2 = ω + α ε_{t−1}^2 + β σ_{t−1}^2.

This model, which is a special case of the general regression model with GARCH disturbances, is called GARCH-in-Mean (GARCH-M). It is sometimes useful in modeling the relationship between risks and returns on financial assets when risk, as measured by the conditional variance, varies. (One may also allow the conditional standard deviation, rather than the conditional variance, to enter the regression.)

13.3.4 Component GARCH

Note that the standard GARCH(1,1) process may be written as

(σ_t^2 − \bar{ω}) = α(ε_{t−1}^2 − \bar{ω}) + β(σ_{t−1}^2 − \bar{ω}),

where \bar{ω} = ω / (1 − α − β) is the unconditional variance. (\bar{ω} is sometimes called the “long-run” variance, referring to the fact that the unconditional variance is the long-run average of the conditional variance.) This is precisely the GARCH(1,1) model introduced earlier, rewritten in a slightly different but equivalent form. In this model, short-run volatility dynamics are governed by the parameters α and β, and there are no long-run volatility dynamics, because \bar{ω} is constant. Sometimes we might want to allow for both long-run and short-run, or persistent and transient, volatility dynamics in addition to the short-run volatility dynamics already incorporated. To do this, we replace \bar{ω} with a time-varying process, yielding

(σ_t^2 − q_t) = α(ε_{t−1}^2 − q_{t−1}) + β(σ_{t−1}^2 − q_{t−1}),

where the time-varying long-run volatility, q_t, is given by

q_t = ω + ρ(q_{t−1} − ω) + φ(ε_{t−1}^2 − σ_{t−1}^2).

This “component GARCH” model effectively lets us decompose volatility dynamics into long-run (persistent) and short-run (transitory) components, which sometimes yields useful insights. The persistent dynamics are governed by ρ, and the transitory dynamics are governed by α and β. (It turns out, moreover, that under suitable conditions the component GARCH model introduced here is covariance stationary, and equivalent to a GARCH(2,2) process subject to certain nonlinear restrictions on its parameters.)
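A minimal sketch of the component GARCH recursions, with assumed, purely illustrative parameter values: the long-run component q_t moves slowly, while σ_t^2 − q_t captures transitory swings.

```python
import numpy as np

# Minimal sketch: component GARCH filter applied to a return series r.
def component_garch_filter(r, omega=0.04, alpha=0.05, beta=0.85, rho=0.99, phi=0.03):
    n = len(r)
    sigma2 = np.full(n, r.var())   # total conditional variance
    q = np.full(n, r.var())        # long-run (persistent) component
    for t in range(1, n):
        q[t] = omega + rho * (q[t - 1] - omega) + phi * (r[t - 1] ** 2 - sigma2[t - 1])
        sigma2[t] = q[t] + alpha * (r[t - 1] ** 2 - q[t - 1]) + beta * (sigma2[t - 1] - q[t - 1])
    return sigma2, q

rng = np.random.default_rng(3)
sigma2, q = component_garch_filter(rng.standard_normal(500))  # placeholder returns
print(sigma2[:3], q[:3])
```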


13.3.5 Mixing and Matching

In closing this section, we note that the different variations and extensions of the GARCH process may of course be mixed. As an example, consider the following conditional variance function:

(σ_t^2 − q_t) = α(ε_{t−1}^2 − q_{t−1}) + γ(ε_{t−1}^2 − q_{t−1}) D_{t−1} + β(σ_{t−1}^2 − q_{t−1}) + θ x_t.

This is a component GARCH specification, generalized to allow for asymmetric response of volatility to news via the sign dummy D, as well as effects from the exogenous variable x.

13.4 Estimating, Forecasting and Diagnosing GARCH Models

Recall that the likelihood function is the joint density function of the data, viewed as a function of the model parameters, and that maximum likelihood estimation finds the parameter values that maximize the likelihood function. This makes good sense: we choose those parameter values that maximize the likelihood of obtaining the data that were actually obtained. It turns out that construction and evaluation of the likelihood function is easily done for GARCH models, and maximum likelihood has emerged as the estimation method of choice. (The precise form of the likelihood is complicated, and we will not give an explicit expression here, but it may be found in various of the surveys mentioned in the Notes at the end of the chapter.) No closed-form expression exists for the GARCH maximum likelihood estimator, so we must maximize the likelihood numerically. (Routines for maximizing the GARCH likelihood are available in a number of modern software packages, such as EViews. As with any numerical optimization, care must be taken with startup values and convergence criteria to help ensure convergence to a global, as opposed to merely local, maximum.)

Construction of optimal forecasts of GARCH processes is simple. In fact, we derived the key formula earlier but did not comment extensively on it. Recall, in particular, that

σ_{t+h,t}^2 = E(ε_{t+h}^2 | Ω_t) = ω \sum_{i=0}^{h−2} [α(1) + β(1)]^i + [α(1) + β(1)]^{h−1} σ_{t+1}^2.


In words, the optimal h-step-ahead forecast is proportional to the optimal 1-step-ahead forecast. The optimal 1-step-ahead forecast, moreover, is easily calculated: all of the determinants of σ_{t+1}^2 are lagged by at least one period, so that there is no problem of forecasting the right-hand-side variables. In practice, of course, the underlying GARCH parameters α and β are unknown and so must be estimated, resulting in the feasible forecast \hat{σ}_{t+h,t}^2 formed in the obvious way.

In financial applications, volatility forecasts are often of direct interest, and the GARCH model delivers the optimal h-step-ahead point forecast, σ_{t+h,t}^2. Alternatively, and more generally, we might not be intrinsically interested in volatility; rather, we may simply want to use GARCH volatility forecasts to improve h-step-ahead interval or density forecasts of ε_t, which are crucially dependent on the h-step-ahead prediction error variance, σ_{t+h,t}^2.

Consider, for example, the case of interval forecasting. In the case of constant volatility, we earlier worked with Gaussian ninety-five percent interval forecasts of the form y_{t+h,t} ± 1.96 σ_h, where σ_h denotes the unconditional h-step-ahead standard deviation (which also equals the conditional h-step-ahead standard deviation in the absence of volatility dynamics). Now, however, in the presence of volatility dynamics we use y_{t+h,t} ± 1.96 σ_{t+h,t}. The ability of the conditional prediction interval to adapt to changes in volatility is natural and desirable: when volatility is low, the intervals are naturally tighter, and conversely. In the presence of volatility dynamics, the unconditional interval forecast is correct on average but likely incorrect at any given time, whereas the conditional interval forecast is correct at all times.
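A minimal sketch of the forecasting calculations just described, assuming the GARCH(1,1) parameters and the 1-step-ahead variance forecast are already in hand (the numbers below are purely illustrative): the h-step-ahead variance forecasts follow the formula given earlier, converge to the unconditional variance, and each feeds a 95% conditional interval forecast.

```python
import numpy as np

# Minimal sketch: h-step-ahead GARCH(1,1) variance forecasts,
# sigma^2_{t+h,t} = omega*sum_{i=0}^{h-2}(alpha+beta)^i + (alpha+beta)^{h-1}*sigma^2_{t+1,t}.
def garch_variance_forecasts(omega, alpha, beta, sigma2_tp1, H):
    persistence = alpha + beta
    fcasts = np.empty(H)
    fcasts[0] = sigma2_tp1                                   # 1-step-ahead forecast
    for h in range(2, H + 1):
        fcasts[h - 1] = omega * sum(persistence ** i for i in range(h - 1)) \
                        + persistence ** (h - 1) * sigma2_tp1
    return fcasts

omega, alpha, beta = 0.05, 0.10, 0.85        # assumed (already-estimated) parameters
sigma2_tp1 = 2.0                             # assumed current 1-step-ahead variance forecast
v = garch_variance_forecasts(omega, alpha, beta, sigma2_tp1, H=20)
point = 0.0                                  # pure GARCH: the conditional mean forecast is zero
intervals = [(point - 1.96 * np.sqrt(s2), point + 1.96 * np.sqrt(s2)) for s2 in v]
print(v[-1], omega / (1 - alpha - beta))     # long-horizon forecast -> unconditional variance
```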

The issue arises as to how to detect GARCH effects in observed returns and, related, how to assess the adequacy of a fitted GARCH model. A key and simple device is the correlogram of squared returns, ε_t^2. As discussed earlier, ε_t^2 is a proxy for the latent conditional variance; if the conditional variance displays persistence, so too will ε_t^2. (Note well, however, that the converse is not true. That is, if ε_t^2 displays persistence, it does not necessarily follow that the conditional variance displays persistence. In particular, neglected serial correlation associated with conditional mean dynamics may cause serial correlation in ε_t and hence also in ε_t^2. Thus, before proceeding to examine and interpret the correlogram of ε_t^2 as a check for volatility dynamics, it is important that any conditional mean effects be appropriately modeled, in which case ε_t should be interpreted as the disturbance in an appropriate conditional mean model.) One can of course also fit a GARCH model and assess the significance of the GARCH coefficients in the usual way.

Note that we can write the GARCH process for returns as

ε_t = σ_t v_t,

where v_t ∼ iid N(0,1) and σ_t^2 = ω + α ε_{t−1}^2 + β σ_{t−1}^2. Equivalently, the standardized return, v_t, is iid:

ε_t / σ_t = v_t ∼ iid N(0,1).

This observation suggests a way to evaluate the adequacy of a fitted GARCH model: standardize returns by the conditional standard deviation from the fitted GARCH model, \hat{σ}_t, and then check for volatility dynamics missed by the fitted model by examining the correlogram of the squared standardized return, (ε_t / \hat{σ}_t)^2. This is routinely done in practice.
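To fix ideas, here is a minimal, self-contained sketch (Python with numpy/scipy, not the EViews workflow used in the text) of the full cycle: numerically maximize the Gaussian GARCH(1,1) likelihood, standardize the returns by the fitted conditional standard deviation, and check the squared standardized returns for remaining volatility dynamics. The data are simulated placeholders, and the initialization choices are ad hoc.

```python
import numpy as np
from scipy.optimize import minimize

def garch11_filter(params, r):
    omega, alpha, beta = params
    sigma2 = np.empty(len(r))
    sigma2[0] = r.var()                          # crude initialization
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

def neg_loglik(params, r):
    omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return np.inf                            # enforce the stated regularity conditions
    sigma2 = garch11_filter(params, r)
    return 0.5 * np.sum(np.log(2 * np.pi) + np.log(sigma2) + r ** 2 / sigma2)

# Simulated data stand in for observed returns.
rng = np.random.default_rng(5)
true = (0.05, 0.10, 0.85)
r = np.zeros(3000)
s2 = true[0] / (1 - true[1] - true[2])
for t in range(1, len(r)):
    s2 = true[0] + true[1] * r[t - 1] ** 2 + true[2] * s2
    r[t] = np.sqrt(s2) * rng.standard_normal()

fit = minimize(neg_loglik, x0=np.array([0.1, 0.05, 0.80]), args=(r,), method="Nelder-Mead")
sigma2_hat = garch11_filter(fit.x, r)
z2 = (r / np.sqrt(sigma2_hat)) ** 2              # squared standardized returns
acf1 = np.corrcoef(z2[1:], z2[:-1])[0, 1]        # near zero if the fit is adequate
print(fit.x, acf1)
```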

13.5 Exercises, Problems and Complements

1. (Graphical regression diagnostic: time series plot of e_t^2 or |e_t|) Plots of e_t^2 or |e_t| reveal patterns (most notably serial correlation) in the squared or absolute residuals, which correspond to non-constant volatility, or heteroskedasticity, in the levels of the residuals. As with the standard residual plot, the squared or absolute residual plot is always a simple univariate plot, even when there are many right-hand side variables. Such plots feature prominently, for example, in tracking and forecasting time-varying volatility.

2. (Removing conditional mean dynamics before modeling volatility dynamics) In the application in the text we noted that NYSE stock returns appeared to have some weak conditional mean dynamics, yet we ignored them and proceeded directly to model volatility.

a. Instead, first fit autoregressive models using the SIC to guide order selection, and then fit GARCH models to the residuals. Redo the entire empirical analysis reported in the text in this way, and discuss any important differences in the results.

b. Consider instead the simultaneous estimation of all parameters of AR(p)-GARCH models. That is, estimate regression models where the regressors are lagged dependent variables and the disturbances display GARCH. Redo the entire empirical analysis reported in the text in this way, and discuss any important differences in the results relative to those in the text and those obtained in part a above.

3. (Variations on the basic ARCH and GARCH models) Using the stock return data, consider richer models than the pure ARCH and GARCH models discussed in the text.

a. Estimate, diagnose and discuss a threshold GARCH(1,1) model.

b. Estimate, diagnose and discuss an EGARCH(1,1) model.

c. Estimate, diagnose and discuss a component GARCH(1,1) model.

d. Estimate, diagnose and discuss a GARCH-M model.

4. (Empirical performance of pure ARCH models as approximations to volatility dynamics) Here we will fit pure ARCH(p) models to the stock return data, including values of p larger than p = 5 as done in the text, and contrast the results with those from fitting GARCH(p,q) models.


a. When fitting pure ARCH(p) models, what value of p seems adequate?

b. When fitting GARCH(p,q) models, what values of p and q seem adequate?

c. Which approach appears more parsimonious?

5. (Direct modeling of volatility proxies) In the text we fit an AR(5) directly to a subset of the squared NYSE stock returns. In this exercise, use the entire NYSE dataset.

a. Construct, display and discuss the fitted volatility series from the AR(5) model.

b. Construct, display and discuss an alternative fitted volatility series obtained by exponential smoothing, using a smoothing parameter of .10, corresponding to a large amount of smoothing, but less than done in the text.

c. Construct, display and discuss the volatility series obtained by fitting an appropriate GARCH model.

d. Contrast the results of parts a, b and c above.

e. Why is fitting of a GARCH model preferable in principle to the AR(5) or exponential smoothing approaches?

6. (Assessing volatility dynamics in observed returns and in standardized returns) In the text we sketched the use of correlograms of squared observed returns for the detection of GARCH, and squared standardized returns for diagnosing the adequacy of a fitted GARCH model. Examination of Ljung-Box statistics is an important part of a correlogram analysis.

It can be shown that the Ljung-Box statistic may be legitimately used on squared observed returns, in which case it will have the usual χ²_m distribution under the null hypothesis of independence. One may also use the Ljung-Box statistic on the squared standardized returns, but a better distributional approximation is obtained in that case by using a χ²_{m−k} distribution, where k is the number of estimated GARCH parameters, to account for degrees of freedom used in model fitting.

7. (Allowing for leptokurtic conditional densities) Thus far we have worked exclusively with conditionally Gaussian GARCH models, which correspond to

ε_t = σ_t v_t, v_t ∼ iid N(0,1),

or equivalently, to normality of the standardized return, ε_t / σ_t.

a. The conditional normality assumption may sometimes be violated. However, GARCH parameters are consistently estimated by Gaussian maximum likelihood even when the normality assumption is incorrect. Sketch some intuition for this result.

b. Fit an appropriate conditionally Gaussian GARCH model to the stock return data. How might you use the histogram of the standardized returns to assess the validity of the conditional normality assumption? Do so and discuss your results.

c. Sometimes the conditionally Gaussian GARCH model does indeed fail to explain all of the leptokurtosis in returns; that is, especially with very high-frequency data, we sometimes find that the conditional density is leptokurtic. Fortunately, leptokurtic conditional densities are easily incorporated into the GARCH model. For example, in the conditionally Student's-t GARCH model, the conditional density is assumed to be Student's t, with the degrees-of-freedom d treated as another parameter to be estimated. More precisely, we write

ε_t = σ_t v_t, v_t ∼ iid t_d / std(t_d).

What is the reason for dividing the Student's t variable, t_d, by its standard deviation, std(t_d)? How might such a model be estimated?

8. (Multivariate GARCH models) In the multivariate case, such as when modeling a set of returns rather than a single return, we need to model not only conditional variances, but also conditional covariances.

a. Is the GARCH conditional variance specification introduced earlier, say for the i-th return, σ_{i,t}^2 = ω + α ε_{i,t−1}^2 + β σ_{i,t−1}^2, still appealing in the multivariate case? Why or why not?

b. Consider the following specification for the conditional covariance between the i-th and j-th returns: σ_{ij,t} = ω + α ε_{i,t−1} ε_{j,t−1} + β σ_{ij,t−1}. Is it appealing? Why or why not?

c. Consider a fully general multivariate volatility model, in which every conditional variance and covariance may depend on lags of every conditional variance and covariance, as well as lags of every squared return and cross product of returns. What are the strengths and weaknesses of such a model? Would it be useful for modeling, say, a set of five hundred returns? If not, how might you proceed?

13.6 Notes

Chapter 14

Multivariate: Vector Autoregression

The regression model is an explicitly multivariate model, in which variables are explained and forecast on the basis of their own history and the histories of other, related, variables. Exploiting such cross-variable linkages may lead to good and intuitive forecasting models, and to better forecasts than those obtained from univariate models.

Regression models are often called causal, or explanatory, models. For example, in the linear regression model,

y_t = β_0 + β_1 x_t + ε_t, ε_t ∼ WN(0, σ^2),

the presumption is that x helps determine, or cause, y, not the other way around. For this reason the left-hand-side variable is sometimes called the “endogenous” variable, and the right-hand-side variables are called “exogenous” or “explanatory” variables. But ultimately regression models, like all statistical models, are models of correlation, not causation. Except in special cases, all variables are endogenous, and it's best to admit as much from the outset. In this chapter we'll explicitly do so; we'll work with systems of regression equations called vector autoregressions (VARs).

14.1 Distributed Lag Models

An unconditional forecasting model like y_t = β_0 + δ x_{t−1} + ε_t can be immediately generalized to the distributed lag model,

y_t = β_0 + \sum_{i=1}^{N_x} δ_i x_{t−i} + ε_t.

We say that y depends on a distributed lag of past x's. The coefficients on the lagged x's are called lag weights, and their pattern is called the lag distribution. One way to estimate a distributed lag model is simply to include all N_x lags of x in the regression, which can be estimated by least squares in the usual way. In many situations, however, N_x might be quite a large number, in which case we'd have to use many degrees of freedom to estimate the model, violating the parsimony principle. Often we can recover many of those degrees of freedom without seriously worsening the model's fit by constraining the lag weights to lie on a low-order polynomial. Such polynomial distributed lags promote smoothness in the lag distribution and may lead to sophisticatedly simple models with improved forecasting performance. Polynomial distributed lag models are estimated by minimizing the sum of squared residuals in the usual way, subject to the constraint that the lag weights follow a low-order polynomial whose degree must be specified. Suppose, for example, that we constrain the lag weights to follow a second-degree polynomial. Then we find the parameter estimates by solving the problem

min_{β_0, δ_i} \sum_{t=N_x+1}^{T} [ y_t − β_0 − \sum_{i=1}^{N_x} δ_i x_{t−i} ]^2,

subject to

δ_i = P(i) = a + b i + c i^2, \quad i = 1, ..., N_x.

This converts the estimation problem from one of estimating 1 + N_x parameters, β_0, δ_1, ..., δ_{N_x}, to one of estimating four parameters, β_0, a, b and c. Sometimes additional constraints are imposed on the shape of the polynomial, such as P(N_x) = 0, which enforces the idea that the dynamics have been exhausted by lag N_x.

Polynomial distributed lags produce aesthetically appealing, but basically ad hoc, lag distributions. After all, why should the lag weights necessarily follow a low-order polynomial? An alternative and often preferable approach makes use of the rational distributed lags that we introduced in Chapter ?? in the context of univariate ARMA modeling. Rational distributed lags promote parsimony, and hence smoothness in the lag distribution, but they do so in a way that's potentially much less restrictive than requiring the lag weights to follow a low-order polynomial. We might, for example, use a model like

y_t = [A(L)/B(L)] x_t + ε_t,

where A(L) and B(L) are low-order polynomials in the lag operator. Equivalently, we can write

B(L) y_t = A(L) x_t + B(L) ε_t,

which emphasizes that the rational distributed lag of x actually brings both lags of x and lags of y into the model. One way or another, it's crucial to allow for lags of y, and we now study such models in greater depth.
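A minimal sketch of the polynomial-distributed-lag idea described above: imposing δ_i = a + b i + c i^2 reduces the estimation to OLS on three constructed regressors (plus an intercept). The data below are made up purely to illustrate the mechanics.

```python
import numpy as np

# Minimal sketch: with delta_i = a + b*i + c*i^2, the distributed lag
# sum_i delta_i x_{t-i} equals a*z1_t + b*z2_t + c*z3_t, where
# z1_t = sum_i x_{t-i}, z2_t = sum_i i*x_{t-i}, z3_t = sum_i i^2*x_{t-i}.
def pdl_ols(y, x, Nx):
    T = len(y)
    rows = range(Nx, T)                                  # usable observations
    Z = np.column_stack([
        np.ones(T - Nx),
        [sum(x[t - i] for i in range(1, Nx + 1)) for t in rows],
        [sum(i * x[t - i] for i in range(1, Nx + 1)) for t in rows],
        [sum(i ** 2 * x[t - i] for i in range(1, Nx + 1)) for t in rows],
    ])
    beta0, a, b, c = np.linalg.lstsq(Z, y[Nx:], rcond=None)[0]
    return beta0, np.array([a + b * i + c * i ** 2 for i in range(1, Nx + 1)])

# Usage with made-up data whose lag weights lie on a quadratic by construction.
rng = np.random.default_rng(6)
x = rng.standard_normal(300)
y = 1.0 + sum(0.5 * (1 - (i / 8) ** 2) * np.roll(x, i) for i in range(1, 9)) \
    + 0.1 * rng.standard_normal(300)
print(pdl_ols(y, x, Nx=8)[1])                            # recovered lag weights
```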

14.2 Regressions with Lagged Dependent Variables, and Regressions with ARMA Disturbances

There’s something missing in distributed lag models of the form yt = β0 +

Nx X

δi xt−i + εt .

i=1

A multivariate model (in this case, a regression model) should relate the current value of y to its own past and to the past of x. But as presently written, we've left out the past of y! Even in distributed lag models, we always want to allow for the presence of the usual univariate dynamics. Put differently, the included regressors may not capture all the dynamics in y, which we need to model one way or another. Thus, for example, a preferable model includes lags of the dependent variable,

y_t = β_0 + \sum_{i=1}^{N_y} α_i y_{t−i} + \sum_{j=1}^{N_x} δ_j x_{t−j} + ε_t.

This model, a distributed lag regression model with lagged dependent variables, is closely related to, but not exactly the same as, the rational distributed lag model introduced earlier. (Why?) You can think of it as arising by beginning with a univariate autoregressive model for y, and then introducing additional explanatory variables. If the lagged y's don't play a role, as assessed with the usual tests, we can always delete them, but we never want to eliminate from the outset the possibility that lagged dependent variables play a role. Lagged dependent variables absorb residual serial correlation and can dramatically enhance forecasting performance.

Alternatively, we can capture own-variable dynamics in distributed-lag regression models by using a distributed-lag regression model with ARMA disturbances. Recall that our ARMA(p,q) models are equivalent to regression models, with only a constant regressor, and with ARMA(p,q) disturbances,

y_t = β_0 + ε_t

ε_t = [Θ(L)/Φ(L)] v_t

v_t ∼ WN(0, σ^2).

We want to begin with the univariate model as a baseline, and then generalize it to allow for multivariate interaction, resulting in models such as

y_t = β_0 + \sum_{i=1}^{N_x} δ_i x_{t−i} + ε_t

ε_t = [Θ(L)/Φ(L)] v_t

v_t ∼ WN(0, σ^2).

Regressions with ARMA disturbances make clear that regression (a statistical and econometric tool with a long tradition) and the ARMA model of time-series dynamics (a more recent innovation) are not at all competitors; rather, when used appropriately they can be highly complementary. It turns out that the distributed-lag regression model with autoregressive disturbances – a great workhorse in econometrics – is a special case of the more general model with lags of both y and x and white noise disturbances. To see this, let's take the simple example of an unconditional (1-step-ahead) regression forecasting model with AR(1) disturbances:

y_t = β_0 + β_1 x_{t−1} + ε_t

ε_t = φ ε_{t−1} + v_t

v_t ∼ WN(0, σ^2).


In lag operator notation, we write the AR(1) regression disturbance as (1 − φL) ε_t = v_t, or

ε_t = v_t / (1 − φL).

Thus we can rewrite the regression model as

y_t = β_0 + β_1 x_{t−1} + v_t / (1 − φL).

Now multiply both sides by (1 − φL) to get

(1 − φL) y_t = (1 − φ) β_0 + β_1 (1 − φL) x_{t−1} + v_t,

or

y_t = φ y_{t−1} + (1 − φ) β_0 + β_1 x_{t−1} − φ β_1 x_{t−2} + v_t.

Thus a model with one lag of x on the right and AR(1) disturbances is equivalent to a model with y_{t−1}, x_{t−1}, and x_{t−2} on the right-hand side and white noise errors, subject to the restriction that the coefficient on the second lag, x_{t−2}, is the negative of the product of the coefficients on y_{t−1} and x_{t−1}. Thus, distributed lag regressions with lagged dependent variables are more general than distributed lag regressions with dynamic disturbances. In practice, the important thing is to allow for own-variable dynamics somehow, in order to account for dynamics in y not explained by the right-hand-side variables. Whether we do so by including lagged dependent variables or by allowing for ARMA disturbances can occasionally be important, but usually it's a comparatively minor issue.

14.3 Vector Autoregressions

A univariate autoregression involves one variable. In a univariate autoregression of order p, we regress a variable on p lags of itself. In contrast, a multivariate autoregression – that is, a vector autoregression, or VAR – involves N variables. In an N-variable vector autoregression of order p, or VAR(p), we estimate N different equations. In each equation, we regress the relevant left-hand-side variable on p lags of itself, and p lags of every other variable. (Trends, seasonals, and other exogenous variables may also be included, as long as they're all included in every equation.) Thus the right-hand-side variables are the same in every equation – p lags of every variable.

The key point is that, in contrast to the univariate case, vector autoregressions allow for cross-variable dynamics. Each variable is related not only to its own past, but also to the past of all the other variables in the system. In a two-variable VAR(1), for example, we have two equations, one for each variable (y_1 and y_2). We write

y_{1,t} = φ_{11} y_{1,t−1} + φ_{12} y_{2,t−1} + ε_{1,t}

y_{2,t} = φ_{21} y_{1,t−1} + φ_{22} y_{2,t−1} + ε_{2,t}.

Each variable depends on one lag of the other variable in addition to one lag of itself; that's one obvious source of multivariate interaction captured by the VAR that may be useful for forecasting. In addition, the disturbances may be correlated, so that when one equation is shocked, the other will typically be shocked as well, which is another type of multivariate interaction that univariate models miss. We summarize the disturbance variance-covariance structure as

ε_{1,t} ∼ WN(0, σ_1^2), ε_{2,t} ∼ WN(0, σ_2^2), cov(ε_{1,t}, ε_{2,t}) = σ_{12}.

The innovations could be uncorrelated, which occurs when σ_{12} = 0, but they needn't be.

You might guess that VARs would be hard to estimate. After all, they're fairly complicated models, with potentially many equations and many right-hand-side variables in each equation. In fact, precisely the opposite is true. VARs are very easy to estimate, because we need only run N linear regressions. That's one reason why VARs are so popular – OLS estimation of autoregressive models is simple and stable, in contrast to the numerical estimation required for models with moving-average components. (Estimation of MA and ARMA models is stable enough in the univariate case but rapidly becomes unwieldy in multivariate situations. Hence multivariate ARMA models are used infrequently in practice, in spite of the potential they hold for providing parsimonious approximations to the Wold representation.) Equation-by-equation OLS estimation also turns out to have very good statistical properties when each equation has the same regressors, as is the case in standard VARs. Otherwise, a more complicated estimation procedure called seemingly unrelated regression, which explicitly accounts for correlation across equation disturbances, would be required to obtain estimates with good statistical properties. (For an exposition of seemingly unrelated regression, see Pindyck and Rubinfeld, 1997.)

When fitting VARs to data, we use the Schwarz and Akaike criteria, just as in the univariate case. The formulas differ, however, because we're now working with a multivariate system of equations rather than a single equation. To get an AIC or SIC value for a VAR system, we could add up the equation-by-equation AICs or SICs, but unfortunately, doing so is appropriate only if the innovations are uncorrelated across equations, which is a very special and unusual situation. Instead, explicitly multivariate versions of the AIC and SIC – and more advanced formulas – are required that account for cross-equation innovation correlation. It's beyond the scope of this book to derive and present those formulas, because they involve unavoidable use of matrix algebra, but fortunately we don't need to. They're pre-programmed in many computer packages, and we interpret the AIC and SIC values computed for VARs of various orders in exactly the same way as in the univariate case: we select that order p such that the AIC or SIC is minimized.

We construct VAR forecasts in a way that precisely parallels the univariate case. We can construct 1-step-ahead point forecasts immediately, because all variables on the right-hand side are lagged by one period. Armed with the 1-step-ahead forecasts, we can construct the 2-step-ahead forecasts, from which we can construct the 3-step-ahead forecasts, and so on in the usual way, following the chain rule of forecasting. We construct interval and density forecasts in ways that also parallel the univariate case. The multivariate nature of VARs makes the derivations more tedious, however, so we bypass them. As always, to construct practical forecasts we replace unknown parameters by estimates.
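A minimal sketch of equation-by-equation OLS estimation of a VAR and of the chain rule of forecasting (the data below are placeholders; in practice Y would hold the observed series):

```python
import numpy as np

# Minimal sketch: OLS estimation of an N-variable VAR(p) (same regressors in every
# equation) and iterated "chain rule" point forecasts.
def fit_var(Y, p):
    T, N = Y.shape
    X = np.column_stack([np.ones(T - p)] + [Y[p - l:T - l] for l in range(1, p + 1)])
    B = np.linalg.lstsq(X, Y[p:], rcond=None)[0]    # one OLS regression per column/equation
    c = B[0]                                        # intercepts
    A = B[1:].reshape(p, N, N).transpose(0, 2, 1)   # A[l-1][i, j]: coef. on y_j(t-l) in eq. i
    return c, A

def forecast_var(Y, c, A, H):
    hist = [Y[-l] for l in range(1, len(A) + 1)]    # y_t, y_{t-1}, ..., y_{t-p+1}
    path = []
    for _ in range(H):
        yhat = c + sum(A[l] @ hist[l] for l in range(len(A)))
        path.append(yhat)
        hist = [yhat] + hist[:-1]                   # roll the conditioning information forward
    return np.array(path)

rng = np.random.default_rng(7)
Y = rng.standard_normal((200, 2))                   # placeholder data
c, A = fit_var(Y, p=4)
print(forecast_var(Y, c, A, H=6))
```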

14.4 Predictive Causality

There’s an important statistical notion of causality that’s intimately related to forecasting and naturally introduced in the context of V ARs. It is based on two key principles: first, cause should occur before effect, and second, a causal series should contain information useful for forecasting that is not available in the other series (including the past history of the variable being forecast). In the unrestricted V ARs that we’ve studied thus far, everything causes everything else, because lags of every variable appear on the right of every equation. Cause precedes effect because the right-hand-side variables are lagged, and each variable is useful in forecasting every other variable. We stress from the outset that the notion of predictive causality contains little if any information about causality in the philosophical sense. Rather, the statement “yi causes yj ” is just shorthand for the more precise, but long-

“y_i contains useful information for predicting y_j (in the linear least squares sense), over and above the past histories of the other variables in the system.” To save space, we simply say that y_i causes y_j.

To understand what predictive causality means in the context of a VAR(p), consider the j-th equation of the N-equation system, which has y_j on the left and p lags of each of the N variables on the right. If y_i causes y_j, then at least one of the lags of y_i that appear on the right side of the y_j equation must have a nonzero coefficient. It's also useful to consider the opposite situation, in which y_i does not cause y_j. In that case, all of the lags of y_i that appear on the right side of the y_j equation must have zero coefficients. (Note that in such a situation the error variance in forecasting y_j using lags of all variables in the system will be the same as the error variance in forecasting y_j using lags of all variables in the system except y_i.) Statistical causality tests are based on this formulation of non-causality. We use an F-test to assess whether all coefficients on lags of y_i are jointly zero.

Note that we've defined non-causality in terms of 1-step-ahead prediction errors. In the bivariate VAR, this implies non-causality in terms of h-step-ahead prediction errors, for all h. (Why?) In higher-dimensional cases, things are trickier; 1-step-ahead noncausality does not necessarily imply noncausality at other horizons. For example, variable i may 1-step cause variable j, and variable j may 1-step cause variable k. Thus, variable i 2-step causes variable k, but does not 1-step cause variable k.

Causality tests are often used when building and assessing forecasting models, because they can inform us about those parts of the workings of complicated multivariate models that are particularly relevant for forecasting. Just staring at the coefficients of an estimated VAR (and in complicated systems there are many coefficients) rarely yields insights into its workings. Thus we need tools that help us to see through to the practical forecasting properties of the model that concern us. And we often have keen interest in the answers to questions such as

“Does y_i contribute toward improving forecasts of y_j?” and “Does y_j contribute toward improving forecasts of y_i?” If the results violate intuition or theory, then we might scrutinize the model more closely. In a situation in which we can't reject a certain noncausality hypothesis, and neither intuition nor theory makes us uncomfortable with it, we might want to impose it, by omitting certain lags of certain variables from certain equations.

Various types of causality hypotheses are sometimes entertained. In any equation (the j-th, say), we've already discussed testing the simple noncausality hypothesis that:

(a) No lags of variable i aid in one-step-ahead prediction of variable j.

We can broaden the idea, however. Sometimes we test stronger noncausality hypotheses such as:

(b) No lags of a set of other variables aid in one-step-ahead prediction of variable j.

(c) No lags of any other variables aid in one-step-ahead prediction of variable j.

All of hypotheses (a), (b) and (c) amount to assertions that various coefficients are zero. Finally, sometimes we test noncausality hypotheses that involve more than one equation, such as:

(d) No variable in a set A causes any variable in a set B,

in which case we say that the variables in A are block non-causal for those in B. This particular noncausality hypothesis corresponds to exclusion restrictions that hold simultaneously in a number of equations. Again, however, standard test procedures are applicable.
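A minimal sketch of the F-test for predictive (non-)causality in a bivariate VAR(p): compare the unrestricted equation for y_1 (lags of y_1 and y_2) with the restricted one that excludes all lags of y_2. The simulated data are placeholders in which y_2 leads y_1 by construction.

```python
import numpy as np
from scipy import stats

# Minimal sketch: does y2 help predict y1 ("y2 causes y1")?
def causality_ftest(y1, y2, p):
    T = len(y1)
    X_r = np.column_stack([np.ones(T - p)] + [y1[p - l:T - l] for l in range(1, p + 1)])
    X_u = np.column_stack([X_r] + [y2[p - l:T - l] for l in range(1, p + 1)])
    yy = y1[p:]
    ssr = lambda X: np.sum((yy - X @ np.linalg.lstsq(X, yy, rcond=None)[0]) ** 2)
    ssr_r, ssr_u = ssr(X_r), ssr(X_u)
    q = p                                    # number of restrictions (excluded lags of y2)
    dof = len(yy) - X_u.shape[1]             # residual degrees of freedom, unrestricted model
    F = ((ssr_r - ssr_u) / q) / (ssr_u / dof)
    return F, stats.f.sf(F, q, dof)          # F statistic and its p-value

rng = np.random.default_rng(8)
y2 = rng.standard_normal(300)
y1 = 0.5 * np.roll(y2, 1) + rng.standard_normal(300)   # y2 leads y1 by construction
print(causality_ftest(y1, y2, p=2))
```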

14.5 Impulse-Response Functions

The impulse-response function is another device that helps us to learn about the dynamic properties of vector autoregressions of interest to forecasters. We'll introduce it first in the univariate context, and then we'll move to VARs.

The question of interest is simple and direct: How does a unit innovation to a series affect it, now and in the future? To answer the question, we simply read off the coefficients in the moving average representation of the process. We're used to normalizing the coefficient on ε_t to unity in moving-average representations, but we don't have to do so; more generally, we can write

y_t = b_0 ε_t + b_1 ε_{t−1} + b_2 ε_{t−2} + ..., ε_t ∼ WN(0, σ^2).

The additional generality introduces ambiguity, however, because we can always multiply and divide every ε_t by an arbitrary constant m, yielding an equivalent model but with different parameters and innovations,

y_t = (b_0 m)(ε_t / m) + (b_1 m)(ε_{t−1} / m) + (b_2 m)(ε_{t−2} / m) + ..., ε_t ∼ WN(0, σ^2),

or

y_t = b_0' ε_t' + b_1' ε_{t−1}' + b_2' ε_{t−2}' + ..., ε_t' ∼ WN(0, σ^2/m^2),

where b_i' = b_i m and ε_t' = ε_t / m.

To remove the ambiguity, we must set a value of m. Typically we set m = 1, which yields the standard form of the moving average representation. For impulse-response analysis, however, a different normalization turns out to be particularly convenient; we choose m = σ, which yields

y_t = (b_0 σ)(ε_t / σ) + (b_1 σ)(ε_{t−1} / σ) + (b_2 σ)(ε_{t−2} / σ) + ..., ε_t ∼ WN(0, σ^2),

or

y_t = b_0' ε_t' + b_1' ε_{t−1}' + b_2' ε_{t−2}' + ..., ε_t' ∼ WN(0, 1),

where b_i' = b_i σ and ε_t' = ε_t / σ. Taking m = σ converts shocks to “standard deviation units,” because a unit shock to ε_t' corresponds to a one standard deviation shock to ε_t.

To make matters concrete, consider the univariate AR(1) process,

y_t = φ y_{t−1} + ε_t, ε_t ∼ WN(0, σ^2).

The standard moving average form is

y_t = ε_t + φ ε_{t−1} + φ^2 ε_{t−2} + ..., ε_t ∼ WN(0, σ^2),

and the equivalent representation in standard deviation units is

y_t = b_0 ε_t' + b_1 ε_{t−1}' + b_2 ε_{t−2}' + ..., ε_t' ∼ WN(0, 1),

where b_i = φ^i σ and ε_t' = ε_t / σ. The impulse-response function is {b_0, b_1, ...}. The parameter b_0 is the contemporaneous effect of a unit shock to ε_t', or equivalently a one standard deviation shock to ε_t; as must be the case, then, b_0 = σ. Note well that b_0 gives the immediate effect of the shock at time t, when it hits. The parameter b_1, which multiplies ε_{t−1}', gives the effect of the shock one period later, and so on. The full set of impulse-response coefficients, {b_0, b_1, ...}, tracks the complete dynamic response of y to the shock.

Now we consider the multivariate case. The idea is the same, but there are more shocks to track. The key question is, “How does a unit shock to ε_i affect y_j, now and in the future, for all the various combinations of i and j?” Consider, for example, the bivariate VAR(1),

y_{1,t} = φ_{11} y_{1,t−1} + φ_{12} y_{2,t−1} + ε_{1,t}

y_{2,t} = φ_{21} y_{1,t−1} + φ_{22} y_{2,t−1} + ε_{2,t}

ε_{1,t} ∼ WN(0, σ_1^2), ε_{2,t} ∼ WN(0, σ_2^2), cov(ε_{1,t}, ε_{2,t}) = σ_{12}.

The standard moving average representation, obtained by back substitution, is

y_{1,t} = ε_{1,t} + φ_{11} ε_{1,t−1} + φ_{12} ε_{2,t−1} + ...

y_{2,t} = ε_{2,t} + φ_{21} ε_{1,t−1} + φ_{22} ε_{2,t−1} + ...

ε_{1,t} ∼ WN(0, σ_1^2), ε_{2,t} ∼ WN(0, σ_2^2), cov(ε_{1,t}, ε_{2,t}) = σ_{12}.

Just as in the univariate case, it proves fruitful to adopt a different normalization of the moving average representation for impulse-response analysis. The multivariate analog of our univariate normalization by σ is called normalization by the Cholesky factor.

(For detailed discussion and derivation of this advanced topic, see Hamilton, 1994.) The resulting VAR moving average representation has a number of useful properties that parallel the univariate case precisely. First, the innovations of the transformed system are in standard deviation units. Second, although the current innovations in the standard representation have unit coefficients, the current innovations in the normalized representation have non-unit coefficients. In fact, the first equation has only one current innovation, ε_{1,t}'. (The other has a zero coefficient.) The second equation has both current innovations. Thus, the ordering of the variables can matter. (In higher-dimensional VARs, the equation that's first in the ordering has only one current innovation, ε_{1,t}'. The equation that's second has only current innovations ε_{1,t}' and ε_{2,t}', the equation that's third has only current innovations ε_{1,t}', ε_{2,t}' and ε_{3,t}', and so on.)

If y_1 is ordered first, the normalized representation is

y_{1,t} = b_{11}^0 ε_{1,t}' + b_{11}^1 ε_{1,t−1}' + b_{12}^1 ε_{2,t−1}' + ...

y_{2,t} = b_{21}^0 ε_{1,t}' + b_{22}^0 ε_{2,t}' + b_{21}^1 ε_{1,t−1}' + b_{22}^1 ε_{2,t−1}' + ...

ε_{1,t}' ∼ WN(0, 1), ε_{2,t}' ∼ WN(0, 1), cov(ε_{1,t}', ε_{2,t}') = 0.

Alternatively, if y_2 is ordered first, the normalized representation is

y_{2,t} = b_{22}^0 ε_{2,t}' + b_{21}^1 ε_{1,t−1}' + b_{22}^1 ε_{2,t−1}' + ...

y_{1,t} = b_{11}^0 ε_{1,t}' + b_{12}^0 ε_{2,t}' + b_{11}^1 ε_{1,t−1}' + b_{12}^1 ε_{2,t−1}' + ...

ε_{1,t}' ∼ WN(0, 1), ε_{2,t}' ∼ WN(0, 1), cov(ε_{1,t}', ε_{2,t}') = 0.

Finally, the normalization adopted yields a zero covariance between the disturbances of the transformed system. This is crucial, because it lets us perform the experiment of interest – shocking one variable in isolation of the others, which we can do if the innovations are uncorrelated but can't do if they're correlated, as in the original unnormalized representation.

After normalizing the system, for a given ordering, say y_1 first, we compute four sets of impulse-response functions for the bivariate model: the response of y_1 to a unit normalized innovation to y_1, {b_{11}^0, b_{11}^1, b_{11}^2, ...}; the response of y_1 to a unit normalized innovation to y_2, {b_{12}^1, b_{12}^2, ...}; the response of y_2 to a unit normalized innovation to y_2, {b_{22}^0, b_{22}^1, b_{22}^2, ...}; and the response of y_2 to a unit normalized innovation to y_1, {b_{21}^0, b_{21}^1, b_{21}^2, ...}. Typically we examine the set of impulse-response functions graphically. Often it turns out that impulse-response functions aren't sensitive to ordering, but the only way to be sure is to check. (Note well that the issues of normalization and ordering only affect impulse-response analysis; for forecasting we only need the unnormalized model.)

In practical applications of impulse-response analysis, we simply replace unknown parameters by estimates, which immediately yields point estimates of the impulse-response functions. Getting confidence intervals for impulse-response functions is trickier, however, and adequate procedures are still under development.
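A minimal sketch of orthogonalized impulse responses for a bivariate VAR(1) with assumed, purely illustrative coefficient and covariance matrices: the responses at horizon h to one-standard-deviation shocks are Φ^h P, where P is the lower-triangular Cholesky factor of the innovation covariance matrix, so the ordering of the variables is embodied in P.

```python
import numpy as np

# Minimal sketch: impulse responses of y_t = Phi y_{t-1} + e_t, e_t ~ WN(0, Sigma),
# normalized by the Cholesky factor P of Sigma (Sigma = P P').
Phi = np.array([[0.5, 0.2],
                [0.1, 0.6]])          # illustrative VAR(1) coefficients
Sigma = np.array([[1.0, 0.3],
                  [0.3, 0.5]])        # illustrative innovation covariance
P = np.linalg.cholesky(Sigma)         # lower-triangular Cholesky factor

H = 8
irf = np.empty((H + 1, 2, 2))         # irf[h][i, j]: response of y_i at horizon h to shock j
irf[0] = P                            # contemporaneous responses (zero in row 1, column 2)
for h in range(1, H + 1):
    irf[h] = Phi @ irf[h - 1]

print(irf[0])                         # the first variable responds only to the first shock at h = 0
print(irf[3])
```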

14.6 Variance Decompositions

Another way of characterizing the dynamics associated with VARs, closely related to impulse-response functions, is the variance decomposition. Variance decompositions have an immediate link to forecasting – they answer the question, "How much of the h-step-ahead forecast error variance of variable i is explained by innovations to variable j, for h = 1, 2, ...?" As with impulse-response functions, we typically make a separate graph for every (i, j) pair.
7 Note well that the issues of normalization and ordering only affect impulse-response analysis; for forecasting we only need the unnormalized model.
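As a sketch of the forecasting link, the following lines compute the h-step-ahead forecast-error variance decomposition from the orthogonalized moving-average coefficients of the same hypothetical bivariate VAR(1) used in the previous sketch; it illustrates the general recipe rather than reproducing any particular software's output.

import numpy as np

Phi = np.array([[0.5, 0.1],
                [0.4, 0.5]])
Sigma = np.array([[1.0, 0.5],
                  [0.5, 2.0]])
P = np.linalg.cholesky(Sigma)

H = 12
N = Phi.shape[0]
# Orthogonalized MA coefficients Theta_h = Phi^h P, h = 0, 1, ..., H-1.
Theta = []
Phi_h = np.eye(N)
for h in range(H):
    Theta.append(Phi_h @ P)
    Phi_h = Phi_h @ Phi

# The h-step-ahead forecast-error variance of variable i is the sum over
# horizons k < h and shocks j of Theta_k[i, j]^2; the decomposition reports
# each shock j's share of that sum.
fevd = np.zeros((H, N, N))
cum = np.zeros((N, N))
for h in range(H):
    cum += Theta[h] ** 2
    fevd[h] = cum / cum.sum(axis=1, keepdims=True)

print(fevd[0])    # 1-step-ahead shares
print(fevd[11])   # 12-step-ahead shares; each row sums to one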


Figure 14.1: Housing Starts and Completions, 1968 - 1996

Impulse-response functions and the variance decompositions present the same information (although they do so in different ways). For that reason it’s not strictly necessary to present both, and impulse-response analysis has gained greater popularity. Hence we offer only this brief discussion of variance decomposition. In the application to housing starts and completions that follows, however, we examine both impulse-response functions and variance decompositions. The two are highly complementary, as with information criteria and correlograms for model selection, and the variance decompositions have a nice forecasting motivation.

14.7 Application: Housing Starts and Completions

We estimate a bivariate VAR for U.S. seasonally-adjusted housing starts and completions, two widely-watched business cycle indicators, 1968.01-1996.06. We use the VAR to produce point extrapolation forecasts. We show housing starts and completions in Figure 14.1. Both are highly cyclical, increasing during business-cycle expansions and decreasing during contractions. Moreover, completions tend to lag behind starts, which makes sense because a house takes time to complete.


Figure 14.2: Housing Starts Correlogram

We split the data into an estimation sample, 1968.01-1991.12, and a holdout sample, 1992.01-1996.06, for forecasting. We therefore perform all model specification analysis and estimation, to which we now turn, on the 1968.01-1991.12 data. We show the starts correlogram in Table 14.2 and Figure 14.3. The sample autocorrelation function decays slowly, whereas the sample partial autocorrelation function appears to cut off at displacement 2. The patterns in the sample autocorrelations and partial autocorrelations are highly statistically significant, as evidenced by both the Bartlett standard errors and the Ljung-Box Q-statistics. The completions correlogram, in Table 14.4 and Figure 14.5, behaves similarly.
We've not yet introduced the cross-correlation function. There's been no need, because it's not relevant for univariate modeling. It provides important information, however, in the multivariate environments that now concern us. Recall that the autocorrelation function is the correlation between a variable and lags of itself. The cross-correlation function is a natural multivariate analog; it's simply the correlation between a variable and lags of another


Figure 14.3: Housing Starts Autocorrelations and Partial Autocorrelations

variable. We estimate those correlations using the usual estimator and graph them as a function of displacement along with the Bartlett two-standard-error bands, which apply just as in the univariate case. The cross-correlation function (Figure 14.6) for housing starts and completions is very revealing. Starts and completions are highly correlated at all displacements, and a clear pattern emerges as well: although the contemporaneous correlation is high (.78), completions are maximally correlated with starts lagged by roughly 6-12 months (around .90). Again, this makes good sense in light of the time it takes to build a house.
Now we proceed to model starts and completions. We need to select the order, p, of our VAR(p). Based on exploration using multivariate versions of SIC and AIC, we adopt a VAR(4).
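The sample cross-correlations and Bartlett bands described above can be computed in a few lines; the sketch below uses simulated placeholder series standing in for starts and completions (the actual data are on the book's website), so the numbers are illustrative only.

import numpy as np

def cross_correlations(y, x, max_lag=24):
    """Sample correlations corr(y_t, x_{t-tau}) for tau = 0, ..., max_lag."""
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    yd, xd = y - y.mean(), x - x.mean()
    denom = y.std() * x.std() * len(y)
    ccf = []
    for tau in range(max_lag + 1):
        if tau == 0:
            ccf.append(np.sum(yd * xd) / denom)
        else:
            ccf.append(np.sum(yd[tau:] * xd[:-tau]) / denom)
    return np.array(ccf)

# Simulated placeholder data: completions constructed to lag starts.
rng = np.random.default_rng(0)
starts = rng.standard_normal(288).cumsum() * 0.1 + rng.standard_normal(288)
comps = np.roll(starts, 9) + rng.standard_normal(288)

ccf = cross_correlations(comps, starts, max_lag=24)
band = 2.0 / np.sqrt(len(starts))   # Bartlett two-standard-error band
print(np.round(ccf[:13], 2), "band: +/-", round(band, 2))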


Figure 14.4: Housing Completions Correlogram

First consider the starts equation (Table 14.7a), residual plot (Figure 14.7b), and residual correlogram (Table 14.8, Figure 14.9). The explanatory power of the model is good, as judged by the R2 as well as the plots of actual and fitted values, and the residuals appear white, as judged by the residual sample autocorrelations, partial autocorrelations, and Ljung-Box statistics. Note as well that no lag of completions has a significant effect on starts, which makes sense – we obviously expect starts to cause completions, but not conversely. The completions equation (Table 14.10a), residual plot (Figure 14.10b), and residual correlogram (Table 14.11, Figure 14.12) appear similarly good. Lagged starts, moreover, most definitely have a significant effect on completions. Table 14.13 shows the results of formal causality tests. The hypothesis that starts don’t cause completions is simply that the coefficients on the four lags of starts in the completions equation are all zero. The F -statistic is overwhelmingly significant, which is not surprising in light of the previously-


Figure 14.5: Housing Completions Autocorrelations and Partial Autocorrelations

Figure 14.6: Housing Starts and Completions Sample Cross Correlations


(a) VAR Starts Equation

(b) VAR Starts Equation - Residual Plot

Figure 14.7: VAR Starts Model

noticed highly-significant t-statistics. Thus we reject noncausality from starts to completions at any reasonable level. Perhaps more surprising, we also reject noncausality from completions to starts at roughly the 5% level. Thus the causality appears bi-directional, in which case we say there is feedback.
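In outline, the estimation and causality testing just described can be reproduced with standard software. The sketch below uses Python's statsmodels VAR class on a pandas DataFrame assumed to hold the 1968.01-1991.12 data in columns named 'starts' and 'comps'; the file name and column names are assumptions made for illustration, not the book's own code.

import pandas as pd
from statsmodels.tsa.api import VAR

# Monthly DataFrame with columns 'starts' and 'comps', 1968.01-1991.12
# (hypothetical file name; the data are on the book's website).
df = pd.read_csv("starts_comps.csv", index_col=0, parse_dates=True)

model = VAR(df[["starts", "comps"]])
print(model.select_order(maxlags=8).summary())   # information-criterion lag selection

results = model.fit(4)                           # VAR(4), equation-by-equation OLS
print(results.summary())

# Granger (predictive) causality: do lags of starts help predict completions?
print(results.test_causality("comps", ["starts"], kind="f").summary())
# ...and the reverse direction, completions to starts.
print(results.test_causality("starts", ["comps"], kind="f").summary())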


Figure 14.8: VAR Starts Residual Correlogram


Figure 14.9: VAR Starts Equation - Sample Autocorrelation and Partial Autocorrelation


(a) VAR Completions Equation

(b) VAR Completions Equation - Residual Plot

Figure 14.10: VAR Completions Model


Figure 14.11: VAR Completions Residual Correlogram


Figure 14.12: VAR Completions Equation - Sample Autocorrelation and Partial Autocorrelation


Figure 14.13: Housing Starts and Completions - Causality Tests

In order to get a feel for the dynamics of the estimated VAR before producing forecasts, we compute impulse-response functions and variance decompositions. We present results for starts first in the ordering, so that a current innovation to starts affects only current starts, but the results are robust to reversal of the ordering.
In Figure 14.14, we display the impulse-response functions. First let's consider the own-variable impulse responses, that is, the effects of a starts innovation on subsequent starts or a completions innovation on subsequent completions; the effects are similar. In each case, the impulse response is large and decays in a slow, approximately monotonic fashion. In contrast, the cross-variable impulse responses are very different. An innovation to starts produces no movement in completions at first, but the effect gradually builds and becomes large, peaking at about fourteen months. (It takes time to build houses.) An innovation to completions, however, produces little movement in starts at any time.
Figure 14.15 shows the variance decompositions. The fraction of the error variance in forecasting starts due to innovations in starts is close to 100 percent at all horizons. In contrast, the fraction of the error variance in forecasting completions due to innovations in starts is near zero at short horizons, but it rises steadily and is near 100 percent at long horizons, again reflecting time-to-build effects.
Finally, we construct forecasts for the out-of-sample period, 1992.01-1996.06. The starts forecast appears in Figure 14.16. Starts begin their recovery before 1992.01, and the VAR projects continuation of the recovery.


Figure 14.14: Housing Starts and Completions - VAR Impulse Response Functions. Response is to 1 SD innovation.

The VAR forecast captures the general pattern quite well, but it forecasts quicker mean reversion than actually occurs, as is clear when comparing the forecast and realization in Figure 14.17. The figure also makes clear that the recovery of housing starts from the recession of 1990 was slower than the previous recoveries in the sample, which naturally makes for difficult forecasting. The completions forecast suffers the same fate, as shown in Figures 14.18 and 14.19. Interestingly, however, completions had not yet turned by 1991.12, but the forecast nevertheless correctly predicts the turning point. (Why?)
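A hedged sketch of how the out-of-sample point and interval forecasts could be generated, continuing the statsmodels objects and assumed names from the earlier sketch; it shows the mechanics rather than reproducing the figures above.

import numpy as np

# results is the fitted VAR(4) from the estimation sample (see the earlier sketch);
# forecast the 54 months 1992.01-1996.06 from the last p = 4 observations.
h = 54
last_obs = df[["starts", "comps"]].values[-results.k_ar:]
point_forecasts = results.forecast(last_obs, steps=h)

# Optional interval forecasts (point, lower, upper) at the 95% level.
point, lower, upper = results.forecast_interval(last_obs, steps=h, alpha=0.05)
print(np.round(point_forecasts[:3], 2))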


Figure 14.15: Housing Starts and Completions - VAR Variance Decompositions


Figure 14.16: Housing Starts Forecast

Figure 14.17: Housing Starts Forecast and Realization


Figure 14.18: Housing Completions Forecast

Figure 14.19: Housing Completions Forecast and Realization

14.8 Exercises, Problems and Complements

1. Housing starts and completions, continued. Our VAR analysis of housing starts and completions, as always, involved many judgment calls. Using the starts and completions data, assess the adequacy of our models and forecasts. Among other things, you may want to consider the following questions: a. Should we allow for a trend in the forecasting model? b. How do the results change if, in light of the results of the causality tests, we exclude lags of completions from the starts equation, reestimate by seemingly-unrelated regression, and forecast? c. Are the VAR forecasts of starts and completions more accurate than univariate forecasts? 2. Forecasting crop yields. Consider the following dilemma in agricultural crop yield forecasting: The possibility of forecasting crop yields several years in advance would, of course, be of great value in the planning of agricultural production. However, the success of long-range crop forecasts is contingent not only on our knowledge of the weather factors determining yield, but also on our ability to predict the weather. Despite an abundant literature in this field, no firm basis for reliable long-range weather forecasts has yet been found. (Sanderson, 1953, p. 3) a. How is the situation related to our concerns in this chapter, and specifically, to the issue of conditional vs. unconditional forecasting? b. What variables other than weather might be useful for predicting crop yield? c. How would you suggest that the forecaster should proceed?


3. Econometrics, time series analysis, and forecasting. As recently as the early 1970s, time series analysis was mostly univariate and made little use of economic theory. Econometrics, in contrast, stressed the cross-variable dynamics associated with economic theory, with equations estimated using multiple regression. Econometrics, moreover, made use of simultaneous systems of such equations, requiring complicated estimation methods. Thus the econometric and time series approaches to forecasting were very different.8 As Klein (1981) notes, however, the complicated econometric system estimation methods had little payoff for practical forecasting and were therefore largely abandoned, whereas the rational distributed lag patterns associated with time-series models led to large improvements in practical forecast accuracy.9 Thus, in more recent times, the distinction between econometrics and time series analysis has largely vanished, with the union incorporating the best of both. In many respects the V AR is a modern embodiment of both econometric and time-series traditions. V ARs use economic considerations to determine which variables to include and which (if any) restrictions should be imposed, allow for rich multivariate dynamics, typically require only simple estimation techniques, and are explicit forecasting models. 4. Business cycle analysis and forecasting: expansions, contractions, turning points, and leading indicators10 . The use of anticipatory data is linked to business cycle analysis in general, and leading indicators in particular. During the first half of this 8

Klein and Young (1980) and Klein (1983) provide good discussions of the traditional econometric simultaneous equations paradigm, as well as the link between structural simultaneous equations models and reduced-form time series models. Wallis (1995) provides a good summary of modern large-scale macroeconometric modeling and forecasting, and Pagan and Robertson (2002) provide an intriguing discussion of the variety of macroeconomic forecasting approaches currently employed in central banks around the world. 9 For an acerbic assessment circa the mid-1970s, see Jenkins (1979). 10 This complement draws in part upon Diebold and Rudebusch (1996).


century, much research was devoted to obtaining an empirical characterization of the business cycle. The most prominent example of this work was Burns and Mitchell (1946), whose summary empirical definition was: Business cycles are a type of fluctuation found in the aggregate economic activity of nations that organize their work mainly in business enterprises: a cycle consists of expansions occurring at about the same time in many economic activities, followed by similarly general recessions, contractions, and revivals which merge into the expansion phase of the next cycle. (p. 3) The comovement among individual economic variables was a key feature of Burns and Mitchell’s definition of business cycles. Indeed, the comovement among series, taking into account possible leads and lags in timing, was the centerpiece of Burns and Mitchell’s methodology. In their analysis, Burns and Mitchell considered the historical concordance of hundreds of series, including those measuring commodity output, income, prices, interest rates, banking transactions, and transportation services, and they classified series as leading, lagging or coincident. One way to define a leading indicator is to say that a series x is a leading indicator for a series y if x causes y in the predictive sense. According to that definition, for example, our analysis of housing starts and completions indicates that starts are a leading indicator for completions. Leading indicators have the potential to be used in forecasting equations in the same way as anticipatory variables. Inclusion of a leading indicator, appropriately lagged, can improve forecasts. Zellner and Hong (1989) and Zellner, Hong and Min (1991), for example, make good use of that idea in their ARLI (autoregressive leading-indicator) models for forecasting aggregate output growth. In those models, Zellner et al . build forecasting models by regressing output on lagged output and lagged leading indicators; they also use shrinkage techniques to coax


the forecasted growth rates toward the international average, which improves forecast performance. Burns and Mitchell used the clusters of turning points in individual series to determine the monthly dates of the turning points in the overall business cycle, and to construct composite indexes of leading, coincident, and lagging indicators. Such indexes have been produced by the National Bureau of Economic Research (a think tank in Cambridge, Mass.), the Department of Commerce (a U.S. government agency in Washington, DC), and the Conference Board (a business membership organization based in New York).11 Composite indexes of leading indicators are often used to gauge likely future economic developments, but their usefulness is by no means uncontroversial and remains the subject of ongoing research. For example, leading indexes apparently cause aggregate output in analyses of ex post historical data (Auerbach, 1982), but they appear much less useful in real-time forecasting, which is what’s relevant (Diebold and Rudebusch, 1991). 5. Spurious regression. Consider two variables y and x, both of which are highly serially correlated, as are most series in business, finance and economics. Suppose in addition that y and x are completely unrelated, but that we don’t know they’re unrelated, and we regress y on x using ordinary least squares. a. If the usual regression diagnostics (e.g., R2 , t-statistics, F -statistic) were reliable, we’d expect to see small values of all of them. Why? b. In fact the opposite occurs; we tend to see large R2 , t-, and F statistics, and a very low Durbin-Watson statistic. Why the low 11

The indexes build on very early work, such as the Harvard “Index of General Business Conditions.” For a fascinating discussion of the early work, see Hardy (1923), Chapter 7.


Durbin-Watson? Why, given the low Durbin-Watson, might you expect misleading R2 , t-, and F -statistics? c. This situation, in which highly persistent series that are in fact unrelated nevertheless appear highly related, is called spurious regression. Study of the phenomenon dates to the early twentieth century, and a key study by Granger and Newbold (1974) drove home the prevalence and potential severity of the problem. How might you insure yourself against the spurious regression problem? (Hint: Consider allowing for lagged dependent variables, or dynamics in the regression disturbances, as we’ve advocated repeatedly.) 6. Comparative forecasting performance of V ARs and univariate models. Using the housing starts and completions data on the book’s website, compare the forecasting performance of the VAR used in this chapter to that of the obvious competitor: univariate autoregressions. Use the same in-sample and out-of-sample periods as in the chapter. Why might the forecasting performance of the V AR and univariate methods differ? Why might you expect the V AR completions forecast to outperform the univariate autoregression, but the V AR starts forecast to be no better than the univariate autoregression? Do your results support your conjectures? 7. V ARs as Reduced Forms of Simultaneous Equations Models. V ARs look restrictive in that only lagged values appear on the right. That is, the LHS variables are not contemporaneously affected by other variables – instead they are contemporaneously affected only by shocks. That appearance is deceptive, however, as simultaneous equations systems have V AR reduced forms. Consider, for example, the simultaneous system (A0 + A1 L + ... + Ap Lp )yt = vt


v_t ∼ iid(0, Ω). Multiplying through by A_0^{−1} yields
(I + A_0^{−1}A_1 L + ... + A_0^{−1}A_p L^p) y_t = ε_t,  ε_t ∼ iid(0, A_0^{−1} Ω (A_0^{−1})′),
or
(I + Φ_1 L + ... + Φ_p L^p) y_t = ε_t,  ε_t ∼ iid(0, Σ),  Σ = A_0^{−1} Ω (A_0^{−1})′,

which is a standard VAR. The VAR structure, moreover, is needed for forecasting, as everything on the RHS is lagged by at least one period, making Wold's chain rule immediately applicable.
8. Transfer Function Models.
We saw that distributed lag regressions with lagged dependent variables are more general than distributed lag regressions with dynamic disturbances. Transfer function models are more general still, and include both as special cases.12 The basic idea is to exploit the power and parsimony of rational distributed lags in modeling both own-variable and cross-variable dynamics. Imagine beginning with a univariate ARMA model,
y_t = (C(L)/D(L)) ε_t,
which captures own-variable dynamics using a rational distributed lag. Now extend the model to capture cross-variable dynamics using a rational distributed lag of the other variable, which yields the general transfer function model,
y_t = (A(L)/B(L)) x_t + (C(L)/D(L)) ε_t.
12 Table 1 displays a variety of important forecasting models, all of which are special cases of the transfer function model.

Distributed lag regression with lagged dependent variables is a potentially restrictive special case, which emerges when C(L) = 1 and B(L) = D(L). (Verify this for yourself.) Distributed lag regression with ARMA disturbances is also a special case, which emerges when B(L) = 1. (Verify this too.) In practice, the important thing is to allow for own-variable dynamics somehow, in order to account for dynamics in y not explained by the RHS variables. Whether we do so by including lagged dependent variables, or by allowing for ARMA disturbances, or by estimating general transfer function models, can occasionally be important, but usually it's a comparatively minor issue.
9. Cholesky-Factor Identified VARs in Matrix Notation.
10. Inflation Forecasting via "Structural" Phillips-Curve Models vs. Time-Series Models.
The literature started with Atkeson and Ohanian ****. The basic result is that Phillips curve information doesn't improve on univariate time series, which is interesting. Also interesting is thinking about why. For example, the univariate time series used is often IMA(0,1,1) (i.e., exponential smoothing, or local level), which Hendry, Clements and others have argued is robust to shifts. Maybe that's why exponential smoothing is still so powerful after all these years.
11. Multivariate point forecast evaluation.
All univariate absolute standards continue to hold, appropriately interpreted.
– Zero-mean error vector.
– 1-step-ahead errors are vector white noise.


– h-step-ahead errors are at most vector MA(h − 1).
– h-step-ahead error covariance matrices are non-decreasing in h. That is, Σ_h − Σ_{h−1} is p.s.d. for all h > 1.
– The error vector is orthogonal to all available information.
Relative standards, however, need more thinking, as per Christoffersen and Diebold (1998) and Primiceri, Giannone and Lenza (2014). trace(MSE), i.e., e′Ie, is not necessarily adequate, and neither is e′De for diagonal D; rather, we generally want e′Σe, so as to reflect preferences regarding multivariate interactions.
12. Multivariate density forecast evaluation.
The principle that governs the univariate techniques in this paper extends to the multivariate case, as shown in Diebold, Hahn and Tay (1998). Suppose that the variable of interest y is now an (N × 1) vector, and that we have on hand m multivariate forecasts and their corresponding multivariate realizations. Further suppose that we are able to decompose each period's forecasts into their conditionals, i.e., for each period's forecasts we can write
p(y_{1t}, y_{2t}, ..., y_{Nt} | Φ_{t−1}) = p(y_{Nt} | y_{N−1,t}, ..., y_{1t}, Φ_{t−1}) ... p(y_{2t} | y_{1t}, Φ_{t−1}) p(y_{1t} | Φ_{t−1}),
where Φ_{t−1} now refers to the past history of (y_{1t}, y_{2t}, ..., y_{Nt}). Then for each period we can transform each element of the multivariate observation (y_{1t}, y_{2t}, ..., y_{Nt}) by its corresponding conditional distribution. This procedure will produce a set of N z series that will be iid U(0, 1) individually, and also when taken as a whole, if the multivariate density forecasts are correct. Note that we will have N! sets of z series, depending on how the joint density forecasts are decomposed, giving us a wealth of information with which to evaluate the forecasts. In addition, the univariate formula for the adjustment of forecasts, discussed above,


can be applied to each individual conditional, yielding
f(y_{1t}, y_{2t}, ..., y_{Nt} | Φ_{t−1}) = ∏_{i=1}^N [ p(y_{it} | y_{i−1,t}, ..., y_{1t}, Φ_{t−1}) q(P(y_{it} | y_{i−1,t}, ..., y_{1t}, Φ_{t−1})) ]
= p(y_{1t}, y_{2t}, ..., y_{Nt} | Φ_{t−1}) q(z_{1t}, z_{2t}, ..., z_{Nt} | Φ_{t−1}).

14.9 Notes

Some software, such as EViews, automatically accounts for parameter uncertainty when forming conditional regression forecast intervals by using variants of the techniques we introduced in Section ***. Similar but advanced techniques are sometimes used to produce unconditional forecast intervals for dynamic models, such as autoregressions (see Lütkepohl, 1991), but bootstrap simulation techniques are becoming increasingly popular (Efron and Tibshirani, 1993). Chatfield (1993) argues that innovation uncertainty and parameter estimation uncertainty are likely of minor importance compared to specification uncertainty. We rarely acknowledge specification uncertainty, because we don't know how to quantify "what we don't know we don't know." Quantifying it is a major challenge for future research, and useful recent work in that direction includes Chatfield (1995). The idea that regression models with serially correlated disturbances are more restrictive than other sorts of transfer function models has a long history in econometrics and engineering and is highlighted in a memorably-titled paper, "Serial Correlation as a Convenient Simplification, not a Nuisance," by Hendry and Mizon (1978). Engineers have scolded econometricians for not using more general transfer function models, as for example in Jenkins (1979). But the fact is, as we've seen repeatedly, that generality for generality's sake in business and economic forecasting is not necessarily helpful, and can be positively harmful. The shrinkage principle asserts that the imposition of


restrictions – even false restrictions – can be helpful in forecasting. Sims (1980) is an influential paper arguing the virtues of VARs. The idea of predictive causality and associated tests in VARs is due to Granger (1969) and Sims (1972), who build on earlier work by the mathematician Norbert Wiener. Lütkepohl (1991) is a good reference on VAR analysis and forecasting. Gershenfeld and Weigend (1993) provide a perspective on time series forecasting from the computer-science/engineering/nonlinear/neural-net perspective, and Swanson and White (1995) compare and contrast a variety of linear and nonlinear forecasting methods.


Some slides that might be usefully incorporated:

Univariate AR(p):
y_t = φ_1 y_{t−1} + ... + φ_p y_{t−p} + ε_t
y_t = φ_1 L y_t + ... + φ_p L^p y_t + ε_t
(I − φ_1 L − ... − φ_p L^p) y_t = ε_t
φ(L) y_t = ε_t,  ε_t ∼ iid(0, σ²)
But what if we have more than one "y" variable? Cross-variable interactions? Leads? Lags? Causality?

N-Variable VAR(p):
y_{1t} = φ^1_{11} y_{1,t−1} + ... + φ^1_{1N} y_{N,t−1} + ... + φ^p_{11} y_{1,t−p} + ... + φ^p_{1N} y_{N,t−p} + ε_{1t}
⋮
y_{Nt} = φ^1_{N1} y_{1,t−1} + ... + φ^1_{NN} y_{N,t−1} + ... + φ^p_{N1} y_{1,t−p} + ... + φ^p_{NN} y_{N,t−p} + ε_{Nt}

In matrix notation,
[ y_{1t} ]   [ φ^1_{11} ... φ^1_{1N} ] [ y_{1,t−1} ]           [ φ^p_{11} ... φ^p_{1N} ] [ y_{1,t−p} ]   [ ε_{1t} ]
[    ⋮   ] = [     ⋮     ⋱     ⋮    ] [     ⋮     ] + ... +  [     ⋮     ⋱     ⋮    ] [     ⋮     ] + [    ⋮   ]
[ y_{Nt} ]   [ φ^1_{N1} ... φ^1_{NN} ] [ y_{N,t−1} ]           [ φ^p_{N1} ... φ^p_{NN} ] [ y_{N,t−p} ]   [ ε_{Nt} ]
so that
y_t = Φ_1 y_{t−1} + ... + Φ_p y_{t−p} + ε_t
y_t = Φ_1 L y_t + ... + Φ_p L^p y_t + ε_t
(I − Φ_1 L − ... − Φ_p L^p) y_t = ε_t


Φ(L) y_t = ε_t,  ε_t ∼ iid(0, Σ)

Estimation and Selection
Estimation: equation-by-equation OLS
Selection: AIC, SIC
AIC = −2 lnL / T + 2K / T
SIC = −2 lnL / T + K lnT / T

The Cross-Correlation Function
Recall the univariate autocorrelation function: ρ_y(τ) = corr(y_t, y_{t−τ}).
In multivariate environments we also have the cross-correlation function: ρ_{yx}(τ) = corr(y_t, x_{t−τ}).

Granger-Sims Causality
Bivariate case: y_i Granger-Sims causes y_j if y_i has predictive content for y_j, over and above the past history of y_j.
Testing: Are lags of y_i significant in the y_j equation?
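The estimation and selection slide above can be made concrete with a small numpy sketch: equation-by-equation OLS for a VAR(p) with intercepts, plus Gaussian AIC and SIC in the per-observation form shown above (additive constants dropped). It is an illustrative sketch on simulated data, not the book's implementation.

import numpy as np

def fit_var_ols(Y, p):
    """Equation-by-equation OLS for a VAR(p) with intercepts on a (T, N) array Y."""
    T, N = Y.shape
    # Regressor matrix: constant plus p lags of all variables.
    X = np.hstack([np.ones((T - p, 1))] +
                  [Y[p - j - 1:T - j - 1] for j in range(p)])
    Yt = Y[p:]
    B, *_ = np.linalg.lstsq(X, Yt, rcond=None)   # (1 + N*p, N) coefficient matrix
    resid = Yt - X @ B
    Sigma = resid.T @ resid / (T - p)            # innovation covariance estimate
    return B, Sigma

def aic_sic(Sigma, K, T):
    """Gaussian AIC = -2 lnL/T + 2K/T and SIC = -2 lnL/T + K lnT/T, constants dropped."""
    neg2lnL_over_T = np.log(np.linalg.det(Sigma))
    return neg2lnL_over_T + 2 * K / T, neg2lnL_over_T + K * np.log(T) / T

# Toy usage on simulated data; in practice Y would hold the actual series.
rng = np.random.default_rng(1)
Y = rng.standard_normal((300, 2)).cumsum(axis=0) * 0.01 + rng.standard_normal((300, 2))
for p in (1, 2, 3, 4):
    B, Sigma = fit_var_ols(Y, p)
    K = B.size                                   # number of estimated coefficients
    print(p, [round(c, 3) for c in aic_sic(Sigma, K, Y.shape[0] - p)])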


Impulse-Response Functions in the AR(1) Case
y_t = φ y_{t−1} + ε_t,  ε_t ∼ iid(0, σ²)
⟹ y_t = B(L) ε_t = ε_t + b_1 ε_{t−1} + b_2 ε_{t−2} + ... = ε_t + φ ε_{t−1} + φ² ε_{t−2} + ...
IRF is {1, φ, φ², ...}: "dynamic response to a unit shock in ε"
Alternatively write ε_t = σ v_t, v_t ∼ iid(0, 1)
⟹ y_t = σ v_t + (φσ) v_{t−1} + (φ²σ) v_{t−2} + ...
IRF is {σ, φσ, φ²σ, ...}: "dynamic response to a one-σ shock in ε"

Impulse-Response Functions in the VAR(p) Case
y_t = Φ y_{t−1} + ε_t,  ε_t ∼ iid(0, Σ)
⟹ y_t = B(L) ε_t = ε_t + B_1 ε_{t−1} + B_2 ε_{t−2} + ... = ε_t + Φ ε_{t−1} + Φ² ε_{t−2} + ...
But we need orthogonal shocks. Why?
So write ε_t = P v_t, v_t ∼ iid(0, I), where P is the Cholesky factor of Σ
⟹ y_t = P v_t + (ΦP) v_{t−1} + (Φ²P) v_{t−2} + ...
The ij'th IRF is the sequence of ij'th elements of {P, ΦP, Φ²P, ...}: "dynamic response of y_i to a one-σ shock in ε_j"


Part IV Appendices


Appendix A
Probability and Statistics Review

Here we review a few aspects of probability and statistics that we will rely upon at various times.

A.1 Populations: Random Variables, Distributions and Moments

A.1.1 Univariate

Consider an experiment with a set O of possible outcomes. A random variable Y is simply a mapping from O to the real numbers. For example, the experiment might be flipping a coin twice, in which case O = {(Heads, Heads), (Tails, Tails), (Heads, Tails), (Tails, Heads)}. We might define a random variable Y to be the number of heads observed in the two flips, in which case Y could assume three values, y = 0, y = 1 or y = 2.1
Discrete random variables, that is, random variables with discrete probability distributions, can assume only a countable number of values y_i, i = 1, 2, ..., each with positive probability p_i such that Σ_i p_i = 1. The probability distribution f(y) assigns a probability p_i to each such value y_i. In the example at hand, Y is a discrete random variable, and f(y) = 0.25 for y = 0, f(y) = 0.50 for y = 1, f(y) = 0.25 for y = 2, and f(y) = 0 otherwise.
In contrast, continuous random variables can assume a continuous range of values, and the probability density function f(y) is a nonnegative continuous function such that the area under f(y) between any points a and b is the probability that Y assumes a value between a and b.2 In what follows we will simply speak of a "distribution," f(y). It will be clear from context whether we are in fact speaking of a discrete random variable with probability distribution f(y) or a continuous random variable with probability density f(y).
Moments provide important summaries of various aspects of distributions. Roughly speaking, moments are simply expectations of powers of random variables, and expectations of different powers convey different sorts of information. You are already familiar with two crucially important moments, the mean and variance. In what follows we'll consider the first four moments: mean, variance, skewness and kurtosis.3
The mean, or expected value, of a discrete random variable is a probability-weighted average of the values it can assume,4
E(y) = Σ_i p_i y_i.
Often we use the Greek letter µ to denote the mean, which measures the location, or central tendency, of y.
The variance of y is its expected squared deviation from its mean,
var(y) = E(y − µ)².
We use σ² to denote the variance, which measures the dispersion, or scale, of y around its mean.
1 Note that, in principle, we use capitals for random variables (Y) and small letters for their realizations (y). We will often neglect this formalism, however, as the meaning will be clear from context.
2 In addition, the total area under f(y) must be 1.
3 In principle, we could of course consider moments beyond the fourth, but in practice only the first four are typically examined.
4 A similar formula holds in the continuous case.


Often we assess dispersion using the square root of the variance, which is called the standard deviation,
σ = std(y) = √(E(y − µ)²).
The standard deviation is more easily interpreted than the variance, because it has the same units of measurement as y. That is, if y is measured in dollars (say), then so too is std(y). var(y), in contrast, would be measured in rather hard-to-grasp units of "dollars squared."
The skewness of y is its expected cubed deviation from its mean (scaled by σ³ for technical reasons),
S = E(y − µ)³ / σ³.
Skewness measures the amount of asymmetry in a distribution. The larger the absolute size of the skewness, the more asymmetric is the distribution. A large positive value indicates a long right tail, and a large negative value indicates a long left tail. A zero value indicates symmetry around the mean.
The kurtosis of y is the expected fourth power of the deviation of y from its mean (scaled by σ⁴, again for technical reasons),
K = E(y − µ)⁴ / σ⁴.
Kurtosis measures the thickness of the tails of a distribution. A kurtosis above three indicates "fat tails" or leptokurtosis, relative to the normal, or Gaussian, distribution that you studied earlier. Hence a kurtosis above three indicates that extreme events ("tail events") are more likely to occur than would be the case under normality.

A.1.2 Multivariate

Suppose now that instead of a single random variable Y , we have two random variables Y and X.5 We can examine the distributions of Y or X in isolation, which are called marginal distributions. This is effectively what we’ve already studied. But now there’s more: Y and X may be related and therefore move together in various ways, characterization of which requires a joint distribution. In the discrete case the joint distribution f (y, x) gives the probability associated with each possible pair of y and x values, and in the continuous case the joint density f (y, x) is such that the area in any region under it gives the probability of (y, x) falling in that region. We can examine the moments of y or x in isolation, such as mean, variance, skewness and kurtosis. But again, now there’s more: to help assess the dependence between y and x, we often examine a key moment of relevance in multivariate environments, the covariance. The covariance between y and x is simply the expected product of the deviations of y and x from their respective means, cov(y, x) = E[(yt − µy )(xt − µx )]. A positive covariance means that y and x are positively related; that is, when y is above its mean x tends to be above its mean, and when y is below its mean x tends to be below its mean. Conversely, a negative covariance means that y and x are inversely related; that is, when y is below its mean x tends to be above its mean, and vice versa. The covariance can take any value in the real numbers. Frequently we convert the covariance to a correlation by standardizing 5

We could of course consider more than two variables, but for pedagogical reasons we presently limit ourselves to two.


by the product of σ_y and σ_x,
corr(y, x) = cov(y, x) / (σ_y σ_x).

The correlation takes values in [-1, 1]. Note that covariance depends on units of measurement (e.g., dollars, cents, billions of dollars), but correlation does not. Hence correlation is more immediately interpretable, which is the reason for its popularity. Note also that covariance and correlation measure only linear dependence; in particular, a zero covariance or correlation between y and x does not necessarily imply that y and x are independent. That is, they may be non-linearly related. If, however, two random variables are jointly normally distributed with zero covariance, then they are independent. Our multivariate discussion has focused on the joint distribution f (y, x). In various chapters we will also make heavy use of the conditional distribution f (y|x), that is, the distribution of the random variable Y conditional upon X = x. Conditional moments are similarly important. In particular, the conditional mean and conditional variance play key roles in econometrics, in which attention often centers on the mean or variance of a series conditional upon the past.

A.2 Samples: Sample Moments

A.2.1 Univariate

Thus far we've reviewed aspects of known distributions of random variables, in population. Often, however, we have a sample of data drawn from an unknown population distribution f,
{y_i}_{i=1}^N ∼ f(y),


and we want to learn from the sample about various aspects of f, such as its moments. To do so we use various estimators.6 We can obtain estimators by replacing population expectations with sample averages, because the arithmetic average is the sample analog of the population expectation. Such "analog estimators" turn out to have good properties quite generally.
The sample mean is simply the arithmetic average,
ȳ = (1/N) Σ_{i=1}^N y_i.
It provides an empirical measure of the location of y.
The sample variance is the average squared deviation from the sample mean,
σ̂² = Σ_{i=1}^N (y_i − ȳ)² / N.
It provides an empirical measure of the dispersion of y around its mean. We commonly use a slightly different version of σ̂², which corrects for the one degree of freedom used in the estimation of ȳ, thereby producing an unbiased estimator of σ²,
s² = Σ_{i=1}^N (y_i − ȳ)² / (N − 1).
Similarly, the sample standard deviation is defined either as
σ̂ = √σ̂² = √( Σ_{i=1}^N (y_i − ȳ)² / N )
or
s = √s² = √( Σ_{i=1}^N (y_i − ȳ)² / (N − 1) ).
It provides an empirical measure of dispersion in the same units as y.
6 An estimator is an example of a statistic, or sample statistic, which is simply a function of the sample observations.


The sample skewness is
Ŝ = (1/N) Σ_{i=1}^N (y_i − ȳ)³ / σ̂³.
It provides an empirical measure of the amount of asymmetry in the distribution of y.
The sample kurtosis is
K̂ = (1/N) Σ_{i=1}^N (y_i − ȳ)⁴ / σ̂⁴.
It provides an empirical measure of the fatness of the tails of the distribution of y relative to a normal distribution.
Many of the most famous and important statistical sampling distributions arise in the context of sample moments, and the normal distribution is the father of them all. In particular, the celebrated central limit theorem establishes that under quite general conditions the sample mean ȳ will have a normal distribution as the sample size gets large. The χ² distribution arises from squared normal random variables, the t distribution arises from ratios of normal and χ² variables, and the F distribution arises from ratios of χ² variables. Because of the fundamental nature of the normal distribution as established by the central limit theorem, it has been studied intensively, a great deal is known about it, and a variety of powerful tools have been developed for use in conjunction with it.
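A minimal Python sketch of these sample moments, using the N-denominator conventions given above and simulated data standing in for an actual sample:

import numpy as np

def sample_moments(y):
    """Sample mean, variance (N denominator), skewness and kurtosis as defined above."""
    y = np.asarray(y, dtype=float)
    N = len(y)
    ybar = y.mean()
    dev = y - ybar
    var = np.sum(dev ** 2) / N
    sd = np.sqrt(var)
    skew = np.sum(dev ** 3) / N / sd ** 3
    kurt = np.sum(dev ** 4) / N / sd ** 4
    return ybar, var, skew, kurt

rng = np.random.default_rng(0)
y = rng.normal(loc=0.0, scale=1.0, size=1000)
print([round(m, 3) for m in sample_moments(y)])   # kurtosis should be near 3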

A.2.2 Multivariate

We also have sample versions of moments of multivariate distributions. In particular, the sample covariance is
cov̂(y, x) = (1/N) Σ_{i=1}^N [(y_i − ȳ)(x_i − x̄)],
and the sample correlation is
corr̂(y, x) = cov̂(y, x) / (σ̂_y σ̂_x).

A.3 Finite-Sample and Asymptotic Sampling Distributions of the Sample Mean

Here we refresh your memory on the sampling distribution of the most important sample moment, the sample mean.

A.3.1 Exact Finite-Sample Results

In your earlier studies you learned about statistical inference, such as how to form confidence intervals for the population mean based on the sample mean, how to test hypotheses about the population mean, and so on. Here we partially refresh your memory.
Consider the benchmark case of Gaussian simple random sampling,
y_i ∼ iid N(µ, σ²), i = 1, ..., N,
which corresponds to a special case of what we will later call the "full ideal conditions" for regression modeling. The sample mean ȳ is the natural estimator of the population mean µ. In this case, as you learned earlier, ȳ is unbiased, consistent, normally distributed with variance σ²/N, and indeed the minimum variance unbiased (MVUE) estimator. We write
ȳ ∼ N(µ, σ²/N),
or equivalently
√N (ȳ − µ) ∼ N(0, σ²).
We construct exact finite-sample confidence intervals for µ as
µ ∈ [ ȳ ± t_{1−α/2}(N − 1) s/√N ]  w.p. 1 − α,
where t_{1−α/2}(N − 1) is the 1 − α/2 percentile of a t distribution with N − 1 degrees of freedom. Similarly, we construct exact finite-sample (likelihood ratio) hypothesis tests of H_0: µ = µ_0 against the two-sided alternative H_1: µ ≠ µ_0 using
(ȳ − µ_0) / (s/√N) ∼ t(N − 1).
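A small sketch of the exact interval and test, assuming numpy and scipy are available and using a simulated Gaussian sample; it is illustrative rather than tied to any dataset in the book.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=3.0, size=25)    # Gaussian simple random sample
N = len(y)
ybar = y.mean()
s = y.std(ddof=1)                              # uses the N-1 denominator

alpha = 0.05
tcrit = stats.t.ppf(1 - alpha / 2, df=N - 1)   # t_{1-alpha/2}(N-1) percentile
ci = (ybar - tcrit * s / np.sqrt(N), ybar + tcrit * s / np.sqrt(N))

mu0 = 0.0
tstat = (ybar - mu0) / (s / np.sqrt(N))        # exact t test of H_0: mu = mu0
pval = 2 * (1 - stats.t.cdf(abs(tstat), df=N - 1))
print(ci, tstat, pval)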

A.3.2 Approximate Asymptotic Results (Under Weaker Assumptions)

Much of statistical inference is linked to large-sample considerations, such as the law of large numbers and the central limit theorem, which you also studied earlier. Here we again refresh your memory.
Consider again a simple random sample, but without the normality assumption,
y_i ∼ iid(µ, σ²), i = 1, ..., N.
Despite our dropping the normality assumption we still have that ȳ is unbiased, consistent, asymptotically normally distributed with variance σ²/N, and best linear unbiased (BLUE). We write
ȳ ∼ᵃ N(µ, σ²/N).
More precisely, as N → ∞,
√N (ȳ − µ) →_d N(0, σ²).
This result forms the basis for asymptotic inference. It is a Gaussian central limit theorem, and it also has a law of large numbers (ȳ →_p µ) imbedded within it.
We construct asymptotically-valid confidence intervals for µ as
µ ∈ [ ȳ ± z_{1−α/2} σ̂/√N ]  w.p. 1 − α,
where z_{1−α/2} is the 1 − α/2 percentile of a N(0, 1) distribution. Similarly, we construct asymptotically-valid hypothesis tests of H_0: µ = µ_0 against the two-sided alternative H_1: µ ≠ µ_0 using
(ȳ − µ_0) / (σ̂/√N) ∼ N(0, 1).

A.4 Exercises, Problems and Complements

1. (Interpreting distributions and densities) The Sharpe Pencil Company has a strict quality control monitoring program. As part of that program, it has determined that the distribution of the amount of graphite in each batch of one hundred pencil leads produced is continuous and uniform between one and two grams. That is, f (y) = 1 for y in [1, 2], and zero otherwise, where y is the graphite content per batch of one hundred leads. a. Is y a discrete or continuous random variable? b. Is f (y) a probability distribution or a density? c. What is the probability that y is between 1 and 2? Between 1 and 1.3? Exactly equal to 1.67? d. For high-quality pencils, the desired graphite content per batch is 1.8 grams, with low variation across batches. With that in mind, discuss the nature of the density f (y).

A.4. EXERCISES, PROBLEMS AND COMPLEMENTS

321

2. (Covariance and correlation) Suppose that the annual revenues of world’s two top oil producers have a covariance of 1,735,492. a. Based on the covariance, the claim is made that the revenues are “very strongly positively related.” Evaluate the claim. b. Suppose instead that, again based on the covariance, the claim is made that the revenues are “positively related.” Evaluate the claim. c. Suppose you learn that the revenues have a correlation of 0.93. In light of that new information, re-evaluate the claims in parts a and b above. 3. (Simulation) You will often need to simulate data from various models. The simplest model is the iidN (µ, σ 2 ) (Gaussian simple random sampling) model. a. Using a random number generator, simulate a sample of size 30 for y, where y ∼ iidN (0, 1). b. What is the sample mean? Sample standard deviation? Sample skewness? Sample kurtosis? Discuss. c. Form an appropriate 95 percent confidence interval for E(y). d. Perform a t test of the hypothesis that E(y) = 0. e. Perform a t test of the hypothesis that E(y) = 1. 4. (Sample moments of the CPS wage data) Use the 1995 CPS wage dataset. a. Calculate the sample mean wage and test the hypothesis that it equals $9/hour. b. Calculate sample skewness.

322

APPENDIX A. PROBABILITY AND STATISTICS REVIEW

c. Calculate and discuss the sample correlation between wage and years of education.

A.5 Notes

Numerous good introductory probability and statistics books exist. Wonnacott and Wonnacott (1990) remains a time-honored classic, which you may wish to consult to refresh your memory on statistical distributions, estimation and hypothesis testing. Anderson et al. (2008) is a well-written recent text.

Appendix B
Construction of the Wage Datasets

We construct our datasets by randomly sampling the much larger Current Population Survey (CPS) datasets.1 We extract the data from the March CPS for 1995, 2004 and 2012, respectively, using the National Bureau of Economic Research (NBER) front end (http://www.nber.org/data/cps.html) and NBER SAS, SPSS, and Stata data definition file statements (http://www.nber.org/data/cps_progs.html). We use both personal and family records. Here we focus our discussion on 1995.
There are many CPS observations for which earnings data are completely missing. We drop those observations, as well as those that are not in the universe for the eligible CPS earning items (ERNEL=0), leaving 14363 observations. From those, we draw a random unweighted subsample with ten percent selection probability. This results in 1348 observations.
We use seven variables. From the CPS we obtain AGE (age), FEMALE (1 if female, 0 otherwise), NONWHITE (1 if nonwhite, 0 otherwise), and UNION (1 if union member, 0 otherwise). We also create EDUC (years of schooling) based on CPS variable PEEDUCA (educational attainment). Because the CPS does not ask about years of experience, we create EXPER

See http://aspe.hhs.gov/hsp/06/catalog-ai-an-na/cps.htm for a brief and clear introduction to the CPS datasets.


(potential working experience) as AGE minus EDUC minus 6. We construct the variable WAGE as follows. WAGE equals PRERNHLY (earnings per hour) in dollars for those paid hourly. For those not paid hourly (PRERNHLY=0), we use PRERNWA (gross earnings last week) divided by PEHRUSL1 (usual working hours per week). That sometimes produces missing values, which we treat as missing earnings and drop from the sample. The final dataset contains 1323 observations with AGE, FEMALE, NONWHITE, UNION, EDUC, EXPER and WAGE.
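A hedged pandas sketch of the WAGE construction just described, assuming a raw extract with the CPS column names used above (PRERNHLY, PRERNWA, PEHRUSL1); the file name is hypothetical and the code mirrors the rule in the text rather than reproducing the authors' exact programs.

import numpy as np
import pandas as pd

# Raw 1995 extract with columns PRERNHLY, PRERNWA, PEHRUSL1 (hypothetical file name).
cps = pd.read_csv("cps_march_1995_extract.csv")

paid_hourly = cps["PRERNHLY"] > 0
wage = np.where(
    paid_hourly,
    cps["PRERNHLY"],                                        # hourly workers: reported hourly earnings
    cps["PRERNWA"] / cps["PEHRUSL1"].replace(0, np.nan),    # others: weekly earnings / usual hours
)
cps["WAGE"] = wage
cps = cps.dropna(subset=["WAGE"])                           # drop observations with missing earnings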

CPS Personal Data Selection Criteria

Variable              Name(s)                        Selection criteria
Age                   PEAGE (95); A AGE (04, 12)     18-65
Labor force status    A LFSR                         1: working (we exclude armed forces)
Class of worker       A CLSWKR                       1, 2, 3, 4 (we exclude self-employed and pro bono)


Variable List

CPS variables (1995 names, with 2004/2012 names in parentheses) and descriptions:
PEAGE (A AGE): age
A LFSR: labor force status
A CLSWKR: class of worker
PEEDUCA (A HGA): educational attainment
PERACE (PRDTRACE): race
PESEX (A SEX): sex
PEERNLAB (A UNMEM): union
PRERNWA (A GRSWK): usual earnings per week
PEHRUSL1 (A USLHRS): usual hours worked weekly
PEHRACTT (A HRS1): hours worked last week
PRERNHLY (A HRSPAY): earnings per hour

Constructed variables:
AGE: equals PEAGE
FEMALE: equals 1 if PESEX=2, 0 otherwise
NONWHITE: equals 0 if PERACE=1, 1 otherwise
UNION: equals 1 if PEERNLAB=1, 0 otherwise
EDUC: see the table below
EXPER: equals AGE − EDUC − 6
WAGE: equals PRERNHLY, or PRERNWA/PEHRUSL1

NOTE: Variable names in parentheses are for 2004 and 2012.

Definition of EDUC

PEEDUCA (A HGA)   Description                                                        EDUC
31                Less than first grade                                               0
32                First, second, third or fourth grade                                1
33                Fifth or sixth grade                                                5
34                Seventh or eighth grade                                             7
35                Ninth grade                                                         9
36                Tenth grade                                                        10
37                Eleventh grade                                                     11
38                Twelfth grade, no diploma                                          12
39                High school graduate                                               12
40                Some college but no degree                                         12
41                Associate degree, occupational/vocational                          14
42                Associate degree, academic program                                 14
43                Bachelor's degree (B.A., A.B., B.S.)                               16
44                Master's degree (M.A., M.S., M.Eng., M.Ed., M.S.W., M.B.A.)        18
45                Professional school degree (M.D., D.D.S., D.V.M., L.L.B., J.D.)   20
46                Doctorate degree (Ph.D., Ed.D.)                                    20


Appendix C
Some Popular Books Worth Encountering

I have cited many of these books elsewhere, typically in various end-of-chapter complements. Here I list them collectively.

Lewis (2003) [Michael Lewis, Moneyball]. "Appearances may lie, but the numbers don't, so pay attention to the numbers."

Gladwell (2000) [Malcolm Gladwell, The Tipping Point]. "Nonlinear phenomena are everywhere." Gladwell pieces together an answer to the puzzling question of why certain things "take off" whereas others languish (products, fashions, epidemics, etc.). More generally, he provides deep insights into nonlinear environments, in which small changes in inputs can lead to small changes in outputs under some conditions, and to huge changes in outputs under other conditions.

Taleb (2007) [Nassim Nicholas Taleb, The Black Swan]. "Warnings, and more warnings, and still more warnings, about non-normality and much else." See Chapter 5 EPC 1.

Angrist and Pischke (2009) [Joshua Angrist and Jorn-Steffen Pischke, Mostly Harmless Econometrics]. "Natural and quasi-natural experiments suggesting instruments." This is a fun and insightful treatment of instrumental-variables and related


methods. Just don’t be fooled by the book’s attempted landgrab, as discussed in a 2015 No Hesitations post. Silver (2012) [Nate Silver, The Signal and the Noise]. “Pitfalls and opportunities in predictive modeling.”

Bibliography

Anderson, D.R., D.J. Sweeney, and T.A. Williams (2008), Statistics for Business and Economics, South-Western.
Angrist, J.D. and J.-S. Pischke (2009), Mostly Harmless Econometrics, Princeton University Press.
Gladwell, M. (2000), The Tipping Point, Little, Brown and Company.
Harvey, A.C. (1991), Forecasting, Structural Time Series Models and the Kalman Filter, Cambridge University Press.
Lewis, M. (2003), Moneyball, Norton.
Nerlove, M., D.M. Grether, and J.L. Carvalho (1979), Analysis of Economic Time Series: A Synthesis, Second Edition. New York: Academic Press.
Silver, N. (2012), The Signal and the Noise, Penguin Press.
Taleb, N.N. (2007), The Black Swan, Random House.
Tufte, E.R. (1983), The Visual Display of Quantitative Information, Cheshire: Graphics Press.
Wonnacott, T.H. and R.J. Wonnacott (1990), Introductory Statistics, Fifth Edition. New York: John Wiley and Sons.


