Lutz Kilian
- 1 April 2003
- WORKING PAPER SERIES - No. 226
- Abstract
- In deciding the monetary policy stance, central bankers need to evaluate carefully the risks the current economic situation poses to price stability. We propose to regard the central banker as a risk manager who aims to contain inflation within pre-specified bounds. We develop formal tools of risk management that may be used to quantify and forecast the risks of failing to attain that objective. We illustrate the use of these risk measures in practice. First, we show how to construct genuine real-time forecasts of year-on-year risks that may be used in policy-making. We demonstrate the usefulness of these risk forecasts in understanding the Fed's decisions to tighten monetary policy in 1984, 1988, and 1994. Second, we forecast the risks of worldwide deflation for horizons of up to two years. Although fears of worldwide deflation have recently increased, we find that, as of September 2002, with the exception of Japan there is no evidence of substantial deflation risks. We also put the estimates of deflation risk for the United States, Germany, and Japan into historical perspective. We find that only for Japan is there evidence of deflation risks that are unusually high by historical standards. (A stylized numerical sketch of such a risk forecast follows the JEL codes below.)
- JEL Code
- E31 : Macroeconomics and Monetary Economics→Prices, Business Fluctuations, and Cycles→Price Level, Inflation, Deflation
E37 : Macroeconomics and Monetary Economics→Prices, Business Fluctuations, and Cycles→Forecasting and Simulation: Models and Applications
E52 : Macroeconomics and Monetary Economics→Monetary Policy, Central Banking, and the Supply of Money and Credit→Monetary Policy
E58 : Macroeconomics and Monetary Economics→Monetary Policy, Central Banking, and the Supply of Money and Credit→Central Banks and Their Policies
C22 : Mathematical and Quantitative Methods→Single Equation Models, Single Variables→Time-Series Models, Dynamic Quantile Regressions, Dynamic Treatment Effect Models • Diffusion Processes
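The following is a minimal sketch, in the spirit of the abstract above but not the paper's actual procedure, of how one might quantify the risk that year-on-year inflation breaches pre-specified bounds: an AR(1) model is fitted to a synthetic inflation series, and its bootstrapped forecast density is used to estimate the probability of breaching assumed bounds of 0% and 3% at a 12-month horizon. The data, the AR(1) specification, the bounds, and the horizon are all illustrative assumptions.

```python
# Illustrative sketch (not the paper's method): quantify the risk that
# year-on-year inflation falls outside pre-specified bounds by simulating
# the forecast density of a simple AR(1) fitted to synthetic inflation data.
import numpy as np

rng = np.random.default_rng(0)

# --- synthetic monthly year-on-year inflation series (assumption) ---
T = 240
true_c, true_phi, true_sigma = 0.2, 0.9, 0.3
pi = np.empty(T)
pi[0] = 2.0
for t in range(1, T):
    pi[t] = true_c + true_phi * pi[t - 1] + true_sigma * rng.standard_normal()

# --- fit AR(1) by OLS ---
y, x = pi[1:], pi[:-1]
X = np.column_stack([np.ones_like(x), x])
c_hat, phi_hat = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ np.array([c_hat, phi_hat])

# --- Monte Carlo forecast density h steps ahead, bootstrapping the residuals ---
h, n_sim = 12, 10_000
paths = np.empty((n_sim, h))
last = pi[-1]
for s in range(n_sim):
    level = last
    for j in range(h):
        level = c_hat + phi_hat * level + rng.choice(resid)
        paths[s, j] = level

# --- risk of breaching the assumed bounds [0%, 3%] at the 12-month horizon ---
lower, upper = 0.0, 3.0
deflation_risk = np.mean(paths[:, -1] < lower)
excess_inflation_risk = np.mean(paths[:, -1] > upper)
print(f"P(inflation < {lower}% in 12 months) = {deflation_risk:.3f}")
print(f"P(inflation > {upper}% in 12 months) = {excess_inflation_risk:.3f}")
```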
- 1 February 2003
- WORKING PAPER SERIES - No. 214
- Abstract
- It is standard in applied work to select forecasting models by ranking candidate models by their prediction mean square error (PMSE) in simulated out-of-sample (SOOS) forecasts. Alternatively, forecast models may be selected using information criteria (IC). We compare the asymptotic and finite-sample properties of these methods in terms of their ability to minimize the true out-of-sample PMSE, allowing for possible misspecification of the forecast models under consideration. We first study a covariance stationary environment. We show that under suitable conditions the IC method will be consistent for the best approximating models among the candidate models. In contrast, under standard assumptions the SOOS method will select overparameterized models with positive probability, resulting in excessive finite-sample PMSEs. We also show that in the presence of unmodelled structural change both methods will be inadmissible in the sense that they may select a model with strictly higher PMSE than the best approximating models among the candidate models. (A small numerical comparison of the two selection methods follows the JEL codes below.)
- JEL Code
- C22 : Mathematical and Quantitative Methods→Single Equation Models, Single Variables→Time-Series Models, Dynamic Quantile Regressions, Dynamic Treatment Effect Models • Diffusion Processes
C52 : Mathematical and Quantitative Methods→Econometric Modeling→Model Evaluation, Validation, and Selection
C53 : Mathematical and Quantitative Methods→Econometric Modeling→Forecasting and Prediction Methods, Simulation Methods
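Below is a small, self-contained comparison of the two selection methods discussed in the abstract: BIC (an information criterion) versus simulated out-of-sample PMSE based on recursive one-step-ahead forecasts, applied to candidate AR(p) models fitted to synthetic AR(2) data. The data-generating process, the candidate set, and the 75/25 estimation/evaluation split are assumptions made purely for illustration.

```python
# Illustrative sketch: select an AR order either by BIC or by simulated
# out-of-sample prediction mean square error (SOOS PMSE).
import numpy as np

rng = np.random.default_rng(1)

# synthetic AR(2) data-generating process (assumption)
T = 400
y = np.zeros(T)
for t in range(2, T):
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + rng.standard_normal()

def fit_ar(y, p):
    """OLS fit of an AR(p); returns coefficients and residuals."""
    Y = y[p:]
    X = np.column_stack([np.ones(len(Y))] + [y[p - i: len(y) - i] for i in range(1, p + 1)])
    beta = np.linalg.lstsq(X, Y, rcond=None)[0]
    return beta, Y - X @ beta

max_p = 6
bic, soos_pmse = {}, {}
split = int(0.75 * T)          # estimation/evaluation split for the SOOS forecasts
for p in range(1, max_p + 1):
    # BIC on the full sample
    _, e = fit_ar(y, p)
    n = len(e)
    bic[p] = n * np.log(e @ e / n) + (p + 1) * np.log(n)
    # recursive one-step-ahead SOOS forecasts
    errs = []
    for t in range(split, T):
        beta, _ = fit_ar(y[:t], p)
        x_t = np.concatenate(([1.0], y[t - 1: t - p - 1: -1]))
        errs.append(y[t] - x_t @ beta)
    soos_pmse[p] = np.mean(np.square(errs))

print("order picked by BIC      :", min(bic, key=bic.get))
print("order picked by SOOS PMSE:", min(soos_pmse, key=soos_pmse.get))
```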
- 1 November 2002
- WORKING PAPER SERIES - No. 196
- Abstract
- Conditional heteroskedasticity is an important feature of many macroeconomic and financial time series. Standard residual-based bootstrap procedures for dynamic regression models treat the regression error as i.i.d. These procedures are invalid in the presence of conditional heteroskedasticity. We establish the asymptotic validity of three easy-to-implement alternative bootstrap proposals for stationary autoregressive processes with m.d.s. errors subject to possible conditional heteroskedasticity of unknown form. These proposals are the fixed-design wild bootstrap, the recursive-design wild bootstrap and the pairwise bootstrap. In a simulation study all three procedures tend to be more accurate in small samples than the conventional large-sample approximation based on robust standard errors. In contrast, standard residual-based bootstrap methods for models with i.i.d. errors may be very inaccurate if the i.i.d. assumption is violated. We conclude that in many empirical applications the proposed robust bootstrap procedures should routinely replace conventional bootstrap procedures based on the i.i.d. error assumption. (A minimal sketch of the recursive-design wild bootstrap follows the JEL codes below.)
- JEL Code
- C15 : Mathematical and Quantitative Methods→Econometric and Statistical Methods and Methodology: General→Statistical Simulation Methods: General
C22 : Mathematical and Quantitative Methods→Single Equation Models, Single Variables→Time-Series Models, Dynamic Quantile Regressions, Dynamic Treatment Effect Models • Diffusion Processes
C52 : Mathematical and Quantitative Methods→Econometric Modeling→Model Evaluation, Validation, and Selection
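The sketch below illustrates one of the three procedures named in the abstract, the recursive-design wild bootstrap, for the slope of an AR(1) with conditionally heteroskedastic errors. The ARCH(1) error process, the Rademacher multipliers, and the percentile interval are illustrative choices, not the paper's exact design.

```python
# Minimal sketch of a recursive-design wild bootstrap for an AR(1) slope.
import numpy as np

rng = np.random.default_rng(2)

# --- synthetic AR(1) with conditionally heteroskedastic (ARCH(1)) m.d.s. errors ---
T, phi = 500, 0.6
y = np.zeros(T)
eps_prev = 0.0
for t in range(1, T):
    sigma2 = 0.2 + 0.7 * eps_prev ** 2
    eps_prev = np.sqrt(sigma2) * rng.standard_normal()
    y[t] = phi * y[t - 1] + eps_prev

def ols_ar1(y):
    """OLS estimate of (intercept, slope) for an AR(1) and its residuals."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    beta = np.linalg.lstsq(X, y[1:], rcond=None)[0]
    return beta, y[1:] - X @ beta

beta_hat, resid = ols_ar1(y)

# --- recursive-design wild bootstrap: rebuild the sample recursively,
#     multiplying each residual by an independent Rademacher draw ---
B = 2000
phi_boot = np.empty(B)
for b in range(B):
    eta = rng.choice([-1.0, 1.0], size=len(resid))
    y_star = np.empty(T)
    y_star[0] = y[0]
    for t in range(1, T):
        y_star[t] = beta_hat[0] + beta_hat[1] * y_star[t - 1] + resid[t - 1] * eta[t - 1]
    phi_boot[b] = ols_ar1(y_star)[0][1]

lo, hi = np.percentile(phi_boot, [2.5, 97.5])
print(f"OLS slope {beta_hat[1]:.3f}; 95% wild-bootstrap percentile interval [{lo:.3f}, {hi:.3f}]")
```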
- 1 November 2002
- WORKING PAPER SERIES - No. 195
- Abstract
- It is widely known that significant in-sample evidence of predictability does not guarantee significant out-of-sample predictability. This is often interpreted as an indication that in-sample evidence is likely to be spurious and should be discounted. In this paper we question this conventional wisdom. Our analysis shows that neither data mining nor parameter instability is a plausible explanation of the observed tendency of in-sample tests to reject the no-predictability null more often than out-of-sample tests. We provide an alternative explanation based on the higher power of in-sample tests of predictability. We conclude that results of in-sample tests of predictability will typically be more credible than results of out-of-sample tests. (A stylized comparison of the two kinds of tests follows the JEL codes below.)
- JEL Code
- C12 : Mathematical and Quantitative Methods→Econometric and Statistical Methods and Methodology: General→Hypothesis Testing: General
C22 : Mathematical and Quantitative Methods→Single Equation Models, Single Variables→Time-Series Models, Dynamic Quantile Regressions, Dynamic Treatment Effect Models • Diffusion Processes
C52 : Mathematical and Quantitative Methods→Econometric Modeling→Model Evaluation, Validation, and Selection
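The following sketch contrasts the two kinds of tests discussed in the abstract on synthetic data: an in-sample t-test on the slope of a predictive regression versus a simple out-of-sample comparison of recursive forecasts against a prevailing-mean benchmark. The data-generating process, the 50/50 sample split, and the Diebold-Mariano-style statistic are assumptions chosen only to make the comparison concrete.

```python
# Hedged sketch: in-sample predictability t-test vs. a simple out-of-sample
# forecast comparison against a prevailing-mean benchmark.
import numpy as np

rng = np.random.default_rng(3)

# synthetic predictive relation y_t = b * x_{t-1} + e_t with a small b (assumption)
T, b_true = 200, 0.15
x = rng.standard_normal(T)
y = b_true * np.concatenate(([0.0], x[:-1])) + rng.standard_normal(T)

# --- in-sample test: t-statistic on b in y_t = a + b x_{t-1} + e_t ---
Y, X = y[1:], np.column_stack([np.ones(T - 1), x[:-1]])
beta = np.linalg.lstsq(X, Y, rcond=None)[0]
e = Y - X @ beta
s2 = e @ e / (len(Y) - 2)
var_b = s2 * np.linalg.inv(X.T @ X)[1, 1]
print("in-sample t-stat on b        :", beta[1] / np.sqrt(var_b))

# --- out-of-sample test: recursive forecasts vs. prevailing-mean benchmark ---
split = T // 2
d = []   # loss differentials: benchmark squared error minus model squared error
for t in range(split, T):
    Xt = np.column_stack([np.ones(t - 1), x[:t - 1]])
    bt = np.linalg.lstsq(Xt, y[1:t], rcond=None)[0]
    f_model = bt[0] + bt[1] * x[t - 1]
    f_bench = y[:t].mean()
    d.append((y[t] - f_bench) ** 2 - (y[t] - f_model) ** 2)
d = np.array(d)
print("out-of-sample DM-type t-stat :", d.mean() / (d.std(ddof=1) / np.sqrt(len(d))))
```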
- 1 November 2001
- WORKING PAPER SERIES - No. 88
- Abstract
- We propose a nonlinear econometric model that can explain both the observed volatility and the persistence of real and nominal exchange rates. The model implies that near equilibrium, the nominal exchange rate will be well approximated by a random walk process. Large departures from fundamentals, in contrast, imply mean-reverting behavior toward fundamentals. Moreover, the predictability of the nominal exchange rate relative to the random walk benchmark tends to improve at longer horizons. We test the implications of the model and find strong evidence of exchange rate predictability at horizons of two to three years, but not at shorter horizons. (An illustrative simulation of this kind of nonlinear adjustment follows the JEL codes below.)
- JEL Code
- F31 : International Economics→International Finance→Foreign Exchange
F47 : International Economics→Macroeconomic Aspects of International Trade and Finance→Forecasting and Simulation: Models and Applications
C53 : Mathematical and Quantitative Methods→Econometric Modeling→Forecasting and Prediction Methods, Simulation Methods
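As a rough illustration of the mechanism described in the abstract, the sketch below simulates an ESTAR-type smooth transition process for the deviation of the exchange rate from its fundamental: near equilibrium the process is close to a random walk, while large deviations trigger mean reversion. The ESTAR functional form and all parameter values are assumptions for illustration, not the paper's estimates.

```python
# Rough sketch of a smooth transition (ESTAR-type) deviation process:
# random-walk behavior near equilibrium, mean reversion far from it.
import numpy as np

rng = np.random.default_rng(4)

gamma, rho, sigma = 2.0, 0.7, 0.05   # transition speed, outer-regime AR root, shock s.d.
T = 500
x = np.zeros(T)                       # deviation from fundamentals (assumed log gap)
for t in range(1, T):
    G = 1.0 - np.exp(-gamma * x[t - 1] ** 2)        # 0 near equilibrium, 1 far away
    x[t] = x[t - 1] + (rho - 1.0) * G * x[t - 1] + sigma * rng.standard_normal()

# implied local persistence: effective AR root 1 + (rho - 1) * G(deviation)
for dev in (0.0, 0.1, 0.3, 0.6):
    G = 1.0 - np.exp(-gamma * dev ** 2)
    print(f"deviation {dev:+.1f}: effective AR root = {1.0 + (rho - 1.0) * G:.3f}")
```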