Show simple item record

dc.contributor.author: Perron, Pierre [en_US]
dc.contributor.author: Yamamoto, Yohei [en_US]
dc.date: 2019-07-08
dc.date.accessioned: 2019-12-19T14:06:44Z
dc.date.accessioned: 2020-01-06T15:55:21Z
dc.date.available: 2019-12-19T14:06:44Z
dc.date.available: 2020-01-06T15:55:21Z
dc.date.issued: 2019-07-08
dc.identifier: https://www.tandfonline.com/doi/full/10.1080/07350015.2019.1641410
dc.identifier.citation: Pierre Perron, Yohei Yamamoto. 2019. "Testing for Changes in Forecasting Performance." Journal of Business and Economic Statistics, https://doi.org/10.1080/07350015.2019.1641410
dc.identifier.issn: 0735-0015
dc.identifier.uri: https://hdl.handle.net/2144/39042
dc.description.abstract: We consider the issue of forecast failure (or breakdown) and propose methods to assess retrospectively whether a given forecasting model provides forecasts which show evidence of changes with respect to some loss function. We adapt the classical structural change tests to the forecast failure context. First, we recommend that all tests be carried out with a fixed scheme to have the best power. This ensures a maximum difference between the fitted in-sample and out-of-sample means of the losses and avoids contamination issues under the rolling and recursive schemes. With a fixed scheme, Giacomini and Rossi’s (2009) (GR) test is simply a Wald test for a one-time change in the mean of the total (the in-sample plus out-of-sample) losses at a known break date, say m, the value that separates the in-sample and out-of-sample periods, so it is tailored to a change occurring exactly at m. To alleviate this problem, we consider a variety of tests: maximizing the GR test over values of m within a pre-specified range; a Double sup-Wald (DSW) test which, for each m, performs a sup-Wald test for a change in the mean of the out-of-sample losses and takes the maximum of such tests over some range; we also propose to work directly with the total loss series to define the Total Loss sup-Wald (TLSW) and Total Loss UDmax (TLUD) tests. Using theoretical analyses and simulations, we show that with forecasting models potentially involving lagged dependent variables, the only tests having a monotonic power function for all data-generating processes considered are the DSW and TLUD tests, constructed with a fixed forecasting window scheme. Some explanations are provided, and empirical applications illustrate the relevance of our findings in practice. [en_US]
dc.publisher: Taylor & Francis [en_US]
dc.relation.ispartof: Journal of Business and Economic Statistics
dc.relation.replaces: https://hdl.handle.net/2144/39011
dc.relation.replaces: 2144/39011
dc.subject: Forecast breakdown [en_US]
dc.subject: Non-monotonic power [en_US]
dc.subject: Structural change [en_US]
dc.subject: Out-of-sample forecast [en_US]
dc.subject: Econometrics [en_US]
dc.subject: Mathematical sciences [en_US]
dc.subject: Economics [en_US]
dc.subject: Commerce, management, tourism and services [en_US]
dc.title: Testing for changes in forecasting performance [en_US]
dc.type: Article [en_US]
dc.description.version: First author draft [en_US]
dc.identifier.doi: 10.1080/07350015.2019.1641410
pubs.elements-source: manual-entry [en_US]
pubs.notes: (R&R at JBES) [en_US]
pubs.notes: Embargo: Not known [en_US]
pubs.organisational-group: Boston University [en_US]
pubs.organisational-group: Boston University, College of Arts & Sciences [en_US]
pubs.organisational-group: Boston University, College of Arts & Sciences, Department of Economics [en_US]
pubs.publication-status: Published [en_US]
dc.date.online: 2019-07-08
dc.identifier.mycv: 418268
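
The abstract above describes the Double sup-Wald (DSW) construction: under a fixed scheme, for each candidate split point m between the in-sample and out-of-sample periods, compute a sup-Wald statistic for a one-time change in the mean of the out-of-sample losses, then take the maximum over m. The Python sketch below is only a minimal illustration of that description under simplifying assumptions, not the authors' procedure: the function names (wald_mean_shift, sup_wald, dsw_statistic), the 15% trimming, the plain (non-HAC) variance estimate, and the grid of split points are all hypothetical choices made for the example.

```python
# Minimal, illustrative sketch of the DSW idea under a fixed forecasting scheme.
# Assumptions (not from the paper): simple non-HAC variance estimate, 15% trimming,
# and an arbitrary grid of in/out-of-sample split points m.
import numpy as np


def wald_mean_shift(losses: np.ndarray, k: int) -> float:
    """Wald statistic for a one-time change in the mean of `losses` at break date k."""
    n = len(losses)
    pre, post = losses[:k], losses[k:]
    resid = np.concatenate([pre - pre.mean(), post - post.mean()])
    sigma2 = resid @ resid / n                      # simple variance estimate
    diff = post.mean() - pre.mean()
    return diff**2 / (sigma2 * (1.0 / k + 1.0 / (n - k)))


def sup_wald(losses: np.ndarray, trim: float = 0.15) -> float:
    """sup-Wald statistic over candidate break dates with symmetric trimming."""
    n = len(losses)
    lo, hi = int(trim * n), int((1 - trim) * n)
    return max(wald_mean_shift(losses, k) for k in range(max(lo, 2), max(hi, 3)))


def dsw_statistic(total_losses: np.ndarray, m_grid: range, trim: float = 0.15) -> float:
    """Double sup-Wald: maximize the out-of-sample sup-Wald over split points m."""
    return max(sup_wald(total_losses[m:], trim) for m in m_grid)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T = 200
    losses = rng.normal(size=T)
    losses[150:] += 1.0                             # mean shift in losses late in the sample
    m_grid = range(int(0.4 * T), int(0.6 * T))      # candidate in/out-of-sample splits
    print(f"DSW statistic: {dsw_statistic(losses, m_grid):.2f}")
```

In this toy usage, the statistic is large because the simulated loss series has a mean shift well inside the out-of-sample window, which is exactly the situation where maximizing over both the split point and the break date is meant to preserve power.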

