On the use of inconsistent normalizers for statistical inference on dependent data
Abstract
Statistical inference problems, such as confidence interval construction, change point detection, and nonparametric regression estimation, arise widely in fields including climate science, economics, finance, and industrial engineering. These problems are well developed in the literature under independence assumptions, yet dependent data, especially time series data, are common in these areas. Self-normalization has been proposed as a tool for statistical inference on time series data. This thesis first explores the asymptotic behavior of optimal weighting in generalized self-normalization, then proposes self-normalized simultaneous confidence regions for high-dimensional time series, and lastly develops an unsupervised self-normalized break test for correlation matrices.
The basic idea of self-normalization is to use an inconsistent variance estimator as the studentizer. The original self-normalizer considered only forward estimators; recently it has been generalized to combine forward and backward estimators with deterministic weights. In the first project, we propose a data-driven weight that yields confidence intervals of minimal length and study the asymptotic behavior of this data-driven weight choice. An interesting dichotomy is found between linear and nonlinear quantities.
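The forward self-normalizer for mean inference can be sketched as follows; this is a minimal illustration of the standard formulation (an inconsistent long-run variance estimator built from recursive partial sums), not the thesis's exact notation, and the function name is an assumption.

```python
import numpy as np

def self_normalized_stat(x, mu=0.0):
    """Self-normalized statistic for the mean of a time series.

    The studentizer V_n is an *inconsistent* estimator of the
    long-run variance, built from forward partial sums:
        V_n = n^{-2} * sum_t (S_t - (t/n) * S_n)^2.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = np.cumsum(x)                      # partial sums S_1, ..., S_n
    xbar = s[-1] / n
    t = np.arange(1, n + 1)
    v = np.sum((s - t / n * s[-1]) ** 2) / n**2   # self-normalizer
    return n * (xbar - mu) ** 2 / v       # T_n, pivotal in the limit
```

Because V_n does not converge to the long-run variance, the statistic has a nonstandard limiting distribution, but its critical values are free of nuisance parameters, which is what makes bandwidth choice unnecessary.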
In the second project, we overcome the dimension limitation of self-normalization and propose a different perspective for statistical inference on general quantities of high-dimensional time series. Taking advantage of sparse signals in the data, we develop an asymptotic theory for the maximal modulus of self-normalized statistics. We further establish a thresholded self-normalization method that produces simultaneous confidence regions. The method is able to detect uncommon signals among NASDAQ 100 constituents in 2016‑2019 in terms of mean and median log returns.
In the last project, we turn to unsupervised testing for correlation matrix breaks. We develop a self-normalized test tailored to detecting such breaks; the test is unsupervised and directly compares the estimated correlation matrices before and after the hypothesized change point. We apply the test to the stock log returns of 10 companies and the volatility indexes of 5 options on individual equities to demonstrate its power in detecting correlation matrix breaks.
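The before/after comparison described above can be sketched as a scan over candidate break points; this illustration uses a raw Frobenius-norm contrast between the two estimated correlation matrices (the thesis instead studentizes such a contrast with a self-normalizer), and the function name and trimming choice are assumptions.

```python
import numpy as np

def corr_break_scan(x, trim=10):
    """Scan candidate break points k and measure the discrepancy
    between correlation matrices estimated before and after k.

    x: (n, p) array of observations. Returns the k maximizing the
    Frobenius norm of the difference of the two sample correlation
    matrices (illustrative contrast, not the self-normalized test).
    """
    n = x.shape[0]
    stats = {}
    for k in range(trim, n - trim):        # trim the boundaries
        before = np.corrcoef(x[:k], rowvar=False)
        after = np.corrcoef(x[k:], rowvar=False)
        stats[k] = np.linalg.norm(before - after, "fro")
    return max(stats, key=stats.get)       # most likely break location
```

A self-normalized version would replace the raw norm by a studentized contrast whose limiting distribution is pivotal, avoiding the need to estimate a long-run covariance.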
License
Attribution 4.0 International