-
Authors: Gronneberg, Steffen; Holcblat, Benjamin
Affiliations: BI Norwegian Business School; University of Luxembourg
Abstract: We establish general and versatile results regarding the limit behavior of the partial-sum process of ARMAX residuals. Illustrations include ARMA with seasonal dummies, misspecified ARMAX models with autocorrelated errors, nonlinear ARMAX models, ARMA with a structural break, a wide range of ARMAX models with infinite-variance errors, weak GARCH models, and the consistency of kernel estimation of the density of ARMAX errors. Our results identify the limit distributions, and provide a general al...
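A minimal sketch of the central object here, the partial-sum (CUSUM) process of ARMA residuals. The ARMA(1,1) simulation, the statsmodels fit, and the 1/sqrt(n) scaling are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)

# Simulate an ARMA(1,1) process (illustrative choice).
n, phi, theta = 500, 0.5, 0.3
eps = rng.standard_normal(n + 1)
x = np.zeros(n + 1)
for t in range(1, n + 1):
    x[t] = phi * x[t - 1] + eps[t] + theta * eps[t - 1]
x = x[1:]

# Fit an ARMA(1,1) model and form the partial-sum process of its residuals.
res = ARIMA(x, order=(1, 0, 1)).fit()
resid = res.resid
partial_sum = np.cumsum(resid - resid.mean()) / (np.sqrt(n) * resid.std())
print(partial_sum[:5], np.abs(partial_sum).max())
```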
-
Authors: Li, Zeng; Lam, Clifford; Yao, Jianfeng; Yao, Qiwei
Affiliations: London School of Economics and Political Science; Pennsylvania State University; University of Hong Kong
Abstract: Testing for white noise is a classical yet important problem in statistics, especially for diagnostic checks in time series modeling and linear regression. For high-dimensional time series, in the sense that the dimension p is large in relation to the sample size T, the popular omnibus tests, including the multivariate Hosking and Li-McLeod tests, are extremely conservative, leading to substantial power loss. To develop more relevant tests for high-dimensional cases, we propose a portmanteau-type...
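As a point of reference for the omnibus tests mentioned above, a bare-bones multivariate Hosking-type portmanteau statistic can be computed as follows. The lag choice m and the chi-square reference with p^2*m degrees of freedom are the classical low-dimensional recipe, not the high-dimensional test proposed in the paper:

```python
import numpy as np
from scipy import stats

def hosking_portmanteau(X, m):
    """Classical multivariate portmanteau statistic for H0: white noise.
    X is a (T, p) array; m is the number of lags included."""
    T, p = X.shape
    Xc = X - X.mean(axis=0)
    C0 = Xc.T @ Xc / T
    C0_inv = np.linalg.inv(C0)
    Q = 0.0
    for k in range(1, m + 1):
        Ck = Xc[k:].T @ Xc[:-k] / T          # lag-k sample autocovariance
        Q += np.trace(Ck.T @ C0_inv @ Ck @ C0_inv) / (T - k)
    Q *= T ** 2
    df = p ** 2 * m
    return Q, 1 - stats.chi2.cdf(Q, df)

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))            # p small relative to T here
print(hosking_portmanteau(X, m=4))
```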
-
Authors: Sadhanala, Veeranjaneyulu; Tibshirani, Ryan J.
Affiliations: Carnegie Mellon University
Abstract: We study additive models built with trend filtering, that is, additive models whose components are each regularized by the (discrete) total variation of their kth (discrete) derivative, for a chosen integer k >= 0. This results in kth degree piecewise polynomial components (e.g., k = 0 gives piecewise constant components, k = 1 gives piecewise linear, k = 2 gives piecewise quadratic, etc.). Analogous to its advantages in the univariate case, additive trend filtering has favorable theoretical ...
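A sketch of a single trend-filtering component (k = 1, piecewise linear) written as the convex problem described above. The cvxpy-based solve and the evenly spaced inputs are assumptions for illustration; the additive fit would cycle such solves over components via backfitting:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n, lam, k = 100, 5.0, 1
x = np.linspace(0, 1, n)
y = np.sin(4 * np.pi * x) + 0.3 * rng.standard_normal(n)

# (k+1)-th order discrete difference operator: penalizing ||D theta||_1
# yields piecewise polynomial fits of degree k (here piecewise linear).
D = np.diff(np.eye(n), n=k + 1, axis=0)

theta = cp.Variable(n)
objective = cp.Minimize(0.5 * cp.sum_squares(y - theta) + lam * cp.norm1(D @ theta))
cp.Problem(objective).solve()
print(theta.value[:5])
```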
-
Authors: Chen, Xiaohui; Kato, Kengo
Affiliations: University of Illinois Urbana-Champaign; Cornell University
Abstract: This paper studies inference for the mean vector of a high-dimensional U-statistic. In the era of big data, the dimension d of the U-statistic and the sample size n of the observations tend to both be large, and the computation of the U-statistic is prohibitively demanding. Data-dependent inferential procedures such as the empirical bootstrap for U-statistics are even more computationally expensive. To overcome such a computational bottleneck, incomplete U-statistics obtained by sampling few...
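The incomplete U-statistic construction referred to here can be sketched as follows. The particular order-two kernel (whose mean is the coordinatewise variance) and the Bernoulli sampling of pairs are illustrative assumptions:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
n, d = 300, 50
X = rng.standard_normal((n, d))

def kernel(xi, xj):
    # Order-two kernel; its expectation is the coordinatewise variance of X.
    return 0.5 * (xi - xj) ** 2

# The complete U-statistic averages over all n*(n-1)/2 pairs; the incomplete
# version averages over a small random subset of pairs (Bernoulli sampling).
p_sample = 0.02
sampled = [(i, j) for i, j in combinations(range(n), 2) if rng.random() < p_sample]
U_incomplete = np.mean([kernel(X[i], X[j]) for i, j in sampled], axis=0)
print(len(sampled), U_incomplete[:5])
```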
-
Authors: Dette, Holger; Wu, Weichi
Affiliations: Ruhr University Bochum; Tsinghua University
Abstract: This paper considers the problem of testing whether a sequence of means mu_1, ..., mu_n of a nonstationary time series X_1, ..., X_n is stable, in the sense that the difference between the mean mu_1 at the initial time t = 1 and the mean mu_t at any other time is smaller than a given threshold, that is, |mu_1 - mu_t| <= c for all t = 1, ..., n. A test for hypotheses of this type is developed using a bias-corrected monotone rearranged local linear estimator and asymptotic...
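A stripped-down version of the quantity being tested, using a plain local linear smoother and skipping the bias correction and monotone rearrangement that the paper's test relies on. The bandwidth, kernel, and threshold c below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
n, c = 500, 0.5
t = np.arange(1, n + 1) / n
mu = 0.3 * np.sin(2 * np.pi * t)                  # slowly varying mean (illustrative)
X = mu + 0.5 * rng.standard_normal(n)

def local_linear(t0, t, y, h):
    # Local linear estimate of the mean at time t0 with an Epanechnikov kernel.
    u = (t - t0) / h
    w = np.maximum(1 - u ** 2, 0.0)
    s1 = np.sum(w * (t - t0))
    s2 = np.sum(w * (t - t0) ** 2)
    wt = w * (s2 - (t - t0) * s1)
    return np.sum(wt * y) / np.sum(wt)

h = 0.1
mu_hat = np.array([local_linear(t0, t, X, h) for t0 in t])
max_dev = np.max(np.abs(mu_hat - mu_hat[0]))
print(max_dev, "exceeds c" if max_dev > c else "within c")
```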
-
Authors: Veitch, Victor; Roy, Daniel M.
Affiliations: Columbia University; University of Toronto
Abstract: Sparse exchangeable graphs on R_+, and the associated graphex framework for sparse graphs, generalize exchangeable graphs on N, and the associated graphon framework for dense graphs. We develop the graphex framework as a tool for statistical network analysis by identifying the sampling scheme that is naturally associated with the models of the framework, formalizing two natural notions of consistent estimation of the parameter (the graphex) underlying these models, and identifying general consi...
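The sampling scheme associated with these models can be sketched for the special case of an integrable graphon W on R_+^2: latent labels come from a unit-rate Poisson process on an observation window [0, s], edges are Bernoulli with probability W(theta_i, theta_j), and only non-isolated vertices are observed. The particular W and the window size s below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
s = 30.0                                   # size of the observation window [0, s]

# Latent labels: unit-rate Poisson process on [0, s].
n_points = rng.poisson(s)
theta = rng.uniform(0.0, s, size=n_points)

def W(x, y):
    # Integrable graphon on R_+^2 (illustrative choice), producing a sparse graph.
    return np.exp(-(x + y))

# Connect each pair independently with probability W(theta_i, theta_j),
# then keep only vertices with at least one edge in the window.
P = W(theta[:, None], theta[None, :])
A = rng.random((n_points, n_points)) < P
A = np.triu(A, 1)
A = A | A.T
keep = A.any(axis=1)
A_obs = A[np.ix_(keep, keep)]
print(n_points, keep.sum(), A_obs.sum() // 2)   # latent points, observed vertices, edges
```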
-
Authors: Bing, Xin; Wegkamp, Marten H.
Affiliations: Cornell University
Abstract: We consider the multivariate response regression problem with a regression coefficient matrix of low, unknown rank. In this setting, we analyze a new criterion for selecting the optimal reduced rank. This criterion differs notably from the one proposed in Bunea, She and Wegkamp (Ann. Statist. 39 (2011) 1282-1309) in that it does not require estimation of the unknown variance of the noise, nor does it depend on a delicate choice of a tuning parameter. We develop an iterative, fully data-driven ...
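For context, the rank-r reduced-rank regression fit itself is a truncated SVD of the OLS fitted values; a naive scan over candidate ranks is shown below. The residual-sum-of-squares comparison is only a placeholder, not the self-tuning criterion the paper analyzes:

```python
import numpy as np

rng = np.random.default_rng(6)
n, p, q, true_rank = 200, 10, 8, 3
X = rng.standard_normal((n, p))
B = rng.standard_normal((p, true_rank)) @ rng.standard_normal((true_rank, q))
Y = X @ B + 0.5 * rng.standard_normal((n, q))

# OLS fit, then a rank-r truncation of the fitted values gives the rank-r RRR estimator.
B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
F = X @ B_ols
U, sing, Vt = np.linalg.svd(F, full_matrices=False)

for r in range(1, min(p, q) + 1):
    F_r = U[:, :r] @ np.diag(sing[:r]) @ Vt[:r]      # rank-r fitted values
    rss = np.sum((Y - F_r) ** 2)
    print(r, round(rss, 1), round(sing[r - 1], 1))
```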
-
Authors: Feng, Long; Zhang, Cun-Hui
Affiliations: City University of Hong Kong; Rutgers University New Brunswick
Abstract: The Lasso is biased. Concave penalized least squares estimation (PLSE) takes advantage of signal strength to reduce this bias, leading to sharper error bounds in prediction, coefficient estimation and variable selection. For prediction and estimation, the bias of the Lasso can also be reduced by taking a smaller penalty level than what selection consistency requires, but such a smaller penalty level depends on the sparsity of the true coefficient vector. The sorted l_1 penalized estimation (Slo...
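The sorted l_1 penalty itself is simple to write down: with a non-increasing weight sequence lambda_1 >= ... >= lambda_p, it pairs the largest weight with the largest absolute coefficient. A minimal sketch; the BH-type lambda sequence below is one common choice, stated here only as an example:

```python
import numpy as np
from scipy import stats

def sorted_l1(beta, lam):
    """Sorted l_1 penalty: sum_i lam_i * |beta|_(i), where |beta|_(1) >= |beta|_(2) >= ...
    and lam is a non-increasing weight sequence of the same length."""
    return np.sum(np.sort(np.abs(beta))[::-1] * lam)

p, q = 20, 0.1
lam = stats.norm.ppf(1 - q * np.arange(1, p + 1) / (2 * p))   # BH-type weights, non-increasing
beta = np.concatenate([np.array([3.0, -2.0, 1.5]), np.zeros(p - 3)])

print(sorted_l1(beta, lam))            # sorted-l_1 (Slope-type) penalty value
print(lam[0] * np.sum(np.abs(beta)))   # Lasso penalty at the largest weight, for comparison
```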
-
Authors: Fan, Jianqing; Wang, Dong; Wang, Kaizheng; Zhu, Ziwei
Affiliations: Princeton University; University of Michigan
Abstract: Principal component analysis (PCA) is fundamental to statistical machine learning. It extracts latent principal factors that account for most of the variation in the data. When data are stored across multiple machines, however, communication cost can prohibit the computation of PCA in a central location, and distributed algorithms for PCA are thus needed. This paper proposes and studies a distributed PCA algorithm: each node machine computes the top K eigenvectors and transmits them to the centr...
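A sketch of the distributed step described here: each node computes the top-K eigenvectors of its local sample covariance and ships only those K vectors. The aggregation below averages the local projection matrices V_l V_l^T and re-extracts K eigenvectors, which is one natural way to combine them; it is stated as an assumption, not necessarily the paper's exact rule:

```python
import numpy as np

rng = np.random.default_rng(7)
d, K, machines, n_per = 50, 3, 10, 200

# Data with a K-dimensional spike structure, split across machines (illustrative).
V_true = np.linalg.qr(rng.standard_normal((d, K)))[0]
def sample(n):
    signal = rng.standard_normal((n, K)) * np.array([5.0, 4.0, 3.0])
    return signal @ V_true.T + rng.standard_normal((n, d))

def top_k_eigvecs(S, K):
    vals, vecs = np.linalg.eigh(S)
    return vecs[:, ::-1][:, :K]                 # columns: leading eigenvectors

# Each node: top-K eigenvectors of the local covariance (only these are transmitted).
local_V = []
for _ in range(machines):
    Xl = sample(n_per)
    local_V.append(top_k_eigvecs(Xl.T @ Xl / n_per, K))

# Central node: average the projection matrices V_l V_l^T, take its top-K eigenvectors.
P_bar = sum(V @ V.T for V in local_V) / machines
V_hat = top_k_eigvecs(P_bar, K)

# Subspace distance to the truth (smaller is better).
print(np.linalg.norm(V_hat @ V_hat.T - V_true @ V_true.T))
```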
-
Authors: Rinaldo, Alessandro; Wasserman, Larry; G'Sell, Max
Affiliations: Carnegie Mellon University
Abstract: Several new methods have been recently proposed for performing valid inference after model selection. An older method is sample splitting: use part of the data for model selection and the rest for inference. In this paper, we revisit sample splitting combined with the bootstrap (or the Normal approximation). We show that this leads to a simple, assumption-lean approach to inference, and we establish results on the accuracy of the method. In fact, we find new bounds on the accuracy of the bootst...
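The sample-splitting recipe being revisited can be sketched directly. Lasso selection on the first half is just one example of a selection rule, and the percentile bootstrap interval below is one of several variants:

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(8)
n, p = 400, 20
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]
y = X @ beta + rng.standard_normal(n)

# Split: first half for model selection, second half for inference.
X1, y1, X2, y2 = X[: n // 2], y[: n // 2], X[n // 2:], y[n // 2:]
selected = np.flatnonzero(Lasso(alpha=0.1).fit(X1, y1).coef_ != 0)

# Inference half: OLS on the selected variables, bootstrap percentile intervals.
B = 1000
boot = np.empty((B, selected.size))
for b in range(B):
    idx = rng.integers(0, len(y2), len(y2))
    boot[b] = LinearRegression().fit(X2[idx][:, selected], y2[idx]).coef_
ci = np.percentile(boot, [2.5, 97.5], axis=0)
for j, k in enumerate(selected):
    print(f"variable {k}: 95% CI [{ci[0, j]:.2f}, {ci[1, j]:.2f}]")
```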