-
Authors: Kim, Yongdai; Jeon, Jong-June
Affiliations: Seoul National University (SNU); University of Seoul
Abstract: In this paper, we study asymptotic properties of model selection criteria for high-dimensional regression models where the number of covariates is much larger than the sample size. In particular, we consider a class of loss functions called the class of quadratically supported risks, which is large enough to include the quadratic loss, Huber loss, quantile loss and logistic loss. We provide sufficient conditions for the model selection criteria, which are applicable to the class of quadraticall...
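The four losses named in the abstract can be written down concretely. The following is a minimal sketch of their standard textbook forms (not the paper's notation); the parameter names `delta` and `tau` are illustrative defaults.

```python
import numpy as np

def quadratic_loss(r):
    # Squared-error loss: r^2
    return r ** 2

def huber_loss(r, delta=1.0):
    # Quadratic near zero, linear in the tails (robust to outliers)
    return np.where(np.abs(r) <= delta,
                    0.5 * r ** 2,
                    delta * (np.abs(r) - 0.5 * delta))

def quantile_loss(r, tau=0.5):
    # Pinball loss targeting the tau-th conditional quantile
    return r * (tau - (r < 0))

def logistic_loss(margin):
    # Negative log-likelihood for labels y in {-1, +1}, margin = y * f(x)
    return np.log1p(np.exp(-margin))
```

All four are convex in the residual/margin, which is the property the quadratically-supported-risk framework exploits.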
-
Authors: Paul, Debashis; Peng, Jie; Burman, Prabir
Affiliations: University of California System; University of California Davis
Abstract: We study a class of nonlinear nonparametric inverse problems. Specifically, we propose a nonparametric estimator of the dynamics of a monotonically increasing trajectory defined on a finite time interval. Under suitable regularity conditions, we show that in terms of L_2-loss, the optimal rate of convergence for the proposed estimator is the same as that for the estimation of the derivative of a function. We conduct simulation studies to examine the finite sample behavior of the proposed estim...
-
Authors: Fan, Yingying; Lv, Jinchi
Affiliations: University of Southern California
Abstract: Large-scale precision matrix estimation is of fundamental importance yet challenging in many contemporary applications for recovering Gaussian graphical models. In this paper, we suggest a new approach of innovated scalable efficient estimation (ISEE) for estimating large precision matrices. Motivated by the innovated transformation, we convert the original problem into that of large covariance matrix estimation. The suggested method combines the strengths of recent advances in high-dimensional ...
-
Authors: Hsing, Tailen; Brown, Thomas; Thelen, Brian
Affiliations: University of Michigan System; University of Michigan; Exponent
Abstract: Dense spatial data are commonplace nowadays, and they provide the impetus for addressing nonstationarity in a general way. This paper extends the notion of intrinsic random function by allowing the stationary component of the covariance to vary with spatial location. A nonparametric estimation procedure based on gridded data is introduced for the case where the covariance function is regularly varying at any location. An asymptotic theory is developed for the procedure on a fixed domain by let...
-
Authors: Lei, Huang; Xia, Yingcun; Qin, Xu
Affiliations: Southwest Jiaotong University; National University of Singapore; University of Electronic Science & Technology of China
Abstract: Serial correlation in the residuals of time series models can cause bias in both model estimation and prediction. However, models with such serially correlated residuals are difficult to estimate, especially when the regression function is nonlinear. Existing estimation methods require strong assumptions on the relation between the residuals and the regressors, which exclude the commonly used autoregressive models in time series analysis. By extending the Whittle likelihood estimation, this p...
-
Authors: Panaretos, Victor M.; Zemel, Yoav
Affiliations: Swiss Federal Institutes of Technology Domain; Ecole Polytechnique Federale de Lausanne
Abstract: We develop a canonical framework for the study of the problem of registration of multiple point processes subjected to warping, known as the problem of separation of amplitude and phase variation. The amplitude variation of a real random function {Y(x) : x is an element of [0, 1]} corresponds to its random oscillations in the y-axis, typically encapsulated by its (co)variation around a mean level. In contrast, its phase variation refers to fluctuations in the x-axis, often caused by random ti...
-
Authors: Xu, Min; Chen, Minhua; Lafferty, John
Affiliations: University of Pennsylvania; Amazon.com; University of Chicago
Abstract: We study the problem of variable selection in convex nonparametric regression. Under the assumption that the true regression function is convex and sparse, we develop a screening procedure to select a subset of variables that contains the relevant variables. Our approach is a two-stage quadratic programming method that estimates a sum of one-dimensional convex functions, followed by one-dimensional concave regression fits on the residuals. In contrast to previous methods for sparse additive mo...
-
Authors: Huang, Shiqiong; Jin, Jiashun; Yao, Zhigang
Affiliations: Carnegie Mellon University; National University of Singapore
Abstract: Given n samples X_1, X_2, ..., X_n from N(0, Sigma), we are interested in estimating the p x p precision matrix Omega = Sigma^{-1}; we assume Omega is sparse in that each row has relatively few nonzeros. We propose Partial Correlation Screening (PCS) as a new row-by-row approach. To estimate the i-th row of Omega, 1 <= i <= p, PCS uses a Screen step and a Clean step. In the Screen step, PCS recruits a (small) subset of indices using a stage-wise algorithm, where in each stage, the algorithm upd...
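The objects in this abstract are easy to make concrete. Below is a minimal numpy sketch (not the PCS algorithm itself) of the precision matrix Omega = Sigma^{-1} and the partial correlations that give PCS its name; the 3x3 covariance is a hypothetical example chosen so that variables 0 and 2 are conditionally independent given variable 1, which forces a zero in Omega.

```python
import numpy as np

# Hypothetical Markov-chain covariance: corr(X0, X2) = corr(X0, X1) * corr(X1, X2),
# so X0 and X2 are independent given X1 and Omega[0, 2] = 0.
Sigma = np.array([[1.00, 0.50, 0.25],
                  [0.50, 1.00, 0.50],
                  [0.25, 0.50, 1.00]])

# Precision matrix Omega = Sigma^{-1}; its zeros encode conditional independence.
Omega = np.linalg.inv(Sigma)

# Partial correlation of (i, j) given all other variables:
# rho_ij = -Omega_ij / sqrt(Omega_ii * Omega_jj)
d = np.sqrt(np.diag(Omega))
partial_corr = -Omega / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)
```

The row-sparsity assumption in the abstract says each row of `Omega` has few nonzeros, i.e., each variable has few graphical-model neighbors.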
-
Authors: Taylor, Jonathan E.; Loftus, Joshua R.; Tibshirani, Ryan J.
Affiliations: Stanford University; Carnegie Mellon University
Abstract: We derive an exact p-value for testing a global null hypothesis in a general adaptive regression setting. Our approach uses the Kac-Rice formula [as described in Random Fields and Geometry (2007) Springer, New York] applied to the problem of maximizing a Gaussian process. The resulting test statistic has a known distribution in finite samples, assuming Gaussian errors. We examine this test statistic in the case of the lasso, group lasso, principal components and matrix completion problems. For...
-
Authors: Kim, Arlene K. H.; Samworth, Richard J.
Affiliations: University of Cambridge
Abstract: The estimation of a log-concave density on R^d represents a central problem in the area of nonparametric inference under shape constraints. In this paper, we study the performance of log-concave density estimators with respect to global loss functions, and adopt a minimax approach. We first show that no statistical procedure based on a sample of size n can estimate a log-concave density with respect to the squared Hellinger loss function with supremum risk smaller than order n^{-4/5}, when d = ...
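The minimax lower bound stated in this abstract can be written compactly. A sketch in standard notation (not the paper's own): $\hat{f}_n$ ranges over all estimators based on $n$ samples, $d_H$ is the Hellinger distance, and $\mathcal{F}_d$ denotes the class of log-concave densities on $\mathbb{R}^d$, for the dimensions $d$ stated in the (truncated) abstract:

\[
\inf_{\hat{f}_n} \; \sup_{f \in \mathcal{F}_d} \; \mathbb{E}_f \, d_H^2(\hat{f}_n, f) \;\gtrsim\; n^{-4/5}.
\]

The infimum over all procedures is what makes this a minimax lower bound: no estimator can beat the $n^{-4/5}$ rate uniformly over the class.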