-
Authors: Kim, Mi-Ok
Affiliations: Cincinnati Children's Hospital Medical Center; University System of Ohio; University of Cincinnati; University of Kentucky
Abstract: Quantile regression provides a framework for modeling statistical quantities of interest other than the conditional mean. The regression methodology is well developed for linear models, but less so for nonparametric models. We consider conditional quantiles with varying coefficients and propose a methodology for their estimation and assessment using polynomial splines. The proposed estimators are easy to compute via standard quantile regression algorithms and a stepwise knot selection algorith...
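The abstract's premise rests on the quantile-regression "check" (pinball) loss rho_tau(u) = u(tau - 1{u < 0}), whose minimizer is the tau-th conditional quantile. A minimal illustrative sketch of that fact on a plain sample (not the paper's spline estimator):

```python
# Sketch of the check (pinball) loss underlying quantile regression.
# Minimizing the total loss over a constant q recovers the tau-th
# sample quantile; this is illustrative, not the paper's spline method.

def check_loss(u, tau):
    """Check loss rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (1.0 if u < 0 else 0.0))

def empirical_quantile(ys, tau):
    """Brute-force minimizer of the total check loss.

    For a finite sample some order statistic always minimizes the sum,
    so searching over the observed values themselves suffices.
    """
    return min(ys, key=lambda q: sum(check_loss(y - q, tau) for y in ys))

ys = [1.0, 2.0, 3.0, 4.0, 100.0]
print(empirical_quantile(ys, 0.5))  # -> 3.0 (the sample median)
```

Note that the median case tau = 0.5 reduces the check loss to half the absolute error, which is why the minimizer above lands on the middle order statistic even with the outlier 100.0 present.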
-
Authors: Cai, T. Tony; Lv, Jinchi
Affiliations: University of Pennsylvania; Princeton University
-
Authors: Rudin, Cynthia; Schapire, Robert E.; Daubechies, Ingrid
Affiliations: Columbia University; Princeton University; Princeton University
Abstract: We introduce a useful tool for analyzing boosting algorithms called the smooth margin function, a differentiable approximation of the usual margin for boosting algorithms. We present two boosting algorithms based on this smooth margin, coordinate ascent boosting and approximate coordinate ascent boosting, which are similar to Freund and Schapire's AdaBoost algorithm and Breiman's arc-gv algorithm. We give convergence rates to the maximum margin solution for both of our algorithms and for arc-g...
-
Authors: Li, Bing; Yin, Xiangrong
Affiliations: Pennsylvania Commonwealth System of Higher Education (PCSHE); Pennsylvania State University; Pennsylvania State University - University Park; University System of Georgia; University of Georgia
Abstract: We consider a general nonlinear regression problem where the predictors contain measurement error. It has been recently discovered that several well-known dimension reduction methods, such as OLS, SIR and pHd, can be performed on the surrogate regression problem to produce consistent estimates for the original regression problem involving the unobserved true predictor. In this paper we establish a general invariance law between the surrogate and the original dimension reduction spaces, which i...
-
Authors: Finner, Helmut; Dickhaus, Thorsten; Roters, Markus
Affiliations: Leibniz Association; Deutsches Diabetes-Zentrum (DDZ); Heinrich Heine University Dusseldorf
Abstract: Some effort has been undertaken over the last decade to provide conditions for the control of the false discovery rate by the linear step-up procedure (LSU) for testing n hypotheses when test statistics are dependent. In this paper we investigate the expected error rate (EER) and the false discovery rate (FDR) in some extreme parameter configurations when n tends to infinity for test statistics being exchangeable under null hypotheses. All results are derived in terms of p-values. In a general...
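The linear step-up (LSU) procedure the abstract refers to is the Benjamini-Hochberg procedure; a minimal sketch of the step-up rule on a vector of p-values:

```python
def linear_step_up(pvalues, alpha):
    """Benjamini-Hochberg linear step-up (LSU) procedure.

    Sort the n p-values, find the largest rank i with
    p_(i) <= i * alpha / n, and reject the hypotheses with the i
    smallest p-values. Controls the FDR at level alpha under
    independence (and under PRDS-type positive dependence).
    """
    n = len(pvalues)
    order = sorted(range(n), key=lambda i: pvalues[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank * alpha / n:
            k = rank  # step-up: keep the LARGEST rank passing its cutoff
    rejected = set(order[:k])
    return [i in rejected for i in range(n)]

ps = [0.001, 0.008, 0.039, 0.041, 0.09, 0.7]
print(linear_step_up(ps, 0.05))  # -> rejects the two smallest p-values
```

The step-up character matters: even if an intermediate ranked p-value misses its cutoff, a larger rank that passes rescues everything below it, which is exactly what makes LSU less conservative than stepwise FWER procedures.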
-
Authors: Jiang, Wenxin
Affiliations: Northwestern University
Abstract: Bayesian variable selection has gained much empirical success recently in a variety of applications when the number K of explanatory variables (x(1),..., x(K)) is possibly much larger than the sample size n. For generalized linear models, if most of the x(j)'s have very small effects on the response y, we show that it is possible to use Bayesian variable selection to reduce overfitting caused by the curse of dimensionality K >> n. In this approach a suitable prior can be used to choose a few o...
-
Authors: Romano, Joseph P.; Wolf, Michael
Affiliations: Stanford University; University of Zurich
Abstract: Consider the problem of testing s hypotheses simultaneously. The usual approach restricts attention to procedures that control the probability of even one false rejection, the familywise error rate (FWER). If s is large, one might be willing to tolerate more than one false rejection, thereby increasing the ability of the procedure to correctly reject false null hypotheses. One possibility is to replace control of the FWER by control of the probability of k or more false rejections, which is ca...
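The simplest procedure controlling the k-FWER (the probability of k or more false rejections) is the single-step generalized Bonferroni rule, sketched below; the paper's stepdown procedures refine this baseline and reject at least as much. This sketch is illustrative, not the authors' method:

```python
def generalized_bonferroni(pvalues, alpha, k):
    """Single-step control of the k-FWER at level alpha.

    Reject H_i whenever p_i <= k * alpha / s, where s is the number of
    hypotheses. Setting k = 1 recovers the ordinary Bonferroni
    correction for the usual FWER; larger k uses a looser cutoff and
    hence rejects more.
    """
    s = len(pvalues)
    return [p <= k * alpha / s for p in pvalues]

ps = [0.004, 0.02, 0.03, 0.2]
print(generalized_bonferroni(ps, 0.05, k=1))  # cutoff 0.0125: one rejection
print(generalized_bonferroni(ps, 0.05, k=2))  # cutoff 0.0250: two rejections
```

The example makes the abstract's trade-off concrete: tolerating up to k - 1 false rejections multiplies every per-test cutoff by k, buying power at the cost of a weaker error guarantee.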
-
Authors: Steinwart, Ingo; Scovel, Clint
Affiliations: United States Department of Energy (DOE); Los Alamos National Laboratory
Abstract: For binary classification we establish learning rates up to the order of n(-1) for support vector machines (SVMs) with hinge loss and Gaussian RBF kernels. These rates are in terms of two assumptions on the considered distributions: Tsybakov's noise assumption to establish a small estimation error, and a new geometric noise condition which is used to bound the approximation error. Unlike previously proposed concepts for bounding the approximation error, the geometric noise assumption does not ...
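The two ingredients named in the abstract, hinge loss and the Gaussian RBF kernel, can each be written in a line; a sketch is below. Kernel-width conventions vary across texts (sigma^2 vs. 2*sigma^2 in the denominator, or a multiplicative gamma), so the parameterization here is one common choice, not necessarily the paper's:

```python
import math

def hinge_loss(y, f):
    """Hinge loss max(0, 1 - y*f) for labels y in {-1, +1} and score f.

    Zero only when the example is classified correctly with margin >= 1;
    grows linearly inside the margin and on misclassifications.
    """
    return max(0.0, 1.0 - y * f)

def gaussian_rbf(x, z, sigma):
    """Gaussian RBF kernel k(x, z) = exp(-||x - z||^2 / sigma^2)."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-sq_dist / sigma ** 2)

print(hinge_loss(+1, 2.0))                 # correct with margin: 0.0
print(hinge_loss(+1, 0.5))                 # inside the margin: 0.5
print(gaussian_rbf([0.0, 0.0], [0.0, 0.0], 1.0))  # identical points: 1.0
```

The SVM the abstract analyzes minimizes the average hinge loss plus a squared-norm penalty in the RKHS induced by this kernel; the width sigma and the regularization parameter jointly govern the approximation-estimation trade-off the paper's rates quantify.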
-
Authors: Bickel, Peter J.
Affiliations: University of California System; University of California Berkeley
-
Authors: Meinshausen, N.; Rocha, G.; Yu, B.
Affiliations: University of Oxford; University of California System; University of California Berkeley