-
Authors: Wang, QH; Rao, JNK
Affiliations: Chinese Academy of Sciences; Academy of Mathematics & System Sciences, CAS; Peking University; Carleton University
Abstract: Inference under kernel regression imputation for missing response data is considered. An adjusted empirical likelihood approach to inference for the mean of the response variable is developed. A nonparametric version of Wilks' theorem is proved for the adjusted empirical log-likelihood ratio by showing that it has an asymptotic standard chi-squared distribution, and the corresponding empirical likelihood confidence interval for the mean is constructed. With auxiliary information, an empirical ...
-
Authors: [Anonymous]
-
Authors: Cohen, A; Kemperman, JHB; Sackrowitz, H
Affiliations: Rutgers University System; Rutgers University New Brunswick
Abstract: The genetic distance between two loci on a chromosome is defined as the mean number of crossovers between the loci. The parameters of the crossover distribution are constrained by the parameters of the distribution of chiasmata. Ott (1996) derived the maximum likelihood estimator (MLE) of the parameters of the crossover distribution and the MLE of the mean. We demonstrate that the MLE of the mean is pointwise less than or equal to the empirical mean number of crossovers. It follows that the ML...
-
Authors: Carter, AV
Affiliations: University of California System; University of California Santa Barbara
Abstract: The deficiency distance between a multinomial and a multivariate normal experiment is bounded under the condition that the parameters are bounded away from zero. This result can be used as a key step in establishing asymptotic normal approximations to nonparametric density estimation experiments. The bound relies on the recursive construction of explicit Markov kernels that can be used to reproduce one experiment from the other. The distance is then bounded using classic local-limit bounds betwe...
-
Authors: Hallin, M; Paindaveine, D
Affiliations: Universite Libre de Bruxelles; Universite Libre de Bruxelles
Abstract: We propose a family of tests, based on Randles' (1989) concept of interdirections and the ranks of pseudo-Mahalanobis distances computed with respect to a multivariate M-estimator of scatter due to Tyler (1987), for the multivariate one-sample problem under elliptical symmetry. These tests, which generalize the univariate signed-rank tests, are affine-invariant. Depending on the score function considered (van der Waerden, Laplace, ...), they allow for locally asymptotically maximin tests at sel...
-
Authors: Fan, J; Zhang, C; Zhang, J
Affiliations: University of Wisconsin System; University of Wisconsin Madison
-
Authors: Meng, XL; Zaslavsky, AM
Affiliations: Harvard University; Harvard University; Harvard Medical School
Abstract: This paper studies a class of default priors, which we call single observation unbiased priors (SOUP). A prior for a parameter is a SOUP if the corresponding posterior mean of the parameter based on a single observation is an unbiased estimator of the parameter. We prove that, under mild regularity conditions, a default prior for a convolution parameter is noninformative in the sense of yielding a posterior inference invariant under amalgamation only if it is a SOUP. Therefore, when amalgamati...
-
Authors: Tjur, T
Affiliations: Copenhagen Business School
-
Authors: Bergsma, WP; Rudas, T
Affiliations: Tilburg University; HUN-REN; HUN-REN Centre for Social Sciences; Institute for Sociology - HAS; Eotvos Lorand University; Central European University
Abstract: Statistical models defined by imposing restrictions on marginal distributions of contingency tables have received considerable attention recently. This paper introduces a general definition of marginal log-linear parameters and describes conditions for a marginal log-linear parameter to be a smooth parameterization of the distribution and to be variation independent. Statistical models defined by imposing affine restrictions on the marginal log-linear parameters are investigated. These models ...
-
Authors: Jiang, WX
Affiliations: Northwestern University
Abstract: When studying the training error and the prediction error for boosting, it is often assumed that the hypotheses returned by the base learner are weakly accurate, or are able to beat a random guesser by a certain amount of difference. It has been an open question how large this difference can be, whether it will eventually disappear in the boosting process or be bounded by a positive amount. This question is crucial for the behavior of both the training error and the prediction error. In this pa...