Showing resources 1 - 20 of 4,236

  1. A multivariate central limit theorem for randomized orthogonal array sampling designs in computer experiments

    Loh, Wei-Liem
    Let f : [0, 1)^d → ℝ be an integrable function. An objective of many computer experiments is to estimate ∫_{[0, 1)^d} f(x) dx by evaluating f at a finite number of points in [0, 1)^d. There is a design issue in the choice of these points, and a popular choice is via the use of randomized orthogonal arrays. This article proves a multivariate central limit theorem for a class of randomized orthogonal array sampling designs [Owen, Statist. Sinica 2 (1992a) 439–452] as well as for a class of OA-based Latin hypercubes [Tang, J. Amer. Statist. Assoc. 88 (1993) 1392–1397].
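
    For illustration, a minimal numpy sketch of the sampling-design idea behind this entry: a randomized Latin hypercube (one point per stratum in each coordinate, with strata randomly paired across coordinates) used to estimate ∫_{[0, 1)^d} f(x) dx by a sample mean. The test function, dimension, and sample size are placeholder choices, not taken from the paper, and the sketch does not implement orthogonal-array strength-two stratification.

        import numpy as np

        def latin_hypercube(n, d, rng):
            """One point per stratum per coordinate, strata randomly paired across coordinates."""
            jitter = rng.random((n, d))                          # uniform position inside each stratum
            perms = np.argsort(rng.random((n, d)), axis=0)       # independent random permutation per column
            return (perms + jitter) / n                          # points in [0, 1)^d

        def estimate_integral(f, n=1000, d=3, seed=0):
            rng = np.random.default_rng(seed)
            x = latin_hypercube(n, d, rng)
            return f(x).mean()                                   # sample mean approximates the integral

        # toy integrand: sum of squares, whose integral over [0, 1)^3 equals 1
        f = lambda x: (x ** 2).sum(axis=1)
        print(estimate_integral(f))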

  2. Asymptotic equivalence for nonparametric regression with multivariate and random design

    Reiß, Markus
    We show that nonparametric regression is asymptotically equivalent, in Le Cam’s sense, to a sequence of Gaussian white noise experiments as the number of observations tends to infinity. We propose a general constructive framework, based on approximation spaces, which allows asymptotic equivalence to be achieved, even in the cases of multivariate and random design.

  3. A wavelet whittle estimator of the memory parameter of a nonstationary Gaussian time series

    Moulines, E.; Roueff, F.; Taqqu, M. S.
    We consider a time series X = {X_k, k ∈ ℤ} with memory parameter d_0 ∈ ℝ. This time series is either stationary or can be made stationary after differencing a finite number of times. We study the “local Whittle wavelet estimator” of the memory parameter d_0, a wavelet-based semiparametric estimator obtained by maximizing a local Whittle pseudo-likelihood. The estimator may depend on a given finite range of scales or on a range which becomes infinite with the sample size. We show that the estimator is consistent and rate optimal if X is a linear process, and is asymptotically normal if X is Gaussian.

  4. Locally adaptive estimation of evolutionary wavelet spectra

    Van Bellegem, Sébastien; von Sachs, Rainer
    We introduce a wavelet-based model of local stationarity. This model enlarges the class of locally stationary wavelet processes and contains processes whose spectral density function may change very suddenly in time. A notion of time-varying wavelet spectrum is uniquely defined as a wavelet-type transform of the autocovariance function with respect to so-called autocorrelation wavelets. This leads to a natural representation of the autocovariance which is localized on scales. We propose a pointwise adaptive estimator of the time-varying spectrum. The behavior of the estimator is studied in homogeneous and inhomogeneous regions of the wavelet spectrum.

  5. Confidence bands in nonparametric time series regression

    Zhao, Zhibiao; Wu, Wei Biao
    We consider nonparametric estimation of mean regression and conditional variance (or volatility) functions in nonlinear stochastic regression models. Simultaneous confidence bands are constructed and the coverage probabilities are shown to be asymptotically correct. The imposed dependence structure allows applications in many linear and nonlinear autoregressive processes. The results are applied to the S&P 500 Index data.
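
    As a purely illustrative companion to this abstract, the numpy sketch below computes a Nadaraya–Watson estimate of the mean regression function and of the conditional variance, together with pointwise normal-approximation bands. The paper constructs simultaneous bands with asymptotically correct coverage; this sketch shows only pointwise bands, and the bandwidths and simulated heteroscedastic data are placeholders.

        import numpy as np

        def nw(xg, x, y, h):
            """Nadaraya-Watson smoother of y on x, evaluated on the grid xg (Gaussian kernel)."""
            w = np.exp(-0.5 * ((xg[:, None] - x[None, :]) / h) ** 2)
            w /= w.sum(axis=1, keepdims=True)
            return w @ y, w

        rng = np.random.default_rng(1)
        x = rng.uniform(-2, 2, 500)
        y = np.sin(x) + 0.3 * (1 + np.abs(x)) * rng.standard_normal(500)   # heteroscedastic noise

        xg = np.linspace(-2, 2, 101)
        mhat, w = nw(xg, x, y, h=0.2)                                      # mean regression estimate
        vhat, _ = nw(xg, x, (y - np.interp(x, xg, mhat)) ** 2, h=0.3)      # conditional variance estimate
        se = np.sqrt((w ** 2).sum(axis=1) * vhat)                          # approximate pointwise standard error
        lower, upper = mhat - 1.96 * se, mhat + 1.96 * se                  # 95% pointwise band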

  6. General frequentist properties of the posterior profile distribution

    Cheng, Guang; Kosorok, Michael R.
    In this paper, inference for the parametric component of a semiparametric model based on sampling from the posterior profile distribution is thoroughly investigated from the frequentist viewpoint. The higher-order validity of the profile sampler obtained in Cheng and Kosorok [Ann. Statist. 36 (2008)] is extended to semiparametric models in which the infinite dimensional nuisance parameter may not have a root-n convergence rate. This is a nontrivial extension because it requires a delicate analysis of the entropy of the semiparametric models involved. We find that the accuracy of inferences based on the profile sampler improves as the convergence rate of the...

  7. Higher order semiparametric frequentist inference with the profile sampler

    Cheng, Guang; Kosorok, Michael R.
    We consider higher-order frequentist inference for the parametric component of a semiparametric model based on sampling from the posterior profile distribution. The first-order validity of this procedure, established by Lee, Kosorok and Fine in [J. Amer. Statist. Assoc. 100 (2005) 960–969], is extended to second-order validity in the setting where the infinite-dimensional nuisance parameter achieves the parametric rate. Specifically, we obtain higher-order estimates of the maximum profile likelihood estimator and of the efficient Fisher information. Moreover, we prove that an exact frequentist confidence interval for the parametric component at level α can be estimated by the α-level...

  8. Multiscale inference about a density

    Dümbgen, Lutz; Walther, Günther
    We introduce a multiscale test statistic based on local order statistics and spacings that provides simultaneous confidence statements for the existence and location of local increases and decreases of a density or a failure rate. The procedure provides guaranteed finite-sample significance levels, is easy to implement and possesses certain asymptotic optimality and adaptivity properties.

  9. Searching for a trail of evidence in a maze

    Arias-Castro, Ery; Candès, Emmanuel J.; Helgason, Hannes; Zeitouni, Ofer
    Consider a graph with a set of vertices and oriented edges connecting pairs of vertices. Each vertex is associated with a random variable and these are assumed to be independent. In this setting, suppose we wish to solve the following hypothesis testing problem: under the null, the random variables have common distribution N(0, 1) while under the alternative, there is an unknown path along which random variables have distribution N(μ, 1), μ > 0, and distribution N(0, 1) away from it. For which values of the mean shift μ can one reliably detect and for which values is this impossible? Consider, for...
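
    A toy numpy simulation of this detection problem on a small directed grid, where admissible paths move only right or down and the maximum path sum (computed by dynamic programming) is used as a simple scan statistic. The grid size, the planted path, and μ are illustrative, and this statistic is not the calibrated procedure studied in the paper.

        import numpy as np

        def max_path_sum(z):
            """Largest sum of z over monotone (right/down) paths from the top-left to the bottom-right cell."""
            m, n = z.shape
            dp = np.full((m, n), -np.inf)
            dp[0, 0] = z[0, 0]
            for i in range(m):
                for j in range(n):
                    if i > 0:
                        dp[i, j] = max(dp[i, j], dp[i - 1, j] + z[i, j])
                    if j > 0:
                        dp[i, j] = max(dp[i, j], dp[i, j - 1] + z[i, j])
            return dp[-1, -1]

        rng = np.random.default_rng(2)
        m = n = 30
        mu = 0.6
        null = rng.standard_normal((m, n))
        alt = rng.standard_normal((m, n))
        alt[0, :] += mu                                # plant the mean shift along one monotone path:
        alt[1:, -1] += mu                              # across the top row, then down the last column
        print(max_path_sum(null), max_path_sum(alt))   # the planted path inflates the scan statistic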

  10. Semiparametric detection of significant activation for brain fMRI

    Zhang, Chunming; Yu, Tao
    Functional magnetic resonance imaging (fMRI) aims to locate activated regions in human brains when specific tasks are performed. The conventional tool for analyzing fMRI data applies some variant of the linear model, which is restrictive in modeling assumptions. To yield more accurate prediction of the time-course behavior of neuronal responses, the semiparametric inference for the underlying hemodynamic response function is developed to identify significantly activated voxels. Under mild regularity conditions, we demonstrate that a class of the proposed semiparametric test statistics, based on the local linear estimation technique, follow χ² distributions under null hypotheses for a number of useful hypotheses....

  11. Fence methods for mixed model selection

    Jiang, Jiming; Rao, J. Sunil; Gu, Zhonghua; Nguyen, Thuan
    Many model search strategies involve trading off model fit with model complexity in a penalized goodness of fit measure. Asymptotic properties for these types of procedures in settings like linear regression and ARMA time series have been studied, but these do not naturally extend to nonstandard situations such as mixed effects models, where a simple definition of the sample size is not meaningful. This paper introduces a new class of strategies, known as fence methods, for mixed model selection, which includes linear and generalized linear mixed models. The idea involves a procedure to isolate a subgroup of what are known as...

  12. Dimension reduction based on constrained canonical correlation and variable filtering

    Zhou, Jianhui; He, Xuming
    The “curse of dimensionality” has remained a challenge for high-dimensional data analysis in statistics. The sliced inverse regression (SIR) and canonical correlation (CANCOR) methods aim to reduce the dimensionality of data by replacing the explanatory variables with a small number of composite directions without losing much information. However, the estimated composite directions generally involve all of the variables, making their interpretation difficult. To simplify the direction estimates, Ni, Cook and Tsai [Biometrika 92 (2005) 242–247] proposed the shrinkage sliced inverse regression (SSIR) based on SIR. In this paper, we propose the constrained canonical correlation (C³) method based on CANCOR, followed...
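
    For orientation, a compact numpy sketch of plain sliced inverse regression (SIR), the starting point this abstract builds on, rather than the proposed C³ method: standardize the predictors, average them within slices of the response, and take the leading eigenvectors of the weighted covariance of the slice means. The slice count, number of directions, and single-index data below are illustrative.

        import numpy as np

        def sir_directions(x, y, n_slices=10, n_dirs=1):
            """Plain SIR estimate of the effective dimension-reduction directions."""
            n, p = x.shape
            mu, sigma = x.mean(axis=0), np.cov(x, rowvar=False)
            root_inv = np.linalg.inv(np.linalg.cholesky(sigma))   # inverse Cholesky factor of the covariance
            z = (x - mu) @ root_inv.T                             # standardized predictors
            slices = np.array_split(np.argsort(y), n_slices)      # slice the sample by the response
            means = np.vstack([z[idx].mean(axis=0) for idx in slices])
            weights = np.array([len(idx) for idx in slices]) / n
            cov_means = (means.T * weights) @ means               # weighted covariance of slice means
            _, vecs = np.linalg.eigh(cov_means)
            dirs = root_inv.T @ vecs[:, ::-1][:, :n_dirs]         # map back to the original predictor scale
            return dirs / np.linalg.norm(dirs, axis=0)

        # toy single-index model: y depends on x only through x @ beta
        rng = np.random.default_rng(3)
        x = rng.standard_normal((500, 6))
        beta = np.array([1.0, -1.0, 0, 0, 0, 0]) / np.sqrt(2)
        y = np.tanh(x @ beta) + 0.1 * rng.standard_normal(500)
        print(sir_directions(x, y).ravel())                       # roughly proportional to beta (up to sign)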

  13. Statistics of extremes by oracle estimation

    Grama, Ion; Spokoiny, Vladimir
    We use the fitted Pareto law to construct an accompanying approximation of the excess distribution function. A rule for selecting the location of the excess distribution function is proposed, based on a stagewise lack-of-fit testing procedure. Our main result is an oracle type inequality for the Kullback–Leibler loss.
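
    A small numpy illustration of the peaks-over-threshold idea underlying this entry: fit a Pareto law to the excesses above a threshold (here via the Hill estimator) and use it to approximate the excess distribution function. The threshold below is fixed by hand at the 95% quantile, whereas the paper's contribution is an automatic, lack-of-fit-based selection rule with an oracle inequality.

        import numpy as np

        rng = np.random.default_rng(4)
        x = rng.pareto(2.5, 5000) + 1.0                    # heavy-tailed sample with true tail index 2.5

        u = np.quantile(x, 0.95)                           # hand-picked threshold (the paper selects it adaptively)
        exceed = x[x > u]
        alpha_hat = 1.0 / np.mean(np.log(exceed / u))      # Hill estimator of the tail index

        def excess_cdf(t):
            """Fitted Pareto approximation of P(X - u <= t | X > u)."""
            return 1.0 - (1.0 + np.asarray(t) / u) ** (-alpha_hat)

        print(alpha_hat)                                   # roughly 2.5
        print(excess_cdf([0.5, 1.0, 5.0]))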

  14. “Preconditioning” for feature selection and regression in high-dimensional problems

    Paul, Debashis; Bair, Eric; Hastie, Trevor; Tibshirani, Robert
    We consider regression problems where the number of predictors greatly exceeds the number of observations. We propose a method for variable selection that first estimates the regression function, yielding a “preconditioned” response variable. The primary method used for this initial regression is supervised principal components. Then we apply a standard procedure such as forward stepwise selection or the LASSO to the preconditioned response variable. In a number of simulated and real data examples, this two-step procedure outperforms forward stepwise selection or the usual LASSO (applied directly to the raw outcome). We also show that under a certain Gaussian latent variable...
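
    A schematic version of the two-step recipe described above, assuming scikit-learn is available: screen predictors by absolute correlation with y, take the first principal component of the screened block, use its fitted values as the "preconditioned" response, and then run the LASSO on that response. The screening threshold, LASSO penalty, and simulated data are placeholder choices rather than the paper's tuning recommendations.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import Lasso, LinearRegression

        rng = np.random.default_rng(5)
        n, p, k = 100, 1000, 5                             # far more predictors than observations
        x = rng.standard_normal((n, p))
        beta = np.zeros(p)
        beta[:k] = 2.0
        y = x @ beta + rng.standard_normal(n)

        # Step 1: supervised principal components -> preconditioned response
        corr = np.abs([np.corrcoef(x[:, j], y)[0, 1] for j in range(p)])
        screened = corr > np.quantile(corr, 0.98)          # keep the predictors most correlated with y
        pc1 = PCA(n_components=1).fit_transform(x[:, screened])
        y_tilde = LinearRegression().fit(pc1, y).predict(pc1)

        # Step 2: LASSO applied to the preconditioned response
        fit = Lasso(alpha=0.1).fit(x, y_tilde)
        print(np.flatnonzero(fit.coef_))                   # ideally recovers indices 0..4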

  15. The sparsity and bias of the Lasso selection in high-dimensional linear regression

    Zhang, Cun-Hui; Huang, Jian
    Meinshausen and Bühlmann [Ann. Statist. 34 (2006) 1436–1462] showed that, for neighborhood selection in Gaussian graphical models, under a neighborhood stability condition, the LASSO is consistent, even when the number of variables is of greater order than the sample size. Zhao and Yu [J. Mach. Learn. Res. 7 (2006) 2541–2567] formalized the neighborhood stability condition in the context of linear regression as a strong irrepresentable condition. That paper showed that under this condition, the LASSO selects exactly the set of nonzero regression coefficients, provided that these coefficients are bounded away from zero at a certain rate. In this paper, the...
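
    The strong irrepresentable condition mentioned here can be stated concretely: splitting the predictors into the true support S and its complement, it requires ||C_{S^c S} C_{SS}^{-1} sign(β_S)||_∞ < 1. The numpy sketch below checks it for an illustrative equicorrelation covariance; the design, support, and correlation level are placeholders.

        import numpy as np

        def irrepresentable_gap(cov, support, sign_beta):
            """1 - ||C_{S^c,S} C_{S,S}^{-1} sign(beta_S)||_inf; positive means the condition holds."""
            s = np.asarray(support)
            sc = np.setdiff1d(np.arange(cov.shape[0]), s)
            c_ss = cov[np.ix_(s, s)]
            c_scs = cov[np.ix_(sc, s)]
            return 1.0 - np.abs(c_scs @ np.linalg.solve(c_ss, sign_beta)).max()

        p, rho = 20, 0.3
        cov = (1 - rho) * np.eye(p) + rho * np.ones((p, p))   # equicorrelation design
        support = [0, 1, 2]
        print(irrepresentable_gap(cov, support, np.ones(3)))  # > 0: LASSO sign consistency is achievable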

  16. Rejoinder: One-step sparse estimates in nonconcave penalized likelihood models

    Zou, Hui; Li, Runze
    We would like to take this opportunity to thank the discussants for their thoughtful comments and encouragement on our work. The discussants raised a number of issues from theoretical as well as computational perspectives. Our rejoinder will try to provide some insights into these issues and address specific questions asked by the discussants.

  17. Discussion: One-step sparse estimates in nonconcave penalized likelihood models

    Zhang, Cun-Hui

  18. Discussion: One-step sparse estimates in nonconcave penalized likelihood models: Who cares if it is a white cat or a black cat?

    Meng, Xiao-Li

  19. Discussion: One-step sparse estimates in nonconcave penalized likelihood models

    Bühlmann, Peter; Meier, Lukas

  20. One-step sparse estimates in nonconcave penalized likelihood models

    Zou, Hui; Li, Runze
    Fan and Li propose a family of variable selection methods via penalized likelihood using concave penalty functions. The nonconcave penalized likelihood estimators enjoy the oracle properties, but maximizing the penalized likelihood function is computationally challenging, because the objective function is nondifferentiable and nonconcave. In this article, we propose a new unified algorithm based on the local linear approximation (LLA) for maximizing the penalized likelihood for a broad class of concave penalty functions. Convergence and other theoretical properties of the LLA algorithm are established. A distinguishing feature of the LLA algorithm is that at each LLA step, the LLA estimator can...
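
    A self-contained numpy sketch of the idea behind a one-step LLA update: starting from an initial estimate, a concave penalty (SCAD is used here) is linearized so that each coefficient receives the weight p'_λ(|β̂_j|), and the resulting weighted ℓ1 problem is solved by coordinate descent. The tuning constants, initializer, and simulated data are illustrative, not the paper's recommendations.

        import numpy as np

        def scad_deriv(t, lam, a=3.7):
            """Derivative of the SCAD penalty; used as the per-coefficient LLA weight."""
            t = np.abs(t)
            return np.where(t <= lam, lam, np.maximum(a * lam - t, 0.0) / (a - 1.0))

        def weighted_lasso(x, y, w, n_iter=200):
            """Coordinate descent for 0.5 * ||y - x b||^2 / n + sum_j w_j |b_j|."""
            n, p = x.shape
            b = np.zeros(p)
            col_sq = (x ** 2).sum(axis=0) / n
            for _ in range(n_iter):
                for j in range(p):
                    r_j = y - x @ b + x[:, j] * b[j]                 # partial residual excluding coordinate j
                    rho = x[:, j] @ r_j / n
                    b[j] = np.sign(rho) * max(abs(rho) - w[j], 0.0) / col_sq[j]
            return b

        def one_step_lla(x, y, lam, b_init):
            return weighted_lasso(x, y, scad_deriv(b_init, lam))

        rng = np.random.default_rng(6)
        n, p = 200, 10
        x = rng.standard_normal((n, p))
        beta = np.array([3.0, -2.0, 1.5] + [0.0] * (p - 3))
        y = x @ beta + rng.standard_normal(n)
        b0 = np.linalg.lstsq(x, y, rcond=None)[0]                    # initial (here OLS) estimate
        print(np.round(one_step_lla(x, y, lam=0.3, b_init=b0), 2))   # small coefficients are set to zero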
