Showing resources 1 - 20 of 200

  1. Dimension reduction-based significance testing in nonparametric regression

    Zhu, Xuehu; Zhu, Lixing
    A dimension reduction-based adaptive-to-model test is proposed for significance of a subset of covariates in the context of a nonparametric regression model. Unlike existing locally smoothing significance tests, the new test behaves like a locally smoothing test as if the number of covariates were just that under the null hypothesis, and it can detect local alternatives distinct from the null hypothesis at a rate that depends only on the number of covariates under the null hypothesis. Thus, the curse of dimensionality is largely alleviated when nonparametric estimation is inevitably required. In the cases where there are many insignificant...

  2. High-dimensional robust precision matrix estimation: Cellwise corruption under $\epsilon $-contamination

    Loh, Po-Ling; Tan, Xin Lu
    We analyze the statistical consistency of robust estimators for precision matrices in high dimensions. We focus on a contamination mechanism acting cellwise on the data matrix. The estimators we analyze are formed by plugging appropriately chosen robust covariance matrix estimators into the graphical Lasso and CLIME. Such estimators were recently proposed in the robust statistics literature, but only analyzed mathematically from the point of view of the breakdown point. This paper provides complementary high-dimensional error bounds for the precision matrix estimators that reveal the interplay between the dimensionality of the problem and the degree of contamination permitted in the observed...

  3. A two stage $k$-monotone B-spline regression estimator: Uniform Lipschitz property and optimal convergence rate

    Lebair, Teresa M.; Shen, Jinglai
    This paper considers $k$-monotone estimation and the related asymptotic performance analysis over a suitable Hölder class for general $k$. A novel two stage $k$-monotone B-spline estimator is proposed: in the first stage, an unconstrained estimator with optimal asymptotic performance is considered; in the second stage, a $k$-monotone B-spline estimator is constructed (roughly) by projecting the unconstrained estimator onto a cone of $k$-monotone splines. To study the asymptotic performance of the second stage estimator under the sup-norm and other risks, a critical uniform Lipschitz property for the $k$-monotone B-spline estimator is established under the $\ell_{\infty }$-norm. This property uniformly bounds the...

  4. Uniformly valid confidence sets based on the Lasso

    Ewald, Karl; Schneider, Ulrike
    In a linear regression model of fixed dimension $p\leq n$, we construct confidence regions for the unknown parameter vector based on the Lasso estimator that uniformly and exactly hold a prescribed coverage probability in finite samples as well as in an asymptotic setup. We thereby quantify estimation uncertainty as well as the “post-model selection error” of this estimator. More concretely, in finite samples with Gaussian errors and asymptotically in the case where the Lasso estimator is tuned to perform conservative model selection, we derive exact formulas for computing the minimal coverage probability over the entire parameter space for a large class of...

  5. Bayesian nonparametric estimation of survival functions with multiple-samples information

    Riva Palacio, Alan; Leisen, Fabrizio
    In many real problems, dependence structures more general than exchangeability are required. For instance, in some settings partial exchangeability is a more reasonable assumption. For this reason, vectors of dependent Bayesian nonparametric priors have recently gained popularity. They provide flexible models which are tractable from a computational and theoretical point of view. In this paper, we focus on their use for estimating multivariate survival functions. Our model extends the work of Epifani and Lijoi (2010) to an arbitrary dimension and allows us to model the dependence among survival times of different groups of observations. Theoretical results about the posterior behaviour of...

  6. Conditional kernel density estimation for some incomplete data models

    Yan, Ting; Qu, Liangqiang; Li, Zhaohai; Yuan, Ao
    A class of density estimators based on observed incomplete data is proposed. The method is to use a conditional kernel, defined as the expectation of a given kernel for the complete data conditioning on the observed data, to construct the density estimator. We study such kernel density estimators for several commonly used incomplete data models and establish their basic asymptotic properties. Some characteristics different from the classical kernel estimators are discovered. For instance, the asymptotic results of the proposed estimator do not depend on the choice of the kernel $k(\cdot )$. A simulation study is conducted to evaluate the performance of...

  7. Fast adaptive estimation of log-additive exponential models in Kullback-Leibler divergence

    Butucea, Cristina; Delmas, Jean-François; Dutfoy, Anne; Fischer, Richard
    We study the problem of nonparametric estimation of probability density functions (pdf) with a product form on the domain $\triangle =\{(x_{1},\ldots ,x_{d})\in \mathbb{R}^{d}:0\leq x_{1}\leq \ldots \leq x_{d}\leq 1\}$. Such pdf’s appear in the random truncation model as the joint pdf of the observations. They are also obtained as maximum entropy distributions of order statistics with given marginals. We propose an estimation method based on the approximation of the logarithm of the density by a carefully chosen family of basis functions. We show that the method achieves a fast convergence rate in probability with respect to the Kullback-Leibler divergence for pdf’s whose...

  8. Convex and non-convex regularization methods for spatial point processes intensity estimation

    Choiruddin, Achmad; Coeurjolly, Jean-François; Letué, Frédérique
    This paper deals with feature selection procedures for spatial point processes intensity estimation. We consider regularized versions of estimating equations based on Campbell theorem. In particular, we consider two classical functions: the Poisson likelihood and the logistic regression likelihood. We provide general conditions on the spatial point processes and on penalty functions which ensure oracle property, consistency, and asymptotic normality under the increasing domain setting. We discuss the numerical implementation and assess finite sample properties in simulation studies. Finally, an application to tropical forestry datasets illustrates the use of the proposed method.

  9. An MM algorithm for estimation of a two component semiparametric density mixture with a known component

    Shen, Zhou; Levine, Michael; Shang, Zuofeng
    We consider a semiparametric mixture of two univariate density functions where one of them is known while the weight and the other function are unknown. We do not assume any additional structure on the unknown density function. For this mixture model, we derive a new sufficient identifiability condition and pinpoint a specific class of distributions describing the unknown component for which this condition is mostly satisfied. We also suggest a novel approach to estimation of this model that is based on an idea of applying a maximum smoothed likelihood to what would otherwise have been an ill-posed problem. We introduce...
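As a toy illustration of the two-component setup (not the authors' maximum smoothed likelihood estimator: here the unknown component is replaced by a fixed Gaussian so that only the mixing weight is updated), an EM-style iteration for the weight might look like:

```python
import math, random

def normal_pdf(x, mu):
    # Unit-variance Gaussian density, used for both components in this sketch.
    return math.exp(-(x - mu) ** 2 / 2.0) / math.sqrt(2.0 * math.pi)

def em_weight(data, f_known, f_other, n_iter=100):
    """EM-style update of the weight p in p*f_other + (1-p)*f_known.
    Simplification: f_other is held fixed, whereas the paper estimates
    the unknown component nonparametrically."""
    p = 0.5
    for _ in range(n_iter):
        # E-step: posterior probability each point came from the "other" component.
        resp = [p * f_other(x) / (p * f_other(x) + (1 - p) * f_known(x))
                for x in data]
        # M-step: the weight is the average responsibility.
        p = sum(resp) / len(resp)
    return p

random.seed(0)
# Simulate: known component N(0,1), other component N(3,1), true weight 0.3.
data = [random.gauss(3.0, 1.0) if random.random() < 0.3 else random.gauss(0.0, 1.0)
        for _ in range(2000)]
p_hat = em_weight(data, lambda x: normal_pdf(x, 0.0), lambda x: normal_pdf(x, 3.0))
```

With well-separated components the recovered weight lands close to the true 0.3.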

  10. Supervised multiway factorization

    Lock, Eric F.; Li, Gen
    We describe a probabilistic PARAFAC/CANDECOMP (CP) factorization for multiway (i.e., tensor) data that incorporates auxiliary covariates, SupCP. SupCP generalizes the supervised singular value decomposition (SupSVD) for vector-valued observations, to allow for observations that have the form of a matrix or higher-order array. Such data are increasingly encountered in biomedical research and other fields. We use a novel likelihood-based latent variable representation of the CP factorization, in which the latent variables are informed by additional covariates. We give conditions for identifiability, and develop an EM algorithm for simultaneous estimation of all model parameters. SupCP can be used for dimension reduction, capturing...

  11. A strong converse bound for multiple hypothesis testing, with applications to high-dimensional estimation

    Venkataramanan, Ramji; Johnson, Oliver
    In statistical inference problems, we wish to obtain lower bounds on the minimax risk, that is, to bound the performance of any possible estimator. A standard technique to do this involves the use of Fano’s inequality. However, recent work in an information-theoretic setting has shown that an argument based on binary hypothesis testing gives tighter converse results (error lower bounds) than Fano for channel coding problems. We adapt this technique to the statistical setting, and argue that Fano’s inequality can always be replaced by this approach to obtain tighter lower bounds that can be easily computed and are asymptotically sharp....
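For reference, the classical Fano baseline that the paper argues can be replaced reads, in one common form, $P(\text{error})\geq 1-(I+\log 2)/\log M$ for $M$ hypotheses with mutual information $I$ (natural logarithms); a minimal sketch:

```python
import math

def fano_lower_bound(mutual_info, n_hypotheses):
    """Classical Fano lower bound on the error probability when testing
    among M hypotheses: P(error) >= 1 - (I + log 2) / log M (natural logs).
    Clamped at zero, since the bound is vacuous when I is large."""
    return max(0.0, 1.0 - (mutual_info + math.log(2.0)) / math.log(n_hypotheses))

bound = fano_lower_bound(1.0, 16)  # M = 16 hypotheses, I = 1 nat
```

Tightening such bounds, as the paper does via binary-testing arguments, directly sharpens minimax lower bounds obtained through this reduction.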

  12. Inference for heavy tailed stationary time series based on sliding blocks

    Bücher, Axel; Segers, Johan
    The block maxima method in extreme value theory consists of fitting an extreme value distribution to a sample of block maxima extracted from a time series. Traditionally, the maxima are taken over disjoint blocks of observations. Alternatively, the blocks can be chosen to slide through the observation period, yielding a larger number of overlapping blocks. Inference based on sliding blocks is found to be more efficient than inference based on disjoint blocks. The asymptotic variance of the maximum likelihood estimator of the Fréchet shape parameter is reduced by more than 18%. Interestingly, the amount of the efficiency gain is the...
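The two block-extraction schemes compared in the abstract are easy to sketch; fitting an extreme-value (e.g. Fréchet) distribution to the resulting maxima is the subsequent inference step and is omitted here:

```python
import random

def disjoint_block_maxima(x, b):
    """Maxima over consecutive non-overlapping blocks of length b."""
    return [max(x[i:i + b]) for i in range(0, len(x) - b + 1, b)]

def sliding_block_maxima(x, b):
    """Maxima over all n - b + 1 overlapping blocks of length b."""
    return [max(x[i:i + b]) for i in range(len(x) - b + 1)]

random.seed(0)
# Heavy-tailed sample (Pareto with shape 2), block length 50.
x = [random.paretovariate(2.0) for _ in range(1000)]
dj = disjoint_block_maxima(x, 50)  # 20 block maxima
sl = sliding_block_maxima(x, 50)   # 951 overlapping block maxima
```

The sliding scheme yields many more (dependent) maxima from the same data, which is the source of the efficiency gain the paper quantifies.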

  13. Exact post-selection inference for the generalized lasso path

    Hyun, Sangwon; G’Sell, Max; Tibshirani, Ryan J.
    We study tools for inference conditioned on model selection events that are defined by the generalized lasso regularization path. The generalized lasso estimate is given by the solution of a penalized least squares regression problem, where the penalty is the $\ell_{1}$ norm of a matrix $D$ times the coefficient vector. The generalized lasso path collects these estimates as the penalty parameter $\lambda$ varies (from $\infty$ down to 0). Leveraging a (sequential) characterization of this path from Tibshirani and Taylor [37], and recent advances in post-selection inference from Lee et al. [22] and Tibshirani et al. [38], we develop exact hypothesis tests...

  14. Feasible invertibility conditions and maximum likelihood estimation for observation-driven models

    Blasques, Francisco; Gorgi, Paolo; Koopman, Siem Jan; Wintenberger, Olivier
    Invertibility conditions for observation-driven time series models often fail to be guaranteed in empirical applications. As a result, the asymptotic theory of maximum likelihood and quasi-maximum likelihood estimators may be compromised. We derive considerably weaker conditions that can be used in practice to ensure the consistency of the maximum likelihood estimator for a wide class of observation-driven time series models. Our consistency results hold for both correctly specified and misspecified models. We also obtain an asymptotic test and confidence bounds for the unfeasible “true” invertibility region of the parameter space. The practical relevance of the theory is highlighted in a...
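A common empirical diagnostic related to invertibility (a hypothetical illustration, not the paper's conditions) is to run the filtering recursion from two different initializations and verify that the paths contract toward each other:

```python
def filter_gap(y, update, f0_a, f0_b):
    """Run an observation-driven recursion f_{t+1} = update(f_t, y_t) from two
    starting values; under invertibility the gap should shrink to zero,
    so the filtered path forgets its initialization."""
    fa, fb, gaps = f0_a, f0_b, []
    for yt in y:
        fa, fb = update(fa, yt), update(fb, yt)
        gaps.append(abs(fa - fb))
    return gaps

# GARCH(1,1)-style variance recursion: f_{t+1} = 0.1 + 0.1*y_t^2 + 0.8*f_t.
# The recursion is linear in f with coefficient 0.8, so the gap between the
# two paths decays geometrically regardless of the data.
gaps = filter_gap([0.5 * (-1) ** t for t in range(100)],
                  lambda f, yt: 0.1 + 0.1 * yt * yt + 0.8 * f, 1.0, 2.0)
```

For nonlinear updates the contraction coefficient is data-dependent, which is exactly why verifiable sufficient conditions like those in the paper are needed.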

  15. Ridge regression for the functional concurrent model

    Manrique, Tito; Crambes, Christophe; Hilgert, Nadine
    The aim of this paper is to propose estimators of the unknown functional coefficients in the Functional Concurrent Model (FCM). We extend the Ridge Regression method developed in the classical linear case to the functional data framework. Two distinct penalized estimators are obtained: one with a constant regularization parameter and the other with a functional one. We prove convergence in probability of these estimators, together with rates. Then we study the practical choice of both regularization parameters. Additionally, we present some simulations that show the accuracy of these estimators despite a very low signal-to-noise ratio.
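In its simplest pointwise form (a sketch with a constant regularization parameter; the paper's estimators, including the functional-parameter variant, are richer), a ridge estimate of $\beta(t)$ in $Y_i(t)=\beta(t)X_i(t)+\varepsilon_i(t)$ evaluated on a common grid might be:

```python
def fcm_ridge(X, Y, lam):
    """Pointwise ridge estimate for the functional concurrent model:
    beta_hat(t_j) = sum_i X_i[j]*Y_i[j] / (sum_i X_i[j]^2 + lam),
    where X[i] and Y[i] are curves sampled on a common grid."""
    T = len(X[0])
    return [sum(x[j] * y[j] for x, y in zip(X, Y)) /
            (sum(x[j] ** 2 for x in X) + lam)
            for j in range(T)]

# Noiseless toy data with true beta(t) = 2 at every grid point.
X = [[1.0, 2.0, 3.0], [2.0, 1.0, 4.0]]
Y = [[2.0, 4.0, 6.0], [4.0, 2.0, 8.0]]
beta_hat = fcm_ridge(X, Y, lam=1e-8)
```

With noise present, lam trades bias against variance; choosing it well (possibly varying with t) is the practical question the paper addresses.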

  16. Supervised dimensionality reduction via distance correlation maximization

    Vepakomma, Praneeth; Tonde, Chetan; Elgammal, Ahmed
    In our work, we propose a novel formulation for supervised dimensionality reduction based on a nonlinear dependency criterion called statistical distance correlation (Székely et al., 2007). We propose an objective which is free of distributional assumptions on regression variables and of regression model assumptions. Our proposed formulation is based on learning a low-dimensional feature representation $\mathbf{z}$, which maximizes the squared sum of distance correlations between low-dimensional features $\mathbf{z}$ and response $y$, and also between features $\mathbf{z}$ and covariates $\mathbf{x}$. We propose a novel algorithm to optimize our proposed objective using the Generalized Minimization Maximization method of Parizi et al. (2015). We...
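The dependence criterion itself is straightforward to compute from pairwise distances (a pure-Python sketch of the sample statistic of Székely et al., 2007, for univariate samples; the optimization over feature representations is the hard part and is not shown):

```python
import math

def _dist_matrix(v):
    # Pairwise absolute-difference distance matrix of a univariate sample.
    return [[abs(a - b) for b in v] for a in v]

def _double_center(D):
    # Subtract row means, column means, and add back the grand mean.
    n = len(D)
    row = [sum(r) / n for r in D]
    col = [sum(D[i][j] for i in range(n)) / n for j in range(n)]
    grand = sum(row) / n
    return [[D[i][j] - row[i] - col[j] + grand for j in range(n)]
            for i in range(n)]

def distance_correlation(x, y):
    """Sample distance correlation: dCov / sqrt(dVar_x * dVar_y),
    built from double-centered distance matrices."""
    n = len(x)
    A = _double_center(_dist_matrix(x))
    B = _double_center(_dist_matrix(y))
    dcov2 = sum(A[i][j] * B[i][j] for i in range(n) for j in range(n)) / n ** 2
    dvarx = sum(a * a for r in A for a in r) / n ** 2
    dvary = sum(b * b for r in B for b in r) / n ** 2
    denom = math.sqrt(dvarx * dvary)
    return math.sqrt(max(dcov2, 0.0) / denom) if denom > 0 else 0.0

x = [1.0, 2.0, 3.0, 4.0, 5.0]
dc_lin = distance_correlation(x, [2.0 * v for v in x])      # exact linear relation
dc_other = distance_correlation(x, [1.0, -1.0, 1.0, -1.0, 1.0])
```

An exact linear relationship gives distance correlation 1; the statistic is always in [0, 1] and is zero only under independence (in the population version).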

  17. Least tail-trimmed absolute deviation estimation for autoregressions with infinite/finite variance

    Wu, Rongning; Cui, Yunwei
    We propose least tail-trimmed absolute deviation estimation for autoregressive processes with infinite/finite variance. We explore the large sample properties of the resulting estimator and establish its asymptotic normality. Moreover, we study convergence rates of the estimator under different moment settings and show that it attains a super-$\sqrt{n}$ convergence rate when the innovation variance is infinite. Simulation studies are carried out to examine the finite-sample performance of the proposed method and of the associated statistical inference procedures. A real example is also presented.
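For an AR(1) model the basic idea can be sketched as minimizing the sum of absolute residuals after discarding the largest ones (a crude grid search; the paper's trimming is tail-specific and its asymptotic theory far more delicate):

```python
import math, random

def trimmed_lad_ar1(x, n_trim):
    """Grid-search sketch of trimmed LAD for AR(1): choose phi minimizing
    the sum of absolute residuals after dropping the n_trim largest."""
    def loss(phi):
        res = sorted(abs(x[t] - phi * x[t - 1]) for t in range(1, len(x)))
        return sum(res[:len(res) - n_trim])  # trim the n_trim largest residuals
    grid = [i / 200.0 for i in range(-199, 200)]  # candidate phi in (-1, 1)
    return min(grid, key=loss)

random.seed(2)
# AR(1) with phi = 0.5 and Cauchy (infinite-variance) innovations,
# generated by the inverse-CDF transform of a uniform.
x = [0.0]
for _ in range(500):
    x.append(0.5 * x[-1] + math.tan(math.pi * (random.random() - 0.5)))
phi_hat = trimmed_lad_ar1(x, n_trim=25)
```

With infinite-variance innovations the occasional huge regressor values pin down phi very precisely, which is the intuition behind the super-$\sqrt{n}$ rate.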

  18. Estimation of the asymptotic variance of univariate and multivariate random fields and statistical inference

    Prause, Annabel; Steland, Ansgar
    Correlated random fields are a common way to model dependence structures in high-dimensional data, especially for data collected in imaging. One important parameter characterizing the degree of dependence is the asymptotic variance, which adds up all autocovariances in the temporal and spatial domain. In particular, it arises in the standardization of test statistics based on partial sums of random fields, and thus the construction of tests requires its estimation. In this paper we propose consistent estimators for this parameter for strictly stationary $\varphi $-mixing random fields with arbitrary dimension of the domain and taking values in a Euclidean space of arbitrary...
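In the one-dimensional (time-series) special case, "adding up all autocovariances" is the familiar long-run variance; a Bartlett-weighted sketch (the paper's estimators for general $d$-dimensional $\varphi$-mixing fields are more involved):

```python
import random

def long_run_variance(x, bandwidth):
    """Bartlett-kernel estimator of the long-run (asymptotic) variance
    sigma^2 = sum_h gamma(h) of a stationary univariate series:
    gamma(0) + 2 * sum_{h=1}^{bandwidth} (1 - h/(bandwidth+1)) * gamma(h)."""
    n = len(x)
    m = sum(x) / n
    def gamma(h):
        # Sample autocovariance at lag h (divided by n, the usual convention).
        return sum((x[t] - m) * (x[t + h] - m) for t in range(n - h)) / n
    s = gamma(0)
    for h in range(1, bandwidth + 1):
        s += 2.0 * (1.0 - h / (bandwidth + 1.0)) * gamma(h)
    return s

random.seed(3)
# For i.i.d. noise the long-run variance equals the marginal variance (here 1).
z = [random.gauss(0.0, 1.0) for _ in range(5000)]
sigma2 = long_run_variance(z, bandwidth=10)
```

The Bartlett weights keep the estimate positive semidefinite; the bandwidth choice plays the same bias-variance role as in the random-field setting.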

  19. Normalizing constants of log-concave densities

    Brosse, Nicolas; Durmus, Alain; Moulines, Éric
    We derive explicit bounds for the computation of normalizing constants $Z$ for log-concave densities $\pi =\mathrm{e}^{-U}/Z$ w.r.t. the Lebesgue measure on $\mathbb{R}^{d}$. Our approach relies on a Gaussian annealing combined with recent and precise bounds on the Unadjusted Langevin Algorithm [15]. Polynomial bounds in the dimension $d$ are obtained with an exponent that depends on the assumptions made on $U$. The algorithm also provides a theoretically grounded choice of the annealing sequence of variances. A numerical experiment supports our findings. Results of independent interest on the mean squared error of the empirical average of locally Lipschitz functions are established.
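The building block of the annealing scheme is the Unadjusted Langevin Algorithm update $x_{k+1}=x_k-\gamma \nabla U(x_k)+\sqrt{2\gamma}\,Z_k$; a one-dimensional sketch targeting a standard Gaussian (the paper's annealed sequence of variances and its error bounds are not reproduced here):

```python
import math, random

def ula_samples(grad_U, x0, step, n_iter, seed=0):
    """Unadjusted Langevin Algorithm for the density proportional to exp(-U):
    x_{k+1} = x_k - step * grad_U(x_k) + sqrt(2 * step) * Z_k."""
    rng = random.Random(seed)
    x, out = x0, []
    for _ in range(n_iter):
        x = x - step * grad_U(x) + math.sqrt(2.0 * step) * rng.gauss(0.0, 1.0)
        out.append(x)
    return out

# Target: standard Gaussian, i.e. U(x) = x^2/2 so grad_U(x) = x.
xs = ula_samples(lambda x: x, 0.0, step=0.05, n_iter=50000)
mean = sum(xs) / len(xs)
var = sum((v - mean) ** 2 for v in xs) / len(xs)
```

Because the Langevin discretization is never Metropolis-corrected, the chain carries an O(step) bias in its invariant law, which is exactly what the paper's precise ULA bounds control.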

  20. Efficient estimation in the partially linear quantile regression model for longitudinal data

    Kim, Seonjin; Cho, Hyunkeun Ryan
    The focus of this study is efficient estimation in a quantile regression model with partially linear coefficients for longitudinal data, where repeated measurements within each subject are likely to be correlated. We propose a weighted quantile regression approach for time-invariant and time-varying coefficient estimation. The proposed approach can employ two types of weights obtained from an empirical likelihood method to account for the within-subject correlation: the global weight using all observations and the local weight using observations in the neighborhood of the time point of interest. We investigate the influence of choice of weights on asymptotic estimation efficiency and find...
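At the heart of any quantile regression approach is the check-loss objective; a scalar sketch of a weighted sample quantile (the empirical-likelihood weights of the paper are replaced here by placeholder weights w_i):

```python
def check_loss(u, tau):
    """Quantile check function rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (1.0 if u < 0 else 0.0))

def weighted_quantile(y, w, tau):
    """Minimize the weighted check loss sum_i w_i * rho_tau(y_i - q).
    The loss is piecewise linear in q, so some data point is a minimizer;
    searching over the sample points suffices."""
    return min(y, key=lambda q: sum(wi * check_loss(yi - q, tau)
                                    for yi, wi in zip(y, w)))
```

With equal weights this recovers the ordinary sample quantile; informative weights reweight subjects to exploit within-subject correlation, as in the paper's global and local schemes.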
