Showing resources 1 - 20 of 31

  1. Adaptive posterior contraction rates for the horseshoe

    van der Pas, Stéphanie; Szabó, Botond; van der Vaart, Aad
    We investigate the frequentist properties of Bayesian procedures for estimation based on the horseshoe prior in the sparse multivariate normal means model. Previous theoretical results assumed that the sparsity level, that is, the number of signals, was known. We drop this assumption and characterize the behavior of the maximum marginal likelihood estimator (MMLE) of a key parameter of the horseshoe prior. We prove that the MMLE is an effective estimator of the sparsity level, in the sense that it leads to (near) minimax optimal estimation of the underlying mean vector generating the data. Besides this empirical Bayes procedure, we consider...
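
A hedged illustration of the prior itself (the hierarchy below is the standard horseshoe specification; nothing here reproduces the paper's MMLE analysis): the global scale tau plays the role of the sparsity-level parameter that the abstract's estimator targets.

```python
# Sketch of the horseshoe hierarchy: theta_i | lambda_i, tau ~ N(0, lambda_i^2 tau^2)
# with half-Cauchy local scales lambda_i ~ C+(0, 1). A smaller global scale tau
# encodes a sparser mean vector, which is why tau acts as the sparsity parameter.
import numpy as np

rng = np.random.default_rng(0)

def horseshoe_draws(n, tau, rng):
    """Draw n coefficients from the horseshoe prior with global scale tau."""
    lam = np.abs(rng.standard_cauchy(n))      # half-Cauchy(0, 1) local scales
    return rng.normal(0.0, lam * tau)

small_tau = horseshoe_draws(100_000, 0.01, rng)
large_tau = horseshoe_draws(100_000, 1.0, rng)

# Mass concentrates much closer to zero when tau is small.
print(np.median(np.abs(small_tau)) < np.median(np.abs(large_tau)))
```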

  2. Asymptotically minimax prediction in infinite sequence models

    Yano, Keisuke; Komaki, Fumiyasu
    We study asymptotically minimax predictive distributions in infinite sequence models. First, we discuss the connection between prediction in an infinite sequence model and prediction in a function model. Second, we construct an asymptotically minimax predictive distribution for the setting in which the parameter space is a known ellipsoid. We show that the Bayesian predictive distribution based on the Gaussian prior distribution is asymptotically minimax in the ellipsoid. Third, we construct an asymptotically minimax predictive distribution for any Sobolev ellipsoid. We show that the Bayesian predictive distribution based on the product of Stein’s priors is asymptotically minimax for any Sobolev ellipsoid....

  3. Estimation of Kullback-Leibler losses for noisy recovery problems within the exponential family

    Deledalle, Charles-Alban
    We address the question of estimating Kullback-Leibler losses rather than squared losses in recovery problems where the noise is distributed within the exponential family. Inspired by Stein's unbiased risk estimator (SURE), we exhibit conditions under which these losses can be estimated unbiasedly or with a controlled bias. Simulations on parameter selection problems in applications to image denoising and variable selection with Gamma and Poisson noises illustrate the value of Kullback-Leibler losses and of the proposed estimators.
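
As a hedged aside, the classical squared-loss SURE that the abstract generalizes can be checked by Monte Carlo in the Gaussian sequence model; the data-generating choices below are illustrative, not taken from the paper.

```python
# Illustrative sketch: SURE for soft-thresholding under Gaussian noise.
# Model: y_i = theta_i + eps_i, eps_i ~ N(0, sigma^2).
# SURE(t) = -n*sigma^2 + ||y - eta_t(y)||^2 + 2*sigma^2 * #{i : |y_i| > t}.
import numpy as np

def soft_threshold(y, t):
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def sure_soft(y, t, sigma):
    resid = y - soft_threshold(y, t)
    return (-y.size * sigma**2 + np.sum(resid**2)
            + 2 * sigma**2 * np.sum(np.abs(y) > t))

rng = np.random.default_rng(1)
theta = np.concatenate([np.full(20, 5.0), np.zeros(980)])  # sparse mean vector
sigma, t = 1.0, 2.0

risks, sures = [], []
for _ in range(200):
    y = theta + rng.normal(0.0, sigma, theta.size)
    risks.append(np.sum((soft_threshold(y, t) - theta) ** 2))
    sures.append(sure_soft(y, t, sigma))

# The Monte Carlo averages of SURE and of the true loss should nearly agree,
# reflecting the unbiasedness of the risk estimate.
rel_gap = abs(np.mean(sures) - np.mean(risks)) / np.mean(risks)
print(rel_gap < 0.15)
```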

  4. Semiparametrically efficient estimation of constrained Euclidean parameters

    Susyanto, Nanang; Klaassen, Chris A. J.
    Consider a quite arbitrary (semi)parametric model for i.i.d. observations with a Euclidean parameter of interest, and assume that an asymptotically (semi)parametrically efficient estimator of it is given. If the parameter of interest is known to lie on a general surface (the image of a continuously differentiable vector-valued function), we have a submodel in which this constrained Euclidean parameter may be rewritten in terms of a lower-dimensional Euclidean parameter of interest. An estimator of this underlying parameter is constructed from the given estimator of the original Euclidean parameter and is shown to be (semi)parametrically efficient. It is proved that...
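
A hedged toy version of the efficiency gain from a constraint (the unit circle as the surface; nothing here reproduces the paper's general construction): mapping an efficient unconstrained estimate back through the constraint cannot increase the asymptotic risk.

```python
# Toy illustration: parameter known to lie on the unit circle, the image of
# g(gamma) = (cos gamma, sin gamma). We compare the raw mean estimate with the
# estimate pushed through the lower-dimensional parameter gamma.
import numpy as np

rng = np.random.default_rng(6)
gamma0 = 0.7
nu0 = np.array([np.cos(gamma0), np.sin(gamma0)])   # true parameter, on the circle

n, reps = 200, 2000
mse_raw, mse_con = 0.0, 0.0
for _ in range(reps):
    data = nu0 + rng.normal(0.0, 1.0, size=(n, 2))
    nu_hat = data.mean(axis=0)                     # efficient unconstrained estimate
    gamma_hat = np.arctan2(nu_hat[1], nu_hat[0])   # underlying lower-dim parameter
    nu_con = np.array([np.cos(gamma_hat), np.sin(gamma_hat)])
    mse_raw += np.sum((nu_hat - nu0) ** 2) / reps
    mse_con += np.sum((nu_con - nu0) ** 2) / reps

# Projecting onto the constraint discards the radial error component,
# roughly halving the mean squared error under isotropic noise.
print(mse_con < mse_raw)
```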

  5. Poincaré inequalities on intervals – application to sensitivity analysis

    Roustant, Olivier; Barthe, Franck; Iooss, Bertrand
    The development of global sensitivity analysis of numerical model outputs has recently raised new issues concerning one-dimensional Poincaré inequalities. Typically, two kinds of sensitivity indices are linked by a Poincaré-type inequality, which provides upper bounds on the most interpretable index in terms of the other, cheaper-to-compute one. This allows a low-cost screening of unessential variables. The efficiency of this screening then depends heavily on the accuracy of the upper bounds in the Poincaré inequalities. The novelty of these questions lies in the wide range of probability distributions involved, which are often truncated to intervals. After providing an overview of the...
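
A hedged numerical illustration of the inequality in question, for the uniform distribution on [0, 1] (the constant and the eigenfunction are classical; the test functions are assumptions for the sketch):

```python
# Poincaré inequality on [0, 1] with the uniform distribution:
# Var f(X) <= C_P * E[f'(X)^2], with optimal constant C_P = 1 / pi^2;
# equality is attained by the eigenfunction f(x) = cos(pi x).
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0.0, 1.0, 1_000_000)
C_P = 1.0 / np.pi**2

# Generic test function f(x) = x^2, f'(x) = 2x: strict inequality.
lhs = np.var(x**2)
rhs = C_P * np.mean((2 * x) ** 2)
print(lhs <= rhs)

# Eigenfunction f(x) = cos(pi x), f'(x) = -pi sin(pi x): near equality.
lhs_eig = np.var(np.cos(np.pi * x))
rhs_eig = C_P * np.mean((np.pi * np.sin(np.pi * x)) ** 2)
print(abs(lhs_eig - rhs_eig) < 1e-2)
```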

  6. Estimator augmentation with applications in high-dimensional group inference

    Zhou, Qing; Min, Seunghyun
    To make statistical inference about a group of parameters on high-dimensional data, we develop the method of estimator augmentation for the block lasso, which is defined via block norm regularization. By augmenting a block lasso estimator $\hat{\beta }$ with the subgradient $S$ of the block norm evaluated at $\hat{\beta }$, we derive a closed-form density for the joint distribution of $(\hat{\beta },S)$ under a high-dimensional setting. This allows us to draw from an estimated sampling distribution of $\hat{\beta }$, or more generally any function of $(\hat{\beta },S)$, by Monte Carlo algorithms. We demonstrate the application of estimator augmentation in group...
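
The closed-form subgradient at the heart of estimator augmentation can be illustrated in the plain lasso special case (block size one); the coordinate-descent solver and the data below are illustrative, not the authors' code.

```python
# For the lasso, the KKT conditions give the subgradient in closed form:
# S = X^T (y - X beta_hat) / (n * lam), with S_j = sign(beta_hat_j) on the
# active set and |S_j| <= 1 elsewhere -- the pair (beta_hat, S) that
# estimator augmentation works with.
import numpy as np

def lasso_cd(X, y, lam, n_iter=500):
    """Lasso by cyclic coordinate descent on (1/2n)||y - Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ b + X[:, j] * b[j]       # partial residual
            rho = X[:, j] @ r_j / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return b

rng = np.random.default_rng(3)
n, p = 100, 10
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:3] = [2.0, -1.5, 1.0]
y = X @ beta + rng.normal(size=n)

lam = 0.1
b_hat = lasso_cd(X, y, lam)
S = X.T @ (y - X @ b_hat) / (n * lam)              # subgradient of ||.||_1

active = b_hat != 0
print(np.allclose(S[active], np.sign(b_hat[active]), atol=1e-5))
print(np.all(np.abs(S) <= 1 + 1e-5))
```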

  7. Model selection in semiparametric expectile regression

    Spiegel, Elmar; Sobotka, Fabian; Kneib, Thomas
    Ordinary least squares regression focuses on the expected response and depends strongly on the assumption of normally distributed errors for inference. An approach that overcomes these restrictions is expectile regression, where no distributional assumption is made; rather, the whole distribution of the response is described in terms of covariates. This is similar to quantile regression, but expectiles provide a convenient generalization of the arithmetic mean, while quantiles generalize the median. To analyze more complex data structures where purely linear predictors are no longer sufficient, semiparametric regression methods have been introduced for both ordinary least squares and...
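
In the scalar case, the tau-expectile minimizes an asymmetrically weighted squared loss and can be computed by iteratively reweighted averaging; this sketch (illustrative, not the authors' semiparametric estimator) shows that tau = 0.5 recovers the mean.

```python
# The tau-expectile is argmin_m E[|tau - 1{y < m}| * (y - m)^2], so it is a
# fixed point of asymmetrically weighted averaging: weight tau above m and
# 1 - tau below m.
import numpy as np

def expectile(y, tau, n_iter=100):
    """tau-expectile of a sample via fixed-point iteration."""
    m = np.mean(y)
    for _ in range(n_iter):
        w = np.where(y > m, tau, 1.0 - tau)   # asymmetric weights
        m = np.sum(w * y) / np.sum(w)
    return m

rng = np.random.default_rng(4)
y = rng.exponential(scale=1.0, size=100_000)

print(np.isclose(expectile(y, 0.5), np.mean(y)))   # 0.5-expectile is the mean
print(expectile(y, 0.9) > np.mean(y))              # upper expectiles exceed it
```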

  8. Maximum likelihood estimation for a bivariate Gaussian process under fixed domain asymptotics

    Velandia, Daira; Bachoc, François; Bevilacqua, Moreno; Gendre, Xavier; Loubes, Jean-Michel
    We consider maximum likelihood estimation with data from a bivariate Gaussian process with a separable exponential covariance model under fixed domain asymptotics. We first characterize the equivalence of Gaussian measures under this model. Then consistency and asymptotic normality for the maximum likelihood estimator of the microergodic parameters are established. A simulation study is presented in order to compare the finite sample behavior of the maximum likelihood estimator with the given asymptotic distribution.
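
A hedged univariate sketch of the fixed-domain phenomenon behind the abstract (the paper treats the bivariate case; the simulation design below is an assumption): with exponential covariance sigma^2 * exp(-theta * |s - t|), the microergodic product sigma^2 * theta is what the data pin down, so profiling out sigma^2 even at a misspecified theta roughly recovers it.

```python
# Simulate a zero-mean GP on [0, 1] with exponential covariance, then profile
# out sigma^2 at several fixed values of the range parameter theta and check
# that sigma2_hat(theta) * theta stays near the true microergodic value.
import numpy as np

rng = np.random.default_rng(7)
n, sigma2_0, theta_0 = 500, 1.0, 1.0
s = np.linspace(0.0, 1.0, n)
gaps = np.abs(s[:, None] - s[None, :])
K0 = sigma2_0 * np.exp(-theta_0 * gaps)
z = np.linalg.cholesky(K0 + 1e-10 * np.eye(n)) @ rng.normal(size=n)

def profile_sigma2(theta):
    """MLE of sigma^2 with the range parameter theta held fixed."""
    R = np.exp(-theta * gaps)
    return z @ np.linalg.solve(R, z) / n

# sigma2_hat(theta) * theta should be near sigma2_0 * theta_0 = 1 even for
# misspecified theta, in line with fixed-domain asymptotic theory.
products = [profile_sigma2(th) * th for th in (0.5, 1.0, 2.0)]
print(all(0.5 < p < 2.0 for p in products))
```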

  9. Cox Markov models for estimating single cell growth

    Bassetti, Federico; Epifani, Ilenia; Ladelli, Lucia
    Recent experimental techniques produce thousands of measurements of single-cell growth; consequently, stochastic models of growth can be validated on real data and used to understand the main mechanisms that control the cell cycle. A sequence of growing cells is usually modeled by a suitable Markov chain. In this framework, the most interesting goal is to infer the distribution of the doubling time (or of the added size) of a cell given its initial size and its elongation rate. In the literature, these distributions are described in terms of the corresponding conditional hazard function, referred to as the division hazard rate. In...

  10. Variable selection for partially linear models via learning gradients

    Yang, Lei; Fang, Yixin; Wang, Junhui; Shao, Yongzhao
    Partially linear models (PLMs) are important generalizations of linear models and are very useful for analyzing high-dimensional data. Compared to linear models, the PLMs possess desirable flexibility of non-parametric regression models because they have both linear and non-linear components. Variable selection for PLMs plays an important role in practical applications and has been extensively studied with respect to the linear component. However, for the non-linear component, variable selection has been well developed only for PLMs with extra structural assumptions such as additive PLMs and generalized additive PLMs. There is currently an unmet need for variable selection methods applicable to general...

  11. Kernel estimates of nonparametric functional autoregression models and their bootstrap approximation

    Zhu, Tingyi; Politis, Dimitris N.
    This paper considers a nonparametric functional autoregression model of order one. Existing contributions to functional time series prediction have focused on the linear model, and the literature is rather sparse in the context of nonlinear functional time series. In our nonparametric setting, we define a functional version of the kernel estimator for the autoregressive operator and develop its asymptotic theory under a strong mixing condition on the sample. The results are general in the sense that a high-order autoregression can be naturally written as a first-order AR model. In addition, a component-wise bootstrap procedure is proposed that...
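
A first-order functional kernel predictor can be sketched as follows; the data-generating process, bandwidth, and all names are illustrative assumptions, not the paper's estimator.

```python
# Nadaraya-Watson-type kernel predictor for a functional AR(1): curves are
# discretized on a grid and the squared L2 distance between curves is plugged
# into a Gaussian kernel, weighting each observed transition.
import numpy as np

rng = np.random.default_rng(5)
grid = np.linspace(0.0, 1.0, 50)

# Illustrative nonlinear functional AR(1): X_{t+1}(s) = sin(pi * X_t(s)) + noise.
T = 300
X = np.zeros((T, grid.size))
for t in range(T - 1):
    X[t + 1] = np.sin(np.pi * X[t]) + 0.05 * rng.normal(size=grid.size)

def kernel_predict(X_past, X_next, x_new, h):
    """Weight each observed transition by a kernel in the L2 curve distance."""
    d2 = np.sum((X_past - x_new) ** 2, axis=1) * (grid[1] - grid[0])  # approx L2^2
    w = np.exp(-d2 / (2.0 * h**2))
    return (w[:, None] * X_next).sum(axis=0) / w.sum()

pred = kernel_predict(X[:-2], X[1:-1], X[-2], h=0.2)

# The prediction is a convex combination of observed next curves, so it stays
# within their pointwise range; an infinite bandwidth recovers the plain mean.
in_range = (pred >= X[1:-1].min(axis=0)).all() and (pred <= X[1:-1].max(axis=0)).all()
print(in_range)
```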
