## Collection resources

#### Project Euclid (Hosted at Cornell University Library) (204,172 resources)

Electronic Journal of Statistics

1. #### Fast learning rate of non-sparse multiple kernel learning and optimal regularization strategies

Suzuki, Taiji
In this paper, we give a new generalization error bound of Multiple Kernel Learning (MKL) for a general class of regularizations, and discuss what kind of regularization gives a favorable predictive accuracy. Our main target in this paper is dense type regularizations including $\ell_{p}$-MKL. Numerical experiments have shown that sparse regularization does not necessarily perform well compared with dense type regularizations. Motivated by this fact, this paper gives a general theoretical tool for deriving fast learning rates of MKL that is applicable to arbitrary mixed-norm-type regularizations in a unifying manner. This enables us...

2. #### Principal quantile regression for sufficient dimension reduction with heteroscedasticity

Wang, Chong; Shin, Seung Jun; Wu, Yichao
Sufficient dimension reduction (SDR) is a successful tool for reducing data dimensionality without stringent model assumptions. In practice, data often display heteroscedasticity, which is of scientific importance in general but frequently overlooked since a primary goal of most existing statistical methods is to identify the conditional mean relationship among variables. In this article, we propose a new SDR method called principal quantile regression (PQR) that efficiently tackles heteroscedasticity. PQR can naturally be extended to a nonlinear version via the kernel trick. Asymptotic properties are established and an efficient solution path-based algorithm is provided. Numerical examples based on both simulated and real data...

3. #### Measuring distributional asymmetry with Wasserstein distance and Rademacher symmetrization

We propose an improved version of the ubiquitous symmetrization inequality, making use of the Wasserstein distance between a measure and its reflection in order to quantify the asymmetry of the given measure. An empirical bound on this asymmetric correction term is derived through a bootstrap procedure and shown to give tighter results in practical settings than the original uncorrected inequality. Lastly, a wide range of applications are detailed, including testing for data symmetry, constructing nonasymptotic high-dimensional confidence sets, bounding the variance of an empirical process, and improving constants in Nemirovski-style inequalities for Banach-space-valued random variables.

4. #### High-dimensional inference for personalized treatment decision

Jeng, X. Jessie; Lu, Wenbin; Peng, Huimin
Recent developments in statistical methodology for personalized treatment decisions have utilized high-dimensional regression to take into account a large number of patients’ covariates, describing personalized treatment decisions through interactions between treatment and covariates. While a subset of interaction terms can be obtained by existing variable selection methods to indicate relevant covariates for making treatment decisions, the results often lack statistical interpretation. This paper proposes an asymptotically unbiased estimator, based on the Lasso solution, for the interaction coefficients. We derive the limiting distribution of the estimator when the baseline function of the regression model is unknown and possibly misspecified. Confidence...

5. #### Common price and volatility jumps in noisy high-frequency data

Bibinger, Markus; Winkelmann, Lars
We introduce a statistical test for simultaneous jumps in the price of a financial asset and its volatility process. The proposed test is based on high-frequency data and is robust to market microstructure frictions. For the test, local estimators of volatility jumps at price jump arrival times are designed using a nonparametric spectral estimator of the spot volatility process. A simulation study and an empirical example with NASDAQ order book data demonstrate the practicability of the proposed methods and highlight the important role played by price volatility co-jumps.

6. #### Selection by partitioning the solution paths

Liu, Yang; Wang, Peng
The performance of penalized likelihood approaches depends profoundly on the selection of the tuning parameter; however, there is no commonly agreed-upon criterion for choosing the tuning parameter. Moreover, penalized likelihood estimation based on a single value of the tuning parameter suffers from several drawbacks. This article introduces a novel approach for feature selection based on the entire solution paths rather than the choice of a single tuning parameter, which significantly improves the accuracy of the selection. Moreover, the approach allows for feature selection using ridge or other strictly convex penalties. The key idea is to classify variables as relevant or...
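The solution-path idea can be illustrated with an ordinary ridge path: coefficients of relevant features stay well separated from those of irrelevant ones across the whole grid of tuning parameters, so no single tuning value has to be chosen. This is only a generic sketch with invented data, not the authors' partitioning criterion:

```python
import random

def solve(a, b):
    """Solve the linear system a @ x = b by Gauss-Jordan elimination."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        # Partial pivoting for numerical stability.
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * w for v, w in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

def ridge_path(x, y, lambdas):
    """Ridge coefficients (X'X + lam*I)^{-1} X'y along a grid of lam."""
    n, p = len(x), len(x[0])
    xtx = [[sum(row[i] * row[j] for row in x) for j in range(p)] for i in range(p)]
    xty = [sum(x[k][i] * y[k] for k in range(n)) for i in range(p)]
    path = []
    for lam in lambdas:
        a = [[xtx[i][j] + (lam if i == j else 0.0) for j in range(p)] for i in range(p)]
        path.append(solve(a, xty))
    return path

random.seed(1)
n, p = 200, 5
x = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]
# Only the first two features carry signal; the rest are noise.
y = [2.0 * row[0] - 1.5 * row[1] + random.gauss(0, 0.5) for row in x]
lambdas = [0.1, 1.0, 10.0, 100.0]
path = ridge_path(x, y, lambdas)
for lam, b in zip(lambdas, path):
    print(lam, [round(v, 2) for v in b])
```

Along the whole path, the two signal coefficients dominate the noise coefficients in magnitude, which is the kind of information a path-based selection rule can exploit even under a strictly convex penalty such as ridge.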

7. #### Bayesian inference for spectral projectors of the covariance matrix

Let $X_{1},\ldots ,X_{n}$ be an i.i.d. sample in $\mathbb{R}^{p}$ with zero mean and the covariance matrix ${\boldsymbol{\varSigma }^{*}}$. The classical PCA approach recovers the projector $\boldsymbol{P}^{*}_{\mathcal{J}}$ onto the principal eigenspace of ${\boldsymbol{\varSigma }^{*}}$ by its empirical counterpart $\widehat{\boldsymbol{P}}_{\mathcal{J}}$. The recent paper [24] investigated the asymptotic distribution of the Frobenius distance between the projectors, $\|\widehat{\boldsymbol{P}}_{\mathcal{J}}-\boldsymbol{P}^{*}_{\mathcal{J}}\|_{2}$, while [27] offered a bootstrap procedure to measure the uncertainty in recovering this subspace $\boldsymbol{P}^{*}_{\mathcal{J}}$ even in a finite sample setup. The present paper considers this problem from a Bayesian perspective and suggests using the credible sets of the pseudo-posterior distribution on the space of covariance matrices...

8. #### High dimensional efficiency with applications to change point tests

Aston, John A.D.; Kirch, Claudia
This paper rigorously introduces the asymptotic concept of high dimensional efficiency, which quantifies the detection power of different statistics in high dimensional multivariate settings. It allows for comparisons of different high dimensional methods with different null asymptotics and even different asymptotic behavior such as extremal-type asymptotics. The concept will be used to understand the power behavior of different test statistics, as the performance will greatly depend on the assumptions made, such as sparseness or denseness of the signal. The effect of misspecification of the covariance on the power of the tests is also investigated, because in many high dimensional situations...

9. #### New FDR bounds for discrete and heterogeneous tests

Döhler, Sebastian; Durand, Guillermo; Roquain, Etienne
To find interesting items in genome-wide association studies or next generation sequencing data, a crucial point is to design powerful false discovery rate (FDR) controlling procedures that suitably combine discrete tests (typically binomial or Fisher tests). In particular, recent research has been striving for appropriate modifications of the classical Benjamini-Hochberg (BH) step-up procedure that accommodate discreteness and heterogeneity of the data. However, despite numerous attempts, these procedures did not come with theoretical guarantees. In this paper, we provide new FDR bounds that allow us to fill this gap. More specifically, these bounds make it possible to construct...
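For reference, the classical BH step-up procedure that such work modifies can be sketched as follows. This is the textbook version for continuous p-values, not the discrete/heterogeneous variants the paper studies, and the example p-values are invented:

```python
def benjamini_hochberg(pvalues, alpha=0.05):
    """Classical BH step-up: reject the k smallest p-values, where k is
    the largest rank with p_(k) <= k * alpha / m."""
    m = len(pvalues)
    # Sort p-values while remembering their original indices.
    order = sorted(range(m), key=lambda i: pvalues[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank * alpha / m:
            k_max = rank
    # Reject the hypotheses with the k_max smallest p-values.
    return sorted(order[:k_max])

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205, 0.212, 0.216]
print(benjamini_hochberg(pvals, alpha=0.05))  # rejects hypotheses 0 and 1
```

Under independence this controls the FDR at level alpha; the discreteness and heterogeneity handled in the paper arise because, for binomial or Fisher tests, the attainable p-values form test-specific discrete sets, which the classical threshold does not exploit.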

10. #### Community detection by $L_{0}$-penalized graph Laplacian

Chen, Chong; Xi, Ruibin; Lin, Nan
Community detection in network analysis aims at partitioning nodes into disjoint communities. Real networks often contain outlier nodes that do not belong to any community, and the number of communities is often unknown. However, most current algorithms assume that the number of communities is known, and even fewer algorithms can handle networks with outliers. In this paper, we propose detecting communities by maximizing a novel model-free tightness criterion. We show that this tightness criterion is closely related to the $L_{0}$-penalized graph Laplacian and develop an efficient algorithm to extract communities based on the criterion. Unlike many other...

11. #### Solution of linear ill-posed problems by model selection and aggregation

Abramovich, Felix; De Canditiis, Daniela; Pensky, Marianna
We consider a general statistical linear inverse problem, where the solution is represented via a known (possibly overcomplete) dictionary that allows its sparse representation. We propose two different approaches. A model selection estimator selects a single model by minimizing the penalized empirical risk over all possible models. By contrast with direct problems, the penalty depends on the model itself rather than on its size only as for complexity penalties. A Q-aggregate estimator averages over the entire collection of estimators with properly chosen weights. Under mild conditions on the dictionary, we establish oracle inequalities both with high probability and in expectation...

12. #### Dimension reduction and estimation in the secondary analysis of case-control studies

Liang, Liang; Carroll, Raymond; Ma, Yanyuan
Studying the relationship between covariates based on retrospective data is the main purpose of secondary analysis, an area of increasing interest. We examine the secondary analysis problem when multiple covariates are available, while only a regression mean model is specified. Despite the completely parametric modeling of the regression mean function, the case-control nature of the data requires special treatment and semiparametric efficient estimation generates various nonparametric estimation problems with multivariate covariates. We devise a dimension reduction approach that fits with the specified primary and secondary models in the original problem setting, and use reweighting to adjust for the case-control nature...

13. #### Corrigendum to “Classification with asymmetric label noise: Consistency and maximal denoising”

Blanchard, Gilles; Scott, Clayton
We point out a flaw in Lemma 15 of [1]. We also indicate how the main results of that section are still valid using a modified argument.

14. #### A nearest neighbor estimate of the residual variance

Devroye, Luc; Györfi, László; Lugosi, Gábor; Walk, Harro
We study the problem of estimating the smallest achievable mean-squared error in regression function estimation. The problem is equivalent to estimating the second moment of the regression function of $Y$ on $X\in{\mathbb{R}} ^{d}$. We introduce a nearest-neighbor-based estimate and obtain a normal limit law for the estimate when $X$ has an absolutely continuous distribution, without any condition on the density. We also compute the asymptotic variance explicitly and derive a non-asymptotic bound on the variance that does not depend on the dimension $d$. The asymptotic variance does not depend on the smoothness of the density of $X$ or of the...
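The flavour of such a nearest-neighbour estimate can be sketched in one dimension: half the average squared difference between each response and the response at its covariate's nearest neighbour estimates the residual variance, since the regression part nearly cancels between neighbouring points. This is an illustrative 1-NN version with invented data, not necessarily the authors' exact estimator:

```python
import random

def nn_residual_variance(x, y):
    """1-NN estimate of the residual variance E[Var(Y | X)]: half the
    average squared difference between each response and the response
    of its covariate's nearest neighbour (brute force, O(n^2))."""
    n = len(x)
    total = 0.0
    for i in range(n):
        # Nearest neighbour of x[i] among the remaining points.
        j = min((k for k in range(n) if k != i), key=lambda k: abs(x[k] - x[i]))
        total += (y[i] - y[j]) ** 2
    return total / (2 * n)

random.seed(0)
n = 1000
x = [random.uniform(0.0, 1.0) for _ in range(n)]
# Y = m(X) + noise, with residual variance 0.5**2 = 0.25.
y = [xi ** 2 + random.gauss(0.0, 0.5) for xi in x]
print(nn_residual_variance(x, y))  # close to the true value 0.25
```

Because neighbouring covariates are only $O(1/n)$ apart here, the smooth regression function $m$ contributes a vanishing bias, and what remains is essentially the noise variance.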

15. #### A deconvolution path for mixtures

We propose a class of estimators for deconvolution in mixture models based on a simple two-step “bin-and-smooth” procedure applied to histogram counts. The method is both statistically and computationally efficient: by exploiting recent advances in convex optimization, we are able to provide a full deconvolution path that shows the estimate for the mixing distribution across a range of plausible degrees of smoothness, at far less cost than a full Bayesian analysis. This enables practitioners to conduct a sensitivity analysis with minimal effort. This is especially important for applied data analysis, given the ill-posed nature of the deconvolution problem. Our results...

16. #### Heritability estimation in case-control studies

Bonnet, Anna
In the field of genetics, the concept of heritability refers to the proportion of variation in a biological trait or disease that can be explained by genetic factors. Quantifying the heritability of a disease is a fundamental challenge in human genetics, especially when the causes are multiple and not clearly identified. Although the literature regarding heritability estimation for binary traits is less rich than for quantitative traits, several methods have been proposed to estimate the heritability of complex diseases. However, to the best of our knowledge, the existing methods are not supported by theoretical grounds. Moreover, most of the methodologies...

17. #### Bayesian pairwise estimation under dependent informative sampling

Williams, Matthew R.; Savitsky, Terrance D.
An informative sampling design leads to the selection of units whose inclusion probabilities are correlated with the response variable of interest. Inference under the population model performed on the resulting observed sample, without adjustment, will be biased for the population generative model. One approach that produces asymptotically unbiased inference employs marginal inclusion probabilities to form sampling weights used to exponentiate each likelihood contribution of a pseudo likelihood used to form a pseudo posterior distribution. Conditions for posterior consistency restrict applicable sampling designs to those under which pairwise inclusion dependencies asymptotically limit to $0$. There are many sampling designs excluded by...

18. #### On penalized estimation for dynamical systems with small noise

De Gregorio, Alessandro; Iacus, Stefano Maria
We consider a dynamical system with small noise for which the drift is parametrized by a finite dimensional parameter. For this model, we consider minimum distance estimation from continuous time observations under $l^{p}$-penalty imposed on the parameters in the spirit of the Lasso approach, with the aim of simultaneous estimation and model selection. We study the consistency and the asymptotic distribution of these Lasso-type estimators for different values of $p$. For $p=1,$ we also consider the adaptive version of the Lasso estimator and establish its oracle properties.

19. #### Modified sequential change point procedures based on estimating functions

Kirch, Claudia; Weber, Silke
A large class of sequential change point tests are based on estimating functions, where estimation is computationally efficient as (possibly numeric) optimization is restricted to an initial estimation. This includes examples as diverse as mean changes, linear or non-linear autoregressive and binary models. While the standard cumulative-sum-detector (CUSUM) has recently been considered in this general setup, we consider several modifications that have faster detection rates, in particular if changes occur late in the monitoring period. More precisely, we use three different types of detector statistics based on partial sums of a monitoring function, namely the modified moving-sum-statistic (mMOSUM), Page’s...

20. #### An extended empirical saddlepoint approximation for intractable likelihoods

Fasiolo, Matteo; Wood, Simon N.; Hartig, Florian; Bravington, Mark V.
The challenges posed by complex stochastic models used in computational ecology, biology and genetics have stimulated the development of approximate approaches to statistical inference. Here we focus on Synthetic Likelihood (SL), a procedure that reduces the observed and simulated data to a set of summary statistics, and quantifies the discrepancy between them through a synthetic likelihood function. SL requires little tuning, but it relies on the approximate normality of the summary statistics. We relax this assumption by proposing a novel, more flexible, density estimator: the Extended Empirical Saddlepoint approximation. In addition to proving the consistency of SL, under either the...
