Showing resources 1 - 20 of 64

  1. Optimal sequential detection in multi-stream data

    Chan, Hock Peng
    Consider a large number of detectors, each generating a data stream. The task is to detect, online, distribution changes in a small fraction of the data streams. Previous approaches to this problem include the use of mixture likelihood ratios and sums of CUSUMs. We provide here extensions and modifications of these approaches that are optimal in detecting normal mean shifts. We show how the (optimal) detection delay depends on the fraction of data streams undergoing distribution changes as the number of detectors goes to infinity. There are three detection domains. In the first domain for moderately large fractions, immediate detection...
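    A minimal sketch of the sum-of-CUSUMs idea for this setting, assuming unit-variance Gaussian streams, a known post-change mean delta, and an illustrative stopping threshold (the paper's optimal modifications are not reproduced here):

      import numpy as np

      def sum_cusum_stop(X, delta=1.0, threshold=50.0):
          # X: (T, N) array; row t holds one observation from each of N streams.
          # Per-stream one-sided CUSUM for a N(0,1) -> N(delta,1) mean shift;
          # stop when the sum of the N CUSUM statistics crosses the threshold.
          S = np.zeros(X.shape[1])
          for t, row in enumerate(X):
              S = np.maximum(0.0, S + delta * (row - delta / 2.0))
              if S.sum() > threshold:
                  return t  # declared detection time
          return None  # no alarm within the monitoring horizon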

  2. Estimating a probability mass function with unknown labels

    Anevski, Dragi; Gill, Richard D.; Zohren, Stefan
    In the context of a species sampling problem, we discuss a nonparametric maximum likelihood estimator for the underlying probability mass function. The estimator is known in the computer science literature as the high profile estimator. We prove strong consistency and derive the rates of convergence for an extended model version of the estimator. We also study a sieved estimator for which similar consistency results are derived. Numerical computation of the sieved estimator is of great interest for practical problems, such as forensic DNA analysis, and we present a computational algorithm based on the stochastic approximation of the expectation maximisation algorithm...
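    As a baseline for the estimators discussed above, the obvious plug-in for a pmf with unknown labels simply sorts the observed species counts; this sketch is only that baseline, not the paper's high profile (NPMLE) or sieved estimator:

      import numpy as np

      def ordered_frequency_estimate(counts):
          # Sort species counts decreasingly and normalise: a naive
          # estimate of the ordered probability mass function.
          c = np.sort(np.asarray(counts, dtype=float))[::-1]
          return c / c.sum()

      # e.g. counts of distinct DNA profiles observed in a sample:
      print(ordered_frequency_estimate([5, 1, 1, 2, 8]))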

  3. Structural similarity and difference testing on multiple sparse Gaussian graphical models

    Liu, Weidong
    We present a new framework for inferring structural similarities and differences among multiple high-dimensional Gaussian graphical models (GGMs) corresponding to the same set of variables under distinct experimental conditions. The new framework adopts partial correlation coefficients to characterize potential changes of dependency strengths between two variables. We further develop a hierarchical method to recover edges with different or similar dependency strengths across multiple GGMs. In particular, we first construct two-sample test statistics for testing the equality of partial correlation coefficients and conduct large-scale multiple tests to estimate the substructure of differential dependencies. After removing differential substructure from...
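    A hedged sketch of the two-sample comparison of partial correlations that such a framework builds on, assuming invertible sample covariances and using a textbook Fisher-transform z-statistic rather than the paper's own test statistics and multiple-testing procedure:

      import numpy as np

      def partial_corr(S):
          # Partial correlations from the precision matrix of a sample
          # covariance S (regularised estimates would replace inv() in
          # genuinely high-dimensional settings).
          Omega = np.linalg.inv(S)
          d = np.sqrt(np.diag(Omega))
          P = -Omega / np.outer(d, d)
          np.fill_diagonal(P, 1.0)
          return P

      def fisher_z_stat(r1, n1, r2, n2, k):
          # z-statistic for H0: rho1 = rho2; k = number of conditioned-on
          # variables. Illustrative only.
          se = np.sqrt(1.0 / (n1 - k - 3) + 1.0 / (n2 - k - 3))
          return (np.arctanh(r1) - np.arctanh(r2)) / se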

  4. A weight-relaxed model averaging approach for high-dimensional generalized linear models

    Ando, Tomohiro; Li, Ker-chau
    Model averaging has long been proposed as a powerful alternative to model selection in regression analysis. However, how well it performs in high-dimensional regression is still poorly understood. Recently, Ando and Li [J. Amer. Statist. Assoc. 109 (2014) 254–265] introduced a new method of model averaging that allows the number of predictors to increase as the sample size increases. One notable feature of Ando and Li’s method is the relaxation on the total model weights so that weak signals can be efficiently combined from high-dimensional linear models. It is natural to ask if Ando and Li’s method and results can...
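    A sketch of the weight-relaxation idea under strong simplifications: candidate models are OLS fits on prespecified predictor groups, and the combination weights are chosen by nonnegative least squares without forcing them to sum to one. Group construction, GLM losses and the theory are the paper's, not this snippet's:

      import numpy as np
      from scipy.optimize import nnls

      def relaxed_model_average(X_groups, y):
          # Fit one OLS model per predictor group, then weight the fitted
          # values by NNLS; weights are nonnegative but NOT constrained to
          # sum to 1 (the "relaxation").
          coefs = [np.linalg.lstsq(Xg, y, rcond=None)[0] for Xg in X_groups]
          F = np.column_stack([Xg @ b for Xg, b in zip(X_groups, coefs)])
          w, _ = nnls(F, y)
          return coefs, w  # combine new-data predictions with these weights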

  5. Extended conditional independence and applications in causal inference

    Constantinou, Panayiota; Dawid, A. Philip
    The goal of this paper is to integrate the notions of stochastic conditional independence and variation conditional independence under a more general notion of extended conditional independence. We show that under appropriate assumptions the calculus that applies for the two cases separately (axioms of a separoid) still applies for the extended case. These results provide a rigorous basis for a wide range of statistical concepts, including ancillarity and sufficiency, and, in particular, the Decision Theoretic framework for statistical causality, which uses the language and calculus of conditional independence in order to express causal properties and make causal inferences.

  6. Selecting the number of principal components: Estimation of the true rank of a noisy matrix

    Choi, Yunjin; Taylor, Jonathan; Tibshirani, Robert
    Principal component analysis (PCA) is a well-known tool in multivariate statistics. One significant challenge in using PCA is the choice of the number of principal components. In order to address this challenge, we propose distribution-based methods with exact type 1 error controls for hypothesis testing and construction of confidence intervals for signals in a noisy matrix with finite samples. Assuming Gaussian noise, we derive exact type 1 error controls based on the conditional distribution of the singular values of a Gaussian matrix by utilizing a post-selection inference framework, and extending the approach of [Taylor, Loftus and Tibshirani (2013)] in a...
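    For intuition only, a Monte Carlo surrogate of the testing idea: compare observed singular values with the null distribution of the top singular value of a pure-noise Gaussian matrix. The paper instead derives exact, conditional (post-selection) type 1 error control, which this crude sketch does not provide:

      import numpy as np

      def noise_top_sv_quantile(n, p, sigma=1.0, q=0.95, reps=200, seed=0):
          # q-quantile of the largest singular value of an n x p
          # N(0, sigma^2) noise matrix, by simulation.
          rng = np.random.default_rng(seed)
          tops = [np.linalg.svd(sigma * rng.standard_normal((n, p)),
                                compute_uv=False)[0] for _ in range(reps)]
          return np.quantile(tops, q)

      def crude_rank_estimate(Y, sigma=1.0):
          s = np.linalg.svd(Y, compute_uv=False)
          return int((s > noise_top_sv_quantile(*Y.shape, sigma=sigma)).sum())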

  7. Nonparametric goodness-of-fit tests for uniform stochastic ordering

    Tang, Chuan-Fa; Wang, Dewei; Tebbs, Joshua M.
    We propose $L^{p}$ distance-based goodness-of-fit (GOF) tests for uniform stochastic ordering with two continuous distributions $F$ and $G$, both of which are unknown. Our tests are motivated by the fact that when $F$ and $G$ are uniformly stochastically ordered, the ordinal dominance curve $R=FG^{-1}$ is star-shaped. We derive asymptotic distributions and prove that our testing procedure has a unique least favorable configuration of $F$ and $G$ for $p\in [1,\infty]$. We use simulation to assess finite-sample performance and demonstrate that a modified, one-sample version of our procedure (e.g., with $G$ known) is more powerful than the one-sample GOF test suggested by...
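    A sketch of the one-sample statistic's shape (G known): form the empirical ODC R_n(u) = F_n(G^{-1}(u)), compare it with the least star-shaped majorant R*(u) = u * max_{t >= u} R_n(t)/t, and take an L^p distance. The grid resolution is an illustrative choice, and calibration of critical values is left to the paper:

      import numpy as np

      def odc_star_distance(x, G_inv, p=2, grid=2000):
          # x: sample from F; G_inv: quantile function of the known G
          # (e.g. scipy.stats.expon.ppf). Large distances speak against
          # uniform stochastic ordering.
          x = np.asarray(x)
          u = np.linspace(1.0 / grid, 1.0, grid)
          Rn = (x[:, None] <= G_inv(u)).mean(axis=0)
          Rstar = u * np.maximum.accumulate((Rn / u)[::-1])[::-1]
          return np.mean((Rstar - Rn) ** p) ** (1.0 / p)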

  8. Targeted sequential design for targeted learning inference of the optimal treatment rule and its mean reward

    Chambaz, Antoine; Zheng, Wenjing; van der Laan, Mark J.
    This article studies the targeted sequential inference of an optimal treatment rule (TR) and its mean reward in the nonexceptional case, that is, assuming that there is no stratum of the baseline covariates where treatment is neither beneficial nor harmful, and under a companion margin assumption. Our pivotal estimator, whose definition hinges on the targeted minimum loss estimation (TMLE) principle, actually infers the mean reward under the current estimate of the optimal TR. This data-adaptive statistical parameter is worthy of interest on its own. Our main result is a central limit theorem which enables the construction of confidence intervals...
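    For orientation, a crude plug-in sketch of "estimate the rule, then its mean reward", assuming a binary treatment A, binary reward Y and covariates W; the paper's TMLE-based, sequentially targeted estimator and its confidence intervals are far more careful than this:

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def plugin_rule_and_value(W, A, Y):
          # Regress the reward on covariates within each arm, treat iff the
          # estimated reward under treatment exceeds that under control,
          # and report the plug-in mean reward under that rule.
          q1 = LogisticRegression().fit(W[A == 1], Y[A == 1]).predict_proba(W)[:, 1]
          q0 = LogisticRegression().fit(W[A == 0], Y[A == 0]).predict_proba(W)[:, 1]
          rule = (q1 > q0).astype(int)
          return rule, float(np.mean(np.where(rule == 1, q1, q0)))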

  9. Adaptive Bernstein–von Mises theorems in Gaussian white noise

    Ray, Kolyan
    We investigate Bernstein–von Mises theorems for adaptive nonparametric Bayesian procedures in the canonical Gaussian white noise model. We consider both a Hilbert space and multiscale setting with applications in $L^{2}$ and $L^{\infty}$, respectively. This provides a theoretical justification for plug-in procedures, for example the use of certain credible sets for sufficiently smooth linear functionals. We use this general approach to construct optimal frequentist confidence sets based on the posterior distribution. We also provide simulations to numerically illustrate our approach and obtain a visual representation of the geometries involved.
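    The plug-in use of the posterior can be made concrete in a discretised sequence model y_i = theta_i + sigma*z_i with independent N(0, tau^2) priors, where the posterior of a linear functional a'theta is exactly Gaussian; adaptivity (choosing tau from the data), which is the paper's subject, is omitted in this sketch:

      import numpy as np
      from scipy.stats import norm

      def credible_interval(y, a, sigma=1.0, tau=1.0, level=0.95):
          # Conjugate posterior: theta_i | y ~ N(shrink*y_i, shrink*sigma^2)
          # with shrink = tau^2 / (tau^2 + sigma^2); a'theta is Gaussian.
          shrink = tau**2 / (tau**2 + sigma**2)
          mean = np.dot(a, shrink * y)
          sd = np.sqrt(np.sum(a**2) * shrink * sigma**2)
          z = norm.ppf(0.5 + level / 2)
          return mean - z * sd, mean + z * sd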

  10. Optimal design of fMRI experiments using circulant (almost-)orthogonal arrays

    Lin, Yuan-Lung; Phoa, Frederick Kin Hing; Kao, Ming-Hung
    Functional magnetic resonance imaging (fMRI) is a pioneering technology for studying brain activity in response to mental stimuli. Although efficient designs for these fMRI experiments are important for rendering precise statistical inference on brain functions, they are not systematically constructed. A design with the circulant property is crucial for estimating a hemodynamic response function (HRF) and for discussing fMRI experimental optimality. In this paper, we develop a theory that not only successfully explains the structure of a circulant design, but also provides a method of constructing efficient fMRI designs systematically. We further provide a class of two-level circulant designs with good performance (statistically...
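    The circulant structure itself is easy to exhibit: every row of the design matrix is a cyclic shift of one base stimulus sequence, and each column then corresponds to one lag of the HRF in the linear model. Which base sequences are (almost) orthogonal and statistically efficient is what the paper's constructions settle; the length-7 sequence below is only an illustration:

      import numpy as np

      def circulant_design(seq):
          # Rows are all cyclic shifts of the base on/off stimulus sequence.
          seq = np.asarray(seq)
          return np.stack([np.roll(seq, k) for k in range(len(seq))])

      X = circulant_design([1, 1, 0, 1, 0, 0, 0])  # illustrative base sequence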

  11. Support recovery without incoherence: A case for nonconvex regularization

    Loh, Po-Ling; Wainwright, Martin J.
    We develop a new primal-dual witness proof framework that may be used to establish variable selection consistency and $\ell_{\infty}$-bounds for sparse regression problems, even when the loss function and regularizer are nonconvex. We use this method to prove two theorems concerning support recovery and $\ell_{\infty}$-guarantees for a regression estimator in a general setting. Notably, our theory applies to all potential stationary points of the objective and certifies that the stationary point is unique under mild conditions. Our results provide a strong theoretical justification for the use of nonconvex regularization: For certain nonconvex regularizers with vanishing derivative away from the origin,...
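    One nonconvex regularizer with the "vanishing derivative away from the origin" property is MCP; below is a minimal proximal gradient sketch with an exact coordinatewise prox (valid for step sizes below gamma). The tuning constants are illustrative, and the estimator is a generic stationary-point finder, not the paper's primal-dual witness analysis:

      import numpy as np

      def mcp_prox(v, lam, gamma=3.0, t=1.0):
          # Exact prox of t * MCP(.; lam, gamma), coordinatewise (needs t < gamma).
          v = np.asarray(v, dtype=float)
          shrunk = np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0) / (1.0 - t / gamma)
          return np.where(np.abs(v) <= gamma * lam, shrunk, v)

      def mcp_regression(X, y, lam, gamma=3.0, iters=500):
          # Proximal gradient on (1/2n)||y - Xb||^2 + MCP(b; lam, gamma).
          n, p = X.shape
          step = n / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant
          b = np.zeros(p)
          for _ in range(iters):
              b = mcp_prox(b - step * X.T @ (X @ b - y) / n, lam, gamma, t=step)
          return b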

  12. Consistent parameter estimation for LASSO and approximate message passing

    Mousavi, Ali; Maleki, Arian; Baraniuk, Richard G.
    This paper studies the optimal tuning of the regularization parameter in LASSO or the threshold parameters in approximate message passing (AMP). Considering a model in which the design matrix and noise are zero-mean i.i.d. Gaussian, we propose a data-driven approach for estimating the regularization parameter of LASSO and the threshold parameters in AMP. Our estimates are consistent, that is, they converge to their asymptotically optimal values in probability as $n$, the number of observations, and $p$, the ambient dimension of the sparse vector, grow to infinity, while $n/p$ converges to a fixed number $\delta$. As a byproduct of our analysis,...
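    For reference, the soft-thresholding AMP recursion with a fixed threshold theta (columns of A assumed roughly unit-norm); the whole point of the paper is how to tune theta, and LASSO's lambda, from data, which this sketch leaves out:

      import numpy as np

      def soft(v, theta):
          return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

      def amp(A, y, theta, iters=50):
          # x-update: soft-threshold the pseudo-data x + A'z;
          # z-update: residual plus the Onsager correction term.
          n, p = A.shape
          x, z = np.zeros(p), y.copy()
          for _ in range(iters):
              pseudo = x + A.T @ z
              x = soft(pseudo, theta)
              z = y - A @ x + (p / n) * z * np.mean(np.abs(pseudo) > theta)
          return x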

  13. CoCoLasso for high-dimensional error-in-variables regression

    Datta, Abhirup; Zou, Hui
    Much theoretical and applied work has been devoted to high-dimensional regression with clean data. However, we often face corrupted data in many applications where missing data and measurement errors cannot be ignored. Loh and Wainwright [Ann. Statist. 40 (2012) 1637–1664] proposed a nonconvex modification of the Lasso for doing high-dimensional regression with noisy and missing data. It is generally agreed that the virtues of convexity contribute fundamentally to the success and popularity of the Lasso. In light of this, we propose a new method named CoCoLasso that is convex and can handle a general class of corrupted datasets. We establish the...
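    The convexification step can be sketched as follows, assuming one already has surrogate (possibly indefinite) estimates Sigma_hat of X'X/n and rho_hat of X'y/n built from the corrupted data: project Sigma_hat onto the positive semidefinite cone, then solve the resulting convex Lasso-type problem by coordinate descent. An illustration of the idea, not the authors' implementation:

      import numpy as np

      def nearest_psd(S, eps=1e-8):
          # Eigenvalue clipping: nearest PSD matrix to (S + S') / 2.
          w, V = np.linalg.eigh((S + S.T) / 2)
          return (V * np.maximum(w, eps)) @ V.T

      def coco_lasso(Sigma_hat, rho_hat, lam, iters=200):
          # Coordinate descent on 0.5 b'Sigma b - rho'b + lam * ||b||_1.
          Sigma = nearest_psd(Sigma_hat)
          b = np.zeros(len(rho_hat))
          for _ in range(iters):
              for j in range(len(b)):
                  r = rho_hat[j] - Sigma[j] @ b + Sigma[j, j] * b[j]
                  b[j] = np.sign(r) * max(abs(r) - lam, 0.0) / Sigma[j, j]
          return b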

  14. On the validity of resampling methods under long memory

    Bai, Shuyang; Taqqu, Murad S.
    For long-memory time series, inference based on resampling is of crucial importance, since the asymptotic distribution can often be non-Gaussian and is difficult to determine statistically. However, due to the strong dependence, establishing the asymptotic validity of resampling methods is nontrivial. In this paper, we derive an efficient bound for the canonical correlation between two finite blocks of a long-memory time series. We show how this bound can be applied to establish the asymptotic consistency of subsampling procedures for general statistics under long memory. It allows the subsample size $b$ to be $o(n)$, where $n$ is the sample size, irrespective...
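    The subsampling procedure itself is simple to state: evaluate the statistic on all overlapping blocks of length b (with b = o(n)) and use the empirical law of the suitably normalised block values. The normalisation, which is nonstandard under long memory, is exactly what the paper's bound controls and is omitted below:

      import numpy as np

      def block_statistics(x, b, stat=np.mean):
          # Statistic evaluated on every overlapping block of length b.
          x = np.asarray(x)
          return np.array([stat(x[i:i + b]) for i in range(len(x) - b + 1)])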

  15. A new perspective on boosting in linear regression via subgradient optimization and relatives

    Freund, Robert M.; Grigas, Paul; Mazumder, Rahul
    We analyze boosting algorithms [Ann. Statist. 29 (2001) 1189–1232; Ann. Statist. 28 (2000) 337–407; Ann. Statist. 32 (2004) 407–499] in linear regression from a new perspective: that of modern first-order methods in convex optimization. We show that classic boosting algorithms in linear regression, namely the incremental forward stagewise algorithm ($\text{FS}_{\varepsilon}$) and least squares boosting [LS-BOOST$(\varepsilon)$], can be viewed as subgradient descent to minimize the loss function defined as the maximum absolute correlation between the features and residuals. We also propose a minor modification of $\text{FS}_{\varepsilon}$ that yields an algorithm for the LASSO, and that may be easily extended to an...
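    The FS_eps scheme named above is short enough to state in full; this sketch assumes standardised columns of X, and the step size eps and iteration count are illustrative:

      import numpy as np

      def forward_stagewise(X, y, eps=0.01, iters=2000):
          # At each step, nudge the coefficient of the feature most
          # correlated with the current residual by eps in the sign of
          # that correlation, the subgradient-descent view described above.
          beta = np.zeros(X.shape[1])
          r = y.astype(float).copy()
          for _ in range(iters):
              corr = X.T @ r
              j = np.argmax(np.abs(corr))
              s = np.sign(corr[j])
              beta[j] += eps * s
              r -= eps * s * X[:, j]
          return beta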

  16. A likelihood ratio framework for high-dimensional semiparametric regression

    Ning, Yang; Zhao, Tianqi; Liu, Han
    We propose a new inferential framework for high-dimensional semiparametric generalized linear models. This framework addresses a variety of challenging problems in high-dimensional data analysis, including incomplete data, selection bias and heterogeneity. Our work has three main contributions: (i) We develop a regularized statistical chromatography approach to infer the parameter of interest under the proposed semiparametric generalized linear model without the need of estimating the unknown base measure function. (ii) We propose a new likelihood ratio based framework to construct post-regularization confidence regions and tests for the low dimensional components of high-dimensional parameters. Unlike existing post-regularization inferential methods, our approach is...

  17. Nonasymptotic analysis of semiparametric regression models with high-dimensional parametric coefficients

    Zhu, Ying
    We consider a two-step projection-based Lasso procedure for estimating a partially linear regression model where the number of coefficients in the linear component can exceed the sample size and these coefficients belong to the $l_{q}$-“balls” for $q\in[0,1]$. Our theoretical results regarding the properties of the estimators are nonasymptotic. In particular, we establish a new nonasymptotic “oracle” result: Although the error of the nonparametric projection per se (with respect to the prediction norm) has the scaling $t_{n}$ in the first step, it only contributes a scaling $t_{n}^{2}$ in the $l_{2}$-error of the second-step estimator for the linear coefficients. This new...
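    A Robinson-style sketch of the two-step idea for y = Xb + g(Z) + e: project y and each column of X on Z nonparametrically (kernel ridge is an arbitrary stand-in for the paper's projection), then run the Lasso on the residuals. Tuning constants are illustrative:

      import numpy as np
      from sklearn.kernel_ridge import KernelRidge
      from sklearn.linear_model import Lasso

      def two_step_plm(X, Z, y, lam=0.1):
          # Step 1: nonparametric projections on Z; step 2: Lasso on residuals.
          proj = lambda t: KernelRidge(kernel="rbf").fit(Z, t).predict(Z)
          y_res = y - proj(y)
          X_res = X - np.column_stack([proj(X[:, j]) for j in range(X.shape[1])])
          return Lasso(alpha=lam).fit(X_res, y_res).coef_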

  18. On the contraction properties of some high-dimensional quasi-posterior distributions

    Atchadé, Yves A.
    We study the contraction properties of a quasi-posterior distribution $\check{\Pi}_{n,d}$ obtained by combining a quasi-likelihood function and a sparsity-inducing prior distribution on $\mathbb{R}^{d}$, as both $n$ (the sample size) and $d$ (the dimension of the parameter) increase. We derive some general results that highlight a set of sufficient conditions under which $\check{\Pi}_{n,d}$ puts increasingly high probability on sparse subsets of $\mathbb{R}^{d}$, and contracts toward the true value of the parameter. We apply these results to the analysis of logistic regression models, and binary graphical models, in high-dimensional settings. For the logistic regression model, we show that for well-behaved design...
