
Resource details

Description

For the problem of high-dimensional sparse linear regression, it is known that an $\ell_{0}$-based estimator can achieve a $1/n$ “fast” rate for prediction error without any conditions on the design matrix, whereas in the absence of restrictive conditions on the design matrix, popular polynomial-time methods only guarantee the $1/\sqrt{n}$ “slow” rate. In this paper, we show that the slow rate is intrinsic to a broad class of M-estimators. In particular, for estimators based on minimizing a least-squares cost function together with a (possibly nonconvex) coordinate-wise separable regularizer, there is always a “bad” local optimum such that the associated prediction error is lower bounded by a constant multiple of $1/\sqrt{n}$. For convex regularizers, this lower bound applies to all global optima. The theory is applicable to many popular estimators, including convex $\ell_{1}$-based methods as well as M-estimators based on nonconvex regularizers, including the SCAD penalty or the MCP regularizer. In addition, we show that bad local optima are very common, in that a broad class of local minimization algorithms with random initialization typically converge to a bad solution.
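In symbols (a minimal sketch, using generic notation $y \in \mathbb{R}^{n}$, $X \in \mathbb{R}^{n \times d}$, $\rho_{j}$, which need not match the paper's exact conventions), the coordinate-wise separable M-estimators under study take the form
$$\widehat{\theta} \in \arg\min_{\theta \in \mathbb{R}^{d}} \Big\{ \frac{1}{2n}\,\lVert y - X\theta \rVert_{2}^{2} + \sum_{j=1}^{d} \rho_{j}(\theta_{j}) \Big\},$$
where each penalty $\rho_{j}$ may be convex (e.g. the $\ell_{1}$ penalty $\rho_{j}(t) = \lambda \lvert t \rvert$) or nonconvex (e.g. SCAD or MCP). The lower bound then says that, for suitable design matrices, some local optimum $\widetilde{\theta}$ of this objective (every global optimum, when the $\rho_{j}$ are convex) has in-sample prediction error $\frac{1}{n}\lVert X\widetilde{\theta} - X\theta^{*} \rVert_{2}^{2} \geq c/\sqrt{n}$ for a universal constant $c > 0$, in contrast with the $1/n$ rate attainable by $\ell_{0}$-based estimation.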

Part of

Project Euclid (Hosted at Cornell University Library)

Author(s)

Zhang, Yuchen - Wainwright, Martin J. - Jordan, Michael I.

Id.: 69702381

Language: English

Version: 1.0

Status: Final

Type: application/pdf

Keywords: Sparse linear regression

Resource type: Text

Interactivity type: Expository

Interactivity level: very low

Audience: Student - Teacher - Author

Structure: Atomic

Cost: no

Copyright: yes

Copyright 2017 The Institute of Mathematical Statistics and the Bernoulli Society

Formats: application/pdf

Technical requirements: Browser: Any

Relation: [References] ISSN 1935-7524

Contribution date: 30-May-2017

Contact:

Location:
* Electron. J. Statist. 11, no. 1 (2017), 752-799
* doi:10.1214/17-EJS1233

Other resources by the same author(s)

  1. Multiple-sequence functional annotation and the generalized hidden Markov phylogeny Motivation: Phylogenetic shadowing is a comparative genomics principle that allows for the discovery...
  2. Subtree power analysis finds optimal species for comparative genomics Sequence comparison across multiple organisms aids in the detection of regions under selection. Howe...
  3. Statistical guarantees for the EM algorithm: From population to sample-based analysis The EM algorithm is a widely used tool in maximum-likelihood estimation in incomplete data problems....
  4. Decoding from Pooled Data: Sharp Information-Theoretic Bounds Consider a population consisting of n individuals, each of whom has one of d types (e.g. their blood...
  5. On the computational complexity of high-dimensional Bayesian variable selection We study the computational complexity of Markov chain Monte Carlo (MCMC) methods for high-dimensiona...

Other resources in the same collection

  1. SiAM: A hybrid of single index models and additive models While popular, single index models and additive models have potential limitations, a fact that leads...
  2. Bayesian estimation of discretely observed multi-dimensional diffusion processes using guided proposals Estimation of parameters of a diffusion based on discrete time observations poses a difficult proble...
  3. Adaptive variable selection in nonparametric sparse additive models We consider the problem of recovery of an unknown multivariate signal $f$ observed in a $d$-dimensio...
  4. Local optimization-based statistical inference This paper introduces a local optimization-based approach to test statistical hypotheses and to cons...
  5. Optimal exponential bounds for aggregation of estimators for the Kullback-Leibler loss We study the problem of aggregation of estimators with respect to the Kullback-Leibler divergence fo...
