Resource details


For the problem of high-dimensional sparse linear regression, it is known that an $\ell_{0}$-based estimator can achieve a $1/n$ “fast” rate for prediction error without any conditions on the design matrix, whereas in the absence of restrictive conditions on the design matrix, popular polynomial-time methods can only guarantee the $1/\sqrt{n}$ “slow” rate. In this paper, we show that the slow rate is intrinsic to a broad class of M-estimators. In particular, for estimators based on minimizing a least-squares cost function together with a (possibly nonconvex) coordinate-wise separable regularizer, there is always a “bad” local optimum such that the associated prediction error is lower bounded by a constant multiple of $1/\sqrt{n}$. For convex regularizers, this lower bound applies to all global optima. The theory applies to many popular estimators, including convex $\ell_{1}$-based methods as well as M-estimators based on nonconvex regularizers such as the SCAD penalty and the MCP regularizer. In addition, we show that bad local optima are very common, in that a broad class of local minimization algorithms with random initialization typically converges to a bad solution.
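As a concrete illustration of the quantity the abstract discusses, the following minimal sketch (not taken from the paper, and not the authors' experimental setup) fits a convex $\ell_{1}$-based M-estimator (the lasso) on a synthetic sparse regression instance and reports the in-sample prediction error $\|X(\hat{\beta}-\beta^{*})\|_{2}^{2}/n$ alongside the $\sqrt{\log p / n}$ "slow-rate" scale; all parameter choices are illustrative assumptions.

```python
# Hypothetical example: prediction error of a coordinate-separable M-estimator
# (here the lasso) on a synthetic k-sparse linear regression problem.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, k, sigma = 200, 1000, 5, 1.0           # samples, dimension, sparsity, noise level

X = rng.standard_normal((n, p))              # generic (unstructured) design matrix
beta_star = np.zeros(p)
beta_star[:k] = 1.0                          # k-sparse ground truth
y = X @ beta_star + sigma * rng.standard_normal(n)

# Regularization on the order of sigma * sqrt(log(p) / n), the usual lasso choice.
lam = sigma * np.sqrt(np.log(p) / n)
beta_hat = Lasso(alpha=lam, fit_intercept=False, max_iter=50_000).fit(X, y).coef_

# In-sample prediction error ||X (beta_hat - beta_star)||^2 / n.
pred_err = np.sum((X @ (beta_hat - beta_star)) ** 2) / n
print(f"prediction error: {pred_err:.4f}  (slow-rate scale ~ {lam:.4f})")
```

On well-conditioned random designs such as this one the lasso can do much better than the slow rate; the paper's lower bounds concern worst-case designs, where such coordinate-separable estimators admit bad optima whose prediction error is of order $1/\sqrt{n}$.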

Belongs to

Project Euclid (Hosted at Cornell University Library)  


Zhang, Yuchen - Wainwright, Martin J. - Jordan, Michael I.

Id.: 69702381

Language: English

Version: 1.0

Status: Final

Type: application/pdf

Keywords: Sparse linear regression

Resource type: Text

Interactivity type: Expository

Interactivity level: very low

Audience: Student - Teacher - Author

Structure: Atomic

Cost: no

Copyright: yes

Copyright 2017 The Institute of Mathematical Statistics and the Bernoulli Society

Formats: application/pdf

Technical requirements: Browser: Any

Relation: [References] 1935-7524

Contribution date: 28-Aug-2017


* Electron. J. Statist. 11, no. 1 (2017), 752-799
* doi:10.1214/17-EJS1233

Other resources by the same author(s)

  1. Support recovery without incoherence: A case for nonconvex regularization We develop a new primal-dual witness proof framework that may be used to establish variable selectio...
  2. Latent Marked Poisson Process with Applications to Object Segmentation In difficult object segmentation tasks, utilizing image information alone is not sufficient; incorpo...
  3. Measuring Cluster Stability for Bayesian Nonparametrics Using the Linear Bootstrap 9 pages, NIPS 2017 Advances in Approximate Bayesian Inference Workshop
  4. Bayesian Nonparametric Inference of Switching Linear Dynamical Systems Many complex dynamical phenomena can be effectively modeled by a system that switches among a set of...
  5. A Sticky HDP-HMM With Application to Speaker Diarization We consider the problem of speaker diarization, the problem of segmenting an audio recording of a me...

Other resources from the same collection

  1. Exact post-selection inference for the generalized lasso path We study tools for inference conditioned on model selection events that are defined by the generaliz...
  2. Feasible invertibility conditions and maximum likelihood estimation for observation-driven models Invertibility conditions for observation-driven time series models often fail to be guaranteed in em...
  3. Ridge regression for the functional concurrent model The aim of this paper is to propose estimators of the unknown functional coefficients in the Functio...
  4. Supervised dimensionality reduction via distance correlation maximization In our work, we propose a novel formulation for supervised dimensionality reduction based on a nonli...
  5. Least tail-trimmed absolute deviation estimation for autoregressions with infinite/finite variance We propose least tail-trimmed absolute deviation estimation for autoregressive processes with infini...
