Showing items 1 - 20 of 1,659

  1. A Conversation with Lynne Billard

    Mukhopadhyay, Nitis
    Lynne Billard was born in Toowoomba, Australia. She earned her B.Sc. (Honors I) in 1966, and a Ph.D. degree in 1969, both from the University of New South Wales, Australia. She is perhaps best known for her groundbreaking research in the areas of HIV/AIDS and Symbolic Data Analysis. Broadly put, Professor Billard’s research interests include epidemic theory, stochastic processes, sequential analysis, time series analysis and symbolic data. She has written extensively in all these areas and more through numerous fundamental contributions. She has published more than 200 research papers in some of the leading international journals, including Australian Journal...

  2. A Conversation with Robert Groves

    Habermann, Hermann; Kennedy, Courtney; Lahiri, Partha
    Professor Robert M. Groves has been among the world leaders in survey methodology and survey statistics for the past four decades. Groves’ research (particularly on survey nonresponse, survey errors and costs, and responsive design) helped to provide intellectual footing for a new academic discipline. In addition, Groves has had remarkable success building academic programs that integrate the social sciences with statistics and computer science. He was instrumental in the development of degree programs in survey methodology at the University of Michigan and the University of Maryland. Recently, as Provost of Georgetown University, he has championed the use of big data sets to increase...

  3. Forecaster’s Dilemma: Extreme Events and Forecast Evaluation

    Lerch, Sebastian; Thorarinsdottir, Thordis L.; Ravazzolo, Francesco; Gneiting, Tilmann
    In public discussions of the quality of forecasts, attention typically focuses on the predictive performance in cases of extreme events. However, the restriction of conventional forecast evaluation methods to subsets of extreme observations has unexpected and undesired effects, and is bound to discredit skillful forecasts when the signal-to-noise ratio in the data generating process is low. Conditioning on outcomes is incompatible with the theoretical assumptions of established forecast evaluation methods, thereby confronting forecasters with what we refer to as the forecaster’s dilemma. For probabilistic forecasts, proper weighted scoring rules have been proposed as decision-theoretically justifiable alternatives for forecast evaluation with...
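
    As a concrete illustration of such a rule, the following minimal Python sketch (illustrative only; the Gaussian forecasts, weight function, threshold and observation are assumptions, not the paper's examples) computes a threshold-weighted CRPS, which focuses evaluation on the upper tail while remaining proper:

      # Threshold-weighted CRPS: twCRPS(F, y) = integral of
      # (F(z) - 1{y <= z})^2 * w(z) dz, with tail weight w(z) = 1{z >= r}.
      import numpy as np
      from scipy.integrate import trapezoid
      from scipy.stats import norm

      def tw_crps(mu, sigma, y, r, grid=np.linspace(-20.0, 20.0, 4001)):
          F = norm.cdf(grid, loc=mu, scale=sigma)   # forecast CDF on the grid
          ind = (y <= grid).astype(float)           # indicator 1{y <= z}
          w = (grid >= r).astype(float)             # weight on the upper tail
          return trapezoid(w * (F - ind) ** 2, grid)

      # A climatological N(0,1) forecast scores worse in the tail (r = 2)
      # than a skilful N(3,1) forecast for the observation y = 3.2.
      print(tw_crps(0.0, 1.0, y=3.2, r=2.0))
      print(tw_crps(3.0, 1.0, y=3.2, r=2.0))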

  4. On the Sensitivity of the Lasso to the Number of Predictor Variables

    Flynn, Cheryl J.; Hurvich, Clifford M.; Simonoff, Jeffrey S.
    The Lasso is a computationally efficient regression regularization procedure that can produce sparse estimators when the number of predictors $(p)$ is large. Oracle inequalities provide probability loss bounds for the Lasso estimator at a deterministic choice of the regularization parameter. These bounds tend to zero if $p$ is appropriately controlled, and are thus commonly cited as theoretical justification for the Lasso and its ability to handle high-dimensional settings. Unfortunately, in practice the regularization parameter is not selected to be a deterministic quantity, but is instead chosen using a random, data-dependent procedure. To address this shortcoming of previous theoretical work, we...
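
    To make the contrast concrete, here is a small illustrative sketch (not the paper's experiments; the data, fixed penalty and dimensions are assumptions) comparing a deterministic regularization parameter with the random, cross-validated choice as the number of predictors grows:

      # Fixed (deterministic) lambda versus the data-dependent choice made
      # by cross-validation, as p grows with a fixed sparse signal.
      import numpy as np
      from sklearn.linear_model import Lasso, LassoCV

      rng = np.random.default_rng(0)
      n, p_active = 100, 5
      for p in (10, 100, 500):
          X = rng.standard_normal((n, p))
          beta = np.zeros(p)
          beta[:p_active] = 2.0                    # sparse true coefficients
          y = X @ beta + rng.standard_normal(n)
          fixed = Lasso(alpha=0.1).fit(X, y)       # deterministic parameter
          cv = LassoCV(cv=5).fit(X, y)             # random, data-dependent
          print(p, round(cv.alpha_, 4),
                int(np.sum(cv.coef_ != 0)), int(np.sum(fixed.coef_ != 0)))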

  5. Leave Pima Indians Alone: Binary Regression as a Benchmark for Bayesian Computation

    Chopin, Nicolas; Ridgway, James
    Whenever a new approach to perform Bayesian computation is introduced, a common practice is to showcase this approach on a binary regression model and datasets of moderate size. This paper discusses to what extent this practice is sound. It also reviews the current state of the art of Bayesian computation, using binary regression as a running example. Both sampling-based algorithms (importance sampling, MCMC and SMC) and fast approximations (Laplace, VB and EP) are covered. Extensive numerical results are provided, and are used to make recommendations to both end users and Bayesian computation experts. Implications for other problems (variable selection) and...
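
    As a pointer to what the fast approximations look like in practice, here is a minimal sketch of a Laplace approximation for Bayesian logistic regression with a $N(0, s^2 I)$ prior (the prior scale, data and optimizer are illustrative assumptions; the paper benchmarks many methods and datasets):

      # Laplace approximation: Gaussian centred at the posterior mode, with
      # covariance from the (approximate) inverse Hessian at the mode.
      import numpy as np
      from scipy.optimize import minimize

      def neg_log_post(beta, X, y, s2=25.0):
          eta = X @ beta
          loglik = np.sum(y * eta - np.logaddexp(0.0, eta))  # Bernoulli-logit
          logprior = -0.5 * beta @ beta / s2                 # N(0, s2 I) prior
          return -(loglik + logprior)

      rng = np.random.default_rng(1)
      X = rng.standard_normal((200, 3))
      true_beta = np.array([1.0, -2.0, 0.5])
      y = (rng.random(200) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)

      res = minimize(neg_log_post, np.zeros(3), args=(X, y), method="BFGS")
      print("mode:", res.x)                           # centre of the Gaussian
      print("sd:  ", np.sqrt(np.diag(res.hess_inv)))  # approximate posterior sds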

  6. Consistency of the MLE under Mixture Models

    Chen, Jiahua
    The large-sample properties of likelihood-based statistical inference under mixture models have received much attention from statisticians. Although the consistency of the nonparametric MLE is regarded as a standard conclusion, many researchers ignore the precise conditions required on the mixture model. An incorrect claim of consistency can lead to false conclusions even if the mixture model under investigation seems well behaved. Under a finite normal mixture model, for instance, the consistency of the plain MLE is often erroneously assumed in spite of recent research breakthroughs. This paper streamlines the consistency results for the nonparametric MLE in general, and in particular for...
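
    The standard counterexample behind this warning (well known in the mixture literature; the notation here is illustrative) is the unbounded likelihood of a finite normal mixture with component-specific variances,

      $$f(x) = \frac{\pi}{\sigma_1}\,\phi\!\left(\frac{x - \mu_1}{\sigma_1}\right) + \frac{1 - \pi}{\sigma_2}\,\phi\!\left(\frac{x - \mu_2}{\sigma_2}\right).$$

    Setting $\mu_1 = x_1$ and letting $\sigma_1 \to 0$ sends the factor $f(x_1)$ of the likelihood $\prod_i f(x_i)$ to infinity, while the second component keeps every other factor bounded away from zero; the unrestricted ("plain") MLE therefore degenerates and cannot be consistent without constraints or a penalty on the variances.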

  7. You Just Keep on Pushing My Love over the Borderline: A Rejoinder

    Simpson, Daniel; Rue, Håvard; Riebler, Andrea; Martins, Thiago G.; Sørbye, Sigrunn H.

  8. Toward Automated Prior Choice

    Dunson, David B.

  9. How Principled and Practical Are Penalised Complexity Priors?

    Robert, Christian P.; Rousseau, Judith

  10. Swinging for the Fence in a League Where Everyone Bunts

    Hodges, James S.

  11. Prior Specification Is Engineering, Not Mathematics

    Scott, James G.

  12. Penalising Model Component Complexity: A Principled, Practical Approach to Constructing Priors

    Simpson, Daniel; Rue, Håvard; Riebler, Andrea; Martins, Thiago G.; Sørbye, Sigrunn H.
    In this paper, we introduce a new concept for constructing prior distributions. We exploit the natural nested structure inherent to many model components, which defines the model component to be a flexible extension of a base model. Proper priors are defined to penalise the complexity induced by deviating from the simpler base model and are formulated after the input of a user-defined scaling parameter for that model component, both in the univariate and the multivariate case. These priors are invariant to reparameterisations, have a natural connection to Jeffreys’ priors, are designed to support Occam’s razor and seem to have excellent...
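
    In compressed form (a sketch following the paper's construction; here $f(\cdot \mid \xi)$ is the flexible model, $\xi = 0$ the base model, and $Q$, $U$, $\alpha$ the user-facing scaling quantities), complexity is measured by a distance from the base model and given a memoryless prior:

      $$d(\xi) = \sqrt{2\,\mathrm{KLD}\bigl(f(\cdot \mid \xi)\,\|\,f(\cdot \mid 0)\bigr)}, \qquad \pi(d) = \lambda e^{-\lambda d},$$

    with $\lambda$ fixed by a user-specified tail condition such as $P(Q(\xi) > U) = \alpha$ and the prior on $\xi$ obtained by change of variables; the exponential form encodes a constant rate of penalisation for added complexity.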

  13. A Conversation with Samad Hedayat

    Martin, Ryan; Stufken, John; Yang, Min
    A. Samad Hedayat was born on July 11, 1937, in Jahrom, Iran. He finished his undergraduate education in bioengineering with honors from the University of Tehran in 1962 and came to the U.S. to study statistics at Cornell, completing his Ph.D. in 1969. Just a few years later, in 1974, Samad accepted a full professor position at the University of Illinois at Chicago Circle—now called University of Illinois at Chicago (UIC)—and was named UIC Distinguished Professor in 2003. He was an early leader in the Department of Mathematics, Statistics and Computer Science and he remains a driving force to this...

  14. A Conversation with Jeff Wu

    Chipman, Hugh A.; Joseph, V. Roshan
    Chien-Fu Jeff Wu was born January 15, 1949, in Taiwan. He earned a B.Sc. in Mathematics from National Taiwan University in 1971, and a Ph.D. in Statistics from the University of California, Berkeley in 1976. He has been a faculty member at the University of Wisconsin, Madison (1977–1988), the University of Waterloo (1988–1993), the University of Michigan (1995–2003; department chair 1995–8) and currently is the Coca-Cola Chair in Engineering Statistics and Professor in the H. Milton Stewart School of Industrial and Systems Engineering at the Georgia Institute of Technology. He is known for his work on the convergence of the...

  15. Multiple Change-Point Detection: A Selective Overview

    Niu, Yue S.; Hao, Ning; Zhang, Heping
    Very long and noisy sequence data arise in fields ranging from the biological sciences to the social sciences, including high-throughput data in genomics and stock prices in econometrics. Often such data are collected in order to identify and understand shifts in trends, for example, from a bull market to a bear market in finance or from a normal number of chromosome copies to an excessive number of chromosome copies in genetics. Thus, identifying multiple change points in a long, possibly very long, sequence is an important problem. In this article, we review both classical and new multiple change-point detection strategies. Considering the long history...
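
    As an example of the classical strategies such a review covers, here is a minimal binary-segmentation sketch with a CUSUM statistic for changes in mean (Gaussian noise with unit variance and the stopping threshold are illustrative assumptions):

      # Binary segmentation: recursively split the sequence wherever the
      # standardised CUSUM statistic for a mean shift exceeds a threshold.
      import numpy as np

      def cusum_stat(x):
          n = len(x)
          k = np.arange(1, n)
          left = np.cumsum(x)[:-1]                       # partial sums S_k
          stat = np.abs(left - k * x.sum() / n) / np.sqrt(k * (n - k) / n)
          j = int(np.argmax(stat))
          return stat[j], j + 1                          # value, split index

      def binary_segmentation(x, lo=0, threshold=4.0, found=None):
          if found is None:
              found = []
          if len(x) < 2:
              return found
          stat, k = cusum_stat(x)
          if stat > threshold:
              found.append(lo + k)                       # estimated change point
              binary_segmentation(x[:k], lo, threshold, found)
              binary_segmentation(x[k:], lo + k, threshold, found)
          return sorted(found)

      rng = np.random.default_rng(2)
      x = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 80),
                          rng.normal(1, 1, 120)])
      print(binary_segmentation(x))                      # expect roughly [100, 180]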

  16. A Review and Comparison of Age–Period–Cohort Models for Cancer Incidence

    Smith, Theresa R.; Wakefield, Jon
    Age–period–cohort models have been used to examine and forecast cancer incidence and mortality for over three decades. However, the fitting and interpretation of these models requires great care because of the well-known identifiability problem: given any two of age, period, and cohort, the third is determined. In this paper, we review the identifiability problem and models that have been proposed for analysis, from both frequentist and Bayesian standpoints. A number of recent analyses that use age–period–cohort models are described and critiqued before data on cancer incidence in Washington State are analyzed with various models, including a new Bayesian...
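
    The identifiability problem is simple to state: since cohort is determined by period and age, $c = p - a$, linear trends can be moved freely among the three effects. For a linear predictor $\eta_{ap} = \mu + \alpha_a + \beta_p + \gamma_c$ (notation illustrative) and any constant $\delta$, the transformation

      $$\alpha_a \mapsto \alpha_a + \delta a, \qquad \beta_p \mapsto \beta_p - \delta p, \qquad \gamma_c \mapsto \gamma_c + \delta c$$

    leaves $\eta_{ap}$ unchanged, because $\delta(a - p + c) = 0$; only nonlinear (curvature) contrasts of the three effects are identified, which is why fitting and interpretation demand the care the authors describe.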

  17. Bayes, Reproducibility and the Quest for Truth

    Fraser, D. A. S.; Bédard, M.; Wong, A.; Lin, Wei; Fraser, A. M.
    We consider the use of default priors in the Bayes methodology for seeking information concerning the true value of a parameter. By default prior, we mean the mathematical prior as initiated by Bayes [Philos. Trans. R. Soc. Lond. 53 (1763) 370–418] and pursued by Laplace [Théorie Analytique des Probabilités (1812) Courcier], Jeffreys [Theory of Probability (1961) Clarendon Press], Bernardo [J. Roy. Statist. Soc. Ser. B 41 (1979) 113–147] and many more, and then recently viewed as “potentially dangerous” [Science 340 (2013) 1177–1178] and “potentially useful” [Science 341 (2013) 1452]. We do not mean, however, the genuine prior [Science 340 (2013)...

  18. Chaos Communication: A Case of Statistical Engineering

    Lawrance, Anthony J.
    The paper gives a statistically focused selective view of chaos-based communication which uses segments of noise-like chaotic waves as carriers of messages, replacing the traditional sinusoidal radio waves. The presentation concerns joint statistical and dynamical modelling of the binary communication system known as “chaos shift-keying”, representative of the area, and leverages the statistical properties of chaos. Practically, such systems apply to both wireless and optical laser communication channels. Theoretically, the chaotic waves are generated iteratively by chaotic maps, and practically, by electronic circuits or lasers. Both single-user and multiple-user systems are covered. The focus is on likelihood-based decoding of messages,...
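
    As a toy illustration only (this is a coherent correlation decoder with a known reference, a simplification of the likelihood-based decoding the paper studies; the map, segment length and noise level are assumptions):

      # Antipodal chaos shift-keying: each bit b in {-1, +1} modulates a
      # segment of a chaotic logistic-map wave; the receiver, assumed here to
      # know the reference segment, decodes by the sign of the correlation.
      import numpy as np

      def logistic_segment(x0, length):
          x = np.empty(length)
          x[0] = x0
          for t in range(1, length):
              x[t] = 4.0 * x[t - 1] * (1.0 - x[t - 1])   # chaotic logistic map
          return x - 0.5                                  # roughly zero-mean carrier

      rng = np.random.default_rng(3)
      bits = rng.choice([-1, 1], size=50)
      seg_len, noise_sd = 32, 0.3

      decoded = []
      for b in bits:
          carrier = logistic_segment(rng.uniform(0.01, 0.99), seg_len)
          received = b * carrier + rng.normal(0.0, noise_sd, seg_len)
          decoded.append(np.sign(received @ carrier))     # correlation decoder

      print("bit error rate:", np.mean(np.array(decoded) != bits))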

  19. Rejoinder: Concert Unlikely, “Jugalbandi” Perhaps

    Singpurwalla, Nozer D.
    This rejoinder to the discussants of “Filtering and Tracking Survival Propensity” begins with a brief history of the statistical aspects of reliability and its impact on survival analysis and responds to the several issues raised by the discussants, some of which are conceptual and some pragmatic.

  20. Reconciling the Subjective and Objective Aspects of Probability

    Shafer, Glenn
    Since the early nineteenth century, the concept of objective probability has been dynamic. As we recognize this history, we can strengthen Professor Nozer Singpurwalla’s vision of reliability and survival analysis by aligning it with earlier conceptions elaborated by Laplace, Borel, Kolmogorov, Ville and Neyman. By emphasizing testing and recognizing the generality of the vision of Kolmogorov and Neyman, we gain a perspective that does not rely on exchangeability.
