Showing resources 1 - 20 of 1,647

  1. A Conversation with Samad Hedayat

    Martin, Ryan; Stufken, John; Yang, Min
    A. Samad Hedayat was born on July 11, 1937, in Jahrom, Iran. He finished his undergraduate education in bioengineering with honors from the University of Tehran in 1962 and came to the U.S. to study statistics at Cornell, completing his Ph.D. in 1969. Just a few years later, in 1974, Samad accepted a full professor position at the University of Illinois at Chicago Circle—now called University of Illinois at Chicago (UIC)—and was named UIC Distinguished Professor in 2003. He was an early leader in the Department of Mathematics, Statistics and Computer Science and he remains a driving force to this...

  2. A Conversation with Jeff Wu

    Chipman, Hugh A.; Joseph, V. Roshan
    Chien-Fu Jeff Wu was born January 15, 1949, in Taiwan. He earned a B.Sc. in Mathematics from National Taiwan University in 1971, and a Ph.D. in Statistics from the University of California, Berkeley in 1976. He has been a faculty member at the University of Wisconsin, Madison (1977–1988), the University of Waterloo (1988–1993), the University of Michigan (1995–2003; department chair 1995–1998) and currently is the Coca-Cola Chair in Engineering Statistics and Professor in the H. Milton Stewart School of Industrial and Systems Engineering at the Georgia Institute of Technology. He is known for his work on the convergence of the...

  3. Multiple Change-Point Detection: A Selective Overview

    Niu, Yue S.; Hao, Ning; Zhang, Heping
    Very long and noisy sequence data arise in fields ranging from the biological sciences to the social sciences, including high-throughput data in genomics and stock prices in econometrics. Often such data are collected in order to identify and understand shifts in trends, for example, from a bull market to a bear market in finance or from a normal number of chromosome copies to an excessive number of chromosome copies in genetics. Thus, identifying multiple change points in a long, possibly very long, sequence is an important problem. In this article, we review both classical and new multiple change-point detection strategies. Considering the long history...
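    A minimal sketch of one classical strategy of the kind reviewed here, binary segmentation driven by a CUSUM statistic for mean shifts, is given below; the threshold and the simulated data are illustrative choices of ours, not the authors' method.

    ```python
    import numpy as np

    def cusum_split(x):
        """Return the max standardised CUSUM statistic and its split point."""
        n = len(x)
        s = np.cumsum(x)
        k = np.arange(1, n)  # candidate change points
        # |difference in means| before/after k, standardised to unit variance
        stat = np.sqrt(n / (k * (n - k))) * np.abs(s[k - 1] - (k / n) * s[-1])
        j = int(np.argmax(stat))
        return stat[j], j + 1

    def binary_segmentation(x, lo=0, hi=None, thresh=3.0, found=None):
        """Recursively split wherever the CUSUM statistic exceeds thresh."""
        if found is None:
            found = []
        if hi is None:
            hi = len(x)
        if hi - lo < 2:
            return found
        stat, k = cusum_split(x[lo:hi])
        if stat > thresh:
            cp = lo + k
            found.append(cp)
            binary_segmentation(x, lo, cp, thresh, found)
            binary_segmentation(x, cp, hi, thresh, found)
        return sorted(found)

    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(0, 1, 100),
                        rng.normal(2, 1, 100),
                        rng.normal(0, 1, 100)])
    print(binary_segmentation(x))  # change points detected near 100 and 200
    ```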

  4. A Review and Comparison of Age–Period–Cohort Models for Cancer Incidence

    Smith, Theresa R.; Wakefield, Jon
    Age–period–cohort models have been used to examine and forecast cancer incidence and mortality for over three decades. However, the fitting and interpretation of these models requires great care because of the well-known identifiability problem that exists; given any two of age, period, and cohort, the third is determined. In this paper, we review the identifiability problem and models that have been proposed for analysis, from both frequentist and Bayesian standpoints. A number of recent analyses that use age–period–cohort models are described and critiqued before data on cancer incidence in Washington State are analyzed with various models, including a new Bayesian...
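    The identifiability problem described above can be stated in one line. In a common formulation (notation ours, not necessarily the authors'), the log rate for age group a and period p is

    ```latex
    \log \lambda_{ap} = \mu + f(a) + g(p) + h(c), \qquad c = p - a,
    ```

    and because c = p − a, the shifted effects f(a) + δa, g(p) − δp, h(c) + δc yield exactly the same fitted rates for every δ; only the curvature (nonlinear) components of the three effects are identified, which is why extra constraints or priors are needed.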

  5. Bayes, Reproducibility and the Quest for Truth

    Fraser, D. A. S.; Bédard, M.; Wong, A.; Lin, Wei; Fraser, A. M.
    We consider the use of default priors in the Bayes methodology for seeking information concerning the true value of a parameter. By default prior, we mean the mathematical prior as initiated by Bayes [Philos. Trans. R. Soc. Lond. 53 (1763) 370–418] and pursued by Laplace [Théorie Analytique des Probabilités (1812) Courcier], Jeffreys [Theory of Probability (1961) Clarendon Press], Bernardo [J. Roy. Statist. Soc. Ser. B 41 (1979) 113–147] and many more, and then recently viewed as “potentially dangerous” [Science 340 (2013) 1177–1178] and “potentially useful” [Science 341 (2013) 1452]. We do not mean, however, the genuine prior [Science 340 (2013)...

  6. Chaos Communication: A Case of Statistical Engineering

    Lawrance, Anthony J.
    The paper gives a statistically focused selective view of chaos-based communication which uses segments of noise-like chaotic waves as carriers of messages, replacing the traditional sinusoidal radio waves. The presentation concerns joint statistical and dynamical modelling of the binary communication system known as “chaos shift-keying”, representative of the area, and leverages the statistical properties of chaos. Practically, such systems apply to both wireless and optical laser communication channels. Theoretically, the chaotic waves are generated iteratively by chaotic maps, and practically, by electronic circuits or lasers. Both single-user and multiple-user systems are covered. The focus is on likelihood-based decoding of messages,...
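    A toy simulation of the antipodal, coherently decoded form of chaos shift-keying may help fix ideas; the logistic map, spreading factor and noise level below are illustrative choices of ours, not the system analysed in the paper.

    ```python
    import numpy as np

    def logistic_map(x0, n):
        """Iterate the chaotic logistic map x -> 4x(1 - x) on (0, 1)."""
        x = np.empty(n)
        x[0] = x0
        for t in range(1, n):
            x[t] = 4.0 * x[t - 1] * (1.0 - x[t - 1])
        return x

    rng = np.random.default_rng(1)
    bits = rng.choice([-1, 1], size=200)              # antipodal binary message
    N = 16                                            # chaos samples per bit
    carrier = logistic_map(0.3, bits.size * N) - 0.5  # centred chaotic carrier

    tx = np.repeat(bits, N) * carrier                 # each bit modulates a segment
    rx = tx + rng.normal(0.0, 0.3, tx.size)           # additive channel noise

    # Coherent decoding: correlate each received segment with the known carrier
    corr = (rx * carrier).reshape(bits.size, N).sum(axis=1)
    decoded = np.where(corr >= 0, 1, -1)
    print("bit error rate:", np.mean(decoded != bits))
    ```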

  7. Rejoinder: Concert Unlikely, “Jugalbandi” Perhaps

    Singpurwalla, Nozer D.
    This rejoinder to the discussants of “Filtering and Tracking Survival Propensity” begins with a brief history of the statistical aspects of reliability and its impact on survival analysis, and responds to the several issues raised by the discussants, some of which are conceptual and some pragmatic.

  8. Reconciling the Subjective and Objective Aspects of Probability

    Shafer, Glenn
    Since the early nineteenth century, the concept of objective probability has been dynamic. As we recognize this history, we can strengthen Professor Nozer Singpurwalla’s vision of reliability and survival analysis by aligning it with earlier conceptions elaborated by Laplace, Borel, Kolmogorov, Ville and Neyman. By emphasizing testing and recognizing the generality of the vision of Kolmogorov and Neyman, we gain a perspective that does not rely on exchangeability.

  9. What Does “Propensity” Add?

    Hutton, Jane
    Singpurwalla addresses the important challenge of modelling a unique individual. He proposes “propensity” as an approach to describing the reliability or lifetime of “one of a kind”. My view is that mathematical modelling is only possible when we assume that nonunique features provide sufficient information for statistical prediction to be useful. As far as possible, we should test our assumptions. However, contrary to a popular perception of Hume, we always rely on some beliefs.

  10. How About Wearing Two Hats, First Popper’s and then de Finetti’s?

    Arjas, Elja

  11. On Software and System Reliability Growth and Testing

    Coolen, Frank P. A.
    Singpurwalla presents an insightful proposal on the foundations of reliability [Statist. Sci. 31 (2016) 521–540], suggesting that reliability be considered not as a probability but as a propensity, in particular as the unobservable parameter in De Finetti’s famous representation theorem. One specific issue considered is reliability growth, with the performance of software as it evolves over time as an example scenario. We briefly discuss some related aspects, mainly based on applied research on statistical methods to support software testing and on insights from our research on system reliability.
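    For reference, the representation theorem invoked here states, for an infinite exchangeable sequence of binary outcomes (notation ours):

    ```latex
    P(X_1 = x_1, \dots, X_n = x_n)
      = \int_0^1 \prod_{i=1}^{n} \theta^{x_i} (1 - \theta)^{1 - x_i} \, dF(\theta),
      \qquad x_i \in \{0, 1\},
    ```

    where the mixing distribution F plays the role of a prior and θ, the unobservable limiting relative frequency, is the parameter Singpurwalla proposes to read as a propensity.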

  12. Filtering and Tracking Survival Propensity (Reconsidering the Foundations of Reliability)

    Singpurwalla, Nozer D.
    The work described here was motivated by the need to address a long-standing problem in engineering, namely, the tracking of reliability growth. An archetypal scenario is the performance of software as it evolves over time. Computer scientists are challenged by the task of deciding when to release software. The same is true for complex engineered systems like aircraft, automobiles and ballistic missiles. Tracking problems also arise in actuarial science, biostatistics, cancer research and mathematical finance. A natural approach for addressing such problems is via the control-theory methods of filtering, smoothing and prediction. But to invoke such methods, one needs...
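    The filtering machinery alluded to can be illustrated with the simplest case, a one-dimensional Kalman filter for a local-level model; the linear-Gaussian form and all parameter values below are illustrative assumptions of ours, not the model developed in the paper.

    ```python
    import numpy as np

    # Illustrative local-level model:
    #   state:       theta_t = theta_{t-1} + w_t,  w_t ~ N(0, q)
    #   observation: y_t     = theta_t     + v_t,  v_t ~ N(0, r)
    def kalman_filter(y, q=0.01, r=0.25, m0=0.0, p0=1.0):
        """Return the filtered means E[theta_t | y_1, ..., y_t]."""
        m, p = m0, p0
        means = []
        for yt in y:
            p = p + q                # predict: state noise inflates variance
            gain = p / (p + r)       # Kalman gain
            m = m + gain * (yt - m)  # update mean with the innovation
            p = (1.0 - gain) * p     # update variance
            means.append(m)
        return np.array(means)

    rng = np.random.default_rng(2)
    theta = 1.0 + np.cumsum(rng.normal(0, 0.1, 100))  # slowly drifting true state
    y = theta + rng.normal(0, 0.5, 100)               # noisy observations
    print(kalman_filter(y)[-5:])                      # estimates track theta
    ```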

  13. Rejoinder: Approximate Models and Robust Decisions

    Watson, James; Holmes, Chris

  14. Ambiguity Aversion and Model Misspecification: An Economic Perspective

    Hansen, Lars Peter; Marinacci, Massimo

  15. Nonparametric Bayesian Clay for Robust Decision Bricks

    Robert, Christian P.; Rousseau, Judith
    This note discusses Watson and Holmes [Statist. Sci. 31 (2016) 465–489] and their proposals towards more robust Bayesian decisions. While we acknowledge and commend the authors for setting out new and all-encompassing principles of Bayesian robustness, and while we appreciate the strong anchoring of these within a decision-theoretic framework, we remain uncertain as to what extent such principles can be applied outside binary decisions. We also wonder about the ultimate relevance of Kullback–Leibler neighbourhoods for characterising robustness, and we instead favour extensions along nonparametric axes.

  16. Issues in Robustness Analysis

    Goldstein, Michael
    How may we develop methods of analysis which address the consequences of the mismatch between the formal structural requirements of Bayesian analysis and the actual assessments that are carried out in practice? A paper by Watson and Holmes provides an overview of methods developed to address such issues and makes suggestions as to how such analyses might be carried out. This article adds commentary on the principles and practices which should guide us in such problems.

  17. Selection of KL Neighbourhood in Robust Bayesian Inference

    Bochkina, Natalia A.

  18. Contextuality of Misspecification and Data-Dependent Losses

    Grünwald, Peter
    We elaborate on Watson and Holmes’ observation that misspecification is contextual: a model that is wrong can still be adequate in one prediction context, yet grossly inadequate in another. One can incorporate such phenomena by adopting a generalized posterior, in which the likelihood is multiplied by an exponentiated loss. We argue that Watson and Holmes’ characterization of such generalized posteriors does not really explain their good practical performance, and we provide an alternative explanation which suggests a further extension of the method.
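    One standard way to write such a generalized (Gibbs) posterior, in notation of ours rather than the discussants', is

    ```latex
    \pi_\eta(\theta \mid x_1, \dots, x_n)
      \;\propto\; \pi(\theta) \exp\!\Big( -\eta \sum_{i=1}^{n} \ell(\theta, x_i) \Big),
    ```

    where ℓ is the loss, π the prior and η > 0 a learning rate; taking ℓ to be the negative log-likelihood with η = 1 recovers the ordinary posterior.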

  19. Model Uncertainty First, Not Afterwards

    Glad, Ingrid; Hjort, Nils Lid
    Watson and Holmes propose ways of investigating robustness of statistical decisions by examining certain neighbourhoods around a posterior distribution. This may partly amount to ad hoc modelling of extra uncertainty. Instead of creating neighbourhoods around the posterior a posteriori, we argue that it might be more fruitful to model a layer of extra uncertainty first, in the model building process, and then allow the data to determine how big the resulting neighbourhoods ought to be. We develop and briefly illustrate a general strategy along such lines.

  20. Approximate Models and Robust Decisions

    Watson, James; Holmes, Chris
    Decisions based partly or solely on predictions from probabilistic models may be sensitive to model misspecification. Statisticians are taught from an early stage that “all models are wrong, but some are useful”; however, little formal guidance exists on how to assess the impact of model approximation on decision making, or how to proceed when optimal actions appear sensitive to model fidelity. This article presents an overview of recent developments across different disciplines to address this. We review diagnostic techniques, including graphical approaches and summary statistics, to help highlight decisions made through minimised expected loss that are sensitive to model misspecification....
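    A compact way to state the kind of sensitivity being diagnosed, in our notation rather than a formula from the paper, is to compare the nominal optimal action with the worst-case expected loss over a Kullback–Leibler ball around the posterior:

    ```latex
    a^{*} = \arg\min_{a} \ \mathbb{E}_{\pi(\theta \mid x)} \big[ L(a, \theta) \big],
    \qquad
    L_{\mathrm{worst}}(a; \varepsilon)
      = \sup_{q \, : \, \mathrm{KL}(q \,\|\, \pi(\cdot \mid x)) \le \varepsilon}
        \mathbb{E}_{q} \big[ L(a, \theta) \big];
    ```

    an action is flagged as sensitive to misspecification when L_worst(a*; ε) deteriorates rapidly in ε relative to competing actions.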
