Showing resources 1 - 20 of 1,246

  1. A Conversation with Jayaram Sethuraman

    Hollander, Myles
    Jayaram Sethuraman was born in the town of Hubli in Bombay Province (now Karnataka State) on October 3, 1937. His early years were spent in Hubli and in 1950 his family moved to Madras (now renamed Chennai). He graduated from Madras University in 1957 with a B.Sc. (Hons) degree in statistics and he earned his M.A. degree in statistics from Madras University in 1958. He earned a Ph.D. in statistics from the Indian Statistical Institute in 1962. Before returning to ISI in 1965 as an Associate Professor, he was a Research Associate at the University of North Carolina 1962–1963, at...

  2. Karl Pearson’s Theoretical Errors and the Advances They Inspired

    Stigler, Stephen M.
    Karl Pearson played an enormous role in determining the content and organization of statistical research in his day, through his research, his teaching, his establishment of laboratories, and his initiation of a vast publishing program. His technical contributions had a profound impact upon the work of both applied and theoretical statisticians from the start, and they continue to do so today, partly through their inadequately acknowledged influence upon Ronald A. Fisher. Particular attention is drawn to two of Pearson’s major errors that nonetheless have left a positive and lasting impression upon the statistical world.

  3. Markov Chain Monte Carlo: Can We Trust the Third Significant Figure?

    Flegal, James M.; Haran, Murali; Jones, Galin L.
    Current reporting of results based on Markov chain Monte Carlo computations could be improved. In particular, a measure of the accuracy of the resulting estimates is rarely reported. Thus we have little ability to objectively assess the quality of the reported estimates. We address this issue by discussing why Monte Carlo standard errors are important, how they can be easily calculated in Markov chain Monte Carlo, and how they can be used to decide when to stop the simulation. We compare their use to a popular alternative in the context of two examples.
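    A Monte Carlo standard error of the kind the abstract advocates can be computed by the batch-means method. The sketch below is an illustrative toy, not the authors' implementation: the AR(1) surrogate chain, the batch count, and the seed are all assumptions made here for demonstration.

```python
import random
import math

def batch_means_mcse(chain, n_batches=30):
    """Batch-means estimate of the Monte Carlo standard error of the
    sample mean of a correlated chain. Batch-size choice is a toy
    default here; real guidance depends on chain length and mixing."""
    b = len(chain) // n_batches                       # batch length
    means = [sum(chain[i*b:(i+1)*b]) / b for i in range(n_batches)]
    overall = sum(chain[:n_batches*b]) / (n_batches * b)
    var_b = sum((m - overall) ** 2 for m in means) / (n_batches - 1)
    return math.sqrt(b * var_b / (n_batches * b))

# Toy correlated chain: an AR(1) process with stationary mean 0,
# standing in for the output of an MCMC run.
random.seed(1)
x, chain = 0.0, []
for _ in range(20000):
    x = 0.9 * x + random.gauss(0, 1)
    chain.append(x)

est = sum(chain) / len(chain)
se = batch_means_mcse(chain)
print(f"estimate = {est:.3f} +/- {2*se:.3f} (2 MCSE)")
```

    Reporting the estimate together with the interval `estimate ± 2·MCSE` is exactly the kind of accuracy statement the abstract says is rarely given.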

  4. Randomization Does Not Justify Logistic Regression

    Freedman, David A.
    The logit model is often used to analyze experimental data. However, randomization does not justify the model, so the usual estimators can be inconsistent. A consistent estimator is proposed. Neyman’s non-parametric setup is used as a benchmark. In this setup, each subject has two potential responses, one if treated and the other if untreated; only one of the two responses can be observed. Besides the mathematics, there are simulation results, a brief review of the literature, and some recommendations for practice.
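    Neyman’s setup can be simulated directly: fix two potential responses per subject, randomize, and note that the plain difference in means recovers the average treatment effect without any model. The response probabilities and sample size below are hypothetical, and this sketch shows only the benchmark setup, not the paper's proposed estimator.

```python
import random

random.seed(0)
n = 10000
# Neyman setup: each subject carries two fixed potential responses
# (y0 if untreated, y1 if treated); randomization reveals only one.
y0 = [random.random() < 0.30 for _ in range(n)]
y1 = [random.random() < 0.45 for _ in range(n)]
true_ate = (sum(y1) - sum(y0)) / n

# Completely randomized assignment of half the subjects to treatment.
is_t = [False] * n
for i in random.sample(range(n), n // 2):
    is_t[i] = True

# Difference in means: consistent for the ATE under randomization alone.
t_mean = sum(y1[i] for i in range(n) if is_t[i]) / (n // 2)
c_mean = sum(y0[i] for i in range(n) if not is_t[i]) / (n - n // 2)
print(true_ate, t_mean - c_mean)
```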

  5. Covariate Balance in Simple, Stratified and Clustered Comparative Studies

    Hansen, Ben B.; Bowers, Jake
    In randomized experiments, treatment and control groups should be roughly the same—balanced—in their distributions of pretreatment variables. But how nearly so? Can descriptive comparisons meaningfully be paired with significance tests? If so, should there be several such tests, one for each pretreatment variable, or should there be a single, omnibus test? Could such a test be engineered to give easily computed p-values that are reliable in samples of moderate size, or would simulation be needed for reliable calibration? What new concerns are introduced by random assignment of clusters? Which tests of balance would be optimal?
    To address these questions, Fisher’s randomization...
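    The idea of an omnibus balance test calibrated by the randomization distribution can be sketched as a permutation test. Everything below is a hypothetical construction for illustration: the covariates, the crude sum-of-squared-mean-differences statistic (not the paper's statistic), and the permutation count.

```python
import random

random.seed(2)
n, p = 200, 3
# Hypothetical pretreatment covariates and a genuinely random assignment.
X = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]
z = [1] * (n // 2) + [0] * (n // 2)
random.shuffle(z)

def omnibus(z):
    """Sum over covariates of squared treated/control mean differences:
    a crude omnibus imbalance statistic, combining all covariates into
    one number rather than testing each separately."""
    nt = sum(z)
    stat = 0.0
    for j in range(p):
        mt = sum(X[i][j] for i in range(n) if z[i]) / nt
        mc = sum(X[i][j] for i in range(n) if not z[i]) / (n - nt)
        stat += (mt - mc) ** 2
    return stat

obs = omnibus(z)
# Re-randomize the assignment to trace out the statistic's null
# distribution under the design, then read off a permutation p-value.
perm = []
for _ in range(500):
    random.shuffle(z)
    perm.append(omnibus(z))
pval = sum(s >= obs for s in perm) / len(perm)
print("permutation p-value:", pval)
```

    Because the assignment here really is random, the p-value should be unremarkable; applied to a broken randomization, the same recipe would flag imbalance.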

  6. Formal and Informal Model Selection with Incomplete Data

    Verbeke, Geert; Molenberghs, Geert; Beunckens, Caroline
    Model selection and assessment with incomplete data pose challenges in addition to the ones encountered with complete data. There are two main reasons for this. First, many models describe characteristics of the complete data, in spite of the fact that only an incomplete subset is observed. Direct comparison between model and data is then less than straightforward. Second, many commonly used models are more sensitive to assumptions than in the complete-data situation and some of their properties vanish when they are fitted to incomplete, unbalanced data. These and other issues are brought forward using two key examples, one of a...

  7. Rejoinder: Gibbs Sampling, Exponential Families and Orthogonal Polynomials

    Diaconis, Persi; Khare, Kshitij; Saloff-Coste, Laurent
    We are thankful to the discussants for their hard, interesting work. The main purpose of our paper was to give reasonably sharp rates of convergence for some simple examples of the Gibbs sampler. We chose examples from expository accounts where direct use of available techniques gave practically useless answers. Careful treatment of these simple examples grew into bivariate modeling and Lancaster families. Since bounding rates of convergence is our primary focus, let us begin there.

  8. Comment: On Random Scan Gibbs Samplers

    Levine, Richard A.; Casella, George

  9. Comment: Lancaster Probabilities and Gibbs Sampling

    Letac, Gérard

  10. Comment: Gibbs Sampling, Exponential Families, and Orthogonal Polynomials

    Jones, Galin L.; Johnson, Alicia A.

  11. Comment: Gibbs Sampling, Exponential Families and Orthogonal Polynomials

    Berti, Patrizia; Consonni, Guido; Pratelli, Luca; Rigo, Pietro

  12. Gibbs Sampling, Exponential Families and Orthogonal Polynomials

    Diaconis, Persi; Khare, Kshitij; Saloff-Coste, Laurent
    We give families of examples where sharp rates of convergence to stationarity of the widely used Gibbs sampler are available. The examples involve standard exponential families and their conjugate priors. In each case, the transition operator is explicitly diagonalizable with classical orthogonal polynomials as eigenfunctions.
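    One of the standard exponential-family/conjugate-prior pairs the abstract refers to is the binomial with a beta prior. A minimal two-component Gibbs sampler for that pair can be sketched as follows; running the chain is all this toy does (the spectral analysis of the paper is not reproduced), and the parameter values and chain length are assumptions.

```python
import random

# Two-component Gibbs sampler for the beta-binomial pair:
#   x | p ~ Binomial(m, p),   p | x ~ Beta(x + a, m - x + b),
# whose stationary law is the joint of Binomial(m, p) with p ~ Beta(a, b).
random.seed(3)
m, a, b = 10, 2.0, 2.0

def sample_binomial(m, p):
    # Sum of m Bernoulli(p) draws (simple, not efficient).
    return sum(random.random() < p for _ in range(m))

p, draws = 0.5, []
for _ in range(5000):
    x = sample_binomial(m, p)                      # x-update given p
    p = random.betavariate(x + a, m - x + b)       # p-update given x
    draws.append(p)

# Marginally, p should settle near its Beta(a, b) mean a/(a+b) = 0.5.
print(sum(draws) / len(draws))
```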

  13. The Early Statistical Years: 1947–1967. A Conversation with Howard Raiffa

    Fienberg, Stephen E.
    Howard Raiffa earned his bachelor’s degree in mathematics, his master’s degree in statistics and his Ph.D. in mathematics at the University of Michigan. Since 1957, Raiffa has been a member of the faculty at Harvard University, where he is now the Frank P. Ramsey Chair in Managerial Economics (Emeritus) in the Graduate School of Business Administration and the Kennedy School of Government. A pioneer in the creation of the field known as decision analysis, his research interests span statistical decision theory, game theory, behavioral decision theory, risk analysis and negotiation analysis. Raiffa has supervised more than 90 doctoral dissertations and...

  14. A Conversation with Peter Huber

    Buja, Andreas; Künsch, Hans R.
    Peter J. Huber was born on March 25, 1934, in Wohlen, a small town in the Swiss countryside. He obtained a diploma in mathematics in 1958 and a Ph.D. in mathematics in 1961, both from ETH Zurich. His thesis was in pure mathematics, but he then decided to go into statistics. He spent 1961–1963 as a postdoc at the statistics department in Berkeley where he wrote his first and most famous paper on robust statistics, “Robust Estimation of a Location Parameter.” After a position as a visiting professor at Cornell University, he became a full professor at ETH Zurich. He...

  15. High-Breakdown Robust Multivariate Methods

    Hubert, Mia; Rousseeuw, Peter J.; Van Aelst, Stefan
    When applying a statistical method in practice it often occurs that some observations deviate from the usual assumptions. However, many classical methods are sensitive to outliers. The goal of robust statistics is to develop methods that are robust against the possibility that one or several unannounced outliers may occur anywhere in the data. These methods then make it possible to detect outlying observations by their residuals from a robust fit. We focus on high-breakdown methods, which can deal with a substantial fraction of outliers in the data. We give an overview of recent high-breakdown robust methods for multivariate settings such as covariance...
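    The high-breakdown principle can be illustrated in the simplest univariate case with the median and MAD, each of which has a 50% breakdown point; this is only a one-dimensional analogue of the multivariate methods the survey covers, and the data and cutoff below are made up for demonstration.

```python
import statistics

# Robust location/scale: median and MAD survive up to half the data
# being contaminated, so robust residuals still expose gross outliers
# that would inflate a mean/SD-based rule and mask themselves.
data = [9.8, 10.1, 10.0, 9.9, 10.2, 10.05, 9.95, 55.0, 60.0]

med = statistics.median(data)
mad = statistics.median(abs(x - med) for x in data)
scale = 1.4826 * mad          # consistency factor at the Gaussian model

# Flag observations with large residuals from the robust fit.
outliers = [x for x in data if abs(x - med) / scale > 3.5]
print(outliers)               # the two gross values are flagged
```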

  16. Verbal Autopsy Methods with Multiple Causes of Death

    King, Gary; Lu, Ying
    Verbal autopsy procedures are widely used for estimating cause-specific mortality in areas without medical death certification. Data on symptoms reported by caregivers along with the cause of death are collected from a medical facility, and the cause-of-death distribution is estimated in the population where only symptom data are available. Current approaches analyze only one cause at a time, involve assumptions judged difficult or impossible to satisfy, and require expensive, time-consuming, or unreliable physician reviews, expert algorithms, or parametric statistical models. By generalizing current approaches to analyze multiple causes, we show how most of the difficult assumptions underlying existing methods can...
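    The identity underlying such population-level estimation is that the symptom distribution is a mixture over causes, P(S) = Σ_d P(S | D = d) π_d, with P(S | D) learned at a medical facility and π the unknown cause-of-death fractions. A minimal sketch with two causes and two symptom profiles (all numbers hypothetical, and solving exactly rather than estimating as the paper does):

```python
# Mixture identity: P(S) = sum_d P(S | D = d) * pi_d.
# Rows: symptom profiles; columns: causes (hypothetical cause A, cause B).
P_S_given_D = [[0.7, 0.2],
               [0.3, 0.8]]
P_S = [0.45, 0.55]            # observed in the population (symptoms only)

# Recover the cause fractions pi by solving the 2x2 linear system
# (Cramer's rule; real data would need many profiles and least squares).
(a, b), (c, d) = P_S_given_D
det = a * d - b * c
pi_A = (P_S[0] * d - b * P_S[1]) / det
pi_B = (a * P_S[1] - c * P_S[0]) / det
print(pi_A, pi_B)             # cause-of-death fractions
```

    Note that no individual death is ever classified: only the population distribution over causes is recovered, which is the quantity of direct policy interest.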

  17. Rejoinder: The 2005 Neyman Lecture: Dynamic Indeterminism in Science

    Brillinger, David R.

  18. Comment: The 2005 Neyman Lecture: Dynamic Indeterminism in Science

    Yang, Grace L.

  19. Comment: The 2005 Neyman Lecture: Dynamic Indeterminism in Science

    Künsch, Hans R.

  20. The 2005 Neyman Lecture: Dynamic Indeterminism in Science

    Brillinger, David R.
    Jerzy Neyman’s life history and some of his contributions to applied statistics are reviewed. In a 1960 article he wrote: “Currently in the period of dynamic indeterminism in science, there is hardly a serious piece of research which, if treated realistically, does not involve operations on stochastic processes. The time has arrived for the theory of stochastic processes to become an item of usual equipment of every applied statistician.” The emphasis in this article is on stochastic processes and on stochastic process data analysis. A number of data sets and corresponding substantive questions are addressed. The data sets concern sardine...
