Showing resources 1 - 10 of 10

  1. Correction to “A Topologically Valid Definition of Depth for Functional Data”

    Nieto-Reyes, Alicia; Battey, Heather

  2. On a General Definition of Depth for Functional Data

    Gijbels, Irène; Nagy, Stanislav
    In this paper, we elaborate on the desirable properties of statistical depths for functional data. Although a formal definition has been put forward in the literature, several ambiguities remain to be resolved and further insights to be gained. Herein, a few interesting connections between the desired properties are found. In particular, it is demonstrated that the conditions needed for some desirable properties to hold are extremely demanding, and virtually impossible to meet for common depths. We establish adaptations of these properties that prove to be still sensible and more easily met by common functional depths.
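
    As a concrete point of reference, the sketch below computes one widely used functional depth, the modified band depth of López-Pintado and Romo, for curves observed on a common grid. It illustrates the kind of depth under discussion; it is not the definition examined in the paper.

```python
import numpy as np

def modified_band_depth(curves):
    """Modified band depth (J = 2) of Lopez-Pintado and Romo.

    curves: (n, T) array of n curves observed on a common grid of T points.
    Each curve's depth is the average, over all pairs of curves, of the
    fraction of the grid where it lies inside the pair's band.
    """
    n, _ = curves.shape
    depths = np.zeros(n)
    for k in range(n):
        total, pairs = 0.0, 0
        for i in range(n):
            for j in range(i + 1, n):
                lo = np.minimum(curves[i], curves[j])
                hi = np.maximum(curves[i], curves[j])
                total += np.mean((lo <= curves[k]) & (curves[k] <= hi))
                pairs += 1
        depths[k] = total / pairs
    return depths

rng = np.random.default_rng(0)
sample = rng.standard_normal((20, 50)).cumsum(axis=1)  # 20 random walks
print(modified_band_depth(sample).round(3))  # central curves score higher
```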

  3. Elo Ratings and the Sports Model: A Neglected Topic in Applied Probability?

    Aldous, David
    In a simple model for sports, the probability that A beats B is a specified function of the difference in their strengths. One might think this would be a staple topic in Applied Probability textbooks (like the Galton–Watson branching process model, for instance), but it is curiously absent. Our first purpose is to point out that the model suggests a wide range of questions, suitable for “undergraduate research” via simulation but also challenging as professional research. Our second, more specific, purpose concerns Elo-type rating algorithms for tracking changing strengths. There has been little foundational research on their accuracy, despite a much-copied “30...
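
    To make the model concrete: the standard Elo scheme takes the logistic win probability with the conventional chess scale of 400 and an update factor K (32 is a common default). The sketch below, with made-up ratings, shows the update tracking a fixed hidden strength gap; it is an illustration, not the paper's algorithm.

```python
import random

def win_prob(r_a, r_b, scale=400.0):
    """Logistic model: probability that A beats B given rating difference."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / scale))

def elo_update(r_a, r_b, outcome_a, k=32.0):
    """One Elo step; outcome_a is 1 if A won, 0 if A lost."""
    delta = k * (outcome_a - win_prob(r_a, r_b))
    return r_a + delta, r_b - delta

random.seed(1)
rating_a, rating_b = 1500.0, 1500.0   # published ratings, updated over time
true_a, true_b = 1600.0, 1500.0       # hidden strengths generating outcomes
for _ in range(1000):
    outcome = 1 if random.random() < win_prob(true_a, true_b) else 0
    rating_a, rating_b = elo_update(rating_a, rating_b, outcome)
# The rating gap fluctuates around the true 100-point strength gap.
print(round(rating_a - rating_b))
```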

  4. Contemporary Frequentist Views of the $2\times2$ Binomial Trial

    Ripamonti, Enrico; Lloyd, Chris; Quatto, Piero
    The $2\times2$ table is the simplest of data structures, yet it is of immense practical importance. It is also just complex enough to provide a theoretical testing ground for general frequentist methods. Even after 70 years of debate, its correct analysis is still not settled. Rather than recount the entire history, our review is motivated by contemporary developments in likelihood and testing theory as well as by computational advances. We will look at both conditional and unconditional tests. Within the conditional framework, we explain the relationship of Fisher’s test with variants such as mid-$p$ and Liebermeister’s test, as well as modern...
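
    As a hedged illustration of the conditional tests mentioned, the sketch below computes Fisher's one-sided exact p-value and the corresponding mid-$p$ value for a hypothetical table, using the hypergeometric distribution of the top-left cell given both margins; the mid-$p$ variant simply halves the probability of the observed table.

```python
from scipy.stats import fisher_exact, hypergeom

# Hypothetical 2x2 table: rows = treatment/control, cols = success/failure.
a, b, c, d = 7, 3, 2, 8
_, p_fisher = fisher_exact([[a, b], [c, d]], alternative="greater")

# Conditional on both margins, the top-left cell is hypergeometric.
M, n, N = a + b + c + d, a + b, a + c
p_exact = hypergeom.sf(a - 1, M, n, N)              # P(X >= a), matches Fisher
p_midp = p_exact - 0.5 * hypergeom.pmf(a, M, n, N)  # halve P(X = a)
print(f"Fisher: {p_fisher:.4f}  exact tail: {p_exact:.4f}  mid-p: {p_midp:.4f}")
```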

  5. The Coordinate-Based Meta-Analysis of Neuroimaging Data

    Samartsidis, Pantelis; Montagna, Silvia; Johnson, Timothy D.; Nichols, Thomas E.
    Neuroimaging meta-analysis is an area of growing interest in statistics. The special characteristics of neuroimaging data render classical meta-analysis methods inapplicable and therefore new methods have been developed. We review existing methodologies, explaining the benefits and drawbacks of each. A demonstration on a real dataset of emotion studies is included. We discuss some still-open problems in the field to highlight the need for future research.

  6. Instrumental Variable Estimation with a Stochastic Monotonicity Assumption

    Small, Dylan S.; Tan, Zhiqiang; Ramsahai, Roland R.; Lorch, Scott A.; Brookhart, M. Alan
    The instrumental variables (IV) method provides a way to estimate the causal effect of a treatment when there are unmeasured confounding variables. The method requires a valid IV, a variable that is independent of the unmeasured confounding variables, is associated with the treatment, but has no effect on the outcome beyond its effect on the treatment. An additional assumption often made is deterministic monotonicity, which says that, for each subject, the level of treatment that the subject would take is a monotonically increasing function of the level of the IV. However, deterministic monotonicity is sometimes not realistic....
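
    A minimal simulation may help fix ideas. The sketch below uses made-up parameters and builds deterministic monotonicity in by construction (the very assumption the paper relaxes); it contrasts the naive treatment-control comparison, which is biased by the unmeasured confounder, with the standard Wald IV estimator.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
u = rng.standard_normal(n)                # unmeasured confounder
z = rng.integers(0, 2, n)                 # binary instrument
# Uptake rises with the instrument and the confounder; the latent index
# is increasing in z for every subject, so deterministic monotonicity holds.
t = (0.5 * z + u + rng.standard_normal(n) > 0.5).astype(float)
y = 2.0 * t + 3.0 * u + rng.standard_normal(n)    # true effect of t is 2

naive = y[t == 1].mean() - y[t == 0].mean()       # biased by u
wald = (y[z == 1].mean() - y[z == 0].mean()) / \
       (t[z == 1].mean() - t[z == 0].mean())      # IV (Wald) estimator
print(f"naive: {naive:.2f}   IV: {wald:.2f}")     # naive is inflated; IV ~ 2
```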

  7. Hierarchical Sparse Modeling: A Choice of Two Group Lasso Formulations

    Yan, Xiaohan; Bien, Jacob
    Demanding sparsity in estimated models has become a routine practice in statistics. In many situations, we wish to require that the sparsity patterns attained honor certain problem-specific constraints. Hierarchical sparse modeling (HSM) refers to situations in which these constraints specify that one set of parameters be set to zero whenever another is set to zero. In recent years, numerous papers have developed convex regularizers for this form of sparsity structure, which arises in many areas of statistics including interaction modeling, time series analysis, and covariance estimation. In this paper, we observe that these methods fall into two frameworks, the group...
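
    The hierarchy in HSM can be made concrete with a toy overlapping group lasso penalty on two nested groups, whose attainable zero patterns force the inner coefficients to zero whenever the leading one is zero. This is a hedged sketch of the general idea, not either of the two formulations the paper compares; it uses the known fact that for nested (tree-structured) groups the proximal operator is the composition of group soft-thresholdings applied from the inner group outward.

```python
import numpy as np

def group_soft_threshold(v, lam):
    """Prox of lam * ||.||_2: shrink the block toward zero, or kill it."""
    norm = np.linalg.norm(v)
    return np.zeros_like(v) if norm <= lam else (1.0 - lam / norm) * v

def prox_nested(beta, lam):
    """Prox of lam * (||beta[1:]||_2 + ||beta||_2) for nested groups:
    apply group soft-thresholding from the inner group outward."""
    beta = beta.copy()
    beta[1:] = group_soft_threshold(beta[1:], lam)
    return group_soft_threshold(beta, lam)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
y = X @ np.array([1.5, 0.0, 0.0]) + 0.1 * rng.standard_normal(200)

beta, step, lam = np.zeros(3), 1e-3, 20.0
for _ in range(5000):                      # proximal gradient descent
    beta = prox_nested(beta - step * X.T @ (X @ beta - y), step * lam)
# beta[1:] is zeroed while beta[0] survives: the hierarchy is honored.
print(beta.round(3))
```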

  8. The General Structure of Evidence Factors in Observational Studies

    Rosenbaum, Paul R.
    The general structure of evidence factors is examined in terms of the knit product of two permutation groups. An observational or nonrandomized study of treatment effects has two evidence factors if it permits two (nearly) independent tests of the null hypothesis of no treatment effect and two (nearly) independent sensitivity analyses for those tests. Either of the two tests may be biased by nonrandom treatment assignment, but certain biases that would invalidate one test would have no impact on the other, so if the two tests concur, then some aspects of biased treatment assignment have been partially addressed. Expressed in...

  9. Spherical Process Models for Global Spatial Statistics

    Jeong, Jaehong; Jun, Mikyoung; Genton, Marc G.
    Statistical models used in geophysical, environmental, and climate science applications must reflect the curvature of the spatial domain in global data. Over the past few decades, statisticians have developed covariance models that capture the spatial and temporal behavior of these global data sets. Though the geodesic distance is the most natural metric for measuring distance on the surface of a sphere, mathematical limitations have, in many applications, compelled statisticians to use the chordal distance instead when computing covariance matrices, which may cause physically unrealistic distortions. Therefore, covariance functions directly defined on a sphere using the geodesic distance are needed....
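
    The two metrics are easy to compare directly. The sketch below (an illustration, not code from the paper) contrasts the geodesic distance $R\theta$ with the chordal distance $2R\sin(\theta/2)$, which cuts through the sphere and increasingly understates surface separation as points approach antipodality; a well-known instance of the mathematical limitations alluded to is that the Matérn covariance with geodesic distance is positive definite on the sphere only for smoothness at most 1/2 (Gneiting, 2013).

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def geodesic_distance(lat1, lon1, lat2, lon2, radius=EARTH_RADIUS_KM):
    """Great-circle distance R * theta along the surface of the sphere."""
    p1, l1, p2, l2 = map(np.radians, (lat1, lon1, lat2, lon2))
    cos_theta = (np.sin(p1) * np.sin(p2)
                 + np.cos(p1) * np.cos(p2) * np.cos(l1 - l2))
    return radius * np.arccos(np.clip(cos_theta, -1.0, 1.0))

def chordal_distance(lat1, lon1, lat2, lon2, radius=EARTH_RADIUS_KM):
    """Straight-line distance 2R * sin(theta / 2) through the sphere."""
    theta = geodesic_distance(lat1, lon1, lat2, lon2, radius) / radius
    return 2.0 * radius * np.sin(theta / 2.0)

# Near-antipodal points: the chord badly understates surface separation.
print(round(geodesic_distance(0, 0, 0, 179)))  # ~19904 km
print(round(chordal_distance(0, 0, 0, 179)))   # ~12742 km
```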

  10. Sufficientness Postulates for Gibbs-Type Priors and Hierarchical Generalizations

    Bacallado, S.; Battiston, M.; Favaro, S.; Trippa, L.
    A fundamental problem in Bayesian nonparametrics consists of selecting a prior distribution by assuming that the corresponding predictive probabilities obey certain properties. An early discussion of such a problem, although in a parametric framework, dates back to the seminal work by English philosopher W. E. Johnson, who introduced a noteworthy characterization for the predictive probabilities of the symmetric Dirichlet prior distribution. This is typically referred to as Johnson’s “sufficientness” postulate. In this paper, we review some nonparametric generalizations of Johnson’s postulate for a class of nonparametric priors known as species sampling models. In particular, we revisit and discuss the “sufficientness”...
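
    Johnson's characterization has a simple closed form worth recording: under a symmetric Dirichlet prior with parameter alpha on k categories, the predictive probability of category j after n observations with count n_j is (n_j + alpha)/(n + k*alpha). The sketch below states this rule alongside the Dirichlet-process predictive (the Blackwell–MacQueen urn), the simplest species sampling model of the kind the reviewed generalizations build on; it is an illustration, not the paper's derivation.

```python
import numpy as np

def dirichlet_predictive(counts, alpha):
    """Symmetric Dirichlet(alpha) predictive: (n_j + alpha) / (n + k*alpha).

    Johnson's sufficientness postulate: the probability that the next
    observation falls in category j depends on the data only through
    n_j and n, which characterizes this rule.
    """
    counts = np.asarray(counts, dtype=float)
    return (counts + alpha) / (counts.sum() + counts.size * alpha)

def dirichlet_process_predictive(counts, theta):
    """Blackwell-MacQueen urn for the Dirichlet process: an observed
    species j recurs with probability n_j / (theta + n), and a brand-new
    species appears with probability theta / (theta + n)."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    return counts / (theta + n), theta / (theta + n)

print(dirichlet_predictive([3, 1, 0], alpha=1.0))       # [0.571 0.286 0.143]
print(dirichlet_process_predictive([3, 1], theta=2.0))  # ([0.5 0.167], 0.333)
```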
