Collection resources
Project Euclid (Hosted at Cornell University Library) (192,979 resources)
Electronic Journal of Statistics
Ma, Shujie; Lian, Heng; Liang, Hua; Carroll, Raymond J.
While popular, single-index models and additive models have potential limitations, a fact that leads us to propose SiAM, a novel hybrid combination of these two models. We first address model identifiability under general assumptions. The result is of independent interest. We then develop an estimation procedure by using splines to approximate unknown functions and establish the asymptotic properties of the resulting estimators. Furthermore, we suggest a two-step procedure for establishing confidence bands for the nonparametric additive functions. This procedure enables us to make global inferences. Numerical experiments indicate that SiAM works well with finite sample sizes, and is especially...
van der Meulen, Frank; Schauer, Moritz
Estimation of parameters of a diffusion based on discrete-time observations poses a difficult problem due to the lack of a closed-form expression for the likelihood. From a Bayesian computational perspective it can be cast as a missing-data problem in which the diffusion bridges between discrete-time observations are missing. The computational problem can then be dealt with using a Markov chain Monte Carlo method known as data augmentation. If unknown parameters appear in the diffusion coefficient, direct implementation of data augmentation results in a Markov chain that is reducible. Furthermore, data augmentation requires efficient sampling of diffusion bridges, which can be difficult, especially...
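As a minimal illustration of the two ingredients in this setting (not the authors' algorithm), the sketch below simulates a diffusion path by the Euler-Maruyama scheme and constructs a Brownian bridge between two fixed endpoints, which is the simplest possible bridge proposal. The drift, diffusion coefficient, and discretization are all our own toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_maruyama(b, sigma, x0, T, n):
    """Simulate dX_t = b(X_t) dt + sigma(X_t) dW_t on [0, T] with n Euler steps."""
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        x[i + 1] = (x[i] + b(x[i]) * dt
                    + sigma(x[i]) * np.sqrt(dt) * rng.standard_normal())
    return x

def brownian_bridge(x_start, x_end, T, n):
    """Brownian bridge pinned at x_start (time 0) and x_end (time T)."""
    dt = T / n
    w = np.concatenate([[0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(n))])
    t = np.linspace(0.0, T, n + 1)
    # subtract the linear drift so the path hits x_end exactly at time T
    return x_start + w - (t / T) * (w[-1] - (x_end - x_start))

path = euler_maruyama(lambda x: -x, lambda x: 1.0, 0.0, 1.0, 200)
bridge = brownian_bridge(path[0], path[-1], 1.0, 200)
```

A data-augmentation sampler would alternate between proposing such bridges between consecutive observations and updating the parameters given the imputed paths.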
Butucea, Cristina; Stepanova, Natalia
We consider the problem of recovery of an unknown multivariate signal $f$ observed in a $d$-dimensional Gaussian white noise model of intensity $\varepsilon $. We assume that $f$ belongs to a class of smooth functions in $L_{2}([0,1]^{d})$ and has an additive sparse structure determined by the parameter $s$, the number of non-zero univariate components contributing to $f$. We are interested in the case when $d=d_{\varepsilon }\to \infty $ as $\varepsilon \to 0$ and the parameter $s$ stays “small” relative to $d$. With these assumptions, the recovery problem in hand becomes that of determining which sparse additive components are non-zero.
¶
Attempting to...
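The component-selection problem has a simple finite-dimensional analogue: observe each of $d$ coordinates with Gaussian noise of level $\varepsilon$ and keep those exceeding the universal threshold $\varepsilon\sqrt{2\log d}$. The sketch below shows only this analogue, not the paper's functional white-noise setting; the dimensions and noise level are arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(1)

d, s, eps = 1000, 5, 0.05
theta = np.zeros(d)
theta[:s] = 1.0                          # s active (non-zero) components
y = theta + eps * rng.standard_normal(d)  # noisy observation of each component

# the universal threshold eps * sqrt(2 log d) bounds the maximum of pure noise
thr = eps * np.sqrt(2 * np.log(d))
selected = np.flatnonzero(np.abs(y) > thr)
```

When the active components are well separated from the threshold, as here, the selection recovers the support with high probability; the interesting regime in the paper is precisely when this separation shrinks as $d\to\infty$.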
Xiong, Shifeng
This paper introduces a local optimization-based approach to test statistical hypotheses and to construct confidence intervals. This approach can be viewed as an extension of bootstrap, and yields asymptotically valid tests and confidence intervals as long as there exist consistent estimators of unknown parameters. We present simple algorithms including a neighborhood bootstrap method to implement the approach. Several examples in which theoretical analysis is not easy are presented to show the effectiveness of the proposed approach.
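The abstract describes the approach as an extension of the bootstrap, but the neighborhood bootstrap itself is not specified there. The sketch below therefore shows only the classical percentile-bootstrap baseline for a mean, with sample size and resample count chosen arbitrarily by us.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.exponential(scale=2.0, size=200)   # sample from a skewed population

# nonparametric bootstrap: resample with replacement, recompute the statistic
B = 2000
boot_means = np.array([rng.choice(x, size=x.size, replace=True).mean()
                       for _ in range(B)])

# 95% percentile confidence interval for the mean
lo, hi = np.percentile(boot_means, [2.5, 97.5])
```

Methods such as the paper's require only consistent estimators of the unknown parameters, which is the same minimal ingredient this baseline relies on.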
Butucea, Cristina; Delmas, Jean-François; Dutfoy, Anne; Fischer, Richard
We study the problem of aggregation of estimators with respect to the Kullback-Leibler divergence for various probabilistic models. Rather than considering a convex combination of the initial estimators $f_{1},\ldots,f_{N}$, our aggregation procedures rely on the convex combination of the logarithms of these functions. The first method is designed for probability density estimation as it gives an aggregate estimator that is also a proper density function, whereas the second method concerns spectral density estimation and has no such mass-conserving feature. We select the aggregation weights based on a penalized maximum likelihood criterion. We give sharp oracle inequalities that hold with high...
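Aggregating on the log scale means forming a geometric, rather than arithmetic, mixture: $f_{\lambda}\propto\exp(\sum_{k}\lambda_{k}\log f_{k})$, renormalized to integrate to one, which is what makes the density-estimation aggregate a proper density. A grid-based sketch with two normal densities and fixed equal weights, our own toy example rather than the paper's penalized maximum-likelihood weight selection:

```python
import numpy as np

def geometric_aggregate(densities, weights, dx):
    """Normalized exponential of a convex combination of log-densities on a grid."""
    logs = np.log(np.stack(densities))           # shape (N, grid)
    g = np.exp(np.tensordot(weights, logs, axes=1))
    return g / (g.sum() * dx)                    # renormalize to a proper density

x = np.linspace(-8.0, 8.0, 4001)
dx = x[1] - x[0]
norm_pdf = lambda m, s: np.exp(-(x - m) ** 2 / (2 * s * s)) / (s * np.sqrt(2 * np.pi))

# geometric mixture of N(-1,1) and N(2,1) with equal weights is N(0.5, 1)
f = geometric_aggregate([norm_pdf(-1.0, 1.0), norm_pdf(2.0, 1.0)],
                        np.array([0.5, 0.5]), dx)
```

The closed-form check in the comment follows by completing the square in the combined exponent, which is a convenient sanity check for any implementation.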
Chen, Fuqi; Mamon, Rogemar; Davison, Matt
The use of an Ornstein-Uhlenbeck (OU) process is ubiquitous in business, economics and finance for capturing various price processes and the evolution of economic indicators exhibiting mean-reverting properties. The times at which structural transitions representing drastic changes in the economic dynamics occur are of particular interest to policy makers, investors and financial product providers. This paper addresses the change-point problem under a generalised OU model and investigates the associated statistical inference. We propose two estimation methods to locate multiple change points and show the asymptotic properties of the estimators. An informational approach is employed in detecting the change points, and the...
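To fix ideas, one can simulate an OU path whose mean-reversion level shifts at a single change point and locate the change by a crude segment-mean comparison. This is an illustration only, not either of the paper's two estimation methods, and all parameter values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_ou(thetas, mus, sigma, dt, n_per_seg, x0=0.0):
    """Euler scheme for dX = theta (mu - X) dt + sigma dW, piecewise parameters."""
    x = [x0]
    for theta, mu in zip(thetas, mus):
        for _ in range(n_per_seg):
            x.append(x[-1] + theta * (mu - x[-1]) * dt
                     + sigma * np.sqrt(dt) * rng.standard_normal())
    return np.array(x)

# mean-reversion level jumps from 0 to 3 halfway: one change point at index 1000
path = simulate_ou([5.0, 5.0], [0.0, 3.0], sigma=0.5, dt=0.01, n_per_seg=1000)

# crude locator: split point maximizing the between-segment mean difference
n = len(path)
scores = [abs(path[:k].mean() - path[k:].mean()) for k in range(50, n - 50)]
k_hat = 50 + int(np.argmax(scores))
```

With a strong mean shift relative to the stationary standard deviation, even this naive criterion lands near the true change point; the paper's contribution is handling multiple change points with asymptotic guarantees.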
Dehling, Herold; Rooch, Aeneas; Taqqu, Murad S.
We investigate the power of the CUSUM test and the Wilcoxon change-point tests for a shift in the mean of a process with long-range dependent noise. We derive analytic formulas for the power of these tests under local alternatives. These results enable us to calculate the asymptotic relative efficiency (ARE) of the CUSUM test and the Wilcoxon change point test. We obtain the surprising result that for Gaussian data, the ARE of these two tests equals $1$, in contrast to the case of i.i.d. noise when the ARE is known to be $3/\pi$.
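The two statistics being compared are easy to state: the CUSUM statistic is $\max_{k}|\sum_{i\le k}X_{i}-\frac{k}{n}\sum_{i\le n}X_{i}|$, while the Wilcoxon-type statistic replaces sums of observations by between-segment rank indicators. A direct, unnormalized implementation of both, applied to a toy mean-shift sample of our choosing (i.i.d. noise here, not the long-range dependent noise the paper studies):

```python
import numpy as np

def cusum_stat(x):
    """CUSUM statistic: max_k | sum_{i<=k} x_i - (k/n) sum_{i<=n} x_i |."""
    n = len(x)
    c = np.cumsum(x)
    k = np.arange(1, n)
    return np.max(np.abs(c[:-1] - (k / n) * c[-1]))

def wilcoxon_stat(x):
    """Wilcoxon-type statistic: max_k | sum_{i<=k, j>k} (1{x_i <= x_j} - 1/2) |."""
    n = len(x)
    best = 0.0
    for k in range(1, n):
        s = (x[:k, None] <= x[None, k:]).sum() - 0.5 * k * (n - k)
        best = max(best, abs(s))
    return best

rng = np.random.default_rng(4)
# mean shift of size 2 halfway through an i.i.d. Gaussian sample
x = np.concatenate([rng.standard_normal(100), 2.0 + rng.standard_normal(100)])
```

Under a genuine shift both statistics are far larger than their no-change fluctuations; the paper's asymptotic relative efficiency compares exactly how their power scales under local alternatives.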
Preinerstorfer, David
We analytically investigate size and power properties of a popular family of procedures for testing linear restrictions on the coefficient vector in a linear regression model with temporally dependent errors. The tests considered are autocorrelation-corrected F-type tests based on prewhitened nonparametric covariance estimators that possibly incorporate a data-dependent bandwidth parameter, e.g., estimators as considered in Andrews and Monahan (1992), Newey and West (1994), or Rho and Shao (2013). For design matrices that are generic in a measure theoretic sense we prove that these tests either suffer from extreme size distortions or from strong power deficiencies. Despite this negative result we...
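For reference, the Newey-West covariance estimator mentioned above downweights sample autocovariances with Bartlett-kernel weights $1-j/(b+1)$ up to a bandwidth $b$. A scalar sketch without prewhitening or a data-dependent bandwidth (both of which the paper's setting includes), on MA(1) errors of our own construction:

```python
import numpy as np

def newey_west_lrv(u, bandwidth):
    """Bartlett-kernel (Newey-West) long-run variance of a scalar series."""
    u = u - u.mean()
    n = len(u)
    lrv = (u @ u) / n                                  # lag-0 autocovariance
    for j in range(1, bandwidth + 1):
        gamma_j = (u[j:] @ u[:-j]) / n                 # lag-j autocovariance
        lrv += 2.0 * (1.0 - j / (bandwidth + 1)) * gamma_j
    return lrv

rng = np.random.default_rng(5)
e = rng.standard_normal(5001)
u = e[1:] + 0.5 * e[:-1]       # MA(1) errors: true long-run variance = 1.5^2 = 2.25
lrv = newey_west_lrv(u, bandwidth=10)
```

The Bartlett weights guarantee a nonnegative estimate; the size and power pathologies established in the paper concern tests built on such estimators, not the estimator in isolation.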
Bahraoui, Tarik; Quessy, Jean-François
A new class of rank statistics is proposed to assess whether the copula of a multivariate population is radially symmetric. The proposed test statistics are weighted $L_{2}$ functional distances between a nonparametric estimator of the characteristic function that one can associate to a copula and its complex conjugate. It will be shown that these statistics behave asymptotically as degenerate V-statistics of order four and that the limit distributions have expressions in terms of weighted sums of independent chi-square random variables. A suitably adapted and asymptotically valid multiplier bootstrap procedure is proposed for the computation of $p$-values. One advantage of the...
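Radial symmetry of a copula $C$ means $C(u,v)=u+v-1+C(1-u,1-v)$. The paper's statistics work with copula characteristic functions; as a simplified stand-in, the sketch below measures the discrepancy in this identity directly on a grid using empirical copulas, comparing a radially symmetric Gaussian sample with an asymmetric Clayton sample. The grid, sample sizes, and the conditional-method Clayton generator are all our choices.

```python
import numpy as np

rng = np.random.default_rng(6)

def pseudo_obs(x):
    """Rank-based pseudo-observations in (0, 1), column by column."""
    n = len(x)
    return (np.argsort(np.argsort(x, axis=0), axis=0) + 1) / (n + 1)

def radial_asymmetry(x, grid=20):
    """Mean squared violation of C(u,v) = u + v - 1 + C(1-u, 1-v) on a grid."""
    u = pseudo_obs(x)
    g = np.linspace(0.05, 0.95, grid)
    total = 0.0
    for a in g:
        for b in g:
            c = np.mean((u[:, 0] <= a) & (u[:, 1] <= b))
            cbar = np.mean((u[:, 0] <= 1 - a) & (u[:, 1] <= 1 - b))
            total += (c - (a + b - 1 + cbar)) ** 2
    return total / grid ** 2

def clayton_sample(n, theta):
    """Clayton copula sample via the conditional distribution method."""
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)
    v = (u ** (-theta) * (w ** (-theta / (1 + theta)) - 1) + 1) ** (-1 / theta)
    return np.column_stack([u, v])

sym_stat = radial_asymmetry(rng.standard_normal((2000, 2)))    # radially symmetric
asym_stat = radial_asymmetry(clayton_sample(2000, theta=2.0))  # lower-tail dependent
```

The Clayton copula concentrates dependence in the lower tail, so the identity fails there, which this crude grid statistic picks up; the paper's characteristic-function distance is a refined version of the same idea with a tractable limit theory.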
Beirlant, Jan; Fraga Alves, Isabel; Reynkens, Tom
In several applications, truncation effects can be observed at the largest data values when analysing tail characteristics of statistical distributions. In some cases truncation effects are forecast through physical models, such as the Gutenberg-Richter relation in geophysics, while in other instances the nature of the measurement process itself may cause under-recovery of large values, for instance due to flooding in river discharge readings. Recently, Beirlant, Fraga Alves and Gomes (2016) discussed tail fitting for truncated Pareto-type distributions. Using examples from earthquake analysis, hydrology and diamond valuation we demonstrate the need for a unified treatment of extreme value analysis for...
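As background for Pareto-type tail fitting, the classical Hill estimator of the extreme value index is built from the $k$ largest order statistics. The truncated setting of Beirlant, Fraga Alves and Gomes modifies this estimator; the sketch below shows only the untruncated baseline, on simulated Pareto data of our choosing.

```python
import numpy as np

def hill_estimator(x, k):
    """Hill estimator of the extreme value index from the k largest observations."""
    xs = np.sort(x)
    top = xs[-k:]                                  # k largest order statistics
    return np.mean(np.log(top)) - np.log(xs[-k - 1])

rng = np.random.default_rng(7)
# classical Pareto on [1, inf) with tail index alpha = 2, so gamma = 1/alpha = 0.5
pareto = rng.pareto(a=2.0, size=10000) + 1.0
gamma_hat = hill_estimator(pareto, k=500)
```

Under truncation the largest observations are capped, which biases this baseline downward; that bias is exactly what motivates the truncated-tail estimators discussed in the abstract.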
Marchand, Éric; Perron, François; Yadegari, Iraj
For a normally distributed $X\sim N(\mu,\sigma^{2})$ and for estimating $\mu$ when restricted to an interval $[-m,m]$ under general loss $F(|d-\mu|)$ with strictly increasing and absolutely continuous $F$, we establish the inadmissibility of the restricted maximum likelihood estimator $\delta_{\hbox{mle}}$ for a large class of $F$’s and provide explicit improvements. In particular, we give conditions on $F$ and $m$ for which the Bayes estimator $\delta_{BU}$ with respect to the boundary uniform prior $\pi(-m)=\pi(m)=1/2$ dominates $\delta_{\hbox{mle}}$. Specific examples include $L^{s}$ loss with $s>1$, as well as reflected normal loss. Connections and implications for predictive density estimation are outlined, and numerical evaluations illustrate the...
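Both estimators in the abstract have simple closed forms in the basic case: the restricted MLE clips $x$ to $[-m,m]$, and under squared-error loss the Bayes estimator for the boundary-uniform (two-point) prior is the posterior mean $m\tanh(mx/\sigma^{2})$, obtained directly from the posterior odds of $\mu=m$ versus $\mu=-m$. A sketch under these standing assumptions:

```python
import numpy as np

def mle_restricted(x, m):
    """Restricted MLE of mu in [-m, m] for X ~ N(mu, sigma^2): clip x to the interval."""
    return np.clip(x, -m, m)

def bayes_boundary_uniform(x, m, sigma=1.0):
    """Posterior mean under the two-point prior pi(-m) = pi(m) = 1/2.

    The mu^2 terms in the likelihood cancel, leaving posterior weights
    proportional to exp(+-m x / sigma^2), hence the tanh form.
    """
    return m * np.tanh(m * x / sigma ** 2)

x = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])
m = 1.0
d_mle = mle_restricted(x, m)
d_bu = bayes_boundary_uniform(x, m)
```

Note how the Bayes estimator shrinks smoothly toward the boundary points $\pm m$ while the restricted MLE clips abruptly; the dominance results in the paper compare exactly these two behaviors under general losses $F(|d-\mu|)$.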