Collection resources

Caltech Authors (147,820 resources)

A repository of works published by Caltech authors.

Group = Applied & Computational Mathematics

Showing resources 1 - 17 of 17

  1. Randomized Single-View Algorithms for Low-Rank Matrix Approximation

    Tropp, Joel A.; Yurtsever, Alp; Udell, Madeleine; Cevher, Volkan
    This paper develops a suite of algorithms for constructing low-rank approximations of an input matrix from a random linear image of the matrix, called a sketch. These methods can preserve structural properties of the input matrix, such as positive-semidefiniteness, and they can produce approximations with a user-specified rank. The algorithms are simple, accurate, numerically stable, and provably correct. Moreover, each method is accompanied by an informative error bound that allows users to select parameters a priori to achieve a given approximation quality. These claims are supported by computer experiments.
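
    To make the sketching idea concrete, here is a minimal NumPy sketch of one single-view scheme in this spirit: draw two Gaussian sketches of A in a single pass, then reconstruct A ≈ Q(ΨQ)†(ΨA). The sketch sizes and oversampling choices below are illustrative assumptions, not the paper's tuned recommendations.

      import numpy as np

      def single_view_lowrank(A, r, oversample=10, rng=None):
          """Sketch of a single-view randomized low-rank approximation."""
          rng = np.random.default_rng(rng)
          m, n = A.shape
          k = r + oversample                    # range-sketch size (assumed)
          l = r + 2 * oversample                # co-range-sketch size (assumed)
          Omega = rng.standard_normal((n, k))   # test matrix for the range
          Psi = rng.standard_normal((l, m))     # test matrix for the co-range
          Y = A @ Omega                         # one pass over A produces Y ...
          W = Psi @ A                           # ... and W; A is never revisited
          Q, _ = np.linalg.qr(Y)                # orthonormal basis for range(Y)
          # least-squares solve of (Psi Q) X = W gives the factor X in A ≈ Q X
          X, *_ = np.linalg.lstsq(Psi @ Q, W, rcond=None)
          return Q, X                           # approximation A ≈ Q @ X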

  2. Tail Bounds for All Eigenvalues of a Sum of Random Matrices

    Gittens, Alex A.; Tropp, Joel A.
    This work introduces the minimax Laplace transform method, a modification of the cumulant-based matrix Laplace transform method developed in [Tro11c] that yields both upper and lower bounds on each eigenvalue of a sum of random self-adjoint matrices. This machinery is used to derive eigenvalue analogs of the classical Chernoff, Bennett, and Bernstein bounds. Two examples demonstrate the efficacy of the minimax Laplace transform. The first concerns the effects of column sparsification on the spectrum of a matrix with orthonormal rows. Here, the behavior of the singular values can be described in terms of coherence-like quantities. The second example addresses the question of relative accuracy...

  3. Error Bounds for Random Matrix Approximation Schemes

    Gittens, A.; Tropp, J. A.
    Randomized matrix sparsification has proven to be a fruitful technique for producing faster algorithms in applications ranging from graph partitioning to semidefinite programming. In the decade or so of research into this technique, the focus has been—with few exceptions—on ensuring the quality of approximation in the spectral and Frobenius norms. For certain graph algorithms, however, the ∞→1 norm may be a more natural measure of performance. This paper addresses the problem of approximating a real matrix A by a sparse random matrix X with respect to several norms. It provides the first results on approximation error in the ∞→1 and ∞→2 norms, and it uses a result...
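
    As a toy illustration of the randomized sparsification template these error bounds address (a minimal sketch, not the specific schemes analyzed in the paper): keep each entry independently with probability p and rescale by 1/p, so the sparse matrix X is unbiased, E[X] = A.

      import numpy as np

      def sparsify(A, p, rng=None):
          """Keep each entry of A with probability p, rescaled by 1/p."""
          rng = np.random.default_rng(rng)
          mask = rng.random(A.shape) < p        # Bernoulli(p) coin per entry
          return np.where(mask, A / p, 0.0)     # unbiased: E[X] = A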

  4. The Masked Sample Covariance Estimator: An Analysis via the Matrix Laplace Transform

    Chen, Richard Y.; Gittens, Alex A.; Tropp, Joel A.
    Covariance estimation becomes challenging in the regime where the number p of variables outstrips the number n of samples available to construct the estimate. One way to circumvent this problem is to assume that the covariance matrix is nearly sparse and to focus on estimating only the significant entries. To analyze this approach, Levina and Vershynin (2011) introduce a formalism called masked covariance estimation, where each entry of the sample covariance estimator is reweighted to reflect an a priori assessment of its importance. This paper provides a new analysis of the masked sample covariance estimator based on the matrix Laplace transform method. The main result applies...
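
    The estimator itself is simple to state: reweight each entry of the sample covariance by a mask. A minimal sketch, assuming mean-zero samples (the banded mask in the example is a hypothetical choice):

      import numpy as np

      def masked_sample_covariance(X, M):
          """Entrywise (Hadamard) product of a mask with the sample covariance.

          X : (n, p) array of n samples of a mean-zero p-variate vector.
          M : (p, p) symmetric mask encoding a priori importance weights.
          """
          n = X.shape[0]
          S = X.T @ X / n        # sample covariance (mean-zero convention)
          return M * S           # masked estimator M ∘ S

      # Example mask: keep only entries within one step of the diagonal.
      p = 5
      M = (np.abs(np.subtract.outer(np.arange(p), np.arange(p))) <= 1).astype(float)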

  5. User-friendly Tail Bounds for Matrix Martingales

    Tropp, Joel A.
    This report presents probability inequalities for sums of adapted sequences of random, self-adjoint matrices. The results frame simple, easily verifiable hypotheses on the summands, and they yield strong conclusions about the large-deviation behavior of the maximum eigenvalue of the sum. The methods also specialize to sums of independent random matrices.

  6. Localized bases for finite dimensional homogenization approximations with non-separated scales and high-contrast

    Owhadi, Houman; Zhang, Lei
    We construct finite-dimensional approximations of solution spaces of divergence form operators with L^∞-coefficients. Our method does not rely on concepts of ergodicity or scale-separation, but on the property that the solution space of these operators is compactly embedded in H^1 if source terms are in the unit ball of L^2 instead of the unit ball of H^−1. Approximation spaces are generated by solving elliptic PDEs on localized sub-domains with source terms corresponding to approximation bases for H^2. The H^1-error estimates show that O(h^−d)-dimensional spaces with basis elements localized to sub-domains of diameter O(h^α ln(1/h)) (with α ∈ [1/2, 1)) result in an O(h^(2−2α)) accuracy...
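
    For reference, the underlying boundary-value problem is the standard scalar divergence-form elliptic equation; the abstract's point is that taking f in L^2 rather than H^−1 yields the compact embedding the localized bases exploit:

      \begin{cases}
        -\operatorname{div}\bigl(a(x)\,\nabla u(x)\bigr) = f(x), & x \in \Omega,\\
        u = 0, & x \in \partial\Omega,
      \end{cases}
      \qquad a \in L^\infty(\Omega), \quad f \in L^2(\Omega).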

  7. Optimal Uncertainty Quantification

    Owhadi, H.; Scovel, C.; Sullivan, T. J.; McKerns, M.; Ortiz, M.
    We propose a rigorous framework for Uncertainty Quantification (UQ) in which the UQ objectives and the assumptions/information set are brought to the forefront. This framework, which we call Optimal Uncertainty Quantification (OUQ), is based on the observation that, given a set of assumptions and information about the problem, there exist optimal bounds on uncertainties: these are obtained as extreme values of well-defined optimization problems corresponding to extremizing probabilities of failure, or of deviations, subject to the constraints imposed by the scenarios compatible with the assumptions and information. In particular, this framework does not implicitly impose inappropriate assumptions, nor does it repudiate relevant information. Although OUQ optimization problems are extremely large,...
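
    Schematically (a hedged paraphrase of the formulation, with g a response function, a a failure threshold, and 𝒜 the set of (g, μ) pairs compatible with the assumptions and information), the optimal upper bound on the probability of failure is the value of the optimization problem

      \mathcal{U}(\mathcal{A}) \;=\; \sup_{(g,\mu) \in \mathcal{A}} \mu\bigl[g(X) \ge a\bigr].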

  8. A Non-adapted Sparse Approximation of PDEs with Stochastic Inputs

    Doostan, Alireza; Owhadi, Houman
    We propose a method for the approximation of solutions of PDEs with stochastic coefficients based on the direct, i.e., non-adapted, sampling of solutions. This sampling can be done by using any legacy code for the deterministic problem as a black box. The method converges in probability (with probabilistic error bounds) as a consequence of sparsity and a concentration of measure phenomenon on the empirical correlation between samples. We show that the method is well suited for truly high-dimensional problems (with slow decay in the spectrum).
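
    One common way to realize such a non-adapted scheme (a sketch consistent with the abstract; the polynomial chaos basis ψ_α and the ℓ1 formulation are assumptions about the details): expand the solution in that basis and recover sparse coefficients from N random solution samples by ℓ1-minimization,

      u(x,\xi) \approx \sum_{\alpha} c_\alpha(x)\,\psi_\alpha(\xi),
      \qquad
      \min_{c} \|c\|_1
      \quad \text{s.t.} \quad
      \sum_{i=1}^{N} \Bigl( \sum_{\alpha} c_\alpha \psi_\alpha(\xi_i) - u(\xi_i) \Bigr)^{2} \le \delta^2 .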

  9. User-Friendly Tail Bounds for Sums of Random Matrices

    Tropp, Joel A.
    This work presents probability inequalities for sums of independent, random, self-adjoint matrices. The results frame simple, easily verifiable hypotheses on the summands, and they yield strong conclusions about the large-deviation behavior of the maximum eigenvalue of the sum. Tail bounds for the norm of a sum of rectangular matrices follow as an immediate corollary, and similar techniques yield information about matrix-valued martingales. In other words, this paper provides noncommutative generalizations of the classical bounds associated with the names Azuma, Bennett, Bernstein, Chernoff, Hoeffding, and McDiarmid. The matrix inequalities promise the same ease of use, diversity of application, and strength of conclusion that have made the scalar inequalities so...
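
    For instance, the matrix Bernstein inequality from this line of work reads, in one standard form: if X_1, ..., X_n are independent, centered, self-adjoint d × d random matrices with ||X_k|| ≤ L almost surely, then

      \mathbb{P}\Bigl\{\lambda_{\max}\Bigl(\textstyle\sum_k X_k\Bigr) \ge t\Bigr\}
      \;\le\; d \cdot \exp\!\Bigl(\frac{-t^2/2}{\sigma^2 + Lt/3}\Bigr),
      \qquad
      \sigma^2 = \Bigl\|\textstyle\sum_k \mathbb{E}\,X_k^2\Bigr\|.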

  10. Finding Structure with Randomness: Stochastic Algorithms for Constructing Approximate Matrix Decompositions

    Halko, N.; Martinsson, P. G.; Tropp, J. A.
    Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. In particular, these techniques offer a route toward principal component analysis (PCA) for petascale data. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of...
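
    The two-stage proto-algorithm at the heart of this survey fits in a few lines of NumPy (the oversampling value here is an illustrative choice):

      import numpy as np

      def randomized_svd(A, r, oversample=10, rng=None):
          """Stage A: capture range(A) with a random sketch; Stage B: small SVD."""
          rng = np.random.default_rng(rng)
          k = r + oversample
          Omega = rng.standard_normal((A.shape[1], k))  # Gaussian test matrix
          Q, _ = np.linalg.qr(A @ Omega)                # orthonormal range basis
          B = Q.T @ A                                   # small k x n matrix
          U_hat, s, Vt = np.linalg.svd(B, full_matrices=False)
          return Q @ U_hat[:, :r], s[:r], Vt[:r]        # rank-r factors of A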

  11. Non-intrusive and structure preserving multiscale integration of stiff ODEs, SDEs and Hamiltonian systems with hidden slow dynamics via flow averaging

    Tao, Molei; Owhadi, Houman; Marsden, Jerrold E.
    We introduce a new class of integrators for stiff ODEs as well as SDEs. An example of a subclass of systems that we treat are ODEs and SDEs that are sums of two terms, one of which has large coefficients. These integrators are (i) Multiscale: they are based on flow averaging and so do not resolve the fast variables but rather employ step-sizes determined by slow variables; (ii) Versatile: the method is based on averaging the flow of the given dynamical system (which may have hidden slow and fast processes) instead of averaging the instantaneous drift of assumed separated slow and fast processes. This bypasses the need for...
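
    A minimal sketch of one flow-averaged macro step in this spirit (the function names and the two-flow splitting are illustrative assumptions, not the paper's precise scheme):

      def flavor_step(x, tau, delta, full_flow, soft_flow):
          """One macro step of a flow-averaging integrator.

          full_flow(x, t): flow of the full stiff system over time t.
          soft_flow(x, t): flow with the large-coefficient term switched off.
          The fast dynamics act only over the short substep tau << delta,
          which averages the flow without identifying the slow variables.
          """
          x = full_flow(x, tau)           # brief burst of the full stiff dynamics
          x = soft_flow(x, delta - tau)   # long step of the non-stiff remainder
          return x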

  12. Flux Norm Approach to Homogenization Problems with Non-separated Scales

    Berlyand, Leonid; Owhadi, Houman
    We consider linear divergence-form scalar elliptic equations and vectorial equations for elasticity with rough (L^∞(Ω), Ω ⊂ ℝ^d) coefficients a(x) that, in particular, model media with non-separated scales and high contrast in material properties. While the homogenization of PDEs with periodic or ergodic coefficients and well-separated scales is now well understood, we consider here the most general case of arbitrary bounded coefficients. For such problems we introduce explicit finite-dimensional approximations of solutions with controlled error estimates, which we refer to as homogenization approximations. In particular, this approach allows one to analyze a given medium directly without introducing the mathematical concept of an ε family of...

  13. Discrete Geometric Structures in Homogenization and Inverse Homogenization with Application to EIT

    Desbrun, Mathieu; Donaldson, Roger D.; Owhadi, Houman
    We introduce a new geometric approach for the homogenization and inverse homogenization of the divergence form elliptic operator with rough conductivity coefficients σ(x) in dimension two. We show that conductivity coefficients are in one-to-one correspondence with divergence-free matrices and convex functions s(x) over the domain Ω. Although homogenization is a non-linear and non-injective operator when applied directly to conductivity coefficients, homogenization becomes a linear interpolation operator over triangulations of Ω when re-expressed using convex functions, and is a volume averaging operator when re-expressed with divergence-free matrices. We explicitly give the transformations which map conductivity coefficients into divergence-free matrices and convex functions, as well as their respective inverses. Using...

  14. Computational Methods for Sparse Solution of Linear Inverse Problems

    Tropp, Joel A.; Wright, Stephen J.
    In sparse approximation problems, the goal is to find an approximate representation of a target signal using a linear combination of a few elementary signals drawn from a fixed collection. This paper surveys the major algorithms that are used for solving sparse approximation problems in practice. Specific attention is paid to computational issues, to the circumstances in which individual methods tend to perform well, and to the theoretical guarantees available. Many fundamental questions in electrical engineering, statistics, and applied mathematics can be posed as sparse approximation problems, which makes the algorithms discussed in this paper versatile tools with a wealth of applications.

  15. Column Subset Selection, Matrix Factorization, and Eigenvalue Optimization

    Tropp, Joel A.
    Given a fixed matrix, the problem of column subset selection requests a column submatrix that has favorable spectral properties. Most research from the algorithms and numerical linear algebra communities focuses on a variant called rank-revealing QR, which seeks a well-conditioned collection of columns that spans the (numerical) range of the matrix. The functional analysis literature contains another strand of work on column selection whose algorithmic implications have not been explored. In particular, a celebrated result of Bourgain and Tzafriri demonstrates that each matrix with normalized columns contains a large column submatrix that is exceptionally well conditioned. Unfortunately, standard proofs of this result cannot be regarded...
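
    The algorithmic workhorse mentioned here, QR with column pivoting, is available off the shelf; a minimal SciPy sketch of rank-revealing column selection:

      import numpy as np
      from scipy.linalg import qr

      def select_columns(A, k):
          """Pick k columns of A via QR with column pivoting (RRQR heuristic)."""
          _, _, piv = qr(A, pivoting=True)   # piv orders columns by importance
          return piv[:k]                     # indices of the selected columns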

  16. CoSaMP: Iterative Signal Recovery from Incomplete and Inaccurate Samples

    Needell, D.; Tropp, J. A.
    Compressive sampling offers a new paradigm for acquiring signals that are compressible with respect to an orthonormal basis. The major algorithmic challenge in compressive sampling is to approximate a compressible signal from noisy samples. This paper describes a new iterative recovery algorithm called CoSaMP that delivers the same guarantees as the best optimization-based approaches. Moreover, this algorithm offers rigorous bounds on computational cost and storage. It is likely to be extremely efficient for practical problems because it requires only matrix-vector multiplies with the sampling matrix. For compressible signals, the running time is just O(N log^2 N), where N is the length of the signal.
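
    A hedged sketch of the CoSaMP iteration itself (simplified: a fixed iteration count instead of a stopping rule, and dense least squares instead of the paper's fast solvers):

      import numpy as np

      def cosamp(Phi, u, s, iters=20):
          """Recover an s-sparse estimate from samples u = Phi @ signal."""
          N = Phi.shape[1]
          a = np.zeros(N)                              # current signal estimate
          v = u.copy()                                 # sample residual
          for _ in range(iters):
              y = Phi.T @ v                            # proxy for residual signal
              Omega = np.argsort(np.abs(y))[-2 * s:]   # 2s largest components
              T = np.union1d(Omega, np.flatnonzero(a)) # merge with current support
              b, *_ = np.linalg.lstsq(Phi[:, T], u, rcond=None)
              a[:] = 0.0
              keep = np.argsort(np.abs(b))[-s:]        # prune to the best s entries
              a[T[keep]] = b[keep]
              v = u - Phi @ a                          # update the sample residual
          return a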

  17. Signal Recovery from Random Measurements Via Orthogonal Matching Pursuit: The Gaussian Case

    Tropp, Joel A.; Gilbert, Anna C.
    This report demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m^2) measurements. The new results for OMP are comparable with recent results for another approach called Basis Pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.
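
    OMP is short enough to sketch in full; a minimal NumPy version (no stopping rule beyond m iterations):

      import numpy as np

      def omp(Phi, u, m):
          """Greedy recovery: pick the column most correlated with the residual,
          then re-solve least squares on the enlarged support."""
          support = []
          residual = u.copy()
          for _ in range(m):
              j = int(np.argmax(np.abs(Phi.T @ residual)))  # best new column
              support.append(j)
              x, *_ = np.linalg.lstsq(Phi[:, support], u, rcond=None)
              residual = u - Phi[:, support] @ x            # orthogonal residual
          est = np.zeros(Phi.shape[1])
          est[support] = x
          return est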
