Essay: Numerical Inverting of Matrices of High Order (with H. H. Goldstine)
Context and Purpose
The essay addresses the practical problem of computing inverses and solving linear systems on the first electronic digital machines, where arithmetic was performed with a fixed finite precision. It examines how rounding errors arising from finite-word arithmetic interact with algorithmic structure, and what that interaction implies for the reliability of numerical results. Practical concerns about machine cost, operation counts, and error amplification motivated an analysis that bridges algorithmic design and numerical behavior.
Main Contributions
A central contribution is a careful study of Gaussian elimination as an inversion technique, with attention to how round-off propagates through the elimination and back-substitution steps. The paper quantifies the tendency of matrix elements to grow during elimination and demonstrates that naive elimination, applied without safeguards, can amplify small rounding errors into large errors in the computed result. It also formulates measures of the sensitivity of a linear system's solution to perturbations in the data, ideas that anticipate the later, now-standard notion of a condition number relating input errors to solution errors.
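The sensitivity idea anticipated here can be illustrated with the modern condition number, terminology and tooling that postdate the essay. A minimal NumPy sketch, using an invented near-singular matrix, shows a tiny perturbation in the data producing a far larger relative change in the solution:

```python
import numpy as np

# Illustrative example (not from the essay): a nearly singular 2x2 system.
A = np.array([[1.0, 0.99],
              [0.99, 0.98]])          # determinant is -0.0001, so ill-conditioned
b = np.array([1.99, 1.97])           # exact solution is [1, 1]

kappa = np.linalg.cond(A)            # 2-norm condition number ||A|| * ||A^-1||

x = np.linalg.solve(A, b)
b_pert = b + 1e-6 * np.array([1.0, -1.0])   # tiny perturbation of the data
x_pert = np.linalg.solve(A, b_pert)

rel_in = np.linalg.norm(b_pert - b) / np.linalg.norm(b)
rel_out = np.linalg.norm(x_pert - x) / np.linalg.norm(x)

print(f"condition number     : {kappa:.1e}")
print(f"relative input error : {rel_in:.1e}")
print(f"relative output error: {rel_out:.1e}")  # far larger than the input error
```

The condition number bounds the amplification: the relative output error can exceed the relative input error by a factor of up to kappa, which is exactly the input-to-solution error relationship the essay's sensitivity measures anticipate.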
Analytical Approach
The approach blends deterministic algebraic bounds with probabilistic reasoning about rounding errors. Rounding is modeled in a way that treats many small round-off contributions as effectively random, permitting estimates of typical error magnitudes rather than only worst-case bounds. This statistical perspective yields expectations about how error accumulates in practice and explains why certain algorithms perform well on real machines despite theoretically possible catastrophic behaviors. The analysis also isolates structural factors of a matrix, such as the size relationships among its elements, that control error amplification.
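The statistical perspective can be caricatured with a small Monte Carlo experiment. This is not the essay's actual derivation; it only assumes, in the spirit of the statistical model, that individual round-off contributions behave like independent random variables, so n of them typically accumulate like sqrt(n) rather than the worst-case n:

```python
import numpy as np

rng = np.random.default_rng(0)
u = 1e-6          # stand-in for the unit round-off of a short machine word
n = 10_000        # number of accumulated operations
trials = 2_000

# Model each operation as contributing an independent error uniform in [-u, u].
errors = rng.uniform(-u, u, size=(trials, n)).sum(axis=1)

typical = np.sqrt(np.mean(errors**2))   # root-mean-square accumulated error
worst_case = n * u                      # deterministic worst-case bound

print(f"RMS accumulated error : {typical:.2e}")
print(f"worst-case bound (n*u): {worst_case:.2e}")
# The RMS error grows like u * sqrt(n/3), orders of magnitude below n * u.
```

This gap between typical and worst-case accumulation is precisely why the statistical viewpoint explains good behavior on real machines that deterministic bounds alone would not predict.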
Practical Recommendations
The authors emphasize algorithmic choices that reduce the risk of error blow-up on finite-precision hardware. Pivoting strategies receive particular attention: selecting large pivots mitigates element growth and reduces amplification of round-off. Monitoring intermediate magnitudes and preferring stable elimination orders are identified as practical safeguards. The essay links machine precision, algorithmic growth factors, and matrix sensitivity to give practitioners a way to predict achievable accuracy and to decide when a problem is numerically ill-conditioned versus when algorithmic adjustments can restore reliable results.
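The effect of pivot selection can be seen in a toy example. The sketch below is illustrative, not the authors' procedure: with a tiny element in the pivot position, elimination without row interchange loses the answer entirely, while selecting the largest available pivot recovers it:

```python
import numpy as np

def solve_gauss(A, b, pivot=True):
    """Gaussian elimination with optional partial pivoting (illustrative sketch)."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        if pivot:
            # Select the largest available pivot to limit element growth.
            p = k + np.argmax(np.abs(A[k:, k]))
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back-substitution.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# A tiny pivot in the (0, 0) position makes unpivoted elimination unstable.
A = np.array([[1e-17, 1.0],
              [1.0,   1.0]])
b = np.array([1.0, 2.0])              # true solution is very close to [1, 1]

x_bad = solve_gauss(A, b, pivot=False)
x_good = solve_gauss(A, b, pivot=True)
print("without pivoting:", x_bad)     # badly wrong in the first component
print("with pivoting   :", x_good)    # close to [1, 1]
```

Without the interchange, the multiplier 1/1e-17 inflates the remaining elements until the original entries are swamped by round-off, which is exactly the element-growth mechanism the large-pivot recommendation is meant to suppress.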
Implications and Legacy
The analysis reframed numerical linear algebra as a discipline that must balance algebraic operations, machine architecture, and probabilistic error behavior. By tying concrete computational practices to quantitative error estimates, the work laid groundwork for later formalizations of numerical stability and backward error analysis. Subsequent developments in the field, including refined bounds, algorithmic innovations, and standardized stability concepts, built on the themes and techniques introduced here. The essay remains a landmark in showing how theoretical consideration of rounding and conditioning directly informs the design and assessment of numerical algorithms for high-order matrices on finite-precision computers.
Study of algorithms for matrix inversion, rounding errors, and numerical stability, establishing principles in numerical linear algebra and highlighting implications for practical computation on early electronic computers.
- Publication Year: 1947
- Type: Essay
- Genre: Mathematics, Numerical analysis, Computer Science
- Language: en
Author: John von Neumann
John von Neumann was a pioneering mathematician who shaped quantum mechanics, game theory, and modern computer architecture.
More about John von Neumann
- Occupation: Mathematician
- From: Hungary (emigrated to the USA)
- Other works:
- Mathematical Foundations of Quantum Mechanics (1932 Book)
- On Rings of Operators (1936 Essay)
- Theory of Games and Economic Behavior (1944 Book)
- First Draft of a Report on the EDVAC (1945 Non-fiction)
- Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components (1956 Essay)
- The Computer and the Brain (1958 Book)
- Theory of Self‑Reproducing Automata (1966 Collection)