Numerical Methods - Computational Algorithms
Table of Contents
- Root Finding
- Numerical Integration
- Numerical Differentiation
- Interpolation
- Numerical Linear Algebra
- Initial Value Problems for ODEs
- Finite Difference Methods
- Finite Element Method
Root Finding
Bisection Method
Assumption: $f$ continuous on $[a, b]$ with $f(a)f(b) < 0$
Algorithm:
- Set $c = (a + b)/2$
- If $f(c) = 0$ (or $|b - a| < \mathrm{tol}$): done
- If $f(a)f(c) < 0$: set $b = c$, else set $a = c$
- Repeat
Convergence: Guaranteed, linear rate (error reduces by factor 1/2 each step)
Error bound after $n$ steps: $|c_n - x^*| \le \dfrac{b - a}{2^n}$
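The bisection loop above can be sketched in Python (the function name, tolerance, and test problem are illustrative, not a library API):

```python
def bisect(f, a, b, tol=1e-10, max_iter=200):
    """Bisection: assumes f continuous on [a, b] with f(a)*f(b) < 0."""
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2
        fc = f(c)
        if fc == 0 or (b - a) / 2 < tol:
            return c
        if fa * fc < 0:
            b = c          # root lies in [a, c]
        else:
            a, fa = c, fc  # root lies in [c, b]
    return (a + b) / 2

# Root of x^2 - 2 on [1, 2] is sqrt(2)
root = bisect(lambda x: x * x - 2, 1.0, 2.0)
```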
Fixed Point Iteration
Find fixed point of $g$: solve $x^* = g(x^*)$
Algorithm: $x_{n+1} = g(x_n)$
Convergence: Linear if $|g'(x)| \le k < 1$ near the fixed point
Rate: If $g'(x^*) = 0$ (higher order), convergence is quadratic.
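A minimal sketch of the iteration (names and tolerances are illustrative); the classic example $x = \cos x$ converges because $|g'| = |\sin x| < 1$ near the fixed point:

```python
import math

def fixed_point(g, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{n+1} = g(x_n) until successive iterates agree."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

# Fixed point of cos(x), near 0.739
xstar = fixed_point(math.cos, 1.0)
```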
Newton’s Method
For $f(x) = 0$: $x_{n+1} = x_n - \dfrac{f(x_n)}{f'(x_n)}$
Convergence: Quadratic if $f'(x^*) \ne 0$ and $x_0$ is sufficiently close to $x^*$
Order: $p = 2$
Geometric: Tangent line intersection with x-axis
Issue: May diverge or oscillate for poor initial guess
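A sketch of the update rule (stopping on small $|f(x_n)|$; names and tolerances are illustrative):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton's method: x_{n+1} = x_n - f(x_n) / f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x -= fx / df(x)
    return x

# sqrt(2) as the root of f(x) = x^2 - 2, with f'(x) = 2x
r = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```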
Secant Method
Approximates the derivative by a difference quotient: $x_{n+1} = x_n - f(x_n)\,\dfrac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})}$
Convergence: Superlinear, order $p = \frac{1 + \sqrt{5}}{2} \approx 1.618$
Advantage: No derivative needed
Disadvantage: Two initial guesses needed
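The same test problem as Newton's method, but without a derivative (a sketch; names and tolerances are illustrative):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant method: Newton with f' replaced by a difference quotient."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if abs(f1) < tol or f1 == f0:
            return x1
        # secant update uses the two most recent iterates
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        f0, f1 = f1, f(x1)
    return x1

r = secant(lambda x: x * x - 2, 1.0, 2.0)
```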
Convergence Criteria
Absolute error: $|x_{n+1} - x_n| < \varepsilon$
Relative error: $\dfrac{|x_{n+1} - x_n|}{|x_{n+1}|} < \varepsilon$
Function value: $|f(x_n)| < \varepsilon$
Multi-dimensional Newton
For $\mathbf{F}(\mathbf{x}) = \mathbf{0}$ (system): $\mathbf{x}_{n+1} = \mathbf{x}_n - J(\mathbf{x}_n)^{-1}\mathbf{F}(\mathbf{x}_n)$
where $J_{ij} = \partial F_i / \partial x_j$ is the Jacobian matrix.
Requires: Solving the linear system $J(\mathbf{x}_n)\,\Delta\mathbf{x} = -\mathbf{F}(\mathbf{x}_n)$ at each iteration
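A minimal sketch for a $2 \times 2$ system, solving the Jacobian system by Cramer's rule (the function names and the circle/line test problem are illustrative):

```python
def newton2d(F, J, x, tol=1e-12, max_iter=50):
    """Newton for a 2x2 system: solve J(x) dx = -F(x), then x += dx."""
    for _ in range(max_iter):
        f1, f2 = F(x)
        if abs(f1) + abs(f2) < tol:
            return x
        (a, b), (c, d) = J(x)
        det = a * d - b * c            # 2x2 solve via Cramer's rule
        dx1 = (-f1 * d + f2 * b) / det
        dx2 = (f1 * c - f2 * a) / det
        x = (x[0] + dx1, x[1] + dx2)
    return x

# Intersection of the circle x^2 + y^2 = 4 with the line y = x
F = lambda p: (p[0] ** 2 + p[1] ** 2 - 4, p[1] - p[0])
J = lambda p: ((2 * p[0], 2 * p[1]), (-1.0, 1.0))
sol = newton2d(F, J, (1.0, 2.0))   # converges to (sqrt(2), sqrt(2))
```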
Numerical Integration
Newton-Cotes Formulas
Closed formulas: Use the endpoints $a$ and $b$ as nodes (open formulas use only interior nodes)
Trapezoidal Rule
Composite: $\displaystyle\int_a^b f(x)\,dx \approx \frac{h}{2}\left[f(x_0) + 2f(x_1) + \cdots + 2f(x_{n-1}) + f(x_n)\right]$
where $h = (b - a)/n$, $x_i = a + ih$
Error: $-\dfrac{(b - a)h^2}{12} f''(\xi) = O(h^2)$
Simpson’s Rule
Composite ($n$ even): $\displaystyle\int_a^b f(x)\,dx \approx \frac{h}{3}\left[f(x_0) + 4f(x_1) + 2f(x_2) + \cdots + 4f(x_{n-1}) + f(x_n)\right]$
Error: $-\dfrac{(b - a)h^4}{180} f^{(4)}(\xi) = O(h^4)$
Requires: Even number of subintervals
Adaptive Integration
Adapt subinterval size based on local error estimates.
Romberg Integration: Richardson extrapolation on trapezoidal rule
Gauss Quadrature: Best $n$-node formula; nodes and weights come from orthogonal polynomials, exact for polynomials of degree $\le 2n - 1$
Gauss-Legendre: Nodes are roots of Legendre polynomials
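As an illustration, the two-point Gauss-Legendre rule (nodes $\pm 1/\sqrt{3}$, weights $1$ on $[-1, 1]$) mapped to a general interval; it is exact for cubics despite using only two function evaluations:

```python
import math

def gauss_legendre_2pt(f, a, b):
    """Two-point Gauss-Legendre rule mapped to [a, b]; exact for cubics."""
    mid, half = (a + b) / 2, (b - a) / 2
    t = 1 / math.sqrt(3)  # roots of the Legendre polynomial P_2
    return half * (f(mid - half * t) + f(mid + half * t))

# Exact for x^3 on [0, 1]: the integral is 1/4
val = gauss_legendre_2pt(lambda x: x ** 3, 0.0, 1.0)
```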
Numerical Differentiation
Forward Difference
$f'(x) \approx \dfrac{f(x + h) - f(x)}{h}$
Error: $O(h)$
Backward Difference
$f'(x) \approx \dfrac{f(x) - f(x - h)}{h}$
Error: $O(h)$
Central Difference
$f'(x) \approx \dfrac{f(x + h) - f(x - h)}{2h}$
Error: $O(h^2)$
Second Derivative
$f''(x) \approx \dfrac{f(x + h) - 2f(x) + f(x - h)}{h^2}$
Error: $O(h^2)$
Sensitivity to Roundoff
Numerical derivatives amplify rounding errors: subtracting nearly equal function values loses significant digits.
Total error (forward difference): roughly $\dfrac{h}{2}|f''| + \dfrac{2\epsilon_{\text{mach}}}{h}$ (truncation plus roundoff)
Optimal $h$: Balance truncation and roundoff errors; roughly $h \sim \sqrt{\epsilon_{\text{mach}}}$ for forward and $h \sim \epsilon_{\text{mach}}^{1/3}$ for central differences.
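A sketch of the central difference with a step near the roundoff/truncation balance point (the default $h$ is an illustrative choice, not a universal constant):

```python
import math

def central_diff(f, x, h=1e-5):
    """Central difference: O(h^2) truncation error, O(eps/h) roundoff."""
    return (f(x + h) - f(x - h)) / (2 * h)

# h near eps_mach**(1/3) ~ 1e-5 roughly balances the two error sources
d = central_diff(math.sin, 0.5)  # should approximate cos(0.5)
```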
Interpolation
Polynomial Interpolation
Given $(x_i, y_i)$, $i = 0, \dots, n$: Find polynomial $p$ with $p(x_i) = y_i$
Existence: Unique polynomial of degree $\le n$ through $n + 1$ points with distinct $x_i$
Lagrange Interpolation
$p(x) = \displaystyle\sum_{i=0}^{n} y_i\, L_i(x)$, where $L_i(x) = \displaystyle\prod_{j \ne i} \frac{x - x_j}{x_i - x_j}$
Properties:
- $L_i(x_j) = \delta_{ij}$ ($1$ for $i = j$, $0$ otherwise)
- $\{L_0, \dots, L_n\}$ is a basis for polynomials of degree $\le n$
Newton Divided Differences
Divided differences: $f[x_i] = f(x_i)$, $\quad f[x_i, \dots, x_{i+k}] = \dfrac{f[x_{i+1}, \dots, x_{i+k}] - f[x_i, \dots, x_{i+k-1}]}{x_{i+k} - x_i}$
Newton form: $p(x) = f[x_0] + f[x_0, x_1](x - x_0) + \cdots + f[x_0, \dots, x_n](x - x_0)\cdots(x - x_{n-1})$
Advantage: Easy to add points (compute one more divided difference recursively)
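The divided-difference table and nested (Horner-style) evaluation of the Newton form can be sketched as (function names are illustrative):

```python
def divided_differences(xs, ys):
    """Return Newton coefficients f[x0], f[x0,x1], ..., f[x0,...,xn]."""
    coef = list(ys)
    n = len(xs)
    for k in range(1, n):
        # update in place, right to left, so earlier columns survive
        for i in range(n - 1, k - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - k])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate the Newton form with Horner-like nesting."""
    p = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        p = p * (x - xs[i]) + coef[i]
    return p

# Interpolate y = x^2 at x = 0, 1, 3; evaluate at x = 2 (exact value 4)
xs, ys = [0.0, 1.0, 3.0], [0.0, 1.0, 9.0]
c = divided_differences(xs, ys)
val = newton_eval(xs, c, 2.0)
```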
Hermite Interpolation
Interpolate function and derivative values
Given $(x_i, y_i, y_i')$, $i = 0, \dots, n$: find $p$ of degree $\le 2n + 1$ with $p(x_i) = y_i$ and $p'(x_i) = y_i'$
Splines
Cubic spline: Piecewise cubic with continuous 1st and 2nd derivatives
Natural spline: $S''(x_0) = S''(x_n) = 0$ at endpoints
Clamped spline: $S'(x_0)$, $S'(x_n)$ prescribed at endpoints
Setup: Tridiagonal system of equations for the cubic coefficients
Numerical Linear Algebra
Gaussian Elimination with Pivoting
Partial pivoting: Swap rows so the entry of largest magnitude in the current column becomes the pivot
Complete pivoting: Swap both rows and columns
Gauss-Jordan: Continue elimination to reduced row-echelon form
LU Decomposition
$PA = LU$ where $P$ permutation, $L$ lower, $U$ upper triangular
Solve: $Ly = Pb$, then $Ux = y$
Advantage: Factorization computed once ($O(n^3)$); each additional right-hand side costs only $O(n^2)$
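A compact Doolittle sketch without pivoting (an assumption: the matrix has nonzero pivots, so $P = I$; a production solver would pivot). The factorization is reused for two right-hand sides:

```python
def lu_decompose(A):
    """Doolittle LU, no pivoting (assumes nonzero pivots)."""
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for k in range(n):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return L, U

def lu_solve(L, U, b):
    """Forward substitution Ly = b, then back substitution Ux = y."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
L, U = lu_decompose(A)           # factor once
x1 = lu_solve(L, U, [1.0, 2.0])  # reuse for multiple right-hand sides
x2 = lu_solve(L, U, [5.0, 4.0])
```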
Cholesky Decomposition
For symmetric positive definite $A$: $A = LL^T$ ($L$ lower triangular)
About twice as efficient as LU ($\sim n^3/3$ vs. $\sim 2n^3/3$ operations)
QR Decomposition
$A = QR$ where $Q$ orthogonal, $R$ upper triangular
Gram-Schmidt: Compute column by column
Householder reflections: More numerically stable
Eigenvalue Algorithms
Power method: $x_{k+1} = \dfrac{A x_k}{\|A x_k\|}$ for the dominant eigenvalue
Converges to the dominant eigenvector; eigenvalue from the Rayleigh quotient $\lambda \approx \dfrac{x^T A x}{x^T x}$
Inverse iteration: For the eigenvalue nearest a shift $\mu$
Solve $(A - \mu I)\,x_{k+1} = x_k$, then normalize
QR algorithm: Iterative method for all eigenvalues
Shifts accelerate convergence
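The power method is simple enough to sketch directly (pure-Python matrix-vector products; the iteration count and test matrix are illustrative):

```python
def power_method(A, x0, iters=200):
    """Power iteration: x_{k+1} = A x_k / ||A x_k||, then Rayleigh quotient."""
    x = x0[:]
    for _ in range(iters):
        y = [sum(a * xi for a, xi in zip(row, x)) for row in A]
        norm = max(abs(v) for v in y)  # normalize (infinity norm)
        x = [v / norm for v in y]
    Ax = [sum(a * xi for a, xi in zip(row, x)) for row in A]
    rayleigh = sum(u * v for u, v in zip(x, Ax)) / sum(v * v for v in x)
    return rayleigh, x

# Eigenvalues of [[2, 1], [1, 2]] are 3 and 1; dominant is 3
lam, vec = power_method([[2.0, 1.0], [1.0, 2.0]], [1.0, 0.0])
```

Convergence is geometric with ratio $|\lambda_2/\lambda_1|$ (here $1/3$), so a few hundred iterations are far more than needed.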
Condition Number
$\kappa(A) = \|A\|\,\|A^{-1}\|$ indicates sensitivity to errors in linear systems
Large $\kappa(A)$ $\Rightarrow$ nearly singular; the relative error in $x$ can be about $\kappa(A)$ times the relative error in the data
Initial Value Problems for ODEs
Problem
$y' = f(t, y), \quad y(t_0) = y_0$
Euler’s Method
$y_{n+1} = y_n + h\, f(t_n, y_n)$, where $h$ = step size
Error:
Local error: $O(h^2)$ per step
Global error: $O(h)$ over the entire interval
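A sketch of the Euler update on the standard test problem $y' = y$, $y(0) = 1$ (names and step size are illustrative):

```python
import math

def euler(f, t0, y0, h, n):
    """Explicit Euler: y_{k+1} = y_k + h * f(t_k, y_k), n steps."""
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

# y' = y, y(0) = 1  =>  y(1) = e; global error is O(h)
approx = euler(lambda t, y: y, 0.0, 1.0, 0.001, 1000)
```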
Runge-Kutta Methods
RK4 (Classical):
$k_1 = f(t_n, y_n)$
$k_2 = f(t_n + h/2,\; y_n + hk_1/2)$
$k_3 = f(t_n + h/2,\; y_n + hk_2/2)$
$k_4 = f(t_n + h,\; y_n + hk_3)$
$y_{n+1} = y_n + \dfrac{h}{6}(k_1 + 2k_2 + 2k_3 + k_4)$
Error: Local $O(h^5)$, global $O(h^4)$
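One classical RK4 step translates directly into code; on $y' = y$ it reaches roughly six-digit accuracy with only ten steps (step size and names are illustrative):

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Integrate y' = y from t = 0 to t = 1 with h = 0.1
t, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y = rk4_step(lambda t, y: y, t, y, h)
    t += h
```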
Multistep Methods
Use previous values: $y_{n+1} = y_n + h \displaystyle\sum_{j} \beta_j f(t_{n-j}, y_{n-j})$
Adams-Bashforth: Explicit (uses only already-computed values $f_n, f_{n-1}, \dots$)
Adams-Moulton: Implicit (includes $f_{n+1}$, requires a solve at each step)
Advantage: More efficient per step
Disadvantage: Need starting values (e.g., RK4)
Stability
Stability region: $\{\,h\lambda \in \mathbb{C} : \text{iterates of the method applied to } y' = \lambda y \text{ stay bounded}\,\}$
A-stable: Stability region includes the entire left half-plane $\mathrm{Re}(h\lambda) < 0$
Implicit methods often more stable (larger stability regions)
Adaptive Step Size
Estimate local error: Compare two methods
Adjust : Larger if error small, smaller if large
Finite Difference Methods
Approximating Derivatives
Second derivative: $u''(x_i) \approx \dfrac{u_{i+1} - 2u_i + u_{i-1}}{h^2}$
Boundary Value Problems
Example: $-u''(x) = f(x)$, $u(0) = u(1) = 0$
Discretize: $x_i = ih$ ($h = 1/(n+1)$, $i = 1, \dots, n$)
Discrete equations: $\dfrac{-u_{i-1} + 2u_i - u_{i+1}}{h^2} = f(x_i)$
Linear system: $A\mathbf{u} = \mathbf{f}$
Tridiagonal matrix (banded)
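A sketch of the whole pipeline for the model problem, using the Thomas algorithm ($O(n)$ solve for tridiagonal systems). The test problem $f = \pi^2 \sin(\pi x)$ has exact solution $u = \sin(\pi x)$:

```python
import math

def solve_bvp(f, n):
    """Solve -u'' = f on (0, 1), u(0) = u(1) = 0, with the three-point
    stencil; the tridiagonal system is solved by the Thomas algorithm."""
    h = 1.0 / (n + 1)
    xs = [(i + 1) * h for i in range(n)]
    a = [-1.0] * n                    # sub-diagonal
    b = [2.0] * n                     # main diagonal
    c = [-1.0] * n                    # super-diagonal
    d = [h * h * f(x) for x in xs]    # right-hand side, scaled by h^2
    for i in range(1, n):             # forward elimination sweep
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    u = [0.0] * n                     # back substitution
    u[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return xs, u

xs, u = solve_bvp(lambda x: math.pi ** 2 * math.sin(math.pi * x), 99)
```

With $h = 0.01$ the maximum nodal error is $O(h^2)$, on the order of $10^{-3}$.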
Two-Point BVP
Eigenvalue problem: $-u'' = \lambda u$, $u(0) = u(1) = 0$
Eigenvalues: $\lambda_k = (k\pi)^2$, eigenfunctions $u_k = \sin(k\pi x)$, $k = 1, 2, \dots$
Use QZ algorithm for discrete problem
2D Problems
Laplace equation: $u_{xx} + u_{yy} = 0$
Discretize: grid $(x_i, y_j) = (ih, jh)$
Five-point stencil: $\dfrac{u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1} - 4u_{i,j}}{h^2} = 0$
Block tridiagonal system
Finite Element Method
Weak Formulation
Instead of solving strong form, solve weak form:
Find $u \in V$ such that $a(u, v) = \ell(v)$
for all test functions $v \in V$ (e.g., for $-u'' = f$: $\int u'v'\,dx = \int fv\,dx$)
Galerkin Method
Approximate as: $u_h = \displaystyle\sum_{j=1}^{N} c_j \phi_j$
Choose test functions $v$ from a finite-dimensional space
Discretize: $a(u_h, v) = \ell(v)$ for all $v$ in that space
Galerkin: Choose $v = \phi_i$ (same space as the trial functions)
Linear system: $K\mathbf{c} = \mathbf{F}$
where $K_{ij} = a(\phi_j, \phi_i)$, $F_i = \ell(\phi_i)$
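In 1D with hat functions on a uniform mesh, the Galerkin system for the model problem $-u'' = f$, $u(0) = u(1) = 0$ can be assembled and solved in a few lines (a sketch; the load integral $\int f\phi_i\,dx$ is approximated by $h\,f(x_i)$):

```python
import math

def fem_1d(f, n):
    """Piecewise-linear Galerkin FEM for -u'' = f, u(0) = u(1) = 0,
    on a uniform mesh with n interior nodes (hat-function basis)."""
    h = 1.0 / (n + 1)
    xs = [(i + 1) * h for i in range(n)]
    # Stiffness K_ij = integral of phi_i' phi_j'  ->  (1/h) tridiag(-1, 2, -1)
    diag = [2.0 / h] * n
    off = -1.0 / h
    # Load F_i = integral of f phi_i, approximated by h * f(x_i)
    F = [h * f(x) for x in xs]
    # Thomas algorithm for the constant tridiagonal system
    for i in range(1, n):
        m = off / diag[i - 1]
        diag[i] -= m * off
        F[i] -= m * F[i - 1]
    c = [0.0] * n
    c[-1] = F[-1] / diag[-1]
    for i in range(n - 2, -1, -1):
        c[i] = (F[i] - off * c[i + 1]) / diag[i]
    return xs, c

xs, c = fem_1d(lambda x: math.pi ** 2 * math.sin(math.pi * x), 99)
```

On a uniform mesh this stiffness matrix reduces to the same tridiagonal system as the finite difference discretization, which is a useful sanity check for 1D FEM codes.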
Basis Functions
Piecewise linear: hat function $\phi_i$, equal to $1$ at node $i$ and $0$ at every other node
Piecewise polynomial: Higher order elements
Advantages
- Adaptive meshing
- Handles complex geometries
- Irregular meshes
Mesh Generation
- Triangular elements (2D)
- Tetrahedral elements (3D)