Numerical Methods

Numerical Methods - Computational Algorithms

Table of Contents

  1. Root Finding
  2. Numerical Integration
  3. Numerical Differentiation
  4. Interpolation
  5. Numerical Linear Algebra
  6. Initial Value Problems for ODEs
  7. Finite Difference Methods
  8. Finite Element Method

Root Finding

Bisection Method

Assumption: $f$ continuous on $[a,b]$ with $f(a)f(b) < 0$

Algorithm:

  1. Set $c = (a+b)/2$
  2. If $f(c) = 0$ (or $|f(c)| < \epsilon$): done
  3. If $f(a)f(c) < 0$: set $b = c$; else set $a = c$
  4. Repeat

Convergence: Guaranteed, linear rate (error reduces by factor 1/2 each step)

Error bound after $n$ steps: $|c_n - r| \leq (b-a)/2^{n+1}$
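
A minimal Python sketch of the algorithm above (the test function and bracket are illustrative):

```python
def bisect(f, a, b, tol=1e-10, max_iter=100):
    """Bisection: assumes f continuous with f(a) * f(b) < 0."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2
        if abs(f(c)) < tol or (b - a) / 2 < tol:
            return c
        if f(a) * f(c) < 0:
            b = c  # root lies in [a, c]
        else:
            a = c  # root lies in [c, b]
    return (a + b) / 2

# Example: root of x^3 - x - 2 on [1, 2] (true root ~1.5214)
print(bisect(lambda x: x**3 - x - 2, 1, 2))
```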

Fixed Point Iteration

Find a fixed point of $g$: $x = g(x)$

Algorithm: $x_{n+1} = g(x_n)$

Convergence: Guaranteed (linear, with rate $|g'(r)|$) if $|g'(x)| \leq k < 1$ in a neighborhood of the fixed point $r$

Rate: If $g'(r) = 0$, convergence is at least quadratic (higher order)
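
A minimal sketch of the iteration; $g = \cos$ is a classic example with $|g'| < 1$ near the fixed point:

```python
import math

def fixed_point(g, x0, tol=1e-10, max_iter=200):
    """Iterate x_{n+1} = g(x_n) until successive iterates agree."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence; is |g'| < 1 near the fixed point?")

# Example: x = cos(x) has the fixed point ~0.7390851
print(fixed_point(math.cos, 1.0))
```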

Newton’s Method

For $f(x) = 0$:

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$

Convergence: Quadratic if $f'(r) \neq 0$ (simple root) and the initial guess is sufficiently close

Order: $p = 2$

Geometric interpretation: each iterate is the intersection of the tangent line at $(x_n, f(x_n))$ with the x-axis

Issue: May diverge or oscillate for a poor initial guess
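
A sketch with the derivative supplied by the caller (function and initial guess illustrative):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton's method; quadratic convergence near a simple root."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x = x - fx / df(x)  # tangent-line step
    raise RuntimeError("no convergence; try a better initial guess")

# Example: sqrt(2) as the root of x^2 - 2
print(newton(lambda x: x**2 - 2, lambda x: 2 * x, 1.0))
```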

Secant Method

Approximates derivative:

$$x_{n+1} = x_n - f(x_n)\,\frac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})}$$

Convergence: Superlinear, order $\approx 1.618$ (the golden ratio)

Advantage: No derivative needed

Disadvantage: Two initial guesses needed
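
A sketch; note that only $f$ itself is evaluated, never $f'$:

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant method: Newton with f' replaced by a difference quotient."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if abs(f1) < tol:
            return x1
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
    raise RuntimeError("no convergence")

print(secant(lambda x: x**2 - 2, 1.0, 2.0))  # ~1.41421356
```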

Convergence Criteria

Absolute error: $|x_n - x_{n-1}| < \epsilon$

Relative error: $|x_n - x_{n-1}|/|x_n| < \epsilon$

Function value: $|f(x_n)| < \epsilon$

Multi-dimensional Newton

For $\mathbf{f}(\mathbf{x}) = \mathbf{0}$ (a system of equations):

$$\mathbf{x}_{n+1} = \mathbf{x}_n - J^{-1}(\mathbf{x}_n)\,\mathbf{f}(\mathbf{x}_n)$$

where $J$ is the Jacobian matrix.

Requires: Solving the linear system $J(\mathbf{x}_n)\,\Delta\mathbf{x} = -\mathbf{f}(\mathbf{x}_n)$ at each iteration (in practice $J^{-1}$ is never formed explicitly)
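
A sketch assuming NumPy; note the linear solve in place of an explicit inverse (the example system is illustrative):

```python
import numpy as np

def newton_system(f, jac, x0, tol=1e-12, max_iter=50):
    """Newton for systems: solve J(x_n) dx = -f(x_n) at each step."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            return x
        dx = np.linalg.solve(jac(x), -fx)  # one linear solve per iteration
        x = x + dx
    raise RuntimeError("no convergence")

# Example: x^2 + y^2 = 1 and x = y, solution x = y = 1/sqrt(2)
f = lambda v: np.array([v[0]**2 + v[1]**2 - 1, v[0] - v[1]])
jac = lambda v: np.array([[2 * v[0], 2 * v[1]], [1.0, -1.0]])
print(newton_system(f, jac, [1.0, 0.5]))
```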


Numerical Integration

Newton-Cotes Formulas

Closed formulas: Use the interval endpoints as nodes (open formulas omit them)

Trapezoidal Rule

$$\int_a^b f(x)\,dx \approx \frac{b-a}{2}[f(a) + f(b)]$$

Composite:

$$\int_a^b f(x)\,dx \approx h\left[\frac{f(a)}{2} + \sum_{i=1}^{n-1} f(x_i) + \frac{f(b)}{2}\right]$$

where $h = (b-a)/n$, $x_i = a + ih$

Error: $O(h^2)$

Simpson’s Rule

$$\int_a^b f(x)\,dx \approx \frac{b-a}{6}\left[f(a) + 4f\!\left(\tfrac{a+b}{2}\right) + f(b)\right]$$

Composite ($n$ even):

$$\int_a^b f(x)\,dx \approx \frac{h}{3}\left[f(x_0) + 4\sum_{i=1}^{n/2} f(x_{2i-1}) + 2\sum_{i=1}^{n/2-1} f(x_{2i}) + f(x_n)\right]$$

Error: $O(h^4)$

Requires: An even number of subintervals
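
Minimal NumPy sketches of both composite rules (the test integrand is illustrative):

```python
import numpy as np

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals; error O(h^2)."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

def simpson(f, a, b, n):
    """Composite Simpson's rule; n must be even; error O(h^4)."""
    if n % 2:
        raise ValueError("n must be even")
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h / 3 * (y[0] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum() + y[-1])

# Example: integral of sin over [0, pi] is exactly 2
print(trapezoid(np.sin, 0, np.pi, 100))  # ~1.99984 (error ~ h^2)
print(simpson(np.sin, 0, np.pi, 100))    # ~2.000000000 (error ~ h^4)
```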

Adaptive Integration

Adapt subinterval size based on local error estimates.

Romberg Integration: Richardson extrapolation applied to the composite trapezoidal rule

Gauss Quadrature: The $n$-node rule with nodes at the roots of an orthogonal polynomial is exact for polynomials of degree $\leq 2n-1$, the best possible

Gauss-Legendre: Nodes are the roots of Legendre polynomials (weight $w(x) = 1$ on $[-1,1]$)
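
A short demonstration using NumPy's built-in Gauss-Legendre nodes and weights:

```python
import numpy as np

# n-node Gauss-Legendre rule: exact for polynomials of degree <= 2n - 1.
nodes, weights = np.polynomial.legendre.leggauss(5)

# Integrate x^8 over [-1, 1]; exact value is 2/9 = 0.2222...
f = lambda x: x**8
print(weights @ f(nodes))  # exact to machine precision, since 8 <= 2*5 - 1
```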


Numerical Differentiation

Forward Difference

$$f'(x) \approx \frac{f(x+h) - f(x)}{h}$$

Error: $O(h)$

Backward Difference

$$f'(x) \approx \frac{f(x) - f(x-h)}{h}$$

Error: $O(h)$

Central Difference

$$f'(x) \approx \frac{f(x+h) - f(x-h)}{2h}$$

Error: $O(h^2)$

Second Derivative

$$f''(x) \approx \frac{f(x+h) - 2f(x) + f(x-h)}{h^2}$$

Error: $O(h^2)$

Sensitivity to Roundoff

Numerical derivatives amplify rounding errors: with machine precision $\epsilon$, roundoff contributes roughly $\epsilon/h$ to the error while truncation contributes $O(h^p)$.

Optimal $h$: Balance truncation and roundoff errors (roughly $h \approx \sqrt{\epsilon}$ for forward differences, $h \approx \epsilon^{1/3}$ for central differences).
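
A short experiment showing the trade-off for the central difference (exact error values are machine-dependent):

```python
import numpy as np

# Truncation error O(h^2) shrinks with h, roundoff O(eps/h) grows,
# so the total error is smallest at an intermediate h (~1e-5 here).
f, x, exact = np.exp, 1.0, np.exp(1.0)
for h in [1e-1, 1e-3, 1e-5, 1e-7, 1e-9, 1e-11]:
    approx = (f(x + h) - f(x - h)) / (2 * h)
    print(f"h = {h:.0e}   error = {abs(approx - exact):.2e}")
```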


Interpolation

Polynomial Interpolation

Given $(x_i, y_i)$: Find a polynomial $p(x)$ with $p(x_i) = y_i$

Existence and uniqueness: There is a unique polynomial of degree $\leq n$ through $n+1$ points with distinct $x_i$

Lagrange Interpolation

$$p(x) = \sum_{i=0}^n y_i L_i(x)$$

where $L_i(x) = \prod_{j \neq i} \frac{x-x_j}{x_i-x_j}$

Properties:

  • $L_i(x_i) = 1$
  • $L_i(x_j) = 0$ for $j \neq i$
  • The $L_i$ form a basis for polynomials of degree $\leq n$

Newton Divided Differences

Divided differences:

$$f[x_0] = f(x_0)$$

$$f[x_0,x_1] = \frac{f(x_1) - f(x_0)}{x_1 - x_0}$$

$$f[x_0,\dots,x_k] = \frac{f[x_1,\dots,x_k] - f[x_0,\dots,x_{k-1}]}{x_k - x_0}$$

Newton form:

$$p_n(x) = f[x_0] + \sum_{k=1}^n f[x_0,\dots,x_k] \prod_{j=0}^{k-1}(x-x_j)$$

Advantage: Easy to add points (each new node adds one term; the coefficients are computed recursively)
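
A sketch that builds the coefficient table in place and evaluates the Newton form by nested multiplication (example nodes illustrative):

```python
import numpy as np

def divided_differences(x, y):
    """Return the coefficients f[x_0], f[x_0,x_1], ..., f[x_0,...,x_n]."""
    x = np.asarray(x, dtype=float)
    c = np.array(y, dtype=float)
    for k in range(1, len(x)):
        # c[i] becomes f[x_{i-k}, ..., x_i]; RHS is evaluated before assignment
        c[k:] = (c[k:] - c[k - 1:-1]) / (x[k:] - x[:-k])
    return c

def newton_eval(x_nodes, coef, t):
    """Evaluate the Newton form with Horner-like nesting."""
    p = coef[-1]
    for k in range(len(coef) - 2, -1, -1):
        p = p * (t - x_nodes[k]) + coef[k]
    return p

# Example: interpolate y = x^2 at x = 0, 1, 3, then evaluate at t = 2 (exact: 4)
xs, ys = [0.0, 1.0, 3.0], [0.0, 1.0, 9.0]
print(newton_eval(xs, divided_differences(xs, ys), 2.0))
```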

Hermite Interpolation

Interpolate both function values and derivatives

Given $(x_i, y_i, y'_i)$, find $p$ with $p(x_i) = y_i$ and $p'(x_i) = y'_i$

Splines

Cubic spline: Piecewise cubic with continuous first and second derivatives

Natural spline: $S'' = 0$ at the endpoints

Clamped spline: $S'$ prescribed at the endpoints

Setup: The continuity conditions yield a tridiagonal system of equations for the cubic coefficients
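
A usage sketch, assuming SciPy is available; `bc_type` selects natural or clamped end conditions:

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.linspace(0, 2 * np.pi, 9)
y = np.sin(x)

natural = CubicSpline(x, y, bc_type="natural")             # S'' = 0 at ends
clamped = CubicSpline(x, y, bc_type=((1, 1.0), (1, 1.0)))  # S' = cos(0) = cos(2*pi) = 1

print(natural(np.pi / 2), clamped(np.pi / 2))  # both close to sin(pi/2) = 1
```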


Numerical Linear Algebra

Gaussian Elimination with Pivoting

Partial pivoting: At each step, swap rows so the pivot is the entry of largest $|a_{ij}|$ in the current column (on or below the diagonal)

Complete pivoting: Swap both rows and columns

Gauss-Jordan: Continue elimination to reduced row echelon form

LU Decomposition

$PA = LU$ where $P$ is a permutation matrix, $L$ lower triangular, $U$ upper triangular

Solve: $Ly = Pb$, then $Ux = y$

Advantage: $L, U$ are computed once ($O(n^3)$); each additional right-hand side then costs only $O(n^2)$
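
A usage sketch with SciPy's `lu_factor`/`lu_solve` (the matrix is illustrative):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, 3.0], [6.0, 3.0]])
lu, piv = lu_factor(A)  # factor PA = LU once, O(n^3)

# Reuse the factorization for several right-hand sides, O(n^2) each
for b in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
    print(lu_solve((lu, piv), b))
```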

Cholesky Decomposition

For symmetric positive definite $A$: $A = LL^T$ ($L$ lower triangular)

About half the work and storage of LU

QR Decomposition

$A = QR$ where $Q$ is orthogonal ($Q^T Q = I$), $R$ upper triangular

Gram-Schmidt: Compute $Q$ column by column

Householder reflections: More numerically stable

Eigenvalue Algorithms

Power method: For dominant eigenvalue

$$x_{k+1} = \frac{A x_k}{\|A x_k\|}$$

Converges to the dominant eigenvector (when $|\lambda_1| > |\lambda_2|$); the eigenvalue is recovered from the Rayleigh quotient $x^T A x / x^T x$
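
A minimal sketch assuming NumPy (matrix and iteration count are illustrative):

```python
import numpy as np

def power_method(A, x0, n_iter=200):
    """Power iteration: converges when |lambda_1| > |lambda_2|,
    at rate |lambda_2 / lambda_1| per step."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(n_iter):
        x = A @ x
        x = x / np.linalg.norm(x)
    return x @ A @ x, x  # Rayleigh quotient (x is unit length) and eigenvector

A = np.array([[2.0, 1.0], [1.0, 3.0]])
lam, v = power_method(A, np.array([1.0, 0.0]))
print(lam)  # ~3.618, the dominant eigenvalue (5 + sqrt(5)) / 2
```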

Inverse iteration: For the eigenvalue nearest a shift $\mu$

Solve $(A - \mu I)x_{k+1} = x_k$, then normalize

QR algorithm: Iterative method for all eigenvalues

Shifts accelerate convergence

Condition Number

$$\kappa(A) = \|A\|\,\|A^{-1}\|$$

Measures the sensitivity of the solution of a linear system to errors in the data

Large $\kappa$ $\implies$ ill-conditioned: roughly $\log_{10}\kappa$ digits of accuracy are lost; nearly singular matrices have large $\kappa$
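
A quick illustration with NumPy; Hilbert matrices are a standard ill-conditioned example:

```python
import numpy as np

n = 8
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
print(np.linalg.cond(H))  # ~1.5e10: expect to lose ~10 digits in a solve
```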


Initial Value Problems for ODEs

Problem

$$y' = f(t,y), \quad y(t_0) = y_0$$

Euler’s Method

$$y_{n+1} = y_n + h f(t_n, y_n)$$

where $h$ is the step size

Local (one-step) error: $O(h^2)$

Global error: $O(h)$ over the entire interval

Runge-Kutta Methods

RK4 (Classical):

$$k_1 = h f(t_n, y_n)$$

$$k_2 = h f(t_n + h/2,\; y_n + k_1/2)$$

$$k_3 = h f(t_n + h/2,\; y_n + k_2/2)$$

$$k_4 = h f(t_n + h,\; y_n + k_3)$$

$$y_{n+1} = y_n + (k_1 + 2k_2 + 2k_3 + k_4)/6$$

Error: $O(h^4)$
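
A minimal sketch for scalar problems (the test equation is illustrative):

```python
def rk4(f, t0, y0, h, n_steps):
    """Classical RK4: local error O(h^5), global error O(h^4)."""
    t, y = t0, y0
    for _ in range(n_steps):
        k1 = h * f(t, y)
        k2 = h * f(t + h / 2, y + k1 / 2)
        k3 = h * f(t + h / 2, y + k2 / 2)
        k4 = h * f(t + h, y + k3)
        y = y + (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t = t + h
    return y

# Example: y' = y, y(0) = 1; y(1) should approximate e = 2.718281828...
print(rk4(lambda t, y: y, 0.0, 1.0, 0.1, 10))
```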

Multistep Methods

Use previous values:

$$y_{n+1} = \sum_{j=0}^{k-1} \alpha_j y_{n-j} + h \sum_{j=-1}^{k-1} \beta_j f_{n-j}$$

Adams-Bashforth: Explicit ($\beta_{-1} = 0$)

Adams-Moulton: Implicit ($\beta_{-1} \neq 0$; requires solving for $y_{n+1}$)

Advantage: More efficient per step

Disadvantage: Needs starting values (e.g., generated by RK4)

Stability

Stability region: $\{ z = h\lambda \in \mathbb{C} : \text{the method applied to } y' = \lambda y \text{ produces bounded iterates} \}$

A-stable: The stability region includes the entire left half-plane $\operatorname{Re}(z) < 0$

Implicit methods often more stable (larger stability regions)

Adaptive Step Size

Estimate local error: Compare two methods of different order (e.g., an embedded Runge-Kutta pair), as sketched below

Adjust $h$: Increase $h$ if the estimated error is well below the tolerance, decrease it if above
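
One common strategy is step doubling; a simplified sketch built on RK4 (the safety factor 0.9 and the growth/shrink limits are conventional choices, not canonical):

```python
def rk4_step(f, t, y, h):
    k1 = h * f(t, y)
    k2 = h * f(t + h / 2, y + k1 / 2)
    k3 = h * f(t + h / 2, y + k2 / 2)
    k4 = h * f(t + h, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

def adaptive_step(f, t, y, h, tol=1e-8):
    """Compare one step of size h against two steps of size h/2."""
    y_big = rk4_step(f, t, y, h)
    y_half = rk4_step(f, t + h / 2, rk4_step(f, t, y, h / 2), h / 2)
    err = abs(y_half - y_big)  # local error estimate
    # RK4 has order 4, so the acceptable step scales like (tol/err)^(1/5)
    h_new = h * min(2.0, max(0.5, 0.9 * (tol / max(err, 1e-16)) ** 0.2))
    return y_half, h_new, err
```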


Finite Difference Methods

Approximating Derivatives

Second derivative:

$$u''(x_i) \approx \frac{u(x_{i+1}) - 2u(x_i) + u(x_{i-1})}{h^2}$$

Boundary Value Problems

Example: $u'' = f(x)$, $u(0) = \alpha$, $u(1) = \beta$

Discretize: $x_i = ih$, $i = 0,\dots,n+1$ ($h = 1/(n+1)$)

Discrete equations:

$$\frac{u_{i+1} - 2u_i + u_{i-1}}{h^2} = f(x_i)$$

Linear system: $Au = b$

The matrix $A$ is tridiagonal (banded), so the system can be solved in $O(n)$ operations
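
A sketch assembling the system densely with NumPy for clarity (a banded solver would be preferred at scale; the test problem is illustrative):

```python
import numpy as np

def solve_bvp_fd(f, alpha, beta, n):
    """Solve u'' = f(x) on [0,1], u(0) = alpha, u(1) = beta,
    with the second-order central difference on a uniform grid."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)  # interior nodes x_1, ..., x_n
    A = (np.diag(-2 * np.ones(n)) +
         np.diag(np.ones(n - 1), 1) +
         np.diag(np.ones(n - 1), -1)) / h**2
    b = f(x)
    b[0] -= alpha / h**2   # fold boundary values into the right-hand side
    b[-1] -= beta / h**2
    return x, np.linalg.solve(A, b)

# Example: u'' = -pi^2 sin(pi x), u(0) = u(1) = 0; exact u = sin(pi x)
x, u = solve_bvp_fd(lambda x: -np.pi**2 * np.sin(np.pi * x), 0.0, 0.0, 99)
print(np.max(np.abs(u - np.sin(np.pi * x))))  # ~8e-5, consistent with O(h^2)
```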

Two-Point BVP

Eigenvalue problem: $-u'' = \lambda u$, $u(0) = u(1) = 0$

Eigenvalues: $\lambda_k = (k\pi)^2$, $k = 1, 2, \dots$

Discretizing gives a symmetric tridiagonal matrix eigenvalue problem, solvable with standard algorithms (QR for the standard problem; QZ if a generalized problem $Ax = \lambda Bx$ arises)

2D Problems

Laplace equation: $u_{xx} + u_{yy} = 0$

Discretize: $u_{i,j} \approx u(x_i, y_j)$

Five-point stencil:

$$\frac{u_{i+1,j} - 2u_{i,j} + u_{i-1,j}}{h_x^2} + \frac{u_{i,j+1} - 2u_{i,j} + u_{i,j-1}}{h_y^2} = 0$$

Block tridiagonal system
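
A sketch assembling the five-point matrix with Kronecker products, assuming NumPy (grid size illustrative):

```python
import numpy as np

def laplacian_2d(n, h):
    """Five-point Laplacian on an n x n interior grid with Dirichlet
    boundaries; the Kronecker sum exhibits the block tridiagonal structure."""
    I = np.eye(n)
    T = (np.diag(-2 * np.ones(n)) +
         np.diag(np.ones(n - 1), 1) +
         np.diag(np.ones(n - 1), -1)) / h**2
    return np.kron(I, T) + np.kron(T, I)  # u_xx part + u_yy part

n = 20
A = laplacian_2d(n, 1.0 / (n + 1))
print(np.linalg.eigvalsh(A)[-1])  # ~ -19.7, approximating -2*pi^2 = -19.74
```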


Finite Element Method

Weak Formulation

Instead of solving the strong form (e.g., $-\Delta u = f$), solve the weak form:

Find $u$ such that $\int_{\Omega} \nabla u \cdot \nabla v \, d\Omega = \int_{\Omega} f v \, d\Omega$

for all test functions $v$

Galerkin Method

Approximate $u$ as: $u \approx \sum_i u_i \phi_i$

Choose test functions $\psi_j$

Discretize: $\sum_i u_i \int \nabla\phi_i \cdot \nabla\psi_j = \int f \psi_j$

Galerkin: Choose $\psi_j = \phi_j$

Linear system: $Ku = F$

where $K_{ij} = \int \nabla\phi_i \cdot \nabla\phi_j$ (the stiffness matrix) and $F_j = \int f \phi_j$ (the load vector)
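
A minimal sketch for the 1D model problem $-u'' = f$, $u(0) = u(1) = 0$, with hat functions on a uniform mesh, assuming NumPy (the load vector uses simple lumping for brevity):

```python
import numpy as np

def fem_1d(f, n):
    """Piecewise-linear Galerkin FEM: the stiffness matrix for hat
    functions on a uniform mesh is tridiagonal with entries (2, -1)/h."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)  # interior nodes
    K = (np.diag(2 * np.ones(n)) -
         np.diag(np.ones(n - 1), 1) -
         np.diag(np.ones(n - 1), -1)) / h
    F = h * f(x)  # lumped load vector, F_j ~ integral of f * phi_j
    return x, np.linalg.solve(K, F)

# Example: -u'' = pi^2 sin(pi x); exact solution u = sin(pi x)
x, u = fem_1d(lambda x: np.pi**2 * np.sin(np.pi * x), 49)
print(np.max(np.abs(u - np.sin(np.pi * x))))  # ~3e-4, consistent with O(h^2)
```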

Basis Functions

Piecewise linear: $\phi_i(x)$ is the "hat" function equal to 1 at node $i$ and 0 at all other nodes

Piecewise polynomial: Higher order elements

Advantages

  • Adaptive meshing
  • Handles complex geometries
  • Irregular meshes

Mesh Generation

  • Triangular elements (2D)
  • Tetrahedral elements (3D)


Supplementary Reference: Computational Costs

Computational Costs for Linear Systems

Gaussian Elimination with Pivoting:

Cost: $O(n^3)$ operations (pivoting strategies are described in the Numerical Linear Algebra section above)

LU Decomposition:

Cost: $O(n^3)$ to factor; $O(n^2)$ per additional right-hand side

Cholesky Decomposition:

Cost: About half that of LU ($\approx n^3/6$ multiplications)

Costs and Convergence for Eigenvalue Algorithms

Power method: Converges linearly to the dominant eigenpair at rate $|\lambda_2/\lambda_1|$; cost $O(n^2)$ per iteration for a dense matrix

Inverse iteration: Converges rapidly to the eigenpair nearest the shift $\mu$; each iteration requires one linear solve with $A - \mu I$

QR algorithm: Iterative method for all eigenvalues

Cost: $O(n^3)$ for all eigenvalues of a dense matrix

Condition Number Estimation

$\kappa(A) = \|A\|\,\|A^{-1}\|$ (defined above) measures sensitivity to errors in the data.

Cost: $O(n^2)$ to estimate once a factorization of $A$ is available (as in LAPACK-style condition estimators)


Next: Optimization or Topology


Summary: Comprehensive numerical methods reference covering root finding, integration, differentiation, interpolation, numerical linear algebra, ODE solvers, and finite difference/finite element methods.