
# Round-off and truncation error

1. Round-off errors are due to the approximate representation of real numbers as finite-precision floating-point numbers.

```python
from math import pi
print(pi)
```
```
3.141592653589793
```

The true value is $\pi = 3.141592653589793238462643\ldots$, so the stored double keeps only about 16 significant digits.

2. Subtractive cancellation

Consider $x = 1$ and $y = 1 + 10^{-15}\sqrt{2}$:

```python
from math import sqrt

x = 1.0
y = 1.0 + 1e-15 * sqrt(2)
dt = 1e-15 * sqrt(2)   # true difference y - x
dn = y - x             # difference computed in floating point
print("dt = ", dt)
print("dn = ", dn)
print("Relative error: ", (dt - dn) / dt)
```
```
dt =  1.4142135623730953e-15
dn =  1.3322676295501878e-15
Relative error:  0.05794452478973511
```

A similar accumulation of round-off error occurs in addition when a large number and a small number are combined, as the sketch below shows.
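A minimal sketch of both effects using only the standard library; the particular values (`1e16`, `0.1`, ten terms) are illustrative choices, not from the original post:

```python
from math import fsum

# A large number plus a small one: the small term is lost entirely,
# because 1.0 is smaller than the spacing between adjacent floats near 1e16.
big = 1e16
small = 1.0
print(big + small == big)   # True

# Round-off also accumulates when many small terms are summed one by one;
# math.fsum tracks the lost low-order bits and returns the correctly rounded sum.
terms = [0.1] * 10
print(sum(terms))    # 0.9999999999999999
print(fsum(terms))   # 1.0
```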

3. Truncation Errors

Truncation errors are created when an exact mathematical procedure is replaced by a truncated approximation, for example by keeping only finitely many terms of an infinite series.

Example: the Maclaurin series for the exponential function

$$e^x=\sum_{n=0}^{\infty}\frac{x^n}{n!}$$

Now keep only the first three terms:

$$e^x=\sum_{n=0}^{\infty}\frac{x^n}{n!}\approx 1 + x + \frac{x^2}{2!}$$
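A short sketch of the resulting truncation error; the helper `maclaurin_exp` and the test point `x = 0.5` are illustrative assumptions, not part of the original example:

```python
from math import exp, factorial

def maclaurin_exp(x, n_terms=3):
    """Approximate e**x with the first n_terms terms of its Maclaurin series."""
    return sum(x**n / factorial(n) for n in range(n_terms))

x = 0.5
approx = maclaurin_exp(x)   # 1 + x + x**2/2 = 1.625
exact = exp(x)              # 1.6487212707001282
print("three-term approximation:", approx)
print("math.exp value:          ", exact)
print("truncation error:        ", exact - approx)
```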

4. Numerical differentiation and numerical errors

Definition of the true derivative:

$$f'(x)=\lim_{dx\rightarrow 0}\frac{f(x+dx)-f(x)}{dx}$$

Approximation of the derivative (the numerical derivative):

$$f'(x_i)\approx\frac{f(x_{i+1})-f(x_i)}{x_{i+1}-x_i}$$
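A minimal sketch of this finite-difference approximation; the function $f(x)=\sin x$ and the sample points $x_i=1.0$, $x_{i+1}=1.1$ are illustrative choices:

```python
from math import sin, cos

# Numerical derivative of f(x) = sin(x) from two nearby sample points.
xi, xi1 = 1.0, 1.1
approx = (sin(xi1) - sin(xi)) / (xi1 - xi)
exact = cos(xi)   # the true derivative of sin is cos
print("numerical derivative:", approx)
print("true derivative:     ", exact)
print("error:               ", exact - approx)
```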

5. Truncation error estimate using Taylor series

$$f(x_{i+1})=f(x_i)+f'(x_i)(x_{i+1}-x_i)+R_1$$

$$f'(x_i)=\frac{f(x_{i+1})-f(x_i)}{x_{i+1}-x_i}-\frac{R_1}{x_{i+1}-x_i}$$

where $\frac{f(x_{i+1})-f(x_i)}{x_{i+1}-x_i}$ is the approximation term, and $\frac{R_1}{x_{i+1}-x_i}$ is the truncation error.

More generally, the Taylor-series remainder is $R_n=\frac{f^{(n+1)}(\xi)}{(n+1)!}h^{n+1}=O(h^{n+1})$, where $h = x_{i+1}-x_i$ and $\xi$ lies between $x_i$ and $x_{i+1}$.
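A quick numerical check of this estimate for the first-order expansion ($n=1$); the choice $f(x)=e^x$, whose derivatives are all $e^x$, and the step sizes are assumptions for illustration. The ratio $R_1/h^2$ should settle near $f''(x_i)/2 = e/2 \approx 1.359$:

```python
from math import exp

# First-order Taylor expansion of f about x_i:
#   f(x_i + h) = f(x_i) + f'(x_i)*h + R_1,  with R_1 = f''(ξ)/2! * h**2 = O(h**2).
x_i = 1.0
for h in (1e-1, 1e-2, 1e-3):
    r1 = exp(x_i + h) - (exp(x_i) + exp(x_i) * h)   # remainder of the linear expansion
    print(f"h = {h:g}: R_1 = {r1:.3e}, R_1 / h**2 = {r1 / h**2:.4f}")
```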

Three expressions for the numerical derivative:

(1) Forward formula
$$f'(x_i)=\frac{f(x_{i+1})-f(x_i)}{x_{i+1}-x_i}+O(h)$$
(2) Backward formula
$$f'(x_i)=\frac{f(x_{i})-f(x_{i-1})}{x_{i}-x_{i-1}}+O(h)$$
(3) Centered formula
$$f'(x_i)=\frac{f(x_{i+1})-f(x_{i-1})}{x_{i+1}-x_{i-1}}+O(h^2)$$

So the centered formula is more accurate than the forward and backward formulas, as the comparison below confirms.
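A sketch comparing the three formulas on $f(x)=\sin x$ at $x=1$ (an illustrative choice, not from the original post); the centered error should shrink roughly like $h^2$ while the one-sided errors shrink like $h$:

```python
from math import sin, cos

def forward(f, x, h):
    return (f(x + h) - f(x)) / h             # error O(h)

def backward(f, x, h):
    return (f(x) - f(x - h)) / h             # error O(h)

def centered(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)   # error O(h**2)

x, exact = 1.0, cos(1.0)                     # d/dx sin(x) = cos(x)
for h in (1e-1, 1e-2, 1e-3):
    print(f"h = {h:g}: "
          f"forward {abs(forward(sin, x, h) - exact):.2e}, "
          f"backward {abs(backward(sin, x, h) - exact):.2e}, "
          f"centered {abs(centered(sin, x, h) - exact):.2e}")
```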

6. Error propagation and condition number

Let $x^*$ be an approximation of the true value $x$. Expanding $f$ to first order about $x^*$:

$$f(x) \approx f(x^*)+f'(x^*)(x-x^*)$$

$$\frac{f(x)-f(x^*)}{f(x^*)}=\frac{f'(x^*)(x-x^*)}{f(x^*)}$$

$$\frac{f(x)-f(x^*)}{f(x^*)}=\frac{x^*f'(x^*)}{f(x^*)}\cdot\frac{x-x^*}{x^*}$$

where $\frac{x^*f'(x^*)}{f(x^*)}$ is called the condition number.

$$\frac{f(x)-f(x^*)}{f(x^*)}=\text{condition number}\times\frac{x-x^*}{x^*}$$

A small condition number means the relative error in $x$ is not amplified in $f(x)$, so the problem is well-conditioned.

A large condition number means the relative error is greatly amplified, so the problem is ill-conditioned; the sketch below illustrates this for $f(x)=\tan x$ near $\pi/2$.
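A sketch of how the condition number predicts error amplification; the helper `condition_number` and the example $f(x)=\tan x$ evaluated near $\pi/2$ are illustrative assumptions, not from the original post:

```python
from math import tan, cos

def condition_number(f, fprime, x):
    """Relative condition number x * f'(x) / f(x)."""
    return x * fprime(x) / f(x)

def sec_squared(x):
    return 1.0 / cos(x) ** 2   # derivative of tan(x)

# tan(x) is ill-conditioned near x = pi/2, where it blows up.
x_star = 1.57                  # close to pi/2 ≈ 1.5708
cn = condition_number(tan, sec_squared, x_star)
print("condition number:", cn)  # roughly 2e3 -> ill-conditioned

# A tiny relative error in x is amplified by roughly the condition number.
x = x_star * (1 + 1e-6)
rel_x = (x - x_star) / x_star
rel_f = (tan(x) - tan(x_star)) / tan(x_star)
print("relative error in x:   ", rel_x)
print("relative error in f(x):", rel_f)
print("amplification factor:  ", rel_f / rel_x)
```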
