Numerical Analysis Gr:3 – INS2942

Assistant Professor Gökhan Demirdöğen

gokhand@yildiz.edu.tr

References:
- Chapra, S. C., & Canale, R. P. (2011). Numerical Methods for Engineers. New York: McGraw-Hill.
- Zafer Kütüğ, Numerical Analysis lecture notes, Yildiz Technical University.
Outline
 Root finding
  Bisection method
  Regula Falsi
  Secant
  Fixed point
  Newton-Raphson
 Solution of linear systems
  Matrices
  Cramer rule
  Gauss elimination
  Jacobi
  Gauss-Seidel
 Interpolation
  Lagrange polynomial
  Newton polynomial
  Finite difference polynomial
  Least squares
 Numerical Integration
  Trapezoidal rule
  1/3 Simpson integration
  3/8 Simpson integration
 Numerical differentiation
  Taylor series
  Euler method
 Finite difference for solving differential equations
  Central finite difference from Taylor series
  Central finite difference from Lagrange polynomial
References:
 Steven C. Chapra and Raymond P. Canale, Numerical Methods for Engineers, McGraw-Hill Education.
 Zafer Kütüğ, Numerical Analysis lecture notes (Sayısal analiz ders notları), Yildiz Technical University.
 Chapra & Canale (trans. Heperkan and Keskin), Mühendisler için Sayısal Yöntemler (Numerical Methods for Engineers), Literatür Yayınları.
 Mathews, John H., Numerical Methods for Mathematics, Science and Engineering, Prentice Hall.
 Jeffrey R. Chasnov, Numerical Methods, The Hong Kong University of Science and Technology.
 Behiç Çağal, Sayısal Analiz (Numerical Analysis), Birsen Yayınevi.
 Bakioğlu, M., Sayısal Analiz (Numerical Analysis), Beta Yayınları.
1) Root finding
 What we learned was:

 For solving:

 However, there are many other functions for which the root cannot be
determined so easily.
1.1) Bisection (Bifurcation method)
 The bisection method is a numerical algorithm used to
find the roots of a given function.
 The bisection method, which is alternatively called
binary chopping, interval halving, or Bolzano’s method,
is one type of incremental search method in which the
interval is always divided in half.
 If a function changes sign over an interval, the function
value at the midpoint is evaluated. The location of the
root is then determined as lying at the midpoint of the
subinterval within which the sign change occurs. The
process is repeated to obtain refined estimates.
 The basic idea of the method is to repeatedly divide
an interval in half and then select the subinterval in
which the root must lie, until the interval becomes
sufficiently small to obtain the desired accuracy.
1.1) Bisection (Bifurcation method)
 Let f(x) be a continuous function on [a, b].
 If f(a)·f(b) < 0, then there must be at least one root of f(x) = 0 in [a, b].
 The error bound after n iterations is |b − a| / 2^n.
1.1) Bisection (Bifurcation method)
1. Choose an interval [a, b] such that f(a) and f(b) have opposite signs.
2. Compute the midpoint of the interval: c = (a + b) / 2.
3. Evaluate the function at the midpoint: f(c).
4. If f(c) is zero, then c is the root and we are done.
5. Otherwise, if f(c) and f(a) have opposite signs, the root must lie in [a, c]. Repeat the process on this subinterval.
6. Otherwise, if f(c) and f(b) have opposite signs, the root must lie in [c, b]. Repeat the process on this subinterval.
7. Repeat steps 2-6 until the interval [a, b] becomes sufficiently small (satisfies the error limit).
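The steps above can be sketched in Python (a minimal illustration; the cubic test function is an assumption, not the slide's example):

```python
def bisection(f, a, b, tol=1e-6, max_iter=100):
    """Find a root of f in [a, b] by repeatedly halving the interval."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2                      # step 2: midpoint
        if f(c) == 0 or (b - a) / 2 < tol:   # steps 4 and 7: done
            return c
        if f(a) * f(c) < 0:                  # step 5: root in [a, c]
            b = c
        else:                                # step 6: root in [c, b]
            a = c
    return (a + b) / 2

# Illustrative function (an assumption): x^3 - x - 2 has a root near 1.5214 in [1, 2]
root = bisection(lambda x: x**3 - x - 2, 1, 2)
```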
1.1) Bisection (Bifurcation method)
 Example: Find the root of the following equation in the interval [1, 2].
1.1) Bisection (Bifurcation method)
 Example: Your turn!
 f(x) = 2x³ + sin²(x) − 5x. Find the root in [1, 6].


1.2) Regula Falsi Method (False position)
 Although bisection is a perfectly valid technique for determining roots, its "brute-force" approach is relatively inefficient.
 False position is an alternative based on a graphical insight.
 A shortcoming of the bisection method is that, in dividing the interval from xl to xu into equal halves, no account is taken of the magnitudes of f(xl) and f(xu). For example, if f(xl) is much closer to zero than f(xu), it is likely that the root is closer to xl than to xu.
 An alternative method that exploits this graphical insight is to join f(xl) and f(xu) by a straight line.
 The intersection of this line with the x axis represents an improved estimate of the root.
 The fact that the replacement of the curve by a straight line gives a "false position" of the root is the origin of the name, method of false position, or in Latin, regula falsi.
1.2) Regula Falsi Method (False position)

 If f(c)·f(a) < 0, replace b with c; continue with a and c.
 If f(c)·f(b) < 0, replace a with c; continue with b and c.
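The update rules above translate directly into code (a minimal sketch; the cubic test function is an assumption, not the slide's example):

```python
def regula_falsi(f, a, b, tol=1e-6, max_iter=100):
    """False position: intersect the chord through (a, f(a)) and (b, f(b)) with the x axis."""
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c_old = a
    for _ in range(max_iter):
        c = b - fb * (b - a) / (fb - fa)     # chord's x-axis crossing
        fc = f(c)
        if fc == 0 or abs(c - c_old) < tol:  # Error = c(new) - c(old)
            return c
        if fc * fa < 0:                      # root in [a, c]: replace b with c
            b, fb = c, fc
        else:                                # root in [c, b]: replace a with c
            a, fa = c, fc
        c_old = c
    return c

# Illustrative function (an assumption): x^3 - x - 2 on [1, 2]
root = regula_falsi(lambda x: x**3 - x - 2, 1, 2)
```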
1.2) Regula Falsi Method (False position)

 Example: Using the regula falsi method, find the root of the following equation in [0, 1]. Note: Error = c(new) − c(old).

n  a  b         f(a)  f(b)       c          f(c)      Error
1  0  1         -1    1.1231892  0.4709896  0.265159
2  0  0.47099   -1    0.2651588  0.3722771  0.029534  0.098712542
3  0  0.372277  -1    0.0295337  0.3615977  0.002941  0.010679308
4  0  0.361598  -1    0.0029410  0.3605374  0.000289  0.001060341
5  0  0.360537  -1    0.0002894  0.3604331  2.85E-05  0.000104327
1.2) Regula Falsi Method (False position)
 Example: Your turn!
 f(x) = e^x + x^(1/3), in the interval [-1, 1].


1.3) Secant method

 The secant method is a numerical root-finding algorithm used to find the root of a function. It is a derivative-free method based on the idea of using a straight line to approximate the behavior of the function near the root.
 The secant method requires two initial guesses, x0 and x1, which should be chosen carefully.
1.3) Secant method (Open method – Not bracketing)
 x1 should be chosen closer to the root than x0.
 This helps make the iterations more effective.
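The iteration can be sketched as follows (a minimal illustration; the cubic test function is an assumption, not the slide's example):

```python
def secant(f, x0, x1, tol=1e-8, max_iter=50):
    """Secant method: replace f' in Newton's formula with a finite-difference slope."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:                          # flat secant line: cannot continue
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)  # root of the secant line
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

# Illustrative function (an assumption): x^3 - x - 2 with guesses 1 and 2
root = secant(lambda x: x**3 - x - 2, 1.0, 2.0)
```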
1.3) Secant method

 Example 1: Using the secant method, with initial guesses x0 = 1 and x1 = 2.


1.3) Secant method

 Example 2: Using the secant method, with initial guesses x0 = 0 and x1 = 1.


1.4) Fixed point method

 The fixed point method is a root-finding algorithm that seeks a fixed point of a function: a value that does not change when the function is applied to it. In other words, if we rearrange f(x) = 0 into the form x = g(x), the fixed point method seeks a value x* such that g(x*) = x*.
 If f(k) = 0, then k is a root of f(x) and k = g(k).


1.4) Fixed point method

 Example 1: Using the fixed point method with x0 = 4, for three rearrangements of x² − 2x − 3 = 0:

x = (2x+3)^(1/2)        x = 3/(x-2)             x = (x^2-3)/2
4         3.316625      4         1.5           4         6.5
3.316625  3.103748      1.5       -6            6.5       19.625
3.103748  3.034385      -6        -0.375        19.625    191.0703
3.034385  3.011440      -0.375    -1.26316      191.0703  18252.43
3.011440  3.003811      -1.26316  -0.91935      18252.43  1.67E+08
3.003811  3.001270      -0.91935  -1.02762      1.67E+08  1.39E+16
3.001270  3.000423      -1.02762  -0.99088      1.39E+16  9.62E+31
(converges to 3)        -0.99088  -1.00305      9.62E+31  4.63E+63
                        -1.00305  -0.99898      4.63E+63  1.1E+127
                        -0.99898  -1.00034      1.1E+127  5.7E+253 (diverges)
                        -1.00034  -0.99989
                        (converges to -1)
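The iteration x ← g(x) behind the table can be sketched as below; it converges only when |g′| < 1 near the fixed point, which is why the third rearrangement diverges:

```python
def fixed_point(g, x0, tol=1e-8, max_iter=100):
    """Iterate x <- g(x) until successive values agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# The two convergent rearrangements from Example 1 (the third, (x^2 - 3)/2, diverges):
r1 = fixed_point(lambda x: (2 * x + 3) ** 0.5, 4)  # converges to the root 3
r2 = fixed_point(lambda x: 3 / (x - 2), 4)         # converges to the root -1
```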
1.4) Fixed point method

 Example 2: In 1225, Leonardo of Pisa examined the root of the following equation:
 Given the converted g(x), illustrated as shown,
starting from x0 = 1, how many iterations does it take to find that the root is k = 1.368808107?

1.000000000 1.538461538 1.295019157 1.401825309 1.354209390 1.375298092 1.365929788 1.370086003
1.368241024 1.369059812 1.368696398 1.368857689 1.368786103 1.368817874 1.368803773 1.368810032
1.368807254 1.368808487 1.368807940 1.368808182 1.368808075 1.368808123 1.368808101 1.368808111
1.368808107 1.368808108 1.368808108
1.5) Newton-Raphson method

 Perhaps the most widely used of all root-locating


formulas is the Newton-Raphson equation

 The Newton-Raphson method is an iterative numerical


method used for finding the roots of a function. It is
particularly useful for finding the roots of nonlinear
functions.
1.5) Newton-Raphson method
 Rearranging the first-order Taylor expansion gives the iteration formula:
 x(n+1) = x(n) − f(x(n)) / f′(x(n))
 This is called the Newton-Raphson formula.
1.5) Newton-Raphson method

 Common derivatives:
1.5) Newton-Raphson method

 Example 1: Using Newton's method, find the root of the following equation with x0 = 0.

n  x0         f(x0)       f'(x0)     x1         Δx
1  0.0000000  -1.0000000  3.0000000  0.3333333
2  0.3333333  -0.0684177  2.5493445  0.3601707  0.026837380
3  0.3601707  -0.0006280  2.5022625  0.3604217  0.000250967
4  0.3604217  -0.0000001  2.5018142  0.3604217  0.000000022
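The iteration formula can be sketched in code. Example 3 below has table values that match f(x) = e^(−x) − x (f(0) = 1, f′(0) = −2), so that function is used here as the test case:

```python
import math

def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n) / f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        dfx = df(x)
        if dfx == 0:
            raise ZeroDivisionError("derivative vanished")
        x_new = x - f(x) / dfx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# f(x) = e^{-x} - x, f'(x) = -e^{-x} - 1, starting from x0 = 0
root = newton_raphson(lambda x: math.exp(-x) - x,
                      lambda x: -math.exp(-x) - 1, 0.0)
# converges to 0.56714329, as in Example 3's table
```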
1.5) Newton-Raphson method

 Example 2: Using Newton's method, find the root of the following equation using x0 = 1.5.

n  x0           f(x0)        f'(x0)      x1          Δx
1  1.500000000  -0.58953114  0.52554666  2.62174843
2  2.621748432  0.71708590   1.24367132  2.04516048  1.12174843
3  2.045160482  -0.07588225  1.30163431  2.10345816  -0.57658795
4  2.103458157  0.00147221   1.35035330  2.10236792  0.05829767
5  2.102367919  0.00000044   1.34954192  2.10236759  -0.00109024
6  2.102367592  0.00000000   1.34954168  2.10236759  -0.00000033
1.5) Newton-Raphson method
 Example 3:

n  x0           f(x0)        f'(x0)        x1           Δx
1  0            1            -2            0.5
2  0.500000000  0.10653066   -1.60653066   0.566311003  0.50000000
3  0.566311003  0.00130451   -1.567615513  0.567143165  0.06631100
4  0.567143165  1.9648E-07   -1.567143362  0.56714329   0.00083216
5  0.567143290  4.44089E-15  -1.56714329   0.56714329   0.00000013
6  0.567143290  0            -1.56714329   0.56714329   0.00000000
2) Solution of linear systems

 where the a’s are constant coefficients, the b’s are


constants, and n is the number of equations
2) Solution of linear systems
 Matrices
2) Solution of linear systems
 Matrix
2) Solution of linear systems
 Square matrix (n=m)

 The diagonal consisting of the elements a11, a22, a33,


and a44 is termed the principal or main diagonal of
the matrix
2) Solution of linear systems
 Identity matrix

 Diagonal matrix
2) Solution of linear systems
 Upper triangular matrix

 Lower triangular matrix


2) Solution of linear systems
 Multiplication example
 z11 = 3·5 + 1·7 = 22
 z12 = 3·9 + 1·2 = 29
 z21 = 8·5 + 6·7 = 82
 z22 = 8·9 + 6·2 = 84
 z31 = 0·5 + 4·7 = 28
 z32 = 0·9 + 4·2 = 8
2) Solution of linear systems
 Inverse of matrix
2) Solution of linear systems
 Inverse of matrix

      [ 1   2  -1 ]        [ 1  0  0 ]
  A = [-2   0   1 ]    I = [ 0  1  0 ]
      [ 1  -1   0 ]        [ 0  0  1 ]

 Row operations

      [ 1  1  2 ]
      [ 1  1  1 ]
      [ 2  3  4 ]
2) Solution of linear systems
 Determinant of matrix
2.1) Cramer Method
2.1) Cramer Method

det(A)

Cramer's rule: x_i = det(A_i) / det(A), where A_i is A with its i-th column replaced by b (for x1, replace the first column with b).
2.1) Cramer Method
 Example 1: Solve the following system using Cramer's rule.

A =  1   1   2     b = -1
     2  -1   2         -4
     4  -1   4         -2

det(A) = 2

det(A1):  -1   1   2     det(A1) = 18     x1 = 18/2 = 9
          -4  -1   2
          -2  -1   4

det(A2):   1  -1   2     det(A2) = 12     x2 = 12/2 = 6
           2  -4   2
           4  -2   4

det(A3):   1   1  -1     det(A3) = -16    x3 = -16/2 = -8
           2  -1  -4
           4  -1  -2
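Cramer's rule can be sketched directly from its definition (a minimal illustration using cofactor-expansion determinants, fine for small systems):

```python
def det(m):
    """Determinant by cofactor expansion along the first row (fine for small matrices)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(n))

def cramer(A, b):
    """Solve A x = b by Cramer's rule: x_i = det(A_i) / det(A)."""
    d = det(A)
    return [det([row[:i] + [b[k]] + row[i + 1:]   # A_i: column i replaced by b
                 for k, row in enumerate(A)]) / d
            for i in range(len(A))]

# Example 1 above:
x = cramer([[1, 1, 2], [2, -1, 2], [4, -1, 4]], [-1, -4, -2])
# x = [9.0, 6.0, -8.0], matching the hand computation
```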
2.1) Cramer Method
 Example 2: Solve the following system using Cramer's rule.

A =  2  -1   1   3     b = -1
     1   1  -1  -4          6
     3  -1   1   1          4
     1   3   0   3         -5

det(A) = 15

Replacing each column of A with b in turn:
det(A1) = 15                                      x1 = 15/15 = 1
det(A2) = 0 (computed numerically as -6.7E-15)    x2 = 0
det(A3) = 45                                      x3 = 45/15 = 3
det(A4) = -30                                     x4 = -30/15 = -2
2.2) Gauss Elimination
 Transform the matrix to row echelon form.
2.2) Gauss Elimination
 Example 1: Use gauss elimination method

1
R1  3  2  1  |  5
R2  2  5  1  | -3
R3  2  1  3  | 11

R1/3  1  0.666667  0.333333  1.666667
R2    2  5         1         -3
R3    2  1         3         11

R1      1  0.666667  0.333333  1.666667
R2-2R1  0  3.666667  0.333333  -6.33333
R3-2R1  0  -0.33333  2.333333  7.666667

R1           1  0.666667  0.333333  1.666667
R2/3.666667  0  1         0.090909  -1.72727
R3           0  -0.33333  2.333333  7.666667

R1             1  0.666667  0.333333  1.666667    x1 = 2
R2             0  1         0.090909  -1.72727    x2 = -2
R3+0.33333*R2  0  0         2.363636  7.090909    x3 = 3
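The row operations in the worked example can be sketched as a general routine (a minimal sketch without pivoting; it assumes the pivots are nonzero, as they are here):

```python
def gauss_solve(A, b):
    """Reduce the augmented matrix to row-echelon form, then back-substitute."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix [A | b]
    for k in range(n):
        piv = M[k][k]
        M[k] = [v / piv for v in M[k]]                # normalize pivot row (R_k / pivot)
        for i in range(k + 1, n):
            factor = M[i][k]                          # eliminate below: R_i - factor*R_k
            M[i] = [vi - factor * vk for vi, vk in zip(M[i], M[k])]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                    # back substitution
        x[i] = M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))
    return x

# Example 1 above:
x = gauss_solve([[3, 2, 1], [2, 5, 1], [2, 1, 3]], [5, -3, 11])
# x is approximately [2, -2, 3]
```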
2.2) Gauss Elimination
 Example 2: Use the Gauss elimination method.
 x + y – z = -2
 2x – y + z = 5
 -x + 2y + 2z = 1

R1   1   1  -1  | -2
R2   2  -1   1  |  5
R3  -1   2   2  |  1

R1      1   1  -1  -2
R2-2R1  0  -3   3   9
R3+R1   0   3   1  -1

R1     1   1  -1  -2
R2     0  -3   3   9
R3+R2  0   0   4   8

R1     1  1  -1  -2
R2/-3  0  1  -1  -3
R3/4   0  0   1   2

Back substitution: z = 2, y = -3 + z = -1, x = -2 - y + z = 1.
2.3) Jacobi method
 Each unknown is isolated on the left side of its equation.
 Start from initial values (e.g., 0, 0, 0) and then continue iteratively.
 Reorder the equations so that the diagonal is dominant!
2.3) Jacobi method
 Example 1: Use Jacobi method by using (0,0,0) initial values (Error: 0.001)

1
n   x1        x2        x3
1   0         0         0
2   1.833333  0.714286  0.2
3   2.038095  1.180952  0.852381
4   2.084921  1.053061  1.08
5   2.004354  1.001406  1.038209
6   1.994101  0.990327  1.001433
7   1.996537  0.997905  0.994951
8   2.000143  1.000453  0.998469
9   2.000406  1.000478  1.00021
10  2.000124  1.000056  1.000273
11  1.999973  0.999958  1.000047
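The method can be sketched as below. The test system is reconstructed from Example 2's coefficient table (its iterates match the slide's values), with solution (3, -2.5, 7):

```python
def jacobi(A, b, x0, tol=1e-6, max_iter=100):
    """Jacobi iteration: every update uses only the previous iteration's values."""
    n = len(A)
    x = x0[:]
    for _ in range(max_iter):
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new
        x = x_new
    return x

# Diagonally dominant system inferred from Example 2's update coefficients:
A = [[3, -0.1, -0.2], [0.1, 7, -0.3], [0.3, -0.2, 10]]
b = [7.85, -19.3, 71.4]
x = jacobi(A, b, [1, 1, 1])
```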
2.3) Jacobi method
 Example 2: Use Jacobi method by using (1,1,1) initial values

2
Update formulas (reconstructed from the coefficient table):
x1 = (7.85 + 0.1·x2 + 0.2·x3) / 3
x2 = (-19.3 - 0.1·x1 + 0.3·x3) / 7
x3 = (71.4 - 0.3·x1 + 0.2·x2) / 10

n   x1        x2        x3
1   1         1         1
2   2.716667  -2.72857  7.13
3   3.001048  -2.49038  7.003929
4   3.000583  -2.49985  7.000161
5   3.000016  -2.5      6.999986
6   2.999999  -2.5      6.999999
7   3         -2.5      7
8   3         -2.5      7
9   3         -2.5      7
10  3         -2.5      7
11  3         -2.5      7
2.4) Gauss-Seidel
 Similar to the Jacobi method, but different in how each iteration is carried out.
 A variant of the Jacobi method.
 When we find x1, it is used immediately for the following variable (i.e., x2).
 Note: the diagonal should be in its dominant form!
2.4) Gauss-Seidel
 Example: Use the Gauss-Seidel method with (0,0,0) initial values (Error: 0.1).

1
n   x1        x2        x3        Error 1   Error 2   Error 3   Error
1   0         0         0
2   1.833333  1.238095  1.061905
3   2.069048  0.934694  1.061905
4   1.967914  1.002041  0.987687
5   2.002732  0.994351  0.994399
6   1.99905   1.002381  0.998287  0.003682  0.00803   0.003888  0.009652
7   2.001079  1.000218  1.000762  0.002029  0.002163  0.002476  0.003863
8   1.999946  1.000091  1.000303  0.001134  0.000128  0.000459  0.00123
9   1.99998   0.999898  1.000025  3.4E-05   0.000193  0.000278  0.00034
10  1.999962  0.999987  0.999955  1.79E-05  8.91E-05  7.03E-05  0.000115
11  2.000003  1.000002  0.999987  4.14E-05  1.5E-05   3.2E-05   5.45E-05
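Compared with the Jacobi sketch, only one line changes: each freshly computed component is reused within the same sweep. The test system is again the one reconstructed from the Jacobi Example 2 table:

```python
def gauss_seidel(A, b, x0, tol=1e-6, max_iter=100):
    """Like Jacobi, but each freshly computed component is used immediately."""
    n = len(A)
    x = x0[:]
    for _ in range(max_iter):
        x_old = x[:]
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]   # x already holds this sweep's earlier updates
        if max(abs(x[i] - x_old[i]) for i in range(n)) < tol:
            return x
    return x

# Same system as the Jacobi sketch; Gauss-Seidel reaches (3, -2.5, 7) in fewer sweeps
A = [[3, -0.1, -0.2], [0.1, 7, -0.3], [0.3, -0.2, 10]]
b = [7.85, -19.3, 71.4]
x = gauss_seidel(A, b, [1, 1, 1])
```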
2.4) Gauss-Seidel
 Example: Use Gauss-Seidel with (1,1,1) initial values.

2
Update formulas (reconstructed from the coefficient table):
x1 = (7.85 + 0.1·x2 + 0.2·x3) / 3
x2 = (-19.3 - 0.1·x1 + 0.3·x3) / 7
x3 = (71.4 - 0.3·x1 + 0.2·x2) / 10

n   x1        x2        x3
1   1         1         1
2   2.716667  -2.7531   7.003438
3   2.991793  -2.49974  7.000252
4   3.000026  -2.49999  6.999999
5   3         -2.5      7
6   3         -2.5      7
7   3         -2.5      7
8   3         -2.5      7
9   3         -2.5      7
10  3         -2.5      7
11  3         -2.5      7
3) Interpolation (Curve fitting)
 Data are often given for discrete values along a continuum. However, you may require estimates at points between the discrete values.
 In addition, you may require a simplified version of a complicated function. One way to do this is to compute values of the function at a number of discrete values along the range of interest. Then, a simpler function may be derived to fit these values. Both of these applications are known as curve fitting.
 (a) least squares regression, (b) linear interpolation, (c) curvilinear interpolation
3.1) Lagrange Polynomial
 The Lagrange polynomial reproduces the exact values at the given points; at intermediate values it gives approximate results.
 As in all polynomial fittings, the degree of the polynomial is at most one less than the number of given points. For instance, the polynomial fitted to 4 values fn at the 4 points xn can be at most 3rd order.
Lagrange approximation polynomial:
3.1) Lagrange Polynomial

 Example 1: Find a 3rd order polynomial using Lagrange polynomials with the following values obtained during an experiment.

i     0     1     2     3     4
x     3.2   2.7   1.0   4.8   5.6
f(x)  22.0  17.8  14.2  38.3  51.7
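The Lagrange form can be evaluated directly from its definition. The sketch below uses all five tabulated points (a 4th-order fit; restricting to four of them gives the 3rd-order polynomial the example asks for) and shows that the polynomial reproduces the data exactly at the nodes:

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs[i], ys[i]) at x."""
    total = 0.0
    for i in range(len(xs)):
        Li = 1.0
        for j in range(len(xs)):
            if j != i:
                Li *= (x - xs[j]) / (xs[i] - xs[j])  # basis polynomial L_i(x)
        total += ys[i] * Li
    return total

# Data from Example 1:
xs = [3.2, 2.7, 1.0, 4.8, 5.6]
ys = [22.0, 17.8, 14.2, 38.3, 51.7]
y_node = lagrange_eval(xs, ys, 2.7)   # reproduces the tabulated 17.8
```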
3.1) Lagrange Polynomial
 Example 2: According to the provided a and cos(a) values, find cos(8°40′) using a third-order Lagrange polynomial.
 Since the table step is 0.5° (= 30′), and 8°40′ = 8.6667°,
 consider a = 8, 8.5, 9, 9.5 and find the value for 8.6667.
3.2) Newton Polynomial
 In Newton's polynomial, a polynomial is obtained by using linear approximation to numerical values at a given point.
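The divided-difference table and the nested evaluation can be sketched as below (the cubic test data are an assumption, chosen so the polynomial reproduces the function exactly):

```python
def newton_coefficients(xs, ys):
    """Build the divided-difference table in place; returns a0, a1, ..., a_{n-1}."""
    coef = list(ys)
    n = len(xs)
    for k in range(1, n):
        for i in range(n - 1, k - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - k])
    return coef

def newton_eval(xs, coef, x):
    """Nested (Horner-like) evaluation of the Newton form at x."""
    result = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result

# Illustrative data (an assumption): samples of x^3 - 2x
xs = [0, 1, 2, 3]
ys = [0, -1, 4, 21]
a = newton_coefficients(xs, ys)
y = newton_eval(xs, a, 1.5)   # 1.5**3 - 2*1.5 = 0.375
```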
3.2) Newton Polynomial
 In Table format
3.2) Newton Polynomial
 Example 1: Find a polynomial using Newton interpolation method based on
the provided information.
3.2) Newton Polynomial
 Example 2: Find a polynomial using Newton interpolation method based on
the provided information. Using this polynomial, estimate the value of y(6).
3.3) Finite Difference Polynomial

 Finite Difference Method is actually a special case of Newton Interpolation


Polynomial. In this method, unlike the Newton interpolation polynomial, the
intervals of the data are equal to each other.
3.3) Finite Difference Polynomial
 Similar to Newton Interpolation Polynomial:
3.3) Finite Difference Polynomial
 Example 1: Obtain a polynomial using finite difference interpolation with the following data (yi = e^xi). Using this polynomial, find an approximate value for x = 0.54.

     xi    yi             Δf        Δ²f       Δ³f
x0   0.4   1.491825 (f0)
                          0.156897
x1   0.5   1.648721 (f1)            0.016501
                          0.173398            0.001735
x2   0.6   1.822119 (f2)            0.018236
                          0.191634
x3   0.7   2.013753 (f3)

Δx = 0.1, x = 0.54

n  coefficient
0  a0 = 1.491825
1  a1 = 1.568966
2  a2 = 0.825048
3  a3 = 0.289237

Interpolated value at x = 0.54: 1.7160 (exact e^0.54 = 1.716007)
3.3) Finite Difference Polynomial
 Example 2: Obtain a polynomial using finite difference interpolation with the following data. Using this polynomial, find an approximate value for x = 0.08.

     xi      yi            Δf       Δ²f       Δ³f
x0   0.0585  0.14201 (f0)
                           0.03575
x1   0.077   0.17776 (f1)           -0.00401
                           0.03174            0.00104
x2   0.0955  0.2095  (f2)           -0.00297
                           0.02877
x3   0.114   0.23827 (f3)

Δx = 0.0185, x = 0.08

n  coefficient
0  a0 = 0.14201
1  a1 = 1.932432
2  a2 = -5.85829
3  a3 = 27.37581

Interpolated value at x = 0.08: 0.183152
3.4) Least Squares

 The Least Squares Method is one of the most widely used interpolation methods.
The reason why it is so preferred is that it is not only easy to remember, but also
very good in precision. However, it is not preferred to obtain polynomials larger
than the 4th or 5th order because of the difficulty and the low sensitivity of the
approximate results.
 Finding the difference between Y and the polynomial:
3.4) Least Squares

 To make this a minimum (least), the derivatives should be 0.
 Applying this for each coefficient a, and refining, gives:
3.4) Least Squares
 The matrix form can be generated:

 B is the coefficient matrix and it can be expressed as:
 Here, A is the normal-equations matrix.
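The normal equations can be assembled and solved directly; a minimal sketch (the quadratic test data are an assumption, chosen so the fit recovers the coefficients exactly):

```python
def least_squares_poly(xs, ys, degree):
    """Assemble the normal equations A a = B and solve them by elimination."""
    n = degree + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    B = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    M = [row[:] + [B[i]] for i, row in enumerate(A)]   # augmented matrix
    for k in range(n):                                 # forward elimination
        piv = M[k][k]
        M[k] = [v / piv for v in M[k]]
        for i in range(k + 1, n):
            f = M[i][k]
            M[i] = [vi - f * vk for vi, vk in zip(M[i], M[k])]
    a = [0.0] * n
    for i in range(n - 1, -1, -1):                     # back substitution
        a[i] = M[i][n] - sum(M[i][j] * a[j] for j in range(i + 1, n))
    return a                                           # a[0] + a[1] x + a[2] x^2 + ...

# Quadratic data y = 2 + 3x + x^2 is recovered exactly:
a = least_squares_poly([0, 1, 2, 3], [2, 6, 12, 20], 2)
```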
3.4) Least Squares
 Example 1: Using the data in the table below, derive a quadratic (second
order) polynomial with respect to x by Least Squares Method
4) Numerical Integration
4.1) Trapezoidal rule
 The trapezoidal method is based on dividing the area under the function to be integrated into finite intervals, and summing the trapezoid areas obtained by joining the successive fi and fi+1 values with a line.
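Summing the trapezoid areas between successive tabulated points can be sketched as below (the x² test data are an assumption, not the slide's table):

```python
def trapezoid(xs, fs):
    """Composite trapezoidal rule over tabulated points (intervals need not be equal)."""
    return sum((xs[i + 1] - xs[i]) * (fs[i] + fs[i + 1]) / 2
               for i in range(len(xs) - 1))

# Illustrative check (an assumption): f(x) = x^2 on [0, 2] with 4 intervals
xs = [0, 0.5, 1, 1.5, 2]
approx = trapezoid(xs, [x ** 2 for x in xs])   # 2.75, versus the exact 8/3
```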
4.1) Trapezoidal rule
 Example: Find the integral of the f(x) function, whose numerical values are given in the table below, over the interval from x = 1.8 to x = 3.4, using the trapezoidal method.
4.1) Trapezoidal rule
 Example: The f(x) values are given in the table below. Integrate between x0 = 1 and x1 = 1.8 using the trapezoidal method.
4.2) 1/3 Simpson Integration Method
 Calculating the area under the f(x) curve by combining the finite, equally spaced subregions with quadratic parabolic curves is called the 1/3 Simpson integration method. The method is named for the coefficient of 1/3 that appears in its formulation:
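The composite rule with its 1, 4, 2, ..., 4, 1 weights can be sketched as below. The integrand of the slides' first example did not survive extraction, so sin(x) over [0, π/2] stands in as a placeholder:

```python
import math

def simpson13(f, a, b, n):
    """Composite 1/3 Simpson rule; n must be even (weights 1, 4, 2, ..., 4, 1)."""
    if n % 2 != 0:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

# Placeholder integrand (an assumption): integral of sin over [0, pi/2] is exactly 1
approx = simpson13(math.sin, 0.0, math.pi / 2, 6)
```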
4.3) 3/8 Simpson Integration Method
 Computing the area under the F(x) curve by combining the finite, equally spaced subregions with third-degree cubic curves is called the 3/8 Simpson integration method. The method is named for the coefficient of 3/8 that appears in its formulation.
 Example: Take the integral of the following function using the 1/3 Simpson method by dividing into 6 equal intervals between x0 = 0 and x1 = 𝜋/2.
 Example: Take the integral of the following function using the 1/3 Simpson method by dividing into 8 equal intervals between x0 = 1 and x1 = 2.
 Example: Take the integral of the following function by dividing into 8 equal intervals between x0 = 0 and x1 = 2. Use the 3/8 Simpson method for the first 6 intervals and the 1/3 Simpson method for the last 2 intervals.
 Example: Using the tabulated values of the f(x) function given in the table below, calculate the integral over the range from x0 = a = 0.7 to xn = b = 2.1. Consider the following:
 a) First 3 intervals using 3/8 Simpson and last 4 intervals using 1/3 Simpson
 b) First 4 intervals 1/3 Simpson and last 3 intervals 3/8 Simpson
 c) First 6 intervals 3/8 Simpson and the last interval trapezoidal
 d) First interval trapezoidal and the following 6 intervals 3/8 Simpson
5) Numerical Differentiation
5.1) Taylor series
 If the function f(x) is continuous around x0 = 0, that is, if it and its derivatives f′, f″, ..., f⁽ⁿ⁾ exist there, the function around x0 = 0 can be expanded as:
 This is called the «Maclaurin series».
5) Numerical Differentiation
5.1) Taylor series
 Expanding around x0 = a:
 This is called the «Taylor series».
 To find the value of the function at a point x = a + h away from a, the substitution x − a = h is made in the above series.
5) Numerical Differentiation
5.1) Taylor series
 Example: Find a solution of the differential equation based on the given conditions using the Taylor series.

x y
0
0.1
0.2
0.3
0.4
0.5
0.6
5) Numerical Differentiation
5.1) Taylor series
 Example: Find a solution of the differential equation based on the given conditions using the Taylor series; consider the first five terms of the series.

x
0
0.1
0.2
0.3
0.4
0.5
0.6
5) Numerical Differentiation
5.2) Euler method
 If h is small enough, first-order linear differential equations can be solved
using the first two terms of the Taylor Series.
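Keeping just the first two Taylor terms gives the update y(i+1) = y(i) + h·f(x(i), y(i)), which can be sketched as below (the test ODE y′ = y is an assumption, not the slide's example):

```python
def euler(f, x0, y0, h, n):
    """Euler's method: y_{i+1} = y_i + h * f(x_i, y_i), the first two Taylor terms."""
    xs, ys = [x0], [y0]
    for _ in range(n):
        ys.append(ys[-1] + h * f(xs[-1], ys[-1]))
        xs.append(xs[-1] + h)
    return xs, ys

# Illustrative ODE (an assumption): y' = y, y(0) = 1, compared against e^x
xs, ys = euler(lambda x, y: y, 0.0, 1.0, 0.1, 5)
# here ys[i] = 1.1**i, an underestimate of e^{x_i}
```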
5) Numerical Differentiation
5.2) Euler method
 Example 1: Use the Euler method with h = 0.1 and find the errors (y: Euler estimate, yg: exact value).

n  x    y        yg        Δy
0  0    -1       -1        0
1  0.1  -0.9     -0.91451  0.014512
2  0.2  -0.83    -0.85619  0.026192
3  0.3  -0.787   -0.82245  0.035455
4  0.4  -0.7683  -0.81096  0.04266
5) Numerical Differentiation
5.2) Euler method
 Example 2: Use the Euler method.
 With yg the exact values, fill in the table and find the error in each step:

n  x     y         yg        Δy
0  0     1.000000  1         0
1  1     0.990000  1.135335  0.145335
2  1.01  1.000200  1.142655  0.142455
3  1.02  1.010396  1.150029  0.139633
4  1.03  1.020588  1.157454  0.136866
5  1.04  1.030776  1.16493   0.134154
6  1.05  1.040961  1.172456  0.131496
6) Finite difference
6.1)Central finite difference from Taylor series
 Consider Taylor series of F(x) at xi-1=xi – Δx and xi+1=xi + Δx

 Which can be expressed as


6) Finite difference
6.1)Central finite difference from Taylor series
 Eq 12 – Eq 11 and consider first three terms:

 Eq 12 + Eq 11 and consider first three terms:


6) Finite difference
6.1)Central finite difference from Taylor series
 Consider Taylor series of F(x) at xi-2 = xi – 2Δx and xi+2 = xi + 2Δx

 Which can be expressed as:


6) Finite difference
6.1)Central finite difference from Taylor series
 (-2)* Eq. 11 + 2*(Eq 12) and take the first five terms:

 (-1)* Eq. 17 + 1*(Eq 18) and take the first five terms:

 Eq 20 – Eq 19
6) Finite difference
6.1)Central finite difference from Taylor series
 (4)* Eq. 11 + 4*(Eq 12) and take the first five terms:

 (1)* Eq. 17 + 1*(Eq 18) and take the first five terms:

 Eq 23 – Eq 22
6) Finite difference
6.1)Central finite difference from Taylor series
 Central finite difference:
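The first two central-difference formulas derived above are the standard second-order approximations, and can be checked numerically (a minimal sketch; sin is just a convenient test function with known derivatives):

```python
import math

def d1_central(f, x, h):
    """First derivative: (f(x+h) - f(x-h)) / (2h), accurate to O(h^2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def d2_central(f, x, h):
    """Second derivative: (f(x+h) - 2 f(x) + f(x-h)) / h^2, accurate to O(h^2)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

# Check against f = sin: f'(1) = cos(1), f''(1) = -sin(1)
err1 = abs(d1_central(math.sin, 1.0, 1e-4) - math.cos(1.0))
err2 = abs(d2_central(math.sin, 1.0, 1e-4) + math.sin(1.0))
```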
6) Finite difference
6.2)Central finite difference from Lagrange Polynomial
 We need to write Lagrange polynomials that pass through three and through five points.
 A second-degree Lagrange polynomial that passes through three points, and its first and second derivatives, can be written as follows:
6) Finite difference
6.2)Central finite difference from Lagrange Polynomial
 .

 Write x=x0 in Eq 26 and 27


6) Finite difference
6.2)Central finite difference from Lagrange Polynomial
 A fourth-degree Lagrange polynomial that passes through five points, and its third and fourth derivatives, can be written as follows:
6) Finite difference
6.2)Central finite difference from Lagrange Polynomial
 :

 Write x=x0
6) Finite difference
6.2)Central finite difference from Lagrange Polynomial
 Write x=x0
6) Finite difference
6.2)Central finite difference from Lagrange Polynomial
 Example: y(0) = 1, y′(0) = −2; use the central finite difference formulation to solve for 6 points with Δx = 0.1.
6) Finite difference
6.2)Central finite difference from Lagrange Polynomial

1
i  x(i)  Δx   y(i-1)    y(i)      y(i+1)
0  0     0.1            1         0.81
1  0.1   0.1  1         0.81      0.644439
2  0.2   0.1  0.81      0.644439  0.506725
3  0.3   0.1  0.644439  0.506725  0.399443
4  0.4   0.1  0.506725  0.399443  0.324566
6) Finite difference
6.2)Central finite difference from Lagrange Polynomial
 Example 2: y(0) = −1, y1 = −0.9; use the central finite difference formulation to solve for 4 points with Δx = 0.1.

2
i  x(i)  Δx   y(i-1)  y(i)     y(i+1)
0  0     0.1
1  0.1   0.1  -1      -0.9     -0.86
2  0.2   0.1  -0.9    -0.86    -0.808
3  0.3   0.1  -0.86   -0.808   -0.8184
4  0.4   0.1  -0.808  -0.8184  -0.80432
6) Finite difference
6.2)Central finite difference from Lagrange Polynomial
 Example 3: Using the central finite difference method with 4 intervals, for the differential equation of the carrier system whose loading and boundary conditions are given below:
 a. Find the deflections (y [cm]) at each point using Cramer's rule.
 b. Calculate the moments at the supports and at the middle of the beam: M(0), M(2), M(4).

Differential equation:

Conditions:
6) Finite difference
6.2)Central finite difference from Lagrange Polynomial
3a
 a)

A =  5  -4   1      b = 0.238095
    -4   6  -4          0.238095
     1  -4   5          0.238095

det(A) = 16

Replace column 1 with b:    Replace column 2 with b:    Replace column 3 with b:
0.238095  -4   1             5  0.238095   1             5  -4  0.238095
0.238095   6  -4            -4  0.238095  -4            -4   6  0.238095
0.238095  -4   5             1  0.238095   5             1  -4  0.238095

det(A1) = 9.52381    y1 = 0.595238095
det(A2) = 13.33333   y2 = 0.833333333
det(A3) = 9.52381    y3 = 0.595238095


6) Finite difference
6.2)Central finite difference from Lagrange Polynomial
 B)
