Jovana Džunić
Numerical Algorithms
ISSN 1017-1398
Volume 63
Number 3
Author's personal copy
Numer Algor (2013) 63:549–569
DOI 10.1007/s11075-012-9641-3
ORIGINAL PAPER
On efficient two-parameter methods for solving nonlinear equations
Jovana Džunić
Received: 16 May 2012 / Accepted: 24 August 2012 / Published online: 7 September 2012
© Springer Science+Business Media, LLC 2012
This research was supported by the Serbian Ministry of Education and Science under grant
number 174022.
J. Džunić (B)
Faculty of Electronic Engineering, Department of Mathematics,
University of Niš, 18000 Niš, Serbia
e-mail: [email protected]
1 Introduction
and
$$c_j = \frac{f^{(j)}(\alpha)}{j!\,f'(\alpha)} \qquad (j = 2, 3, \ldots).$$
Let us consider the following modification of method (1) with an additional parameter p,
$$x_{k+1} = x_k - \frac{f(x_k)}{f[x_k, w_k] + p\,f(w_k)} \qquad (k = 0, 1, \ldots). \tag{4}$$
Using Taylor's series about a simple zero α of f, we obtain
$$f(x_k) = f'(\alpha)\left(\varepsilon_k + c_2\varepsilon_k^2 + c_3\varepsilon_k^3 + O(\varepsilon_k^4)\right), \tag{5}$$
$$\varepsilon_{k,w} = \left(1 + \gamma f'(\alpha)\right)\varepsilon_k + \gamma\,\frac{f''(\alpha)}{2}\,\varepsilon_k^2 + O(\varepsilon_k^3), \tag{6}$$
$$f(w_k) = f'(\alpha)\left(\varepsilon_{k,w} + c_2\varepsilon_{k,w}^2 + c_3\varepsilon_{k,w}^3 + O(\varepsilon_{k,w}^4)\right). \tag{7}$$
$$\frac{f(x_k)}{f[x_k, w_k] + p\,f(w_k)} = \varepsilon_k - (c_2 + p)\,\varepsilon_k\varepsilon_{k,w} + O(\varepsilon_k^2\varepsilon_{k,w}), \tag{9}$$
$$\varepsilon_{k+1} = x_{k+1} - \alpha = \varepsilon_k - \frac{f(x_k)}{f[x_k, w_k] + p\,f(w_k)} = (c_2 + p)\,\varepsilon_k\varepsilon_{k,w} + O(\varepsilon_k^2\varepsilon_{k,w}) \tag{10}$$
and hence
$$\varepsilon_{k+1} \sim \left(1 + \gamma f'(\alpha)\right)(c_2 + p)\,\varepsilon_k^2. \tag{11}$$
From relation (11) we conclude that the biparametric Steffensen-like method
without memory (4) has order of convergence two, the same one as
method (1).
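The iteration (4) is straightforward to implement. The following Python sketch is illustrative only (the test function, the tolerance, and the sample values of γ and p are not taken from the paper):

```python
def steffensen_like(f, x0, gamma=-0.01, p=0.1, tol=1e-12, maxit=50):
    """Biparametric Steffensen-like iteration (4): derivative-free, order two."""
    x = x0
    for _ in range(maxit):
        fx = f(x)
        if abs(fx) < tol:
            break
        w = x + gamma * fx                    # auxiliary node w_k = x_k + gamma*f(x_k)
        fw = f(w)
        dd = (fx - fw) / (x - w)              # divided difference f[x_k, w_k]
        x = x - fx / (dd + p * fw)            # step (4)
    return x

# Illustrative example: the positive root of f(x) = x^2 - 2
root = steffensen_like(lambda x: x * x - 2.0, 1.5)
```

Any fixed nonzero γ and any fixed p keep the order at two; the point of the later sections is that updating γ and p during the iteration accelerates convergence without extra function evaluations.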
Error relation (11) plays the key role in our study of the convergence ac-
celeration. Method (4) serves as the basis for constructing a new biparametric
derivative free family of two-point methods of the form
$$\begin{cases}
y_k = x_k - \dfrac{f(x_k)}{f[x_k, w_k] + p\,f(w_k)}, & w_k = x_k + \gamma f(x_k),\\[2mm]
x_{k+1} = y_k - g(t_k)\,\dfrac{f(y_k)}{f[y_k, w_k] + p\,f(w_k)}, & t_k = \dfrac{f(y_k)}{f(x_k)}.
\end{cases} \tag{12}$$
The weight function g should be determined in such a way that it provides the optimal order four for the two-point iterative scheme (12).
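A Python sketch of the two-point scheme (12) with the simplest admissible weight g(t) = 1 + t follows; the test function and the fixed parameter values are illustrative, not from the paper:

```python
def two_point(f, x0, gamma=-0.01, p=0.1, g=lambda t: 1.0 + t,
              tol=1e-12, maxit=50):
    """Derivative-free two-point family (12); g(t) = 1 + t satisfies the
    weight-function conditions g(0) = g'(0) = 1 required for order four."""
    x = x0
    for _ in range(maxit):
        fx = f(x)
        if abs(fx) < tol:
            break
        w = x + gamma * fx                            # w_k = x_k + gamma*f(x_k)
        fw = f(w)
        y = x - fx / ((fx - fw) / (x - w) + p * fw)   # Steffensen-like half-step
        fy = f(y)
        t = fy / fx                                   # t_k = f(y_k)/f(x_k)
        x = y - g(t) * fy / ((fy - fw) / (y - w) + p * fw)
    return x

# Illustrative example: the positive root of f(x) = x^2 - 2
root = two_point(lambda x: x * x - 2.0, 1.5)
```

Each cycle costs three function evaluations (f(x_k), f(w_k), f(y_k)), which is what makes order four optimal in the Kung–Traub sense for this evaluation count.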
$$\cdots + O(\varepsilon_k^3\varepsilon_{k,w}), \tag{13}$$
$$\frac{f(y_k)}{f[y_k, w_k] + p\,f(w_k)} = \varepsilon_{k,y} - (c_2+p)\,\varepsilon_{k,w}\varepsilon_{k,y} + \left((c_2+p)^2 - (c_3+pc_2)\right)\varepsilon_{k,w}^2\varepsilon_{k,y} + O(\varepsilon_{k,w}^3\varepsilon_{k,y}). \tag{17}$$
$$g(t) = g(0) + g'(0)t + \tfrac{1}{2}g''(0)t^2 + O(t^3). \tag{18}$$
$$\begin{aligned}
\varepsilon_{k+1} = x_{k+1} - \alpha &= \varepsilon_{k,y} - \left(g(0) + g'(0)t_k + \tfrac{1}{2}g''(0)t_k^2 + O(t_k^3)\right)\\
&\quad\times\left(\varepsilon_{k,y} - (c_2+p)\varepsilon_{k,w}\varepsilon_{k,y} + \left((c_2+p)^2 - (c_3+pc_2)\right)\varepsilon_{k,w}^2\varepsilon_{k,y} + O(\varepsilon_{k,w}^3\varepsilon_{k,y})\right)\\
&= \varepsilon_{k,y}\Big(1 - g(0) + \left(g(0)-g'(0)\right)(c_2+p)\varepsilon_{k,w} - g'(0)\left(c_3 - 2c_2(c_2+p)\right)\varepsilon_k\varepsilon_{k,w}\\
&\quad + \left[\left(g(0)-g'(0)\right)(c_3+pc_2) - \left(g(0) - 2g'(0) + \tfrac{1}{2}g''(0)\right)(c_2+p)^2\right]\varepsilon_{k,w}^2\Big) + O(\varepsilon_k^2\varepsilon_{k,w}\varepsilon_{k,y}). \tag{19}
\end{aligned}$$
From (19) we conclude that the choice g(0) = g'(0) = 1 gives the highest order to the family of two-point methods (12), with the error relation
$$\begin{aligned}
\varepsilon_{k+1} &= \left[\left(2c_2(c_2+p) - c_3\right)\varepsilon_k\varepsilon_{k,w} + \left(1 - \tfrac{1}{2}g''(0)\right)(c_2+p)^2\varepsilon_{k,w}^2\right]\varepsilon_{k,y} + O(\varepsilon_k^2\varepsilon_{k,w}\varepsilon_{k,y})\\
&= \left[2c_2(c_2+p) - c_3 + \left(1 - \tfrac{1}{2}g''(0)\right)\left(1 + \gamma f'(\alpha)\right)(c_2+p)^2\right]\varepsilon_k\varepsilon_{k,w}\varepsilon_{k,y} + O(\varepsilon_k^2\varepsilon_{k,w}\varepsilon_{k,y}). \tag{20}
\end{aligned}$$
After substituting relations (6) and (13) into (20), the error relation takes the form
$$\varepsilon_{k+1} \sim (c_2+p)\left(1 + \gamma f'(\alpha)\right)^2\left[2c_2(c_2+p) - c_3 + \left(1 - \tfrac{1}{2}g''(0)\right)\left(1 + \gamma f'(\alpha)\right)(c_2+p)^2\right]\varepsilon_k^4. \tag{21}$$
We give some examples of weight functions g with a simple form satisfying conditions (22). The first example is
$$g(t) = \left(1 + \frac{t}{s}\right)^{s}, \qquad s \neq 0.$$
In particular, for s = 1 we obtain g(t) = 1 + t. Also, the choices s = −1 with g(t) = 1/(1 − t) (giving the Kung–Traub two-point method [6] for p = 0) and s = 1/2 with $g(t) = \sqrt{1 + 2t}$ are convenient for practical applications.
Some other simple examples are
$$g(t) = \frac{1 + \beta_1 t}{1 + (\beta_1 - 1)t}, \qquad g(t) = \frac{1 + \beta_2 t^2}{1 - t}, \qquad g(t) = \frac{t^2 + (\beta_3 - 1)t - 1}{\beta_3 t - 1}, \qquad g(t) = \frac{1}{1 - t + \beta_4 t^2},$$
where β₁, β₂, β₃, β₄ ∈ ℝ. All four of these weight functions are only special cases of the generalized function
$$g(t) = \frac{1 + at + bt^2}{1 + (a-1)t + ct^2}$$
for $a, b, c \in \mathbb{R}$ and $s_1, s_2 \neq 0$.
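The conditions g(0) = g'(0) = 1 can be spot-checked numerically for each listed weight function; the sample parameter values below are arbitrary, chosen only for illustration:

```python
def check_weight(g, h=1e-6):
    """Verify g(0) = 1 and g'(0) = 1 (the latter by central differences)."""
    g0 = g(0.0)
    g1 = (g(h) - g(-h)) / (2.0 * h)   # O(h^2) estimate of g'(0)
    return abs(g0 - 1.0) < 1e-9 and abs(g1 - 1.0) < 1e-6

b1, b2, b3, b4 = 0.7, -1.3, 2.0, 0.5   # arbitrary sample parameters
examples = [
    lambda t: 1.0 + t,                                   # s = 1
    lambda t: 1.0 / (1.0 - t),                           # s = -1 (Kung-Traub)
    lambda t: (1.0 + 2.0 * t) ** 0.5,                    # s = 1/2
    lambda t: (1.0 + b1 * t) / (1.0 + (b1 - 1.0) * t),
    lambda t: (1.0 + b2 * t * t) / (1.0 - t),
    lambda t: (t * t + (b3 - 1.0) * t - 1.0) / (b3 * t - 1.0),
    lambda t: 1.0 / (1.0 - t + b4 * t * t),
]
ok = all(check_weight(g) for g in examples)
```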
The main idea in constructing methods with memory consists of calculating the parameters γ = γ_k and p = p_k as the iteration proceeds, by the formulas $\gamma_k = -1/\widetilde{f'(\alpha)}$ and $p_k = -\widetilde{c_2}$ for k = 1, 2, …, where $\widetilde{f'(\alpha)}$ and $\widetilde{c_2}$ are approximations to f'(α) and c₂, respectively. In essence, in this way we minimize the factors $1 + \gamma_k f'(\alpha)$ and $c_2 + p_k$ that appear in (21). It is preferable to find approximations $\widetilde{f'(\alpha)}$ and $\widetilde{c_2}$ of good quality, for example, $1 + \gamma_k f'(\alpha) = O(\varepsilon_{k-1}^q)$ and $c_2 + p_k = O(\varepsilon_{k-1}^q)$ for q > 1. The initial estimates γ₀ and p₀ must be chosen before starting the iterative process; for example, one may take γ₀ using one of the ways proposed in [13, p. 186] and set p₀ = 0.
Our model for the calculation of the self-accelerating parameters γ_k and p_k in each iteration is based on the approximations
$$\gamma_k = -\frac{1}{N_2'(x_k)} = -\frac{1}{\widetilde{f'(\alpha)}} \approx -\frac{1}{f'(\alpha)}, \tag{23}$$
$$p_k = -\frac{N_3''(w_k)}{2N_3'(w_k)} = -\widetilde{c_2} \approx -c_2, \tag{24}$$
where $D_{k,r}$ tends to the asymptotic error constant $D_r$ of the iterative method (IM) when $k \to \infty$. Similarly to (26), we write
Proof We use the well-known formula for the error of Newton's interpolation. If Newton's interpolating polynomial of degree s ∈ ℕ is set through the nodes t₀, t₁, …, t_s from the interval I = [min{t₀, t₁, …, t_s}, max{t₀, t₁, …, t_s}], then for some ζ ∈ I we have
$$f(t) - N_s(t) = \frac{f^{(s+1)}(\zeta)}{(s+1)!}\prod_{j=0}^{s}(t - t_j). \tag{28}$$
$$N_2'(x_k) = f'(x_k) - \frac{f'''(\zeta_2)}{3!}(x_k - w_{k-1})(x_k - x_{k-1}) \sim f'(\alpha)\left(1 + 2c_2\varepsilon_k - c_3\varepsilon_{k-1,w}\varepsilon_{k-1}\right), \tag{29}$$
$$N_3'(w_k) = f'(w_k) - \frac{f^{(4)}(\zeta_3)}{4!}(w_k - x_k)(w_k - w_{k-1})(w_k - x_{k-1}) \sim f'(\alpha)\left(1 + 2c_2\varepsilon_{k,w} + c_4\varepsilon_k\varepsilon_{k-1,w}\varepsilon_{k-1}\right), \tag{30}$$
$$\begin{aligned}
N_3''(w_k) &= f''(w_k) - \frac{2f^{(4)}(\zeta_3)}{4!}\big[(w_k - x_k)(w_k - w_{k-1}) + (w_k - x_k)(w_k - x_{k-1}) + (w_k - w_{k-1})(w_k - x_{k-1})\big]\\
&\sim f''(\alpha)\left(1 + \frac{3c_3}{2c_2}\varepsilon_{k,w} - \frac{c_4}{c_2}\varepsilon_{k-1,w}\varepsilon_{k-1}\right). \tag{31}
\end{aligned}$$
$$\frac{N_3''(w_k)}{2N_3'(w_k)} \sim \frac{f''(\alpha)}{2f'(\alpha)}\left(1 - \frac{c_4}{c_2}\varepsilon_{k-1,w}\varepsilon_{k-1}\right), \tag{33}$$
that is,
$$c_2 + p_k = \frac{f''(\alpha)}{2f'(\alpha)} - \frac{N_3''(w_k)}{2N_3'(w_k)} \sim \frac{c_4}{c_2}\,\varepsilon_{k-1,w}\varepsilon_{k-1}. \tag{34}$$
From (29) it follows that
$$1 + \gamma_k f'(\alpha) \sim 1 - \frac{1}{1 - c_3\varepsilon_{k-1,w}\varepsilon_{k-1} + 2c_2\varepsilon_k} \sim -c_3\,\varepsilon_{k-1,w}\varepsilon_{k-1}, \tag{35}$$
since $\varepsilon_k \sim (c_2 + p_{k-1})\varepsilon_{k-1,w}\varepsilon_{k-1} = o(\varepsilon_{k-1,w}\varepsilon_{k-1})$, based on (10) and (34).
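The quantities $N_2'(x_k)$, $N_3'(w_k)$ and $N_3''(w_k)$ in (23), (24) and (29)–(31) are derivatives of Newton interpolating polynomials, which are cheap to evaluate from divided differences. The following routines are an illustrative sketch (not the paper's code); the exactness check uses a cubic, for which the degree-3 interpolant reproduces f, f' and f'' exactly:

```python
def divided_differences(xs, fs):
    """Newton divided-difference coefficients c[j] = f[x_0, ..., x_j]."""
    c = list(fs)
    for j in range(1, len(xs)):
        for i in range(len(xs) - 1, j - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - j])
    return c

def newton_derivs(xs, fs, t):
    """Value, first and second derivative at t of the Newton interpolating
    polynomial through the nodes xs (Horner-type recurrence)."""
    c = divided_differences(xs, fs)
    p, dp, d2p = c[-1], 0.0, 0.0
    for j in range(len(xs) - 2, -1, -1):
        d2p = d2p * (t - xs[j]) + 2.0 * dp   # uses dp from the previous level
        dp = dp * (t - xs[j]) + p
        p = p * (t - xs[j]) + c[j]
    return p, dp, d2p

# Exactness check on f(x) = x^3: expect p = t^3, p' = 3t^2, p'' = 6t at t = 1.5
xs = [0.0, 1.0, 2.0, 3.0]
fs = [x ** 3 for x in xs]
val, d1, d2 = newton_derivs(xs, fs, 1.5)
```

With such routines, (23) and (24) become `gamma_k = -1/d1` at `t = x_k` for the quadratic interpolant and `p_k = -d2/(2*d1)` at `t = w_k` for the cubic one.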
Proof By virtue of Lemma 1, from (26), (27), (35) and (34) we have
$$1 + \gamma_k f'(\alpha) \sim -c_3 D_{k-1,r_1}\varepsilon_{k-1}^{1+r_1}, \tag{36}$$
$$c_2 + p_k \sim \frac{c_4}{c_2}\, D_{k-1,r_1}\varepsilon_{k-1}^{1+r_1}. \tag{37}$$
Combining (10), (26), (27), (36) and (37) yields
$$\varepsilon_{k+1} \sim (c_2 + p_k)\,\varepsilon_{k,w}\varepsilon_k \sim \frac{c_4}{c_2}\, D_{k-1,r_1} D_{k,r_1}\varepsilon_{k-1}^{1+r_1}\varepsilon_k^{1+r_1} \sim A_{k+1}\varepsilon_{k-1}^{(1+r_1)(1+r)}, \tag{38}$$
where $A_{k+1} = \frac{c_4}{c_2} D_{k-1,r_1} D_{k,r_1} D_{k-1,r}^{1+r_1}$. Similarly, by (6), (27), (36) and (37),
$$\varepsilon_{k,w} \sim \left(1 + \gamma_k f'(\alpha)\right)\varepsilon_k \sim -c_3 D_{k-1,r_1}\varepsilon_{k-1}^{1+r_1}\varepsilon_k \sim B_k\varepsilon_{k-1}^{1+r_1+r}, \tag{39}$$
where $B_k = -c_3 D_{k-1,r_1} D_{k-1,r}$. From (26) and (27) we obtain another pair of estimates
$$\varepsilon_{k+1} \sim D_{k,r} D_{k-1,r}^{\,r}\varepsilon_{k-1}^{r^2}, \tag{40}$$
$$\varepsilon_{k,w} \sim D_{k,r_1} D_{k-1,r}^{\,r_1}\varepsilon_{k-1}^{r r_1}. \tag{41}$$
Equating the exponents of the error ε_{k−1} in the pairs of relations (38) and (40), and (39) and (41), we arrive at the system of equations
$$\begin{aligned}
r^2 - (1+r)(1+r_1) &= 0,\\
r r_1 - r - r_1 - 1 &= 0.
\end{aligned}$$
The positive solution is $r = \tfrac{1}{2}\left(3 + \sqrt{17}\right) \approx 3.56$, and it defines the order of convergence of the Steffensen-like method with memory (25).
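The 2×2 exponent system is solved by elimination; a quick numerical check of the stated root:

```python
import math

# From the second equation r*r1 = 1 + r + r1; substituting into the first,
# (1 + r)(1 + r1) = 1 + r + r1 + r*r1 = 2*r*r1, so r^2 = 2*r*r1, i.e. r1 = r/2,
# and the second equation becomes r^2 - 3r - 2 = 0.
r = (3.0 + math.sqrt(17.0)) / 2.0
r1 = r / 2.0
res1 = r * r - (1.0 + r) * (1.0 + r1)   # residual of the first equation
res2 = r * r1 - r - r1 - 1.0            # residual of the second equation
```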
$$\gamma_k = -\frac{1}{N_3'(x_k)} = -\frac{1}{\widetilde{f'(\alpha)}} \approx -\frac{1}{f'(\alpha)}, \tag{42}$$
$$p_k = -\frac{N_4''(w_k)}{2N_4'(w_k)} = -\widetilde{c_2} \approx -c_2, \tag{43}$$
Combining (12) with (42) and (43), we construct the following derivative-free family of two-point methods with memory of Steffensen's type:
$$\begin{cases}
\gamma_0,\ p_0 \ \text{are given}, \qquad w_k = x_k + \gamma_k f(x_k),\\[1mm]
\gamma_k = -\dfrac{1}{N_3'(x_k)}, \qquad p_k = -\dfrac{N_4''(w_k)}{2N_4'(w_k)} \quad \text{for } k \ge 1,\\[2mm]
y_k = x_k - \dfrac{f(x_k)}{f[x_k, w_k] + p_k f(w_k)},\\[2mm]
x_{k+1} = y_k - g(t_k)\,\dfrac{f(y_k)}{f[y_k, w_k] + p_k f(w_k)}, \qquad t_k = \dfrac{f(y_k)}{f(x_k)}
\end{cases} \qquad (k = 0, 1, \ldots), \tag{44}$$
where the weight function g satisfies conditions (22).
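A Python sketch of family (44) follows. It is an illustration under stated assumptions, not the author's implementation: γ_k and p_k are taken from derivatives of Newton interpolating polynomials through the current node and the previous step's nodes x_{k−1}, w_{k−1}, y_{k−1}, the test function is illustrative, and in double precision the attainable accuracy is of course far below the paper's multiple-precision experiments:

```python
def _divdiff(xs, fs):
    # Newton divided-difference coefficients
    c = list(fs)
    for j in range(1, len(xs)):
        for i in range(len(xs) - 1, j - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - j])
    return c

def _derivs(xs, fs, t):
    # value, first and second derivative of the Newton interpolant at t
    c = _divdiff(xs, fs)
    p, dp, d2p = c[-1], 0.0, 0.0
    for j in range(len(xs) - 2, -1, -1):
        d2p = d2p * (t - xs[j]) + 2.0 * dp
        dp = dp * (t - xs[j]) + p
        p = p * (t - xs[j]) + c[j]
    return p, dp, d2p

def two_point_memory(f, x0, gamma0=-0.01, p0=0.0, g=lambda t: 1.0 + t,
                     tol=1e-12, maxit=10):
    """Sketch of family (44): gamma_k = -1/N3'(x_k) and
    p_k = -N4''(w_k)/(2*N4'(w_k)), with N3, N4 interpolating f at the
    current node(s) plus the previous step's nodes."""
    x, gamma, p = x0, gamma0, p0
    prev = None                                   # previous nodes and f-values
    for _ in range(maxit):
        fx = f(x)
        if abs(fx) < tol:
            break
        if prev is not None:
            xs, fs = prev
            _, d1, _ = _derivs([x] + xs, [fx] + fs, x)          # N3'(x_k)
            gamma = -1.0 / d1
        w = x + gamma * fx
        fw = f(w)
        if prev is not None:
            xs, fs = prev
            _, d1, d2 = _derivs([w, x] + xs, [fw, fx] + fs, w)  # N4', N4''
            p = -d2 / (2.0 * d1)
        y = x - fx / ((fx - fw) / (x - w) + p * fw)
        fy = f(y)
        t = fy / fx
        xn = y - g(t) * fy / ((fy - fw) / (y - w) + p * fw)
        prev = ([x, w, y], [fx, fw, fy])          # nodes reused at step k+1
        x = xn
    return x

# Illustrative example: the positive root of f(x) = x^2 - 2
root = two_point_memory(lambda x: x * x - 2.0, 1.5)
```

The acceleration is "free" in the efficiency-index sense: the interpolation reuses already computed values of f, so each cycle still costs only three new function evaluations.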
Similarly to the proof of Lemma 1, with regard to (20) the following
statement can be proved:
Now we state the convergence theorem for the family of two-point methods
with memory (44).
Proof Similarly to (26) and (27), let us assume that for the iterative sequence {y_k} the following relations hold:
$$\varepsilon_{k,y} \sim D_{k,r_2}\varepsilon_k^{r_2} \tag{48}$$
and
$$\varepsilon_{k,y} \sim D_{k,r_2} D_{k-1,r}^{\,r_2}\varepsilon_{k-1}^{r r_2}. \tag{49}$$
From (45)–(48), (26) and (27) we have
$$1 + \gamma_k f'(\alpha) \sim G_k D_{k-1,r_1} D_{k-1,r_2}\varepsilon_{k-1}^{1+r_1+r_2}, \tag{50}$$
$$c_2 + p_k \sim -\frac{c_5}{c_2}\, D_{k-1,r_1} D_{k-1,r_2}\varepsilon_{k-1}^{1+r_1+r_2}. \tag{51}$$
$$\varepsilon_{k,y} \sim (c_2 + p_k)\,\varepsilon_k\varepsilon_{k,w} \sim -\frac{c_5}{c_2}\, D_{k-1,r_1} D_{k-1,r_2} D_{k,r_1} D_{k-1,r}^{1+r_1}\varepsilon_{k-1}^{(1+r_1)(1+r)+r_2}, \tag{53}$$
$$\varepsilon_{k,w} \sim \left(1 + \gamma_k f'(\alpha)\right)\varepsilon_k \sim G_k D_{k-1,r_1} D_{k-1,r_2} D_{k-1,r}\varepsilon_{k-1}^{1+r_1+r_2+r}, \tag{54}$$
where $B_k$ is given in (47) and $A_{k+1} = B_k D_{k,r_1} D_{k,r_2}$.
Equating the exponents of the error ε_k in the pair of relations (52) and (26), and of the error ε_{k−1} in the pairs of relations (54) and (41), and (53) and (49), we arrive at the system of equations
$$\begin{aligned}
r - r_1 - r_2 - 1 &= 0,\\
r r_1 - r - r_1 - r_2 - 1 &= 0,\\
r r_2 - (1+r_1)(1+r) - r_2 &= 0.
\end{aligned}$$
The positive solution is r = 7 (with r₁ = 2, r₂ = 4), so that the convergence order of the family (44) of two-point methods with memory is seven, which concludes the proof.
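The stated solution is easy to verify directly (the third equation is taken in the form $rr_2 = (1+r_1)(1+r) + r_2$, which is what equating the exponents in (53) and (49) gives):

```python
# Subtracting the first equation from the second gives r*r1 = 2r, so r1 = 2;
# the first equation then gives r2 = r - 3, and the third reduces to
# r^2 - 7r = 0, hence r = 7, r1 = 2, r2 = 4.
r, r1, r2 = 7.0, 2.0, 4.0
e1 = r - r1 - r2 - 1.0
e2 = r * r1 - r - r1 - r2 - 1.0
e3 = r * r2 - (1.0 + r1) * (1.0 + r) - r2
```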
Remark 3 The simplest choice for p_k that still gives order 7 for the family with memory (44) is $p_k = -N_4''(w_k)/(2f[w_k, x_k])$; see Remark 2.
Remark 4 Other choices of nodes (of worse quality) give approximations for γ_k and p_k of somewhat lower accuracy, so that the corresponding families of the form (44) have convergence order greater than 4 but less than 7. In this paper we concentrate on the best available choice of nodes in order to obtain the maximal order 7.
6 Numerical results
(see [7, p. 20], [13, Appendix C]). According to the last formula we find
$$E_{(25)} = \left(\tfrac{1}{2}\left(3 + \sqrt{17}\right)\right)^{1/2} \approx 1.887 \qquad\text{and}\qquad E_{(44)} = 7^{1/3} \approx 1.913,$$
whereas computational efficiency of the most efficient three-point schemes
(I M) is only E(I M) = 81/4 ≈ 1.682. Obviously, the obtained efficiency indices
of the proposed methods are remarkably high.
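These indices follow the usual rule E = r^{1/θ}, where r is the convergence order and θ the number of new function evaluations per iteration (two for method (25), three for family (44), four for optimal three-point schemes); a quick check:

```python
import math

# Efficiency index E = r**(1/theta)
E_25 = ((3.0 + math.sqrt(17.0)) / 2.0) ** (1.0 / 2.0)  # method (25): 2 evals
E_44 = 7.0 ** (1.0 / 3.0)                              # family (44): 3 evals
E_IM = 8.0 ** (1.0 / 4.0)                              # three-point: 4 evals
```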
As mentioned in Section 2, when applying any root solver with local convergence, special attention must be paid to the choice of initial approximations. If the initial values are sufficiently close to the sought roots, then the expected (theoretical) convergence speed is attained in practice; otherwise, all iterative root-finding methods struggle and show slower convergence, especially at the beginning of the iterative process. In our numerical examples
we have chosen initial approximations to real zeros of real functions using
the mentioned Yun’s global algorithm [17]. We have also experimented with
crude approximations (see Tables 2 (part I), 5 (part I), 7 (part I) and 10 (part
I)) in order to study convergence behavior of the proposed methods under
modest initial conditions, and randomly chosen approximations (Tables 4, 9
and 12). In some examples a different choice of initial approximations caused convergence to different zeros; see Tables 6 and 11.
We have tested the methods (4), (12), (25) and (44) for the functions given
in Table 1.
Tables 2, 3, 4, 5 and 6 also contain results of the iterative method (1) and its accelerated variant (3). The results for
$$x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)} \qquad \text{(Newton's method)} \tag{55}$$
of second order, and
$$x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k) - \dfrac{f''(x_k)\,f(x_k)}{2f'(x_k)}} \qquad \text{(Halley's method)} \tag{56}$$
with order of convergence three, are also included, to give a better view of the convergence acceleration obtained and of its impact on the quality of the produced results.
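The two classical derivative-based comparison methods (55) and (56) can be sketched as follows; the cube-root test problem and the iteration counts are illustrative only:

```python
def newton(f, df, x0, n):
    """Classical second-order Newton iteration (55)."""
    x = x0
    for _ in range(n):
        x = x - f(x) / df(x)
    return x

def halley(f, df, d2f, x0, n):
    """Third-order Halley iteration (56)."""
    x = x0
    for _ in range(n):
        fx, dfx = f(x), df(x)
        x = x - fx / (dfx - d2f(x) * fx / (2.0 * dfx))
    return x

# Illustrative problem: the real cube root of 2 as the root of x^3 - 2
f = lambda x: x ** 3 - 2.0
df = lambda x: 3.0 * x ** 2
d2f = lambda x: 6.0 * x
r_newton = newton(f, df, 1.5, 6)
r_halley = halley(f, df, d2f, 1.5, 4)
```

Unlike the proposed schemes, both require derivative evaluations, which is precisely what the derivative-free families (12) and (44) avoid.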
The selected test functions are not of simple form; they are certainly more complicated than most of the often used Rice's test functions [11], Jenkins–Traub's test polynomials [5], and the test functions used in many papers concerning nonlinear equations. For example, the function f₁ displayed in Fig. 1 shows nontrivial behavior: it has two relatively close zeros and a singularity close to the sought zero. The test function f₂ is a polynomial of Wilkinson's type with real zeros 1, 2, …, 20. It is well known that this class of polynomials is ill-conditioned (a part of its "uphill-downhill" graph with very large amplitudes is displayed in Fig. 2); small perturbations of polynomial coefficients cause drastic variations of zeros. Notice that many iterative
Table 1 Test functions and their zeros α:

$f_1(x) = (x-1)\left(x^6 + x^{-6} + 4\right)\sin(x^2)$, α = 1
$f_2(x) = \prod_{i=1}^{20}(x - i)$, α = 2, 8, 16
$f_3(x) = e^{-x^2}\,\dfrac{\sin x}{x^2 - 1} + x^2\log(1 + x - \pi)$, α = π
$f_4(x) = x + \sin x + \dfrac{i}{x} - 1 + 2i$, α = 0.28860… − 1.24220…i
$f_5(x) = e^{x^4 - 2x + 3} + \dfrac{x + \sqrt{2}}{x - 1} - 2 + i\sqrt{2}$, α = 1 + i√2, 0.50195… + 0.05818…i
[Fig. 1 The graph of the test function f₁]
[Fig. 2 A part of the "uphill-downhill" graph of the Wilkinson-type polynomial f₂, with amplitudes of order 10¹²]
Table 12 Methods without and with memory applied to f₂ for random initial values and g(t) = 1 + t
related to steady-state heat flow, electrostatic potential and fluid flow. For example, the two-dimensional system
$$u(x, y) = 2 + y + \frac{x}{x^2 + y^2} + \cos x \sinh y, \qquad v(x, y) = -1 + x + \frac{y}{x^2 + y^2} + \sin x \cosh y,$$
related to complex potentials u and v, can be transformed to the analytic function $f_4(z) = v(x, y) + i\,u(x, y) = z + \sin z + \dfrac{i}{z} - 1 + 2i$ (z = x + iy), whose zeros can be determined by the proposed methods.
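With u and v as written, the analytic combination is v + iu; this correspondence can be spot-checked numerically (the sample point is arbitrary, chosen away from the origin where 1/z is singular):

```python
import cmath
import math

def u(x, y):
    return 2.0 + y + x / (x * x + y * y) + math.cos(x) * math.sinh(y)

def v(x, y):
    return -1.0 + x + y / (x * x + y * y) + math.sin(x) * math.cosh(y)

z = 0.7 - 1.1j                             # arbitrary sample point
f4 = z + cmath.sin(z) + 1j / z - 1.0 + 2.0j
re_err = abs(f4.real - v(z.real, z.imag))  # Re f4 should equal v(x, y)
im_err = abs(f4.imag - u(z.real, z.imag))  # Im f4 should equal u(x, y)
```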
We have used the computational software package Mathematica with multiple-precision arithmetic. The errors |x_k − α| of the approximations to the zeros are given in Tables 2–12, where A(−h) denotes A × 10⁻ʰ. These tables include the values of the computational order of convergence r_c calculated by the formula [8]
$$r_c = \frac{\log\left|f(x_k)/f(x_{k-1})\right|}{\log\left|f(x_{k-1})/f(x_{k-2})\right|}, \tag{57}$$
taking into consideration the last three approximations in the iterative process.
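Formula (57) is trivial to evaluate from the last three residuals; the mock sequence below (residuals squaring at each step, as for an order-two method) is illustrative:

```python
import math

def comp_order(f2, f1, f0):
    """Computational order of convergence (57) from |f(x_{k-2})|,
    |f(x_{k-1})| and |f(x_k)|, the last three residuals."""
    return math.log(f0 / f1) / math.log(f1 / f2)

# Residuals of a quadratically convergent iteration roughly square each step:
rc = comp_order(1e-2, 1e-4, 1e-8)
```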
It is evident from Tables 2–12 that the approximations to the roots possess great accuracy when the proposed methods with memory are applied. The results of the third iteration in Tables 7, 8, 9, 10, 11 and 12, and of the fourth iteration in Tables 2–6, are given only to demonstrate the convergence speed of the tested methods; in most practical problems such accuracy is not required at present. From Tables 3, 8 and 12 we observe that all tested methods struggle (the basic fourth-order method (12) even diverges in three cases) when they are applied to the polynomial f₂ of Wilkinson's type [14], which is, actually, a well-known fact from practice. We also notice that the values of the computational order of convergence r_c differ somewhat from the theoretical ones for the methods with memory. This is not surprising, since formula (57) for r_c is derived for methods without memory, for which r_c usually fits the theoretical order very well.
References
1. Aberth, O.: Iteration methods for finding all zeros of a polynomial simultaneously. Math.
Comput. 27, 339–344 (1973)
2. Džunić, J., Petković, M.S.: On generalized multipoint root-solvers with memory. J. Comput.
Appl. Math. 236, 2909–2920 (2012)
3. Džunić, J., Petković, M.S., Petković, L.D.: Three-point methods with and without memory for
solving nonlinear equations. Appl. Math. Comput. 218, 4917–4927 (2012)
4. Geum, Y.H., Kim, Y.I.: A multi-parameter family of three-step eighth-order iterative methods
locating a simple root. Appl. Math. Comput. 215, 3375–3382 (2010)
5. Jenkins, M.A., Traub, J.F.: Principles of testing polynomial zero-finding programs. ACM
Trans. Math. Softw. 1, 26–34 (1975)
6. Kung, H.T., Traub, J.F.: Optimal order of one-point and multipoint iteration. J. ACM 21, 643–
651 (1974)
7. Ostrowski, A.M.: Solution of Equations and Systems of Equations. Academic Press, New York
(1960)
8. Petković, M.S.: Remarks on “On a general class of multipoint root-finding methods of high
computational efficiency”. SIAM J. Numer. Anal. 49, 1317–1319 (2011)
9. Petković, M.S., Džunić, J., Petković, L.D.: A family of two-point methods with memory for
solving nonlinear equations. Appl. Anal. Discrete Math. 5, 298–317 (2011)
10. Petković, M.S., Yun, B.I.: Sigmoid-like functions and root finding methods. Appl. Math.
Comput. 204, 784–793 (2008)
11. Rice, J.R.: A set of 74 test functions for nonlinear equation solvers. Computer Science Tech-
nical Reports 69-034, paper 269 (1969)
12. Steffensen, J.F.: Remarks on iteration. Skand. Aktuarietidskr. 16, 64–72 (1933)
13. Traub, J.F.: Iterative Methods for the Solution of Equations. Prentice-Hall, Englewood Cliffs
(1964)
14. Wilkinson, J.H.: Rounding Errors in Algebraic Processes. Prentice Hall, Englewood Cliffs
(1963)
15. Yun, B.I.: A non-iterative method for solving nonlinear equations. Appl. Math. Comput. 198,
691–699 (2008)
16. Yun, B.I., Petković, M.S.: Iterative methods based on the signum function approach for solving
nonlinear equations. Numer. Algorithms 52, 649–662 (2009)
17. Yun, B.I.: Iterative methods for solving nonlinear equations with finitely many roots in an
interval. J. Comput. Appl. Math. 236, 3308–3318 (2012)