


Numer Algor (2013) 63:549–569
DOI 10.1007/s11075-012-9641-3
ORIGINAL PAPER

On efficient two-parameter methods for solving nonlinear equations

Jovana Džunić

Received: 16 May 2012 / Accepted: 24 August 2012 / Published online: 7 September 2012
© Springer Science+Business Media, LLC 2012

Abstract Derivative free methods of Steffensen's type for solving nonlinear equations are presented. Using two self-correcting parameters, calculated by Newton's interpolatory polynomials of second and third degree, the order of convergence is increased from 2 to 3.56. This method is used as a corrector for a family of biparametric two-step derivative free methods with and without memory with accelerated convergence rate up to order 7. Significant acceleration of convergence is attained without any additional function calculations, which provides very high computational efficiency of the proposed methods. Another advantage is the convenient fact that the proposed methods do not use derivatives. Numerical examples are given to demonstrate the excellent convergence behavior of the proposed methods and good agreement with the theoretical results.

Keywords Nonlinear equations · Iterative methods · Multipoint methods with memory · Acceleration of convergence · Computational efficiency

Mathematics Subject Classification (2010) 65H05

This research was supported by the Serbian Ministry of Education and Science under grant
number 174022.
J. Džunić
Faculty of Electronic Engineering, Department of Mathematics,
University of Niš, 18000 Niš, Serbia
e-mail: [email protected]

1 Introduction

In this paper we present two new iterative methods with memory of Steffensen's type for solving nonlinear equations. Our motivation for constructing these methods is directly connected with the basic tenet of numerical analysis that any numerical algorithm should give output results that are as good as possible at minimal computational cost. In other words, it is necessary to search for algorithms of great computational efficiency. The main goal of this paper is to present new, simple methods of considerably increased computational efficiency, higher than that of the existing methods in the considered class.
The very high computational efficiency of these methods is obtained by applying a new accelerating procedure based on varying two parameters, calculated by Newton's interpolating polynomials in each iteration. A considerable increase of the convergence rate is achieved without additional function evaluations, making the proposed root solvers very efficient. A biparametric acceleration technique based on self-correcting parameters has not previously been applied in the literature. Numerical examples confirm the excellent convergence properties of the presented methods.
In our convergence analysis of the proposed methods we employ the O- and o-notation: if $\{g_k\}$ and $\{h_k\}$ are null sequences and $g_k/h_k \to C$, where C is a nonzero constant, we write $g_k = O(h_k)$ or $g_k \sim C h_k$. If $g_k/h_k \to 0$, we write $g_k = o(h_k)$.
Let α be a simple real zero of a real function $f : D \subset \mathbb{R} \to \mathbb{R}$ and let x0 be an initial approximation to α. In his book [13] Traub considered the iterative function of order two
$$\Phi(x, \gamma) = x - \frac{\gamma f(x)^2}{f(x + \gamma f(x)) - f(x)} = x - \frac{f(x)}{f[x,\, x + \gamma f(x)]}, \qquad (1)$$
where γ ≠ 0 is a real constant and $f[x, y] = \dfrac{f(x) - f(y)}{x - y}$ denotes a divided difference. Note that the choice γ = 1 reduces (1) to the well-known Steffensen iterative method [12].
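A minimal Python sketch of step (1) with fixed γ may be useful here; the function name, tolerance and stopping rule are illustrative assumptions, not part of the original text.

import math

def steffensen_traub(f, x0, gamma=1.0, tol=1e-12, max_iter=50):
    # Iterate (1); gamma = 1 recovers Steffensen's classical method.
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        denom = f(x + gamma * fx) - fx        # equals gamma*fx*f[x, x + gamma*fx]
        if denom == 0.0:                      # guard against breakdown
            break
        x_new = x - gamma * fx * fx / denom   # step (1)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

For example, steffensen_traub(lambda x: math.cos(x) - x, 0.5) converges quadratically to the fixed point of cos.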
Introducing the abbreviations
$$u(x) = \frac{f(x)}{f'(x)}, \qquad C_2(x) = \frac{f''(x)}{2 f'(x)},$$
Traub [13] derived the error relation of method (1) in the form
$$\Phi(x, \gamma) - \alpha = \bigl(1 + \gamma f'(x)\bigr) C_2(x)\, u(x)^2 + O\bigl(u(x)^3\bigr), \qquad (2)$$
and showed that the Steffensen-like method (1) can be somewhat improved by the reuse of information from the previous iteration. Approximating f'(x) by the secant
$$\widetilde{f'}(x_k) = \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} = f[x_k, x_{k-1}],$$

Traub constructed the following method with memory:
$$\begin{cases} \gamma_0 \ \text{is given}, \quad \gamma_k = -\dfrac{x_k - x_{k-1}}{f(x_k) - f(x_{k-1})} \ \text{for } k \ge 1,\\[2mm] x_{k+1} = x_k - \dfrac{\gamma_k f(x_k)^2}{f(x_k + \gamma_k f(x_k)) - f(x_k)}, \end{cases} \qquad (k = 0, 1, \ldots) \qquad (3)$$
with order of convergence at least $1 + \sqrt{2} \approx 2.414$.
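Extending the sketch above to (3) is immediate: γ_k is the negative reciprocal of the secant slope from the previous step. Again a hedged Python sketch with illustrative names:

def traub_with_memory(f, x0, gamma0=-0.1, tol=1e-12, max_iter=50):
    x, gamma = x0, gamma0
    x_prev = f_prev = None
    for _ in range(max_iter):
        fx = f(x)
        if x_prev is not None and fx != f_prev:
            gamma = -(x - x_prev) / (fx - f_prev)   # gamma_k from (3)
        denom = f(x + gamma * fx) - fx
        if denom == 0.0:
            break
        x_prev, f_prev = x, fx
        x = x - gamma * fx * fx / denom             # Steffensen-type step
        if abs(x - x_prev) < tol:
            break
    return x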
Similar approaches for accelerating derivative free multipoint methods by varying parameters were applied in [2, 3, 9] in a considerably more efficient way. Following Traub's classification [13, pp. 8–9], methods that use information from the current and the previous iteration are called methods with memory. Methods of this kind are the subject of this paper.
In this paper we show that the iterative method (1) can be additionally accelerated without increasing the computational cost, which directly improves the computational efficiency of the modified method. The main idea in constructing a higher order method consists of the introduction of another parameter p and the improvement of the accelerating technique for the parameter γ. The error relation of the new method, constructed in this way, gives a clear idea and motivation for further acceleration by the reuse of available information. The same idea is applied to a family of two-point Steffensen-like methods to achieve a considerable improvement of computational efficiency.

2 Choice of initial approximations

Although the choice of good initial approximations is of great importance in the application of iterative methods, including multipoint methods, this task is very seldom considered in the literature. Recall that Newton-like and Steffensen-like methods of second order have been most frequently used as predictors in the first step of multipoint methods. Both classes of these methods are of tangent type and, therefore, they are locally convergent, which means that a reasonably close initial approximation to the sought zero α should be found. Otherwise, if the chosen initial approximation is too far from the sought zero (say, if it is chosen randomly), then the applied methods, either the ones proposed in this paper or some others with local convergence developed during the last two centuries, will probably find some other (often unwanted) zero or they will diverge. Therefore, the determination of a reasonably good approximation x0 that guarantees the convergence of the sequence of approximations $\{x_k\}_{k \in \mathbb{N}_0}$ to the zero of f is a significant task. It is interesting to note that initial approximations, chosen randomly in a suitable way, give acceptable results when simultaneous methods for finding all roots of polynomial equations are applied, e.g., employing Aberth's approach [1].
There are many methods (mainly of non-iterative nature) and strategies for finding sufficiently good initial approximations. The well-known bisection method and its modifications belong to the simplest but not always sufficiently efficient techniques. There is a vast literature on this subject, so we omit details here. We only note that complete root-finding algorithms often consist of two parts: (i) a slowly convergent search algorithm that isolates a real or complex interval containing a single root, and (ii) a rapidly convergent iterative method that finds a sufficiently close approximation of the isolated root to the required accuracy. In this paper we concentrate on part (ii). In computer algebra systems, a typical statement for solving nonlinear equations reads FindRoot[equation, {x, x0}] (see, e.g., Wolfram's computational software package Mathematica); that is, an initial approximation x0 is required.
In finding good initial approximations, a great advance was recently achieved by developing an efficient non-iterative method of significant practical importance, originally proposed by Yun [15] and later discussed in [10, 16, 17]. Yun's method is based on numerical integration, briefly referred to as NIM, where tanh, arctan and signum functions are involved. The NIM requires neither any knowledge of the derivative f'(x) nor any iterative process. In non-pathological cases it is not necessary to have a close approximation to the zero; instead, a real interval (not necessarily tight) that contains the root (a so-called inclusion interval) is sufficient. For illustration, to find an initial approximation x0 of the zero α = π of the function
$$f(x) = e^{-x^2}\frac{\sin x}{x^2 - 1} + x^2 \log(1 + x - \pi)$$
(tested in Section 6), isolated in the interval [2.2, 10], we employed Yun's algorithm with the statement
x0=0.5*(a+b+Sign[f[a]]*NIntegrate[Tanh[m*f[x]],{x,a,b}])
taking m = 3, a = 2.2, b = 10, and found the very good approximation x0 = 3.14147 with |x0 − α| ≈ 1.2 × 10^{-4}.
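The same computation can be sketched outside Mathematica. The following Python fragment (NumPy and SciPy are assumptions, not used in the paper) reproduces the NIM formula above for this f and returns x0 ≈ 3.1415; the steep tanh integrand may trigger quadrature warnings, which is harmless here.

import numpy as np
from scipy.integrate import quad

def f3(x):
    return np.exp(-x**2) * np.sin(x) / (x**2 - 1) + x**2 * np.log(1 + x - np.pi)

def yun_x0(f, a, b, m=3):
    # x0 = (a + b + sign(f(a)) * integral_a^b tanh(m*f(x)) dx) / 2
    integral, _ = quad(lambda x: np.tanh(m * f(x)), a, b, limit=200)
    return 0.5 * (a + b + np.sign(f(a)) * integral)

x0 = yun_x0(f3, 2.2, 10.0)   # approx 3.1415, close to the zero alpha = pi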

Remark 1 When solving real-life problems in engineering disciplines, computer science, physics, biology, economics, etc., an approximate location of the wanted solution of a given nonlinear equation is most frequently known to the user. This means that there is no need for an extensive zero-searching method; the mentioned Yun's method [15], applied to the isolated inclusion interval (not necessarily tight), gives satisfactory results. Having in mind this fact and the recently developed improved NIM global method of Yun [17], which finds all (simple or multiple) roots within a given interval, it could be said that numerical testing of locally convergent root-finding methods with randomly selected initial approximations is rather of academic interest.

3 Improved Steffensen-like method

For k ≥ 1 introduce the abbreviations
$$\varepsilon_k = x_k - \alpha, \quad \varepsilon_{k+1} = x_{k+1} - \alpha, \quad w_k = x_k + \gamma f(x_k), \quad \varepsilon_{k,w} = w_k - \alpha,$$
and
$$c_j = \frac{f^{(j)}(\alpha)}{j!\, f'(\alpha)} \qquad (j = 2, 3, \ldots).$$
Let us consider the following modification of method (1) with an additional parameter p:
$$x_{k+1} = x_k - \frac{f(x_k)}{f[x_k, w_k] + p f(w_k)} \qquad (k = 0, 1, \ldots). \qquad (4)$$
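Before analyzing its convergence, note that one iteration of (4) costs only the two evaluations f(x_k) and f(w_k). A minimal sketch in Python (illustrative names, fixed parameters):

def method_4(f, x0, gamma=-0.1, p=0.0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0.0:                        # already at a root; w would equal x
            return x
        w = x + gamma * fx                   # w_k = x_k + gamma*f(x_k)
        fw = f(w)
        dd = (fx - fw) / (x - w)             # divided difference f[x_k, w_k]
        x_new = x - fx / (dd + p * fw)       # step (4)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x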
Using Taylor’s series about a simple zero α of f, we obtain
 
f (xk ) = f  (α) εk + c2 εk2 + c3 εk3 + O εk4 , (5)

  f  (α) 2  
εk,w = 1 + γ f  (α) εk + γ εk + O εk3 , (6)
2
 4 
f (wk ) = f  (α) εk,w + c2 εk,w
2
+ c3 εk,w
3
+ O εk,w . (7)

In view of (4)–(7) we find
$$f[x_k, w_k] + p f(w_k) = f'(\alpha)\Bigl(1 + c_2 \varepsilon_k + (c_2 + p)\varepsilon_{k,w} + c_3\bigl(\varepsilon_k^2 + \varepsilon_k \varepsilon_{k,w}\bigr) + (c_3 + p c_2)\varepsilon_{k,w}^2 + O(\varepsilon_k^3)\Bigr), \qquad (8)$$
$$\frac{f(x_k)}{f[x_k, w_k] + p f(w_k)} = \varepsilon_k - (c_2 + p)\varepsilon_k \varepsilon_{k,w} + O(\varepsilon_k^2 \varepsilon_{k,w}), \qquad (9)$$
$$\varepsilon_{k+1} = x_{k+1} - \alpha = \varepsilon_k - \frac{f(x_k)}{f[x_k, w_k] + p f(w_k)} = (c_2 + p)\varepsilon_k \varepsilon_{k,w} + O(\varepsilon_k^2 \varepsilon_{k,w}), \qquad (10)$$
and hence
$$\varepsilon_{k+1} \sim \bigl(1 + \gamma f'(\alpha)\bigr)(c_2 + p)\varepsilon_k^2. \qquad (11)$$
From relation (11) we conclude that the biparametric Steffensen-like method without memory (4) has order of convergence two, the same as method (1).
Error relation (11) plays the key role in our study of the convergence acceleration. Method (4) serves as the basis for constructing a new biparametric derivative free family of two-point methods of the form
$$\begin{cases} y_k = x_k - \dfrac{f(x_k)}{f[x_k, w_k] + p f(w_k)}, & w_k = x_k + \gamma f(x_k),\\[2mm] x_{k+1} = y_k - g(t_k)\,\dfrac{f(y_k)}{f[y_k, w_k] + p f(w_k)}, & t_k = \dfrac{f(y_k)}{f(x_k)}. \end{cases} \qquad (12)$$
A weight function g should be determined in such a way that it provides the optimal order four for the two-point iterative scheme (12); one concrete realization is sketched below.
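A Python sketch of (12) with fixed γ, p and the admissible weight g(t) = 1 + t (see Theorem 1 below); names, defaults and the stopping criterion are illustrative assumptions.

def two_point_12(f, x0, gamma=-0.01, p=0.0, tol=1e-12, max_iter=50):
    g = lambda t: 1.0 + t                    # weight satisfying conditions (22)
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0.0:
            return x
        w = x + gamma * fx
        fw = f(w)
        y = x - fx / ((fx - fw) / (x - w) + p * fw)    # first step of (12)
        fy = f(y)
        t = fy / fx
        x_new = y - g(t) * fy / ((fy - fw) / (y - w) + p * fw)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x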

Let $\varepsilon_{k,y} = y_k - \alpha$. Taylor's expansion (5) and relations (8)–(10) lead to
$$\varepsilon_{k,y} = (c_2 + p)\varepsilon_k \varepsilon_{k,w} + \bigl(c_3 - c_2(c_2 + p)\bigr)\varepsilon_k^2 \varepsilon_{k,w} + \bigl(c_3 + p c_2 - (c_2 + p)^2\bigr)\varepsilon_k \varepsilon_{k,w}^2 + O(\varepsilon_k^3 \varepsilon_{k,w}), \qquad (13)$$
$$f(y_k) = f'(\alpha)\bigl(\varepsilon_{k,y} + c_2 \varepsilon_{k,y}^2 + O(\varepsilon_{k,y}^3)\bigr), \qquad (14)$$
$$t_k = \frac{f(y_k)}{f(x_k)} = \frac{\varepsilon_{k,y}\bigl(1 + c_2 \varepsilon_{k,y} + O(\varepsilon_{k,y}^2)\bigr)}{\varepsilon_k\bigl(1 + c_2 \varepsilon_k + O(\varepsilon_k^2)\bigr)} = (c_2 + p)\varepsilon_{k,w} + \bigl(c_3 - 2c_2(c_2 + p)\bigr)\varepsilon_k \varepsilon_{k,w} + \bigl(c_3 + p c_2 - (c_2 + p)^2\bigr)\varepsilon_{k,w}^2 + O(\varepsilon_k^2 \varepsilon_{k,w}), \qquad (15)$$
and
$$f[y_k, w_k] + p f(w_k) = f'(\alpha)\bigl(1 + (c_2 + p)\varepsilon_{k,w} + (c_3 + p c_2)\varepsilon_{k,w}^2 + c_2 \varepsilon_{k,y} + O(\varepsilon_{k,w}^3)\bigr), \qquad (16)$$
$$\frac{f(y_k)}{f[y_k, w_k] + p f(w_k)} = \varepsilon_{k,y} - (c_2 + p)\varepsilon_{k,w}\varepsilon_{k,y} + \bigl((c_2 + p)^2 - (c_3 + p c_2)\bigr)\varepsilon_{k,w}^2 \varepsilon_{k,y} + O(\varepsilon_{k,w}^3 \varepsilon_{k,y}). \qquad (17)$$

Since $t_k \to 0$ when $k \to \infty$ (see (15)), let g be represented by its Taylor's expansion about 0,
$$g(t) = g(0) + g'(0)t + \tfrac{1}{2}g''(0)t^2 + O(t^3). \qquad (18)$$

After some elementary calculations, having in mind (18), we arrive at the error estimate for the new approximation $x_{k+1}$,
$$\begin{aligned} \varepsilon_{k+1} = x_{k+1} - \alpha &= \varepsilon_{k,y} - \bigl(g(0) + g'(0)t_k + \tfrac{1}{2}g''(0)t_k^2 + O(t_k^3)\bigr)\\ &\qquad \times \Bigl(\varepsilon_{k,y} - (c_2 + p)\varepsilon_{k,w}\varepsilon_{k,y} + \bigl((c_2 + p)^2 - (c_3 + p c_2)\bigr)\varepsilon_{k,w}^2 \varepsilon_{k,y} + O(\varepsilon_{k,w}^3 \varepsilon_{k,y})\Bigr)\\ &= \varepsilon_{k,y}\Bigl(1 - g(0) + \bigl(g(0) - g'(0)\bigr)(c_2 + p)\varepsilon_{k,w} - g'(0)\bigl(c_3 - 2c_2(c_2 + p)\bigr)\varepsilon_k \varepsilon_{k,w}\\ &\qquad + \Bigl[\bigl(g(0) - g'(0)\bigr)(c_3 + p c_2) - \bigl(g(0) - 2g'(0) + \tfrac{1}{2}g''(0)\bigr)(c_2 + p)^2\Bigr]\varepsilon_{k,w}^2\Bigr) + O\bigl(\varepsilon_k^2 \varepsilon_{k,w}\varepsilon_{k,y}\bigr). \qquad (19) \end{aligned}$$

From (19) we conclude that the choice $g(0) = g'(0) = 1$ gives the highest order to the family of two-point methods (12), with the error relation
$$\begin{aligned} \varepsilon_{k+1} &= \bigl(2c_2(c_2 + p) - c_3\bigr)\varepsilon_k \varepsilon_{k,w}\varepsilon_{k,y} + \bigl(1 - \tfrac{1}{2}g''(0)\bigr)(c_2 + p)^2 \varepsilon_{k,w}^2 \varepsilon_{k,y} + O(\varepsilon_k^2 \varepsilon_{k,w}\varepsilon_{k,y})\\ &= \Bigl[2c_2(c_2 + p) - c_3 + \bigl(1 - \tfrac{1}{2}g''(0)\bigr)\bigl(1 + \gamma f'(\alpha)\bigr)(c_2 + p)^2\Bigr]\varepsilon_k \varepsilon_{k,w}\varepsilon_{k,y} + O(\varepsilon_k^2 \varepsilon_{k,w}\varepsilon_{k,y}). \qquad (20) \end{aligned}$$
After substituting relations (6) and (13) into (20), the error relation takes the form
$$\varepsilon_{k+1} \sim (c_2 + p)\bigl(1 + \gamma f'(\alpha)\bigr)^2 \Bigl[2c_2(c_2 + p) - c_3 + \bigl(1 - \tfrac{1}{2}g''(0)\bigr)\bigl(1 + \gamma f'(\alpha)\bigr)(c_2 + p)^2\Bigr]\varepsilon_k^4. \qquad (21)$$

The above results can be summarized in the following theorem.

Theorem 1 For a sufficiently good initial approximation x0 of a simple zero α of the function f, the family of two-point methods (12) attains order at least four if the weight function g satisfies the conditions
$$g(0) = 1, \qquad g'(0) = 1, \qquad |g''(0)| < \infty. \qquad (22)$$
The error relation for the family (12) is then given by (21).

We give some examples of weight functions g of simple form satisfying conditions (22). The first example is
$$g(t) = \Bigl(1 + \frac{t}{s}\Bigr)^{\!s}, \qquad s \ne 0.$$
In particular, for s = 1 we obtain g(t) = 1 + t. Also, the choices s = −1 with g(t) = 1/(1 − t) (giving the Kung–Traub two-point method [6] for p = 0) and s = 1/2 with $g(t) = \sqrt{1 + 2t}$ are convenient for practical applications. Some other simple examples are
$$g(t) = \frac{1 + \beta_1 t}{1 + (\beta_1 - 1)t}, \qquad g(t) = \frac{1 + \beta_2 t^2}{1 - t},$$
$$g(t) = \frac{t^2 + (\beta_3 - 1)t - 1}{\beta_3 t - 1}, \qquad g(t) = \frac{1}{1 - t + \beta_4 t^2},$$
where $\beta_1, \beta_2, \beta_3, \beta_4 \in \mathbb{R}$. All four of these weight functions are special cases of the generalized function
$$g(t) = \frac{1 + at + bt^2}{1 + (a - 1)t + ct^2}$$
for $a, b, c \in \mathbb{R}$. Another general class of weight functions is given by
$$g(t) = \frac{\bigl(1 + at + bt^2\bigr)^{s_1}}{\Bigl(1 + \dfrac{a s_1 - 1}{s_2}\,t + ct^2\Bigr)^{s_2}}$$
for $a, b, c \in \mathbb{R}$ and $s_1, s_2 \ne 0$.
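The defining property shared by all of these examples is g(0) = g'(0) = 1. A quick symbolic check of the last class (a Python/SymPy sketch; SymPy is an assumption, not used in the paper) confirms that both conditions in (22) hold identically in the parameters:

import sympy as sp

t, a, b, c, s1, s2 = sp.symbols('t a b c s1 s2')
g = (1 + a*t + b*t**2)**s1 / (1 + (a*s1 - 1)/s2*t + c*t**2)**s2

print(sp.simplify(g.subs(t, 0)))              # -> 1, i.e. g(0) = 1
print(sp.simplify(sp.diff(g, t).subs(t, 0)))  # -> 1, i.e. g'(0) = 1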

4 Acceleration of the one-point method

The main idea in constructing methods with memory consists of the calculation of the parameters γ = γk and p = pk as the iteration proceeds, by the formulas $\gamma_k = -1/\widetilde{f'}(\alpha)$ and $p_k = -\widetilde{c}_2$ for k = 1, 2, …, where $\widetilde{f'}(\alpha)$ and $\widetilde{c}_2$ are approximations to f'(α) and c2, respectively. In essence, in this way we minimize the factors $1 + \gamma_k f'(\alpha)$ and $c_2 + p_k$ that appear in (21). It is preferable to find approximations $\widetilde{f'}(\alpha)$ and $\widetilde{c}_2$ of good quality, for example such that $1 + \gamma_k f'(\alpha) = O(\varepsilon_{k-1}^q)$ and $c_2 + p_k = O(\varepsilon_{k-1}^q)$ for some q > 1. It is assumed that the initial estimates γ0 and p0 are chosen before starting the iterative process, for example by taking γ0 using one of the ways proposed in [13, p. 186] and setting p0 = 0.
Our model for the calculation of the self-accelerating parameters γk and pk in each iteration is based on the approximations
$$\gamma_k = -\frac{1}{N_2'(x_k)} = -\frac{1}{\widetilde{f'}(\alpha)} \approx -\frac{1}{f'(\alpha)}, \qquad (23)$$
$$p_k = -\frac{N_3''(w_k)}{2 N_3'(w_k)} = -\widetilde{c}_2 \approx -c_2, \qquad (24)$$
where
$$N_2(\tau) = N_2(\tau;\, x_k, w_{k-1}, x_{k-1}) \quad\text{and}\quad N_3(\tau) = N_3(\tau;\, w_k, x_k, w_{k-1}, x_{k-1})$$
are Newton's interpolating polynomials set through the 3 and 4 available approximations (nodes) from the current and the previous iteration. Obviously, if fewer nodes are used for the interpolating polynomials, slower acceleration is achieved.
Combining (4) with (23) and (24), we construct the following derivative free method with memory of Steffensen's type:
$$\begin{cases} \gamma_0,\ p_0 \ \text{are given}, \quad w_k = x_k + \gamma_k f(x_k),\\[1mm] \gamma_k = -\dfrac{1}{N_2'(x_k)}, \quad p_k = -\dfrac{N_3''(w_k)}{2 N_3'(w_k)} \ \text{for } k \ge 1,\\[2mm] x_{k+1} = x_k - \dfrac{f(x_k)}{f[x_k, w_k] + p_k f(w_k)} \end{cases} \qquad (k = 0, 1, \ldots). \qquad (25)$$
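A compact Python sketch of (25) follows. Since the interpolating polynomial through given nodes is unique, N2 and N3 are realized here by numpy.polyfit through 3 resp. 4 stored nodes, and their derivatives by numpy.polyder; NumPy, all names and the defaults are assumptions. Near the root the clustered nodes become ill-conditioned in double precision, so this sketch cannot reproduce the multiprecision accuracies reported in Section 6.

import numpy as np

def method_25(f, x0, gamma0=-0.1, p0=0.0, tol=1e-12, max_iter=20):
    x, gamma, p = x0, gamma0, p0
    prev = []                                   # (node, f(node)) pairs of previous step
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0.0:
            return x
        if prev:
            xs, ys = zip(*prev)                 # x_{k-1}, w_{k-1}
            n2 = np.polyfit(xs + (x,), ys + (fx,), 2)      # N2(tau; x_k, w_{k-1}, x_{k-1})
            gamma = -1.0 / np.polyval(np.polyder(n2), x)   # (23)
        w = x + gamma * fx
        fw = f(w)
        if prev:
            n3 = np.polyfit(xs + (x, w), ys + (fx, fw), 3) # N3(tau; w_k, x_k, w_{k-1}, x_{k-1})
            p = -np.polyval(np.polyder(n3, 2), w) / (2.0 * np.polyval(np.polyder(n3), w))  # (24)
        x_new = x - fx / ((fx - fw) / (x - w) + p * fw)    # step of (25)
        prev = [(x, fx), (w, fw)]
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x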

Let $\{x_k\}$ be a sequence of approximations generated by an iterative method (IM). If this sequence converges to the zero α of f with order at least r, we will write
$$\varepsilon_{k+1} \sim D_{k,r}\varepsilon_k^{r}, \qquad (26)$$
where $D_{k,r}$ tends to the asymptotic error constant $D_r$ of the iterative method (IM) when $k \to \infty$. Similarly to (26), we write
$$\varepsilon_{k,w} \sim D_{k,r_1}\varepsilon_k^{r_1}. \qquad (27)$$
Formally, we use the order of a sequence of approximations as the subscript index to distinguish asymptotic error constants.

Lemma 1 The estimates
$$1 + \gamma_k f'(\alpha) \sim -c_3\,\varepsilon_{k-1,w}\varepsilon_{k-1} \quad\text{and}\quad c_2 + p_k \sim \frac{c_4}{c_2}\,\varepsilon_{k-1,w}\varepsilon_{k-1}$$
hold.

Proof We use the well-known formula for the error of Newton's interpolation. If Newton's interpolating polynomial of degree $s \in \mathbb{N}$ is set through the nodes $t_0, t_1, \ldots, t_s$ from the interval $I = [\min\{t_0, \ldots, t_s\}, \max\{t_0, \ldots, t_s\}]$, then for some $\zeta \in I$ we have
$$f(t) - N_s(t) = \frac{f^{(s+1)}(\zeta)}{(s+1)!}\prod_{j=0}^{s}(t - t_j). \qquad (28)$$

After differentiating (28) once at the point xk (for s = 2) and twice at wk (for s = 3), we find
$$N_2'(x_k) = f'(x_k) - \frac{f'''(\zeta_2)}{3!}(x_k - w_{k-1})(x_k - x_{k-1}) \sim f'(\alpha)\bigl(1 + 2c_2\varepsilon_k - c_3\varepsilon_{k-1,w}\varepsilon_{k-1}\bigr), \qquad (29)$$
$$N_3'(w_k) = f'(w_k) - \frac{f^{(4)}(\zeta_3)}{4!}(w_k - x_k)(w_k - w_{k-1})(w_k - x_{k-1}) \sim f'(\alpha)\bigl(1 + 2c_2\varepsilon_{k,w} + c_4\varepsilon_k\varepsilon_{k-1,w}\varepsilon_{k-1}\bigr), \qquad (30)$$
$$N_3''(w_k) = f''(w_k) - \frac{2 f^{(4)}(\zeta_3)}{4!}\bigl[(w_k - x_k)(w_k - w_{k-1}) + (w_k - x_k)(w_k - x_{k-1}) + (w_k - w_{k-1})(w_k - x_{k-1})\bigr] \sim f''(\alpha)\Bigl(1 + \frac{3c_3}{2c_2}\varepsilon_{k,w} - \frac{c_4}{c_2}\varepsilon_{k-1,w}\varepsilon_{k-1}\Bigr). \qquad (31)$$

Using (30) and (31) we obtain
$$\frac{N_3''(w_k)}{2 N_3'(w_k)} \sim \frac{1}{2}\cdot \frac{f''(\alpha)\Bigl(1 + \dfrac{3c_3}{2c_2}\varepsilon_{k,w} - \dfrac{c_4}{c_2}\varepsilon_{k-1,w}\varepsilon_{k-1}\Bigr)}{f'(\alpha)\bigl(1 + 2c_2\varepsilon_{k,w} + c_4\varepsilon_k\varepsilon_{k-1,w}\varepsilon_{k-1}\bigr)} \qquad (32)$$
$$\sim \frac{f''(\alpha)}{2 f'(\alpha)}\Bigl(1 - \frac{c_4}{c_2}\varepsilon_{k-1,w}\varepsilon_{k-1}\Bigr), \qquad (33)$$
that is,
$$c_2 + p_k = \frac{f''(\alpha)}{2 f'(\alpha)} - \frac{N_3''(w_k)}{2 N_3'(w_k)} \sim \frac{c_4}{c_2}\,\varepsilon_{k-1,w}\varepsilon_{k-1}. \qquad (34)$$
From (29) it follows that
$$1 + \gamma_k f'(\alpha) \sim 1 - \frac{1}{1 - c_3\varepsilon_{k-1,w}\varepsilon_{k-1} + 2c_2\varepsilon_k} \sim -c_3\,\varepsilon_{k-1,w}\varepsilon_{k-1}, \qquad (35)$$
since $\varepsilon_k \sim (c_2 + p_{k-1})\varepsilon_{k-1,w}\varepsilon_{k-1} = o(\varepsilon_{k-1,w}\varepsilon_{k-1})$ by (10) and (34). □

Now we state the following convergence theorem.

Theorem 2 If an initial approximation x0 is sufficiently close to a simple zero α of f, then the order of convergence of the Steffensen-like method with memory (25) is at least $\tfrac{1}{2}(3 + \sqrt{17}) \approx 3.56$.

Proof By virtue of Lemma 1, from (26), (27), (35) and (34) we have
$$1 + \gamma_k f'(\alpha) \sim -c_3 D_{k-1,r_1}\varepsilon_{k-1}^{1+r_1}, \qquad (36)$$
$$c_2 + p_k \sim \frac{c_4}{c_2} D_{k-1,r_1}\varepsilon_{k-1}^{1+r_1}. \qquad (37)$$
Combining (10), (26), (27), (36) and (37) yields
$$\varepsilon_{k+1} \sim (c_2 + p_k)\varepsilon_{k,w}\varepsilon_k \sim \frac{c_4}{c_2} D_{k-1,r_1} D_{k,r_1} \varepsilon_{k-1}^{1+r_1}\varepsilon_k^{1+r_1} \sim A_{k+1}\varepsilon_{k-1}^{(1+r_1)(1+r)}, \qquad (38)$$
where $A_{k+1} = \frac{c_4}{c_2} D_{k-1,r_1} D_{k,r_1} D_{k-1,r}^{1+r_1}$. Similarly, by (6), (27), (36) and (37),
$$\varepsilon_{k,w} \sim \bigl(1 + \gamma_k f'(\alpha)\bigr)\varepsilon_k \sim -c_3 D_{k-1,r_1}\varepsilon_{k-1}^{1+r_1}\varepsilon_k \sim B_k \varepsilon_{k-1}^{1+r_1+r}, \qquad (39)$$
where $B_k = -c_3 D_{k-1,r_1} D_{k-1,r}$. From (26) and (27) we obtain another pair of estimates
$$\varepsilon_{k+1} \sim D_{k,r} D_{k-1,r}^{r}\varepsilon_{k-1}^{r^2}, \qquad (40)$$
$$\varepsilon_{k,w} \sim D_{k,r_1} D_{k-1,r}^{r_1}\varepsilon_{k-1}^{r r_1}. \qquad (41)$$

Equating the exponents of the error εk−1 in the pairs of relations (38) and (40), and (39) and (41), we arrive at the system of equations
$$r^2 - (1 + r)(1 + r_1) = 0, \qquad r r_1 - r - r_1 - 1 = 0.$$
Its positive solution is $r = \tfrac{1}{2}(3 + \sqrt{17}) \approx 3.56$, and it defines the order of convergence of the Steffensen-like method with memory (25). □

Remark 2 From (32) we note that an interpolating polynomial of lower degree can be used for approximating f'(α) in the expression for $\widetilde{c}_2$ without decreasing the convergence order, as long as it is set through the nodes wk and xk. In this manner, the simplest choice for pk which would still give the maximal convergence order is $p_k = -\widetilde{c}_2 = -N_3''(w_k)/(2 f[w_k, x_k])$. However, although this simplification is theoretically founded, numerical results have shown that it gives a slight drop in the accuracy of the obtained approximations to the sought root.

5 Acceleration of the family of two-point methods

An accelerating approach, similar to that used in the previous section, will now be applied to the construction of two-point methods with memory. Calculation of the parameters γ = γk and p = pk becomes more complex since more information is available per iteration. Formulas for the calculation of γk and pk are given by
$$\gamma_k = -\frac{1}{N_3'(x_k)} = -\frac{1}{\widetilde{f'}(\alpha)} \approx -\frac{1}{f'(\alpha)}, \qquad (42)$$
$$p_k = -\frac{N_4''(w_k)}{2 N_4'(w_k)} = -\widetilde{c}_2 \approx -c_2, \qquad (43)$$
2N4 (wk )

where
$$N_3(\tau) = N_3(\tau;\, x_k, y_{k-1}, w_{k-1}, x_{k-1}) \quad\text{and}\quad N_4(\tau) = N_4(\tau;\, w_k, x_k, y_{k-1}, w_{k-1}, x_{k-1})$$
are Newton's interpolating polynomials set through the 4 and 5 available approximations (nodes) from the current and the previous iteration.

Combining (12) with (42) and (43), we construct the following derivative free family of two-point methods with memory of Steffensen's type:
$$\begin{cases} \gamma_0,\ p_0 \ \text{are given}, \quad w_k = x_k + \gamma_k f(x_k),\\[1mm] \gamma_k = -\dfrac{1}{N_3'(x_k)}, \quad p_k = -\dfrac{N_4''(w_k)}{2 N_4'(w_k)} \ \text{for } k \ge 1,\\[2mm] y_k = x_k - \dfrac{f(x_k)}{f[x_k, w_k] + p_k f(w_k)},\\[2mm] x_{k+1} = y_k - g(t_k)\,\dfrac{f(y_k)}{f[y_k, w_k] + p_k f(w_k)}, \quad t_k = \dfrac{f(y_k)}{f(x_k)} \end{cases} \qquad (k = 0, 1, \ldots), \qquad (44)$$
where the weight function g satisfies conditions (22).
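In the same illustrative Python style as for (25), a sketch of the family (44) with g(t) = 1 + t; N3 and N4 are again realized by numpy.polyfit interpolation through the stored nodes (an implementation choice, not the paper's), and double precision again limits the attainable accuracy compared with the multiprecision experiments of Section 6.

import numpy as np

def method_44(f, x0, gamma0=-0.01, p0=0.0, tol=1e-12, max_iter=20):
    g = lambda t: 1.0 + t
    x, gamma, p = x0, gamma0, p0
    prev = []                                    # nodes (x, w, y) of the previous step
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0.0:
            return x
        if prev:
            xs, ys = zip(*prev)
            n3 = np.polyfit(xs + (x,), ys + (fx,), 3)       # N3(tau; x_k, y_{k-1}, w_{k-1}, x_{k-1})
            gamma = -1.0 / np.polyval(np.polyder(n3), x)    # (42)
        w = x + gamma * fx
        fw = f(w)
        if prev:
            n4 = np.polyfit(xs + (x, w), ys + (fx, fw), 4)  # N4(tau; w_k, x_k, y_{k-1}, w_{k-1}, x_{k-1})
            p = -np.polyval(np.polyder(n4, 2), w) / (2.0 * np.polyval(np.polyder(n4), w))  # (43)
        y = x - fx / ((fx - fw) / (x - w) + p * fw)
        fy = f(y)
        x_new = y - g(fy / fx) * fy / ((fy - fw) / (y - w) + p * fw)
        prev = [(x, fx), (w, fw), (y, fy)]
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x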
Similarly to the proof of Lemma 1, with regard to (20), the following statement can be proved.

Lemma 2 The estimates
$$1 + \gamma_k f'(\alpha) \sim c_4\,\varepsilon_{k-1,y}\varepsilon_{k-1,w}\varepsilon_{k-1} - 2c_2\varepsilon_k \sim G_k\,\varepsilon_{k-1,y}\varepsilon_{k-1,w}\varepsilon_{k-1} \qquad (45)$$
and
$$c_2 + p_k \sim -\frac{c_5}{c_2}\,\varepsilon_{k-1,y}\varepsilon_{k-1,w}\varepsilon_{k-1} \qquad (46)$$
hold, where $G_k = c_4 - 2c_2 B_{k-1}$ and
$$B_k = 2c_2(c_2 + p_k) - c_3 + \bigl(1 - \tfrac{1}{2}g''(0)\bigr)\bigl(1 + \gamma_k f'(\alpha)\bigr)(c_2 + p_k)^2. \qquad (47)$$

Now we state the convergence theorem for the family of two-point methods with memory (44).

Theorem 3 If an initial approximation x0 is sufficiently close to a simple zero α of f, then the order of convergence of the family of two-point methods with memory (44) is at least seven.

Proof Similarly to (26) and (27), let us assume that for the iterative sequence {yk} the following relations hold:
$$\varepsilon_{k,y} \sim D_{k,r_2}\varepsilon_k^{r_2} \qquad (48)$$
and
$$\varepsilon_{k,y} \sim D_{k,r_2} D_{k-1,r}^{r_2}\varepsilon_{k-1}^{r r_2}. \qquad (49)$$
From (45)–(48), (26) and (27) we have
$$1 + \gamma_k f'(\alpha) \sim G_k D_{k-1,r_1} D_{k-1,r_2}\varepsilon_{k-1}^{1+r_1+r_2}, \qquad (50)$$
$$c_2 + p_k \sim -\frac{c_5}{c_2} D_{k-1,r_1} D_{k-1,r_2}\varepsilon_{k-1}^{1+r_1+r_2}. \qquad (51)$$
Combining (13), (20), (27) and (45)–(50), we find
$$\varepsilon_{k+1} \sim B_k \varepsilon_k \varepsilon_{k,w}\varepsilon_{k,y} \sim B_k \varepsilon_k\, D_{k,r_1}\varepsilon_k^{r_1}\, D_{k,r_2}\varepsilon_k^{r_2} \sim A_{k+1}\varepsilon_k^{1+r_1+r_2}, \qquad (52)$$
$$\varepsilon_{k,y} \sim (c_2 + p_k)\varepsilon_k \varepsilon_{k,w} \sim -\frac{c_5}{c_2} D_{k-1,r_1} D_{k-1,r_2} D_{k,r_1} D_{k-1,r}^{1+r_1}\varepsilon_{k-1}^{(1+r_1)(1+r)+r_2}, \qquad (53)$$
$$\varepsilon_{k,w} \sim \bigl(1 + \gamma_k f'(\alpha)\bigr)\varepsilon_k \sim G_k D_{k-1,r_1} D_{k-1,r_2} D_{k-1,r}\varepsilon_{k-1}^{1+r_1+r_2+r}, \qquad (54)$$
where Bk is given in (47) and $A_{k+1} = B_k D_{k,r_1} D_{k,r_2}$.
Equating the exponents of the error εk in the pair of relations (52) and (26), and of the error εk−1 in the pairs of relations (54) and (41), and (53) and (49), we arrive at the system of equations
$$r - r_1 - r_2 - 1 = 0, \qquad r r_1 - r - r_1 - r_2 - 1 = 0, \qquad r r_2 - (1 + r_1)(1 + r) - r_2 = 0.$$
The positive solution is r = 7 (with r1 = 2 and r2 = 4), so the convergence order of the family (44) of two-point methods with memory is seven, which concludes the proof. □

Remark 3 The simplest choice for pk that still gives order 7 for the family with memory (44) is $p_k = -N_4''(w_k)/(2 f[w_k, x_k])$; see Remark 2.

Remark 4 Other choices of nodes (of worse quality) give approximations for γk and pk of somewhat lower accuracy, so that the corresponding families of the form (44) have convergence order greater than 4 but less than 7. In this paper we concentrate on the best available nodes in order to attain the maximal order 7.

6 Numerical results

The presentation of numerical results in this section serves to point out the very high computational efficiency and, of less importance, to demonstrate the fast convergence of the proposed methods. It is clear that very fast convergence, a property of many existing methods, is not of particular advantage if the considered method is too expensive from a computational point of view. This simple fact has often been neglected in many papers. In this section the convergence rate is expressed by the computational convergence order rc calculated by (57) and given in the last column of the displayed tables. Computational efficiency can be successfully measured by the efficiency index. For an iterative method (IM) with convergence order r that requires θ function evaluations per iteration, the efficiency index (also called computational efficiency) is calculated by the Ostrowski–Traub formula
$$E(IM) = r^{1/\theta}$$
(see [7, p. 20], [13, Appendix C]). According to this formula we find
$$E(25) = \Bigl(\tfrac{1}{2}\bigl(3 + \sqrt{17}\bigr)\Bigr)^{1/2} \approx 1.887 \quad\text{and}\quad E(44) = 7^{1/3} \approx 1.913,$$
whereas the computational efficiency of the most efficient three-point schemes (IM) is only $E(IM) = 8^{1/4} \approx 1.682$. Obviously, the obtained efficiency indices of the proposed methods are remarkably high.
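These indices follow directly from the orders and the per-iteration evaluation counts (θ = 2 for (25): f(xk) and f(wk); θ = 3 for (44): f(xk), f(wk), f(yk)), e.g. in Python:

from math import sqrt

E_25  = ((3 + sqrt(17)) / 2) ** (1 / 2)   # ~1.887, method (25), theta = 2
E_44  = 7 ** (1 / 3)                      # ~1.913, family (44), theta = 3
E_3pt = 8 ** (1 / 4)                      # ~1.682, optimal three-point methods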
As mentioned in Section 2, when applying any root solver with local convergence, special attention must be paid to the choice of initial approximations. If the initial values are sufficiently close to the sought roots, then the expected (theoretical) convergence speed is obtainable in practice; otherwise, all iterative root-finding methods work with great difficulty and show slower convergence, especially at the beginning of the iterative process. In our numerical examples we have chosen initial approximations to real zeros of real functions using the mentioned Yun's global algorithm [17]. We have also experimented with crude approximations (see Tables 2 (part I), 5 (part I), 7 (part I) and 10 (part I)) in order to study the convergence behavior of the proposed methods under modest initial conditions, and with randomly chosen approximations (Tables 4, 9 and 12). In some examples different choices of initial approximations caused convergence to different zeros, see Tables 6 and 11.
We have tested the methods (4), (12), (25) and (44) on the functions given in Table 1. Tables 2, 3, 4, 5 and 6 also contain results of the iterative method (1) and its accelerated variant (3). The results for
$$x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)} \qquad \text{(Newton's method)} \qquad (55)$$
of second order, and for
$$x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k) - \dfrac{f(x_k) f''(x_k)}{2 f'(x_k)}} \qquad \text{(Halley's method)} \qquad (56)$$
with order of convergence three, are also included, for a better view of the convergence acceleration obtained and an insight into its impact on the quality of the produced results.
The selected test functions are not of simple form and they are certainly more complicated than most of the often used Rice's test functions [11], Jenkins–Traub's test polynomials [5], and test functions used in many papers concerning nonlinear equations. For example, the function f1 displayed in Fig. 1 shows nontrivial behavior since its graph has two relatively close zeros and a singularity close to the sought zero. The test function f2 is a polynomial of Wilkinson's type with real zeros 1, 2, …, 20. It is well known that this class of polynomials is ill-conditioned (a part of its "uphill-downhill" graph with very large amplitudes is displayed in Fig. 2); small perturbations of the polynomial coefficients cause drastic variations of the zeros. Notice that many iterative

Table 1 List of test functions

Test function                                                      α
f1(x) = (x − 1)(x^6 + x^{−6} + 4) sin(x^2)                         1
f2(x) = ∏_{i=1}^{20} (x − i)                                       2, 8, 16
f3(x) = e^{−x^2} sin x/(x^2 − 1) + x^2 log(1 + x − π)              π
f4(x) = x + sin x + 1/x − 1 + 2i                                   0.28860… − i 1.24220…
f5(x) = e^{x^2 − 2x + 3} + x + 4/(x − 1) − 2 + i√2                 1 + i√2, 0.50195… + i 0.05818…

Table 2 One-step methods without and with memory applied to f1

Methods |x1 − α| |x2 − α| |x3 − α| |x4 − α| rc (57)


α=1 x0 = −1.5 γ0 = −0.1 p0 = −0.01
(1) 1.91 (−3) 2.31 (−6) 3.39 (−12) 7.30 (−24) 2.00
(3) 1.91 (−3) 2.71 (−6) 2.29 (−14) 2.35 (−33) 2.35
(55) diverges
(56) diverges
(4) 1.10 (−2) 7.75 (−5) 3.79 (−9) 9.04 (−18) 2.00
(25) 1.10 (−2) 5.84 (−5) 4.72 (−16) 2.25 (−54) 3.45
α=1 x0 = 1.3 γ0 = −0.1 p0 = −0.1
(1) 1.36 (−2) 1.20 (−4) 9.13 (−9) 5.30 (−17) 2.00
(3) 1.36 (−2) 1.08 (−4) 2.69 (−10) 1.28 (−23) 2.38
(55) 1.14 (−1) 2.06 (−2) 5.90 (−4) 4.48 (−7) 2.01
(56) 4.78 (−2) 1.69 (−4) 1.45 (−11) 9.20 (−33) 3.00
(4) 1.31 (−2) 1.03 (−4) 6.23 (−9) 2.27 (−17) 2.00
(25) 1.31 (−2) 2.83 (−8) 1.15 (−27) 3.52 (−95) 3.48

Table 3 One-step methods without and with memory applied to f2

Methods |x1 − α| |x2 − α| |x3 − α| |x4 − α| rc (57)


α=2 x0 = 1.6 γ0 = −0.01 p0 = −5
(1) diverges
(3) 4.00 (−1) 6.56 (−2) 4.63 (−3) 7.68 (−6) 2.29
(55) 1.39 (−1) 3.18 (−2) 2.27 (−3) 1.28 (−5) 1.91
(56) 8.95 (−2) 2.25 (−3) 4.97 (−8) 5.42 (−22) 3.00
(4) diverges
(25) 4.00 (−1) 1.86 (−1) 9.15 (−4) 9.37 (−11) 2.81
α=8 x0 = 8.4 γ0 = −0.01 p0 = −5
(1) diverges
(3) converges to 7
(55) converges to 7
(56) 2.92 (−1) 8.94 (−2) 1.42 (−3) 4.78 (−9) 3.08
(4) diverges
(25) 4.00 (−1) 1.16 (−1) 2.88 (−2) 2.54 (−7) 8.19
α = 16 x0 = 16.4 γ0 = −0.01 p0 = −5
(1) diverges
(3) 4.00 (−1) 4.77 (−3) 6.10 (−6) 2.72 (−13) 2.54
(55) 2.13 (−2) 6.04 (−4) 4.49 (−7) 2.49 (−13) 2.00
(56) 1.08 (−1) 2.17 (−3) 2.31 (−8) 2.80 (−23) 3.00
(4) diverges
(25) 4.00 (−1) 4.77 (−3) 6.10 (−6) 2.72 (−13) 2.54

Table 4 One-step methods without and with memory applied to f3

Methods |x1 − α| |x2 − α| |x3 − α| |x4 − α| rc (57)


α=π x0 = 6 γ0 = −0.05 p0 = −0.05
(1) 1.78 (−1) 2.44 (−3) 4.12 (−7) 1.18 (−14) 2.00
(3) 1.78 (−1) 2.06 (−3) 1.56 (−8) 9.37 (−21) 2.39
(55) 9.55 (−1) 1.56 (−1) 3.86 (−3) 2.05 (−6) 2.03
(56) 3.45 (−1) 8.91 (−4) 6.92 (−11) 3.24 (−32) 3.00
(4) 1.44 (−1) 1.08 (−3) 5.09 (−8) 1.14 (−16) 2.00
(25) 1.44 (−1) 8.90 (−7) 1.79 (−23) 6.27 (−83) 3.56
α=π x0 = 7 γ0 = −0.05 p0 = −0.05
(1) 7.29 (−3) 3.65 (−6) 9.21 (−13) 5.88 (−26) 2.00
(3) 7.29 (−3) 3.66 (−6) 1.81 (−15) 2.24 (−37) 2.35
(55) 1.45 (0) 3.29 (−1) 1.86 (−2) 4.87 (−5) 2.04
(56) 6.29 (−1) 8.21 (−4) 5.39 (−11) 1.53 (−32) 3.00
(4) 5.92 (−3) 1.52 (−6) 1.02 (−13) 4.57 (−28) 2.00
(25) 5.92 (−3) 1.13 (−11) 1.70 (−40) 8.55 (−144) 3.58
α=π x0 = 9 γ0 = −0.02 p0 = −0.08
(1) 1.45 (0) 2.51 (−1) 8.32 (−3) 7.67 (−6) 2.03
(3) 1.45 (0) 2.01 (−1) 1.55 (−3) 1.00 (−8) 2.44
(55) 2.50 (0) 7.84 (−1) 1.07 (−1) 1.78 (−3) 1.95
(56) 1.28 (0) 4.05 (−2) 5.60 (−6) 1.71 (−17) 2.98
(4) 9.43 (−1) 7.62 (−2) 3.24 (−4) 4.77 (−9) 2.03
(25) 9.43 (−1) 3.61 (−3) 4.96 (−10) 2.54 (−35) 3.69

methods encounter serious difficulties in finding the zeros of Wilkinson-like polynomials.
Complex test functions f4 and f5 are used to show that the proposed methods are applicable in the complex domain too.

Table 5 One-step methods without and with memory applied to f4

Methods |x1 − α| |x2 − α| |x3 − α| |x4 − α| rc (57)


α ≈ 0.29 − i1.24 x0 = −1 − 3i γ0 = −0.2 p0 = 0.2
(1) 5.87 (−1) 3.09 (−2) 6.80 (−5) 3.16 (−10) 2.01
(3) 5.87 (−1) 5.35 (−2) 9.77 (−5) 2.26 (−11) 2.42
(55) 1.29 (0) 4.95 (−1) 1.95 (−2) 7.51 (−5) 1.70
(56) 5.51 (−1) 6.90 (−2) 7.07 (−5) 7.15 (−14) 3.02
(4) 6.31 (−1) 2.54 (−2) 2.85 (−5) 3.50 (−11) 2.00
(25) 6.31 (−1) 2.69 (−3) 1.93 (−11) 1.63 (−39) 3.45
α ≈ 0.29 − i1.24 x0 = −i/2 γ0 = −0.02 p0 = 0.2
(1) 3.36 (−2) 7.66 (−5) 4.01 (−10) 1.10 (−20) 2.00
(3) 3.36 (−2) 4.19 (−5) 2.48 (−12) 1.09 (−29) 2.40
(55) 2.85 (−1) 1.37 (−2) 3.92 (−5) 3.17 (−10) 2.00
(56) 5.67 (−1) 3.27 (−2) 6.71 (−6) 6.13 (−17) 3.00
(4) 2.47 (−2) 2.51 (−5) 2.71 (−11) 3.16 (−23) 2.00
(25) 2.47 (−2) 1.30 (−6) 2.40 (−23) 1.93 (−69) 3.50

Table 6 One-step methods without and with memory applied to f5

Methods |x1 − α| |x2 − α| |x3 − α| |x4 − α| rc (57)



α = 1 + i 2, x0 = i γ0 = −0.1 p0 = 0.2
(1) 2.26 (−1) 2.99 (−2) 5.61 (−4) 1.91 (−7) 2.01
(3) 2.26 (−1) 2.57 (−2) 9.84 (−5) 1.63 (−10) 2.40
(55) 6.39 (−1) 2.17 (−1) 3.26 (−2) 8.65 (−4) 1.84
(56) 2.22 (−1) 3.82 (−3) 2.56 (−8) 7.76 (−24) 3.00
(4) 2.16 (−1) 2.66 (−2) 4.09 (−4) 9.53 (−8) 2.00
(25) 2.16 (−1) 1.99 (−3) 5.89 (−12) 3.44 (−41) 3.43
α ≈ 0.502 + i0.058, x0 = 0 γ0 = −0.01 p0 = −1
(1) 2.37 (−1) 2.57 (−2) 5.94 (−4) 3.73 (−7) 1.97
(3) 2.37 (−1) 4.45 (−3) 1.49 (−6) 7.21 (−15) 2.39
(55) 1.77 (−1) 8.12 (−3) 5.60 (−5) 2.67 (−9) 2.00
(56) 7.09 (−2) 1.02 (−3) 2.82 (−9) 5.93 (−26) 3.00
(4) 3.15 (−1) 1.27 (−1) 1.65 (−2) 1.96 (−4) 2.20
(25) 3.15 (−1) 3.23 (−3) 4.59 (−10) 2.74 (−32) 3.25

Fig. 1 The graph of f1(x) = (x − 1)(x^6 + x^{−6} + 4) sin(x^2)

Fig. 2 The graph of f2(x) = ∏_{i=1}^{20} (x − i) on the interval [5, 17]

Table 7 Two-step methods without and with memory applied to f1

Methods g (t) |x1 − α| |x2 − α| |x3 − α| rc (57)


α=1 x0 = −1.5 γ0 = −0.1 p0 = −0.01
(12) 1+t 6.36 (−3) 1.94 (−10) 2.48 (−40) 3.97
(44) 1+t 6.36 (−3) 1.47 (−15) 9.48 (−103) 6.90
(12) 1/(1 − t) 6.36 (−3) 6.15 (−10) 6.13 (−38) 3.99
(44) 1/(1 − t) 6.36 (−3) 1.47 (−15) 9.48 (−103) 6.90
α=1 x0 = 1.3 γ0 = −0.1 p0 = −0.1
(12) 1+t 2.14 (−4) 5.45 (−16) 2.31 (−62) 4.00
(44) 1+t 2.14 (−4) 2.50 (−25) 3.98 (−171) 6.96
(12) 1/(1 − t) 2.06 (−4) 8.29 (−16) 2.19 (−61) 4.00
(44) 1/(1 − t) 2.06 (−4) 1.80 (−25) 4.08 (−172) 6.96

Table 8 Two-step methods without and with memory applied to f2

Methods g (t) |x1 − α| |x2 − α| |x3 − α| rc (57)


α=2 x0 = 1.6 γ0 = −0.01 p0 = −5
(12) 1+t diverges
(44) 1+t 4.00 (−1) 1.01 (−1) 4.72 (−7) 6.55
(12) 1/(1 − t) 1.39 (−1) 3.18 (−2) 2.27 (−3) 1.58
(44) 1/(1 − t) 1.39 (−1) 3.18 (−2) 5.03 (−10) 10.50
α=8 x0 = 8.4 γ0 = −0.01 p0 = −5
(12) 1+t diverges
(44) 1+t 4.00 (−1) 8.72 (−2) 1.43 (−9) 17.52
(12) 1/(1 − t) converges to 7
(44) 1/(1 − t) converges to 7
α = 16 x0 = 16.4 γ0 = −0.01 p0 = −5
(12) 1+t diverges
(44) 1+t 4.00 (−1) 5.93 (−3) 1.40 (−16) 7.06
(12) 1/(1 − t) 2.13 (−2) 6.04 (−4) 4.49 (−7) 2.04
(44) 1/(1 − t) 2.13 (−2) 6.04 (−4) 2.55 (−23) 1.26

Table 9 Two-step methods without and with memory applied to f3

Methods g (t) |x1 − α| |x2 − α| |x3 − α| rc (57)


α=π x0 = 6 γ0 = −0.05 p0 = −0.05
(12) 1+t 3.48 (−3) 2.90 (−13) 1.39 (−53) 4.00
(44) 1+t 3.48 (−3) 2.33 (−19) 2.61 (−132) 6.98
(12) 1/(1 − t) 3.36 (−3) 2.61 (−13) 9.62 (−54) 4.00
(44) 1/(1 − t) 3.36 (−3) 2.06 (−19) 1.10 (−132) 6.99
α=π x0 = 7 γ0 = −0.05 p0 = −0.05
(12) 1+t 2.70 (−6) 1.05 (−25) 2.42 (−103) 4.00
(44) 1+t 2.70 (−6) 1.54 (−39) 1.48 (−273) 7.04
(12) 1/(1 − t) 2.70 (−6) 1.10 (−25) 3.04 (−103) 4.00
(44) 1/(1 − t) 2.70 (−6) 1.55 (−39) 1.53 (−273) 7.04
α=π x0 = 9 γ0 = −0.02 p0 = −0.08
(12) 1+t 1.81 (−1) 3.38 (−6) 4.70 (−25) 3.98
(44) 1+t 1.81 (−1) 6.48 (−11) 2.79 (−73) 6.59
(12) 1/(1 − t) 1.77 (−1) 3.39 (−6) 4.88 (−25) 3.98
(44) 1/(1 − t) 1.77 (−1) 3.76 (−11) 6.14 (−75) 6.59

Table 10 Two-step methods without and with memory applied to f4

Methods g (t) |x1 − α| |x2 − α| |x3 − α| rc (57)


α ≈ 0.29 − i1.24 x0 = −1 − 3i γ0 = −0.2 p0 = 0.2
(12) 1+t 7.41 (−2) 6.62 (−8) 4.08 (−32) 4.00
(44) 1+t 7.41 (−2) 1.76 (−10) 1.06 (−70) 6.98
(12) 1/(1 − t) 9.10 (−2) 1.56 (−7) 1.30 (−30) 4.00
(44) 1/(1 − t) 9.10 (−2) 3.63 (−10) 1.65 (−68) 6.90
α ≈ 0.29 − i1.24 x0 = −i/2 γ0 = −0.02 p0 = 0.2
(12) 1+t 1.01 (−3) 2.24 (−15) 5.32 (−62) 4.00
(44) 1+t 1.01 (−3) 1.37 (−22) 2.08 (−155) 7.04
(12) 1/(1 − t) 1.02 (−3) 2.43 (−15) 7.65 (−62) 4.00
(44) 1/(1 − t) 1.02 (−3) 1.43 (−22) 2.83 (−155) 7.04

Table 11 Two-step methods without and with memory applied to f5

Methods g(t) |x1 − α| |x2 − α| |x3 − α| rc (57)



α = 1 + i 2, x0 = i γ0 = −0.1 p0 = 0.2
(12) 1+t 5.10 (−2) 4.07 (−6) 1.51 (−22) 4.01
(44) 1+t 5.10 (−2) 3.23 (−10) 1.43 (−67) 7.00
(12) 1/(1 − t) 4.91 (−2) 2.60 (−6) 1.84 (−23) 4.01
(44) 1/(1 − t) 4.91 (−2) 2.68 (−10) 3.85 (−68) 7.00
α ≈ 0.502 + i0.058, x0 = 0 γ0 = −0.01 p0 = −1
(12) 1+t 1.34 (−1) 7.13 (−4) 8.99 (−13) 3.95
(44) 1+t 1.34 (−1) 9.60 (−8) 7.72 (−49) 6.71
(12) 1/(1 − t) 6.19 (−2) 5.04 (−5) 2.10 (−17) 4.02
(44) 1/(1 − t) 6.19 (−2) 8.17 (−9) 3.52 (−56) 6.90

Table 12 Methods without and with memory applied to f2 for random initial values and g(t) = 1 + t

Methods |x1 − α| |x2 − α| |x3 − α| rc (57) α


x0 = 3.5
(12), γ0 = −10−2 , p0 = 0.1 diverges
(44), γ0 = −10−2 , p0 = 0.1 5.00 (−1) 3.71 (−1) 1.05 (−5) 3.55 4
(55) 7.80 (−2) 1.25 (−2) 2.33 (−4) 2.39 4
(56) 2.23 (−1) 1.72 (−2) 1.28 (−5) 2.58 4
x0 = 4.5
(12), γ0 = −10−2 , p0 = 0.1 diverges
(44), γ0 = −10−2 , p0 = 0.1 1.50 (0) 1.16 (−2) 1.68 (−13) 4.94 4
(55) 2.23 (−1) 1.68 (−1) 1.81 (−2) 0.99 5
(56) 2.59 (−1) 2.57 (−2) 3.58 (−5) 2.65 5
x0 = 5.5
(12), γ0 = −10−2 , p0 = 0.1 diverges
(44), γ0 = −10−2 , p0 = 0.1 4.50 (0) 2.58 (−1) 5.91 (−5) 2.88 10
(55) diverges
(56) 2.97 (−1) 4.15 (−2) 1.30 (−4) 2.79 6
x0 = 6.5
(12), γ0 = −10−2 , p0 = 0.1 diverges
(44), γ0 = −10−2 , p0 = 0.1 4.50 (0) 2.73 (−1) 1.72 (−4) 3.76 11
(55) 3.18 (−1) 6.71 (−2) 3.39 (−3) 1.84 8
(56) 3.36 (−1) 7.09 (−2) 5.73 (−4) 3.08 7

As noted by Geum and Kim in [4], functions of the form of f4 and f5 arise from real-life problems related to steady-state heat flow, electrostatic potential and fluid flow. For example, the two-dimensional system
$$u(x, y) = -1 + x + \frac{x}{x^2 + y^2} + \sin x \cosh y, \qquad v(x, y) = 2 + y - \frac{y}{x^2 + y^2} + \cos x \sinh y,$$
related to the complex potentials u and v, can be transformed into the analytic function $f_4(z) = u(x, y) + i\,v(x, y) = z + \sin z + 1/z - 1 + 2i$ (z = x + iy), whose zeros can be determined by the proposed methods.
We have used the computational software package Mathematica with multiple-precision arithmetic. The errors |xk − α| of the approximations to the zeros are given in Tables 2–12, where A(−h) denotes A × 10^{−h}. These tables include the values of the computational order of convergence rc calculated by the formula [8]
$$r_c = \frac{\log |f(x_k)/f(x_{k-1})|}{\log |f(x_{k-1})/f(x_{k-2})|}, \qquad (57)$$
taking into consideration the last three approximations in the iterative process.
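A sketch of (57) in Python (mpmath is assumed here for the multiple-precision arithmetic; the sample residuals below are made up to illustrate a fourth-order decay):

from mpmath import mp, mpf, log, fabs

mp.dps = 100  # working precision in decimal digits

def comp_order(f_km2, f_km1, f_k):
    # r_c = log|f(x_k)/f(x_{k-1})| / log|f(x_{k-1})/f(x_{k-2})|, formula (57)
    return log(fabs(f_k / f_km1)) / log(fabs(f_km1 / f_km2))

print(comp_order(mpf('1e-3'), mpf('1e-12'), mpf('1e-48')))  # -> 4.0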
It is evident from Tables 2–12 that the approximations to the roots possess great accuracy when the proposed methods with memory are applied. Results of the third iteration in Tables 7, 8, 9, 10, 11 and 12 and of the fourth iteration in Tables 2–6 are given only to demonstrate the convergence speed of the tested methods; in most cases such accuracy is not required in practical problems at present. From Tables 3, 8 and 12 we observe that all tested methods work with difficulty (even the basic fourth-order method (12) diverges in three cases) when they are applied to the polynomial f2 of Wilkinson's type [14], which is, actually, a well-known fact from practice. We also notice that the values of the computational order of convergence rc somewhat differ from the theoretical ones for the methods with memory. This fact is not strange having in mind that formula (57) for rc is derived for methods without memory, for which rc usually fits the theoretical order very well.

Acknowledgement I would like to express my gratitude to the anonymous referee for helpful suggestions and valuable remarks.

References

1. Aberth, O.: Iteration methods for finding all zeros of a polynomial simultaneously. Math. Comput. 27, 339–344 (1973)
2. Džunić, J., Petković, M.S.: On generalized multipoint root-solvers with memory. J. Comput. Appl. Math. 236, 2909–2920 (2012)
3. Džunić, J., Petković, M.S., Petković, L.D.: Three-point methods with and without memory for solving nonlinear equations. Appl. Math. Comput. 218, 4917–4927 (2012)
4. Geum, Y.H., Kim, Y.I.: A multi-parameter family of three-step eighth-order iterative methods locating a simple root. Appl. Math. Comput. 215, 3375–3382 (2010)
5. Jenkins, M.A., Traub, J.F.: Principles for testing polynomial zero-finding programs. ACM Trans. Math. Softw. 1, 26–34 (1975)
6. Kung, H.T., Traub, J.F.: Optimal order of one-point and multipoint iteration. J. ACM 21, 643–651 (1974)
7. Ostrowski, A.M.: Solution of Equations and Systems of Equations. Academic Press, New York (1960)
8. Petković, M.S.: Remarks on "On a general class of multipoint root-finding methods of high computational efficiency". SIAM J. Numer. Anal. 49, 1317–1319 (2011)
9. Petković, M.S., Džunić, J., Petković, L.D.: A family of two-point methods with memory for solving nonlinear equations. Appl. Anal. Discrete Math. 5, 298–317 (2011)
10. Petković, M.S., Yun, B.I.: Sigmoid-like functions and root finding methods. Appl. Math. Comput. 204, 784–793 (2008)
11. Rice, J.R.: A set of 74 test functions for nonlinear equation solvers. Computer Science Technical Reports 69-034, paper 269 (1969)
12. Steffensen, J.F.: Remarks on iteration. Skand. Aktuarietidskr. 16, 64–72 (1933)
13. Traub, J.F.: Iterative Methods for the Solution of Equations. Prentice-Hall, Englewood Cliffs (1964)
14. Wilkinson, J.H.: Rounding Errors in Algebraic Processes. Prentice-Hall, Englewood Cliffs (1963)
15. Yun, B.I.: A non-iterative method for solving nonlinear equations. Appl. Math. Comput. 198, 691–699 (2008)
16. Yun, B.I., Petković, M.S.: Iterative methods based on the signum function approach for solving nonlinear equations. Numer. Algorithms 52, 649–662 (2009)
17. Yun, B.I.: Iterative methods for solving nonlinear equations with finitely many roots in an interval. J. Comput. Appl. Math. 236, 3308–3318 (2012)
