A New Splitting Technique for Solving Nonlinear Equations by an Iterative Scheme

Using a new technique, the nonlinear equation is recast as a coupled system consisting of a linear equation and a nonlinear equation. For the latter, we split the nonlinear term with a weight factor into two parts on both sides of the equation. When the two-dimensional nonlinear system is linearized around the iteration point into a linear system, we can easily solve it and develop a fast convergent iterative scheme for solving nonlinear equations. To further enhance the convergence speed, a linear term is added on both sides of the first linear equation, which results in a very powerful iterative scheme whose parameter is analyzed through the eigenvalues. The new iterative scheme is proved to be absolutely convergent, and the number of iterations needed for convergence is estimated. The merits of the present iterative scheme are that it is insensitive to the initial guess of the solution, converges very fast, and does not need the derivatives of the function.


Introduction
In this paper, we derive a fast convergent iterative scheme to solve a scalar nonlinear equation:
f(x) = 0, (1)
where we do not need the derivative of f, which means that f is allowed to be a non-differentiable function of x.
The requirement of solving Equation (1) arises frequently in applications of engineering and science. For instance, we have to solve
cos x cosh x + 1 = 0, (2)
to determine the natural frequencies of the free vibration of a uniform cantilever beam [Liu & Li (2017)]. Equation (2) is a transcendental equation, and a closed-form solution does not exist. Indeed, for most nonlinear equations, only numerical methods can help us find approximate solutions.
Given an initial guess x_0, from Equation (1) follows the iterative Newton method (NM):
x_{k+1} = x_k − f(x_k)/f′(x_k), (3)
which is known to be quadratically convergent. Nevertheless, it is well known that the NM is sensitive to the initial guess and may confront a dangerous situation in the fraction term when f′(x_k) tends to zero, in which case the NM diverges.
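As a minimal sketch (our own illustration, not an example from the paper), the NM of Equation (3) applied to f(x) = x² − 2 converges rapidly from x_0 = 1:

```python
# Newton's method x_{k+1} = x_k - f(x_k)/f'(x_k), illustrated on f(x) = x^2 - 2.
def newton(f, fp, x0, eps=1e-15, max_iter=500):
    """Iterate the NM until |x_{k+1} - x_k| <= eps; return (root, iterations)."""
    x = x0
    for k in range(1, max_iter + 1):
        x_new = x - f(x) / fp(x)   # dangerous when fp(x) tends to zero
        if abs(x_new - x) <= eps:
            return x_new, k
        x = x_new
    return x, max_iter

root, ni = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
```

From x_0 = 1 the iterates 1.5, 1.41667, 1.4142157, ... reach sqrt(2) to machine precision within a handful of steps, exhibiting the quadratic convergence mentioned above.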

A New Splitting Technique
Without losing any generality, f(x) in Equation (1) can be decomposed as
f(x) = f_1(x) x + a x − b, (4)
where f_1(x) x is the nonlinear part of f, while a x − b is the linear part of f.
Let
y = f_1(x) x (5)
be a new variable. Then, from Equations (1), (4) and (5) it follows that
a x + y = b, (6)
y = f_1(x) x, (7)
which are a coupled system of a linear equation and a nonlinear equation for (x, y). The intersection point of the straight line (6) and the curve (7) is just the solution of Equation (1).
For Equation (7), we propose a new splitting technique by
(1 − w) f_1(x) x − y = −w f_1(x) x, (8)
in which the nonlinear term f_1(x) x is split with a weight into two parts on both sides of the equation, and where w is a weight parameter, which will be discussed below.
Next, we develop an iterative scheme to solve Equations (6) and (8). Knowing x_k at the kth step, we consider
a x + y = b, (9)
(1 − w) f_1(x_k) x − y = −w f_1(x_k) x_k, (10)
where we have linearized Equation (8) around x_k, such that Equations (9) and (10) are a linear system for (x, y). In doing so, we can seek the next x_{k+1} by solving
[ a, 1; (1 − w) f_1(x_k), −1 ] [x; y] = [ b; −w f_1(x_k) x_k ], (11)
which readily leads to
x_{k+1} = [b − w f_1(x_k) x_k] / [a + (1 − w) f_1(x_k)]. (12)
Up to here, we no longer need the variable y, because the iterative scheme (12) is self-contained for x.
Multiplying the denominator and numerator on the right-hand side of Equation (12) by x_k and using
f_1(x_k) x_k = f(x_k) − a x_k + b,
obtained from Equation (4), we can further refine Equation (12) to
x_{k+1} = [b − w(f(x_k) − a x_k + b)] x_k / [a x_k + (1 − w)(f(x_k) − a x_k + b)], (13)
which is a new iterative scheme to solve Equation (1).
If a = 0, Equations (6) and (7) return to the original nonlinear Equation (1); meanwhile, Equation (13) is not workable when f(x_k) tends to −b, leading to the divergence of the iterative scheme, or when also b = 0, yielding the incorrect iteration x_{k+1} = w x_k/(w − 1). To remedy this shortcoming, we can add a term cx on both sides of Equation (6), which renders a modification of Equation (13) to
x_{k+1} = [b + c x_k − w(f(x_k) − a x_k + b)] x_k / [(a + c) x_k + (1 − w)(f(x_k) − a x_k + b)], (14)
where c is a given parameter. Obviously, the above iterative scheme satisfies the basic requirement that x_{k+1} → x_k when f(x_k) → 0. Later, from the analysis of eigenvalues we will examine the advantage obtained by supplementing cx on both sides, even when a is not zero.
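To make the algorithm concrete, here is a minimal Python sketch of the scheme of Equation (14) (our own illustration; the stopping tolerance is loosened from the paper's 10^−15 to suit double precision). The test problem is the cantilever-beam Equation (2), cos x cosh x + 1 = 0, with the decomposition a = 0, b = −1 and the parameters w = −2, c = −4, x_0 = 1 used in Section 3; the root is x = 1.875104068711961.

```python
# Sketch of the splitting-based scheme of Equation (14):
#   x_{k+1} = [b + c*x_k - w*g] * x_k / [(a + c)*x_k + (1 - w)*g],
# where g = f(x_k) - a*x_k + b = f_1(x_k)*x_k by Equation (4).
# Tolerances are looser than the paper's 1e-15 to stay within double precision.
import math

def split_iterate(f, x0, a, b, w, c, eps=1e-12, max_iter=500):
    """Return (approximate root, number of iterations)."""
    x = x0
    for k in range(1, max_iter + 1):
        g = f(x) - a * x + b
        x_new = ((b + c * x - w * g) * x) / ((a + c) * x + (1.0 - w) * g)
        if abs(x_new - x) <= eps and abs(f(x_new)) <= 100 * eps:
            return x_new, k
        x = x_new
    raise RuntimeError("no convergence within max_iter")

# Cantilever-beam equation (2): cos(x)cosh(x) + 1 = 0, root 1.875104068711961.
f = lambda x: math.cos(x) * math.cosh(x) + 1.0
root, ni = split_iterate(f, x0=1.0, a=0.0, b=-1.0, w=-2.0, c=-4.0)
```

Note that the iteration needs no derivative of f; only evaluations of f itself enter the update.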
Remark 1. Since x = 0 is a fixed point of the iterative schemes (13) and (14), in general we cannot choose x_0 = 0 as the initial guess, unless x = 0 is a trivial solution of f(x) = 0. If we take w = 1 and a = 1, the iterative scheme (13) reduces to the fixed point method:
x_{k+1} = x_k − f(x_k). (15)
It is obtained from Equation (1) by writing it as x = x − f(x) and setting the left x to x_{k+1} and the right x to x_k. In general, the iterative scheme with w = 1 is not convergent. From this aspect, Equation (8), where we split the nonlinear term on both sides of the equation through the weight factor w, is more clever than the technique in Equation (15).
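To see why the plain fixed point method (15) generally fails, consider a minimal illustration (our own example, not from the paper): for f(x) = x² − 2 with x_0 = 1, the iteration x_{k+1} = x_k − f(x_k) falls into a two-cycle instead of converging.

```python
# Fixed point method x_{k+1} = x_k - f(x_k) on f(x) = x^2 - 2, starting at x0 = 1.
# The iterates are 1 -> 2 -> 0 -> 2 -> 0 -> ..., a two-cycle: no convergence.
def fixed_point_orbit(f, x0, n):
    xs = [x0]
    for _ in range(n):
        xs.append(xs[-1] - f(xs[-1]))
    return xs

orbit = fixed_point_orbit(lambda x: x * x - 2.0, 1.0, 6)
```

The weight w in Equation (8) is precisely what breaks such cycling behavior.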
Now, we specify how to choose the parameter c, which acts as another acceleration parameter. As mentioned above, we have added the term cx on both sides of Equation (9), such that the coefficient matrix in Equation (11) is changed accordingly.
The eigenvalues of the coefficient matrix decide the convergence behavior and its speed. Through some operations, we can find the two eigenvalues:
λ_1 = {a + c − 1 + √[(a + c − 1)² + 4(a + c) + 4(1 − w) f_1(x_k)]}/2, (17)
λ_2 = {a + c − 1 − √[(a + c − 1)² + 4(a + c) + 4(1 − w) f_1(x_k)]}/2. (18)
Since we need to invert the coefficient matrix to obtain the iterative scheme (14), for the convergence of the iterative scheme the values of λ_1 and λ_2 must satisfy
|λ_1| > 1, |λ_2| > 1. (19)
For the special case with w = 1, we have λ_1 = c and λ_2 = −1, which is likely to be divergent, depending on the initial value x_0 and c.
In particular, the number of iterations is greatly influenced by λ_1, wherein |λ_2| > 1 and λ_2 is close to −1 when c > 0. Therefore, we have the following estimation of the number of iterations for convergence:
N ≈ ln(1/ε) / ln |λ_1|, (20)
where ε is the tolerance for the error of the solution. Selecting a suitable value of c is a trade-off between the convergence behavior and the accuracy. A larger value of c increases the absolute values of λ_1 and λ_2 and, as a consequence, enhances the convergence speed; however, it also causes a loss of accuracy, due to the operation of adding cx on both sides of Equation (9), which, in order to achieve the same convergence criterion, might need more iterations.
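The geometric-decay estimate N ≈ ln(1/ε)/ln|λ_1| of Equation (20) can be checked numerically with the values λ_1 = 10.27 and ε = 10^−15 quoted for Example 1 below:

```python
# Estimate of the number of iterations N ~ ln(1/eps) / ln|lambda_1|,
# evaluated for eps = 1e-15 and lambda_1 = 10.27 (values quoted in Example 1).
import math

def estimated_iterations(lam1, eps):
    return math.ceil(math.log(1.0 / eps) / math.log(abs(lam1)))

N = estimated_iterations(10.27, 1e-15)  # about 15, as stated in the text
```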

Numerical Tests
In this section, we give some examples to assess the performance of the newly proposed iterative scheme and to analyze its mechanism. For some cases we compare the computed results to the Newton method (NM) and other methods under the same convergence criteria:
|x_{k+1} − x_k| ≤ ε and |f(x_{k+1})| ≤ ε.
We fix ε = 10^−15 for all tests, and the maximum number of iterations is set to 500. Sometimes the NM may lead to a spurious solution, satisfying the first criterion but not the second one. In order to rule out spurious solutions, both criteria are used here.
Example 1. Consider a nonlinear equation which has two solutions, x = −10 and x = 3. Here, a = 0 and b = 1.
Given x_0 = 4, the NM converges in 20 iterations to the solution x = 3, while the method of Bahgat (2012), abbreviated as BM, converges in 8 iterations. However, when we take x_0 < 2.6, both the NM and the BM are divergent. With x_0 = 2.6, the NM converges in 314 iterations, while the BM is divergent. The present method with w = −8 and c = 10 converges in 10 iterations. In this case, λ_1 = 10.27 and λ_2 = −1.27. According to Equation (20), the number N of iterations for convergence is about N = 15. In practice, the actual number of 10 iterations is slightly smaller than the estimate.
We list the maximum error (ME) in satisfying f = 0 with the root x = 3 and the number of iterations (NI) for different values of w and x_0 in Table 1. If we take c = 0.1, then λ_1 = 0.117, and consequently the present method is divergent. If we take w = 1 and c = 13.4, then λ_1 = 13.5 and λ_2 = −1, which make the present method divergent with x_0 = 4, and convergent in 39 iterations with x_0 = 1. Basically, w = 1 is not suggested.
For f = 0 with the root x = −10, we list the NI for different values of w and x_0 in Table 2, which are compared to the NM and the BM. Tables 1 and 2 show that the NI of the present method is quite uniform. With w = 1, the present method is not applicable; a value of w with w < 1 is suggested. From these discussions we can observe that the present method is insensitive to the initial value, but both the NM and the BM are sensitive to the initial value.

Remark 2. From Equations (17) and (18), we can observe that when c is given and fixed, we can take a more negative value of w to increase the absolute values of λ_1 and λ_2. This is a simple strategy to choose the value of w. For example, instead of w = −2 in Table 2, we can take w = −5; then the NIs reduce from 35 and 36 to 29. In Section 4, we will derive a more delicate method to determine w, which is allowed to be a function of x.

Example 2. In order to accelerate the convergence of the present method, we can also choose a suitable value of c, even when a ≠ 0. Let us consider an equation for which x = −10 is a solution. Here, a = 1 and b = −9.
Let x_0 = −20. It is interesting that the present method with w = −1 and c = −20 converges very fast, in only two iterations, to the solution x = −10, while the NM converges in 237 iterations and the BM is divergent.
If we take the initial value to be x_0 = 10, the three iterative schemes all obtain another solution x = −9.00000614462766 of Equation (23), with 9 iterations for the present method (with c = 0), 144 iterations for the NM, and 51 iterations for the BM, respectively.
Example 3. Consider an equation for which x = 1 is a solution. Here, a = 1 and b = 4 in the present problem.
Let x_0 = 0.5. The present method with w = −3 and c = 0 converges in 30 iterations, while the NM converges in 8 iterations. It is remarkable that when we take c = −1, the present method with w = −2 converges in 6 iterations, slightly faster than the NM.
Example 4. Consider Equation (2), for which x = 1.875104068711961 is a solution. We take a = 0, b = −1, w = 0.9 and c = −4.1 in the present method.
In Table 3, we list the NI and the ME for different values of x_0, which are compared to the NM and the BM. It can be seen that the present method converges very fast and is also insensitive to the initial value x_0.

In order to demonstrate the convergence mechanism of the present method, we consider two cases, (a) c = −4 and (b) c = −5, with the same w = −2 and x_0 = 1. For case (a), the present method converges in 28 iterations, as shown by the solid line in Fig. 1(a), while for case (b), the present method converges in 35 iterations, as shown by the dashed line in Fig. 1(a). After the initial point at the first step, both residual curves of f are monotonically convergent to zero. At the same time, the orbits of x, as shown in Fig. 1(b), are monotonically convergent to the true solution r = 1.875104068711961.
Upon letting η_k = (x_{k+1} − x_k)/x_k, from Equation (14) we can derive the explicit expression of η_k. We plot η_k in Fig. 1(b), where we can observe two mechanisms of approaching the solution: η_k < 0 and η_k > 0. They indicate that x_k is a monotonically decreasing sequence for case (a), while for case (b) it is a monotonically increasing sequence. Now, we are in a good position to prove the following result.
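The monotone mechanism can be checked numerically. The sketch below (our own illustration, assuming η_k denotes the relative step (x_{k+1} − x_k)/x_k and the update form of Equation (14)) runs case (a) on the cantilever Equation (2) with a = 0, b = −1, w = −2, c = −4, x_0 = 1: after the first overshooting step, all η_k are negative, i.e., x_k decreases monotonically toward the root.

```python
# Relative steps eta_k = (x_{k+1} - x_k)/x_k for the scheme of Equation (14)
# on cos(x)cosh(x) + 1 = 0 with a = 0, b = -1, w = -2, c = -4, x0 = 1 (case (a)).
import math

def eta_sequence(f, x0, a, b, w, c, n):
    etas, x = [], x0
    for _ in range(n):
        g = f(x) - a * x + b
        x_new = ((b + c * x - w * g) * x) / ((a + c) * x + (1.0 - w) * g)
        etas.append((x_new - x) / x)
        x = x_new
    return etas, x

f = lambda x: math.cos(x) * math.cosh(x) + 1.0
etas, x_final = eta_sequence(f, 1.0, 0.0, -1.0, -2.0, -4.0, 10)
```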
Theorem 1. The iterative scheme in Equation (14) for solving f (x) = 0 is absolutely convergent, for some suitable values of the parameters w and c.
Proof. Let r be a true solution of f(x) = 0, i.e., f(r) = 0. We define the error e_k = x_k − r, and it follows that e_{k+1} − e_k = x_{k+1} − x_k.
With a suitable choice of the values of w and c, the required inequalities are easily proved from the expression of η_k. Without losing generality, we assume that the sequence x_k satisfies x_k > 0 and that the real root satisfies r > 0. We first consider case (a).
Applying this inequality N − 1 times, we can derive a product bound. Due to 0 < 1 + η_i < 1, i = 2, . . . , N, the left-hand sequence converges very fast to zero; hence, e_{N+1} = e_N, which implies x_{N+1} = x_N. As a consequence, the iterative scheme is absolutely convergent for this case.
Then, we consider case (b). Since the sequence x_k has an upper bound with x_k < r, we merely need to prove that x_k is a strictly increasing sequence, that is, x_{k+1} > x_k. Due to x_k > 0, it suffices to prove that η_k > 0. In the expression of η_k we have f(x_k) > 0; thus we need to prove an inequality, which can be achieved by selecting c suitably, where w < 1. Taking x_0 into the right-hand side, the choice c = −aw + (w − 1)[b + f(x_0)]/x_0 − 1 makes the above inequality hold.
Then the sequence x_k converges very fast to the true solution x = r. Hence, the iterative scheme is absolutely convergent also for case (b). □

Function of w(x)
From the derivations in Section 2, we can observe that w can be a function of x in Equation (8). For the determination of w(x), we let r be a simple solution of f(x) = 0, i.e., f(r) = 0 and f′(r) ≠ 0. We suppose that after N_0 iterations of Equation (14) the iterative scheme provides a solution sufficiently close to the exact one r, and we define e_k = x_k − r. Then, by using the Taylor series we can expand f(x_k) around r. Introducing a quantity η, we hope η to be small. Some operations lead to an equation for w. (Journal of Mathematics Research, Vol. 12, No. 4, 2020.) To require an n-order convergence rate, we take the leading error term to be of order ϵ^n. Therefore, we can derive
a w x_k ϵ^{n−1} + (1 − w) ϵ^n + (1 − w) b ϵ^{n−1} + c x_k ϵ^{n−1} = 1,
which generates an explicit formula for w(x). Hence, w may obey the following rule: for the first N_0 iterations the constant weight w_0 is used, and afterwards the x-dependent weight w(x_k) is used. If we take N_0 = 500, the maximum number of iterations, we return to the original iterative scheme. If we take N_0 = 0, then we fully use the new technique in the iterative scheme. Otherwise, it is a mixed-type iterative scheme.
For Example 3, we apply this modified method with c = −2, x_0 = 0.5, w_0 = −2 and n = 3, and we list the NI for different values of ϵ and N_0 in Table 4. It can be seen that the new strategy is effective, though it is sensitive to the value of ϵ. The original iterative scheme needs 18 iterations, while the new one with N_0 = 0 and ϵ = 0.315 needs only 10 iterations.
In order to further compare the iterative scheme (14) with other methods, we consider several test functions, among them f_2(x) = sin² x − x² + 1 and f_5(x) = x³ − 10.
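As an illustration for f_5 (our own parameter choice, assuming the update form of Equation (14)): with the decomposition a = 0, b = 10, so that f_1(x)x = x³, and with w = −2, c = 0, the update simplifies to x_{k+1} = (2x_k³ + 10)/(3x_k²), and the iteration converges quickly to 10^{1/3}:

```python
# Scheme of Equation (14) applied to f_5(x) = x^3 - 10 with a = 0, b = 10,
# w = -2, c = 0; the update then simplifies to x_{k+1} = (2x^3 + 10)/(3x^2).
def solve_f5(x0, eps=1e-14, max_iter=500):
    w, c, a, b = -2.0, 0.0, 0.0, 10.0
    x = x0
    for k in range(1, max_iter + 1):
        g = (x**3 - 10.0) - a * x + b           # g = f(x) - a*x + b = x^3
        x_new = ((b + c * x - w * g) * x) / ((a + c) * x + (1.0 - w) * g)
        if abs(x_new - x) <= eps:
            return x_new, k
        x = x_new
    return x, max_iter

root, ni = solve_f5(2.0)
```

For this particular parameter choice the simplified update happens to coincide with the NM map for x³ = 10, so fast convergence from x_0 = 2 is expected.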
In Table 5, for different functions we list the NI obtained by the present algorithm (14), which is compared to the NM and the method of Noor et al. (2006).

Conclusions
Employing a new idea, the nonlinear equation was recast as a coupled system of a linear equation and a nonlinear equation. With a weight factor w, we split the nonlinear term into two parts on both sides of the equation. Through the consideration of the eigenvalues, a linear term cx was also added on both sides of the first linear equation. Then, we could easily derive the iterative scheme by solving the linearized system. The iterative scheme was proven to be absolutely convergent, and the number of iterations for convergence was analyzed and estimated by using the eigenvalues. Upon comparing four examples with other methods, we found that the present iterative scheme possesses three major advantages: it is insensitive to the initial value, converges very fast, and does not need the derivatives of the function. Further tests on five functions revealed that the present iterative scheme is competitive with cubically convergent iterative schemes, as well as with fourth-order convergent iterative schemes.