An Improved Secant Algorithm of Variable Order to Solve Nonlinear Equations Based on the Disassociation of Numerical Approximations and Iterative Progression

We propose an iterative method to evaluate the roots of nonlinear equations. This Secant-based technique approximates the derivatives of the function numerically through a constant discretization step h disassociated from the iterative progression. The algorithm is developed, implemented, and tested. Its order of convergence is found to be h-dependent. The results obtained corroborate the theoretical deductions and evidence its excellent behavior. For infinitesimal h-values, the algorithm accelerates the convergence of the Secant method to order 2 (that of the Newton-Raphson method) with no need for the analytic expression of the derivatives (the advantage of the Secant method).

The objective of this work is to develop a technique able to increase the order of convergence of the Secant method while maintaining the finite-difference concept for the derivatives and thus the user-friendly character of the method, avoiding the need for finding the derivative of the function defining the equation. To this end, we develop a new kind of algorithm by introducing a constant parameter, similar to a discretization step, for the definition of the backward finite-difference formula, which is disassociated from the sequence of roots obtained by the iterations. The convergence of the new algorithm depends on the value of this parameter. The analysis shows that this technique improves the convergence obtained with the Secant method and can even match the order of convergence of the Newton-Raphson method.
Section 2 develops the new algorithm, evaluates its order of convergence, and describes its computational implementation. Section 3 tests this algorithm by solving several nonlinear equations, discusses the results and gives the conclusions of this work.

Second-Order Iterative Method
We consider the nonlinear function f: C → R of the variable x, assumed to be sufficiently differentiable on the open interval C, and the simple root x_0 in C of the equation

f(x) = 0.    (1)

An iterative numerical method evaluates an approximation of x_0 at each iteration n, x_0^n, from the preceding one at iteration n-1, x_0^{n-1}, which is then known, starting from an initial estimation of the root, x_0^0 (Mathews & Fink, 1999). This procedure ends after N iterations by finding the approximate root x_0^N that verifies the given tolerance criterion for the first time. The Taylor expansion of f about the iterate x_0^{n-1} reads

f(x) = \sum_{i=0}^{\infty} \frac{f^{(i)}(x_0^{n-1})}{i!} (x - x_0^{n-1})^i,    (2)

in which the upper index (i) means the derivative of order i. Note that in the following the first-, second-, and third-order derivatives of f will be denoted by f', f'', and f'''. The truncation of the second- and higher-order terms in Eq. (2), once inserted in Eq. (1), leads to the well-known second-order Newton-Raphson iterative method (Mathews & Fink, 1999), denoted by NRO2 here:

x_0^n = x_0^{n-1} - \frac{f(x_0^{n-1})}{f'(x_0^{n-1})}.    (3)

This equation means that the root at each iteration is given by the intersection of the tangent of the function (first derivative) with the x-axis, i.e., by the linearization of the function.
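For illustration, the NRO2 iteration can be sketched as follows. This is a minimal Python transcription (the paper's own implementation, shown in Fig. 1, is written in Matlab); the function names and the stopping rule on successive iterates are our assumptions:

```python
def nro2(f, fprime, x0, tol=1e-12, max_iter=100):
    """Newton-Raphson (NRO2): x_0^n = x_0^{n-1} - f(x_0^{n-1}) / f'(x_0^{n-1})."""
    x = x0
    for n in range(1, max_iter + 1):
        x_new = x - f(x) / fprime(x)   # intersection of the tangent with the x-axis
        if abs(x_new - x) <= tol:      # tolerance criterion on successive iterates
            return x_new, n
        x = x_new
    raise RuntimeError("NRO2 did not converge within max_iter iterations")
```

Note that the analytic derivative fprime must be supplied by the user, which is precisely the drawback addressed in the next subsection.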

First-Order Finite-Difference Algorithm for the Second-Order Iterative Method
The major drawback of the method described in Section 2.1, as for all the iterative methods whose roots depend on the derivatives of the function f, is the need for the analytic expression of the derivatives of f. The numerical approximation of the derivatives involved in these methods is an option to get rid of this drawback.
The substitution of the first derivative of f in the NRO2 formula, Eq. (3), by a first-order backward finite-difference formula (Smith, 1986),

f'(x_0^{n-1}) \approx \frac{f(x_0^{n-1}) - f(x_0^{n-2})}{x_0^{n-1} - x_0^{n-2}},

between two consecutive iterative points leads to the well-known Secant algorithm (Mathews & Fink, 1999), denoted by o1NRO2 here:

x_0^n = x_0^{n-1} - f(x_0^{n-1}) \frac{x_0^{n-1} - x_0^{n-2}}{f(x_0^{n-1}) - f(x_0^{n-2})}.    (4)

This method requires two initial root estimations, x_0^0 and x_0^1, and starts the evaluation of the approximations from the 2nd iteration.
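The Secant scheme o1NRO2 can likewise be sketched in Python (a hedged transcription; the names and the stopping rule are ours):

```python
def o1nro2(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant method (o1NRO2): the derivative in NRO2 is replaced by a
    first-order backward finite difference between two consecutive iterates."""
    x_prev, x = x0, x1
    for n in range(2, max_iter + 1):   # approximations start from the 2nd iteration
        slope = (f(x) - f(x_prev)) / (x - x_prev)  # iteration-imposed discretization
        x_prev, x = x, x - f(x) / slope
        if abs(x - x_prev) <= tol:
            return x, n
    raise RuntimeError("o1NRO2 did not converge within max_iter iterations")
```

Note that the divided difference uses whatever distance the iterative progression imposes between the two points, which is the drawback that motivates the disassociation introduced in Section 2.3.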

Disassociation of Discretization and Iterations
In o1NRO2, Eq. (4), the discretization used for the calculation of the derivatives is imposed by the iterative process. This discretization is not constant during the procedure; it changes from one iteration to another and thus is not controlled by the user. It can even lead to very bad approximations and erroneous evaluations of the derivative when two consecutive iteration points are separated by a large distance; in this case, the finite-difference approximation cannot capture the local character of the derivative.
We propose an alternative to this method to get rid of this major drawback. It consists of disassociating the discretization used for the finite-difference approximation from the iterative progression. This technique approximates the required derivative during the current iteration through an imposed discretization step, h, controlled by the user, only from the last value of the approximate root, evaluated during the preceding iteration. This h-based discretization preserves the local character of the derivative during the approximation by using two proximate points separated by the discretization step h. It thus removes an important drawback of o1NRO2, namely its dependence, for the evaluation of this derivative, on the arbitrary distance between values of the approximate root evaluated during former iterations.
We apply this development to the o1NRO2 method, Eq. (4), by considering two equidistant points belonging to the same iteration: the backward finite-difference formula is evaluated between x_0^{n-1} and x_0^{n-1} - h, so that the iterative scheme becomes

x_0^n = x_0^{n-1} - \frac{h \, f(x_0^{n-1})}{f(x_0^{n-1}) - f(x_0^{n-1} - h)}.    (5)

Only the initial estimation x_0^0 has to be set; the other point required by the finite difference is evaluated at each iteration from the current iterate through the discretization step h. This method is named ho1NRO2.
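A minimal Python sketch of the resulting ho1NRO2 scheme, Eq. (5), may read as follows (the paper's implementation is the Matlab code of Fig. 1; the names, defaults, and stopping rule here are our assumptions):

```python
def ho1nro2(f, x0, h=1.724446e-9, tol=1e-12, max_iter=100):
    """ho1NRO2: Newton-type iteration whose derivative is approximated by a
    backward finite difference with a fixed, user-controlled step h,
    disassociated from the iterative progression."""
    x = x0
    for n in range(1, max_iter + 1):
        slope = (f(x) - f(x - h)) / h   # local derivative from two proximate points
        x_new = x - f(x) / slope        # equivalently x - h*f(x)/(f(x) - f(x - h))
        if abs(x_new - x) <= tol:
            return x_new, n
        x = x_new
    raise RuntimeError("ho1NRO2 did not converge within max_iter iterations")
```

Only one initial estimation x0 is required; the second point of the finite difference, x - h, is rebuilt at every iteration from the current iterate. The default h is the step value used in the paper's tables (Section 3).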

Order of Convergence
The convergence of NRO2, Eq. (3), is known to be of order 2. The convergence of o1NRO2, Eq. (4), is known to be of order 1.618 (the golden ratio) (Mathews & Fink, 1999). Now the order of convergence of the disassociated technique ho1NRO2, Eq. (5), is analyzed.
The development in Section 2.3 gives rise to a two-parameter dependence of the method: on the iteration index n, through the iterative progression, and on the new parameter h, through the definition of the derivatives at each iteration of the numerical scheme. It is thus expected that the order of convergence of the algorithm will be h-dependent as well.
Definition 1. Let E_n = |x_0^n - x_0| be the error of the sequence of approximations at iteration n. If

\lim_{n \to \infty} \frac{E_{n+1}}{E_n^k} = K, with 0 < K < \infty,    (6)

then k is the order of convergence of the sequence and K is the asymptotic error constant.

Theorem 1. For infinitesimal values of h, the order of convergence of ho1NRO2, Eq. (5), is k = 2.

Proof. After subtracting x_0 from both sides of the ho1NRO2 scheme, Eq. (5), written at iteration n+1, and expanding f in Taylor series, Eq. (2), about the root x_0, the leading terms give the error recursion

E_{n+1} \approx \left| \frac{f''(x_0)}{2 f'(x_0)} \right| |E_n - h| \, E_n.

Assuming the convergence of the sequence, for h tending to zero the last relation gives us

\frac{E_{n+1}}{E_n^2} \to \left| \frac{f''(x_0)}{2 f'(x_0)} \right|,

which, by Def. 1, Eq. (6), proves that k = 2, i.e., the order of convergence of ho1NRO2, Eq. (5), is 2 for infinitesimal values of h. □

Theorem 1 states that the parameter h introduced in the algorithm makes it possible, when it tends to zero, to obtain the same order of convergence as NRO2, although the derivative of the function defining the nonlinear equation is not evaluated exactly, but by an h-dependent numerical approximation. Note also that for finite h and E_n much smaller than h, the recursion above gives E_{n+1}/E_n -> |f''(x_0) h / (2 f'(x_0))|, i.e., linear convergence of order k = 1 with a rate proportional to h. The new h-dependent ho1NRO2 algorithm thus possesses the benefits of both NRO2 and o1NRO2, i.e., the convergence speed of NRO2 and the fact that, like o1NRO2, it avoids the exact expression of the derivative of the function defining the nonlinear equation.
Moreover, it will be seen in Section 3 that a transient range of h values exists, for which the order of convergence of ho1NRO2 is between 1 and 2.
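The h-dependence of the order can be probed numerically. The following sketch (our own, not from the paper) estimates k as the least-squares slope of log E_{n+1} versus log E_n for the quadratic function of Section 3.1:

```python
import math

def ho1nro2_errors(f, x_exact, x0, h, n_iter=25):
    """Iterate ho1NRO2 and record the error sequence E_n = |x_0^n - x_0|."""
    x, errors = x0, []
    for _ in range(n_iter):
        slope = (f(x) - f(x - h)) / h
        x = x - f(x) / slope
        errors.append(abs(x - x_exact))
        if errors[-1] == 0.0:
            break
    return errors

def estimate_order(errors):
    """Least-squares slope of log E_{n+1} vs log E_n approximates k."""
    pairs = [(math.log(a), math.log(b))
             for a, b in zip(errors, errors[1:])
             if a > 1e-12 and b > 1e-12]   # discard values at floating-point floor
    xs = [p[0] for p in pairs]
    ys = [p[1] for p in pairs]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

With a coarse step such as h = 0.5 the fitted slope stays close to 1 (linear regime), whereas with h = 10^-9 it approaches 2, in line with Theorem 1.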

Implementation
The implementation of the new algorithm ho1NRO2 in the Matlab® R2017a environment is represented in Fig. 1.

Results and Discussion
The new h-dependent iterative algorithm developed in Section 2, ho1NRO2, is compared to the classic methods NRO2, o1NRO2, and VM by solving several nonlinear equations defined by quadratic, cubic, exponential, and logarithmic functions. Their results and performance in terms of speed of convergence are analyzed. The graphical representations of these functions and their first derivatives are displayed in Fig. 2, which shows the roots we approximate in the following Sections. The value h = 1.724446×10^-9 is used for ho1NRO2 (see Section 3.5). In the tables below, bold numbers indicate the initial estimation point used for each algorithm (the entire number just above the root we seek). The corresponding percentage relative errors, E_n^{r%} = 100 E_n / x_0, are given in Tables 1bis, 2bis, 3bis, and 4bis.

Quadratic Function
We consider f(x) = 2x^2 - 1 and we aim to approximate its root x_0 close to 0.7071 (see Fig. 2a). Table 1 gives the results.
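As a quick cross-check of the behavior reported in Table 1, the following self-contained sketch (ours; the stopping rule and names are assumptions) runs NRO2 and ho1NRO2 on f(x) = 2x^2 - 1 from the same starting point:

```python
f = lambda x: 2.0 * x * x - 1.0   # quadratic test function of Section 3.1
df = lambda x: 4.0 * x            # analytic derivative, needed by NRO2 only

def iterate(step, x, tol=1e-12, max_iter=50):
    """Run an iteration map `step` until successive iterates agree within tol."""
    for n in range(1, max_iter + 1):
        x_prev, x = x, step(x)
        if abs(x - x_prev) <= tol:
            return x, n
    return x, max_iter

h = 1.724446e-9   # the step value used in the paper's tables
root_nr, n_nr = iterate(lambda x: x - f(x) / df(x), 1.0)                   # NRO2
root_h, n_h = iterate(lambda x: x - h * f(x) / (f(x) - f(x - h)), 1.0)     # ho1NRO2
```

Both iteration maps reach the tolerance in a comparable number of iterations, while ho1NRO2 never uses the analytic derivative df.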

Cubic Function
We consider f(x) = x^3 - 155 and we aim to approximate its root x_0 close to 5.3717 (see Fig. 2b). Table 2 gives the results.

Exponential Function
We consider the exponential function represented in Fig. 2c and we aim to approximate its root x_0 close to 2.1200. Table 3 gives the results.

Logarithmic Function
We consider the logarithmic function represented in Fig. 2d and we aim to approximate its root x_0 close to 0.3854. Table 4 gives the results.

In all the cases shown above, ho1NRO2 converges to the solution faster than o1NRO2 and as fast as NRO2. This means that what is lost with o1NRO2 by approximating the derivative of NRO2 is now compensated for by the introduction of the new parameter h in ho1NRO2. Note that VM behaves similarly to NRO2 and ho1NRO2. However, compared with the new algorithm proposed in this paper, ho1NRO2, VM requires the definition of the highest exponent of 10 in the sought root (see Section 1), which is not always easy to find, especially in cases for which automatization is required by repeating the process in a subroutine for different functions within a global code.
It can be seen in Table 1 for the quadratic function that the introduction of the parameter h modifies the evaluation of the derivative from o1NRO2 to ho1NRO2. For example, at the fourth iteration, n = 4, o1NRO2 computes the derivative between the points 0.714326565546070 and 0.750124937531224, giving the value 2.928903006154588, whereas ho1NRO2 computes the derivative between the points 0.707107843135664 and 0.707107841411218 (with h = 1.724446×10^-9), giving the value 2.828431456205209, which is a much more local approximation of the derivative. Note that NRO2 evaluates the derivative locally, from the analytic expression of the derivative function; at the fourth iteration this is the derivative value at the point 0.707107843137255, equal to 2.82843137254902.
The new h-dependent method, ho1NRO2, is now tested by varying the initial estimation point x_0^0 and the discretization step h (h-convergence, with h ranging from 1×10^-9 to 1×10^-1) in the case of the quadratic function. The results are displayed in Fig. 3. As expected, the number of iterations required to fulfill the stop criterion depends on the estimation point x_0^0. A clear dependence on the step value h is also observed: even if a good approximation is obtained whatever step value h is used (Fig. 3a), the convergence speed depends on the h value (Fig. 3b). The method requires fewer iterations as h becomes smaller. For x_0^0 very close to x_0 and a very small h value, ho1NRO2 needs only 3 iterations to match the tolerance and reach the precision imposed by the user.
The order of convergence of ho1NRO2 is now evaluated numerically for the quadratic function, whose root x_0 is close to 0.7071 (see Fig. 2a). Note that the initial estimation value is chosen far from the exact root so that N is not too small, which makes the evolution of the convergence visible. Following Def. 1, Eq.
(6), for each h value we aim to find the k value for which the quantity E_{n+1}/E_n^k tends to a constant, which is thus the order of convergence of the iterative scheme. Figure 4 represents the result of this study, for the entire variation of h (in logarithmic scale) (top) and the lower interval of variation of h (in linear scale) (bottom). It can be seen that k is h-dependent, as stated in Theorem 1, following an increasing progression as h is lowered. Moreover, k < 1.618 for h > 5.817×10^-6, which means that ho1NRO2 there has a lower order of convergence than o1NRO2, and k > 1.618 for h < 5.817×10^-6, which means that ho1NRO2 has a higher order of convergence than o1NRO2. The order of convergence of ho1NRO2 is as high as that of NRO2, k = 2, for the smallest values of h, h ≤ 1×10^-7. These considerations match the conclusions of Theorem 1. They are very useful since one can control the behavior of the new method by changing the h value. For infinitesimal values of h, the algorithm behaves as well as NRO2 does, but with the advantage of avoiding the analytic evaluation of the derivative of the function defining the nonlinear equation, which is a very useful computational advantage.
Figure 7. Sequence of errors of the ho1NRO2 algorithm vs. n for three exponent k values, with h = 1.724446×10^-9.
Figure 8. Sequence of errors of the ho1NRO2 algorithm vs. n for the exponent k varying from 1 to 2.5 by steps of 0.1, for h = 1.724446×10^-9.
The CPU time required by ho1NRO2 to solve the problem of the quadratic function (Section 3.1), obtained by running the ad-hoc functions developed in Matlab® R2017a (see Section 2.5, Fig. 1) on a 128 GB RAM computer with the 64-bit Windows 10 operating system and an Intel® Core™ i7-6800K CPU @ 3.40 GHz, does not show variations for NRO2, o1NRO2, or ho1NRO2; it is 0.046875 s, even when the initial estimation is changed moderately.
This is most likely because this CPU time is dominated by the prints and other interactions with the screen, which means that the CPU time due to the computations of each algorithm is negligible. Regarding the computational cost per iteration for the quadratic function (Section 3.1), VM requires the same number of function evaluations as ho1NRO2 but 3 more operations per iteration, because of the evaluation of h_n.
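Per-iteration cost comparisons such as the one above can be checked by counting function evaluations with a small wrapper (a hypothetical helper, not part of the paper's code):

```python
def counted(f):
    """Wrap f so that every call is tallied in wrapper.calls."""
    def wrapper(x):
        wrapper.calls += 1
        return f(x)
    wrapper.calls = 0
    return wrapper

f = counted(lambda x: 2.0 * x * x - 1.0)   # quadratic function of Section 3.1
x, h = 1.0, 1.724446e-9
for _ in range(5):                          # five ho1NRO2 iterations
    x = x - h * f(x) / (f(x) - f(x - h))
# As written, each iteration calls f three times; caching f(x) would reduce
# this to two evaluations per iteration.
```

The same wrapper applied to the other algorithms allows the evaluation counts of NRO2, o1NRO2, and VM to be compared on an equal footing.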
The results presented in the above Section allow us to observe that the new iterative technique ho1NRO2 for finding the roots of nonlinear functions, developed and implemented in this paper, has a much better behavior than o1NRO2 (the Secant method) when used with infinitesimal values of the new parameter h (the discretization step for the approximation of the derivatives of the function): it accelerates the convergence of the Secant method up to that of NRO2 (the Newton-Raphson method). This improvement is due to the disassociation of the discretization from the iterative progression used in ho1NRO2, which yields numerical approximations of the derivative very close to the analytic value used in NRO2. When no disassociation technique is employed, as with o1NRO2, the distance between the two points used in this approximation can be large, causing bad approximations and a lower convergence speed. With h tending to zero, ho1NRO2 ensures an infinitesimal distance between the points involved in the evaluation of the derivative, however distant two consecutive iterative approximations are from each other; the approximation of the derivative is therefore always calculated locally, with two proximate points, and not globally as when no disassociation is applied. Thus, by applying the disassociation procedure, the numerical derivative used in o1NRO2 tends to the analytic derivative used in NRO2, and ho1NRO2 tends to NRO2. The analysis of the results vs. h shows that for infinitesimal values of h the order of convergence of ho1NRO2 is higher than that of o1NRO2 and reaches that of NRO2. But the new technique ho1NRO2 does not only make up for the loss in convergence speed produced when NRO2 is substituted by o1NRO2.
Besides this important feature, it must be stressed that another advantage of ho1NRO2 is that it does not need any analytic evaluation of the first derivative of the function f, whereas NRO2 requires it, whether the analytic deduction of the derivative is straightforward or not. The introduction of the new degree of freedom h makes the new algorithm very useful.
Theorem 1 proves that the order of convergence k of the algorithm depends on h: it is only 1 for finite h values, but it reaches 2 for infinitesimal values of h, like the Newton-Raphson method and higher than the Secant algorithm. Moreover, ho1NRO2 presents the advantage of requiring only one initial estimation, like the Newton-Raphson algorithm and unlike the Secant method, and avoids the analytic expression of the derivative of the nonlinear function, like the Secant algorithm and unlike the Newton-Raphson method, which makes the evaluation of the derivatives much easier. Thus, this new algorithm gets rid of the shortcoming of the Newton-Raphson method (the evaluation of the derivatives) and of the Secant method (its order of convergence). Moreover, its implementation is straightforward. These advantages make it easy to use in many cases and allow us to obtain the approximate root quickly and with high precision. The introduction of the new parameter h into the iterative method provides the user with a control that ensures that the approximate derivatives tend to their real local values. The test of the new algorithm on several nonlinear equations shows close agreement between the results obtained and the theoretical findings of Theorem 1.
These conclusions suggest studying the use of similar h-disassociation techniques in other existing iterative schemes in the future.