First-principles Derivation of a Third-order Method for Solving a Two-dimensional Nonlinear System

In this paper, we adopt a ‘first-principles’ approach to deriving a cubically convergent unipoint iterative method for a two-dimensional system of nonlinear equations. We demand that the Jacobian and Hessian of an iteration function be identically zero at the fixed point, and these conditions allow us to determine various terms in the iteration function. We present analytical expressions for the inverses of two matrices appearing in the algorithm, which allows the iteration function to be written explicitly. We demonstrate the cubic convergence rate by means of a few numerical examples, and we determine the asymptotic error constant for these examples.


Introduction
In this paper, we intend to derive an iterative method for a two-dimensional real-valued nonlinear system that has cubic convergence for simple roots. Many papers derive such methods based on a multipoint approach, wherein the method is presented ab initio and then shown to be of cubic order (Amat & Busquier, 2007; Cordero et al., 2009; Darvishi & Barati, 2007; Hueso et al., 2007, 2009; Kou, 2007; Nedzhibov, 2008; Traub, 1964; and references therein). Our approach will be different: we will define an iteration function G and, by imposing conditions appropriate for cubic convergence, we will construct the method. We have dubbed this a 'first-principles' approach. Furthermore, we will find analytical expressions for any inverted matrices present in the algorithm, so that we will be able to write G in an explicit form that requires no matrix inversion at all. We emphasize that our approach is a unipoint approach, so that G is written explicitly in terms of the previous iterate. Lastly, we demonstrate the algorithm by means of a few numerical examples, and we suggest a few applications. Our paper has a tutorial quality; as such, it represents a contribution to mathematical pedagogy in the field of numerical methods.

Let p denote the solution of the system F(x) = 0, where x = (x₁, x₂)ᵀ and F(x) = (f₁(x₁, x₂), f₂(x₁, x₂))ᵀ. We define the iteration function

G(x) = x + A F(x) + B F₂(x),

where F₂(x) = (f₁², f₁f₂, f₂²)ᵀ, and A and B are matrices to be determined. Intuitively, one might expect that f₂f₁ should also appear in F₂; we will see later that this would lead to a singularity. Indeed, since f₂f₁ = f₁f₂ (f₁ and f₂ being real-valued), the inclusion of f₂f₁ in F₂ is superfluous, at the very least.
We intend to use G(x) in functional iteration (note that G(p) = p), so consider the following Taylor expansion about p:

g_i(x) = g_i(p) + Σ_j (∂g_i/∂x_j)(p) e_j + (1/2) Σ_{j,k} (∂²g_i/∂x_j∂x_k)(p) e_j e_k + O(‖e‖³),

where e = x − p and i, j, k = 1, 2. We refer to the matrix G′, with entries ∂g_i/∂x_j, as the Jacobian of G, and to the array G″, with entries ∂²g_i/∂x_j∂x_k, as the Hessian of G.
Functional iteration x_{t+1} = G(x_t) implies

e_{t+1} = G′(p) e_t + (1/2) G″(p)[e_t, e_t] + O(‖e_t‖³),

since G(p) = p, where t denotes the iteration count and e_t = x_t − p. Clearly, for cubic convergence, we require

G′(p) = 0 and G″(p) = 0.

The first of these conditions leads to the well-known result A = −J⁻¹, where J is the Jacobian of F, so that

G(x) = x − J⁻¹F(x) + B F₂(x).

The second condition, G″(p) = 0, enables us to determine B, and such analysis is the subject of this section.
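The first condition can be checked directly. The following Python sketch (our own illustrative example, not taken from the paper) verifies by finite differences that, with A = −J⁻¹, the Jacobian of the Newton-type part G(x) = x − J(x)⁻¹F(x) vanishes at the fixed point; the test system F = (x₁² − x₂, x₂ − 1), with root p = (1, 1), is chosen purely for convenience.

```python
def F(x1, x2):
    # Test system with root p = (1, 1) (our own choice)
    return (x1**2 - x2, x2 - 1.0)

def G(x1, x2):
    # Newton-type iteration function x - J^{-1} F, i.e. the choice A = -J^{-1}.
    # Jacobian of F is [[2*x1, -1], [0, 1]]; the 2x2 inverse is written out by hand.
    f1, f2 = F(x1, x2)
    j11, j12, j21, j22 = 2.0 * x1, -1.0, 0.0, 1.0
    det = j11 * j22 - j12 * j21
    return (x1 - (j22 * f1 - j12 * f2) / det,
            x2 - (-j21 * f1 + j11 * f2) / det)

def jacobian_of_G(x1, x2, eps=1e-5):
    # Central finite differences of G; at the root the entries are O(eps^2).
    cols = []
    for dx1, dx2 in ((eps, 0.0), (0.0, eps)):
        gp = G(x1 + dx1, x2 + dx2)
        gm = G(x1 - dx1, x2 - dx2)
        cols.append(((gp[0] - gm[0]) / (2 * eps),
                     (gp[1] - gm[1]) / (2 * eps)))
    return cols

JG = jacobian_of_G(1.0, 1.0)
print(max(abs(v) for col in JG for v in col))  # well below 1e-8: G'(p) = 0
```

The residual is dominated by the O(eps²) truncation error of the finite difference, confirming that the Jacobian of G itself vanishes at p.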
Now, since G is 2 × 1 and F₂ is 3 × 1, we must have that B is 2 × 3, i.e. B has six elements. This means that we need to impose six independent conditions in order to determine these elements. The condition G″(p) = 0 implies

(∂²g_i/∂x_j∂x_k)(p) = 0,  i, j, k = 1, 2,

which provides eight conditions, two too many. However, assuming that g₁ and g₂ are twice continuously differentiable functions of x₁ and x₂, we have that

∂²g_i/∂x₁∂x₂ = ∂²g_i/∂x₂∂x₁,

so that we actually have only six independent conditions. Computing the relevant second-order derivatives, evaluating at p, and imposing the conditions (16) yields equations (19)–(24). It is a consequence of the evaluation at p that all terms proportional to either f₁ or f₂ have vanished (since f₁(p) = f₂(p) = 0). Now, if we define suitable matrices Z₁, Z₂ and M, equations (19)–(24) can be written compactly, and solving the resulting linear system gives B. Moreover, analytical expressions for J⁻¹ and M⁻¹ are easily determined; J and M are singular only when det J = 0. At this juncture we can comment on the exclusion of the term f₂f₁ from F₂, as mentioned earlier. Had f₂f₁ been included (so that F₂ was a 4 × 1 vector), the second and third rows of the matrix M would have been identical, and M would therefore have been singular.
It is worth checking that this method reduces to the expected form in the scalar case. In the scalar case we have A = −1/f′ and B = −f″/(2f′³), and so

G(x) = x − f(x)/f′(x) − f″(x) f(x)² / (2 f′(x)³),

as expected (see Schröder's formula in Traub, 1964). In the above, Z₂ = 0 because Z₂ is composed of elements from the second row of A but, in the scalar case, there are no such elements.
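The scalar reduction is easy to exercise numerically. The short Python sketch below applies Schröder's third-order formula to f(x) = x² − 2 (our own test problem) and prints the error against √2 after each step; the errors shrink at a visibly cubic rate.

```python
import math

def schroder_step(x, f, df, d2f):
    # One step of Schröder's formula: x - f/f' - f'' f^2 / (2 f'^3)
    fx, dfx, d2fx = f(x), df(x), d2f(x)
    return x - fx / dfx - d2fx * fx**2 / (2.0 * dfx**3)

# Test problem (our own choice): f(x) = x^2 - 2, root sqrt(2)
f = lambda x: x * x - 2.0
df = lambda x: 2.0 * x
d2f = lambda x: 2.0

x = 1.0
errors = []
for _ in range(4):
    x = schroder_step(x, f, df, d2f)
    errors.append(abs(x - math.sqrt(2.0)))

print(errors)  # roughly cubic decrease per step, down to machine precision
```

Starting from x = 1, the error drops from about 4 × 10⁻² to about 2 × 10⁻⁵ in one step, and to machine precision two steps later.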

Implementation
Given the analytical expressions derived above, the implementation of the algorithm is clear: using f₁(x₁, x₂) and f₂(x₁, x₂), we construct an explicit expression for G(x₁, x₂), which is then used for functional iteration. In this way, there is no need to explicitly invert matrices, or to evaluate functions through the passing of arguments. The function G can be constructed using computer algebra software, as we have done in our own work, and thereafter functional iteration is applied numerically.
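Since the explicit entries of B are not restated here, the following Python sketch implements a classical unipoint third-order scheme of the same family as a stand-in: the multivariate Chebyshev step x_{t+1} = x_t − u − (1/2)J⁻¹F″[u, u], with u = J⁻¹F, which reduces to the same Schröder formula in the scalar case. The test system f₁ = x² + y² − 4, f₂ = xy − 1 is our own choice, and the 2 × 2 inverse is written analytically, in the spirit of the paper.

```python
def F(x, y):
    # Test system (our own choice); one root is x = sqrt(2 + sqrt(3)), y = 1/x
    return (x * x + y * y - 4.0, x * y - 1.0)

def chebyshev_step(x, y):
    f1, f2 = F(x, y)
    # Jacobian J = [[2x, 2y], [y, x]], inverted analytically
    j11, j12, j21, j22 = 2.0 * x, 2.0 * y, y, x
    det = j11 * j22 - j12 * j21

    def solve(r1, r2):  # apply J^{-1} to the vector (r1, r2)
        return ((j22 * r1 - j12 * r2) / det, (-j21 * r1 + j11 * r2) / det)

    u1, u2 = solve(f1, f2)                 # Newton correction u = J^{-1} F
    # Constant Hessians: f1'' = [[2, 0], [0, 2]], f2'' = [[0, 1], [1, 0]]
    v1 = 2.0 * u1 * u1 + 2.0 * u2 * u2     # u^T f1'' u
    v2 = 2.0 * u1 * u2                     # u^T f2'' u
    w1, w2 = solve(v1, v2)                 # J^{-1} F''[u, u]
    return x - u1 - 0.5 * w1, y - u2 - 0.5 * w2

x, y = 1.9, 0.5
for _ in range(4):
    x, y = chebyshev_step(x, y)
print(F(x, y))  # both residuals at machine-precision level
```

From the starting guess (1.9, 0.5) the iterate reaches the root to machine precision within a few steps, illustrating the kind of explicit, inversion-free iteration function the paper advocates.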

Limitation
We acknowledge a limitation of this method: the size of the matrix M grows rapidly with the dimensionality of the system. For example, a three-dimensional system requires that M be 6 × 6, and a four-dimensional system requires that M be 10 × 10. In general, for a d-dimensional system, the Jacobian J is d × d, and M is d(d+1)/2 × d(d+1)/2. Note that the dimensions of M are dictated by those of F₂: for a d-dimensional system there are d(d+1)/2 elements in F₂ since, if the term f_i f_j is in F₂, then f_j f_i is not (refer to our earlier discussion regarding f₂f₁). By contrast, in the method of Kou (2007), no matrix exceeds d × d, although admittedly Kou's method is a multipoint method; this observation certainly informs the superiority of multipoint methods over unipoint methods in terms of computational efficiency.
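The count d(d+1)/2 is simply the number of unordered pairs (i, j) with i ≤ j, and can be checked directly:

```python
from itertools import combinations_with_replacement

# Number of distinct products f_i * f_j (i <= j) in F_2 for dimension d,
# i.e. the row dimension of the matrix M in a d-dimensional system.
for d in (2, 3, 4):
    n = len(list(combinations_with_replacement(range(d), 2)))
    print(d, n)  # 2 -> 3, 3 -> 6, 4 -> 10, matching d*(d+1)/2
```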

Numerical examples
As an example, consider the system given in (41). Since G′(p) = 0 and G″(p) = 0 by construction, the leading error term involves the third-order derivatives of G, with indices i, j, k, l = 1, 2; these determine the asymptotic error constant. For the example considered here we have computed this array explicitly, and the entries in the resulting matrix are rational approximations.
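Because the worked example (41) involves expressions not reproduced here, the following self-contained Python sketch shows how the cubic order and the asymptotic error constant can be estimated numerically, using the scalar Schröder iteration on f(x) = x² − 2 (our own test problem) in high-precision arithmetic so that several cubic steps are visible before precision runs out.

```python
from decimal import Decimal, getcontext

getcontext().prec = 200                     # enough digits to see several cubic steps
root = Decimal(2).sqrt()

x = Decimal(1)
errs = [abs(x - root)]
for _ in range(5):
    fx, dfx = x * x - 2, 2 * x              # f and f' for f(x) = x^2 - 2 (f'' = 2)
    x = x - fx / dfx - 2 * fx * fx / (2 * dfx ** 3)
    errs.append(abs(x - root))

# Order estimate p_t = ln e_{t+1} / ln e_t -> 3, and constant e_{t+1} / e_t^3.
orders = [float(errs[i + 1].ln() / errs[i].ln()) for i in range(1, len(errs) - 1)]
consts = [float(errs[i + 1] / errs[i] ** 3) for i in range(1, len(errs) - 1)]
print(orders)  # tends to 3
print(consts)  # tends to the asymptotic error constant
```

For this particular f one can show that e_{t+1} ≈ e_t³/(2r²) with r = √2, so the estimated constant approaches 1/4, in the same way that the paper reports rational approximations to its asymptotic error constant.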

Applications
Although the method presented here is a generic one, and is applicable to any situation in which a two-dimensional system must be solved, there are a few specific applications worth noting.

Root of a complex function
Say f(z) is a complex-valued function, where z = α + βi, α, β ∈ R. Furthermore, assume that f has the explicit form

f(z) = f₁(α, β) + i f₂(α, β),

where f₁ and f₂ are real-valued. The root of f can be found by solving the system f₁(α, β) = 0, f₂(α, β) = 0 for α and β.
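The reformulation is easy to demonstrate. The Python sketch below (our own example, using plain Newton iteration for brevity; the paper's cubic method applies in exactly the same way) finds the root z = i of f(z) = z² + 1 by splitting it into f₁ = α² − β² + 1 and f₂ = 2αβ.

```python
def newton_step(a, b):
    # Real and imaginary parts of f(z) = z^2 + 1 at z = a + b*i
    f1 = a * a - b * b + 1.0
    f2 = 2.0 * a * b
    # Jacobian [[2a, -2b], [2b, 2a]] (Cauchy-Riemann structure), inverted analytically
    det = 4.0 * (a * a + b * b)
    da = (2.0 * a * f1 + 2.0 * b * f2) / det
    db = (-2.0 * b * f1 + 2.0 * a * f2) / det
    return a - da, b - db

alpha, beta = 0.5, 0.8
for _ in range(10):
    alpha, beta = newton_step(alpha, beta)
print(alpha, beta)  # converges to the root z = i, i.e. (alpha, beta) = (0, 1)
```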

Extremum of a two-dimensional function
If we seek an extremum (maximum or minimum) of the function f(x, y), then we might seek to solve the system ∂f/∂x = 0, ∂f/∂y = 0 for x and y.
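As a quick illustration (our own example, again using Newton iteration on the gradient for brevity), the minimum of f(x, y) = x⁴ − 2x + y² is found by solving 4x³ − 2 = 0, 2y = 0:

```python
def grad(x, y):
    # Gradient of f(x, y) = x**4 - 2*x + y**2 (our own test function)
    return (4.0 * x ** 3 - 2.0, 2.0 * y)

def newton_step(x, y):
    gx, gy = grad(x, y)
    # Hessian is diagonal here: diag(12*x^2, 2)
    return x - gx / (12.0 * x * x), y - gy / 2.0

x, y = 1.0, 0.3
for _ in range(8):
    x, y = newton_step(x, y)
print(x, y)  # the minimizer: x = (1/2)**(1/3), y = 0
```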

Implicit Runge-Kutta method
The two-stage fourth-order implicit Runge-Kutta method (Butcher, 2003) for solving the initial-value problem y′ = f(x, y) requires solving the two simultaneous stage equations for k₁ and k₂, where x_i, w_i and h are known input. The computational effort required to solve these stage equations could render the method inefficient; hence, a cubically convergent algorithm can contribute positively to the overall efficiency of this Runge-Kutta method.
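To make the stage equations concrete, the sketch below takes one step of the two-stage Gauss-Legendre method, the standard fourth-order two-stage implicit tableau in Butcher (2003), for the test problem y′ = −y (our own choice). The 2 × 2 system k_i = f(x + c_i h, y + h(a_{i1}k₁ + a_{i2}k₂)) is solved here by simple fixed-point sweeps; Newton's method, or a cubically convergent scheme such as the one in this paper, would do the same job in fewer iterations.

```python
import math

# Two-stage Gauss-Legendre tableau (order 4)
S3 = math.sqrt(3.0)
A = [[0.25, 0.25 - S3 / 6.0], [0.25 + S3 / 6.0, 0.25]]
b = [0.5, 0.5]
c = [0.5 - S3 / 6.0, 0.5 + S3 / 6.0]

def f(x, y):
    return -y  # test ODE y' = -y, exact solution exp(-x)

def irk_step(x, y, h, sweeps=50):
    k = [f(x, y), f(x, y)]  # initial guess for the stage values
    for _ in range(sweeps):  # fixed-point sweeps on the 2x2 stage system
        k = [f(x + c[i] * h, y + h * (A[i][0] * k[0] + A[i][1] * k[1]))
             for i in range(2)]
    return y + h * (b[0] * k[0] + b[1] * k[1])

y1 = irk_step(0.0, 1.0, 0.1)
print(abs(y1 - math.exp(-0.1)))  # small local error, consistent with order 4
```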

Conclusion
We have shown how a cubically convergent iteration scheme for solving a system of two nonlinear equations can be constructed, by appealing to fixed-point theory. We have defined a unipoint iteration function and, by demanding that the Jacobian and Hessian of this function should vanish at the fixed point, we have determined the entries of the relevant matrices in the method. From a consistency point of view, we have shown that, in the univariate case, the method reduces to the known one-dimensional algorithm of Schröder. Some numerical examples serve to illustrate the cubic character of the method, and we have given values for the asymptotic error constant in these examples.
The numerical results confirm cubic convergence; the value for e₄ indicates that the error is smaller than the machine precision. Since G′(p) = 0 and G″(p) = 0 by construction, the leading term in the expansion (10) is the third-order term, which involves the cubic products e₁³, e₁²e₂, e₁e₂² and e₂³.