Generalized Quasilinearization versus Newton's Method for Convex-Concave Functions

In this paper we use the method of Generalized Quasilinearization to obtain monotone Newton-like iterative schemes for solving the equation F(x) = 0, where F ∈ C[Ω, R] and Ω ⊂ R. Here F admits the decomposition F(x) = f(x) + g(x), where f and g are convex and concave functions in C[Ω, R], respectively. The monotone sequences of iterates are shown to converge quadratically. Four cases are explored in this manuscript.

1. Introduction

Lakshmikantham and Vatsala (2004) showed that the method of Generalized Quasilinearization can be successfully employed to solve the equation

f(x) = 0,  (1)

where f ∈ C[R, R], by comparing it to an initial value problem (IVP), (2), posed on [0, T]. By exploiting the properties of the method of Generalized Quasilinearization (Lakshmikantham and Vatsala, 1998), they constructed a framework comparable to Newton's method for finding an isolated zero x_0 of (1). The simplicity and rapid convergence of Newton's method make it a first choice for solving (1), but its limitations have given rise to a broad body of work that continues to this day (Bellman and Kalaba, 1965; Kelley, 1995; Ortega and Rheinboldt, 1970; Potra, 1987). Generating Newton-like schemes for the solution of (1) from the method of Generalized Quasilinearization is a natural choice, since Quasilinearization follows a methodology similar to Newton's method, albeit for a seemingly different purpose (Bellman and Kalaba, 1965). The main iteration formula for Newton's method,

x_{n+1} = x_n − f(x_n)/f_x(x_n),  (3)

requires that f_x(x) be continuous and nonzero in a neighborhood of a simple root of (1). This in turn requires knowledge of the monotone character of f(x). The similarities between Newton's method and standard Quasilinearization are enhanced when (3) is rewritten in the form 0 = f(x_n) + f_x(x_n)(x_{n+1} − x_n) and compared with the analogous linear iterative scheme afforded by Quasilinearization to approximate (2). The similarities between the method of Generalized Quasilinearization and Newton's method come to light in the Newton-Fourier method, as shown next. Let f be defined as in (1), and let D = [α_0, β_0] be a neighborhood of the simple zero of (1). If f(α_0) < 0 < f(β_0), f_x(x) > 0, and f_xx(x) > 0, so that f is monotone increasing and convex in D, then there exist monotone sequences {α_n} and {β_n} that converge quadratically to the simple zero of (1) in D. These sequences are generated by the iterative scheme

β_{n+1} = β_n − f(β_n)/f_x(β_n),   α_{n+1} = α_n − f(α_n)/f_x(β_n)

(Lakshmikantham and Vatsala, 1998). Although two sequences are required, the Newton-Fourier method is not more expensive than the traditional Newton's method, since the derivative computations are the same for both iterates (Potra, 1987).
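The Newton-Fourier scheme just described can be sketched numerically. Here the test function f(x) = x^2 − 2 on D = [1, 2] (increasing and convex, with simple zero √2), the helper name `newton_fourier`, and the tolerance are illustrative choices, not taken from the paper.

```python
def newton_fourier(f, fx, a0, b0, tol=1e-12, max_iter=50):
    """Monotone bracketing iteration: {a_n} increases toward the root,
    {b_n} decreases toward it, and both steps reuse the single derivative
    value fx(b_n) -- the reason the method costs no more than Newton."""
    a, b = a0, b0
    for _ in range(max_iter):
        d = fx(b)                 # one derivative evaluation per iteration
        b_new = b - f(b) / d      # ordinary Newton step from above
        a_new = a - f(a) / d      # companion step from below
        if b_new - a_new < tol:
            return a_new, b_new
        a, b = a_new, b_new
    return a, b

# Example: f(x) = x^2 - 2 on D = [1, 2]; f(1) < 0 < f(2), f_x > 0, f_xx > 0.
a, b = newton_fourier(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0, 2.0)
```

Because the derivative is evaluated only at b_n, each iteration costs the same as one ordinary Newton step while producing a two-sided bracket of the root.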
Comparatively, consider the IVP (2) with the conditions α_0(t) ≤ β_0(t), t ∈ [0, T], where α_0(t) and β_0(t) are lower and upper solutions of (2); see Ladde, Lakshmikantham, and Vatsala (1985) and Lakshmikantham and Vatsala (1998, 2005). The corresponding iterative scheme using Generalized Quasilinearization, with α_0 ≤ x ≤ β_0 on [0, T], gives rise to sequences of iterates that converge uniformly and quadratically to x(t), the unique solution of (2) on [0, T]. For complete details refer to Lakshmikantham and Vatsala (1998). If f is monotone decreasing, the Newton-Fourier method is no longer applicable, while the solution of a comparable IVP with f monotone decreasing can still be approximated with Generalized Quasilinearization. Below we present four cases in which iterative schemes are developed to approximate solutions of (1) when f can be decomposed into a convex and a concave part. The corresponding Generalized Quasilinearization theorems and proofs can be found in Lakshmikantham and Vatsala (1998, Section 1.3).
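Since the displayed form of the IVP (2) is not reproduced in this excerpt, the following sketch assumes the model form x' = f(x), x(0) = x_0 on [0, T]. Each quasilinearization sweep solves the linear IVP x'_{n+1} = f(x_n) + f_x(x_n)(x_{n+1} − x_n), here discretized with forward Euler; the function names, step count, and example are illustrative.

```python
def quasilinearize(f, fx, x0, T, steps=1000, sweeps=6):
    """Approximate x' = f(x), x(0) = x0 on [0, T] by repeatedly solving
    the linearized IVP around the previous iterate (forward Euler)."""
    h = T / steps
    xs = [x0] * (steps + 1)        # initial approximation: constant x0
    for _ in range(sweeps):
        new = [x0]
        for k in range(steps):
            xk = xs[k]             # previous iterate at this grid point
            x = new[-1]
            # linearized right-hand side: f(x_n) + f_x(x_n) * (x_{n+1} - x_n)
            new.append(x + h * (f(xk) + fx(xk) * (x - xk)))
        xs = new
    return xs

# Example: x' = x, x(0) = 1, so x(1) = e.
sol = quasilinearize(lambda x: x, lambda x: 1.0, 1.0, 1.0)
```

Each sweep requires only the solution of a linear IVP, which is what makes the method a practical analogue of Newton's method for differential equations.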

Statement of the problem
Consider the scalar equation

F(x) = f(x) + g(x) = 0,  (10)

where f, g ∈ C[Ω, R], Ω ⊂ R, and [α_0, β_0] = D ⊂ Ω. Assume that (10) has a simple root x = r in D. Let f and g be convex and concave functions, respectively, such that f_x, g_x, f_xx, g_xx exist, are continuous, and satisfy

f_xx(x) > 0,  g_xx(x) < 0,  x ∈ D.  (11)

The purpose of this paper is to produce iterative schemes that generate two monotone sequences {α_n} and {β_n} converging to r from the left and from the right. We will show that this convergence is quadratic in the four following cases.
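A concrete instance of such a decomposition F = f + g: f(x) = x^2 is convex (f_xx = 2 > 0) and g(x) = −exp(x) is concave (g_xx = −exp(x) < 0). This particular F and interval are illustrative choices, not taken from the paper; the snippet verifies the (11)-style convexity/concavity numerically with second differences and checks that F changes sign on D.

```python
import math

f = lambda x: x * x            # convex part
g = lambda x: -math.exp(x)     # concave part
F = lambda x: f(x) + g(x)      # F(x) = x^2 - exp(x)

def second_difference(h_func, x, h=1e-4):
    """Central second difference, approximates h_func''(x)."""
    return (h_func(x + h) - 2.0 * h_func(x) + h_func(x - h)) / (h * h)

a0, b0 = -1.0, 0.0                                   # D = [alpha_0, beta_0]
grid = [a0 + k * (b0 - a0) / 20 for k in range(21)]
assert all(second_difference(f, x) > 0 for x in grid)  # f convex on D
assert all(second_difference(g, x) < 0 for x in grid)  # g concave on D
assert F(a0) * F(b0) < 0                               # F has a root in D
```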
The next step is to show that α_1 < r < β_1. Let p = α_1 − r. The fact that f(r) + g(r) = 0, together with (16) (with n = 0), and an application of the MVT with γ_1, γ_2 ∈ (α_0, r), along with (11), yields the required bound, where we have again appealed to the strictly increasing and strictly decreasing natures of f_x(x) and g_x(x), respectively. Condition (14) then leads to p < 0, so α_1 < r. By letting p = r − β_1 one can easily show that r < β_1. Now that both the base case and the n = 1 case have been proven, by induction (15) holds, with lim_{n→∞} α_n = r = lim_{n→∞} β_n.
Consider the case p_{n+1} = r − α_{n+1} > 0. Combining (10) and (16), and applying the Mean Value Theorem with η_1, η_2 ∈ (α_n, r), where we let p_n = r − α_n, produces the next estimate. We also observe that, by (11), f_x(η_1) < f_x(r), with the corresponding inequality for g_x. Applying the MVT once more, the convex-concave behavior of f and g in C[Ω, R] given by (11), together with condition (14), yields the bound (18), with the constants M_1* and M_2* defined in (19). Now (18) yields the desired result (20). Similarly, for β_{n+1} − r > 0 we obtain the estimate (21), where M_1* and M_2* are as in (19). Combining (20) with (21) finally gives the quadratic estimate with K = (3/2)(M_1* + M_2*). This completes the proof of the theorem.
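A quadratic estimate of the form |p_{n+1}| ≤ K p_n^2 can be observed numerically. Since the scheme (16) is not reproduced in this excerpt, plain Newton iteration is used below as a stand-in, applied to the illustrative convex-concave function F(x) = x^2 − exp(x) (f = x^2 convex, g = −exp(x) concave); the ratios e_{n+1}/e_n^2 then approximate a constant K.

```python
import math

F  = lambda x: x * x - math.exp(x)   # F = f + g with f convex, g concave
Fx = lambda x: 2.0 * x - math.exp(x)

xs = [-1.0]                          # starting point left of the root
for _ in range(8):
    xs.append(xs[-1] - F(xs[-1]) / Fx(xs[-1]))
r = xs[-1]                           # converged value, used as reference root

errors = [abs(x - r) for x in xs[:-2]]
ratios = [errors[n + 1] / errors[n] ** 2
          for n in range(len(errors) - 1) if errors[n] > 0]
# each ratio approximates the constant K in |p_{n+1}| <= K * p_n^2
```

The errors shrink roughly quadratically (each error is on the order of the square of the previous one), which is the behavior the estimate above guarantees for the monotone schemes of the paper.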
If F(x) is a decreasing function with an isolated zero x = r ∈ D, Theorem 1 can be modified as follows.
Theorem 2. Assume that (10) and (11) hold true, together with conditions (23)-(25). Then there exist monotone sequences {α_n} and {β_n} which satisfy (15) and converge quadratically to r. The sequences {α_n} and {β_n} are generated by the iterative scheme (26)-(27). The mechanics of the proof of Theorem 2 are the same as those of Theorem 1; here we present the highlights. Letting p = α_0 − α_1 and combining (23) with (26) for n = 0 yields the same type of result as letting p = β_1 − β_0 and combining (24) with (27) for n = 0; by (11) and (25), p < 0 in both cases, so α_0 < α_1 and β_1 < β_0. Likewise, one shows α_1 < r < β_1 by first letting p = α_1 − r and then p = r − β_1. The general n case establishes α_n < r < β_n, from which (15) follows. Quadratic convergence follows in the same fashion as in Theorem 1.
Theorem 3. Assume that (10) and (11) hold true, together with condition (30). Then there exist monotone sequences {α_n} and {β_n} which satisfy (15) and converge quadratically to r, the isolated zero of (10). The sequences {α_n} and {β_n} are generated by the corresponding iterative scheme. The proof follows the same strategy as the previous proofs, but with some variation in the results, which we address below.
Quadratic convergence now follows, where we again use the MVT with α_n < σ < r and r < η < β_n. By (11) we have −f_x(σ) < −f_x(α_n) and −g_x(η) < −g_x(β_n); using condition (30) we produce the corresponding inequalities. Here we have used the fact that p_n^2 − 2 p_n q_n + q_n^2 ≥ 0 implies (3/2) q_n^2 + (1/2) p_n^2 ≥ q_n^2 + p_n q_n. Now we place bounds on the derivatives.
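The algebraic step invoked above is a standard completing-the-square argument, which can be written out as:

```latex
(p_n - q_n)^2 \ge 0
  \;\Longrightarrow\; \frac{p_n^2}{2} + \frac{q_n^2}{2} \ge p_n q_n
  \;\Longrightarrow\; \frac{3 q_n^2}{2} + \frac{p_n^2}{2} \ge q_n^2 + p_n q_n ,
```

where the last implication is obtained by adding q_n^2 to both sides.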