A Recurrent Neural Network for Solving Convex Quadratic Program

Caihong Shan (Corresponding author)
Department of Mathematics, Yanshan University
438 He Bei Street, Qinhuangdao 066004, China
E-mail: caihongshan2006@126.com

Huaiqin Wu
Department of Mathematics, Yanshan University
438 He Bei Street, Qinhuangdao 066004, China
E-mail: huaiqinwu@ysu.edu.cn

The research is supported by the National Natural Science Foundation of China (10571035) and the Educational Science Foundation of Hebei Province (Z2007431).

Abstract
In this paper, we present a recurrent neural network for solving convex quadratic programming problems. On the theoretical side, we prove that the proposed neural network converges globally to the solution set of the problem when the matrix involved is positive semi-definite, and converges exponentially to the unique solution when the matrix is positive definite. Illustrative examples further show the good performance of the proposed neural network.


Introduction and model formulation
In this paper, we are concerned with the following quadratic optimization program:
$$\min\ f(x)=\tfrac{1}{2}x^TAx+c^Tx \quad \text{subject to}\quad Bx\ge b,\ x\ge 0, \tag{1}$$
where $A\in R^{n\times n}$ is a symmetric positive semi-definite matrix, $B\in R^{m\times n}$, $b\in R^m$, and $c\in R^n$. Its Lagrangian dual is
$$\max\ -\tfrac{1}{2}x^TAx+b^Ty \quad \text{subject to}\quad B^Ty-Ax\le c,\ y\ge 0. \tag{2}$$
It is well known that quadratic optimization problems arise in a wide variety of scientific and engineering applications, including regression analysis, image and signal processing, parameter estimation, filter design, and robot control. In many real-time applications these optimization problems have a time-varying nature and must be solved in real time. The main advantage of the neural network approach to optimization is that the dynamic solution procedure is inherently parallel and distributed. Therefore, the neural network approach can solve optimization problems orders of magnitude faster than the most popular optimization algorithms executed on general-purpose digital computers. At present, there are several neural network approaches for solving quadratic programming problems. Next, we describe the proposed neural network.
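As a concrete illustration, a tiny instance of a convex quadratic program of this form, minimize $\tfrac{1}{2}x^TAx+c^Tx$ subject to $Bx\ge b$, $x\ge 0$, can be encoded and evaluated numerically. The data `A`, `B`, `b`, `c` below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# An illustrative instance (not from the paper) of the convex QP
#   minimize   (1/2) x^T A x + c^T x
#   subject to B x >= b,  x >= 0
A = np.array([[2.0, 0.0], [0.0, 2.0]])   # symmetric positive definite
c = np.array([-2.0, -4.0])
B = np.array([[1.0, 1.0]])
b = np.array([2.0])

def objective(x):
    """Quadratic objective f(x) = 0.5 x^T A x + c^T x."""
    return 0.5 * x @ A @ x + c @ x

def feasible(x, tol=1e-9):
    """Check B x >= b and x >= 0 up to a small tolerance."""
    return bool(np.all(B @ x >= b - tol) and np.all(x >= -tol))

x = np.array([0.5, 1.5])
print(objective(x), feasible(x))   # a feasible but suboptimal point
```

For this instance the unconstrained minimizer $x=(1,2)$ already satisfies both constraints, so it is also the constrained optimum.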
By the duality theorem of convex programming, $(x^*,y^*)$ is a pair of optimal solutions to Eq. (1) and Eq. (2), respectively, if and only if $(x^*,y^*)$ satisfies the Karush-Kuhn-Tucker conditions
$$Ax^*-B^Ty^*+c\ge 0,\quad x^*\ge 0,\quad (x^*)^T(Ax^*-B^Ty^*+c)=0,$$
$$Bx^*-b\ge 0,\quad y^*\ge 0,\quad (y^*)^T(Bx^*-b)=0. \tag{3}$$
The above Eq. (3) may be transformed into the linear projection equation of the following form:
$$P_\Omega(u-(Mu+q))=u, \tag{4}$$
where
$$u=\begin{pmatrix}x\\ y\end{pmatrix},\quad M=\begin{pmatrix}A&-B^T\\ B&0\end{pmatrix},\quad q=\begin{pmatrix}c\\ -b\end{pmatrix},\quad \Omega=\{u\in R^{n+m}:u\ge 0\},$$
and $P_\Omega$ is the projection operator defined componentwise by $(P_\Omega(u))_i=\max\{u_i,0\}$. We can see that the optimal solutions of (1) and its dual (2) can be obtained by solving the projection equation (4).
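The projection-equation characterization can be used directly as an optimality test: the residual $P_\Omega(u-(Mu+q))-u$ vanishes exactly at KKT pairs. A minimal sketch, using an assumed one-dimensional problem (minimize $x^2-4x$ subject to $x\ge 3$, $x\ge 0$) rather than any example from the paper:

```python
import numpy as np

# Optimality check via the projection equation (4):
# u = (x, y) solves (4) iff P_Omega(u - (M u + q)) = u, with
# M = [[A, -B^T], [B, 0]], q = (c, -b), Omega the nonnegative orthant.
# Illustrative 1-D data: minimize x^2 - 4x subject to x >= 3, x >= 0.
A = np.array([[2.0]]); c = np.array([-4.0])
B = np.array([[1.0]]); b = np.array([3.0])

M = np.block([[A, -B.T], [B, np.zeros((1, 1))]])
q = np.concatenate([c, -b])

def residual(u):
    """P_Omega(u - (M u + q)) - u; vanishes exactly at KKT points."""
    return np.maximum(u - (M @ u + q), 0.0) - u

u_star = np.array([3.0, 2.0])   # primal x* = 3, dual y* = 2
print(residual(u_star))          # zero residual at the KKT pair
```

At $u^*=(3,2)$ we have $Mu^*+q=0$, so the residual is exactly zero, while at a non-optimal point such as the origin it is not.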
We propose a neural network for solving (1) and (4), whose dynamical equation is defined as follows:
$$\frac{du}{dt}=\lambda\{P_\Omega(u-(Mu+q))-u\}, \tag{5}$$
where $\lambda>0$ is a scaling constant. Theorem 1. If $u^*=(x^*,y^*)$ is an equilibrium point of the proposed neural network, then $(x^*,y^*)$ is a pair of optimal solutions to Eq. (1) and Eq. (2), respectively. On the other hand, if $(x^*,y^*)$ is a pair of optimal solutions to Eq. (1) and Eq. (2), then $u^*=(x^*,y^*)$ is an equilibrium point of the proposed neural network.
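The dynamics (5) can be sketched with a simple forward-Euler integration. The problem data below are the same assumed one-dimensional QP used for illustration above (minimize $x^2-4x$ subject to $x\ge 3$, $x\ge 0$, with KKT pair $u^*=(3,2)$); they are not an example from the paper:

```python
import numpy as np

# Forward-Euler sketch of the dynamics (5):
#   du/dt = lambda * (P_Omega(u - (M u + q)) - u)
# Illustrative data: minimize x^2 - 4x subject to x >= 3, x >= 0.
A = np.array([[2.0]]); c = np.array([-4.0])
B = np.array([[1.0]]); b = np.array([3.0])
M = np.block([[A, -B.T], [B, np.zeros((1, 1))]])
q = np.concatenate([c, -b])

def simulate(u0, lam=1.0, dt=0.01, steps=5000):
    """Integrate (5) with forward Euler from the initial state u0."""
    u = np.asarray(u0, dtype=float)
    for _ in range(steps):
        u = u + dt * lam * (np.maximum(u - (M @ u + q), 0.0) - u)
    return u

u = simulate([0.0, 0.0])
print(u)   # approaches the equilibrium u* = (3, 2)
```

Note that the equilibrium of the Euler map coincides with the equilibrium of (5), so the discretization does not shift the limit point, only the path taken to reach it.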

Preliminaries
In this section, we introduce the definitions and lemmas needed for the later discussion.
Definition 1. Let $g:R^n\to R$. For any $\alpha\in R$, a nonempty set of the form $L(\alpha)=\{u\in R^n:g(u)\le\alpha\}$ is said to be a level set of $g$.
Definition 2. A system is said to have a globally exponential convergence rate with degree $\eta$ at $u^*$ if every trajectory $u(t)$ starting at any initial point $u(t_0)$ satisfies
$$\|u(t)-u^*\|\le c_0\|u(t_0)-u^*\|e^{-\eta(t-t_0)},\quad t\ge t_0,$$
where $c_0$ and $\eta$ are positive constants independent of the initial point.
Lemma 1 (Gronwall). Let $u$ and $v$ be real-valued non-negative continuous functions with domain $\{t:t\ge t_0\}$, and let $a_0(t)$ be a monotone increasing function. If
$$u(t)\le a_0(t)+\int_{t_0}^{t}u(s)v(s)\,ds \quad\text{for } t\ge t_0,$$
then
$$u(t)\le a_0(t)\exp\Big(\int_{t_0}^{t}v(s)\,ds\Big).$$
Lemma 2. Let $g:\Omega_1\to R$ be continuous, where $\Omega_1\subseteq R^n$ is unbounded and closed. Then all level sets of $g$ are bounded if and only if $g(u)\to+\infty$ as $\|u\|\to\infty$ with $u\in\Omega_1$.
With Lemmas 1 and 2, we can give the existence and uniqueness of the solution to Eq. (5).
Theorem 2. For each initial point $u(t_0)\in R^{n+m}$, there exists a unique continuous solution $u(t)$ of Eq. (5), defined on $[t_0,\infty)$. Indeed, since the projection operator $P_\Omega$ is nonexpansive, the right-hand side of (5) is Lipschitz continuous, so a unique local solution exists; applying Lemma 1 to $\|u(t)\|$ then shows that the solution cannot blow up in finite time, and therefore it extends to the whole interval.
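The Gronwall bound of Lemma 1 can be checked numerically on the borderline case $u'(t)=v(t)u(t)$, $u(0)=a_0$, where the integral inequality holds with equality; the choices of $a_0$ and $v$ below are illustrative:

```python
import numpy as np

# Numerical illustration of Lemma 1 (Gronwall): if
#   u(t) <= a0 + int_0^t u(s) v(s) ds,
# then u(t) <= a0 * exp(int_0^t v(s) ds).
# We integrate the borderline case u'(t) = v(t) u(t), u(0) = a0.

def v(s):
    return 0.5 + np.sin(s) ** 2   # an arbitrary non-negative rate

a0 = 2.0
t = np.linspace(0.0, 3.0, 3001)
dt = t[1] - t[0]

# forward-Euler integration of u' = v * u
u = np.empty_like(t)
u[0] = a0
for k in range(len(t) - 1):
    u[k + 1] = u[k] + dt * v(t[k]) * u[k]

# Gronwall right-hand side via a trapezoid rule for int_0^t v
integral_v = np.concatenate([[0.0], np.cumsum(0.5 * (v(t[1:]) + v(t[:-1])) * dt)])
bound = a0 * np.exp(integral_v)

print(np.all(u <= bound + 1e-9))   # the trajectory never exceeds the bound
```

Since $1+x\le e^x$, the Euler iterates always stay below the exponential Gronwall envelope, which is what the printed check confirms.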

Convergence result
In the present section, under the assumption that $M$ is positive semi-definite, we prove the convergence of the proposed neural network.
Theorem 3. If $M$ is positive semi-definite, then the neural network (5) is stable in the sense of Lyapunov and globally converges to the solution set of problem (4).
Proof. First, define $e(u)=P_\Omega(u-(Mu+q))-u$; then $e(u)=0$ if and only if $u$ is a solution to problem (4). Thus the equilibrium points of the system in Eq. (5) correspond to the solutions of problem (4). Let $u(t)$ be the solution of the initial value problem associated with (5), let $u^*$ be a solution of (4), and consider the Lyapunov function $V(u)=\|u-u^*\|^2$. Since $V(u)\to+\infty$ as $\|u\|\to\infty$, by Lemma 2 all the level sets of $V$ are bounded. On the other hand, using the technique of proof from the literature, the basic property of the projection operator gives
$$(v-P_\Omega(v))^T(P_\Omega(v)-w)\ge 0 \quad\text{for all } w\in\Omega;$$
taking $v=u-(Mu+q)$ here, taking $w=P_\Omega(v)$ in the variational inequality $(w-u^*)^T(Mu^*+q)\ge 0$ satisfied by the solution $u^*$ of (4), and then adding the two resulting inequalities yields $dV/dt\le 0$ along the trajectories of (5). Hence $V$ is a global Lyapunov function for the system in (5), and the system (5) is stable in the sense of Lyapunov. Since the trajectories are bounded and the function $V(u)$ is continuously differentiable on the bounded and closed set $\Omega_0$, it follows from LaSalle's invariance principle that the trajectories $u(t)$ converge to $E$, the largest invariant subset of the set $\{u:dV/dt=0\}$, which is a nonempty, convex, and invariant set contained in the solution set $\Omega^*$. Therefore, the proposed neural network converges globally to the solution set of problem (4).
Remark 1. If $A$ is positive definite, then $M$ is positive definite, too. Thus, from the proof of Theorem 3, the neural network (5) is globally exponentially convergent.
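The Lyapunov argument of Theorem 3 can be sanity-checked numerically: along a forward-Euler trajectory of (5), $V(u)=\|u-u^*\|^2$ should decay to zero and never increase beyond the $O(dt^2)$ discretization error. The data below form the same assumed one-dimensional QP used earlier (minimize $x^2-4x$ subject to $x\ge 3$, $x\ge 0$, KKT pair $u^*=(3,2)$), not an example from the paper:

```python
import numpy as np

# Track V(u) = ||u - u*||^2 along a forward-Euler trajectory of (5).
# Illustrative data: minimize x^2 - 4x subject to x >= 3, x >= 0.
A = np.array([[2.0]]); c = np.array([-4.0])
B = np.array([[1.0]]); b = np.array([3.0])
M = np.block([[A, -B.T], [B, np.zeros((1, 1))]])
q = np.concatenate([c, -b])
u_star = np.array([3.0, 2.0])

dt, lam = 0.005, 1.0
u = np.array([6.0, 0.0])
V = [float(np.sum((u - u_star) ** 2))]
for _ in range(4000):
    u = u + dt * lam * (np.maximum(u - (M @ u + q), 0.0) - u)
    V.append(float(np.sum((u - u_star) ** 2)))

# V decays to (almost) zero; any per-step increase is only Euler error
print(V[0], V[-1], max(np.diff(V)))
```

In continuous time $dV/dt\le 0$ here; the discrete iterates can gain at most $dt^2\|e(u)\|^2$ per step, which is why the check below allows a small slack.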
When $M$ is positive definite, by the Schwarz inequality we obtain $dV/dt\le-2\eta V$ for some constant $\eta>0$, and hence
$$\|u(t)-u^*\|\le\|u(t_0)-u^*\|e^{-\eta(t-t_0)}.$$
Therefore, the proposed neural network converges globally exponentially to the unique solution of problem (4) if $M$ is positive definite.
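The exponential rate can be observed numerically on a projection equation (4) with a positive definite $M$. The matrix $M$ below is an assumed positive definite example (its symmetric part has strictly positive eigenvalues) and $q$ is chosen so that $u^*=(1,1)>0$ is an interior equilibrium; neither is taken from the paper:

```python
import numpy as np

# Exponential convergence sketch for (5) with positive definite M.
M = np.array([[3.0, 1.0], [0.0, 2.0]])   # symmetric part is positive definite
u_star = np.array([1.0, 1.0])
q = -M @ u_star                           # makes u* an interior equilibrium

dt, lam = 0.01, 1.0
u = np.array([1.2, 1.1])
errs = [np.linalg.norm(u - u_star)]
for _ in range(500):
    u = u + dt * lam * (np.maximum(u - (M @ u + q), 0.0) - u)
    errs.append(np.linalg.norm(u - u_star))

# the error contracts at least like e^(-eta * t) for some eta > 0
print(errs[0], errs[-1])
```

Over the simulated horizon $T=5$ the error shrinks by far more than the conservative factor $e^{-T}$, consistent with exponential convergence with some degree $\eta\ge 1$ for this instance.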

Simulation example
In order to demonstrate the effectiveness and efficiency of the proposed neural network, in this section we discuss the simulation results through an example. The simulation is conducted in Matlab with an ordinary differential equation solver. Example 1 applies the network (5) to an instance of problem (1) whose exact solution is known.
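The example's problem data did not survive extraction, so the sketch below substitutes an assumed two-variable instance with a known exact solution and integrates (5) with forward Euler in Python rather than Matlab's ODE solver:

```python
import numpy as np

# Assumed example (not the paper's): minimize x1^2 + x2^2 - 2*x1 - 4*x2
# subject to x1 + x2 >= 2, x >= 0. The constraint is inactive at the
# unconstrained minimizer, so the exact solution is x* = (1, 2), y* = 0.
A = np.array([[2.0, 0.0], [0.0, 2.0]]); c = np.array([-2.0, -4.0])
B = np.array([[1.0, 1.0]]); b = np.array([2.0])
M = np.block([[A, -B.T], [B, np.zeros((1, 1))]])
q = np.concatenate([c, -b])

# forward-Euler integration of (5) from the origin
dt, lam = 0.005, 1.0
u = np.zeros(3)
for _ in range(6000):
    u = u + dt * lam * (np.maximum(u - (M @ u + q), 0.0) - u)

print(u)   # approaches the exact solution (x*, y*) = (1, 2, 0)
```

Comparing the final state of the trajectory with the known exact solution is the same kind of check the paper's Example 1 performs in Matlab.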

Conclusion
In this paper, we have presented a recurrent neural network for solving convex quadratic programming problems. On the theoretical side, we have proved that the proposed neural network converges globally to the solution set of the problem when the matrix involved is positive semi-definite, and converges exponentially to the unique solution when the matrix is positive definite. Illustrative examples further show the good performance of the proposed neural network.
Figure 1. Transient behavior of the neural network (5) in Example 1. Panels (a) and (b) show the transient behavior of the network for two different initial points.