An Application of Pontryagin's Maximum Principle in a Linear Quadratic Differential Game

Received: December 16, 2010  Accepted: January 7, 2011  doi:10.5539/jmr.v3n2p145

Abstract

This paper deals with a class of two-person zero-sum linear quadratic differential games in which the control functions of both players are subject to integral constraints. The duration of the game is fixed. We obtain the necessary conditions of the Maximum Principle and derive the optimal controls by the method of Pontryagin's Maximum Principle. Finally, we discuss an example.


Introduction
In this study, a class of two-person zero-sum linear quadratic differential games (Basar & Olsder, 1999) is considered. Pontryagin's Maximum Principle is a powerful method for the calculation of optimal controls, because it has the important advantage of not requiring a previous evaluation of the payoff functional. Here we describe this method for a non-autonomous linear quadratic differential game and illustrate its use with an example. The present paper is closely related to the following works.
A two-player linear quadratic differential game was investigated by (Jodar, 1991), where an explicit solution of a set of coupled asymmetric Riccati-type matrix differential equations was found. (Amato et al., 2002) considered a class of two-player zero-sum linear quadratic differential games and studied two cases, the finite horizon case and the infinite horizon case. In the finite horizon case, sufficient conditions for the existence of closed-loop strategies were obtained, based on the existence of the solution of suitably parameterized Riccati equations. In the infinite horizon case, the closed-loop strategies are also required to guarantee asymptotic stability of the whole system.
A problem of linear quadratic optimization was investigated by (Rozonoer, 1999), where necessary and sufficient conditions for the existence of an optimal control for all initial positions were obtained; using Pontryagin's Maximum Principle and Bellman's equations, some general hypotheses were proposed. (Sussmann and Willems, 1997) gave a historical review of the development of optimal control, tracing the necessary conditions for a minimum from the Euler-Lagrange equations through the work of Legendre and Weierstrass to, eventually, the maximum principle of optimal control theory. (Lewis, 2006) studied the linear quadratic optimal control problem for autonomous systems by the method of Pontryagin's Maximum Principle. (Mou and Yong, 2006) was devoted to a thorough review of general two-person zero-sum linear quadratic games in Hilbert spaces.
In (Wang and Yu, 2010), the Maximum Principle for a new class of non-zero-sum stochastic differential games was considered, and a necessary and sufficient condition, in the form of a Maximum Principle, for an open-loop equilibrium point of the games was established.
The above results can be obtained using two classical approaches: the Maximum Principle (Pontryagin et al., 1962) and Dynamic Programming (Bellman, 1957). Our work is based on the Maximum Principle of Pontryagin. For a comprehensive review, see (Athans & Falb, 1966).
The paper is organized as follows. In Section 2, we state the problem. In Section 3, we give the required conditions and the Theorem (fixed interval). In Section 4, we derive the main result and present an example. Finally, the conclusion is given in Section 5.

Problem formulation
We consider the linear quadratic differential game described by the state equation

x'(t) = A(t)x(t) + B(t)u_i(t), x(t_0) = x_0, i = 1, 2,  (1)

and by the payoff functional

J(u_1, u_2) = (1/2) ∫_{t_0}^{t_1} [x^T(t)Q_i(t)x(t) + u_i^T(t)R_ii(t)u_i(t)] dt,  (2)

where A(t), Q_i(t) are n × n matrices and B(t), R_ii(t) are matrices of appropriate dimensions with continuous entries.  (3)

Moreover, Q_i is assumed to be symmetric and R_ii are symmetric positive definite.
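As a concrete, hypothetical instance of the data above (the matrices A, B, Q_i, R_ii below are made-up illustrations, not taken from the paper), the symmetry and positive-definiteness assumptions can be checked numerically:

```python
import numpy as np

# Hypothetical problem data for a game with n = 2 states, m = 1 control.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])         # state matrix A(t), taken constant here
B = np.array([[0.0],
              [1.0]])               # input matrix B(t)
Q = [np.diag([1.0, 2.0]),           # Q_1: symmetric weight on the state
     np.diag([0.5, 0.5])]           # Q_2
R = [np.array([[2.0]]),             # R_11: symmetric positive definite
     np.array([[1.0]])]             # R_22

for i in range(2):
    # Q_i must be symmetric.
    assert np.allclose(Q[i], Q[i].T)
    # R_ii must be symmetric with strictly positive eigenvalues.
    assert np.allclose(R[i], R[i].T)
    assert np.all(np.linalg.eigvalsh(R[i]) > 0)
print("assumptions on Q_i and R_ii hold")
```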

Conditions And Scheme Of The Method
In this section we consider a differential game governed by the equation

x'(t) = f(x(t), u_1(t), u_2(t)),  (4)

where the function f : R^n × R^m × R^m → R^n satisfies conditions that ensure existence, uniqueness and extendability of the solution of the initial value problem for (4) with x(t_0) = x_0. The corresponding control functions u_1(t), u_2(t) of the Pursuer and the Evader, respectively, must satisfy the following constraints:
∫_{t_0}^{t_1} ||u_1(t)||^2 dt ≤ ρ^2,  ∫_{t_0}^{t_1} ||u_2(t)||^2 dt ≤ σ^2,  (5)

where ρ and σ are positive numbers.
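For a control signal sampled on a grid, an integral (energy) constraint of this kind can be checked numerically; the control u_1 below is a made-up example, and the integral is approximated by the trapezoidal rule:

```python
import numpy as np

t0, t1, rho = 0.0, 1.0, 1.0
t = np.linspace(t0, t1, 1001)
u1 = 0.8 * np.sin(np.pi * t)        # hypothetical Pursuer control, scalar case

# Trapezoidal approximation of the energy integral of u_1 over [t0, t1].
s = u1**2
dt = t[1] - t[0]
energy = dt * (s[0] / 2 + s[1:-1].sum() + s[-1] / 2)

# The constraint requires the control energy not to exceed rho**2.
print(energy <= rho**2)   # True for this choice of u1 and rho
```

Here the exact integral is 0.64 · ∫ sin^2(πt) dt = 0.32 ≤ ρ^2 = 1, so the constraint is satisfied.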

Boltyanskii Tangent Cone To A Set At A Point
Let S be an arbitrary subset of R^n and x_0 ∈ S. The vector v ∈ R^n is called a tangent vector to S at x_0 if and only if there exists a function φ : R → R^n such that x_0 + εv + φ(ε) ∈ S for all sufficiently small ε > 0 and φ(ε)/ε → 0 as ε → 0. Clearly, if v is a tangent vector to S at x_0 and k > 0, then kv is also a tangent vector to S at x_0. We denote the Boltyanskii tangent cone to the set S at the point x_0 by C_S(x_0); for simplicity, it is denoted by C. The dual cone C⊥ of a cone C in the linear space R^n is defined as the set

C⊥ = { p ∈ R^n : ⟨p, v⟩ ≤ 0 for all v ∈ C }.
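To illustrate the definition, take S to be the closed unit disk in R^2 and x_0 = (1, 0); the vector v = (0, 1) is tangent to S at x_0. The made-up numerical check below exhibits a suitable correction φ with φ(ε)/ε → 0, as the definition requires:

```python
import numpy as np

x0 = np.array([1.0, 0.0])          # boundary point of the closed unit disk S
v = np.array([0.0, 1.0])           # candidate tangent vector

def phi(eps):
    # Correction pulling x0 + eps*v back onto the disk boundary:
    # the point (sqrt(1 - eps**2), eps) has norm exactly 1.
    return np.array([np.sqrt(1 - eps**2) - 1.0, 0.0])

prev_ratio = np.inf
for eps in [0.1, 0.01, 0.001]:
    point = x0 + eps * v + phi(eps)
    assert np.linalg.norm(point) <= 1 + 1e-12   # point stays in S
    ratio = np.linalg.norm(phi(eps)) / eps      # should tend to 0
    assert ratio < prev_ratio                   # and it decreases with eps
    prev_ratio = ratio
print("v is a tangent vector to S at x0")
```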

Transversality Condition
The transversality condition is an additional necessary condition for optimality in a problem with a terminal constraint: instead of a condition of the form x(t_1) = x_1, one considers the condition x(t_1) ∈ S, where S is a given subset of R^n.

The Maximum Principle With Transversality Conditions For Fixed Time Interval Problems
We suppose that the following conditions are valid.
C1. n and m are nonnegative integers.
C2. t_0 and t_1 are real numbers such that t_0 < t_1.
C3. The following continuous functions are given: f : [t_0, t_1] × R^n × R^m × R^m → R^n and a Lagrangian L : [t_0, t_1] × R^n × R^m × R^m → R.
C4. The functions f and L are continuously differentiable, and their partial derivatives with respect to the x coordinate are continuous functions of (t, x, u_1, u_2).
C5. x_0 is a given point of R^n and S is a given subset of R^n.
C6. The set TCT of all trajectory-control triples defined on [t_0, t_1] is the set of all triples (x(·), u_1(·), u_2(·)) such that x(·) is a solution of (4) corresponding to the controls u_1(·), u_2(·) with x(t_0) = x_0.
C7. If L is a Lagrangian, the corresponding payoff functional is

J(x(·), u_1(·), u_2(·)) = ∫_{t_0}^{t_1} L(t, x(t), u_1(t), u_2(t)) dt.

C8. (x*(·), u_1*(·), u_2*(·)) belongs to OTCT, the set of all optimal trajectory-control triples of system (4).
C9. C is a Boltyanskii tangent cone to S at the point x̄ = x*(t_1).
We use the following Theorem (see Pschenichnii, 1980 and Sussmann, 2006) to prove the main result.
Theorem (fixed interval). Assume that m, n, t_0, t_1, f, L, x_0, S satisfy conditions C1-C5, that TCT and J are defined by C6-C7, that (x*, u_1*, u_2*) satisfies C8, and that C satisfies C9. Define the Hamiltonian

H(t, x, u_1, u_2, λ, λ_0) = ⟨λ, f(t, x, u_1, u_2)⟩ + λ_0 L(t, x, u_1, u_2),

where the variable λ is the costate. Then there exists a pair (λ(·), λ_0), not both zero, such that λ satisfies the adjoint equation λ'(t) = −∂H/∂x along (x*, u_1*, u_2*), together with the Hamiltonian maximization and transversality conditions.
The maximized Hamiltonian is the function defined by (vi), and the transversality condition holds: −λ(t_1) ∈ C⊥.
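In the scalar linear quadratic case, the adjoint equation and the stationarity condition produced by this theorem can be verified symbolically. The sketch below assumes the scalar Hamiltonian H = λ(ax + bu) + (1/2)(qx^2 + ru^2), which reproduces, in scalar form, the matrix equations derived in Section 4:

```python
import sympy as sp

x, u, lam, a, b, q, r = sp.symbols('x u lam a b q r')

# Scalar LQ Hamiltonian: H = lam*(a*x + b*u) + (1/2)*(q*x**2 + r*u**2).
H = lam * (a * x + b * u) + sp.Rational(1, 2) * (q * x**2 + r * u**2)

xdot = sp.diff(H, lam)        # state equation:   xdot = a*x + b*u
lamdot = -sp.diff(H, x)       # adjoint equation: lamdot = -q*x - a*lam
stationarity = sp.diff(H, u)  # stationarity:     r*u + b*lam = 0

assert sp.simplify(xdot - (a * x + b * u)) == 0
assert sp.simplify(lamdot - (-q * x - a * lam)) == 0
assert sp.simplify(stationarity - (r * u + b * lam)) == 0
print("Hamilton's equations match the scalar LQ optimality system")
```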
The statement of the problem considered in the Proposition follows from the Theorem (fixed interval) (see Athans & Falb, 1966). The sets S are submanifolds. Part (vi) of the Theorem (fixed interval) gives the transversality conditions when the initial and final points are constrained to lie in subsets.
In the Proposition, the problem being solved is a very particular case. The initial constraint set is a point, i.e., S_0 = {x_0}. The final condition, however, is completely unconstrained, i.e., S_1 = R^n. Applying the transversality condition, the adjoint vector must vanish at the final time, since it must annihilate every vector in R^n.
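The claim that the adjoint must vanish can be checked directly: for any p ≠ 0 the vector v = p itself lies in C = R^n and gives ⟨p, v⟩ = ||p||^2 > 0, so no nonzero p can satisfy the dual-cone inequality. A minimal numeric illustration of this argument:

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(100):
    p = rng.normal(size=3)          # random candidate adjoint value in R^3
    if np.linalg.norm(p) > 0:
        v = p                       # v lies in C = R^3
        # <p, v> = ||p||^2 > 0, so p violates the dual-cone inequality.
        assert p @ v > 0
print("only p = 0 lies in the dual cone of R^n")
```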

Main Result
Proposition (The Maximum Principle for a Linear Quadratic Differential Game). Let (1)-(3) be the linear quadratic differential game.

Proof. By the transversality conditions of the Maximum Principle we have λ_i*(t_1) = 0, i = 1, 2. By the Theorem (fixed interval), the Hamiltonian is

H_i = λ_i^T (A(t)x + B(t)u_i) + (1/2)(x^T Q_i(t)x + u_i^T R_ii(t)u_i),

which is a quadratic function of u_i; since R_ii(t) is definite, the unique extremum occurs at the point where the derivative of the Hamiltonian with respect to u_i vanishes. Hamilton's equations are

x'_j = ∂H_i/∂λ_ij,  λ'_ik = −∂H_i/∂x_k,  ∂H_i/∂u_is = 0,

where i = 1, 2, j, k = 1, 2, ..., n, s = 1, 2, ..., m. In matrix form, the three equations for the optimal control behavior are

x' = A(t)x + B(t)u_i,
λ'_i = −Q_i(t)x − A^T(t)λ_i(t),
0 = R_ii(t)u_i + B^T(t)λ_i(t), i = 1, 2.
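Combining the three matrix equations in the scalar case (n = m = 1) with the free-endpoint condition λ_i(t_1) = 0 gives a linear two-point boundary value problem. The sketch below, with made-up data a = b = q = r = 1, eliminates u via u = −bλ/r and solves the resulting Hamiltonian system in closed form through the eigendecomposition of its coefficient matrix:

```python
import numpy as np

a, b, q, r = 1.0, 1.0, 1.0, 1.0    # hypothetical scalar problem data
t0, t1, x0 = 0.0, 1.0, 1.0
T = t1 - t0

# Eliminating u = -b*lam/r turns the optimality system into
#   d/dt [x, lam] = M [x, lam].
M = np.array([[a, -b**2 / r],
              [-q, -a]])

# Closed-form transition matrix Phi = exp(M*T) via eigendecomposition.
w, V = np.linalg.eig(M)
Phi = (V @ np.diag(np.exp(w * T)) @ np.linalg.inv(V)).real

# Shooting: choose lam(t0) so that the transversality condition
# lam(t1) = 0 holds:  Phi[1,0]*x0 + Phi[1,1]*lam0 = 0.
lam0 = -Phi[1, 0] * x0 / Phi[1, 1]
xT, lamT = Phi @ np.array([x0, lam0])

u0 = -b * lam0 / r                 # optimal control at t0 from 0 = r*u + b*lam
print(abs(lamT) < 1e-9)            # transversality satisfied: True
```

Since the Hamiltonian system is linear, a single shooting step suffices; for time-varying A(t), B(t), Q_i(t), R_ii(t) one would integrate the system numerically instead.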