A New Method for ARMA Model Parameter Estimation

Xiaoqin Cao (Corresponding author), Rui Shan, Jing Fan & Peiliang Li
Department of Science, Yanshan University, Qinhuangdao 066004, China
E-mail: fanjing850328@163.com, srlhy@hotmail.com

Abstract
The estimation of ARMA model parameters is in essence a least-squares problem. In an ARMA model the residuals are computed recursively from the given time series, so the time series and the parameters enter the objective nonlinearly, and the derivatives of the objective function are difficult to obtain in closed form. This paper substitutes differences for derivatives and thereby computes approximations of the first and second derivatives of the objective function. Finally we prove that, under suitable hypotheses, the proposed algorithm converges globally.


Introduction
For a nonlinear least-squares problem, we minimize the residuals $\varepsilon_t$ in the sum-of-squares sense, and we can take full advantage of the special structure of the problem to design more effective methods, for example the Gauss-Newton method, the damped Gauss-Newton method and the Levenberg-Marquardt (LM) method. However, the Gauss-Newton method uses only first-derivative information of the function to approximate the Hessian matrix and neglects the second-order terms of $\nabla^2 S(\beta)$, so its performance and convergence are inevitably affected. Because of these problems, we present a new method.
In an ARMA model the residual $\varepsilon_t$ is computed recursively from $x_{t-1}, x_{t-2}, \ldots$, so $X_t$ and $\beta$ enter the objective nonlinearly. When only the form of the ARMA model is known, the derivatives $\partial \varepsilon_t / \partial \beta$ cannot be expressed directly. We therefore substitute differences for derivatives and then calculate approximations of the gradient and of the Hessian matrix of the objective function.
Trust region methods are a class of optimization techniques that guarantee global convergence. Although the trust region idea can be traced back to Levenberg (1944), Marquardt (1963) and Goldfeld, Quandt & Trotter (1966), the modern trust region method was established by Powell (1970), who clearly posed the trust region subproblem and proved a convergence theorem. Trust region methods have notable advantages over line search methods. We make a small improvement to the traditional trust region method: taking the degree of nonlinearity of the least-squares problem into account, we correct the Hessian approximation of the objective function in two respects so that the algorithm has better properties.

Objective function
For the ARMA(n, m) model

$$x_t = \varphi_1 x_{t-1} + \cdots + \varphi_n x_{t-n} + \varepsilon_t - \theta_1 \varepsilon_{t-1} - \cdots - \theta_m \varepsilon_{t-m}, \qquad (1)$$

write (1) in matrix form as $x_t = f(X_t, \beta) + \varepsilon_t$, where $X_t = (x_{t-1}, \ldots, x_{t-n}, \varepsilon_{t-1}, \ldots, \varepsilon_{t-m})^{T}$ and $\beta = (\varphi_1, \ldots, \varphi_n, \theta_1, \ldots, \theta_m)^{T}$.

Modern Applied Science, May 2009

The so-called parameter estimation is to select appropriate model parameters $\beta$ so that the residual sum of squares of the model,

$$S(\beta) = \sum_{t} \varepsilon_t^2 = \sum_{t} \bigl[x_t - f(X_t, \beta)\bigr]^2,$$

is minimal. For the ARMA(n, m) model, because $f(X_t, \beta)$ is a nonlinear function, this is a nonlinear least-squares problem; we can adopt a variety of iterative methods from optimization theory to solve it and finally obtain the model parameters $\beta$ that minimize the objective function $S(\beta)$.
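As a concrete illustration (not part of the paper), the objective $S(\beta)$ can be evaluated by computing the residuals recursively from the observed series; the function names and the zero pre-sample convention are assumptions of this sketch:

```python
import numpy as np

def arma_residuals(x, phi, theta):
    """Recursively compute ARMA(n, m) residuals
    eps_t = x_t - sum_i phi_i x_{t-i} + sum_j theta_j eps_{t-j},
    taking pre-sample values of x and eps to be zero (an assumption;
    the paper determines the initial residuals separately)."""
    n, m = len(phi), len(theta)
    eps = np.zeros(len(x))
    for t in range(len(x)):
        ar = sum(phi[i] * x[t - 1 - i] for i in range(n) if t - 1 - i >= 0)
        ma = sum(theta[j] * eps[t - 1 - j] for j in range(m) if t - 1 - j >= 0)
        eps[t] = x[t] - ar + ma
    return eps

def S(beta, x, n, m):
    """Objective function: residual sum of squares S(beta)."""
    phi, theta = beta[:n], beta[n:n + m]
    eps = arma_residuals(x, phi, theta)
    return float(eps @ eps)
```

For a series generated exactly by $x_t = 0.5\,x_{t-1}$ with no noise, every residual after the first is zero, so $S$ reduces to the squared first observation.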

Determination of initial value
For the iterative calculation of the nonlinear least-squares problem, initial values must be supplied.

Determination of the initial parameter value $\beta^{0}$
The selection of the initial parameter value $\beta^{0}$ is extremely important, since it directly affects the convergence speed of the iterative calculation; this paper uses the long autoregression model AR(p) with $p = n + m$. Writing the AR(p) model as $\pi(B)\,x_t = \varepsilon_t$ with $\pi(B) = 1 - \sum_{k=1}^{p} \pi_k B^k$, the transfer function of the equivalent system described by AR(p) is

$$G(B) = \pi(B)^{-1}. \qquad (4)$$

The transfer function of the equivalent system described by the ARMA(n, m) model $\varphi(B)\,x_t = \theta(B)\,\varepsilon_t$, where $\varphi(B) = 1 - \sum_{i=1}^{n} \varphi_i B^i$ and $\theta(B) = 1 - \sum_{j=1}^{m} \theta_j B^j$, is

$$G(B) = \varphi(B)^{-1}\theta(B). \qquad (5)$$

Because the transfer functions describe the same system, (4) and (5) should be equal, namely

$$\varphi(B) = \theta(B)\,\pi(B). \qquad (6)$$

Comparing the coefficients of the same powers of $B$ on both sides of (6), we have

$$\varphi_k = \pi_k + \theta_k - \sum_{j=1}^{k-1} \theta_j \pi_{k-j}, \qquad k = 1, 2, \ldots, n+m, \qquad (7)$$

with $\varphi_k = 0$ for $k > n$ and $\theta_j = 0$ for $j > m$. The first $n$ equations of (7),

$$\varphi_i^{0} = \pi_i + \theta_i^{0} - \sum_{j=1}^{i-1} \theta_j^{0} \pi_{i-j}, \qquad i = 1, \ldots, n, \qquad (8)$$

are, once the $\theta_j^{0}$ are known, linear equations in the $\varphi_i^{0}$, so the $\varphi_i^{0}$ can easily be obtained. For the last equations of (7), setting $k = n+1, n+2, \ldots, n+m$ with $p = n + m$ and writing them in matrix form,

$$\theta_k^{0} - \sum_{j=1}^{k-1} \theta_j^{0} \pi_{k-j} = -\pi_k, \qquad k = n+1, \ldots, n+m, \qquad (9)$$

these are linear equations in the $\theta_j^{0}$, which can easily be solved. Therefore we first solve (9) to obtain the $\theta_j^{0}$, then solve (8) to obtain the $\varphi_i^{0}$; this is the calculation principle of the long autoregression model.
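The long-autoregression principle can be sketched as follows; the ordinary-least-squares AR(p) fit and all function names here are illustrative assumptions, not prescriptions of the paper:

```python
import numpy as np

def long_ar_initial(x, n, m):
    """Initial ARMA(n, m) parameter values via a long AR(p) fit,
    p = n + m: first solve the linear system (9) for theta0, then
    recover phi0 from (8).  The OLS fit of the AR coefficients is
    a standard choice, assumed here."""
    p = n + m
    x = np.asarray(x, dtype=float)
    # Fit AR(p) by least squares: x_t = sum_k pi_k x_{t-k} + e_t
    X = np.column_stack([x[p - k:len(x) - k] for k in range(1, p + 1)])
    pi = np.linalg.lstsq(X, x[p:], rcond=None)[0]
    # Polynomial with leading 1: pi(B) = 1 - sum_k pi_k B^k
    pi_poly = np.concatenate(([1.0], -pi))
    # System (9): the coefficient of B^k in theta(B)pi(B) vanishes
    # for k = n+1, ..., n+m, since phi_k = 0 there.
    A = np.zeros((m, m)); b = np.zeros(m)
    for r in range(m):
        k = n + 1 + r
        b[r] = pi_poly[k]
        for j in range(1, m + 1):
            if 0 <= k - j <= p:
                A[r, j - 1] = pi_poly[k - j]
    theta0 = np.linalg.solve(A, b)
    # System (8): phi0 from the coefficients of B^1..B^n in theta(B)pi(B)
    theta_poly = np.concatenate(([1.0], -theta0))
    phi_poly = np.convolve(theta_poly, pi_poly)[:n + 1]
    phi0 = -phi_poly[1:]
    return phi0, theta0
```

On a long simulated ARMA(1, 1) series the recovered $\varphi^{0}, \theta^{0}$ land near the true values, which is all an initial guess needs to do.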

Determination of the initial residual values $\varepsilon_t$
The initial residual values can be determined by the following recursion: with the parameters fixed at $\beta^{0}$, set the pre-sample values of $x_t$ and $\varepsilon_t$ to zero and compute

$$\varepsilon_t = x_t - \sum_{i=1}^{n} \varphi_i^{0} x_{t-i} + \sum_{j=1}^{m} \theta_j^{0} \varepsilon_{t-j}, \qquad t = 1, 2, \ldots, N.$$

The calculation of the gradient $g(\beta)$ and the Hessian matrix $B(\beta)$

We conduct a Taylor expansion of $x_t$ in the ARMA model at $\beta^{0}$. The derivatives $\partial \varepsilon_t / \partial \beta$ depend on $x_{t-1}, x_{t-2}, \ldots$ and on $\varepsilon_{t-1}, \varepsilon_{t-2}, \ldots$, which cannot be expressed explicitly when only the form of the ARMA model is known; therefore the gradient $g(\beta)$ and the Hessian matrix $B(\beta)$ cannot be calculated directly. We substitute differences for derivatives to calculate approximations of $g(\beta)$ and $B(\beta)$. If the degree of nonlinearity of the problem is relatively high, we approximate the Hessian matrix by the BFGS formula; if the degree of nonlinearity is relatively low, we approximate the Hessian matrix from the first derivatives alone.
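A minimal sketch of the difference-for-derivative substitution, assuming a forward-difference step $h$ and, for the low-nonlinearity case, the first-derivative (Gauss-Newton) forms $g = 2J^T\varepsilon$ and $B \approx 2J^TJ$; the names and the step size are illustrative choices:

```python
import numpy as np

def residual_jacobian_fd(resid_fn, beta, h=1e-6):
    """Forward-difference Jacobian of the residual vector eps(beta),
    replacing the derivatives that have no closed form.
    resid_fn(beta) -> array of residuals; h is the difference step
    (a typical value, assumed here)."""
    eps0 = resid_fn(beta)
    J = np.zeros((len(eps0), len(beta)))
    for i in range(len(beta)):
        b = beta.copy()
        b[i] += h
        J[:, i] = (resid_fn(b) - eps0) / h
    return eps0, J

def gradient_and_gn_hessian(resid_fn, beta):
    """Gradient g = 2 J^T eps and first-derivative Hessian
    approximation B = 2 J^T J of S(beta) = eps^T eps; for strongly
    nonlinear problems the paper maintains B by BFGS updates instead."""
    eps, J = residual_jacobian_fd(resid_fn, np.asarray(beta, dtype=float))
    g = 2.0 * J.T @ eps
    B = 2.0 * J.T @ J
    return g, B
```

On a linear residual such as $\varepsilon(\beta) = (\beta_1 - 1,\; 2\beta_2)$ the finite-difference Jacobian is exact up to the step size, which makes the sketch easy to check by hand.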

Trust region method
The trust region method is an iterative method; at each iteration we solve the trust region subproblem

$$\min_{d} \; q_k(d) = g_k^T d + \frac{1}{2} d^T B_k d \quad \text{s.t.} \quad \|d\| \le \Delta_k, \qquad (18)$$

or an approximation of it, where $\Delta_k$ is the trust region radius.
At the iterate $\beta_k$ we solve the trust region subproblem to obtain the trial step $d_k$. We consider the actual reduction

$$\mathrm{Ared}_k = S(\beta_k) - S(\beta_k + d_k)$$

and the predicted reduction

$$\mathrm{Pred}_k = q_k(0) - q_k(d_k),$$

and define their ratio

$$r_k = \frac{\mathrm{Ared}_k}{\mathrm{Pred}_k}.$$

The trust region method adjusts the radius $\Delta_k$ according to the information in this ratio.

Algorithm
Step 1. Given the time series $\{x_t\}$ $(t = 1, 2, \ldots, N)$, the model orders $n, m$ and the initial value of the model parameters $\beta_0$, choose an initial trust region radius $\Delta_0 > 0$ and constants $0 < c_1 < c_2 < 1$; set $k = 0$.
Step 2. Calculate $\{\varepsilon_t\}$, the gradient $g_k$ and the Hessian approximation $B_k$ at $\beta_k$.
Step 3. Solve the subproblem (18) and obtain $d_k$.
Step 4. Calculate the ratio $r_k$. If $r_k < c_1$, reduce the radius, $\Delta_k := \Delta_k / 2$, and go to Step 3; else if $c_1 \le r_k < c_2$, set $\beta_{k+1} = \beta_k + d_k$ and $\Delta_{k+1} = \Delta_k$, and go to Step 5; else set $\beta_{k+1} = \beta_k + d_k$ and $\Delta_{k+1} = 2\Delta_k$, and go to Step 5.
Step 5. If $\|g(\beta_{k+1})\|$ is sufficiently small, stop; else go to Step 6.
Step 6. Set $k := k + 1$ and go to Step 2.
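The steps above can be sketched as a single loop; here the subproblem (18) is solved approximately by a Newton step scaled back to the trust region boundary, and the constants `eta1`, `eta2` and the radius factors are typical values rather than the paper's:

```python
import numpy as np

def trust_region_minimize(S, grad_hess, beta0, delta0=1.0,
                          eta1=0.25, eta2=0.75, tol=1e-6, max_iter=200):
    """Sketch of the trust-region iteration.  S(beta) is the objective,
    grad_hess(beta) returns (g_k, B_k); eta1, eta2 and the factors
    0.5 / 2.0 are common choices, assumed here."""
    beta, delta = np.asarray(beta0, dtype=float), delta0
    for _ in range(max_iter):
        g, B = grad_hess(beta)
        if np.linalg.norm(g) < tol:          # Step 5: convergence test
            break
        # Step 3: approximate solution of (18) -- Newton step,
        # scaled back to the boundary if it leaves the region.
        try:
            d = np.linalg.solve(B, -g)
        except np.linalg.LinAlgError:
            d = -g
        if np.linalg.norm(d) > delta:
            d = d * (delta / np.linalg.norm(d))
        # Step 4: actual vs. predicted reduction and radius update
        ared = S(beta) - S(beta + d)
        pred = -(g @ d + 0.5 * d @ B @ d)
        r = ared / pred if pred > 0 else -1.0
        if r < eta1:
            delta *= 0.5                      # poor model: shrink, reject
        else:
            beta = beta + d                   # accept the trial step
            if r > eta2:
                delta *= 2.0                  # very good model: expand
    return beta
```

On a quadratic objective the model $q_k$ is exact, so every step is accepted, the radius grows, and the iteration reaches the minimizer in a handful of steps.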

The analysis of the convergence property
For the convergence analysis we make the following assumptions:
1) The level set $L(\beta_0) = \{\beta : S(\beta) \le S(\beta_0)\}$ is bounded.
2) The Hessian approximation matrices $B_k$ are uniformly bounded in norm, namely there exists $M > 0$ such that $\|B_k\| \le M$ for all $k$.
3) The function $S$ is twice continuously differentiable and bounded below on the level set.
Lemma 1. If $d_k$ is the solution of (18), then it satisfies

$$q_k(0) - q_k(d_k) \ge \frac{1}{2}\,\|g_k\| \min\!\left\{\Delta_k, \frac{\|g_k\|}{\|B_k\|}\right\}.$$

Lemma 2. Let $\{\beta_k\}$ be the iterative sequence generated by the algorithm; then $\{S(\beta_k)\}$ is monotonically non-increasing.

Proof. A trial step is accepted only when the ratio $r_k \ge c_1 > 0$, and by Lemma 1 the predicted reduction is positive, so every accepted step satisfies $S(\beta_{k+1}) \le S(\beta_k)$; on rejected steps $\beta_{k+1} = \beta_k$ and $S$ is unchanged. Moreover, by the mean value theorem there exists a point $\xi_k$ on the segment from $\beta_k$ to $\beta_k + d_k$ with $S(\beta_k + d_k) - S(\beta_k) = \nabla S(\xi_k)^T d_k$, and by the Lipschitz condition on $\nabla S$ the difference between the actual and predicted reductions is $o(\|d_k\|)$ as $\Delta_k \to 0$, so the radius cannot be reduced indefinitely without a step being accepted. Hence $\{S(\beta_k)\}$ is monotonically non-increasing.

Next we analyze the global convergence of this algorithm.

Theorem 1. If the assumptions hold, then the sequence generated by the algorithm satisfies

$$\liminf_{k \to \infty} \|g_k\| = 0.$$

Proof. Assume the conclusion does not hold; that is, there exists $\varepsilon > 0$ such that $\|g_k\| \ge \varepsilon$ for all $k$. Then by Lemma 1 every successful iteration reduces $S$ by at least $\frac{c_1}{2}\,\varepsilon \min\{\Delta_k, \varepsilon / M\}$. Since $S$ is bounded below and $\{S(\beta_k)\}$ is non-increasing, the total reduction is finite, which forces $\Delta_k \to 0$; but as $\Delta_k \to 0$ the ratio $r_k \to 1$ and the radius is no longer reduced, a contradiction. This proves the theorem.