Asymptotic Properties for Weighted Averages of Longitudinal Dependent Data

Brahima Soro1,2, Ouagnina Hili2 & Youssouf Diagana1
1 Laboratoire de Mathématiques, Informatique et Applications, Université Nangui Abrogoua, 02 BP 801 Abidjan 02, Côte d’Ivoire
2 Laboratoire de Mathématiques et des Nouvelles Technologies de l’Information, Institut National Polytechnique Félix Houphouët-Boigny de Yamoussoukro, Côte d’Ivoire
Correspondence: Ouagnina Hili, BP 1911 Yamoussoukro, Côte d’Ivoire. E-mail: o_hili@yahoo.fr


Introduction
We present a set of asymptotic normality results for real-valued functions formed by weighted averages of longitudinal data. Since the most common nonparametric kernel-type estimators can be written as kernel weighted averages, our results cover a large class of estimators (Nadaraya, 1964; Watson, 1964; Stone, 1977; Müller, 1984).

Recently, Yao (2007) gave general normality results for functions of kernel averages formed by independent longitudinal data. He applied his general result to a covariance function estimator to derive its asymptotic distribution. Soro & Hili (2012) generalized the results of Yao (2007) to the three-dimensional context; their data were likewise independent.

In this paper, we extend the two-dimensional general result of Yao (2007) to dependent longitudinal data. Our main results are the asymptotic normality of sample averages of functions formed by longitudinal data, where the data are assumed to be strongly mixing. The results we provide apply in particular to the covariance function kernel estimator, whose asymptotic distribution we derive under alpha-mixing conditions.

In Section 2, we introduce the model and the assumptions needed to derive the main results of this paper. Section 3 presents the main results of the paper.

Model and Some Assumptions
Let {(X_{ir}, U_{ir}, T_{ir}), 1 ≤ i ≤ n, 1 ≤ r ≤ N} be n × N random variables, identically distributed as the random triple (X, U, T) with values in R × R × T, where T = [0, T] with T < ∞.
For a multi-index of integers λ = (λ_1, λ_2), with |λ| = λ_1 + λ_2, we consider a model for repeated measurements, typically used for longitudinal data: (1)

In model (1), U_{ir} is the r-th observation of the random variable X_i, made at the random time T_{ir}. Assume that:
• the number of observations N(n) depends on the sample size n; for simplicity, N(n) will be denoted N.
• X takes values in a probability space (Ω, A, P) whereas U is a real random variable.
• the observation times T_{ir} are i.i.d. with marginal density f_0(t).
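The design above can be made concrete with a short simulation. Model (1) itself did not survive extraction, so the sketch below assumes a standard additive-noise reading, U_{ir} = X_i(T_{ir}) + ε_{ir}; the uniform observation times, the sine-shaped curves X_i and the noise level are illustrative assumptions, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reading of model (1): U_ir = X_i(T_ir) + eps_ir (an assumption).
n, N, T = 50, 10, 1.0

# i.i.d. observation times with marginal density f0 = Uniform(0, T)
Tmat = rng.uniform(0.0, T, size=(n, N))

# Latent random curves X_i: here X_i(t) = A_i * sin(2*pi*t), with A_i ~ N(0, 1)
A = rng.normal(size=(n, 1))
eps = 0.1 * rng.normal(size=(n, N))
U = A * np.sin(2 * np.pi * Tmat) + eps  # observed responses U_ir

print(U.shape)  # n subjects, N repeated measurements each
```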
For r ≠ s and (r, s) ≠ (r′, s′), we define the joint probability densities as follows.
Let f_1(t_1, t_2) be the joint density of (T_r, T_s), g(t, u) the density of (T_r, U_s), g_1(t_1, t_2, u_1, u_2) the joint density of (T_r, T_s, U_r, U_s), and g_2 the corresponding joint density when (r, s) ≠ (r′, s′). We consider averages of longitudinal data of the form

Θ_{n,ℓ}(t_1, t_2) = (1 / (n N(N−1) h_{n,K}^{|λ|+2})) Σ_{i=1}^{n} Σ_{1 ≤ r ≠ s ≤ N} K^{(λ)}((T_{ir} − t_1)/h_{n,K}, (T_{is} − t_2)/h_{n,K}) ψ_ℓ(T_{ir}, T_{is}, U_{ir}, U_{is}),   (2)

for 1 ≤ ℓ ≤ L, where h_{n,K} is a bandwidth, K : R² −→ R is a kernel function and the ψ_ℓ : R⁴ −→ R are real functions. Let N_{(t_1,t_2)} be a neighborhood of (t_1, t_2) ∈ [0, T]². We now introduce the basic assumptions needed to derive the main result of this paper.
(H1) (i) The kernel K is symmetric with compact support. (ii) The kernel K is bounded by C, where C is a non-null constant.
(H2) The bandwidth h_{n,K} satisfies h_{n,K} −→ 0 and n h_{n,K}^{|λ|+2} −→ +∞, where a is a positive constant, as n −→ +∞.
The collection {ψ_ℓ}_{ℓ=1,...,L} of real functions ψ_ℓ : R⁴ −→ R satisfies: Let F_a^b be the sigma-algebra generated by the random variables {X_i, Y_{i.}}_{i=a}^{b}. The stationary process {X_i, Y_{i.}} is called strongly mixing (Rosenblatt, 1956) if

α(k) = sup_{A ∈ F_1^m, B ∈ F_{m+k}^{∞}} |P(A ∩ B) − P(A)P(B)| −→ 0 as k −→ +∞.

Comments on the assumptions. Assumptions (H1) and (H2) are technical conditions for the proofs. Assumptions (H3) and (H4) are regularity conditions on the joint probability densities. Assumption (H6) is the mixing condition satisfied by the process.
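Strong mixing is easiest to picture on a concrete process. A stationary Gaussian AR(1) is a textbook example of a strongly mixing sequence, with geometrically decaying α-coefficients; the simulation below is our own illustration (not from the paper) and visualizes the geometric decay through the empirical autocorrelation.

```python
import numpy as np

def simulate_ar1(n, phi=0.5, seed=0):
    """Stationary Gaussian AR(1): a classical example of a strongly
    (alpha-)mixing sequence, with alpha(k) decaying geometrically."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = rng.normal(scale=1.0 / np.sqrt(1 - phi**2))  # stationary start
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

def acf(x, k):
    """Empirical lag-k autocorrelation."""
    x = x - x.mean()
    return float(np.dot(x[:-k], x[k:]) / np.dot(x, x))

x = simulate_ar1(5000)
# For AR(1) the autocorrelation decays like phi**k, mirroring the
# geometric decay of the mixing coefficients.
print(acf(x, 1), acf(x, 5))
```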

Main Results: Consistency and Asymptotic Normality of Kernel Averages
Before establishing the main results of the paper, we first prove the consistency and the asymptotic normality of weighted averages (2).
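As a concrete illustration of the weighted averages (2), here is a minimal sketch. The displayed form of (2) is not fully recoverable from the text, so this assumes the base case λ = (0, 0) with normalisation n N(N−1) h², and a product Epanechnikov kernel (symmetric with compact support, consistent with (H1)(i)); the function names and all numerical choices are ours.

```python
import numpy as np

def epanechnikov2(u, v):
    """Product Epanechnikov kernel: symmetric, compact support [-1, 1]^2."""
    k = lambda x: 0.75 * (1 - x**2) * (np.abs(x) <= 1)
    return k(u) * k(v)

def theta_hat(t1, t2, Tmat, U, h, psi):
    """Kernel weighted average of longitudinal data at (t1, t2).

    Assumed form (base case lambda = (0, 0)): average over subjects i and
    index pairs r != s of
        K((T_ir - t1)/h, (T_is - t2)/h) * psi(T_ir, T_is, U_ir, U_is),
    normalised by n * N * (N - 1) * h**2.
    """
    n, N = Tmat.shape
    total = 0.0
    for i in range(n):
        for r in range(N):
            for s in range(N):
                if r == s:
                    continue
                w = epanechnikov2((Tmat[i, r] - t1) / h, (Tmat[i, s] - t2) / h)
                total += w * psi(Tmat[i, r], Tmat[i, s], U[i, r], U[i, s])
    return total / (n * N * (N - 1) * h**2)

# Example: psi(t1, t2, u1, u2) = u1 * u2 gives a raw covariance-type average.
rng = np.random.default_rng(1)
Tmat = rng.uniform(0, 1, size=(40, 8))
U = rng.normal(size=(40, 8))
val = theta_hat(0.5, 0.5, Tmat, U, h=0.3, psi=lambda a, b, u1, u2: u1 * u2)
print(val)
```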

Consistency of Kernel Averages
In this part of the paper, we establish the consistency of (2). The result is given in the following theorem.
Proof. To establish the consistency of (8), we consider the following decomposition. We recall that when (9) goes to zero (in probability), the consistency of Θ_{n,ℓ}(t_1, t_2) follows.

• Let us prove that the second term in (9) goes to 0 as n goes to +∞.

• Now, we prove that Var(Θ_{n,ℓ}(t_1, t_2)) −→ 0. Using the definition of the variance, the variance splits into two terms, I_1 and I_2. Concerning I_1, a change of variables gives the required bound; given that the triples {Y_{ij}, Y_{ik}, Y_{il}} and {Y_{ij′}, Y_{ik′}, Y_{il′}} are independent and equidistributed, we can rewrite the covariance terms accordingly.

Let Λ_n be a positive sequence tending to ∞, to be specified later on. Split (14) into two separate summations, over index pairs in S and not in S, denoted I_21 and I_22 respectively.

For (16), using Hölder's inequality, we bound |Cov(·, ·)|. Since Card(S) ≤ nΛ_n, taking Λ_n = (ln ln n)² ln n and h_{n,K} = ln ln n / ln n in (18), one obtains (19), so I_21 −→ 0.

Turn to I_22. Applying Davydov's lemma (see Hall & Heyde, Corollary A.2) and assumption (H6), and using (21) after reducing the double sum above to a single sum, it follows that, since δ > 2 and hence 2 − 1/δ > 0, and since n h_{n,K}^{|λ|+2} −→ ∞ by (H2), applying (H6) in (22) gives I_22 −→ 0.

The theorem then follows by combining (13), (19) and (23). This concludes the proof of Theorem 1. □
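The split of the covariance sum over near-diagonal and far index pairs can be made concrete. Assuming the reading S = {(i, i′) : i < i′, i′ − i ≤ Λ_n} for the garbled display in the proof, the cardinality bound Card(S) ≤ nΛ_n is immediate; the formula for Λ_n below follows one plausible reading of the text.

```python
import math

def near_diagonal_pairs(n, lam):
    """Index pairs (i, i') with i < i' and i' - i <= lam: the set S used to
    split the covariance sum in the proof of Theorem 1 (assumed reading)."""
    return [(i, j) for i in range(1, n + 1)
                   for j in range(i + 1, n + 1) if j - i <= lam]

n = 200
# One reading of the garbled choice Lambda_n = (ln ln n)^2 ln n (assumption)
lam = (math.log(math.log(n)))**2 * math.log(n)
S = near_diagonal_pairs(n, lam)
print(len(S), n * lam)  # Card(S) <= n * Lambda_n
```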

Asymptotic Normality of Kernel Averages
Here, we give the asymptotic normality of (2) in the following theorem.
We have, where Γ^ℓ_{i,r,s} is defined in (12).

We now introduce Bernstein's big-block and small-block decomposition. We partition the set {1, 2, ..., n} into 2k_n + 1 subsets, with large blocks of size u_n and small blocks of size v_n, where ⌊·⌋ denotes the integer part. Using (H2), one obtains the rate conditions on u_n, v_n and k_n needed below.

Let U_m, V_m and W_m denote the sums of the summands over the m-th large block, the m-th small block and the remainder block, respectively. Then we obtain the decomposition Z_n = S_{n,1} + S_{n,2} + S_{n,3}.

Now, let us start the proof of Theorem 2. The main idea is to show the following, as n −→ ∞.

Remark: Relations (32) and (33) imply that S_{n,2} and S_{n,3} are asymptotically negligible; (34) and (35) show that the summands {U_m} in S_{n,1} are asymptotically independent, with the sum of their variances tending to ϑ_ℓ(t_1, t_2). Expression (36) is the Lindeberg-Feller condition for the asymptotic normality of S_{n,1} under dependence. The asymptotic normality of Z_n is a consequence of equations (34) and (35).

• Proof of (32). Concerning J_1, using second-order stationarity and the fact that {Ξ_{irs}} and {Ξ_{ir′s′}} are independent, (39) becomes a sum of three terms, which we bound in turn. First, we bound the variance term; secondly, the covariance term; thirdly, replacing (41) and (42) in (40), it follows that J_1 in (38) satisfies J_1 −→ 0, by (26). (44) We then reduce the sums and write (45). Combining (44) and (45), it follows that E[S_{n,2}²] −→ 0 and hence S_{n,2} −→ 0 in probability.
This completes the proof of (32).
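The big-block/small-block partition used above can be sketched directly. This is our own illustration of Bernstein's method: alternating blocks of sizes u_n and v_n plus one remainder block, which together partition {1, ..., n}.

```python
def bernstein_blocks(n, u, v):
    """Partition {1,...,n} into alternating big blocks (size u) and small
    blocks (size v), plus one remainder block, as in Bernstein's method."""
    k = n // (u + v)  # number of big/small block pairs, k_n
    big, small = [], []
    for m in range(k):
        start = m * (u + v) + 1
        big.append(list(range(start, start + u)))
        small.append(list(range(start + u, start + u + v)))
    rest = list(range(k * (u + v) + 1, n + 1))  # remainder block
    return big, small, rest

big, small, rest = bernstein_blocks(n=100, u=7, v=3)
flat = [i for b in big for i in b] + [i for s in small for i in s] + rest
assert sorted(flat) == list(range(1, 101))  # exact partition of {1,...,100}
print(len(big), len(small), len(rest))
```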
• Proof of (36). We first establish the asymptotic normality (37) in the particular case where ψ_ℓ is bounded. The case of a possibly unbounded ψ_ℓ is then handled by a truncation argument. Let τ_n be a fixed truncation point. We can replace ψ_ℓ(T_{ir}, T_{is}, U_{ir}, U_{is}) with the truncated process.
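The exact form of the truncated process did not survive extraction; one common choice, zeroing out ψ_ℓ wherever |ψ_ℓ| exceeds τ_n so that the truncated summand is bounded, is sketched below as an assumption.

```python
import numpy as np

def truncate(psi_values, tau):
    """One common truncation (an assumed reading): keep psi where
    |psi| <= tau, zero it out otherwise, so the result is bounded by tau."""
    psi_values = np.asarray(psi_values, dtype=float)
    return np.where(np.abs(psi_values) <= tau, psi_values, 0.0)

vals = np.array([0.5, -3.0, 1.2, 10.0])
tr = truncate(vals, tau=2.0)
print(tr)  # values exceeding tau in absolute value are set to 0
```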
Lemma (see the appendix in Masry (2005)). Note that U_m is F_{i_1}^{i_{u_n}}-measurable, with i_1 = m(u_n + v_n) + 1 and i_{u_n} = m(u_n + v_n) + u_n. Taking V_m = exp(iuU_m) as in the lemma of Volkonskii & Rozanov, we obtain (34).

• Proof of (35). Replacing u_n by v_n yields the small-block analogue. We have

Var(U_m) = Var( Σ_{i=m(u_n+v_n)+1}^{m(u_n+v_n)+u_n} Ξ_{n,i} ) = Σ_{i} Var(Ξ_{n,i}) + Σ_{i ≠ i′} Cov(Ξ_{n,i}, Ξ_{n,i′}) = u_n ϑ_ℓ(t_1, t_2)(1 + o(1)).