On Second-order Approximations to the Risk in Estimating the Exponential Mean by a Two-stage Procedure

We consider the problem of minimum risk point estimation of the mean of an exponential distribution under the assumption that the mean exceeds some positive known number. For this problem Mukhopadhyay and Duggan (2001) proposed a two-stage procedure and provided second-order approximations to the lower and upper bounds for the regret. Under the same setup we give second-order approximations to the regret and compare our approximations with theirs. It turns out that our bounds for the regret are sharper. We also propose a bias-corrected procedure which reduces the risk.


Introduction
Let X_1, X_2, X_3, ... be a sequence of independent and identically distributed random variables from an exponential distribution with probability density function f(x; λ) = λ^{-1} exp(−x/λ) I(x > 0), where the mean λ ∈ (0, ∞) of the distribution is assumed to be unknown. Here and elsewhere, I(·) stands for the indicator function of (·). As an application, consider the lifetime of a system component, which can usefully be represented by an exponential random variable. Exponential distributions have been widely used in many reliability and life testing experiments, and have accordingly been investigated by many authors (see Balakrishnan & Basu (1995), for instance). Under the assumption that the mean λ exceeds some number λ_L, where λ_L (> 0) is known to the experimenter, Mukhopadhyay and Duggan (2001) considered the problem of minimum risk point estimation for λ via a two-stage procedure and derived second-order lower and upper bounds on the regret function. In this paper we consider the same problem under the same setup as Mukhopadhyay and Duggan (2001). For a review of sequential estimation problems one may refer to Mukhopadhyay (1988), Ghosh, Mukhopadhyay and Sen (1997) and Mukhopadhyay and de Silva (2009).
On the basis of a random sample X_1, ..., X_n of size n, we want to estimate the mean λ by the sample mean X̄_n = n^{-1} Σ_{i=1}^{n} X_i under squared error loss plus linear cost,

L_n(X̄_n; λ) = (X̄_n − λ)^2 + cn,  (1)

where c (> 0) is the known cost per unit sample. The risk is then given by

R_n(c) = E{L_n(X̄_n; λ)} = λ^2 n^{-1} + cn,

which is minimized when

n = n_0 = c^{-1/2} λ.  (2)

The associated minimum risk is R_{n_0}(c) = 2cn_0. The goal is to achieve this minimum risk as closely as possible.
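The value of n_0 in (2) follows from elementary calculus; for completeness, a one-line derivation:

```latex
R_n(c) = \frac{\lambda^2}{n} + cn, \qquad
\frac{d}{dn}\,R_n(c) = -\frac{\lambda^2}{n^2} + c = 0
\;\Longrightarrow\; n_0 = c^{-1/2}\lambda .
```

Substituting back, λ²/n_0 = c^{1/2}λ = c n_0, so R_{n_0}(c) = c n_0 + c n_0 = 2cn_0, in agreement with the stated minimum risk.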
Unfortunately λ is unknown, so we cannot use the optimal fixed sample size n_0, which makes it necessary to find a sequential sampling rule. Mukhopadhyay and Duggan (2001) dealt with this minimum risk point estimation problem under the assumption that λ > λ_L and explored a two-stage estimation methodology under the loss function (1). They then developed second-order bounds for the associated regret. In this paper we use the two-stage procedure below, proposed by Mukhopadhyay and Duggan (2001) under the assumption that λ > λ_L. The initial sample size is defined by (3), where m_0 (≥ 1) is a preassigned integer and [x]* denotes the largest integer less than x. Based on the pilot sample X_1, ..., X_m, we calculate the sample mean X̄_m and define the final sample size N by (4). If N > m, then one takes the second sample X_{m+1}, ..., X_N. Using all the observations X_1, ..., X_N, we estimate λ by X̄_N. The risk is given by E{L_N(X̄_N; λ)} = E{(X̄_N − λ)^2 + cN} and the regret is defined by ω(c) = E{L_N(X̄_N; λ)} − R_{n_0}(c). The purpose of this paper is to provide second-order approximations to the regret ω(c) as c tends to zero and to compare them with the results of Theorem 3.2 of Mukhopadhyay and Duggan (2001). Our bounds for the regret are proved to be sharper than theirs. We also show that the purely sequential procedure of Woodroofe (1977) is more efficient than the two-stage procedure in terms of regret under a certain condition. In order to reduce the risk we propose a bias-corrected procedure. In Section 2 we present the main results with second-order approximations to the regret and compare them with those of Mukhopadhyay and Duggan (2001). Section 3 gives brief simulation results. In Section 4 we describe our conclusions. In the Appendix we supply the proofs of the theorems in Section 2.
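The displayed rules (3) and (4) do not survive in this version of the text, but the two-stage sampling scheme can be sketched in code. The Python fragment below assumes the standard Mukhopadhyay–Duggan form, m = max{m_0, [c^{-1/2} λ_L]* + 1} and N = max{m, [c^{-1/2} X̄_m]* + 1}; this exact form of the rule, and the function names, are our assumptions rather than the paper's displays. Only the definition of [x]* (largest integer less than x) is taken directly from the text.

```python
import math
import random

def floor_strict(x):
    """Largest integer strictly less than x (the paper's [x]*)."""
    n = math.floor(x)
    return n - 1 if n == x else n

def two_stage_sample_size(c, lam_L, m0, rng, lam=1.0):
    """One run of a two-stage rule of Mukhopadhyay-Duggan type.

    c: known cost per observation; lam_L: known lower bound for the mean;
    m0: preassigned minimum pilot size; lam: true mean (used only to simulate).
    The rule below is an assumed reconstruction of displays (3)-(4).
    """
    # Pilot size from the known lower bound lam_L (assumed form of (3)).
    m = max(m0, floor_strict(c ** -0.5 * lam_L) + 1)
    pilot = [rng.expovariate(1.0 / lam) for _ in range(m)]
    xbar_m = sum(pilot) / m                              # pilot sample mean
    # Final sample size from the pilot mean (assumed form of (4)).
    N = max(m, floor_strict(c ** -0.5 * xbar_m) + 1)
    return m, N
```

For example, with c = 0.0004 (so c^{-1/2} = 50), λ_L = 0.4 and m_0 = 3, the pilot size is m = max{3, [20]* + 1} = 20, and N ≥ m always holds by construction.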

Second-order Approximations
In this section we provide the main results with second-order approximations to the regret for the two-stage and bias-corrected procedures. The following theorem gives a second-order approximation to the regret.
Comparison. (i) We compare our bounds for the regret with those of Mukhopadhyay and Duggan (2001), who provided certain lower and upper bounds for the regret. From Theorem 1 we obtain our bounds for the regret, and since 0 < λ_L/λ < 1, our bounds are sharper than those of Mukhopadhyay and Duggan (2001) for sufficiently small c.
Thus, if λ_L is sufficiently small compared with λ, the purely sequential procedure should be used. We now consider the bias of X̄_N.
Taking the bias of X̄_N into account, we propose the following bias-corrected procedure, with the associated risk given below. The following theorem shows that the bias-corrected procedure saves the cost of one observation compared with the two-stage procedure (3) and (4). Thus the bias correction is slightly more effective in reducing the risk for sufficiently small cost.

Simulation Results
In this section we present brief simulation results. We consider the case λ = 1 and let m_0 = 3, 10 in (3). The two-stage procedure N defined by (3) and (4) and the purely sequential procedure N* in (7) were carried out with 1,000,000 independent replications for λ_L = 0.2, 0.4, 0.6 and n_0 = 30, 50 and 100, that is, c = 0.0011, 0.0004 and 0.0001, respectively. The six tables report the simulated values of N, X̄_N, δ_N and ω_1(c), respectively, where s(·) denotes the standard error of the estimator of (·). For 0 < λ_L < (√5 − 1)/2 ≈ 0.618 the inequality (λ/λ_L) − (λ_L/λ) > 1 holds. The tables suggest that (i) the regret becomes smaller as λ_L grows larger, (ii) our bias-corrected procedure saves the cost of one observation in risk, (iii) m_0 in (3) has almost no effect on the regret, and (iv) our theorems and the comparison results in Section 2 are confirmed.
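A small-scale version of this simulation can be run as follows. The sketch below estimates the regret ω(c) by Monte Carlo, using the same assumed reconstruction of the two-stage rule (3)–(4) as noted earlier (m = max{m_0, [c^{-1/2}λ_L]* + 1}, N = max{m, [c^{-1/2}X̄_m]* + 1}); the function name and replication count are ours, and far fewer replications are used here than the paper's 1,000,000.

```python
import math
import random

def floor_strict(x):
    """Largest integer strictly less than x (the paper's [x]*)."""
    n = math.floor(x)
    return n - 1 if n == x else n

def simulate_regret(c, lam, lam_L, m0, reps, seed=0):
    """Monte Carlo estimate of omega(c) = E{(Xbar_N - lam)^2 + cN} - 2*c*n0.

    The two-stage rule used here is an assumed reconstruction of the
    paper's displays (3)-(4), not a verbatim transcription.
    """
    rng = random.Random(seed)
    n0 = c ** -0.5 * lam                      # optimal fixed sample size (2)
    min_risk = 2.0 * c * n0                   # minimum risk R_{n0}(c) = 2 c n0
    m = max(m0, floor_strict(c ** -0.5 * lam_L) + 1)
    total_loss = 0.0
    for _ in range(reps):
        pilot = [rng.expovariate(1.0 / lam) for _ in range(m)]
        xbar_m = sum(pilot) / m
        N = max(m, floor_strict(c ** -0.5 * xbar_m) + 1)
        second = [rng.expovariate(1.0 / lam) for _ in range(N - m)]
        xbar_N = (sum(pilot) + sum(second)) / N
        total_loss += (xbar_N - lam) ** 2 + c * N   # loss for this replication
    return total_loss / reps - min_risk
```

With λ = 1, λ_L = 0.4, m_0 = 3 and c = 0.0011 (n_0 ≈ 30), the estimated regret is a small quantity of the order of c, consistent with the tables' qualitative findings.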

Conclusions
For the problem of minimum risk point estimation of the mean of an exponential distribution, under the assumption that the mean exceeds some positive known number, we used the two-stage procedure proposed by Mukhopadhyay and Duggan (2001) and provided a second-order approximation to the regret as the cost tends to zero. We found that this approximation gives sharper lower and upper bounds for the regret than those of Mukhopadhyay and Duggan (2001). We also proposed a bias-corrected procedure and showed that it is slightly more effective than the two-stage procedure in reducing the risk for sufficiently small cost. Furthermore, it turned out that the regret decreases as the lower bound λ_L for the true value λ increases.

Appendix
In this appendix we give the proofs of the results in Section 2. With T and U_c defined as above, (4) becomes N = max{m, T + U_c}. The two-stage procedure defined by (3) and (4) belongs to the general class of procedures of Mukhopadhyay and Duggan (1999). In the notation of Uno and Isogai (2012), Theorem 1 of that paper yields the expansion we need. Throughout this appendix, K denotes a generic positive constant not depending on c.
Lemma 1 The following statements hold.
By using Lemma 2.2 (i) of Mukhopadhyay and Duggan (1999) and Lemma 4 (ii) of Uno and Isogai (2012), we obtain the bound above. Lemma 3 is needed to establish the second-order approximation to the regret. Let Y_i = 2X_i/λ. Then Y_1, Y_2, ... are independent and identically distributed random variables following the chi-square distribution χ²_2 with two degrees of freedom. It follows from the Marcinkiewicz–Zygmund inequality (Gut (2005)) that (10) holds, where K is a positive constant depending only on p. The Cauchy–Schwarz inequality, (10) and Lemma 1 (iii) imply (11).

We first show the first assertion. By using Lemma 1 (i), 0 ≤ U_c ≤ 1, (10), (11) and the Cauchy–Schwarz inequality, we obtain the required bound, which proves the first assertion.

Next we prove the second part. Lemma 3 of Uno and Isogai (2012) shows that E(U_c) = 2^{-1} + O(c^{1/2}) as c → 0. Taking this result into account, and using Lemma 1 (i), (10), (11) and 0 ≤ U_c ≤ 1, we obtain the corresponding estimates as c → 0. Combining the above results with (12) yields the second part.

Finally we show the third statement. In the same way as for the second part, we obtain the analogous expansion; the remaining terms are easily estimated as c → 0. Hence, from the above relations and (13), we have the third statement. This completes the proof.