Application of VaR Methodology to Risk Management in the Stock Market in Iran

In this paper, the performance of the RiskMetrics model in predicting 1-day and 10-day value at risk is assessed at three confidence levels: 95%, 97.5% and 99%. The main data are the TEDPIX index, whose fluctuations indicate the market risk of the Tehran Stock Exchange. The time series of this index covers 21 March 2001 to 20 March 2010, a total of 2,172 observations. To validate the models, the Kupiec and Christoffersen tests have been applied. The finding of this paper is that RiskMetrics models are good alternatives for modeling volatility and estimating VaR. The results also indicate that under the Kupiec test the number of accepted models is the same for both horizons, whereas under the Christoffersen test the number of accepted models decreases as the time horizon lengthens.


Introduction
The term "risk management" originated in the 1950s. It had long been used to describe techniques for addressing property and casualty contingencies. However, it was not until the 1990s, after a series of financial disasters (Orange County (December 1994), Barings (February 1995), Metallgesellschaft (December 1993)), that financial institutions came to realize the importance of financial risk management as a discipline. The main cause of these disasters was the lack of proper risk management: in virtually all cases, management did not know what risks the institution was taking. The new "risk management" that evolved in the 1990s carried a new meaning: the entire process of identifying, evaluating, controlling, and reviewing risks to make sure that the organization is exposed only to those risks that it needs to take to achieve its primary objectives. Risk cannot be eliminated. However, through good risk management it can be (1) transferred to another party who is willing to take the risk; (2) reduced, by having good internal controls; (3) avoided, by not entering into risky businesses; (4) retained, to either avoid the cost of trying to reduce risk or anticipate higher profits by taking on more risk; and (5) shared, by following a middle path between retaining and transferring risk (Gallati, 2003). The benefits of risk management are: (1) it helps to increase the value of the firm in the presence of bankruptcy costs, because it makes bankruptcy less likely; (2) since informational asymmetries make external finance more costly than internal finance, good investment opportunities can be lost, and risk management alleviates these problems by reducing the variability of the corporate cash flow; (3) it helps investors achieve a better allocation of risks, because financial institutions typically have better access to capital markets (Dowd, 2005). Value at Risk (VaR) has become the most widely accepted tool to measure market
risk, and it is now a standard in the industry. Intuitively, VaR is the maximum loss that the value of an asset (or a portfolio of assets) can suffer with a given probability during a specified time horizon. In statistical terms, VaR can be thought of as a quantile of the returns distribution (Chamú Morales, 2005). The methods for calculating VaR are generally divided into three categories: the variance-covariance method, the Monte Carlo simulation method, and the historical simulation method. In this research, the performance of the RiskMetrics model in predicting the market risk of the Tehran Stock Exchange is investigated.
The rest of the paper is organized as follows. Section 2 presents a detailed literature survey that discusses VaR estimation models. Section 3 provides a description of various VaR methods, while Section 4 describes the evaluation framework. Section 5 presents preliminary statistics for the dataset, explains the estimation procedure and presents the results of the empirical investigation. Section 6 concludes the study.

Literature Review
During the 1990s, Value-at-Risk (VaR) was widely adopted for measuring market risk in trading portfolios. Its origins can be traced back as far as 1922, to capital requirements the New York Stock Exchange imposed on member firms. Leavens (1945) offered a quantitative example, which may be the first VaR measure ever published. Markowitz (1952) and, three months later, Roy (1952) independently published VaR measures that were surprisingly similar. Tobin (1958) also calculated VaR measures. William Sharpe described this VaR measure in his Ph.D. thesis and in a 1963 paper. The measure is different from, but helped motivate, Sharpe's (1964) Capital Asset Pricing Model (CAPM).
By the 1980s, a need for institutions to develop more sophisticated VaR measures had arisen. Markets were becoming more volatile, and sources of market risk were proliferating. By that time, the resources necessary to calculate VaR were also becoming available: processing power was inexpensive, and data vendors were starting to make large quantities of historical price data available. Financial institutions implemented sophisticated proprietary VaR measures during the 1980s, but these remained practical tools known primarily to professionals within those institutions. During the early 1990s, concerns about the proliferation of derivative instruments and publicized losses spurred the field of financial risk management. JP Morgan publicized VaR to professionals at financial institutions and corporations with its RiskMetrics service. Ultimately, the value of proprietary VaR measures was recognized by the Basle Committee, which authorized their use by banks for performing regulatory capital calculations. An ensuing "VaR debate" raised issues related to the subjectivity of risk, which Markowitz had first identified in 1952. Time will tell whether widespread use of VaR contributes to the risks VaR is intended to measure (Holton, 2002). Tse (1991) and Tse and Tung (1992) investigated Japanese and Singaporean data and found that an exponentially weighted moving average (EWMA) model produced better volatility forecasts than ARCH models. Pafka and Kondor (2001) analyzed the performance of RiskMetrics, a widely used methodology for measuring market risk. Based on the assumption of normally distributed returns, the RiskMetrics model completely ignores the presence of fat tails in the distribution function, which is an important feature of financial data. Nevertheless, RiskMetrics was commonly found to perform satisfactorily, and the technique has therefore become widely used in the financial industry. They found, however, that the success of RiskMetrics is an artifact of the
choice of the risk measure. First, the outstanding performance of the volatility estimates is basically due to the choice of a very short (one-period-ahead) forecasting horizon. Second, the satisfactory performance in obtaining Value-at-Risk by simply multiplying volatility by a constant factor is mainly due to the choice of the particular significance level. Fan et al. (2004) calculated the value at risk at the 95% confidence level for the Shenzhen and Shanghai stock indexes using the exponentially weighted moving average and the simple moving average. Via RMSE, the optimal decay factor for the Shenzhen stock index was found to be 0.86 and for the Shanghai stock index 0.88, both less than the range specified by RiskMetrics (0.9 < λ < 1). They conclude that the fluctuation of the Chinese stock market is large, and that the fluctuation of the Shenzhen market is larger than that of the Shanghai market, which corresponds to reality. The value of λ calculated with the EWMA method could better reflect the fluctuation of the Chinese stock market and the memory length of the markets. So and Yu (2006) estimated value at risk at different confidence levels through RiskMetrics, IGARCH(1,1), GARCH(1,1) and FIGARCH(1,d,0) on 12 stock indexes and 4 exchange rates. They found that VaR estimates depend less on the choice of volatility model in the exchange market than in the stock market. Galdi and Pereira (2007) examined and compared the efficiency of the EWMA, GARCH and stochastic volatility (SV) models for Value at Risk (VaR). The empirical results demonstrated that VaR calculated by the EWMA model was violated less often than VaR from the GARCH and SV models for a sample with 1,500 observations. Patev et al.
(2009) studied volatility forecasting on thin emerging stock markets, focusing primarily on the Bulgarian stock market. Three different models, RiskMetrics, EWMA with t-distribution and EWMA with GED distribution, were employed. The results suggested that both EWMA with t-distribution and EWMA with GED distribution perform well in modeling and forecasting the volatility of stock returns on the Bulgarian market. They also concluded that the EWMA model can be effectively used for volatility forecasting on emerging markets. Most studies in the VaR literature focus on the computation of VaR for financial assets such as stocks or bonds, and they usually deal with the modeling of VaR for negative returns. Recent examples are the books by Dowd (1998) and Jorion (2000) and the papers by van den Goorbergh and Vlaar (1999), Danielsson and de Vries (2000), Vlaar (2000) and Giot and Laurent (2003).

VaR Models
Value at risk (VaR) is defined as the maximum loss on a portfolio that can be expected with a certain level of confidence over a certain interval of time. Specifically, the VaR over the following holding period is defined as the solution to

Pr(Δz_{t+1} < −VaR_t) = α,

where Δz_{t+1} is the change in portfolio value over the holding period and α is one minus the VaR confidence level. Clearly, the implementation of VaR depends on the distributional assumptions made about returns. If Δz_{t+1} is drawn from any distribution whose first two moments are finite, the VaR of the portfolio can be written as

VaR_t = −σ_{t+1} F^{−1}(α),

where σ_{t+1} is the standard deviation of the distribution of Δz_{t+1} and F^{−1}(α) is the α-quantile of the standardized (i.e. unit variance) distribution (Guermat & Harris, 2002). VaR calculation methods are usually divided into parametric and non-parametric models. Parametric models are based on statistical parameters of the risk factor distribution, whereas non-parametric models are simulation or historical models.
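As an illustration (not code from the paper), the parametric VaR formula above can be sketched in Python under a normal-returns assumption; the function name and inputs are hypothetical:

```python
from statistics import NormalDist

def parametric_var(sigma, alpha=0.05, value=1.0):
    """One-period parametric VaR under a normal-returns assumption:
    VaR = -F^{-1}(alpha) * sigma * portfolio value, reported as a positive loss."""
    z_alpha = NormalDist().inv_cdf(alpha)  # e.g. about -1.645 for alpha = 0.05
    return -z_alpha * sigma * value

# 95% one-day VaR for a portfolio worth 1.0 with daily volatility 2%
print(round(parametric_var(0.02, alpha=0.05), 4))
```

For a 95% confidence level, α = 0.05, so the standardized quantile is about −1.645 and the VaR is roughly 3.3% of the portfolio value at a 2% daily volatility.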


The three main calculation approaches are:
(1) the variance-covariance method;
(2) the historical simulation method;
(3) the Monte Carlo simulation method (Ammann and Reich, 2001).
The most widely used of these is the variance-covariance approach, popularized by the introduction of RiskMetrics by J.P. Morgan (1996), a comprehensive database of estimated asset return variances and covariances used for the calculation of VaR (Guermat & Harris, 2002). In this research, the RiskMetrics model has been utilized for the prediction of market risk.

RiskMetrics model
The RiskMetrics model uses an exponentially weighted moving average (EWMA) and a simple moving average to predict volatility. One issue with the simple moving average is the selection of the sampling window size: the smaller the window, the less stable the estimates; the larger the window, the more stable the estimates. On the other hand, a large window means using older, and probably less relevant, observations to estimate volatility, so a balance must be struck between the stability of the estimates and the relevance of the data. In this research, window sizes of 50 and 100 have been applied for the simple moving average model. This method is denoted MA(m); in the empirical part, we use MA(50) and MA(100). The estimator is

σ_t² = (1/m) Σ_{i=1}^{m} r_{t−i}²

One way to capture the dynamic features of volatility is to use an exponentially weighted moving average of historical observations, where the latest observations carry the highest weight in the volatility estimate:

σ_{t+1}² = λ σ_t² + (1 − λ) r_t²   (4)

In formula (4), the parameter λ is referred to as the decay factor. This parameter determines the relative weights that are applied to the observations (returns) and the effective amount of historical observations used in estimating volatility.
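The two volatility estimators can be sketched in Python as follows (an illustrative sketch, not the paper's implementation; function names and the seeding of the EWMA recursion are ours):

```python
import math

def ma_vol(returns, m):
    """Simple moving-average volatility: sigma_t^2 = (1/m) * sum of the last m squared returns."""
    window = returns[-m:]
    return math.sqrt(sum(r * r for r in window) / m)

def ewma_vol(returns, lam=0.85, seed=None):
    """EWMA volatility: sigma_{t+1}^2 = lam * sigma_t^2 + (1 - lam) * r_t^2.
    The recursion is seeded with the first squared return unless a seed variance is given."""
    var = seed if seed is not None else returns[0] ** 2
    for r in returns[1:]:
        var = lam * var + (1 - lam) * r * r
    return math.sqrt(var)
```

With λ = 0.85, as estimated later in the paper, recent returns dominate the EWMA estimate, whereas MA(50) and MA(100) weight all observations in the window equally.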
In estimating volatility, three important issues arise, which we address below.

Calculation of the tolerance level & the effective data length
Volatility forecasts based on the EWMA model require that we choose an appropriate value of the decay factor λ. As a practical matter, it is important to determine the effective number of historical observations used in the volatility forecasts. To do so, we use the metric

Γ_K = Σ_{t=K}^{∞} (1 − λ) λ^t = λ^K

Setting Γ_K equal to a value, the tolerance level γ_L, we can solve for K, the effective number of days of data used by the EWMA:

K = ln(γ_L) / ln(λ)   (6)

Equation (6) follows from taking logarithms of both sides of λ^K = γ_L.
So, we know the relationship between the tolerance level, the decay factor, and the effective amount of data required by the EWMA (JP Morgan, 1996).
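The relationship between tolerance level, decay factor and effective data length can be sketched in Python (illustrative; the function name is ours, and the formula is K = ln(γ_L)/ln(λ) from the RiskMetrics technical document):

```python
import math

def effective_days(lam, tol=0.01):
    """K = ln(tolerance) / ln(lambda): the number of days after which the
    cumulative remaining EWMA weight falls below the tolerance level."""
    return math.log(tol) / math.log(lam)

print(round(effective_days(0.85), 1))  # about 28.3 days at a 1% tolerance
```

For the decay factor λ = 0.85 used in this research, roughly the last 28 observations carry 99% of the weight; for the RiskMetrics daily value λ = 0.94, the figure is about 74 days.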

Determining the decay factor
The definition of the time t + 1 forecast of the variance of the return r_{t+1}, made one period earlier, is

σ²_{t+1|t} = E_t[r²_{t+1}],

that is, the expected value of the squared return one period ahead. The same result holds for any forecast made at time t + j, j ≥ 1 (JP Morgan, 1996).
If the variance forecast error is defined as

ε_{t+1|t} = r²_{t+1} − σ²_{t+1|t},

it follows that the expected value of the forecast error is zero, i.e. E_t[ε_{t+1|t}] = 0.
Based on this relation, a natural requirement for choosing λ is to minimize average squared errors. When applied to daily forecasts of variance, this leads to the (daily) root mean squared prediction error

RMSE = sqrt( (1/T) Σ_{t=1}^{T} ( r²_{t+1} − σ²_{t+1|t}(λ) )² ),

where the forecast value of the variance is written explicitly as a function of λ. In practice, the optimal value of λ is obtained by searching for the smallest RMSE over different values of λ, i.e. by seeking the decay factor that minimizes the forecast error measure (JP Morgan, 1996). Based on the RMSE criterion, JP Morgan has published optimal decay factors for daily VaR prediction for a number of financial instruments in several countries in the RiskMetrics technical document. The decay factors for different instruments in different countries vary significantly. This implies that in a given country, with its particular economic and cultural background, the market memory length differs from instrument to instrument (Fan et al., 2004).
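The RMSE search for the optimal decay factor can be sketched as follows (an illustrative Python sketch, not the paper's code; the grid of candidate values and the seeding of the variance recursion with the first squared return are our assumptions):

```python
import math

def rmse_for_lambda(returns, lam):
    """Daily RMSE of the one-step variance forecast sigma^2_{t+1|t}(lambda)
    against the realised squared return r^2_{t+1}."""
    var = returns[0] ** 2  # seed the EWMA recursion with the first squared return
    errs = []
    for r in returns[1:]:
        errs.append((r * r - var) ** 2)      # forecast error for this day
        var = lam * var + (1 - lam) * r * r  # update the EWMA variance
    return math.sqrt(sum(errs) / len(errs))

def optimal_lambda(returns, grid=None):
    """Grid-search the decay factor that minimises the RMSE criterion."""
    grid = grid or [0.80 + 0.01 * i for i in range(20)]  # 0.80 .. 0.99
    return min(grid, key=lambda lam: rmse_for_lambda(returns, lam))
```

Applied to the TEDPIX return series, a search of this kind is what yields the decay factor of 0.85 reported below.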
In the exponentially weighted moving average model, the effective size of the sampling window is determined by the decay factor λ: the larger the decay factor, the larger the effective sample size. The effective sample size is calculated from the relation K = ln(γ)/ln(λ), where γ is the tolerance level, i.e. the total weight that is treated as negligible. The exponentially weighted moving average model assigns a weight to every observation, and the weights sum to 1. If, for instance, a rather small portion of the weight (e.g. 0.01) is discarded, the major portion (e.g. 0.99) is allotted to a specific number of the most recent observations. Selecting the decay factor therefore amounts to selecting the effective sample size. In this research, at the 1% tolerance level, the optimal decay factor has been calculated through RMSE and found to be 0.85. This is lower than the range suggested by the RiskMetrics system (0.9 < λ < 1). For further discussion refer to Fan et al. (2004), Morgan, J.P. (1996) and Alexander (1998).

Evaluating VaR Models
After a model is estimated, and before it is applied in practice, its reliability must be investigated carefully.
Likewise, while the model is in use, its performance must be evaluated regularly. One of the model validation tools is backtesting. Here, the Kupiec and Christoffersen tests have been applied to validate the models.
Unconditional Coverage
The Kupiec test examines whether the observed frequency of exceptions is consistent with the chosen confidence level. Let N be the number of days over a period of T days on which the portfolio loss was larger than the VaR estimate. The number of failures follows a binomial distribution, N ~ B(T, p), and under the null hypothesis that the expected exception frequency N/T equals p, the appropriate likelihood ratio statistic is

LR_uc = −2 ln[ (1 − p)^{T−N} p^N ] + 2 ln[ (1 − N/T)^{T−N} (N/T)^N ]   (11)

Asymptotically, this test is χ²-distributed with one degree of freedom. Its power is generally poor, so we turn to a more elaborate criterion (Angelidis et al., 2004).
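The Kupiec likelihood-ratio statistic can be sketched in Python (illustrative, not the paper's code; the sketch requires 0 < N < T so that both log terms are defined):

```python
import math

def kupiec_lr(T, N, p):
    """Kupiec's unconditional-coverage statistic.
    T: number of trading days, N: observed VaR exceptions, p: expected
    exception rate (one minus the confidence level). Chi-squared(1) under H0.
    Requires 0 < N < T."""
    pi = N / T  # observed exception frequency
    log_h0 = (T - N) * math.log(1 - p) + N * math.log(p)
    log_h1 = (T - N) * math.log(1 - pi) + N * math.log(pi)
    return -2.0 * (log_h0 - log_h1)

# e.g. 30 exceptions in 1000 days against an expected 2.5% rate
lr = kupiec_lr(1000, 30, 0.025)
print(lr < 3.84)  # compare against the chi-squared(1) 5% critical value
```

The statistic is zero when the observed frequency exactly matches the expected rate and grows as the two diverge in either direction.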

Conditional Coverage
Probably the most widely known test of conditional coverage was proposed by Christoffersen (1998). He uses the same log-likelihood testing framework as Kupiec, but extends the test to include a separate statistic for the independence of exceptions. In addition to the correct rate of coverage, his test examines whether the probability of an exception on any day depends on the outcome of the previous day. The testing procedure described below is explained, for example, in Jorion (2000), Campbell (2005), Dowd (2006), and in greater detail in Christoffersen (1998). The test is carried out by first defining an indicator variable that takes the value 1 if VaR is exceeded and 0 if it is not (Nieppola, 2009):

I_t = 1 if the loss on day t exceeds VaR_t; I_t = 0 otherwise.   (13)

Turning to the independence prediction, let n_ij be the number of days on which state j occurred after state i occurred the previous day, where a state either does or does not involve an exceedance, and let π_ij be the probability of state j occurring after state i occurred the previous day. Under the hypothesis of independence, the test statistic

LR_ind = −2 ln[ (1 − π)^{n00 + n10} π^{n01 + n11} ] + 2 ln[ (1 − π01)^{n00} π01^{n01} (1 − π11)^{n10} π11^{n11} ]

is also distributed as χ²(1), noting that we can recover estimates of the probabilities from

π01 = n01 / (n00 + n01),  π11 = n11 / (n10 + n11),  π = (n01 + n11) / (n00 + n01 + n10 + n11).   (15)

It follows that under the combined hypothesis of correct coverage and independence (the hypothesis of correct conditional coverage), the test statistic LR_cc = LR_uc + LR_ind is distributed as χ²(2). The Christoffersen approach enables us to test both the coverage and independence hypotheses at the same time. Moreover, if the model fails a test of both hypotheses combined, the approach enables us to test each hypothesis separately, and so establish where the model failure arises (e.g., does the model fail because of incorrect coverage, or because of lack of independence?) (Dowd, 2006).
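Christoffersen's independence statistic can be sketched as follows (illustrative Python, not the paper's code; the sketch assumes every transition count n_ij is positive so the logarithms are defined):

```python
import math

def christoffersen_ind(hits):
    """Christoffersen's independence statistic from a 0/1 exception series.
    n[i][j] counts days on which state j followed state i. Chi-squared(1) under H0.
    Assumes all four transition counts are positive."""
    n = [[0, 0], [0, 0]]
    for prev, cur in zip(hits, hits[1:]):
        n[prev][cur] += 1
    n00, n01, n10, n11 = n[0][0], n[0][1], n[1][0], n[1][1]
    pi01 = n01 / (n00 + n01)                       # P(exception | no exception yesterday)
    pi11 = n11 / (n10 + n11)                       # P(exception | exception yesterday)
    pi = (n01 + n11) / (n00 + n01 + n10 + n11)     # unconditional exception probability
    log_h0 = (n00 + n10) * math.log(1 - pi) + (n01 + n11) * math.log(pi)
    log_h1 = (n00 * math.log(1 - pi01) + n01 * math.log(pi01)
              + n10 * math.log(1 - pi11) + n11 * math.log(pi11))
    return -2.0 * (log_h0 - log_h1)
```

The combined statistic LR_cc is then obtained by adding this value to the Kupiec LR_uc statistic and comparing against the χ²(2) critical value (5.99 at the 5% level).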

Results
In the present research, the performance of the RiskMetrics model is investigated in predicting market risk in the Tehran Stock Exchange. The main data of this research are the time series of the TEDPIX index, Tehran Stock Exchange's dividend and price ("total return") index. The time series of this index has been applied from 21 March 2001 to 20 March 2010, a total of 2,172 observations. The RiskMetrics model uses an exponentially weighted moving average (EWMA) and a simple moving average to predict volatility. In this research, window sizes of 50 and 100 have been applied for the simple moving average model, denoted MA(50) and MA(100). In the EWMA model, at the 1% tolerance level, the optimal decay factor has been calculated through RMSE and found to be 0.85, which is lower than the range suggested by the RiskMetrics system (0.9 < λ < 1). The RiskMetrics model was then used to estimate one-day and 10-day value at risk at the 95%, 97.5% and 99% confidence levels.
To review the models' validity, the Kupiec and Christoffersen tests have been applied. The results for each of these models are displayed in the following tables.
We observe in Table 1 that the RiskMetrics models pass the Kupiec test at the 97.5% confidence level. Apart from MA(50) at the 90% confidence level, the independence test finds no statistically significant dependence between exceptions. Based on the Christoffersen test, the MA(100) and EWMA models pass at the 95% and 97.5% confidence levels, while the MA(50) model is certified only at the 97.5% confidence level.
The Kupiec test results in Table 2 indicate that, comparing the performance of the RiskMetrics models over the 1-day and 10-day horizons, the time horizon does not affect which models are accepted: as with the 1-day predictions, all three models are accepted at the 97.5% confidence level. Apart from the EWMA at the 97.5% confidence level, the independence test finds no statistically significant dependence between exceptions. The Christoffersen test certifies the MA(100) model at the 95% and 97.5% confidence levels, the MA(50) model at the 97.5% confidence level, and the EWMA model at the 95% confidence level; in contrast with the 1-day predictions, the EWMA model is not certified at the 97.5% confidence level.

Conclusions
In this research, the performance of the RiskMetrics model in predicting 1-day and 10-day value at risk was assessed at three confidence levels: 95%, 97.5% and 99%.
The needed data have been collected from the database of the Tehran Securities & Exchange Organization. The observations from 21 March 2001 to 20 March 2006 were used for model estimation, and the observations from 21 March 2006 to 20 March 2010 were used for model validation. Daily log returns have been calculated as

r_t = ln(P_t / P_{t−1})
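The return calculation, together with the square-root-of-time scaling commonly used in the RiskMetrics framework to pass from 1-day to 10-day VaR (the scaling rule is our assumption here, exact only for i.i.d. returns), can be sketched as:

```python
import math

def log_returns(prices):
    """Daily log returns: r_t = ln(P_t / P_{t-1})."""
    return [math.log(b / a) for a, b in zip(prices, prices[1:])]

def scale_var(var_1day, h=10):
    """Square-root-of-time scaling: VaR_h = sqrt(h) * VaR_1day.
    Exact only under i.i.d. returns; an approximation otherwise."""
    return math.sqrt(h) * var_1day
```

Under this rule, the 10-day VaR figures tested in Table 2 are about 3.16 times the corresponding 1-day figures.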
The models were validated through the Kupiec and Christoffersen tests. The finding of this paper is that RiskMetrics models are good alternatives for modeling volatility and estimating VaR. The results also indicate that under the Kupiec test the number of accepted models is the same for both horizons, whereas under the Christoffersen test the number of accepted models decreases as the time horizon lengthens.
Notes: 1. The critical value of the LRuc and LRind statistics at the 5% significance level is 3.84. 2. The critical value of the LRcc statistic at the 5% significance level is 5.99. 3. * indicates that the model passes the coverage test.