Stock Price Analysis Under Extreme Value Theory

The objective of this paper is to provide a practical tool for stock price evaluation and forecasting under Extreme Value Theory (EVT). Three existing models are reviewed: Modern Portfolio Theory, the Black-Scholes model, and the Jarrow-Rudd model. It was found that these models may not be effective tools where option contracts are not part of the investment regime. The data used in this research consist of the daily close prices over a period of 30 days for 100 companies in the SET100 index. From the sample distribution F(X), extreme values were identified. A tail index ξ was calculated to verify the distribution of each security. Using EVT, the threshold value was estimated and used as a tool for risk assessment for each stock. It was found that Thailand’s SET100 consists of two groups of stocks according to price distribution: the majority of the stocks are Weibull distributed and the remaining stocks are Fréchet distributed. Using the Fisher-Tippett-Gnedenko Generalized Extreme Value distribution to calculate price volatility, the Weibull group shows a mean value of H = 0.57, and the Fréchet group shows H = 0.05. The findings may be used as a tool for risk assessment in stock investment. This finding rejects the general assertion that most financial data are fat-tailed. It also implies that investors face two categories of stocks: low and high price volatility. The idea of sector diversity becomes secondary: empirical evidence shows that stocks from different sectors may have the same distribution, and stocks of the same sector may have different distributions. Therefore, a price volatility index is a better indicator for risk management.


INTRODUCTION
Investors in the stock market face the challenge of stock price analysis and forecasting. Investors may use software packages to analyze and forecast stock price movements. The top ten software packages available in the market include MetaStock, TradeStation, eSignal, TC2000, Quantshare, Market Analyst, Ninja Trader, ProfitSource, ChartSmart and VectorVest. These programs are confined to basic statistical tests which may not be applicable to every scenario faced by investors. Many methods for stock price forecasting have been developed (Olaniyi, 2011). These methods include random walk theory, regime switching theory, cointegration and chaos techniques (Granger, 1992). However, these methods are too technical to be accessible to investors (Larsen, 2007). For that reason, there is a gap between investors' needs and the practical tools available in the market. This paper proposes a new perspective on stock price analysis and forecasting through the use of Extreme Value Theory.
EVT may be used as a tool for investment risk management. Risk has been broadly defined as the effect of uncertainty on objectives (ISO 31000: 2009).
However, some writers attempt to differentiate risk from uncertainty. For instance, one writer asserts that risk is quantifiable while uncertainty is not measurable (Knight, 1921). Other writers claim that both risk and uncertainty are measurable; this line of literature asserts that uncertainty is measured by probabilities assigned to possible outcomes, and risk by the probability and magnitude of a loss (Hubbard, 2007). In finance, risk refers to an outcome of the return on investment which differs from the expected value (Holton, 2004). If the return is higher than expected, it is called an upside risk (Horcher, 2005). If the return is lower than expected, it is called a downside risk (McNeil et al., 2005). One method of minimizing the effect of downside risk is the use of a portfolio (Markowitz, 1952). Markowitz's modern portfolio theory suggests that a portfolio comprising debt and equity may minimize the effect of the return's deviation. However, the Markowitz approach minimizes both downside and upside risk. Where downside risk represents a loss and upside risk represents a gain, an effective risk management tool should minimize the loss and optimize the gain. The Markowitz approach does not meet these requisites. This paper proposes that EVT may fill this gap.
EVT is an effective tool for risk management because it can quantify risk as a percentage probability through the use of threshold p-quantile analysis. Generally, a threshold is defined by the predetermined upper and lower bounds of the confidence interval 1 − α with a critical value Z(1 − α). For example, if the confidence interval is 95%, the threshold is Z = ±1.65 for a two-tailed test. However, this approach assumes that the data is normally distributed. In real life, not all data is normally distributed; the assumption is faulty because it does not reflect reality. Empirical tests of SET100 also confirm that the assumption of normality is erroneous. EVT avoids making such an assumption by verifying each stock price series for its distribution through the use of the tail index. By using empirical data to justify the modeling, EVT stands out as a more scientific method of risk analysis.

LITERATURE REVIEW
Three models are reviewed as the foundational materials of the current literature for investing in the stock market: (i) Modern Portfolio Theory (MPT), (ii) the Black-Scholes Model (BSM), and (iii) the Jarrow-Rudd Model (JRM). This paper asserts that these three models are not adequate tools for risk management in stock investing, especially in emerging markets, and proposes EVT as a supplemental tool. All models presented in this paper rely on statistics as the building block for stock price and market analyses. The fundamental tenet remains that "statistics is an applied science and deals with finite samples" (Berkson, 1980, p. 458).

Modern Portfolio Theory
Markowitz introduced Modern Portfolio Theory (MPT) as a means to reduce risk in investment (Markowitz, 1952, 1959). MPT is a mathematical formulation for risk reduction by holding many assets in an investment portfolio. Under MPT, risk is defined as total portfolio variance. MPT assumes that investors are rational, the market is efficient, and the data is normally distributed (Elton & Gruber, 1997). The theory begins with the definition of expected return:

R_p = Σ_i w_i R_i

where R_p = return of the portfolio; R_i = return of asset i; and w_i = weight of the asset, i.e., the proportion of the asset in the portfolio. Since the return of each asset may fluctuate, the portfolio has a variance:

σ_p² = Σ_i Σ_j w_i w_j σ_i σ_j ρ_ij

Under this approach, the volatility of the portfolio return is simply the standard deviation of the portfolio, given by σ_p = √σ_p². MPT argues that the risk, or the effect of the volatility of the portfolio return, may be reduced by holding a combination of assets that are not perfectly correlated: −1 ≤ ρ_ij < 1. This rationale lays the foundation for the concept of risk reduction through asset diversification, in the hope of maximizing μ and minimizing σ² (Marling & Emanuelsson, 2012). MPT assumes that investors are rational. However, in practice, it has been shown that investors are not rational (Koponen, 2003). Empirical evidence shows that investors are generally overconfident and often cause asset prices to be inflated (Kent et al., 2001). Other assumptions of MPT have also been challenged. For instance, MPT's assumption of normally distributed returns has been criticized (Doganoglu et al., 2007), and it was shown that asset returns are non-elliptical (Chicheportiche & Bouchaud, 2012). One writer rejects MPT as unworkable (Taleb, 2007). Witt and Dobbins (1979) wrote that in real life, no one uses MPT. The problem with MPT stems from the fact that it makes assumptions that do not reflect price and investor behavior in the stock market.
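The portfolio return and variance formulas above can be sketched in code. The following is a minimal illustration, not part of the paper; the weights, expected returns and correlation are hypothetical values chosen only to show how a negative correlation lowers portfolio volatility below the weighted average of the individual volatilities:

```python
def portfolio_stats(weights, returns, cov):
    """Expected return R_p = sum_i w_i R_i and standard deviation
    sigma_p = sqrt(sum_i sum_j w_i w_j cov_ij) of a portfolio."""
    r_p = sum(w * r for w, r in zip(weights, returns))
    var_p = sum(
        weights[i] * weights[j] * cov[i][j]
        for i in range(len(weights))
        for j in range(len(weights))
    )
    return r_p, var_p ** 0.5

# Hypothetical two-asset example: sigma_1 = 0.20, sigma_2 = 0.30, rho = -0.5
rho = -0.5
cov = [[0.20 * 0.20, 0.20 * 0.30 * rho],
       [0.20 * 0.30 * rho, 0.30 * 0.30]]
r_p, s_p = portfolio_stats([0.5, 0.5], [0.08, 0.12], cov)
```

Because ρ < 1, the resulting σ_p is smaller than the 0.25 weighted average of the two asset volatilities, which is precisely the diversification effect MPT relies on.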
As a risk management tool, the concept of asset diversification seems to work for downside risk; however, when it comes to upside risk, MPT is not helpful. MPT allows investors to reduce the effect of the fluctuation of prices or return rates. As such, it does not serve as a forecasting tool. Investors in the stock market need a tool that could provide price analysis, as well as forecasting. MPT provides a management tool for the end; investors need a tool to manage the means. To that end, MPT left a gap in the literature.

Black-Scholes Model
Twenty years after MPT, a new model called the Black-Scholes equation was introduced (Black & Scholes, 1973). The Black-Scholes model allows the investor to reduce risk through hedging. Hedging is the taking of a position in one market in order to reduce risk exposure in another market. Hedging is used when the firm faces financial constraints. Effective hedging minimizes the variability of the firm's cash balance (Mello & Parsons, 2000).
The Black-Scholes model assumes that the portfolio consists of two types of assets: risky asset called stock and riskless asset called bond (Sircar & Papanicolaouy, 1998). The stock price fluctuates in a random walk with drift. The random walk of the stock price manifests geometric Brownian motion. It is assumed that the stock does not pay any dividends. The model also makes certain assumptions about the market. It assumes that the market does not have arbitrage. It is possible to borrow money at riskless rate. The buying and selling may occur for any amount without cost, i.e. no market friction.
The model assumes that the stock price is log-normally distributed (i.e., its returns are normally distributed), with the standard normal cumulative distribution function N(x) and density N′(x). Black-Scholes argues that the call price is given by:

C(S, t) = N(d₁)S − N(d₂)Ke^(−r(T−t))

where C(S, t) = call option price of stock S at time t; S = price of the stock; K = strike price; r = risk-free rate; and d₁, d₂ = cumulative probability points of a standard normal accounting for implied volatility. A "call option" is the right to buy; a "put option" is the right to sell. The option price is thus a function of the difference between the distribution of the stock price's general movement and the strike price, accounted for volatility through the terms d_i, where:

d₁ = [ln(S/K) + (r + σ²/2)(T − t)] / (σ√(T − t)),  d₂ = d₁ − σ√(T − t)

The put price follows from put-call parity and, expressed in terms of d_i, may be written as:

P(S, t) = Ke^(−r(T−t)) − S + C(S, t) = N(−d₂)Ke^(−r(T−t)) − N(−d₁)S

The Black-Scholes model is an improvement over the oversimplified argument of MPT. The improvement comes from the introduction of the distribution of past and spot price movements. However, Black-Scholes cannot accommodate future price volatility (Gencay & Salih, 2003). Black-Scholes also shares a weakness with MPT in assuming normality of price movements without verifying the actual distribution of the price. Other assumptions, such as investors borrowing at the riskless rate and the absence of market friction, are equally impractical.
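The call and put formulas above can be sketched as follows. This is a standard textbook implementation of the Black-Scholes prices for a European option, using only the Python standard library; the sample inputs in the last line are hypothetical:

```python
from math import log, sqrt, exp
from statistics import NormalDist

N = NormalDist().cdf  # standard normal CDF, N(x)

def black_scholes(S, K, r, sigma, T):
    """European call and put prices under Black-Scholes.

    S: spot price, K: strike price, r: risk-free rate,
    sigma: volatility, T: time to expiry in years.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    call = N(d1) * S - N(d2) * K * exp(-r * T)
    put = K * exp(-r * T) - S + call   # put-call parity
    return call, put

# Hypothetical at-the-money example: S = K = 100, r = 5%, sigma = 20%, T = 1 year
call, put = black_scholes(100.0, 100.0, 0.05, 0.2, 1.0)
```

The put is derived via put-call parity rather than a separate N(−d) expression; the two forms are algebraically identical.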
The weakness of the Black-Scholes assumptions may be best summarized by two statisticians who wrote: "Essentially, all models are wrong but some are useful" (Box & Draper, 1987, p. 424) and "… all models are wrong; the practical question is how wrong do they have to be to not be useful" (ibid., p. 74). Writers admit that the normal distribution is not always found in practical contexts (Geary, 1947). Nevertheless, we continue to make such an assumption. This paper attempts to lessen that tendency and proposes a more robust approach to data analysis by verifying the type of distribution through empirical tests. Such an attempt in the context of stock price analysis has been witnessed in the Jarrow-Rudd Model.

Jarrow-Rudd Model
In 1982, the Jarrow-Rudd Model (JRM) was introduced as a new tool for stock price analysis (Jarrow & Rudd, 1982). This method also employs the price distribution as the building block of the model. The call price is distributed as F, which is approximated by a second distribution called A. By analyzing the central moments of these two distributions, a price forecast may be obtained. The JRM prices the option as an Edgeworth series expansion of F about A, with adjustment terms based on higher-order cumulants and derivatives of the approximating density (Corrado & Su, 1996):

C(F) = C(A) + λ₁Q₃ + λ₂Q₄ + ε(K)
If the above equation (11) drops the remainder term ε(K), a shorter version of the formula is obtained:

C(F) ≈ C(A) + λ₁Q₃ + λ₂Q₄

where λ₁ and λ₂ capture the differences in skewness and kurtosis between F and A, and Q₃, Q₄ are terms based on the derivatives of the approximating density. JRM does not make any assumptions about the distributions F and A; these two distributions are verified by empirical data. This approach is an improvement over MPT and BSM. However, JRM and BSM are employed in markets where option or futures contracts are available. In emerging markets, or for day traders for whom the spot price is the overriding issue, JRM and BSM may be inadequate because options contracts are not available.

Extreme Value Theory
According to the National Institute of Standards and Technology: "Extreme Value Distribution usually refers to the distribution of the minimum of a large number of unbounded random observations" (NIST, 2013). Under this definition, the maximum or minimum values of the series are separated from the original observations and reanalyzed separately. A threshold value is used for removing the minimum or maximum values. These removed items are then re-examined for their distribution and characteristics. The distribution of the removed items may be estimated through the tail index, which can provide information about the underlying distribution (Kostov & McErlean, 2002, p. 5).
There are two approaches to estimating the threshold value in Generalized Extreme Value (GEV) theory. The first method uses blocks of maxima; this is called the annual maxima series (AMS) approach (Hosking et al., 1985; Madsen et al., 1997). The second method uses a specified point as the threshold beyond which values are considered extreme (Leadbetter, 1991); this is known as the Peak Over Threshold (POT) approach. Although AMS and POT have been used in the analysis of natural disaster events, for market behavior they must be adapted to the nature of the event. For example, AMS may not be appropriate for financial risk management due to its requirement of a longer period of observation. The POT method may be more appropriate for financial risk management due to its use of a threshold value which can be defined by investors. The POT method is also known as the Partial Duration Series (PDS) approach. Under PDS, the data set is assumed to take a particular distribution (Madsen et al., 1997). The question of which distribution PDS should assume remains unsettled. For instance, Shane and Lynn (1964) assume that PDS is Poisson distributed. Zelenhasic (1970) proposed that the exceedance is gamma distributed. Another group of writers, such as Miquel (1984) and Ekanayake and Cruise (1993), proposed that the exceedance is Weibull distributed. In Rosbjerg et al. (1991), it was suggested that the lognormal distribution characterizes the exceedance. Lastly, there are researchers who suggest that the exceedance set is distributed generalized Pareto (Van Montfort & Witter, 1986; Hosking & Wallis, 1987; Fitzgerald, 1989; Davison & Smith, 1990; Wang, 1991; Madsen et al., 1995). This paper makes no assumption about the data distribution; the distribution is verified by empirical evidence through the tail index.
The reason why writers cannot agree on a definite distribution of extreme series may come from the fact that both the AMS and POT methods remove exceedance points from the original set. This creates an inherent problem with sample size requirements. The size of the exceedance set Y_i varies from one study to another depending on the size of the original series X_i; thus, the resulting distribution G(Y) is not definitive. If Y_i is small, it may approximate a chi-square distribution. If Y_i is large enough, under the law of large numbers, the series may approximate a normal distribution. However, if Y_i is volatile, it may be Gumbel, Fréchet, Weibull or any one of the continuous distributions; there is no definite answer as to the type of distribution of Y_i. In order to reconcile these differences and uncertainty, for smaller samples this paper suggests a two-step process: (i) employ the standard score equation to verify the existence of outlier points; and (ii) if outliers are found, use the entire original observation as the basis for extreme value analysis. This approach overcomes the issue of inadequate sample size for verifying the distribution of the points in excess of the threshold and allows researchers to work with smaller samples, i.e., weekly or monthly stock price movements. In so doing, we do not "assume" any type of distribution, but empirically verify the distribution type through the tail index and p-quantile percentage probability.
Under GEV, no assumption about the data distribution is made. Prices of stocks are tested and verified for their distribution. Whereas MPT focuses on investors and sanctifies them as rational, EVT does not look at investors. In EVT, the stock price itself becomes the unit of analysis. As a tool for univariate and nonparametric testing, EVT allows investors to determine the threshold level for price level and the scale of price volatility as indicators of risk. Thus, EVT becomes increasingly relevant in risk management (Embrechts et al., 1999, p. 32).

METHODOLOGY
Probability distribution under GEV is used to propose a tool for stock price analysis for purposes of risk management. The GEV distribution is the generalized form of three extreme value distributions: Fréchet, Weibull and Gumbel (Gilli & Këllezi, 2006, p. 5). One hundred companies in the SET100 Index were used as the main sample F. From this main sample, a group of exceedances G was identified for tail index analysis. From the tail index ξ, the correct type of distribution is assigned to each company's price data.

Data
The data used in this paper come from SET100, an index of stock prices traded on the Stock Exchange of Thailand (SET). SET100 comprises 100 companies. The close prices of the individual index components for 30 trading days were used as the data set. One company in the index had no available data; the remaining 99 companies were used as the final sample. Additional data consist of the SET100 index values for the same period.

Sampling and Sample Size
During the study period, the Stock Exchange of Thailand had 686 listed companies, classified by SET into 9 industries and 16 sectors. SET maintains three active indices: the SET Index for the entire market, the SET100 Index for the 100 leading companies, and the SET50 Index for the 50 leading companies. The data from SET and SET100 were used in this study. SET50 was not included because companies in SET50 are also listed in SET100.
The effective sample size after the removal of defective data comprises the close prices of 30 consecutive trading days for 99 companies. Individual stock prices were collected from April 16, 2015 to June 2, 2015. The monthly market index SET was collected from January 2012 to May 2015. The rationale for using a longer period for the market data is to assure the stability of the data distribution at the market level. The individual stock data were confined to 30 trading days due to the time frame of information needed for short-term risk assessment; a longer period would not be practical due to potential price volatility and the aging of the data.
Since this study involves multiple stocks with diverse price levels, the adequacy of the sample size was assessed using the Central Limit Theorem (CLT). The CLT states that the standardized mean of a set of randomly occurring events converges to the standard normal distribution. The Lindeberg-Lévy CLT is given by:

√n (X̄ − μ) / σ → N(0, 1)

The term Φ(x) is the standard normal CDF evaluated at x. This is a pointwise estimation; the convergence of X̄ to μ is uniform in z because Φ is continuous. The Lyapunov CLT relaxes the identical-distribution requirement, provided a condition on the higher moments holds. This paper introduces a new approach to the CLT as a means of sample size assessment, based on the rationale that if the sample fairly represents the population distribution, then T = Z; if so, then S = σ, or S² = σ². Since the square of the standard deviation is the variance, and the variance represents the shape of the curve, it follows that under the condition T = Z, the sample and population variances are equal. This equivalence is a range within a specified confidence interval, not a point in space. This approach to the CLT differs from the Lindeberg, Lindeberg-Lévy and Lyapunov forms because it does not use location comparison (X̄ vs. μ), but shape analysis (S² vs. σ²). Additionally, unlike prior CLT approaches which use the value zero as the test reference, the proposed variance analysis uses the error created by the overlapping shapes of the observed and theoretical curves as the reference point. The proposed criterion is given by:

| S² − σ² | ≤ E

where E = standard error, given by E = σ/√n. The proposed criterion is based on the argument that the difference between the sample and population variances must not exceed a threshold level of standard error. In the POT literature, it has been recommended that the threshold level (q₀) be obtained as the observed mean plus the product of the k count and the sample's standard deviation:

q₀ = E[Q] + k · S[Q]

where E[Q] = observed mean of the sample; S[Q] = sample standard deviation; and k = predefined frequency factor.
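The proposed variance-based criterion can be sketched as follows. This is an illustrative reading of the rule |S² − σ²| ≤ E with E = σ/√n; the function name is hypothetical, and the test data are toy values, not from the SET sample:

```python
from statistics import variance

def sample_size_adequate(sample, pop_sigma):
    """Sketch of the paper's proposed CLT criterion: the sample is judged
    adequate when the gap between the sample variance S^2 and the population
    variance sigma^2 stays within one standard error E = sigma / sqrt(n)."""
    n = len(sample)
    e = pop_sigma / n ** 0.5
    return abs(variance(sample) - pop_sigma ** 2) <= e
```

For example, a sample whose variance matches the assumed population variance passes the check, while one far from it fails; in practice σ would itself have to be estimated from a longer reference series, as the paper does with the market index.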
This method of identifying the exceedance level q₀ has been used in flood studies (Rasmussen & Rosbjerg, 1991) and precipitation research (Madsen et al., 1994). However, in financial data analysis, where the time period is short and the sample size is small, this method is not practical. For instance, consider a sample of five items X_i = (1, 2, 3, 4, 10). The mean is 4.00 and the standard deviation is 3.54. The k count is 1, since one value (10) in the series stands out as an "apparent extreme." Using equation (14), q₀ = 4.00 + 1(3.54) = 7.54, and only the value 10 exceeds it. Although X_i is distributed as F(X) in space Ω₁ and the exceedance is expected to distribute as G(X) in space Ω₂, it is not possible to determine the distribution type for G(X) because the k count is 1, i.e., a single point in space Ω₂. This is the inherent limitation of the q₀ approach.
This paper proposes that extreme values should instead be determined by the use of the standard score equation:

Z_i = (X_i − X̄) / S

where X_i = daily close price of the individual stock; X̄ = mean close price of the individual stock; and S = standard deviation of the daily close price.
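The standard score screening can be sketched as follows; the function name is illustrative, and the critical value 1.65 corresponds to the paper's CI 0.95 threshold. The test reuses the five-item toy series from the q₀ example above:

```python
from statistics import mean, stdev

def exceedances(prices, z_crit=1.65):
    """Flag extreme values via the standard score Z_i = (X_i - mean) / S,
    against a critical value (1.65 at CI 0.95 in the paper's convention)."""
    m, s = mean(prices), stdev(prices)
    return [x for x in prices if abs((x - m) / s) > z_crit]
```

On the series (1, 2, 3, 4, 10), the value 10 has Z ≈ 1.70 and is flagged, while the remaining values fall inside the boundary.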
The GEV distribution is given by:

H(x; μ, σ, ξ) = exp{ −[1 + ξ(x − μ)/σ]^(−1/ξ) }

where μ = location; σ = scale; and ξ = shape. The function is defined where 1 + ξ(x − μ)/σ > 0; outside this support, H is undefined (Bensalah, 2000). However, if ξ = 0, the expression is taken as its limit and reduces to:

H(x; μ, σ) = exp{ −exp[−(x − μ)/σ] }

The parameter ξ is the tail index of the distribution. This index may be used to classify the type of extreme value distribution. If ξ = 0, the H distribution is the Gumbel distribution, also known as Type I, where −∞ < x < ∞ and σ > 0, given by the reduced form above. If ξ > 0, H is the Fréchet distribution, whose parameters for a sample of size n may be estimated by maximum likelihood (Abbas & Yincai, 2012). If ξ < 0, H is the Weibull distribution. The next step was to classify the type of extreme value distribution of the series through the use of the tail index. There are two methods of tail index estimation: the Pickands method (Pickands, 1975) and the Hill method (Wagner & Marsh, 2000). The Pickands estimator is based on the spacings of the upper order statistics:

ξ̂_P = (1/ln 2) · ln[ (X_(m) − X_(2m)) / (X_(2m) − X_(4m)) ]

where m = number of observations in the tail to be observed and k = sample size. The Hill estimator is given by:

ξ̂_H = (1/m) Σ_{i=1}^{m} [ln X_(n−i+1) − ln X_(n−m)]

Both methods follow the same decision rule for classifying the type of extreme value distribution: ξ > 0 indicates Fréchet; ξ = 0 indicates Gumbel; and ξ < 0 indicates Weibull.
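The Hill estimator can be sketched in code as follows. This is an illustrative implementation of the textbook formula (1/m) Σ [ln X_(n−i+1) − ln X_(n−m)]; the function name is hypothetical, and the toy data in the test are constructed so the estimate comes out exactly:

```python
from math import log

def hill_estimator(data, m):
    """Hill tail index estimate from the m largest observations:
    xi_hat = (1/m) * sum_{i=1..m} [ln X_(n-i+1) - ln X_(n-m)]."""
    x = sorted(data)          # ascending order statistics X_(1) <= ... <= X_(n)
    n = len(x)
    threshold = x[n - m - 1]  # X_(n-m), the (m+1)-th largest value
    return sum(log(x[n - i]) - log(threshold) for i in range(1, m + 1)) / m
```

A positive estimate classifies the series as Fréchet (fat-tailed) under the decision rule above; in practice the estimate should be inspected over a range of m, since it is sensitive to the number of tail observations used.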

FINDINGS
Contrary to the general assertion in the current literature that financial data, specifically stock price data, follow a fat-tailed (Fréchet) distribution, the empirical test of data from Thailand's Stock Exchange shows that the market price distribution contains no extreme values under the standard score approach for verifying exceedance. The tail index of the market data shows that SET is a mix of Fréchet and Weibull distributions. This empirical evidence also contradicts Markowitz's and Black-Scholes' assumptions of normality in stock price distribution. This finding has significant implications for how market analysts and investors should approach risk management in stock investment.

Extreme Value Identification
Using the Z-score method at CI 0.95, the market data show no extreme values: the market index for the Thai Stock Exchange is stable over the two-year period. Nevertheless, it is still necessary to verify the distributions of the SET, SET50 and SET100 indices. Since the standard score calculation shows no extreme values, the entire series of 14 months was used to verify the distribution. All three indices were Weibull distributed, with tail indices of 1.05, 1.02 and 1.11, respectively. Individual stocks were tested for extreme values over a period of 30 trading days. Of the 100 companies in the SET100 index, 97 showed extreme values in the 30 consecutive trading days between April 16 and June 2, 2015; three companies were removed for incomplete or defective data.

Tail Index and Distribution Verification
Three sets of calculations were made for the tail index at the macro level: the tail indices for SET, SET50 and SET100. Under the Hill method, the tail indices were −1.05 for SET, −1.02 for SET50 and −1.11 for SET100. The stock market in Thailand is therefore Weibull distributed.
A second set of tail index calculations was used to identify the tails of the component stocks of SET100. Among these 100 companies, 26 were confirmed Fréchet distributed; 71 were Weibull distributed; one was Gumbel distributed; and two companies were removed for data incompleteness or defect.

Fisher-Tippett-Gnedenko GEV Scale and Risk Indicator
Using the GEV parameters, the scale of the exceedance is determined from σ = estimated standard deviation of the exceedance; μ = mean of the exceedance; ξ = tail index; and u = threshold value (Moscadelli, 2004). The threshold value used in this case is the critical score at CI(0.95), or 1.65 for a two-tailed test. From the scale σ, the upper bound of the estimated price is determined by U = X̄ + σ and the lower bound by L = X̄ − σ. The range is simply R = U − L. This range is the bound within which the price may fluctuate without being classified as risk, upside or downside. Thus, the risk indicator Z_k is obtained by:

Z_k = (X_i − X̄) / σ

which is a reformulation of equation (13). Upside and downside risks are defined by values outside the boundary −1.65 ≤ Z_k ≤ 1.65. The results in Table 2 are used to verify the statistical significance of the upside and downside risk for the two groups of stocks: Fréchet and Weibull distributed. First, the discrete probability of upside risk for the Fréchet group is calculated using the Laplace Rule of Succession (Durrett, 2013):

p = (s + 1) / (n + 2)

where s = combined number of upside risks at the 0.95, 0.90 and 0.80 CI; n = number of stocks whose exceedance is Fréchet distributed. The probability of upside risk is p = 0.64 and the probability of non-upside risk is q = 0.36. The test statistic follows the De Moivre-Laplace Theorem (Balazs & Balint, 2014):

Z = (s − np) / √(np(1 − p))

Eleven stocks in the Weibull group show upside risk; there is no downside risk indication in this group. The Z value under the De Moivre-Laplace Theorem for the Weibull group is Z = (1 − 71(0.125)) / √(71(0.125)(0.875)), or Z = −2.82; compared with the reference value of −1.65, the upside risk in the Weibull group is statistically significant. No stocks in the Weibull group manifest downside risk. The Fréchet group shows a total of nine downside risks at CI 0.95, 0.90 and 0.80.
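The risk indicator and the two significance tools above can be sketched as follows. The function names are illustrative; the numeric check reuses the Weibull-group figures quoted in the text (s = 1, n = 71, p = 0.125):

```python
from math import sqrt

def risk_indicator(x, mean_price, scale):
    """Z_k = (X - mean) / scale; |Z_k| > 1.65 flags upside or downside risk."""
    return (x - mean_price) / scale

def laplace_succession(s, n):
    """Laplace rule of succession: p = (s + 1) / (n + 2)."""
    return (s + 1) / (n + 2)

def de_moivre_laplace_z(s, n, p):
    """Normal approximation to the binomial: Z = (s - n p) / sqrt(n p (1 - p))."""
    return (s - n * p) / sqrt(n * p * (1 - p))
```

For instance, a price two scale units above the mean yields Z_k = 2.0, outside the ±1.65 boundary, and is classified as upside risk.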

DISCUSSION
It may be objected that normality has been assumed for the sample even though the standard score equation is used for verifying the presence of exceedance. These two conditions are not contradictory. The examination of the sample distribution consists of two stages: (i) identifying the exceedance threshold; and (ii) verifying the sample distribution through the tail index. The sample is drawn from a non-finite population whose characteristic is defined by the law of large numbers, which states that at an adequate size the population and sample means are equal. The probability distribution of such a population takes the form of a normal distribution; thus, the Z-equation is used in stage 1. The use of the Z-equation reflects the original condition from which the sample is drawn. Once the exceedance is found in the sample, the entire sample is then subjected to distribution verification using the tail index in stage 2.
Two additional observations are made. First, EVT is an effective tool for risk assessment. This efficacy is evidenced through distribution analysis under EVT. The parameters of the distribution functions allow investors to gauge the threshold of risk level or volatility level and manage risk accordingly. These parameters include distribution location, shape, and scale. Second, the result of empirical test from SET100 data shows that MPT's and BSM's assumptions of normal distribution is not practicable.
If both the upside and downside risks are statistically significant, then the stock is volatile. Volatility in this context is defined as Z_k < −1.65 or Z_k > 1.65. Thus, in this study, the Fréchet group of stocks shows both significant upside and downside risk. Stocks in this group are considered volatile; they are more appropriate for investors who are risk-affine or have a higher tolerance for risk. Stocks in the Weibull group show significant upside risk but no downside risk. These stocks are considered non-volatile; they are more appropriate for investors with a lower tolerance for risk. The method used to arrive at these conclusions is a contribution to stock investment practice.
Individual stocks must be read against the market's movement. In the present case, SET100 as a whole is Weibull distributed. Under the Fisher-Tippett-Gnedenko GEV equation, both Fréchet and Weibull distributed data may be generalized under one equation H(μ, σ, ξ). Therefore, if the upside and downside risk indicators for SET100 are determined, a 2 × 2 table may be constructed for comparing the individual stock price to the market index. The chi-square test is given by:

χ² = Σ (O_i − E_i)² / E_i

where O_i = observed count and E_i = expected count. The test under equation (24) indicates whether the individual stock's upside and downside risks are significantly different from those found in SET100's distribution.
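The 2 × 2 comparison can be sketched as follows; this is the standard Pearson chi-square statistic with expected counts taken from the row and column totals (the function name and test counts are illustrative, not the paper's Table 2 figures):

```python
def chi_square_2x2(table):
    """Pearson chi-square for a 2x2 table of counts:
    chi2 = sum (O - E)^2 / E, with E from row and column totals."""
    (a, b), (c, d) = table
    n = a + b + c + d
    rows, cols = (a + b, c + d), (a + c, b + d)
    chi2 = 0.0
    for i, obs_row in enumerate(table):
        for j, o in enumerate(obs_row):
            e = rows[i] * cols[j] / n   # expected count under independence
            chi2 += (o - e) ** 2 / e
    return chi2
```

At one degree of freedom, a statistic above 3.84 is significant at CI 0.95, indicating that the stock's risk profile differs from the index's.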
From the result under equation (24), it is concluded that the individual stocks' exceedances and SET100 are significantly different. Therefore, SET100 cannot serve as an indicator or guide for stock price movement. This finding carries an important implication: the index and its components do not reflect one another. This is antithetical to the idea of a stock market index as a reference against which individual stock prices are compared. The experience of Thailand's stock market shows that the individual stocks identified by EVT behave significantly differently from the index of which they are components.

Risk Assessment Tool
EVT verifies the data distribution type through empirical testing; there is no need to make an assumption about the data distribution. The investor can verify the type of data distribution through the use of the tail index ξ. Conventionally, risk has been defined as the variance of the returns on the asset. However, in stock price movement analysis, risk is defined as the volatility of the price itself. For this reason, the shape, location and scale of the stock price distribution are key indicators for risk assessment. The location of the mean on the distribution curve indicates the expected price level. The shape of the curve indicates the characteristic behavior of the stock price. The scale of the curve indicates the level of volatility of the price. These three parameters may be used as risk assessment tools. EVT makes these tools practical and accessible to investors at large: no complicated computer software or cumbersome mathematical formulae are necessary. By following the series of simple calculations outlined in this paper, investors can assess risk and make investment decisions according to the value of Z_u for upside risk and Z_d for downside risk. These simplified calculations can be accommodated by a common Excel spreadsheet.

Implications on Modern Portfolio Theory
Thailand's Stock Exchange comprises 9 industries and 16 sectors. It is tempting to accept MPT's concept of portfolio diversification by holding assets drawn from various industries and sectors. However, empirical testing shows that the entire set of SET100 index components has two types of distribution: Fréchet and Weibull. The type of distribution does not depend on sector or industry. This finding implies that MPT's concept of equity-only portfolio diversification has no merit for the stock market in Thailand. This assessment does not apply to the case where MPT advocates a combined holding of stocks and bonds. Whether this conclusion holds for other markets requires further research and testing.

Sample Size of Exceedance
The application of EVT is a two-step process: (i) taking the main sample, distributed as F(X), within which a threshold point u is designated; and (ii) collecting all points that exceed u, which are distributed as G(X). The problem arises when G(X) is too small to provide a meaningful extraction of the tail index to verify the distribution of G(X). For instance, if G(X) comprises only two points, no distribution can be stipulated. To solve this problem, it is suggested that if the original sample F(X) is small and the set of extreme points in G(X) is also unreasonably small, the entire F(X) should be used for the tail index calculation. This approach is more appropriate in stock price analysis since investors trading on the spot market often deal with small sample sizes. This approach is akin to using the finding of exceedances, no matter how few, as a diagnosis. Once exceedance values are found, EVT is applied to the entire sample. This method is practical for day-traders and short-term investors in the equity market.
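The two-step extraction with the small-sample fallback can be sketched as follows. The `min_points=5` cutoff is an illustrative choice, not a value from the paper:

```python
def exceedances(sample, u, min_points=5):
    """Collect G(X) = {x in F(X) : x > u}.  If G(X) is too small to
    support a tail-index calculation, fall back to the whole of F(X),
    as the paper suggests for small spot-market samples."""
    g = [x for x in sample if x > u]
    if len(g) < min_points:
        return list(sample), True   # fallback: use all of F(X)
    return g, False

# Hypothetical small sample: only two closes exceed u = 12.0,
# so the fallback to the full sample is triggered.
fx = [9.8, 10.1, 10.0, 10.3, 12.5, 9.9, 10.2, 13.1]
gx, used_fallback = exceedances(fx, u=12.0)
print(len(gx), used_fallback)  # 8 True
```

The `used_fallback` flag records that the exceedances served only as a diagnosis and the tail index will be computed on all of F(X).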
This paper urges that no assumption of distribution be made and that the data distribution be verified through the tail index; yet the identification of the exceedances is obtained through the Z-score equation. The underlying assumption of the Z equation is normal distribution. This apparent contradiction may be explained.
Recall that the sample n, distributed as F(X), was taken from a population N distributed as Φ(Z). By definition, for adequately large i.i.d. N, Φ(Z) is distributed normally. This logic is offered by the de Moivre-Laplace theorem (MLT) (Balazs & Toth, 2014). Under MLT, discrete binomial data approximate the normal distribution as the sample size n approaches infinity, thus:

C(n, k) pᵏ(1 − p)ⁿ⁻ᵏ ≈ (1/√(2πnp(1 − p))) exp(−(k − np)²/(2np(1 − p)))

In our case N is non-finite, or N → ∞; thus, if the original distribution of the population from which sample n was taken is N(0, 1), then F(X) → Φ(Z). Therefore, the use of the standard score Z = (X − X̄)/S to identify exceedances in set n reflects the Φ(Z) distribution of the population N from which n was drawn. The distribution of n is also verified by CLT under the variance difference method: (17), supra.
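The de Moivre-Laplace approximation invoked above can be checked numerically: for large n, the Binomial(n, p) mass at k approaches the normal density with mean np and variance np(1 − p). The choice n = 1000, p = 0.5 is illustrative:

```python
from math import comb, sqrt, pi, exp

def binom_pmf(n, p, k):
    """Exact binomial probability mass at k."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def normal_approx(n, p, k):
    """Normal density with matched mean np and variance np(1-p)."""
    mu, var = n * p, n * p * (1 - p)
    return exp(-(k - mu) ** 2 / (2 * var)) / sqrt(2 * pi * var)

n, p, k = 1000, 0.5, 500
print(binom_pmf(n, p, k), normal_approx(n, p, k))
```

The two values agree to roughly four decimal places, which is the sense in which the discrete population approximates Φ(Z) and justifies using the standard score to flag exceedances.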

CONCLUSION
This paper reviews three existing risk management tools in stock investment, namely Markowitz's Modern Portfolio Theory (MPT), the Black-Scholes Model (BSM), and the Jarrow-Rudd Model (JRM). Using the SET100 index and its components from Thailand's Stock Exchange as a case study, evidence shows that MPT has a weakness for intra- and inter-sector diversification due to the lack of diversity when the data distribution is the unit of analysis. If the portfolio is a mix of debt and equity, MPT might perform differently; however, such an issue is beyond the scope of this paper. BSM assumes that the market is normally distributed. However, in practice the market is not normally distributed. In this study, price data from the SET contain stocks that are Fréchet and Weibull distributed. BSM has been criticized in the literature, and the empirical evidence in this study echoes those criticisms of the model's assumption of normality. Finally, JRM shows an improvement over MPT and BSM by using an empirical distribution. However, like BSM, JRM is more applicable to markets where hedging and futures contracts are available. Such a requisite is more applicable to advanced and developed markets, such as NYSE, NASDAQ, FTSE, CAC40, or NIKKEI. In emerging markets, such as SET or other markets in the ASEAN region where option contracts may not be available, JRM may still be out of reach as an investment management tool. This paper proposes a fourth alternative under Extreme Value Theory (EVT).
Under EVT, this paper advocates a four-step process in investment risk management for stock traders: (i) use 30 daily trading sessions as the reference sample F(X) from which extreme values are identified; (ii) with a predefined risk tolerance level expressed as a confidence interval, fix a threshold value beyond which an event is classified as extreme; (iii) collect all extreme events into a separate group called G(X); use the tail index calculation to verify the distribution of G(X) and impute that distribution onto F(X); and (iv) use the shape, location, and scale parameters under Fisher-Tippett-Gnedenko's Generalized Extreme Value (GEV) to assess the risk level and inform investment decisions, i.e., buy or sell orders. This four-step process may be a practical risk management tool for stock investment in emerging markets where option contracts are not available.
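The four steps above can be sketched end to end. This is a simplified illustration under stated assumptions: the Pickands estimator is used here as one common tail-index estimate whose sign distinguishes Fréchet-type (ξ > 0) from Weibull-type (ξ < 0) tails — the paper's exact estimator may differ — and the cutoff, fallback size, and price series are all hypothetical:

```python
from math import log
from statistics import mean, stdev

def pickands(sample, k):
    """Pickands tail-index estimate from the top order statistics.
    Negative values suggest a bounded (Weibull-type) tail, positive
    values a heavy (Frechet-type) tail.  Requires len(sample) >= 4k."""
    xs = sorted(sample, reverse=True)
    ratio = (xs[k - 1] - xs[2 * k - 1]) / (xs[2 * k - 1] - xs[4 * k - 1])
    return log(ratio) / log(2)

def four_step(prices, cutoff=1.96, min_points=5):
    # (i)-(ii): 30-session reference sample F(X); threshold from the
    # chosen risk-tolerance confidence level
    m, s = mean(prices), stdev(prices)
    u = m + cutoff * s
    # (iii): collect exceedances G(X), with the small-sample fallback
    # to the whole of F(X) described earlier in the paper
    gx = [p for p in prices if p > u]
    data = gx if len(gx) >= min_points else prices
    k = max(1, len(data) // 4)
    xi = pickands(data, k)
    # (iv): read off the implied GEV family for the risk decision
    family = "Frechet" if xi > 0 else "Weibull"
    return xi, family

# Hypothetical 30-session series; uniform-like data has tail index -1
xi, family = four_step([10 + 0.1 * i for i in range(30)])
print(round(xi, 3), family)  # -1.0 Weibull
```

A Weibull result signals a bounded, low-volatility price range, while a Fréchet result would flag a heavy upper tail and hence higher price risk, in line with the paper's two-group finding for SET100.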