Saturday, September 06, 2008

Market Efficiency Literature Review

Market efficiency is one of the cornerstones of modern financial economics. The theory started with the largely unknown work of Louis Bachelier, a French mathematician, but it received little attention until the US economist Paul Samuelson revived it in the 1960s, and it has raised several controversies since its adoption. In 1965 Eugene Fama published his dissertation arguing for the random walk hypothesis, and his 1970 survey documents several studies that could not reject the theory. Jensen (1978) goes as far as arguing that no other proposition in economics has as much solid empirical evidence behind it as the Efficient Market Hypothesis. Some studies depict the market as an intelligent agent with an invisible hand (in the spirit of Adam Smith) that corrects speculative movements in prices, as described in Poterba and Summers (1988).

This theory rests on the operational and allocational efficiency of capital markets, which are assumed to have the following properties:
  1. Agents are rational and seek to maximize their expected utility
  2. All agents are price takers
  3. Assets are divisible and marketable
  4. The market is frictionless, with no transaction costs or taxes
  5. Information is costless and reaches all agents at the same time

As a result, all shares on the market trade at their fair value: the price incorporates all known information, including the full price history, so no arbitrage profit can be realized using public information.

Fama (1970, 1976) contributed a great deal to this literature by establishing an extremely important classification of market efficiency:
  • Weak-form efficiency: No abnormal return can be realized based on previous price information, and future price movements are totally random.
  • Semi-strong-form efficiency: No investor can earn excess returns by trading on publicly available information, because the market adjusts to new information rapidly and in an unbiased fashion.
  • Strong-form efficiency: Share prices reflect all available information and no investor can earn excess returns.

The strong form implies that no investor can beat the market even when a company produces an astonishing innovation. For example, if a company developed an AIDS treatment today and knew it was 100% effective, an investor could not realize an excess return on that information because the market would already have adjusted to it. However, history shows that some investors have consistently beaten the market, such as Warren Buffett, George Soros and Peter Lynch. In addition, Fama and French (1987) argue that prices are negatively serially correlated, which could make 25-40% of price variation predictable.

The literature argues for and against the theory from different points of view. Several classical papers document the presence of anomalies in the market pricing of shares. Other papers discuss the validity and the presence of information in market movements.

Anomalies in returns have been reported over the past fifteen years, although they should perhaps be called regularities, as Berk (1995) suggests. These anomalies were documented in the following papers:
* Poterba and Summers (1988)
* Lakonishok and Smidt (1988)
* Lo and MacKinlay (1988)
* Karafiath (1988, 1994)
* Berk (1995)
* Barber and Lyon (1997)
* Fama and French (1996)

Banz (1981)
This paper was one of the first to document empirical irregularities in market pricing. It describes two important aspects of prices:
  • The logarithm of a stock price is an inverse predictor of its return
  • When risk is controlled for by using an asset pricing model (CAPM, for example), market value has explanatory power over the part of return that is not explained by the model.

Poterba and Summers (1988)
investigate the presence of a transitory component in the price process. They note that negative serial correlation in prices means either that some previous erroneous market moves have been corrected, or that the serial correlation arises from time variation in risk factors. The paper examines the hypothesized transitory price component and the validity of mean-reverting movement, and also tests whether the mean reversion is due to shifts in required returns or to changes in interest rates.
Using variance-ratio tests they could not reject the random walk hypothesis, but they find a significant transitory price component that is responsible for a major part of price variance over time: for US prices its standard deviation is 15-25%, and it accounts for more than 50% of monthly return variance. They also find positive serial correlation in prices over the short run but negative serial correlation over the long run, and suggest that noise trading provides a plausible explanation for transitory price components.

Lakonishok and Smidt (1988)
use 90 years of daily returns on the DJIA, from 1898 until 1986, to test for the existence of persistent seasonal patterns in returns. They find evidence of persistent return anomalies around the turn of the week, the turn of the month, the turn of the year, and holidays. The rate of return on Mondays was negative, and the price increase around the turn of the month exceeded the total monthly price increase. The price increase from the last trading day before Christmas to the end of the year is over 1.5%. However, there was no special pattern at the end of a month that did not fall at the end of a year or quarter. Possible explanations include inventory adjustment by traders at the end of fiscal periods, the timing of firms' reporting, seasonal patterns in cash flows to individual and institutional investors, tax-induced trading, hedge funds' last-minute trading, and window dressing induced by periodic evaluation of portfolio managers.
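A calendar-anomaly check of this kind boils down to grouping daily returns by weekday (or by position in the month) and comparing the group means. A minimal sketch on simulated data — the negative Monday drift below is built in by assumption, so the output only illustrates the method, not the evidence:

```python
import random
from collections import defaultdict

random.seed(42)
WEEKDAYS = ["Mon", "Tue", "Wed", "Thu", "Fri"]

# Simulate 10,000 trading days with a (made-up) -0.1% Monday drift
# and a +0.05% drift on the other weekdays.
returns_by_day = defaultdict(list)
for day in range(10000):
    weekday = WEEKDAYS[day % 5]
    drift = -0.001 if weekday == "Mon" else 0.0005
    returns_by_day[weekday].append(drift + random.gauss(0, 0.01))

# Compare mean returns across weekdays
means = {d: sum(rs) / len(rs) for d, rs in returns_by_day.items()}
for d in WEEKDAYS:
    print(d, round(means[d], 4))
```

A real test would of course use actual DJIA data and a formal test statistic for the difference in means, but the bookkeeping is the same.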

Lo and MacKinlay (1988)
test the random walk hypothesis for weekly stock returns by comparing variance estimators derived from data sampled at different frequencies. They find that the random walk model is generally not consistent with the stochastic behavior of weekly returns, especially for smaller-cap stocks. Unlike Fama and French (1987) and Poterba and Summers (1988), they find that portfolio returns exhibit positive serial correlation while individual stocks show negative correlation. The rejection cannot be completely explained by infrequent trading or time-varying volatilities, although it is largely due to the behavior of small stocks. They also conclude that the stationary mean-reverting price models discussed in Poterba and Summers (1988) and Fama and French (1987) cannot account for all the variation observed in their sample of weekly returns. However, they stress that rejecting the random walk does not mean market prices are inefficient; it should instead impose limits on the set of acceptable pricing models.
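The core of the variance-ratio idea can be sketched in a few lines. This is not the exact Lo-MacKinlay statistic (which uses overlapping observations and a bias correction), just the basic comparison: under a random walk, the variance of a q-period return is q times the variance of a one-period return, so the ratio should be near 1; mean reversion pushes it below 1.

```python
import random
import statistics

def variance_ratio(returns, q):
    """VR(q) = Var(q-period return) / (q * Var(1-period return)),
    using non-overlapping q-period sums. Under a random walk VR(q)
    is close to 1; negative serial correlation pushes it below 1."""
    one_var = statistics.variance(returns)
    q_rets = [sum(returns[i:i + q]) for i in range(0, len(returns) - q + 1, q)]
    return statistics.variance(q_rets) / (q * one_var)

random.seed(0)
# i.i.d. returns (a random walk in prices): VR(q) should be near 1
iid = [random.gauss(0, 0.02) for _ in range(5000)]
# negatively autocorrelated (mean-reverting) returns: VR(q) drops below 1
mr, prev = [], 0.0
for _ in range(5000):
    r = -0.3 * prev + random.gauss(0, 0.02)
    mr.append(r)
    prev = r
print(round(variance_ratio(iid, 4), 2), round(variance_ratio(mr, 4), 2))
```

The AR(1) coefficient of -0.3 is an arbitrary illustrative choice; with it, VR(4) lands well below 1 while the i.i.d. series stays close to 1.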

Karafiath (1988, 1994)
approaches the issue from a different point of view and contributes some methodological innovations to the testing methods. In his 1988 paper he introduced the use of dummy variables in the event study procedure, which offers a convenient way to obtain cumulative prediction errors and the related test statistics in one step. In his 1994 paper he uses Monte Carlo simulations to investigate whether feasible generalized least squares (FGLS), weighted least squares (WLS), or consistent-estimator least squares (CLS) accounts for heteroskedasticity and cross-sectional correlation in returns better than ordinary least squares (OLS). The paper concludes that FGLS is well specified if the number of time-series observations is much larger than the number of securities, but it has no greater power than WLS (which is FGLS with the off-diagonal elements of the covariance matrix set to zero). OLS is also well specified in the Monte Carlo simulations, and CLS has power similar to OLS. In summary, WLS, CLS and OLS are all well specified under the simulation, and WLS has better power than OLS or CLS, though this extra power decreases as the number of securities increases.
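The OLS/WLS comparison is easy to illustrate on made-up data: WLS is simply least squares with each observation weighted by the inverse of its error variance (the diagonal-covariance special case of FGLS). Both estimators are unbiased, but WLS is more precise when the errors are heteroskedastic. A toy sketch:

```python
import random

def ols_slope(x, y):
    """Ordinary least squares slope of y on x (with demeaning)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

def wls_slope(x, y, err_var):
    """Weighted least squares slope: each observation weighted by the
    inverse of its error variance (FGLS with a diagonal covariance)."""
    w = [1.0 / v for v in err_var]
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    num = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
    den = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    return num / den

random.seed(1)
true_beta = 1.2
x = [random.gauss(0, 1) for _ in range(2000)]
# heteroskedastic errors: half the observations are 40x noisier
err_var = [0.1 if i % 2 == 0 else 4.0 for i in range(2000)]
y = [true_beta * xi + random.gauss(0, v ** 0.5) for xi, v in zip(x, err_var)]
print(round(ols_slope(x, y), 2), round(wls_slope(x, y, err_var), 2))
```

Both estimates recover a slope near the true 1.2, but repeated simulation would show the WLS estimate clustering much more tightly — the extra power Karafiath reports.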

Berk (1995)
examines size-related anomalies and suggests that observations violating the random walk hypothesis should be treated as regularities in an economy in which all asset returns satisfy one of the adopted asset pricing models. The paper asserts that firm size can account for some of a firm's return risk and is usually recognized as the most prominent contradiction of the asset pricing paradigm; Schwert (1983) notes that the observed relation between the anomaly variables and returns implies that these variables proxy for risk. There has been little success in explaining these regularities and their interaction with risk and return. The author assumes, for the sake of argument, that all firms have the same size in the sense of the same expected end-of-period cash flow. The risk of each firm's cash flow differs, however, so the market value of each firm differs: riskier firms have lower market values and must yield higher expected returns on their assets. Consequently, if market value is used as a measure of risk, it will predict a component of returns. The author therefore concludes that it is misleading to call the size-return relation an anomaly; on the contrary, it would be an anomaly if a negative relation between size and expected return were not found, and this is why size should be used in cross-sectional regressions to detect mis-specifications of a model.
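Berk's argument reduces to one line of arithmetic: with identical expected cash flows, the riskier firm's higher discount rate gives it a lower market value today and, mechanically, a higher expected return. A made-up two-firm example:

```python
# Two firms with the same expected end-of-period cash flow of $110.
# The riskier firm is discounted at a higher rate (rates are made up),
# so it has a lower market value -- and therefore a higher expected return.
expected_cash_flow = 110.0

def market_value(discount_rate):
    """Present value of the expected cash flow."""
    return expected_cash_flow / (1 + discount_rate)

def expected_return(value):
    """Expected return from buying the firm at its market value."""
    return expected_cash_flow / value - 1

safe_value = market_value(0.05)    # ~104.76
risky_value = market_value(0.15)   # ~95.65

print(round(expected_return(safe_value), 2))   # 0.05
print(round(expected_return(risky_value), 2))  # 0.15
```

The "size effect" falls out of the discounting identity itself, which is why Berk calls it a regularity rather than an anomaly.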

Fama and French (1996)
Building on previous conclusions in the literature, Fama and French (1993) developed an innovative model of the risk-return relation using three factors that incorporate market risk, size and growth: E(Ri) - rf = b[E(Rm) - rf] + s·E(SMB) + h·E(HML). The model would not have proved viable had size not been a major factor in predicting risk and return. Fama and French (1996) concede that the model cannot predict returns on all securities, especially in the presence of momentum effects. However, the authors conclude that size, E/P, growth, CF/P, B/M, long-term past returns and short-term past returns all play an important role in predicting future price movements. Hence they are not anomalies and should be considered essential factors, even though the CAPM does not incorporate them. With this model, most of the anomalies disappear from the return process.
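As a sketch, the three-factor expected excess return is just a linear combination of the factor premia and the stock's loadings. All premia and loadings below are made-up illustrative numbers, not estimates from the paper:

```python
def ff3_expected_excess_return(b, s, h, market_premium, smb, hml):
    """Fama-French three-factor expected excess return:
    E(Ri) - rf = b*[E(Rm) - rf] + s*E(SMB) + h*E(HML)."""
    return b * market_premium + s * smb + h * hml

# Illustrative (made-up) annual premia: 6% market, 3% SMB, 4% HML.
# A small value stock (positive s and h loadings) earns a higher
# expected return than a large growth stock with the same market beta.
small_value = ff3_expected_excess_return(1.0, 0.8, 0.6, 0.06, 0.03, 0.04)
large_growth = ff3_expected_excess_return(1.0, -0.2, -0.3, 0.06, 0.03, 0.04)
print(round(small_value, 3), round(large_growth, 3))
```

Under the CAPM both stocks would have the same expected return (same beta); the size and value loadings are what separate them in the three-factor model.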

Barber and Lyon (1997)
analyse the power and specification of test statistics in event studies designed to detect long-term abnormal returns. They find that test statistics based on abnormal returns calculated against a reference portfolio are mis-specified for three main reasons:
  • New-listing bias: new companies enter and leave the index on a monthly basis, and this may happen after the event
  • Re-balancing bias: the compounded return of the reference portfolio is re-balanced every month, whereas the returns of the individual sample firms are not
  • Skewness bias: long-term abnormal returns are right-skewed.
The cumulative abnormal return is mostly affected by the new-listing bias, and therefore the long-run cumulative abnormal return is positively biased in general. The long-run buy-and-hold abnormal return, on the other hand, is affected by the re-balancing bias and the skewness bias, which push it in the negative direction.

To correct for these misspecifications, the authors suggest matching the sample firms to control firms of similar size and book-to-market ratio; this method eliminates all three types of bias.
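The mechanical difference between the two measures can be seen in a few lines (the return numbers are made up): CAR sums period-by-period abnormal returns, while BHAR compounds the firm's and the benchmark's returns separately before differencing, which is why compounding-related biases hit the two measures differently.

```python
def compound(returns):
    """Buy-and-hold return from a sequence of simple period returns."""
    total = 1.0
    for r in returns:
        total *= 1 + r
    return total - 1

def car(firm, bench):
    """Cumulative abnormal return: sum of period abnormal returns."""
    return sum(f - b for f, b in zip(firm, bench))

def bhar(firm, bench):
    """Buy-and-hold abnormal return: difference of compounded returns."""
    return compound(firm) - compound(bench)

firm = [0.10, -0.05, 0.08]   # made-up sample-firm returns
bench = [0.02, 0.02, 0.02]   # made-up benchmark returns
print(round(car(firm, bench), 4))
print(round(bhar(firm, bench), 4))
```

Even over three periods the two numbers differ; over the multi-year horizons typical of long-run event studies the gap, and the associated biases, grow much larger.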

Tuesday, September 02, 2008

Download Google Chrome

Google Chrome: A new browser war

Official Google Blog: A fresh take on the browser

Finally Google is stepping into the browser war. They want to enter this market directly instead of merely supporting IE's other opponents.

Chrome will come with most of what is in Firefox and Opera, and will introduce a brand-new JS engine. Lately the Firefox guys have been talking about a new JS engine for Gecko as well. I am not sure how similar the two are.

Even if Google Chrome does not take off, it might be a good start for Google to get into this market and buy Firefox later on, or to empower this browser business unit to take over Firefox and re-brand it.

What will Microsoft's response to this be? A faster RC for IE 8.0, which still has several bugs in its engine, especially in CSS rendering? Will this push force Microsoft to throw another IE 6.0 into the market and make people's lives harder, as was the case with Vista?

Let us see.

However, there is one risky part in Chrome: it will create a separate process for every tab and every plugin. Imagine a heavy-browsing addict like me who opens 30+ tabs for every reading session.
That means at least 60 processes in one session. Can Vista handle that?