








Advanced Topics in Investments and Corporate Finance
Critical article reviews










Presented by: Bilal Abdul Kader








April 21st, 2007
Table of Contents

Asset Pricing with Conditioning Information: A New Test
Summary of the paper
Critical review
Are the Fama and French Factors Global or Country Specific?
Summary of the paper
Critical review
Does idiosyncratic risk really matter?
Summary of the paper
Critical review
The Debt-Equity Choice
Summary of the paper
Critical review
The Expiration of IPO Share Lockups
Summary of the paper
Critical review
Partial adjustment to public information and IPO underpricing
Summary of the paper
Critical review

Asset Pricing with Conditioning Information: A New Test

Kevin Wang
The Journal of Finance, Volume 58 Issue 1 Page 161 - February 2003

Summary of the paper

This paper presents a new methodology for testing the performance of the conditional version of the Sharpe-Lintner (1965) CAPM, its extension in Jagannathan and Wang (1996), and the three-factor model of Fama and French (1993). To avoid functional-form misspecification of the stochastic discount factor, the risk premium, and the betas, the test uses a nonparametric approach to incorporate conditioning information into the pricing kernel. Theoretically, the conditional CAPM should perform better than the unconditional CAPM; however, this improvement is not statistically significant in the empirical tests. There is statistical evidence that a market portfolio can be conditionally mean-variance efficient while the same portfolio fails to lie on the unconditional Markowitz efficient frontier.
Under the null hypothesis that there is no risk-adjusted excess return and all alphas are zero, time variation in returns can still affect the estimation of the beta coefficient. In particular, if a stock's beta covaries with the market risk premium, the stock's alpha will not be zero in an unconditional setting. The Fama and French (1993) three-factor model captures volatility fluctuations and book-to-market variation in the market well, and uses this information to better estimate the alphas. More recent papers (e.g., Zhang (2002)) argue that size or momentum effects can be explained by beta variation.
In this paper, Wang challenges the unconditional CAPM and obtains more favorable results from the nonparametric conditional CAPM than from either the FF93 model or the unconditional CAPM. The author also conducts a Monte Carlo simulation to support the empirical results. Testing the conditional CAPM and the conditional Fama and French model with momentum portfolios, Wang notices that the conditional expected returns of winners are higher than those of losers. In addition, he concludes that the conditional CAPM residuals are positively correlated with the stock market process. Finally, the author cannot offer an irrefutable explanation for the remaining pricing errors, even after incorporating Jagannathan's labor income risk factor and the FF book-to-market factor.

Critical review

It should be noted that it is not easy to find flaws in this paper, because it is well structured and the methodology is well established and supported. In the literature review section, most of the relevant papers are presented in a way that converges towards the author's model.
One of the main drawbacks in modeling time-varying betas in a conditional CAPM framework is the problem of misspecification. When a linear model is used to estimate a nonlinear relation, some information is missed and other variations are misrepresented. As a result, the estimated factors diverge from the true unbiased factors. One of the strongest critiques of linear models is Ghysels (1998), who argues that statistical inference is affected by misspecification of the risk dynamics captured by beta. He also discusses several models with imperfect beta specifications and shows that some of them suffer large pricing errors because of incorrect assumptions about beta. In some situations, the beta processes are very stable over time and their variation may be overstated by linear models such as the conditional CAPM. As a result, Ghysels recommends reverting to the unconditional CAPM for pricing when one cannot specify a correct model.
It is known that model misspecification injects serious bias into coefficient estimation and results in biased estimators of the betas. If a parametric model correctly specifies the betas, it is the most efficient estimator because it captures all the time variation of the true betas. However, when the parametric model cannot correctly capture the process variations, a nonparametric model presents an attractive alternative because it does not impose any ex-ante restrictions on the betas.
To deal with misspecification errors, Wang works in a stochastic discount factor (SDF) framework and specifies a nonparametric model to conduct his test. SDF models have become popular in recent academic research. Typically, the SDF factors are assumed to enter the model linearly, and different restrictions are imposed on the distributions and the nature of the SDF. The generalized method of moments (GMM) of Hansen (1982) is then used to estimate the parametric model and test its statistical significance. The test presented in this paper consists of checking whether:
  • all coefficients are genuinely time varying, in which case a constant-coefficient linear CAPM is misspecified;
  • some coefficients are zero or stable over all periods;
  • a particular parametric form can represent the data. Examples of such models are presented in Akdeniz et al. (2003) and Ghysels (1998): the former tests for threshold effects and the latter checks for structural breaks.
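The linear-SDF pricing restriction behind such GMM tests can be made concrete. The function below is an illustrative first-stage (identity-weighted) GMM estimate of the SDF loading b in m_t = 1 − f_t′b from the moment condition E[m_t R_t] = 0 for excess returns; the variable names and the simulated setup are my own, not the paper's exact specification.

```python
import numpy as np

def sdf_gmm_b(excess_returns, factors):
    """First-stage GMM estimate of b in the linear SDF m_t = 1 - f_t'b,
    from the pricing moments E[(1 - f_t'b) R_t] = 0 for excess returns.
    With identity weighting this reduces to solving E[R f'] b = E[R]
    in least squares.  excess_returns: T x N array, factors: T x K array."""
    R = np.asarray(excess_returns, dtype=float)
    f = np.asarray(factors, dtype=float)
    d = R.T @ f / len(R)        # N x K sample second-moment matrix E[R f']
    g0 = R.mean(axis=0)         # N-vector of mean excess returns E[R]
    b, *_ = np.linalg.lstsq(d, g0, rcond=None)
    return b
```

A factor carrying a positive risk premium should receive a positive SDF loading; Hansen's J-statistic on the remaining moments would then test the over-identifying restrictions.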
In this paper, the excess return is modeled with a nonparametric conditional CAPM. The model is also extended to incorporate multiple factors and to capture time variation. Wang estimates the coefficient functions using Nadaraya-Watson kernel regression and carries out a simple nonparametric test of the pricing error. However, Wang admits that several difficulties would emerge from using a linear model.
One possible way to overcome this drawback is to use a nonlinear model with GMM estimation. Another possible extension is to accommodate a generic distribution and relax the assumption of a linear SDF, since GMM estimation handles higher-order polynomials easily.
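The Nadaraya-Watson idea Wang relies on can be illustrated with a kernel-weighted regression in which beta is allowed to vary smoothly with a conditioning variable. This is only a sketch under my own assumptions (a Gaussian kernel, a single instrument, hypothetical names), not the paper's exact estimator.

```python
import numpy as np

def nw_conditional_beta(z, r_excess, r_mkt, z_grid, h=0.5):
    """Estimate beta as a smooth function of a conditioning variable z
    (e.g., a lagged instrument) by running, at each grid point, a
    weighted least-squares regression of excess returns on the market
    with Gaussian kernel weights centred at that point (bandwidth h)."""
    z = np.asarray(z, dtype=float)
    X = np.column_stack([np.ones_like(r_mkt), r_mkt])
    betas = []
    for z0 in z_grid:
        w = np.exp(-0.5 * ((z - z0) / h) ** 2)   # kernel weights
        A = X.T @ (X * w[:, None])               # weighted normal equations
        c = X.T @ (w * r_excess)
        betas.append(np.linalg.solve(A, c)[1])
    return np.array(betas)
```

The pricing-error test then asks whether the fitted conditional intercepts (the alphas at each grid point) are jointly zero.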


Are the Fama and French Factors Global or Country Specific?

John M. Griffin
The Review of Financial Studies, Volume 15, Issue: 3, Pages: 783-803, 2002

Summary of the paper

Fama and French (1993) conclude that the excess market return, a size factor (SMB), and a book-to-market equity factor (HML) can explain expected returns for any given market. In a later paper, FF (1998), the authors extend their model to explain international stock return movements using a two-factor model that includes a world market factor and a world book-to-market equity (WHML) factor. They argue that this model has stronger explanatory power than the world CAPM.

In this paper, the author uses time-series analysis to assess whether the stock price process follows a country-specific model or is dictated by a global factor model. In today's global market, risk management cannot rely on local conditions only; it must also take global variations into account. Failing to use the best estimating model causes errors in portfolio evaluation and jeopardizes risk analysis decisions. The FF models have attracted considerable attention in both the academic literature and the market. However, the interpretation of the FF factors as risk factors raises several controversies. On one side, Haugen (1995) rejects the FF risk-bearing proposition and suggests that the book-to-market effect is due to investor overreaction to recent firm news, indirectly pushing prices higher for “growth” companies and lower for “value” stocks. On another side, Ferson et al. (1999) reject using empirical regularities as explanatory risk factors. On the international front, Daniel, Titman, and Wei (2001) observe that characteristic loadings explain returns better than factor loadings. In addition, the author estimates a model that mixes the local three-factor model with the international model.

The author uses the integration hypothesis to test his unconditional model and builds his sample with companies from the US, Canada, the UK, and Japan because of the high correlation between these markets over the past 30 years. He finds that the domestic model offers more explanatory power than the global model, especially for time-series variation, and that the local model yields more accurate pricing than the international model. Mixing local and global factors in one model increases the statistical significance of the model but does not add economic strength; consequently, the author argues there is no benefit to extending the three-factor model to reflect global characteristics.

Critical review

The paper is clear and well articulated. However, some points raise concerns about the rigor of the presented observations: the models themselves, the effect of currency exchange fluctuations, and differences in wealth across countries. The author presents three empirical models that are supposed to be similar in terms of results. However, none of the models captures average returns when used as an asset pricing model. In all three cases, the author cannot find enough evidence that the intercepts are jointly equal to zero, so the presented models cannot explain all the variation in the dependent variables using the assumed factors. The intercept still captures a significant portion of the estimation error.
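The joint-zero-intercept check underlying this critique starts from asset-by-asset time-series regressions. A minimal sketch follows (my own simulated names; it computes only the per-asset alphas, not the full joint GRS statistic):

```python
import numpy as np

def three_factor_alphas(excess_returns, factors):
    """Time-series OLS of each asset's excess return on the three
    Fama-French factors [MKT, SMB, HML].  The intercepts (alphas) are
    the pricing errors that a correct model should drive to zero.
    excess_returns: T x N, factors: T x 3; returns the N alphas."""
    F = np.asarray(factors, dtype=float)
    X = np.column_stack([np.ones(len(F)), F])
    B, *_ = np.linalg.lstsq(X, np.asarray(excess_returns, dtype=float),
                            rcond=None)
    return B[0]   # first row of coefficients = intercepts
```

A GRS-type F-test would then aggregate these alphas, weighted by the residual covariance matrix, to test whether they are jointly zero.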

Another area that still needs further investigation is the use of international models to price local assets. One of the author's assumptions is that, when the international model is used for local pricing, the global factors become irrelevant and the model simplifies to the local version. However, the empirical results do not support this: the domestic model yields lower errors for local stocks than the international model, which should have simplified to a local model for country-specific stocks. This means the international model is still allocating some of the risk to the international factors even though these factors should be irrelevant.

On the global scale, the author argues that international models incorporating both domestic and foreign factors increase the quality of fit (R2). The fit does improve, but the improvement has no economic foundation, which raises several questions about the validity of the inference. The data may exhibit a certain trend in sample, but this does not mean that stocks will always follow that trend. Similar results emerge from the portfolio regressions as well, meaning the augmented three-factor model does not improve the quality of return forecasting. In general, the author notes that mixing local and global factors in one model adds no value to return expectations. However, the risk factors included should be defined and agreed upon before drawing a conclusive result.
Another critique arises from the fact that the author compares foreign factors with simulated factors. To match the foreign factors with the local factors in the same model, the author simulates artificial normal observations representing the observed foreign return levels. Theoretically, for a large number of observations, a simulated sample would not differ much from the observed one if the real observations indeed follow the distribution from which the samples are drawn. For market returns, however, the best-documented evidence is that returns do not follow an i.i.d. normal process. This point could be addressed with a fat-tailed t-distribution or one of the many variants documented in the literature. In the current paper, this anomaly is clearest in the US, where the simulated factors perform better than the actual market returns. In addition, the simulated security returns barely increase the statistical significance, by about 2%, when foreign factors are involved.
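The cost of assuming normality can be made concrete with a small simulation. The snippet below (my own illustration, not the paper's procedure) compares the frequency of 4-sigma events under a normal distribution and under a unit-variance Student-t with 3 degrees of freedom, a common fat-tailed stand-in for equity returns.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
normal = rng.standard_normal(n)
# t(3) has variance df/(df-2) = 3; rescale so both samples have unit variance
t3 = rng.standard_t(df=3, size=n) / np.sqrt(3.0)

tail_normal = np.mean(np.abs(normal) > 4)   # share of 4-sigma events
tail_t3 = np.mean(np.abs(t3) > 4)
```

The fat-tailed sample produces 4-sigma moves roughly two orders of magnitude more often, which is why simulated normal factors can understate the extreme co-movements at stake in the comparison.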

Another interesting remark concerns the use of different data sets for different countries in the joint test. There is some empirical support for the theory of market integration (Campbell and Hamao (1992)); however, there is also evidence against it. Stulz (1995) presents a thorough review and notes that there is no obvious winner among the competing hypotheses, and there are observed differences between the levels of integration recorded in the FF (1998) and Stulz (1992) models. With that in mind, extreme caution is needed before inferring from this test, because different databases take different approaches in their market coverage. In addition, some countries lag behind the US in terms of the business cycle; for example, the Canadian business cycle lags that of the US by 3-6 months. Hence, one cannot run a time-series analysis without correcting for this lag. Gomez et al. (2003) argue that local investors tend to compare their performance with their local peers, so research on international excess returns should take this behavior into account, because country-specific risk valuation is associated primarily with deviations from the country's average wealth level. The foreign factors are therefore relevant only when investors' concern is performance on an international scale, while for local performance the local model should yield more accurate forecasts.
Furthermore, the author expects the SMB and HML factors to be highly correlated between countries; however, the empirical results prove otherwise. King and Segal (2003) observe that Canadian firms trade at a discount to US-listed firms across a wide range of valuation measures, which means the integration theory does not hold well in this case. They also observe differences in the cost of equity between the US and Canada, which means that returns on equity in the two countries cannot be matched at the same level. This violates either the market-integration assumption of the model or the basic test assumption that SMB and HML represent the same underlying state variable. If they do not, they cannot be compared across countries and markets.

Cai and Warnock (2005) analyze the positions of international and local investors in U.S. equities using the global factor model. They show that at least some local exposure to foreign volatility generates global diversification benefits. The augmented model in this paper does not account for this kind of exposure, even though Cai and Warnock (2005) show that the share of international investments in local investors' portfolios has increased, reducing, without eliminating, the observed home bias against international diversification. As a result, the proposed model loses further economic ground because it does not capture this empirical fact either.

In general, the paper presents a clean and neat model for forecasting excess returns in a local and international context. Even though the author admits that the augmented model mixing local and international factors does not improve return forecasting, it should be noted that the model omits several factors, which helps clarify why the domestic three-factor model still performs better than the FF world model.



Does idiosyncratic risk really matter?
Turan G. Bali, Nusret Cakici, Xuemin (Sterling) Yan, and Zhe Zhang
Journal of Finance, vol. 60(2), pages 905-929, 2005

Summary of the paper

Asset pricing research has concentrated on the strong relation between risk and return for the past few decades. Most asset pricing models estimate return as a function of risk, which is usually modeled by the variance of returns. The authors do not agree that variance is the only measure of risk, and they doubt that investors always require higher returns for riskier assets. In this paper, the authors challenge the robustness of the Goyal and Santa-Clara (2003) empirical results, which relate the return of the value-weighted average (VWA) portfolio to the volatility of the equal-weighted average (EWA) stock. In addition, the authors argue that the positive correlation between the return and risk of both portfolios is driven by a liquidity premium, and that it does not hold when the median stock volatility is used in the predictive regressions or when using idiosyncratic volatility that is free of systematic risk.
In this paper, the authors replicate GS's approach for different test periods. For the period 1963:08 to 1999:12, the results are similar. However, the empirical results for an extended testing period (1963:08 to 2001:12) show that the positive trade-off between risk and return does not exist. Stressing the model with different portfolios, the authors conclude that there is no statistically significant relation between EWA volatility and VWA returns on NYSE/AMEX stocks. As a result, they argue that EWA variance cannot predict future market returns, because the EWA volatility measure is contaminated by microstructure effects that can inflate the variance. Furthermore, the authors argue that using VWA volatility is more reasonable for testing the idiosyncratic risk-return relation for a VWA market portfolio.

To check the robustness of the GS results, the authors present a different approach to measuring portfolio volatility that incorporates size, liquidity, and price level. Excluding small, illiquid, and low-priced stocks, the authors find no evidence that portfolio volatility can predict returns out of sample.
The authors propose several risk measures and compute them using CRSP data from July 1963 to December 2001. In addition, volatility is computed following Campbell et al. (2001) in order to measure its contribution to explaining future returns. In short, none of the proposed risk measures can predict returns over the longer period, though some models offer significant signals for the return one month ahead.
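The predictive regressions behind these conclusions take a simple one-lag form. As an illustration only (hypothetical variable names, plain homoskedastic standard errors rather than the corrections a full study would use):

```python
import numpy as np

def predictive_regression(vol, ret):
    """Regress the month-(t+1) market return on the month-t volatility
    measure; return the slope and its homoskedastic t-statistic."""
    x = np.asarray(vol, dtype=float)[:-1]
    y = np.asarray(ret, dtype=float)[1:]
    X = np.column_stack([np.ones(len(x)), x])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    s2 = resid @ resid / (len(y) - 2)          # residual variance
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return b[1], b[1] / se
```

A significant slope over 1963-1999 that vanishes when the sample is extended through 2001 is exactly the fragility the authors document.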

Critical review

This paper is well written, the arguments are chained logically, and the whole analysis converges towards the focal aim stated in the authors' proposition. It is true that such a paper would come as a shock to the typical finance student who believes the efficient frontier theory is almost sacred. Had this paper been published a few years earlier, it might have saved the international market some drastic recessions and bubbles. Practitioners and academic researchers believe that risk is one of the major components of stock returns, and most asset pricing models presented so far revolve around the notion that return should be proportional to the risk attached to the volatile asset. If systematic risk is not the only component that matters, as the authors propose, then several financial theories would be invalidated and need updating. For example, Markowitz (1959) is one of the most widely cited financial theories whose validity rests on a direct positive relation between risk and return. The Sharpe-Lintner CAPM (1964-1965) builds its well-known identity on the notion of diversifiable and non-diversifiable risk: the CAPM asserts that the marketplace rewards investors with a certain level of return for taking a certain level of systematic risk, measured using the variance of the portfolio.

With this paper in mind, things should change, because variance and standard deviation would no longer be the preferred risk measures for investors; researchers have to look beyond the variance to assess the risk associated with a portfolio. This echoes the work of Levy (1978), Merton (1987), and Malkiel and Xu (2002), who extend the CAPM to relate stock returns both to market systematic risk and to a market-wide measure of idiosyncratic risk. GS observe that idiosyncratic risk can be diversified away in the EW portfolio variance measure; however, it remains a powerful component of the EW average stock variance, making up around 85% of that measure.

The way the authors calculate the variance of the error term in the market-model regression might raise questions, because the residual variation is split between the error term and the intercept. It would be stricter and more rigorous to omit the intercept and see how much of the return is not captured at all by the systematic risk measure.

The authors exclude less liquid stocks from the calculations in order to concentrate on trading risks. This might lead to a loss of information, because for some portfolios the trading volume of a stock rises and falls with expectations about the stock: a stock might be very liquid one month and stay flat the next. In addition, the method used by the authors can yield a negative variance when the autocorrelation of returns over a period falls below −0.5; the cross-product adjustment then dominates the squared-return component and pushes the estimated total variance below zero. This would not happen for large portfolios, but it does occur with single stocks. The authors, as GS do, ignore the cross term whenever the variance turns negative and estimate the variance as the sum of squared returns only. This approach conflicts with their initial premise that there are components of risk not captured by systematic risk, so the results are not as rigorous as they are intended to be.
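The variance construction at issue is a French, Schwert, and Stambaugh (1987) style estimator of the kind GS use: the sum of squared daily returns plus twice the sum of cross-products of adjacent returns. A minimal sketch (my own function name) shows how the cross term can turn the estimate negative for a single, strongly negatively autocorrelated stock:

```python
import numpy as np

def fss_monthly_variance(daily_returns):
    """Monthly variance from daily returns: sum of squared returns plus
    twice the sum of adjacent cross-products (the autocovariance
    adjustment).  The cross term can push the estimate below zero when
    returns alternate sign strongly."""
    r = np.asarray(daily_returns, dtype=float)
    return np.sum(r ** 2) + 2.0 * np.sum(r[:-1] * r[1:])
```

Dropping the cross term whenever it makes the estimate negative, as the authors and GS do, is an ad hoc truncation rather than a principled fix.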

In their forecasts of value-weighted portfolio returns on NYSE/AMEX/Nasdaq stocks, the authors obtain very small R2 values that are sometimes near zero or even negative. This is a sign of missing information in the model, because the sum of squared errors is larger than the total sum of squares. It might mean there is no link between risk and return, as the authors argue, or simply that the forecast is worse than using the unconditional mean return.

Turning to the innovative contribution of the paper, which investigates the association between average idiosyncratic risk and VW returns on NYSE/AMEX/Nasdaq stocks, the authors find that EWA idiosyncratic volatility contributes positively and significantly to market return prediction; for an extended period, however, this relation disappears. Although the regression is valid, it would be worth checking whether there is a nonlinear relation between idiosyncratic risk and return. The t-statistic may also not be the best test in this situation: a potential extension of the paper is to use the Campbell and Yogo (2006) test for stock return predictability, since this newer test corrects the deficiencies of the t-test.

The authors hypothesize that the previously recorded predictive power is partially attributable to a liquidity premium. To test this hypothesis, they construct an illiquidity measure based on the ratio of the absolute percentage price change per dollar of monthly trading volume. This measure represents the share price movement when a dollar of trading volume is spent on the asset, which serves as a rough measure of price impact. According to this test, the variance measure is no longer a significant predictor once the illiquidity factor is introduced. Consequently, the authors conclude that part of the forecasting power is due to the illiquidity measure and not to the risk measure. One drawback in this reasoning is that having two correlated regressors does not always shift predictive power from one to the other; the result might instead be driven by the newly added regressor being negatively correlated with the first.
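The illiquidity measure described above is a price-impact ratio in the spirit of Amihud's absolute-return-per-dollar-volume statistic. A minimal sketch under my own naming:

```python
import numpy as np

def illiquidity_ratio(returns, dollar_volume):
    """Monthly illiquidity as the average of |return| per dollar of
    trading volume: a rough measure of how much the price moves per
    dollar traded (price impact)."""
    r = np.abs(np.asarray(returns, dtype=float))
    v = np.asarray(dollar_volume, dtype=float)
    return float(np.mean(r / v))
```

Including this ratio alongside the variance measure in the predictive regression is what drives the variance coefficient to insignificance in the authors' test.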
Last but not least, the authors introduce an alternative measure of volatility, involving price, size, and liquidity, to check the robustness of GS's results further. In this step, they exclude low-priced, illiquid, and very small stocks before re-running the same regression to estimate the idiosyncratic risk. The results indicate that EWA stock volatility cannot contribute to forecasting excess returns on the VW NYSE/AMEX/Nasdaq stock portfolios. Another replication of this regression over a longer testing period, with other test statistics, is needed to see whether the results hold.

Finally, the authors have covered all sides of their research, accounted for several scenarios, and shown to a high degree of confidence that risk is not a strong predictor of future return. They confirmed, however, the power of risk to predict short-term returns, while concluding that this process has short memory: the predictive power cuts off after the second month.

The Debt-Equity Choice

Armen Hovakimian, Tim Opler and Sheridan Titman
Journal of Financial and Quantitative Analysis Vol 36 (1), Mar (2001) 1-24

Summary of the paper

Even though future projections are the main factor in stock valuation, a firm's historical performance has a significant effect on its capital structure. This is well documented empirically in several panel studies about leverage and debt capacity that reveal the importance of historical performance in explaining differences in firms' capital structures. This paper examines the importance of past performance, firm characteristics, and market conditions, and their consequences for capital structure decisions. The authors collect debt ratios for a set of firms selected from Compustat for the period 1979-1997; these companies issued debt or equity during the sample period. A first regression measures the dependence of the debt ratio on several financial factors used in earlier cross-sectional studies. The residuals of the fitted model are then used in a logit regression to test their explanatory power in deciding whether a company should issue debt or equity in a future financing round. This logit regression does not only serve to predict the financing decision; it also includes other independent variables that capture the effect of asymmetric information and market conditions on the debt/equity choice. In contrast to other empirical work, the authors find that companies may not focus on their target debt/equity ratio throughout their course of action, and that firms may change the target ratio over time because of stock price movements or profitability fluctuations. The authors conclude that deviations from the target ratio play a more important role in the repurchase decision than in the equity issuance decision. Finally, they argue that the empirical evidence suggests a separate decision for the form of financing and an independent decision for its size.

Critical review

One attractive aspect of the study is the split of the regression into a two-step approach. However, a first-glance reader might question the selection of the debt/equity ratio, the selection of the sampled firms, and the sample size.
The two-step approach helps the authors dissect the residuals into two categories: errors due to the type of financing and errors due to the size of the financing. It is true that separating the effect of each factor improves the estimation procedure and decreases the regression errors. However, it might also introduce other types of specification errors or multicollinearity problems because of the regression technique used. Having a single firm appear in different sections of the analysis would introduce another bias and might lead to conclusions that contradict previous literature. One has to admit that fitting the model against different subsets of firms could yield sharper conclusions that would not be as obvious in the total sample. For example, firms could be split according to size, transparency of information, and dividend payments. Small companies tend to suffer more than large firms from the asymmetric information dilemma; fitting the second regression separately to small and large firms would provide deeper insight into the importance of information symmetry for small (large) firms when making financing decisions.
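The two-step design can be sketched end to end: an OLS stage that produces a fitted target leverage and its residual (the deviation from target), followed by a logit of the debt-versus-equity choice on that deviation. Everything below is an illustrative reconstruction with hypothetical names, fitted by plain Newton-Raphson, not the authors' exact specification.

```python
import numpy as np

def two_step_debt_equity(X, leverage, issued_debt, n_iter=25):
    """Step 1: OLS of observed leverage on firm characteristics X gives
    a fitted target ratio; the residual is the deviation from target.
    Step 2: logit of the debt-vs-equity indicator on that deviation,
    fitted by Newton-Raphson.  Returns [intercept, deviation coef]."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, leverage, rcond=None)
    deviation = leverage - X1 @ beta              # distance from target
    Z = np.column_stack([np.ones(len(X)), deviation])
    g = np.zeros(Z.shape[1])
    for _ in range(n_iter):                       # Newton steps for the MLE
        p = 1.0 / (1.0 + np.exp(-Z @ g))
        H = Z.T @ (Z * (p * (1 - p))[:, None])    # Fisher information
        g = g + np.linalg.solve(H, Z.T @ (issued_debt - p))
    return g
```

Splitting the sample (small vs. large firms, dividend payers vs. non-payers) before the second step would directly test the asymmetric-information point made above.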
Furthermore, the dataset raises other concerns. The specified period covers two booming market cycles, in the eighties and mid-nineties, when seasoned equity issuance was high. However, it does not cover a serious recession like the 2000 bubble. According to the Modigliani-Miller proposition II, the effect of debt on a company's capital structure is greatest when the firm is distressed. It should also be noted that the authors could not have observed the bubble at the time of their analysis; still, some precautions should be defined to account for negative or positive outliers, which should be either excluded from the sample or handled properly. The study also disregarded a large sample of companies that tend to issue both equity and debt together instead of relying exclusively on one of them. It is true that the extreme cases simplify the analysis, and a later-stage analysis could relax this limitation to include firms issuing both debt and equity. However, the two-step regression might be biased and yield high estimation errors, and the magnitude of the errors would increase drastically for large issues. This is further evidence that the size of the issue plays an important role and should be included in the model (as a dummy variable, for example) to capture the size effect.

Other researchers have fitted similar models with a single regression, such as MacKie-Mason (1990), which includes R&D expenditure in the logit model, because the literature suggests that technological firms (or any high-R&D firms) rely less on debt financing than conventional companies, since high-R&D companies are effectively already highly levered.
On the other hand, the authors concentrate their study on the debt ratio (total debt / (debt + equity)). This ratio might be biased because the market value of debt can be lower than the book value, leading to an overestimated ratio. In fact, the book values of short-term and long-term debt can both be biased, because companies frequently report some forms of short-term debt as long-term debt before it is effectively converted. Managers would rather rely on the market valuation of long-term debt and equity than on book values, because investors value issued debt based on market information, which usually differs from book values for most companies. Other researchers argue that the market recognizes the ratio of total book debt over total assets; in that case, the authors might examine the effect of using this ratio on the overall performance of the model.
Historically, much empirical research on firms' debt-equity financing choice has relied on a standard or modified probit model. The model used in this paper excludes companies that used both debt and equity, causing a loss of information. A proposed solution is a bivariate probit model similar to the one presented by Meng and Schmidt (1985), which accounts for the correlation between the two choice variables when fitting the dependent variable. The same specification error propagates into the second-step regression, because intangible factors might have driven the company both to issue debt or equity and, at the same time, to set the size of the financing. Another diagnostic test using a two-stage bivariate probit-tobit model might reject the current paper's conclusions, because the current setup discards the correlation between the factors affecting the choice of financing or stock repurchase and the size of the issue or repurchase.

Last but not least, the paper's debt-equity-choice analysis does not test whether, and by how much, the deviation from the target ratio affects the proportion of each type of financing. A given firm can issue debt, for example, or repurchase its bonds when it needs to adjust its debt capacity, regardless of what happened in the past. At a later stage, different types of debt can be issued (more senior or more junior) depending on the firm's current financial strength or the market's appetite for debt or equity. A possible extension of the paper is to study the effect of the issue/repurchase policy on the future target ratio, aside from the dilemma of issuing equity or debt.

Finally, the paper showed that the choice between equity and debt financing is correlated with the company's previous capital structure and established ratios. However, the connection was neither clear nor strong enough to quantify and pre-estimate the decisions the firm would be expected to take under different events in the course of its financing.

The Expiration of IPO Share Lockups

Laura Casares Field, Gordon Hanka
Journal of Finance Vol 56 (2), Apr (2001) 471-500

Summary of the paper

Initial Public Offering (IPO) lockup agreements are contractual restrictions that bind insiders for a limited period after a company first goes public. During this period (usually between 90 and 180 days), company insiders cannot sell any of their shares. This paper examines stock price movements close to the expiration of the lockup period and tries to assess the validity of this arrangement by testing several propositions and hypotheses.

The study covered 1,948 IPO lockup agreements from 1988 through 1997. However, the authors admit that their data set does not cover all types of trading, because it includes primarily information about large shareholders and venture capitalists. It does not provide enough information on sales initiated by employees, shares exchanged by investors, or indirect hedging through forwards, puts, collars, or borrowing against shares.

The researchers found that venture capitalists push harder for sales than other pre-IPO investors. Moreover, the study reveals significant abnormal returns after the lockup expiration: a negative abnormal return of -1.5% around the expiration date and a 40 percent increase in trading volume. As venture capitalists strive for faster share sales, the observed abnormal return and trading volume were three times larger in VC-backed startups than in firms without VC investors. In addition, the authors observed a consistently declining trend in bid and ask prices.

However, the authors assert that the decreasing bid price, the increased trading costs, and short-term price pressure cannot account for this abnormal return. The cross-sectional analysis shows that the abnormal return could be due to decreasing demand or to insider sales exceeding expectations. Because the data set does not cover all sales, the results illuminate only a portion of early share sales, and further work is needed to capture the full picture around the lockup expiration.

Critical review

The first obvious critique of this paper concerns the limitations of the data set. As admitted by the authors, SDC information contradicts the IPO prospectuses regarding the number of shares sold, the lockup duration, and the locked-up shares; the authors show that SDC data was incorrect 45% of the time. It would have been much safer to use the IPO prospectus data instead of corrected SDC data. In an attempt to bridge this gap, the authors used the non-sold shares as a proxy for locked-up shares. This approach is reasonable given the high positive correlation between locked-up shares and non-sold shares, but one cannot be fully sure of the quality of this proxy. To validate the assumption, the authors test their results with the noisier measure and find no significant differences. One cannot reject their findings based on the imperfection of the data; however, the authors also do not have enough evidence to draw conclusions from it, because the correlation might be due to another factor not included in the model.

The model used to estimate the abnormal return raises other concerns as well. An abnormal return is not an isolated phenomenon; it might arise because the stock records high growth or simply outperforms the market. An interesting extension of the current model is to account for the correlation of the stock's volatility with the rest of the market (beta). An unbiased beta estimate can be obtained by regressing the company's daily stock returns (for the 90 days prior to the lockup expiration date and 11 days after it) on the market index return. Another approach estimates beta by regressing the time series of the firm's stock returns on the market return within a window of 200 days starting 11 days after the expiration date (Brau et al. (2004)). Correcting for beta effects should improve the quality and rigor of the test.
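The market-model correction described above can be sketched as follows, with simulated returns standing in for real data (the window length mirrors the 90-day estimation window mentioned in the text, and the event-day figures are hypothetical); the coefficients come from an ordinary least-squares fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated 90-day estimation window: market returns, and stock returns
# generated with a true beta of 1.2 plus idiosyncratic noise.
rm = rng.normal(0.0005, 0.01, 90)                   # market index daily returns
rs = 0.0002 + 1.2 * rm + rng.normal(0, 0.005, 90)   # stock daily returns

# OLS market model: rs = alpha + beta * rm + eps
X = np.column_stack([np.ones_like(rm), rm])
(alpha, beta), *_ = np.linalg.lstsq(X, rs, rcond=None)

# Abnormal return on an event day = realized return minus the model prediction.
event_rm, event_rs = -0.002, -0.018   # hypothetical expiration-day returns
abnormal = event_rs - (alpha + beta * event_rm)
print(round(beta, 2), round(abnormal, 4))
```

Without the beta correction, the entire -1.8% event-day move would be labeled abnormal; netting out the market component attributes part of it to systematic risk, which is the rigor improvement argued for above.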

Returning to the realized abnormal return, the authors could not explain why the return was negative even though they reviewed different propositions. The academic literature suggests that investors are willing to pay a premium for a newly introduced stock associated with a lockup agreement for several reasons. Ibbotson and Ritter (1995), for example, argue that the prevailing market price is fair because 1) agency problems are minimized when insiders retain large stakes within the firm's capital structure, and 2) insiders' asymmetric information would not be useful during the lockup period. Other empirical studies assert that retained insider ownership around the IPO creates value for the firm.

Furthermore, the underpriced stock at the IPO date would not appreciate because of asymmetric information that is unavailable to investors. However, lockup agreements sometimes expire after a short period and thus cannot eliminate all informational asymmetries between insiders and outsiders. As a result, investors would be reluctant to buy the shares, and the price pressure would depress the share price and yield a negative return. Another possible explanation of the negative abnormal return is found in Browning (1999), who observes the worries of investors who expect bulk sales around the lockup expiration deadline. The authors might study the length of the lockup period to assess its effect on the significance of the abnormal return. Presumably, the longer the lockup period, the higher the price would go; however, this hypothesis should be tested against real market data.

As a matter of fact, agency costs and problems are directly related to the asymmetry of information in the market, and this asymmetry might be affected by the length of the lockup period. As the current data set does not provide enough evidence about this phenomenon, several market factors can proxy for the information ambiguity. Hanley (1993) suggests the reputation of the underwriter as one proxy, because established underwriters cannot afford to promote risky IPOs and lose their credibility in the market. Garfinkel (1993) suggests using growth opportunities as a proxy for information transparency, because high-growth firms are more volatile and insiders tend to stress the bright side of the future price rather than disclose their worries about the downside. The third and final proxy could be the offer price, as suggested by Tinic (1988), because the offer price, though undervalued, must reflect all the observed risks of the underlying assets.

Market analysts and practitioners have also accounted for the price movement around the lockup period. Brav et al. (2000) observed empirical evidence that most analysts tend to issue positive ratings for stocks around the lockup expiration date. Tolia and Yip (2003) study the price behavior around the lockup expiration date to see whether there is any correlation between hot (cold) IPOs and the abnormal return at lockup expiration.
Finally, it is clear that the paper introduced another dimension for assessing the abnormal return of newly IPOed firms. However, the presented methodology needs enhancement in order to accommodate the effects of market movements, investor behavior, agency problems, and asymmetry of information.

Partial adjustment to public information and IPO under pricing

Bradley Daniel J and Bradford D. Jordan (2002)
Journal of Financial and Quantitative Analysis 37: 4, 595-616

Summary of the paper

This paper studies a series of initial public offerings (IPOs) and asks whether the underpricing phenomenon can be predicted using publicly available information. The authors reviewed 3,325 IPOs from the 1990s; it should be noted that the second half of the decade was one of the hottest IPO periods.
The authors used a simple OLS regression to analyze the collected data. On the left-hand side of the regression, the initial return is the percentage change of the first closing price relative to the offer price. On the right-hand side, five independent variables are used, including share overhang, the average initial daily return over a 30-day window before the offer date, file range amendments, and a dummy for VC-backed firms. The authors tested the model by adding a combination of seven extra variables, and finally augmented it with two variables that are not observable before the offer date, representing the final offer price and the final file range.
The authors argue that 35%-50% of the variation in IPO underpricing can be predicted from prior public information, and they conclude that offer prices react to public information faster and more strongly than previously observed. For instance, the estimated coefficient on share overhang is statistically significant and explains about 8% of the movement in IPO underpricing; since the coefficient is positive, firms with higher share overhang face more underpricing than others. Underpricing is also directly and significantly affected by file range amendments. In addition, underpricing is easier to predict in some industries (high tech and biotech) than in conventional ones. Furthermore, the market information does not fully adjust the offer price, which means the price needs a few days to reach its efficient level.

Critical review

The paper's presentation is simple and the model used is clearly documented. However, there are some concerns about the data set, the regression model, and the methodology. On the one hand, the data set is limited to a boom period in which IPOs were proliferating and pricing was not very efficient. In addition, OLS has drawbacks here because it is very sensitive to outliers, and IPO offer prices are full of them. Regarding the data set, adding several years from the eighties would offer more insight, since the authors did not have access to post-2000 data at the time of writing. It would be worthwhile to replicate the paper now, after the boom of the millennium and the current fast growth of the technology sector. In addition, the authors did not back their selection of variables with strong economic grounds, and they did not check the significance of other established factors. For example, Ljungqvist and Wilhelm (2003) argue that pre-IPO structure changes affect the underpricing of the offer, as does insider selling behavior.

The selection of observations should involve more randomness to give the results sound academic credibility. Although it is entirely legitimate to check the predictive power in every industry, the main problem is that the three selected industries (biotech, IT, and semiconductors) saw a huge boom during these years, and some investors were not rational when dealing with them. Hence, the analysis should average these results with other industries to see whether the data reflects the dynamism of the market or is mere overfitting. The authors could correct this bias by adding a dummy for growth industries, equal to one for the selected industries and zero otherwise, to control for the sector-selection bias that also affects the test for VC involvement.
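The suggested sector control is straightforward to implement; a sketch with hypothetical firms shows how the growth-industry dummy would enter the design matrix alongside a raw regressor such as share overhang.

```python
import numpy as np

# Hypothetical sample: an industry label and a share-overhang value per IPO.
industries = ["biotech", "retail", "IT", "semiconductors", "utilities"]
overhang = np.array([4.0, 2.0, 5.0, 3.5, 1.5])

# Dummy = 1 for the three boom sectors the paper singles out, 0 otherwise.
growth = {"biotech", "IT", "semiconductors"}
growth_dummy = np.array([1.0 if s in growth else 0.0 for s in industries])

# Design matrix: intercept, overhang, and the growth-industry dummy, so the
# sector effect is absorbed by the dummy rather than contaminating the slope.
X = np.column_stack([np.ones(len(overhang)), overhang, growth_dummy])
print(X.shape)  # (5, 3)
```

With the dummy included, the overhang coefficient measures the within-sector effect, separating it from the level shift common to the boom industries.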

Regarding the methodology, the authors do not cover several issues that might affect the results or their quality. First of all, OLS should be applied to a continuous dependent variable, which is not the case with IPO underpricing observations. A mixed distribution such as the one offered by Asquith, Jones, and Kieschnick (1998) can be used, as Aggarwal (2000) asserts, because it captures the characteristics of initial IPO returns around the offer period. Sometimes the offer price is stabilized by the underwriter to boost demand or to protect against a sudden fall, as Hanley, Kumar, and Seguin (1993) note; the authors might have to correct for the discrepancies between stabilized and non-stabilized IPOs. Logit or probit regressions might be better suited to such data because the transformed distribution behaves more smoothly than the raw data. As mentioned before, using OLS to regress underpricing on the independent factors requires extra caution because of the many sources of bias: the authors did not mention a check for heteroskedasticity or multicollinearity in the initial data, nor did they try their model on another sample.
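A multicollinearity check of the kind the authors omit can be sketched with variance inflation factors (VIFs); in the simulated data below, a nearly collinear pair of regressors is flagged by a large VIF while an independent regressor is not.

```python
import numpy as np

def vif(X, j):
    """Variance inflation factor of column j: 1 / (1 - R^2), where R^2 comes
    from regressing column j on the remaining columns (with an intercept)."""
    y = X[:, j]
    others = np.delete(X, j, axis=1)
    Z = np.column_stack([np.ones(len(y)), others])
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ coef
    r2 = 1.0 - resid.var() / y.var()
    return 1.0 / (1.0 - r2)

# Simulated regressors: x2 is nearly collinear with x1, x3 is independent.
rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
x2 = x1 + rng.normal(scale=0.1, size=200)
x3 = rng.normal(size=200)
X = np.column_stack([x1, x2, x3])

print(vif(X, 0) > 10.0, vif(X, 2) < 2.0)
```

A common rule of thumb treats VIFs above 10 as a sign of troublesome collinearity; running such a diagnostic on the regressors would be a cheap robustness check.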

In addition, the authors left some gaps in their analysis that need further investigation. Some of the findings were reported based on graphical representations of the data; absolute results on a figure have little value unless they are backed by statistical significance tests. For example, the correlation between underpricing and the initial lagged average return is inferred from a plot. Although the plot suggests a close correlation between the two processes, it cannot substitute for formal statistical inference. One of their Student t-tests reported a significant coefficient on the UP1 factor with a t-statistic of 1.92, which should not be significant at the 5% level.
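The significance claim can be checked directly: under the normal approximation (reasonable for the paper's large sample), a t-statistic of 1.92 corresponds to a two-sided p-value just above 5%, so the coefficient indeed falls short of significance at that level.

```python
import math

def two_sided_p(t):
    """Two-sided p-value for a t-statistic under the normal approximation."""
    cdf = 0.5 * (1.0 + math.erf(abs(t) / math.sqrt(2.0)))
    return 2.0 * (1.0 - cdf)

p = two_sided_p(1.92)
print(round(p, 3))  # 0.055 — above the 0.05 threshold
```

The conventional 5% critical value for a two-sided test is 1.96, so 1.92 misses the cutoff, confirming the critique.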
