The conditional Fama-French model and endogenous illiquidity: A robust instrumental variables test
Authors:
François-Éric Racicot aff001; William F. Rentz aff001; David Tessier aff003; Raymond Théoret aff004
Authors' place of work:
aff001: Telfer School of Management, University of Ottawa, Ottawa, ON, Canada
aff002: Affiliate Research Fellow, IPAG Business School, Paris, France
aff003: Département des Sciences Administratives, Université du Québec en Outaouais (UQO), Gatineau, QC, Canada
aff004: École des Sciences de la Gestion, Université du Québec à Montréal (ESG-UQAM), Montréal, QC, Canada
aff005: Chaire d'information Financière et Organisationnelle, ESG-UQAM, Montréal, QC, Canada
Published in the journal:
PLoS ONE 14(9)
Category:
Research Article
doi:
https://doi.org/10.1371/journal.pone.0221599
Summary
We investigate conditional specifications of the five-factor Fama-French (FF) model, augmented with traditional illiquidity measures. The motivation for this time-varying methodology is that the traditional static approach of the FF model may be misspecified, especially for the endogenous illiquidity measures. We focus on the time-varying nature of the Jensen performance measure α and the market systematic risk sensitivity β, as these parameters are essentially universal in asset pricing models. To tackle endogeneity and other specification errors, we rely on our robust instrumental variables (RIV) algorithm implemented via a GMM approach. In this dynamic or time-varying conditional context, we generally find that the most significant factor is the market one, but illiquidity may matter depending on which states or estimation methods we consider. In particular, sectors whose returns embed a market illiquidity premium are more exposed to a binding funding constraint in times of crisis, which leads to deleveraging and a resulting decrease in systematic risk.
Keywords:
Physical sciences – Research and analysis methods – Social sciences – Mathematics – Probability theory – Simulation and modeling – Economics – Statistics – Mathematical and statistical techniques – Statistical methods – Applied mathematics – Algorithms – Finance – Probability distribution – Skewness – Economic analysis – Kalman filter – Public finance – Money supply and banking – Financial markets – Econometrics – Mathematical economics – Instrumental variable analysis
Introduction
We cast the five-factor Fama-French (FF) model [1,2], which features static parameters, into the conditional framework of Ferson and Schadt [3] and others [4,5,6,7,8] to account for the time-varying nature of the standard measures of performance α and market risk premium exposure β. We focus on these two parameters because they are widely used by finance academics and practitioners. [9] further discusses this model in the context of the business cycle and the subprime crisis but without venturing into the time-varying dynamics of parameters. Our motivation for a conditional framework is the well-known fact that the static alpha and beta parameters of the CAPM [10,11,12] or the extended FF model may be misspecified [3,4,5,6,7,8] because they neglect the time-varying features of alpha and beta. [13] expresses some doubt about whether a Ferson and Schadt conditional model with time-varying parameters actually performs better than a static parameter model. Nevertheless, [14,chap.11] compares recursive/rolling regressions with the optimal Kalman filtering [15] approach and concludes that the Kalman filter is more accurate in terms of the usual measures of forecast error, but not significantly so. Consequently, we accept the idea that the recursive/rolling regression approach is similar in spirit to the Ferson and Schadt conditional model.
We focus on the time-varying nature of the Jensen [16] performance measure α and the market systematic risk exposure β, since these parameters are essentially universal in asset pricing models and cost of capital estimates (e.g., [17]). The coefficients (sensitivities) of the other risk factors (premiums)—i.e., the size effect SMB, value effect HML, profitability effect RMW, and investment policy effect CMA, augmented with the Pástor and Stambaugh (PS) [18,19] tradable illiquidity effect IML—will not be made time-varying in our modeling approach since they are secondary parameters. However, we test their significance in each portfolio considered in order to determine whether factors other than the market one are significant in explaining asset returns in our time-series approach.
The five-factor FF model [1] adds the profitability effect RMW and the investment policy effect CMA to the original three-factor model. These two factors are related to Tobin's [20] q, which was further developed in [21–24] and is known as Cochrane's q theory. Tobin's q is the ratio of the market value of the firm to the replacement cost of its assets [17], which is often proxied by the firm's market-to-book value. Note the analogy of the latter definition with the price-to-book value per share (P/B) ratio, which stands as a convenient proxy for Tobin's q. However, the use of a proxy may entail measurement errors. In this regard, Tobin's q is notorious for being measured with error (e.g., [25]). Furthermore, like many other variables and financial ratios [26], Tobin's q is highly skewed. Thus, as another empirical issue, the normal distribution may not be applicable.
In addition to these potential measurement errors in Tobin’s q, the illiquidity measure that we rely on—PS [18,19] IML—is also a proxy. Therefore, measurement errors are also expected for illiquidity. In fact, the original gamma version of the PS factor may be seen as a generated variable as it is obtained from a regression. In the econometric literature [27–29], researchers warn that these kinds of specification/measurement errors may result in biased inference and, more seriously in specific cases, in inconsistency. Furthermore, according to [30], traditional liquidity measures are endogenous (e.g., in our case the PS measure), which therefore produces an endogeneity bias when estimating parameters with OLS. In addition, the well-known CAPM risk premium cannot be perfectly measured and is usually proxied by a large-cap portfolio such as the S&P 500 index [31,32]. Efforts to address these measurement problems are therefore on point.
In this article, we first aim to contribute to the applied financial economics (i.e., financial modeling) literature by proposing a robust instrumental variables (RIV) algorithm cast into a time-varying conditional model. The latter can be implemented in a GMM framework to correct for endogeneity and potential misspecification and measurement errors. To the best of our knowledge, we are the first to conduct this investigation. Second, we augment the FF [1,2] five-factor model with an illiquidity measure and transpose it to our time-varying framework. This allows us to test the performance measure α and systematic risk exposure β in a time-varying context while accounting for cyclicality and other dimensions of risk, including illiquidity.
The RIV that we propose in a time-varying GMM setting are based on higher moments of the observed variables. These instruments appear to be strong (i.e., not weak) and respond to the concerns of several researchers (e.g., [33,34,35,36]) that the two-stage least squares (TSLS) estimator may be inconsistent when using weak instruments. Furthermore, [37] discuss the fact that weak instruments may increase the variance of the IV estimator. Our instruments are based on an optimal combination of the [38,39] estimators following the framework of [40]. This optimal combination may account for heteroskedasticity, which leads to a consistent estimator even when measurement errors are considered [41]. However, to deal with both heteroskedasticity and/or autocorrelation, the [36,42] HAC (heteroskedasticity and autocorrelation consistent) matrix is used as a weighting matrix, and, therefore, the GMM estimator seems appropriate. We propose to extract the aforementioned RIV and use them in the time-varying GMM framework.
This article is organized as follows. The next section describes the five-factor augmented time-varying conditional version of the FF [1,2] model. Then we discuss our proposed robust instrumental variables (RIV) algorithm implemented via a GMM framework referred to as RIV GMM, and we report our results. Finally, we set out our conclusion and propose suggestions for future research.
Five-factor augmented Fama-French conditional model
The five-factor FF [1,2] model may be written as

$$r_{it} - r_{ft} = \alpha_i + \beta_i(r_{mt} - r_{ft}) + s_i\,SMB_t + h_i\,HML_t + r_i\,RMW_t + c_i\,CMA_t + \varepsilon_{it} \quad (1)$$

where $r_{it} - r_{ft}$ is the excess return on the ith FF sector portfolio observed for period t, $r_{mt} - r_{ft}$ is the excess return on the market portfolio, $SMB_t$ is the return on the small minus big capitalization portfolio representing the size effect, $HML_t$ is the return on the high book-to-market minus the low book-to-market portfolio representing the value effect, $RMW_t$ is the robust minus weak profitability portfolio, $CMA_t$ is the conservative minus aggressive investment policy portfolio, and $\varepsilon_{it}$ is the error term, which might be autocorrelated and/or conditionally heteroskedastic (i.e., an ARCH process).
The advantage of the five-factor FF model over the earlier three-factor model is the theoretical justification developed for the two new factors $RMW_t$ and $CMA_t$ based on Tobin's [20] q cast in the investment function I = f(q). In fact, solving a dynamic programming problem via Bellman's [43] principle of optimality, Abel [44] obtains an optimal investment rule of the form

$$I = I(q;\,\theta, r) \quad (2)$$

where q is the expected marginal revenue product of capital, θ is the constant elasticity of the cost of investment, and r is the interest rate. More recently, Chow ([45],chap.8,p.176) developed the Abel model using Lagrangian multipliers instead of the Bellman equation. [46] derive an analytic solution for Tobin's q based on a stochastic dynamic framework using the classical geometric Brownian motion. [47] write the conditional expected return as a positive function of conditional expected profitability and a negative function of investment, since the firm's rate of return declines with increasing investment as it slides down the investment opportunity schedule (IOS).
Our augmented FF model that includes illiquidity IML may be written as

$$r_{it} - r_{ft} = \alpha_i + \beta_i(r_{mt} - r_{ft}) + s_i\,SMB_t + h_i\,HML_t + r_i\,RMW_t + c_i\,CMA_t + l_i\,IML_t + \varepsilon_{it} \quad (3)$$

where $IML_t$ is the return of a portfolio long on illiquid and short on liquid securities. We use the PS [18,19] tradable liquidity measure as a proxy for this variable. Note that PS [18] developed two other liquidity measures—i.e., the level of aggregate liquidity (gamma) and the innovations in aggregate liquidity. However, in an asset pricing model, all the explanatory variables must be tradable [18,19]. Among the three liquidity variables designed by PS [18,19], only IML is tradable; the other two result from regressions. Introducing non-tradable variables in an asset pricing model would bias the alpha.
The variable IML might be considered a generated variable, therefore resulting in biased inference from the OLS estimator [27–29]. As discussed in the section below on endogeneity, traditional liquidity measures, such as the one based on PS [18,19], may be seen as endogenous variables [30]. The endogeneity bias must therefore be handled in some fashion. In this regard, we propose a parsimonious approach that has the virtue of being robust to several types of specification errors. Note that a good econometric model should include only relevant variables, not redundant ones. Parsimonious specifications lead to better forecasts and are less prone to overfitting problems [14]. In this regard, information criteria—like the Akaike and Schwarz criteria—include a function that penalizes the number of variables included in an econometric model.
Conditional augmented Fama-French five-factor model
The literature has established that the static nature of the CAPM or the extended FF model may be plagued by specification errors [3,4,5,6]. Recent applications of conditional models include, for instance, pension fund performance and the analysis of the impact of the informational content of extra-financial performance scores on systematic risk (e.g., [48,49]). However, none of these studies provide specification error tests or discuss possible endogeneity issues.
Note that Ghysels [13] expresses some concern about whether a Ferson and Schadt conditional model with time-varying parameters actually performs better than a static parameter model. Nevertheless, we believe that conditional models, because of their dynamic nature, may help shed light on some well-known financial puzzles (e.g., the α puzzle) even though these models might not qualify as optimal in the Kalman [15] filter sense. Support for this approach comes from Ghysels and Marcellino [14,chap.11], who discuss several measures of comparison based on out-of-sample forecasting and compare the rolling/recursive regression approach to the time-varying parameters model that they estimate with the Kalman filter. Therefore, we adopt the general conditional formulation of our augmented FF six-factor model because of its parsimonious features. In this formulation, the unconditional α and β in (3) become the conditional α and β in (4) and (5), respectively:

$$\alpha_{it} = \alpha_0 + \varphi'\Omega_{0,t-1} \quad (4)$$

$$\beta_{it} = \beta_0 + \omega'\Omega_{1,t-1} \quad (5)$$

where $\alpha_{it}$ is a function of the information set $\Omega_{0,t-1}$, a matrix of explanatory variables for our portfolios i = 1 to N and for time periods ranging from t = 1 to T, and φ is its corresponding vector of coefficients. Similarly, $\beta_{it}$ is a function of the information set $\Omega_{1,t-1}$, another matrix of explanatory variables for the same portfolios and time periods, and ω is its corresponding vector of coefficients.
Substituting the conditional alpha and beta (4) and (5), respectively, into (3) yields

$$r_{it} - r_{ft} = \alpha_0 + \varphi'\Omega_{0,t-1} + (\beta_0 + \omega'\Omega_{1,t-1})(r_{mt} - r_{ft}) + s_i\,SMB_t + h_i\,HML_t + r_i\,RMW_t + c_i\,CMA_t + l_i\,IML_t + \varepsilon_{it} \quad (6)$$
In writing the conditional formulation of our model (6), the approach taken here expresses some parameters in terms of past information. In our case, we specify α and β conditional on past information, analogously to [49]. By specifying the information sets $\Omega_{0,t-1}$ and $\Omega_{1,t-1}$, we obtain the implementable conditional version of the augmented FF model by transforming (6) into

$$r_{it} - r_{ft} = \alpha_{it} + \beta_{it}(r_{mt} - r_{ft}) + s_i\,SMB_t + h_i\,HML_t + r_i\,RMW_t + c_i\,CMA_t + l_i\,IML_t + \varepsilon_{it} \quad (7)$$

with

$$\alpha_{it} = \alpha_0 + c_1(r_{mt-1} - r_{ft-1}) + c_2\,spread_{t-1} \quad (8)$$

$$\beta_{it} = \beta_0 + c_3(r_{mt-1} - r_{ft-1}) + c_4\,spread_{t-1} + c_5\,IML_{t-1} \quad (9)$$

where $\Omega_{0,t-1}$ is a matrix information set at period t−1 consisting of two components—$[r_{mt-1} - r_{ft-1}]$, the excess market return, and $[spread_{t-1}]$, the spread between the ten-year Treasury constant maturity rate and the 90-day Tbill rate—and where $\Omega_{1,t-1}$ is a matrix information set at period t−1 consisting of three components, the first two identical to the components in $\Omega_{0,t-1}$ and the third being $IML_{t-1}$—i.e., the illiquidity risk premium. When proceeding with estimation of the coefficients in (8) and (9), the portfolios once more run from i = 1 to N and time periods from t = 1 to T. VIX—the S&P 500 implied volatility index—is a cyclical variable that, like IML, may capture changes in market uncertainty. It is also known as the investor fear gauge. However, in this article, we mainly focus on illiquidity. Therefore, we report results only for illiquidity in our time-varying estimates of beta. The preliminary results for VIX are interesting but outside the scope of this article.
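To make the mechanics of (7)–(9) concrete, the following minimal Python sketch builds the design matrix of cross-products and sets up the OLS stage. It is illustrative only; the paper's actual implementation uses the EViews matrix language, and the column names ('mkt_rf', 'spread', 'IML', etc.) are hypothetical placeholders for the data described above.

```python
import pandas as pd
import statsmodels.api as sm

def build_conditional_design(df):
    """Design matrix for the implementable model (7): the conditional
    alpha terms (8) enter directly, and the conditional beta terms (9)
    enter as cross-products with the contemporaneous market excess return."""
    mkt = df["mkt_rf"]
    X = pd.DataFrame({
        "c1_mkt_lag": mkt.shift(1),                       # alpha: lagged market
        "c2_spread_lag": df["spread"].shift(1),           # alpha: lagged spread
        "beta0_mkt": mkt,                                 # beta0 * (rm - rf)
        "c3_mkt_lag_x_mkt": mkt.shift(1) * mkt,           # beta: lagged market
        "c4_spread_lag_x_mkt": df["spread"].shift(1) * mkt,
        "c5_IML_lag_x_mkt": df["IML"].shift(1) * mkt,     # beta: lagged IML
        "SMB": df["SMB"], "HML": df["HML"],
        "RMW": df["RMW"], "CMA": df["CMA"], "IML": df["IML"],
    })
    return sm.add_constant(X).dropna()  # constant = alpha0; 12 parameters

# Usage: X = build_conditional_design(df)
#        res = sm.OLS(df.loc[X.index, "ex_ret"], X).fit()
```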
Endogeneity issue with traditional liquidity measures
Broadly, liquidity is defined as the cost of exchanging assets for cash. Since this cost is related to the behavior of market makers, Adrian et al. [30] argue that traditional liquidity measures are endogenous. This implies that we can write our augmented FF model, for instance, as a simultaneous two-equation model:

$$r_{it} - r_{ft} = \alpha_i + \beta_i(r_{mt} - r_{ft}) + s_i\,SMB_t + h_i\,HML_t + r_i\,RMW_t + c_i\,CMA_t + l_i\,IML_{it} + \varepsilon_{it} \quad (10)$$

$$IML_{it} = g(\sigma_{it}) + \delta_1\,dealers\;models_t + \delta_2\,shock/no\;shock_t + u_{it} \quad (11)$$

where $IML_{it}$ is a nonlinear function g(·) of return volatility $\sigma_{it}$ and possibly a linear function of the other two variables. Moreover, its response to volatility is assumed asymmetric: moderate increases in $\sigma_{it}$ may in fact improve liquidity, while large increases have the reverse effect. The dealers models variable in (11) refers to the fact that dealers have been changing their business model over time, from a principal to an agency one [30]. More precisely, dealers have moved from a principal model, where they hold an inventory of assets, to an agency model in which they hold no inventory. This makes it possible to shift the inventory risk to investors while at the same time creating a narrower bid-ask spread, which necessarily improves liquidity. The impact of this variable on liquidity could thus be positive or negative. The same logic applies to the shock/no shock variable. A shock event might induce liquidity to be endogenously created as investors with different expectations, objectives, options (financial or real), hedging strategies, constraints, and risk tolerances enter the market to meet their trading needs. Thus, liquidity may improve as buyers and sellers enter the market simultaneously. In a no-shock period, investors may wait to transact, reducing volume, which would therefore negatively affect traditional liquidity measures even if intrinsic liquidity itself has not been adversely affected.
As previously mentioned, one of our main objectives is to propose a parsimonious way to tackle the problem of endogeneity without relying on a second equation such as (11). This is the subject of the next section. Note also that [30] contend that liquidity (LIQ) could be modeled via a continuous Gaussian model with infrequent jumps. We could therefore translate this suggestion further by assuming a basic jump-diffusion model such as [50] or [51]:

$$\frac{dP}{P} = (\mu - \lambda k)\,dt + \sigma\,dW_t + dq_t \quad (12)$$

in which, after a typical Euler discretization of dP or ΔP, $dq_t$ is a Poisson jump process with λ, the average number of jumps per year, and k, the average jump size measured as a percentage of the asset price.
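As a minimal sketch of this idea, the following Python snippet simulates an Euler-discretized Gaussian process with infrequent Poisson jumps in the spirit of (12). All parameter values (drift, volatility, jump intensity, jump size) are illustrative assumptions, not estimates from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 1.0 / 252.0                    # daily step
n = 10 * 252                        # ten years of daily observations
mu, sigma = 0.05, 0.20              # drift and diffusion volatility (assumed)
lam, k = 2.0, -0.03                 # ~2 jumps/year, mean jump size -3% (assumed)

P = np.empty(n + 1)
P[0] = 100.0
for t in range(n):
    dW = rng.normal(0.0, np.sqrt(dt))          # Gaussian increment
    jumps = rng.poisson(lam * dt)              # number of jumps this step
    J = rng.normal(k, 0.02) * jumps            # total jump return
    # Euler step with compensated drift (mu - lam*k), Merton-style
    P[t + 1] = P[t] * (1.0 + (mu - lam * k) * dt + sigma * dW + J)
```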
Robust IV estimation algorithm of the augmented Fama-French (FF) conditional model
The RIV GMM algorithm discussed in this section is implemented with the EViews matrix programming language. As is well documented, the main problem with the GMM estimator lies in the choice of instruments. As discussed in [37] and further analysed in [41], the chosen instruments should be robust (not weak). Weak instruments can increase the variance of the resulting estimator or, worse, may yield inconsistent estimates. Finding instruments is not an easy task. Lagged values of the explanatory variables are widely used as instruments (e.g., [37]). Nevertheless, economic theory should be the framework that guides the selection of instrumental variables.
Essentially, the robust instrumental variables (RIV) method we develop here in the context of the conditional FF model relies on the generalized method of moments (GMM), which we refer to as RIV GMM. We do not resort to lagged values as usually recommended. Instead, we employ contemporaneous values taken to the powers of 2 and 3, expressed as deviations from the mean in order to reduce the erratic behavior of higher-moment instruments. More precisely, we use a weighted average of these powers obtained via generalized least squares (GLS). This average can be considered optimal in the Aitken sense. Once the instruments are obtained, they can be incorporated directly into the GMM estimator, which relies on a weighting matrix. The usual matrix choices are the optimal HAC [36,42] with automatic lag selection, the Hansen [52], or the White [53] matrices. More precisely, the HAC procedure provides consistent matrix estimators $\hat{\Omega}$ given by

$$\hat{\Omega} = \sum_{j=1}^{T-1} \omega_{j,T}\,\hat{\Gamma}_{j,T} = \sum_{j=1}^{T-1} k(j/b_T)\,\hat{\Gamma}_{j,T}$$

where the $\hat{\Gamma}_{j,T}$ are the sample autocovariances, k is the kernel function, and $b_T$ is the bandwidth. The kernel function determines the weights $\omega_{j,T}$ in $\hat{\Omega}$ [14]. There exists a large number of kernel functions—e.g., Bartlett, Daniell, and Parzen. The bandwidth is a positive-valued parameter that determines how many $\hat{\Gamma}_{j,T}$ are included in the HAC estimator—i.e., the maximum number of lags used to compute it; for instance, $k(j/b_T) = 0$ for $|j| > b_T$. In this regard, a popular choice for bandwidth selection is the method proposed by Newey and West [36,42].
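For concreteness, here is a minimal Python sketch of such a HAC estimator with the Bartlett kernel and a fixed bandwidth; the automatic lag selection of Newey and West [42] is omitted for brevity, so this is a simplified stand-in rather than the exact procedure used in the paper.

```python
import numpy as np

def hac_covariance(g, bandwidth):
    """HAC estimate of Omega from a (T, m) array of moment series g_t,
    using the Bartlett kernel, whose weights decline linearly
    and vanish beyond the bandwidth."""
    g = g - g.mean(axis=0)
    T = g.shape[0]
    omega = g.T @ g / T                          # Gamma_0
    for j in range(1, int(bandwidth) + 1):
        w = 1.0 - j / (bandwidth + 1.0)          # Bartlett weight
        gamma_j = g[j:].T @ g[:-j] / T           # sample autocovariance Gamma_j
        omega += w * (gamma_j + gamma_j.T)
    return omega
```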
Our contribution is therefore to further develop the methodology discussed in [41] in the context of our time-varying model of risk exposures. The methodology developed in [41,54] is static. The algorithm discussed in these previous works is therefore extended to the time-varying application developed in this article. However, rather than presenting the algorithm in a hermetic way as is done in the aforementioned articles, we describe it below in a more parsimonious way, which may enhance comprehension of the methodology and give rise to new applications.
The gist of our algorithm is as follows. To compute the parameters in (8) and (9), first substitute these two equations into (7), which creates cross-products for some of the variables. Then apply OLS or RIV GMM to (7), giving a model with 12 parameters to estimate. Finally, to compute the profiles of the time-varying alpha and beta given by (8) and (9), multiply the estimated coefficients of the OLS or RIV GMM regression by the observed values of the conditioning variables. The result is the desired time series of conditional parameters, which we later plot to obtain the α and β profiles; a sketch of this last step follows.
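Assuming the hypothetical column and coefficient names of the design-matrix sketch above, the final step of the algorithm, recovering the α and β profiles from the fitted coefficients, might look as follows.

```python
def conditional_profiles(params, df):
    """Time-varying alpha (8) and beta (9) profiles recovered from the
    estimated coefficients of (7); 'params' is res.params from the OLS
    (or RIV GMM) fit of the design matrix sketched earlier."""
    alpha_t = (params["const"]
               + params["c1_mkt_lag"] * df["mkt_rf"].shift(1)
               + params["c2_spread_lag"] * df["spread"].shift(1))
    beta_t = (params["beta0_mkt"]
              + params["c3_mkt_lag_x_mkt"] * df["mkt_rf"].shift(1)
              + params["c4_spread_lag_x_mkt"] * df["spread"].shift(1)
              + params["c5_IML_lag_x_mkt"] * df["IML"].shift(1))
    return alpha_t.dropna(), beta_t.dropna()
```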
We can now probe deeper into our algorithm. The methodology suggested when one or all of the variables are endogenous (e.g., illiquidity in our case) may be illustrated with a simple linear regression with only one explanatory variable:

$$y_t = \beta_0 + \beta_1 x_t + \varepsilon_t \quad (13)$$

where $x_t$ is an explanatory variable that may be endogenous and $\varepsilon_t$ is the innovation. As previously explained, the methodology we propose to tackle endogeneity is based on a weighted average of the squared and third-power deviations of $x_t$ as instruments. This instrumental variables (IV) approach may be implemented by regressing $x_t$ on these instruments via OLS,

$$x_t = a_0 + a_1(x_t - \bar{x})^2 + a_2(x_t - \bar{x})^3 + u_t \quad (14)$$

to obtain the predicted value $\hat{x}_t$. To tackle the endogeneity problem, the next step is to substitute this predicted value into (13):

$$y_t = \beta_0 + \beta_1\hat{x}_t + \varepsilon_t^* \quad (15)$$

or, replacing the predicted value of $x_t$ by its value in terms of the instruments,

$$y_t = \beta_0 + \beta_1[\hat{a}_0 + \hat{a}_1(x_t - \bar{x})^2 + \hat{a}_2(x_t - \bar{x})^3] + \varepsilon_t^* \quad (16)$$

and run OLS on (16) or, equivalently, on (15). Note that (16) clearly highlights the fact that this regression is nonlinear because of the product of parameters and could be efficiently estimated in only one step via nonlinear least squares (NLS).
However, our suggested approach is more akin to two-stage least squares (TSLS). This idea may be expressed parsimoniously using standard econometrics (e.g., [37]), which may provide more intuition about the methodology. Assume that z, the instrument, is based on the predicted value $[\hat{a}_0 + \hat{a}_1(x_t - \bar{x})^2 + \hat{a}_2(x_t - \bar{x})^3]$. More precisely, the TSLS estimator here can be described via two separate regressions. The first is the structural equation

$$y = x\beta + \varepsilon \quad (17)$$

and the second is the first-stage regression

$$x = z\pi + u \quad (18)$$

where x is regressed on z to obtain the predicted value $\hat{x} = z(z'z)^{-1}z'x$. Replacing the predicted value of (18) in (17) and running OLS on the result yields the following IV estimator:

$$\hat{\beta}_{IV} = (\hat{x}'\hat{x})^{-1}\hat{x}'y = [x'z(z'z)^{-1}z'x]^{-1}x'z(z'z)^{-1}z'y \quad (19)$$

(19) is in the format of the well-known TSLS estimator augmented with our robust instrumental variables (RIV). Now that we have (19), we can express that result in the more general GMM format, which can be stated simply as

$$\min_{\beta}\; q = \bar{G}'W\bar{G} \quad (20)$$

where W is the weighting matrix to which we previously alluded, $\bar{G} = (1/n)G_n = (1/n)\sum_{i=1}^{n}g_i$ represents the moment conditions (i.e., the m moment conditions $E[g_i(\beta)] = 0$ that the data generating process is assumed to satisfy), which in our case are based on the suggested RIV, and q is a quadratic function to be minimized with respect to the parameters to be estimated. The Appendix further discusses our algorithm, showing how to compute a weighted average of the squared and third-power deviations of the explanatory variables via generalized least squares (GLS).
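The following Python sketch implements (17)–(19) for a single possibly endogenous regressor, using the squared and cubed deviations from the mean as instruments. It is a bare-bones illustration of the TSLS form of the estimator; the GLS weighting of the Durbin and Pal components and the HAC-weighted GMM step described in the Appendix are omitted here.

```python
import numpy as np

def riv_tsls(y, x):
    """TSLS for one possibly endogenous regressor, instrumenting with the
    squared and cubed deviations from the mean, as in (17)-(19)."""
    T = len(y)
    d = x - x.mean()
    Z = np.column_stack([np.ones(T), d**2, d**3])       # instruments
    X = np.column_stack([np.ones(T), x])
    # First stage (18): fitted values of X from the instruments.
    X_hat = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)
    # Second stage (19): OLS of y on the fitted values.
    return np.linalg.solve(X_hat.T @ X_hat, X_hat.T @ y)
```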
Note that the primary version of the suggested RIV does not include lagged values of the explanatory variable, as is often suggested to confront endogeneity. However, nothing prevents the researcher from including those along with the suggested instruments. This in turn will yield an overidentified system (20), and the Sargan [55–57] test or its equivalent J-test can be performed to analyse the identification issue [37]. In its original version, our approach has the virtue of not requiring this kind of testing since it is exactly identified.
Empirical results
Descriptive statistics
Our sample ranges from January 1968 through December 2016. The 12-sector portfolio returns and the market risk factors—i.e., Rm−Rf, SMB, HML, CMA, and RMW—are drawn from Kenneth French's website (https://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html). The term spread and the VIX are provided by the database managed by the Federal Reserve Bank of St. Louis (https://fred.stlouisfed.org/). The IML illiquidity measure comes from Pástor's database (https://faculty.chicagobooth.edu/lubos.pastor/research/).
Table 1 provides the descriptive statistics for the monthly excess returns for each of the twelve Fama-French sector portfolios and for the 12-sector average for the period January 1968 through December 2016. The mean return is close to 1% in our sample, albeit with some minor variation across sectors. The average standard deviation is somewhat over 4%, with negative skewness and kurtosis well above 3. However, a minor word of caution should be noted when analyzing statistics such as skewness and kurtosis—i.e., kurtosis is related to skewness, viz., kurtosis ≥ skewness² + 1 (e.g., [58,59]). Not surprisingly, the average Jarque-Bera (JB) statistic [60], which is asymptotically chi-squared distributed with 2 degrees of freedom under the null hypothesis of normality, is well above the 1% critical value of 9.21, ranging from a low of 24.93 for energy to a high of 551.83 for durables. Most sectors show first-order serial correlation. Return autocorrelation may be attributable to illiquid portfolios—like the one associated with the durables sector—or to income smoothing, as in the money sector. Care must be taken when estimating regressions with such data. The estimator that we propose in this article is based on cross-sample higher moments. Therefore, the substantial kurtosis might be seen as another argument in favor of our robust instruments. In addition, because we transpose our instruments into a GMM setting, all the aforementioned nonspherical issues should be addressed.
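As a quick illustration, the JB statistic and the skewness-kurtosis bound cited above can be checked with a few lines of Python; the 'returns' argument is any array of monthly excess returns.

```python
import numpy as np
from scipy import stats

def jb_statistic(returns):
    """Jarque-Bera statistic; also checks the bound kurtosis >= skew^2 + 1."""
    T = len(returns)
    s = stats.skew(returns)
    k = stats.kurtosis(returns, fisher=False)     # raw kurtosis, 3 if normal
    assert k >= s**2 + 1.0                        # holds for any sample
    return T / 6.0 * (s**2 + (k - 3.0)**2 / 4.0)  # compare with chi2(2): 9.21 at 1%
```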
Table 2 displays the descriptive statistics for the factors used in our conditional model. The range of these JB statistics is somewhat larger than the range for the sector returns, from a low of 21.82 for the illiquidity factor IML to a high of 3630.35 for the profitability factor RMW. Nevertheless, all of these JB statistics greatly exceed the 1% level, leading to a rejection of the null hypothesis of normality.
Discussion of IML vs term spread
In this article, we concentrate on the Pástor-Stambaugh (PS) tradable liquidity measure because it appears to have been adopted by financial practitioners [17,19]. This measure, which we denote by IML, is an illiquidity risk premium, as it gauges the difference in returns between a portfolio of illiquid securities and a portfolio of liquid securities. Thus, in quiescent periods, we would expect to earn an illiquidity risk premium, whereas in times of turmoil the liquid portfolio should outperform the illiquid one.
Goyenko et al. [61] examine many different liquidity measures, including the tradable liquidity measure (IML) and its non-tradable version (gamma), both due to Pástor and Stambaugh [18]. They assign a better score to the gamma measure. However, for the estimation of the FF model, the factors ought to be expressed as risk premia. To the best of our knowledge, IML is the only available replicating portfolio that accounts for illiquidity.
We also use the term spread—defined as the difference between the 10-year Treasury rate and the 90-day Tbill rate—as a funding liquidity measure [62]. In Fig 1, the shaded areas represent U.S. recessions. Generally, when the term spread is increasing, the volatility of the Pástor-Stambaugh measure tends to rise. In the subprime crisis (2007–2009), the term spread is high and the volatility of IML is also high, but the behavior of IML is asymmetric: its negative values are larger in magnitude than its positive ones, which leads to a cumulative negative return for the IML portfolio. In fact, the negative values of this measure during the crisis are the largest in magnitude in our sample. Note, for instance, that this observation is in line with Nelson's [63] celebrated EGARCH model of Black's [64] leverage effect on the asymmetric impact of bad news versus good news on stock volatility.
The term spread is widely accepted in macroeconomics as a leading indicator of the future state of the economy. When the term spread increases, it is anticipated that the economy will recover, and vice versa [65]. In this regard, when the term spread peaks, market participants begin to expect a decrease in long-term interest rates, which helps revive confidence and foster interest-sensitive expenditures like investment and the consumption of durable goods. This is the expectation effect associated with the term spread. Conversely, a flattening or inverted yield curve (negative term spread) is a strong indicator of an upcoming recession. Furthermore, the term spread can be seen as a predictor of the future volatility of the PS measure. Moreover, implied volatility indices such as the VIX can be considered a forward-looking view of volatility and a measure of uncertainty or investor fear. Note the confirmation in Fig 1 of our a priori expectation that VIX peaks during financial turmoil—especially during the subprime crisis—which also corresponds to periods of higher uncertainty.
Ex ante, investors demand an illiquidity risk premium for holding illiquid assets. However, ex post, one is likely to actually earn the premium when markets are liquid and to be harmed when markets are illiquid. In particular, it seems logical that liquidity is likely to dry up during recessions (for a further discussion of market liquidity in the context of regulation and the global financial crisis, see [66]).
Estimation of the conditional version of the augmented Fama-French model
In this section, we compare basic OLS estimation of the augmented Fama-French [1,2] model with time-varying coefficients to our proposed RIV GMM algorithm (upper panel of Tables 3 and 4). As explained in [67], a linear model with time-varying parameters like the one that we estimate in this article is in fact a very general nonlinear model. This follows from White's theorem cited in [68].
Examining the results in Tables 3 and 4, note that the sectors with high and low sensitivity to the illiquidity factor IML (Energy and Health) both have significant IML coefficients under OLS estimation. However, when employing our RIV GMM, the significance level for both IML coefficients drops, although the signs of the coefficients remain the same as the corresponding OLS estimates. Except for the market factor, whose coefficients are all significant at the 1% level, this comment also applies to all the other factors, with the GMM coefficients almost always becoming insignificant.
Comparing the two estimation approaches for the high and low beta sectors (Durables and Utilities), a different picture emerges. The significance of illiquidity increases as we move from OLS to RIV GMM. This may be indicative of specification errors in the model. Adrian et al. [30] note that standard illiquidity measures are endogenous and are simply proxies for more complex phenomena that could perhaps be modeled with an additional equation. Furthermore, when using RIV GMM, the coefficient of IML is high, positive, and significant for both sectors, which suggests that the stocks issued by these industry sectors are quite illiquid. This argument is supported by the first-order autocorrelation coefficients for these sectors, which are significant (Table 1). For both sectors, the market risk premium exposure β declines as we move from OLS to RIV GMM.
Interestingly, based on our RIV GMM approach (upper panel of Table 4), note that the low beta sector (Utilities), which is less exposed to the market, seems to require a higher illiquidity premium on average than the high beta sector (Durables). The firms in this low beta sector generally have high debt-to-equity ratios, which means that in times of crisis this leverage increases, making them more vulnerable and hence less liquid, in spite of their low market betas. When averaging over the 12 sectors, we get a result similar to the low beta sector. That is, moving from OLS to RIV GMM, beta declines and IML sensitivity increases.
Turning to the parameters of the conditional alpha and beta models (lower panel of Tables 3 and 4), the OLS and RIV GMM estimates for the various alpha parameters α0, c1, and c2 are mainly insignificant. This is not so surprising since these portfolios are not actively managed, in contrast, for example, to hedge funds, which should be actively managed to generate some positive alpha. Examining the beta model parameters—i.e., β0, c3, c4, and c5—tells a different story. For OLS estimation, the beta model parameters are overwhelmingly significant. The impact of the market risk premium on beta exposure is usually negative, which suggests that beta tends to increase when market conditions deteriorate. Sectors thus seem to have difficulties controlling their beta when stock markets drop, at least in the short run. The impact of the two other factors on beta exposure—i.e., the term spread and market illiquidity—differs across sectors. Funding liquidity, which is represented by the term spread parameter estimate c4, is quite significant in all of our experiments. In falling financial markets, a drop in funding liquidity results in a decrease in the market beta of the Energy sector, a sector much exposed to market illiquidity. Indeed, according to Table 3, an increase of 1% in IML results in a rise of 0.08% in the Energy sector's return. Therefore, given that the Energy sector's return embeds an illiquidity premium, it seems relevant to assume that its funding constraint binds when market liquidity worsens. Since market beta comoves positively with leverage, the resulting deleveraging of the Energy sector following an increase in the term spread leads to a decrease in its market beta (Table 3). The Health sector, which is exposed negatively to IML, behaves in the opposite way regarding funding liquidity. For this sector, an increase in IML leads to a decrease in its stock return. Note that this reaction may be due to an expectation effect, an increase in illiquidity resulting in a further expected deterioration of this market dimension, giving rise to a drop in returns (see [54] and references therein). Since the return of the Health sector does not incorporate an illiquidity premium, it does not deleverage after a rise in the term spread. Its funding constraint is probably not binding. A decrease in funding liquidity thus results in an increase in its beta.
The high and low beta sectors—i.e., Durables and Utilities, respectively—both display significant funding and market liquidity parameters (c4 and c5). The securities issued by Utilities are more akin to bonds than to stocks, while the reverse holds true for Durables. This argument is in line with the relatively low beta of the former and the relatively high beta of the latter. We may thus conjecture that Utilities is more concerned with credit risk—which is associated with our funding liquidity indicator (term spread)—than with market risk, which is associated with our market liquidity indicator (IML). Our findings bear out this hypothesis. When market liquidity deteriorates, the Durables sector reduces its beta—i.e., its market risk. In contrast, when funding liquidity worsens, this sector does not deleverage, in the sense that it tends to let its beta increase. Its funding constraint thus does not seem to bind when market liquidity worsens. In this respect, IML is not significant in the return equation of this sector. Utilities follows the opposite behavior. When credit risk increases, it seems to deleverage, since its market beta decreases following this kind of shock, which suggests that its funding constraint is then binding. Consistent with that, the return of Utilities embeds an illiquidity premium that is significant at the 10% level. However, since this sector is less concerned about market risk, it does not seem to counteract a rise in its beta following an increase in IML. For the 12-sector average, only funding liquidity is significant.
Summing up, funding liquidity and market liquidity are mainly at play in recessions or crises—i.e., IML and the term spread tend to increase concomitantly during crises and to decrease in tandem outside crises. Therefore, funding liquidity and market liquidity tend to deteriorate simultaneously. Sectors whose returns embed an illiquidity premium—i.e., that become illiquid during crises—are then more exposed to a binding funding constraint. They thus must deleverage in times of crises, which reduces their systematic risk as measured by the market beta. However, a sector which deleverages seems to have difficulties in controlling the impact of IML on its market beta. In contrast, a sector which does not deleverage after a deterioration in funding liquidity tends to take measures to reduce the impact of a rise in market illiquidity on its beta.
Turning to our RIV GMM approach, virtually all of the beta model parameters are insignificant. The conditional model may be seen as an alternative to Kalman filtering to obtain time-varying parameters for the performance measure alpha and the systematic risk measure beta. The evidence of Ferson and Schadt [3] therefore seems to be quite compelling at first glance. However, some literature (e.g., [13]) suggests that this modeling approach may lead to biased estimation and a simple static approach may be preferable. Our RIV GMM results seem therefore to be in line with this literature, but this analysis is not necessarily conclusive. Below, we continue our analysis of the time-varying performance measure alpha and the systematic risk measure beta.
The time-varying performance measure alpha computed on the average of our 12-sector portfolio is quite sensitive to the estimation approach (Fig 2A). Our RIV GMM time-varying conditional alpha seems to be much more sensitive to the business cycle. However, a clearer picture emerges when looking at the time-varying conditional beta (Fig 2B). Note that the subprime crisis is well highlighted, as the beta increases more with RIV GMM than with OLS at the start of the subprime crisis but also decreases more precipitously with the IV method. Firms thus deleverage quickly during a crisis, which leads to a reduction in the beta. Note that the crisis is an extreme event that is well modelled by our robust IV GMM, which is based on higher cross-sample moments of the third and fourth degrees. It has been demonstrated [69,70] that departures from normality can be properly reflected in these higher moments. However, this tracking of the business cycle is an unintended consequence of our design to account for endogeneity issues and/or measurement errors. In our view, this should make these instruments even more appealing to the empirical practitioner.
Continuing this line of thought, we see that the subprime crisis is much more pronounced in the energy sector than in the 12-sector average (Fig 3B). The same behavior is also found in the utilities sector (Fig 4B). The reason might well be that financial practitioners working in the energy markets are likely sophisticated, because they use derivatives and commodities to manage their exposures. Note also that our RIV GMM beta estimators seem to be moving in a direction opposite to the OLS ones during the subprime crisis (Figs 3B and 4B). For instance—especially for the Energy sector—the beta estimated with OLS tends to decrease at the beginning of the subprime crisis, while the beta estimated with RIV GMM tends to increase. This latter GMM result is appealing, since systematic risk tends to increase at the start of a crisis. Indeed, in addition to a surprise effect, the deleveraging process, which results in a decrease in the beta, takes time. The opposite behavior of the betas estimated with OLS and RIV GMM may be due to reverse causation, an issue neglected by OLS but tackled by our robust instruments. Both of the RIV GMM alphas hint at some cyclicality (Figs 3A and 4A), and while it is harder to say definitively why, the connection should not be surprising, as alpha should be close to zero in efficient markets.
Turning to the health sector (Fig 5B), note that the OLS dynamic time-varying market beta seems to be higher than the RIV GMM one most of the time, implying that health sector firms are less exposed to the market than initially appears. In addition to being heavily subsidized, the health industry seems to have a captive market and is less prone to cyclicality. It could even be countercyclical, since in a downturn more people may become sick due to stress from job losses and other negative economic conditions and thus may call on health services more often. In fact, the RIV GMM alpha (Fig 5A) seems to be above the OLS one during the subprime crisis. Furthermore, the RIV GMM alpha is higher on average than the OLS alpha, which is in line with the lower RIV GMM beta.
Finally, examining the durables sector (Fig 6A), we note that the RIV GMM time-varying alpha seems to be much more sensitive to the business cycle than the OLS time-varying alpha, especially during the subprime crisis, where the alpha initially went negative before recovering quickly to a positive value. This suggests that the risk management techniques put in place by firms in this sector to fight the crisis—such as deleveraging—enjoyed some success. Note that the same result is obtained for the aggregate of our 12 sectors, albeit less pronounced (Fig 2A). The durables sector has the highest average OLS beta. Comparing time-varying betas (Fig 6B), the RIV GMM beta seems more volatile than the OLS beta. Reverse causation is also present here, as the beta estimated by OLS and the one based on RIV GMM tend to move in opposite directions.
Robustness check
As a robustness check, we consider the illiquidity measure developed by Amihud [71]. This measure is the daily ratio of the absolute stock return to dollar volume, averaged over each month—i.e.,

$$ILLIQ_i = \frac{1}{D_i}\sum_{d=1}^{D_i}\frac{|R_{id}|}{Vol_{id}}$$

where $D_i$ is the number of days in the month, $R_{id}$ is the daily return on stock i, and $Vol_{id}$ is its corresponding dollar trading volume. We compute the Amihud [71] illiquidity measure using the S&P 500. The Amihud ratio quantifies the price/return response to a given size of trade: when the Amihud ratio is high, liquidity is low.
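As an illustration, a monthly Amihud ratio can be computed from a daily series in a few lines of Python; the column names 'ret' and 'dollar_vol' are hypothetical placeholders for the daily return and dollar volume of the index.

```python
import pandas as pd

def monthly_amihud(daily):
    """Monthly Amihud illiquidity: the mean over the month's D_i days of
    |R_id| / Vol_id; 'daily' needs a DatetimeIndex with columns 'ret'
    (daily return) and 'dollar_vol' (dollar trading volume)."""
    return (daily["ret"].abs() / daily["dollar_vol"]).resample("M").mean()
```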
Liquidity risk is multidimensional, and more than one liquidity measure may be needed to capture its different aspects. For instance, Goyenko et al. [61] show that the Pástor and Stambaugh [18] liquidity measure fails to capture the price impact of trades, while Amihud's [71,72] measure can be considered a good proxy for this aspect of liquidity risk. To account for this multidimensionality, we first re-estimate equation (10), adding the Amihud ratio as an additional illiquidity factor alongside IML. This regression allows us to gauge the significance of this ratio as an explanatory variable for the returns of our 12-sector portfolios and to compare it to the IML measure. Note that Blau and Whitby [73] have also augmented the Fama and French model with the Amihud ratio. Second, we re-estimate equation (7), substituting the Amihud ratio for IML in the equation of the market beta. Note that we rely only on OLS in this section because the results we obtain with GMM when using the Pástor and Stambaugh [18] illiquidity measure are not conclusive enough.
Table 5 provides the liquidity betas of our 12 portfolios computed with OLS using the IML and Amihud illiquidity measures. The coefficients of the Amihud ratio are significant at the 10% level or less for six sectors (of twelve). Consistent with [61], we note that the two illiquidity measures do not rank the portfolio sectors in the same way. For instance, the health sector has the lowest IML loading (-0.13) but the highest Amihud loading (1.94). In terms of exposures, the durables sector is second when using IML (0.08) and eleventh (-2.36) when using Amihud. According to the Amihud ratio, the health, utilities, non-durables, and money sectors have the highest positive loadings on illiquidity, while the telecom, manufacturing, durables, and business equipment sectors have the lowest (negative) loadings.
Table 6 provides the estimation of the conditional alpha and market beta equations using OLS. In this estimation, we substitute the Amihud ratio for IML as a conditioning factor in the conditional market beta equation. Analogously to our previous estimations with IML, the estimated coefficients of the conditional alpha equation are mostly insignificant. According to the constant term, the health and telecom sectors stand as high performers during our sample period, while the durables and money sectors are low performers. These results seem to be largely attributable to the subprime crisis, which was very detrimental to Money and Durables—two very cyclical industries. The health and telecom sectors were more resilient.
In this respect, Pástor and Stambaugh [19] contend that in periods without crises, liquidity factors are estimated with less precision. Moreover, according to these researchers, liquidity betas are relatively hard to estimate—especially during economic expansions. Indeed, liquidity shocks are close to zero in normal states, which results in noisy estimates of liquidity betas. The results obtained over our whole sample period should thus be very dependent on the subprime crisis, which is included in our sample.
Turning to the equation of the conditional market beta, we note that seven sectors out of twelve display a significant Amihud ratio. The sectors with a high Amihud loading in Table 5 tend to decrease their market beta when the Amihud ratio increases—i.e., when illiquidity rises. For instance, the money sector, which has a liquidity beta of 1.16, decreases its market beta when illiquidity increases. The non-durable goods sector, with a liquidity beta of 1.42, displays the same behavior. However, the health sector stands as an exception. Even though its liquidity beta is estimated at 1.94, its market beta decreases when market illiquidity increases. As argued previously, this sector is very specific. Perhaps it is perceived as less risky in times of rising illiquidity.
In line with [19], this relationship between the liquidity beta of a sector and the reaction of its market beta to illiquidity is very dependent on crises—especially the subprime crisis. During crises, sectors with high liquidity betas should reduce their market beta—for instance, by deleveraging—since risk is particularly high for them. We also note that this relationship tends to be tighter when the market beta—i.e., systematic risk—of the sector is higher. For instance, the money sector displays high liquidity and market betas. This sector thus bears a high level of risk during crises. It is all the more important for this sector to smooth its risk during bad times. The non-durables sector is in the same situation. The relationship between the market beta and the Amihud ratio thus tends to be driven by the level of the market beta.
Conclusion
The idea behind this paper is to provide a time-varying framework to test illiquidity in the context of the five-factor Fama-French [1,2] model relying on a novel robust instrumental variables (RIV) algorithm to tackle endogenous illiquidity proxies (see [30] for a discussion of the endogeneity issues related to illiquidity proxies). We feature the FF model in a time-varying context using the conditional modeling approach proposed by Ferson and Schadt [3] and others [4,5,6,7,8], which we estimate with OLS and compare it to our RIV in a GMM framework.
In this time-varying conditional model, we generally find that the most significant factor is the market one, but illiquidity may be at play, depending on the state of the economy. We find that our suggested time-varying RIV approach is a parsimonious methodology that is well suited to highlighting the cyclical variations in the performance and systematic risk measures that are the focus of this article. In particular, we propose a way to divide the analysis according to the states of the economy. More precisely, we look at states of high and low illiquidity and states of high and low beta. This enables us to note that the high-illiquidity Energy and low-beta Utilities sectors have similar time-varying beta profiles, with the OLS betas often moving in the opposite direction of the corresponding RIV GMM betas. This opposite behavior is symptomatic of a reverse causation issue, which is overlooked by OLS but addressed by our RIV GMM. The RIV GMM alphas for both sectors are also more volatile than their OLS counterparts.
We note also that our suggested estimating approach is more suited to highlighting financial crises/economic downturns, particularly the subprime crisis, compared to the OLS approach, the results obtained with RIV GMM being much more cyclical and easier to interpret. This positive consequence may be attributable to the sheer design of our RIV GMM, which is based on a weighted average of higher moments of the explanatory variables that may be able to capture the asymmetric behavior of the financial time series (e.g., the Black [64] leverage effect) that we study in this paper.
Further research could consider international markets along the lines of FF [74] or estimating/testing for measurement errors/endogenous factors or proxies, thus improving the choice of factors in FF [75]. Another direction for future research on international markets is testing for market inefficiencies causing carry trade profitability in foreign exchange (the carry trade strategy is discussed in [76], chap.7). The time-varying coefficient methodology that we investigate should also play a part in this research along the lines of [77]. Another possible avenue would be to consider nonlinear models, either with a business cycle transition variable or, more generally, a smooth transition approach such as STAR or Markov regime-switching models, while also accounting for measurement/specification errors or endogeneity biases. This could improve how we capture the business cycle and allow us to examine the stability of the FF and liquidity parameters.
Appendix
Implementing the RIV GMM algorithm
In this appendix, we discuss the implementation of the time-varying nature of the conditional model. First, our algorithm requires stacking the data and, because of the lagged values in the conditional model, inserting NA values in the right places. Second, the robust instrumental variables (RIV) need to be generated. To do this, we use the EViews matrix language to compute the GLS-weighted average of the squared and third-power deviations of the explanatory variables, which is given in equation (27) below. Third, once these robust instruments are computed (referred to below as the "riv" instruments), we simply substitute them into the GMM formula (31), which is then optimized with respect to the vector of parameters of interest.
Our RIV can be generated as follows. Assume a standard linear regression of the form

$$Y = X\beta + \varepsilon \quad (23)$$

where X is assumed to be an unobserved matrix of explanatory variables. We also assume that the matrix of observed variables is measured with normally distributed error, X* = X + v. This assumption allows a parsimonious proof of the consistency of the estimators that we use to obtain the robust instruments RIV. Note, however, that the Durbin [38] and Pal [39] estimators that we use to obtain these instruments are analogous in some ways to co-skewness and co-kurtosis, thereby accounting for nonlinearities usually found in financial time series. The estimator $\hat{\beta}$ is computed as

$$\hat{\beta} = (X'P_zX)^{-1}X'P_zY \quad (24)$$

where $P_z = Z(Z'Z)^{-1}Z'$ is the standard projection matrix ("predicted value maker") and Z denotes the optimal combination of the Durbin [38] and Pal [39] instruments obtained using GLS. The matrix versions of the Durbin and Pal instruments may be written, following [78,41], as

$$z_1 = [x_{tk}^2], \qquad z_3 = [x_{tk}^3], \qquad z_2 = z_3 - 3x\,\mathrm{Diag}(x'x/T), \qquad \mathrm{Diag}(x'x/T) = (x'x/T)\bullet I_k \quad (25)$$

where k represents the number of explanatory variables, t the period subscript (t = 1,…,T), and • the Hadamard product. The second and third powers (moments) of the de-meaned variables x are then computed; this is analogous to computing the second and third moments of the explanatory variables. In short, the instruments are obtained by taking the matrix of explanatory variables X in deviation from its mean (x). Next, we obtain the weighted estimator $\beta_H$ by applying optimal generalized least squares (GLS) to the following combination [41]:

$$\beta_H = \Lambda\binom{\beta_D}{\beta_P} \quad (27)$$

where $\Lambda = (C'S^{-1}C)^{-1}C'S^{-1}$ is the GLS weighting matrix, S is the covariance matrix of $\binom{\beta_D}{\beta_P}$ under the null hypothesis (i.e., no measurement errors), and $C = \binom{I_k}{I_k}$ is a matrix of two stacked identity matrices of dimension k. The methodology is based on the Bayesian approach of Theil and Goldberger [40], which may lead to a Bayesian shrinkage process analogous to the well-known Bayesian VAR (see, e.g., [14]). This yields estimators that are asymptotically at least as efficient as using either the Durbin or the Pal estimator alone. This approach for obtaining Z is implemented below in deviation form. From (24) we extract the matrix of residuals

$$\hat{u} = x - \hat{x} \quad (28)$$

from which the matrix of robust instruments (riv) can then be defined individually as

$$riv_{it} = x_{it} - \hat{u}_{it} = \hat{x}_{it} \quad (29)$$

The variable $riv_{it}$ may be viewed as a filtered version of the endogenous variables: it potentially removes residual nonlinearities that might be hidden in $x_{it}$. The $\hat{x}_{it}$ in (29) may be obtained via a parsimonious artificial regression approach (e.g., see [79]), as an alternative to (24), by applying OLS of x on the z instruments:

$$x = Z\gamma + u \quad (30)$$

The z are computed as explained previously. We repeat the algorithm for computing these robust IV [80] in a more compact fashion: z = {z0, z1, z2}, where z0 = i_T, z1 = x•x, and z2 = x•x•x − 3x[Diag(x'x/T)].
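As an illustrative stand-in for the EViews implementation, the following Python sketch builds the z1 and z2 instruments from the de-meaned regressors and obtains the riv as fitted values of the artificial regression (30); the GLS weighting of the Durbin and Pal estimators (27) is omitted for brevity.

```python
import numpy as np

def riv_instruments(X):
    """Durbin (z1) and Pal (z2) higher-moment instruments from a (T, k)
    regressor matrix, following z1 = x*x and z2 = x*x*x - 3x Diag(x'x/T),
    with x the de-meaned regressors and * the Hadamard product."""
    x = X - X.mean(axis=0)
    T = x.shape[0]
    z1 = x * x
    z2 = x**3 - 3.0 * x * np.diag(x.T @ x / T)   # third-moment correction
    return z1, z2

def riv(X):
    """riv = x_hat, the fitted values from the artificial regression (30)
    of x on Z = [i_T, z1, z2]; a filtered version of the regressors."""
    x = X - X.mean(axis=0)
    z1, z2 = riv_instruments(X)
    Z = np.column_stack([np.ones(len(x)), z1, z2])
    gamma = np.linalg.lstsq(Z, x, rcond=None)[0]
    return Z @ gamma
```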
Implementing the robust instrumental variables riv [54], the GMM formulation that we refer to as RIV GMM may be written as

$$\hat{\beta}_{RIV\,GMM} = \arg\min_{\beta}\;\bar{G}(\beta)'W\bar{G}(\beta) \quad (31)$$

where the moment conditions $\bar{G}$ are based on the riv instruments and W is a weighting matrix that can be estimated using the Newey-West [36] HAC estimator.
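For linear moment conditions, the minimizer of (31) has a closed form, sketched below in Python under the assumption that Z holds the riv instruments (plus any additional instruments) and W is, e.g., the inverse of the HAC matrix computed earlier.

```python
import numpy as np

def riv_gmm(y, X, Z, W):
    """Linear GMM with moment conditions Z'(y - X b)/n and weighting
    matrix W: minimizes q(b) = G_bar' W G_bar, whose closed-form solution
    is b = (X'Z W Z'X)^{-1} X'Z W Z'y."""
    ZX, Zy = Z.T @ X, Z.T @ y
    return np.linalg.solve(ZX.T @ W @ ZX, ZX.T @ W @ Zy)
```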
References
1. Fama EF, French KR. A five-factor asset pricing model. Journal of Financial Economics. 2015;116:1–22.
2. Fama EF, French KR. Dissecting anomalies with a five-factor model. Review of Financial Studies. 2016;29(1):69–103.
3. Ferson WE, Schadt RW. Measuring fund strategy and performance in changing economic conditions. Journal of Finance. 1996;51(2):425–461.
4. Christopherson JA, Ferson WE, Glassman DA. Conditioning manager alphas on economic information: another look at the persistence of performance. Review of Financial Studies. 1998;11(1):111–142.
5. Ferson WE, Qian M. Conditional evaluation performance: revisited. The Research Foundation of CFA Institute (mimeo); 2004.
6. Kat HM, Miffre J. The impact of non-normality risks and tactical trading on hedge fund alphas. Journal of Alternative Investments. 2008;10(4):8–22.
7. Ang A, Kristensen D. Testing conditional factor models. Journal of Financial Economics. 2012;106(1):132–156.
8. Kursenko A. Empirical tests of multifactor capital asset pricing models and business cycles. U.S. stock market evidence before, during and after the great recession. M.Sc. thesis, Department of Economics, Norwegian University of Science and Technology; 2017.
9. Kursenko A. Empirical tests of multifactor capital asset pricing models and business cycles. U.S. stock market evidence before, during and after the great recession. M.Sc. thesis, Department of Economics, Norwegian University of Science and Technology; 2017.
10. Sharpe WF. Capital asset prices: a theory of market equilibrium under conditions of risk. Journal of Finance. 1964;19:425–442.
11. Lintner J. The valuation of risk assets and the selection of risky investments in stock portfolios and capital budgets. Review of Economics and Statistics. 1965;46:13–37.
12. Mossin J. Equilibrium in a capital asset market. Econometrica. 1966;34:768–783.
13. Ghysels E. On stable factor structures in the pricing of risk: do time-varying betas help or hurt? Journal of Finance. 1998;53(2):549–573.
14. Ghysels E, Marcellino M. Applied economic forecasting using time series methods. Oxford, UK: Oxford University Press; 2018.
15. Kalman RE. A new approach to linear filtering and prediction problems. Journal of Basic Engineering. 1960;82:35–45.
16. Jensen MC. The performance of mutual funds in the period 1945–64. Journal of Finance. 1968;23:389–416.
17. Pinto JE, Henry E, Robinson TR, Stowe JD, Wilcox SE. Equity asset valuation, 3rd ed. New York: John Wiley & Sons; 2015.
18. Pástor L, Stambaugh RF. Liquidity risk and expected stock returns. Journal of Political Economy. 2003;111:642–685.
19. Pástor L, Stambaugh RF. Liquidity risk after 20 years. Critical Finance Review. 2019;1–24.
20. Tobin J. A general equilibrium approach to monetary theory. Journal of Money, Credit and Banking. 1969;1(1):15–29.
21. Cochrane JH. Production-based asset pricing and the link between stock returns and economic fluctuations. Journal of Finance. 1991;46:209–237.
22. Cochrane JH. A cross-sectional test of an investment-based asset pricing model. Journal of Political Economy. 1996;104(3):572–621.
23. Cochrane JH. Presidential address: discount rates. Journal of Finance. 2011;66(4):1047–1108.
24. Cochrane JH. Macro-Finance. Review of Finance. 2017;21(3):945–985.
25. Erickson T, Whited TM. Treating measurement error in Tobin's q. Review of Financial Studies. 2012;25:1286–1329.
26. Damodaran A. Damodaran on valuation: Security analysis for investment and corporate finance, 2nd ed. New York: Wiley; 2006.
27. Pagan AR. Econometric issues in the analysis of regressions with generated regressors. International Economic Review. 1984;25:221–247.
28. Pagan AR. Two stage and related estimators and their applications. Review of Economic Studies. 1986;53:517–538.
29. Pagan AR, Ullah A. The econometric analysis of models with risk terms. Journal of Applied Econometrics. 1988;3:87–105.
30. Adrian T, Fleming M, Shachar O, Vogt E. Market liquidity after the financial crisis. Annual Review of Financial Economics. 2017;9:43–83.
31. Roll R. A critique of the asset pricing theory's tests Part I: on past and potential testability of the theory. Journal of Financial Economics. 1977;4(2):129–176.
32. Benninga S. Financial modeling, 4th ed. Cambridge, MA: MIT Press; 2014.
33. Greene WH. Econometric analysis, 8th ed. New York: Pearson; 2018.
34. Hahn J, Hausman J. Weak instruments: diagnosis and cures in empirical econometrics. American Economic Review. 2003;93(2):118–125.
35. Nelson C, Startz R. Some further results on the exact small sample properties of the instrumental variables estimator. Econometrica. 1990;58:967–976.
36. Newey WK, West KD. A simple, positive semi-definite, heteroskedasticity and autocorrelation consistent covariance matrix. Econometrica. 1987;55:703–708.
37. Heij C, de Boer P, Franses PH, Kloek T, van Dijk HK. Econometric methods with applications in business and economics. Oxford, UK: Oxford University Press; 2004.
38. Durbin J. Errors in variables. International Statistical Review. 1954;22:23–32.
39. Pal M. Consistent moment estimators of regression coefficients in the presence of errors in variables. Journal of Econometrics. 1980;14:349–364.
40. Theil H, Goldberger AS. On pure and mixed estimation in economics. International Economic Review. 1961;2:65–78.
41. Racicot FE. Engineering robust instruments for panel data regression models with errors in variables: a note. Applied Economics. 2015;47:981–989.
42. Newey WK, West KD. Automatic lag selection in covariance matrix estimation. Review of Economic Studies. 1994;61:631–653.
43. Bellman R. Dynamic programming. Princeton, NJ: Princeton University Press; 1957.
44. Abel AB. Optimal investment under uncertainty. American Economic Review. 1983;73:228–233.
45. Chow GC. Dynamic economics: optimization by the Lagrange method. New York: Oxford University Press; 1997.
46. Abel AB, Eberly JC. How Q and cash flow affect investment without frictions: an analytic explanation. Review of Economic Studies. 2011;78:1179–1200.
47. Hou K, Xue C, Zhang L. Digesting anomalies: an investment approach. Review of Financial Studies. 2015;28:650–705.
48. Champagne C, Coggins F, Chrétien S. Effects of pension fund freezing on firm performance and risk. Canadian Journal of Administrative Sciences. 2017;34(3):306–319.
49. Sodjahin A, Champagne C, Coggins F. Leading or lagging indicators of risk? The informational content of extra-financial performance scores. Journal of Asset Management. 2017;18(5):347–370.
50. Merton R. Option pricing when underlying stock returns are discontinuous. Journal of Financial Economics. 1976;3(1–2):125–144.
51. Bates DS. The crash of ‘87: was it expected? The evidence from options markets. Journal of Finance. 1991;46(3):1009–1044.
52. Hansen LP. Large sample properties of generalized method of moments estimators. Econometrica. 1982;50(4):1029–1054.
53. White H. A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity. Econometrica. 1980;48:817–838.
54. Racicot FE, Rentz WF, Théoret R. Testing the new Fama and French five-factor model with illiquidity: A panel data illustration. Finance. 2018;39(3):45–102.
55. Sargan JD. The estimation of economic relationships using instrumental variables. Econometrica. 1958;26(3):393–415.
56. Sargan JD. Testing for misspecification after estimating using instrumental variables. London School of Economics (mimeo); 1975.
57. Sargan JD. Lectures on advanced econometric theory. Oxford, UK: Basil Blackwell; 1988.
58. Wilkins EJ. A note on skewness and kurtosis. The Annals of Mathematical Statistics. 1944;15(3):333–335.
59. Schopflocher TP, Sullivan PJ. The relationship between skewness and kurtosis of a diffusing scalar. Boundary-Layer Meteorology. 2005;115:341–358.
60. Jarque CM, Bera AK. Efficient tests for normality, homoscedasticity and serial independence of regression residuals. Economics Letters. 1980;6:255–259.
61. Goyenko RY, Holden CW, Trzcinka CA. Do liquidity measures measure liquidity? Journal of Financial Economics. 2009;92:153–181.
62. Claessens S, Kose MA. Macroeconomic implications of financial imperfections: A survey. Bank for International Settlements (BIS) Working Papers No. 677; 2017.
63. Nelson DB. Conditional heteroskedasticity in asset returns: a new approach. Econometrica. 1991;59:347–370.
64. Black F. Studies in stock price volatility changes. Proceedings of the 1976 Business Meeting of the Business and Economic Statistics Section. American Statistical Association. 1976;177–181.
65. Wheelock DC, Wohar ME. Can the term spread predict output growth and recessions? A survey of the literature. Federal Reserve Bank of St. Louis Review. 2009;91(5, Part 1):419–449.
66. Adrian T, Kiff J, Shin HS. Liquidity, leverage, and regulation 10 years after the global financial crisis. Annual Review of Financial Economics. 2018;10:1–24.
67. Diebold FX, Yilmaz K. On the network topology of variance decompositions: measuring the connectedness of financial firms. Journal of Econometrics. 2014;182:119–134.
68. Granger CWJ. Non-linear models: where do we go next–time-varying parameter models? Studies in Nonlinear Dynamics & Econometrics. 2008;12(3):1–9.
69. Mandelbrot B. The variation of certain speculative prices. Journal of Business. 1963;36:394–419.
70. Taleb N. Dynamic hedging: Managing vanilla and exotic options. New York: John Wiley & Sons; 1997.
71. Amihud Y. Illiquidity and stock returns: cross-section and time series effects. Journal of Financial Markets. 2002;5(1):31–56.
72. Amihud Y. Illiquidity and stock returns: A revisit. Critical Finance Review. 2019; forthcoming:1–24.
73. Blau BM, Whitby RJ. Range-based volatility, expected stock returns, and the low volatility anomaly. PLoS ONE. 2017;12:1–19.
74. Fama EF, French KR. International tests of a five-factor asset pricing model. Journal of Financial Economics. 2017;123:441–463.
75. Fama EF, French KR. Choosing factors. Journal of Financial Economics. 2018;128:234–252.
76. Bekaert G, Hodrick R. International financial management, 3rd ed. Cambridge, UK: Cambridge University Press; 2018.
77. Beckmann J, Glycopantis D, Pilbeam K. The dollar-euro exchange rate and monetary fundamentals. Empirical Economics. 2018;54:1389–1410.
78. Theil H, Goldberger AS. On pure and mixed estimation in economics. International Economic Review. 1961;2:65–78.
79. Racicot FE. Erreurs de mesure sur les variables économiques et financières. La Revue des Sciences de Gestion. 2014;267-268(3–4):79–103.
80. Nelson C, Startz R. The distribution of the instrumental variables estimator and its t-ratio when the instrument is a poor one. Journal of Business. 1990;63:S125–S140.