An ongoing debate in finance is whether “active” investment strategies can outperform “passive” strategies. The empirical evidence in favor of passive strategies appearing in studies published in peer-reviewed scientific journals is overwhelming. For example, in studies of mutual fund performance, passive strategies almost always blow away active strategies. Similarly, the empirical evidence on frequency of trading by “retail” customers is that, on average, portfolio performance is inversely related to trading frequency; i.e., the more people trade, the worse they do. Even hedge funds chronically underperform passive investment strategies. For example, the authors of a 2011 Journal of Financial Economics (JFE) article entitled “Higher risk, lower returns: What hedge fund investors really earn” find that hedge fund returns are on the order of 3% to 7% lower than corresponding buy-and-hold fund returns, reliably lower than the return on the Standard & Poor’s (S&P) 500 index, and only marginally higher than the riskless rate of interest.
In my opinion, if you were to read only one book about finance, it would have to be “A Random Walk Down Wall Street: The Time-Tested Strategy for Successful Investing” by Burton G. Malkiel. Malkiel’s book (now in its 11th edition) provides a compelling argument in favor of efficient markets theory and investing in (passively managed) index funds.
Efficient markets theory implies that stock prices follow a random walk. These ideas were originally conceived by Professors Paul Samuelson and Eugene Fama in the 1960s and subsequently popularized by folks like Professor Malkiel. In Finance 4366, we rely extensively upon the notion that prices of speculative assets (e.g., stocks, bonds, commodities, foreign exchange, etc.) follow random walks as we consider the technical details associated with pricing and hedging risk using financial derivatives.
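To make the random walk idea concrete, here is a minimal Python sketch of a stock price that moves up or down by 1% each day with equal probability; the starting price, step size, and 250-day horizon are illustrative assumptions, not parameters from the text:

```python
import random

random.seed(42)

def simulate_random_walk(start_price=100.0, days=250, step=0.01):
    """Simulate a simple random walk: each day the price moves up or
    down by `step` (1% here) with equal probability. Illustrative only."""
    price = start_price
    path = [price]
    for _ in range(days):
        move = step if random.random() < 0.5 else -step
        price *= (1 + move)
        path.append(price)
    return path

path = simulate_random_walk()
print(f"Price after {len(path) - 1} trading days: {path[-1]:.2f}")
```

The key feature is that each day's move is drawn independently of the past, so past price changes are of no help in predicting future ones.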
For a non-technical introduction to forward and futures contracts, it’s hard to beat the following video tutorial on this topic:
During last Thursday’s Finance 4366 class meeting, I introduced the concept of statistical independence. This coming Tuesday, much of our class discussion will focus on the implications of statistical independence for probability distributions such as the binomial and normal distributions, which we will rely upon throughout the semester.
Whenever risks are statistically independent of each other, they are also uncorrelated; i.e., random variations in one variable are not meaningfully related to random variations in another. For example, auto accident risks are largely uncorrelated random variables; just because I happen to get into a car accident, this does not make it any more likely that you will suffer a similar fate (that is, unless we happen to run into each other!). Another example of statistical independence is a sequence of coin tosses. Just because a coin toss comes up “heads,” this does not make it any more likely that subsequent coin tosses will also come up “heads.”
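The coin toss example can be checked empirically. The Python sketch below simulates a long sequence of fair coin tosses and computes the sample correlation between each toss and the next one; the 100,000-toss sample size is an arbitrary choice for illustration:

```python
import random

random.seed(0)

# Simulate fair coin tosses: 1 for "heads", 0 for "tails".
flips = [1 if random.random() < 0.5 else 0 for _ in range(100_000)]

x = flips[:-1]  # toss t
y = flips[1:]   # toss t + 1

# Sample correlation between consecutive tosses, computed from scratch.
n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n
cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y)) / n
var_x = sum((a - mean_x) ** 2 for a in x) / n
var_y = sum((b - mean_y) ** 2 for b in y) / n
corr = cov / (var_x * var_y) ** 0.5

print(f"Sample correlation between consecutive tosses: {corr:.4f}")
```

Because the tosses are independent, the sample correlation comes out very close to zero: knowing one toss tells you essentially nothing about the next.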
Computationally, the joint probability of two independent events (e.g., that we both get into car accidents, or that heads comes up on two consecutive tosses of a coin) is equal to the product of the individual event probabilities. Suppose your probability of getting into an auto accident during 2017 is 1%, whereas mine is 2%. Then the likelihood that we both get into auto accidents during 2017 is .01 x .02 = .0002, or .02% (1/50th of 1 percent). Similarly, when tossing a “fair” coin, the probability of observing two “heads” in a row is .5 x .5 = .25, or 25%. The probability rule which emerges from these examples can be generalized as follows:
Suppose Xi and Xj are independent events with probabilities pi and pj respectively. Then the joint probability that both Xi and Xj occur is equal to pipj.
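The rule above translates directly into a few lines of Python; the helper name joint_probability is mine, not from the text, and the sketch reproduces both numerical examples:

```python
def joint_probability(probabilities):
    """Probability that all of the given independent events occur,
    i.e., the product of their individual probabilities."""
    result = 1.0
    for p in probabilities:
        result *= p
    return result

# Both of us get into auto accidents during 2017 (1% and 2% chances):
print(f"{joint_probability([0.01, 0.02]):.4%}")  # 0.0200%
# Two heads in a row with a fair coin:
print(f"{joint_probability([0.5, 0.5]):.0%}")    # 25%
```

Note that the function accepts any number of events, so the same multiplication rule extends to three or more independent events.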