# Art meets chemistry meets physics meets finance

Equations (1) and (2) in my “Geometric Brownian Motion Simulations” teaching note represent examples of so-called “Ito diffusions”. Interestingly, when looking at graphs produced by random number generators (such as those utilized by the Brownian Motion spreadsheet model used for this teaching note), people tend to “see” patterns in the data even when no such patterns actually exist.

Ito diffusions represent a specific type of reaction-diffusion process. The Wired Magazine article referenced below provides a layman’s explanation of reaction-diffusion processes in chemistry, which are characterized by reactive molecules that can diffuse between cells. A special case of a reaction-diffusion process is a “pure” diffusion process, where substances aren’t transformed into each other but nevertheless randomly spread out over a surface. While the reaction-diffusion process makes for much more aesthetically pleasing art, other so-called diffusion processes (e.g., diffusion of thermal energy as characterized by heat equations or movements of speculative asset prices as characterized by Ito diffusions) similarly generate (what appear to the naked eye to be) “patterns” from randomness.

Hypnotic Art Shows How Patterns Emerge From Randomness in Nature

These digital canvases represent British mathematician Alan Turing’s theory of morphogenesis.

# Intuition about arithmetic and geometric mean returns in finance

I should have posted this last week when we covered this in class, but better late than never!

In Sections 6 and 7 of Hull’s “Wiener Processes and Ito’s Lemma” chapter and my teaching note entitled “Applying Ito’s Lemma to determine the parameters of the probability distribution for the continuously compounded rate of return”, it is shown (via the application of Ito’s Lemma) that T-period log returns are normally distributed with mean (μ − σ²/2)T and variance σ²T. In the geometric Brownian motion equation (equation (6) in Hull’s chapter),

$dS/S = \mu dt + \sigma dz,$

μ corresponds to the expected return over a very “short” time interval, dt, expressed with a compounding frequency of dt; in other words, it corresponds to the arithmetic mean return. μ − σ²/2, on the other hand, corresponds to the expected return over a “long” period of time, T − t, expressed with continuous compounding; i.e., it corresponds to the geometric mean return.
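To make this concrete, here is a minimal simulation sketch (with hypothetical parameter values μ = 0.14, σ = 0.20, and T = 1, chosen for illustration only). It Euler-discretizes the geometric Brownian motion equation above and confirms that the sample mean and variance of the terminal log returns come out close to (μ − σ²/2)T and σ²T:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, T = 0.14, 0.20, 1.0    # hypothetical drift, volatility, and horizon
n_steps, n_paths = 252, 200_000
dt = T / n_steps

# Euler-discretize dS/S = mu*dt + sigma*dz and track the terminal log return
S = np.full(n_paths, 100.0)
for _ in range(n_steps):
    S *= 1 + mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

log_ret = np.log(S / 100.0)
print(log_ret.mean())   # ≈ (mu - sigma^2/2)*T = 0.12, not mu*T = 0.14
print(log_ret.var())    # ≈ sigma^2 * T = 0.04
```

Even though each short step has arithmetic drift μ, the average log return clusters around μ − σ²/2 = 0.12 rather than μ = 0.14, which is exactly the arithmetic/geometric distinction discussed above.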

To see the difference between the arithmetic and geometric mean return, consider the following numerical example. Suppose that returns in successive years are r(1) = 15%, r(2) = 20%, r(3) = 30%, r(4) = −20%, and r(5) = 25%. If you add these returns up and divide by 5, this gives you the arithmetic mean value of 14%. The arithmetic mean of 14% is analogous to μ. However, the annualized return that would actually be earned over the course of a five-year holding period is only 12.4%. This is the geometric mean return, which is analogous to μ − σ²/2. It is calculated with the following equation:

[(1.15)(1.20)(1.30)(0.80)(1.25)]^(1/5) – 1 = .124.

The “problem” with volatility is that the higher the volatility, the more it lowers the 5-year holding period return. We can create a mean-preserving spread of the (r(1) = 15%, r(2) = 20%, r(3) = 30%, r(4) = −20%, r(5) = 25%) return series by resetting r(1) to 0% and r(5) to 40%; both return series have arithmetic means of 14%, but the (r(1) = 0%, r(2) = 20%, r(3) = 30%, r(4) = −20%, r(5) = 40%) return series has a higher variance (.058 versus .039 for the original return series). This increase in variance results in a lower geometric mean:

[(1.00)(1.20)(1.30)(0.80)(1.40)]^(1/5) – 1 = .118.

On the other hand, if we lower volatility, then this increases the geometric mean return. To see this, start from the mean-preserving spread series, reset r(5) from 40% back to 25%, and reset r(4) from −20% to −5%. This generates the following return series: (r(1) = 0%, r(2) = 20%, r(3) = 30%, r(4) = −5%, r(5) = 25%), which has a 14% arithmetic mean and a variance of .024. With lower variance, the new return series has a higher geometric mean:

[(1.00)(1.20)(1.30)(0.95)(1.25)]^(1/5) – 1 = .131.
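The three return series above are easy to check with a few lines of Python (a minimal sketch; the `arith_geo` helper is mine, not part of the spreadsheet model, and it uses the sample variance with n − 1 in the denominator, as in the variance figures quoted above):

```python
import numpy as np

def arith_geo(returns):
    """Arithmetic mean, sample variance, and geometric mean of a return series."""
    r = np.array(returns)
    arith = r.mean()
    var = r.var(ddof=1)                          # sample variance (n - 1 denominator)
    geo = np.prod(1 + r) ** (1 / len(r)) - 1     # annualized holding-period return
    return arith, var, geo

for series in ([.15, .20, .30, -.20, .25],       # original series
               [.00, .20, .30, -.20, .40],       # mean-preserving spread
               [.00, .20, .30, -.05, .25]):      # lower-volatility series
    print([round(x, 3) for x in arith_geo(series)])
```

All three series print an arithmetic mean of 0.14, while the geometric means (0.124, 0.118, and 0.131) fall as the variance rises, confirming the volatility drag illustrated above.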

# Today’s class and what’s next

Notwithstanding the “mathiness” encountered during today’s class meeting, the result on the difference between arithmetic and geometric mean is of great practical significance.  As we analytically and numerically showed, the geometric mean return is particularly important in demonstrating the adverse effect that excess volatility has on the long-run value of an investment plan.

After covering the arithmetic/geometric mean topic, I attempted to analytically demonstrate how Ito’s Lemma can be used to infer the stochastic process for the “arbitrage-free” price of a forward contract.  This topic appears in the “Application to Forward Contracts” section of the Wiener Processes and Ito’s Lemma chapter. Specifically, given that the instantaneous change in the price (S) of the underlying asset evolves according to the geometric Brownian motion equation $dS = \mu Sdt + \sigma Sdz$, it follows that the instantaneous change in the price (F) of a forward contract that references S must evolve according to the following equation:

$dF = (\mu - r)Fdt + \sigma Fdz.$

The one-page PDF document entitled “Determining the Stochastic Process for a Forward Contract from Ito’s Lemma” provides the analytic details as to why and how this result obtains. Next Tuesday, I will begin class by covering this teaching note and then segueing into our initial foray into the “Black-Scholes-Merton Model” chapter. We will discuss the Geometric Brownian Motion, Ito’s Lemma, and Risk-Neutral Valuation reading in some detail and finish our time together by working on the Risk-Neutral Valuation Class Problems.
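As a quick numerical sanity check on the drift result (not a substitute for the analytic derivation in the teaching note), the sketch below uses hypothetical parameter values μ = 0.12, σ = 0.25, and r = 0.04. It simulates one small GBM step for S, computes the corresponding arbitrage-free forward prices F = S·e^(r(T−t)), and verifies that the estimated drift of F is close to μ − r:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, r = 0.12, 0.25, 0.04     # hypothetical drift, volatility, risk-free rate
T, dt = 1.0, 1 / 252                # forward maturity; one trading day
n_paths = 1_000_000

# Simulate one exact GBM step for the underlying asset price S
S0 = 100.0
z = rng.standard_normal(n_paths)
S_dt = S0 * np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)

# Arbitrage-free forward prices: F(t) = S(t) * exp(r * (T - t))
F0 = S0 * np.exp(r * T)
F_dt = S_dt * np.exp(r * (T - dt))

# Annualized drift of F; should come out close to mu - r = 0.08
drift = (F_dt / F0 - 1).mean() / dt
print(drift)
```

The drift of F is lower than the drift of S by exactly r, because the e^(r(T−t)) factor in the forward price decays at rate r as maturity approaches.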

# On the origins of the binomial option pricing model

In my previous posting, entitled “Historical context for the Black-Scholes-Merton option pricing model,” I provide links to the papers in which Black-Scholes and Merton presented the so-called “continuous time” version of the option pricing formula. Both of these papers were published in 1973 and won Scholes and Merton the Nobel Prize in 1997 (Black was not cited because he passed away in 1995, and Nobel Prizes cannot be awarded posthumously).

Six years after the Black-Scholes and Merton papers were published, Cox, Ross, and Rubinstein (CRR) published a paper entitled “Option Pricing: A Simplified Approach”. This paper is historically significant because it presents (as per its title) a much simpler method for pricing options which contains (as a special limiting case) the Black-Scholes-Merton formula. We began our analysis of options by first studying CRR’s binomial model because, pedagogically, this makes the economics of option pricing much easier to comprehend. Furthermore, such an approach removes much (if not most) of the mystery and complexity of the Black-Scholes-Merton model.
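The “special limiting case” relationship can be illustrated with a short script (a sketch using hypothetical contract parameters of my choosing, not figures from the CRR paper): as the number of binomial steps grows, the CRR price of a European call converges to the Black-Scholes-Merton price.

```python
import math

def bsm_call(S, K, r, sigma, T):
    # Black-Scholes-Merton European call price
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))   # standard normal CDF
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def crr_call(S, K, r, sigma, T, n):
    # Cox-Ross-Rubinstein binomial European call price with n steps
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1 / u
    p = (math.exp(r * dt) - d) / (u - d)    # risk-neutral up probability
    total = 0.0
    for j in range(n + 1):
        prob = math.comb(n, j) * p**j * (1 - p)**(n - j)
        total += prob * max(S * u**j * d**(n - j) - K, 0.0)
    return math.exp(-r * T) * total

for n in (10, 100, 1000):
    print(n, crr_call(100, 100, 0.05, 0.2, 1.0, n))
print("BSM:", bsm_call(100, 100, 0.05, 0.2, 1.0))   # ≈ 10.45
```

With 1,000 steps, the binomial price agrees with the closed-form Black-Scholes-Merton price to within about a penny.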

# Historical context for the Black-Scholes-Merton option pricing model

Although we won’t get into the “gory” details on the famous Black-Scholes-Merton option pricing model until sometime later this semester when we cover Hull’s chapter entitled “The Black-Scholes-Merton Model” and my teaching note entitled “Derivation and Comparative Statics of the Black-Scholes Call and Put Option Pricing Equations“, I’d like to call your attention to the fact that the original papers by Black-Scholes and Merton are available on the web:

The Black-Scholes paper originally appeared in the Journal of Political Economy (Vol. 81, No. 3 (May – Jun., 1973), pp. 637-654). The Merton paper appeared at around the same time in The Bell Journal of Economics and Management Science (now called The Rand Journal). Coincidentally, the publication dates for these articles on pricing options roughly coincide with the founding of the Chicago Board Options Exchange, which was the first marketplace established for the purpose of trading listed options.

Apparently neither Black and Scholes nor Merton ever gave serious consideration to publishing their famous option pricing articles in a finance journal, instead choosing two top economics journals; specifically, the Journal of Political Economy and The Bell Journal of Economics and Management Science. Mehrling (2005) notes that Black and Scholes:

“… could have tried finance journals, but the kind of finance they were doing was outside the rubric of finance as it was then organized. There was a reason for the economist’s low opinion of finance, and that reason was the low analytical level of most of the work being done in the field. Finance was at that time substantially a descriptive field, involved mainly with recording the range of real-world practice and summarizing it in rules of thumb rather than analytical principles and models.”

Another interesting anecdote about Black-Scholes is the difficulty that they experienced in getting their paper published in the first place. Devlin (1997) notes: “So revolutionary was the very idea that you could use mathematics to price derivatives that initially Black and Scholes had difficulty publishing their work. When they first tried in 1970, Chicago University’s Journal of Political Economy and Harvard’s Review of Economics and Statistics both rejected the paper without even bothering to have it refereed. It was only in 1973, after some influential members of the Chicago faculty put pressure on the journal editors, that the Journal of Political Economy published the paper.”

References

Devlin, K., 1997, “A Nobel Formula”.

Mehrling, P., 2005, Fischer Black and the Revolutionary Idea of Finance (Hoboken, NJ: John Wiley & Sons, Inc.).

# The Birthday Paradox: an interesting probability problem involving “statistically independent” events

This past Thursday, we discussed the concept of statistical independence and focused attention on some important implications of statistical independence for probability distributions such as the binomial and normal distributions.

Here, I’d like to call everyone’s attention to an interesting (non-finance) probability problem related to statistical independence. Specifically, consider the so-called “Birthday Paradox”. The Birthday Paradox pertains to the probability that in a set of randomly chosen people, some pair of them will have the same birthday. Counter-intuitively, in a group of 23 randomly chosen people, there is slightly more than a 50% probability that some pair of them will both have been born on the same day.

To compute the probability that two people in a group of n people share the same birthday, we disregard variations in the distribution, such as leap years, twins, and seasonal or weekday variations, and assume that the 365 possible birthdays are equally likely.[1] Thus, we assume that birth dates are statistically independent events. Consequently, the probability that two randomly chosen people do not share the same birthday is 364/365. By the combinatorial formula, the number of unique pairs in a group of n people is n!/[2!(n-2)!] = n(n-1)/2. Treating these pairs as (approximately) independent, the probability that no pair in a group of n people shares the same birthday is approximately p(n) = (364/365)^[n(n-1)/2]. The event that at least two of the n persons have the same birthday is complementary to all n birthdays being different; therefore, its probability is p’(n) = 1 – (364/365)^[n(n-1)/2].

Given these assumptions, suppose that we are interested in determining how many randomly chosen people are needed in order for there to be a 50% probability that at least two persons share the same birthday. In other words, we are interested in finding the value of n which causes p(n) to equal 0.50. Setting 0.50 = (364/365)^[n(n-1)/2], taking natural logs of both sides, and rearranging, we obtain (ln 0.50)/(ln 364/365) = n(n-1)/2. Thus n(n-1) = 505.304; solving for n, we find that n is approximately equal to 23.[2]
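Because the pairwise formula treats the n(n−1)/2 pairs as if they were independent (they are not, exactly), it is an approximation. The short script below compares it against the exact product formula for all n birthdays being distinct:

```python
def p_match_approx(n):
    # Pairwise approximation used above: treat the n(n-1)/2 pairs as independent
    return 1 - (364 / 365) ** (n * (n - 1) / 2)

def p_match_exact(n):
    # Exact complement: probability that all n birthdays are distinct
    p_distinct = 1.0
    for k in range(n):
        p_distinct *= (365 - k) / 365
    return 1 - p_distinct

print(p_match_exact(23))    # ≈ 0.507
print(p_match_approx(23))   # ≈ 0.500
print(p_match_exact(26), p_match_approx(26))   # footnote [2]: ≈ 0.598 vs ≈ 0.590
```

For group sizes in this range the two formulas agree to within about a percentage point, so the approximation is harmless for the purposes of the paradox, and n = 23 remains the smallest group size with a better-than-even chance of a shared birthday.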

The following graph illustrates how the probability that a pair of people share the same birthday varies as the number of people in the sample increases:

[1] It is worth noting that real-life birthday distributions are not uniform, since not all dates are equally likely. For example, in the northern hemisphere, many children are born in the summer, especially during the months of August and September. In the United States, many children are conceived around the holidays of Christmas and New Year’s Day. Also, because hospitals rarely schedule C-sections and induced labor on the weekend, more Americans are born on Mondays and Tuesdays than on weekends; when many of the people in a group share a birth year (e.g., a class in a school), this creates a tendency toward particular dates. Both of these factors tend to increase the chance of identical birth dates, since a denser subset has more possible pairs (in the extreme case in which everyone was born on only three days of the week, there would obviously be many identical birthdays!).

[2]Note that since 26 students are enrolled in Finance 4366 this semester, this implies that the probability that two Finance 4366 students share the same birthday is roughly p’(26) = 1 – (364/365)^[26(25)/2] = 59%.

# Things That Make You Go Hmmm…

Financial historian John Steele Gordon’s Wall Street Journal essay provides some particularly fascinating examples of rare events from the 19th, 20th, and 21st centuries!

Odds are these historical coincidences will strike you as unlikely.

# A much more “rigorous” way to calculate 1+1 = 2

One of my Baylor faculty colleagues pointed out an entertaining and somewhat whimsical parody on the use of math in applied economics and finance which first appeared in the Nov.-Dec. 1970 issue of The Journal of Political Economy, entitled “A First Lesson in Econometrics” (at least I found it entertaining :-)). Anyway, check it out!

File Attachment: JPEMathParody.pdf (30 KB)

# Visualizing Taylor polynomial approximations

In his video lesson entitled “Visualizing Taylor polynomial approximations”, Sal Khan essentially replicates the tail end of last Thursday’s Finance 4366 class meeting, in which we approximated y = eˣ with a Taylor polynomial centered at x = 0. Sal approximates y = eˣ with a Taylor polynomial centered at x = 3 instead of x = 0, but the same insight obtains in both cases: one can approximate functions using Taylor polynomials, and the accuracy of the approximation increases as the order of the polynomial increases (see pp. 19-25 in my Mathematics Tutorial lecture note if you wish to review what we did in class last Thursday).
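For anyone who wants to experiment on their own, the following sketch (centered at x = 0, as in class; the `taylor_exp` helper is mine) computes Taylor polynomial approximations of eˣ and shows the approximation error shrinking as the order of the polynomial increases:

```python
import math

def taylor_exp(x, order, center=0.0):
    # Taylor polynomial of e^x of the given order, centered at `center`:
    # sum of e^c * (x - c)^k / k! for k = 0, ..., order
    return sum(math.exp(center) * (x - center) ** k / math.factorial(k)
               for k in range(order + 1))

x = 1.0
for order in (1, 2, 4, 8):
    approx = taylor_exp(x, order)
    print(order, approx, abs(approx - math.exp(x)))   # error shrinks with order
```

Changing `center=0.0` to `center=3.0` reproduces the setup in Sal’s video; the convergence behavior is the same.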