## Friday, January 26, 2018

Published 7:45 PM

# Monte Carlo Simulations And Pi

Monte Carlo approximations of values are a really cool concept, so I went ahead and visualized possibly the most well-known one...

#### Background

Take a square. Draw a circle inside it so that the circle is completely contained and the square's side length is 2*r, where r is the circle's radius. The square's area is 4*r^2 and the circle's area is pi*r^2.

Now...randomly drop items into the square. In the limit of infinitely many items, the fraction that lands inside the circle equals the circle's area divided by the square's area, and the more items you drop, the closer you get to that value. Since the ratio of the areas is (pi*r^2)/(4*r^2) = pi/4, the fraction of items landing inside the circle approaches pi/4. Multiply that by 4, and you get a value for pi.
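The post doesn't include its code, but a minimal sketch of the idea might look like this (the function name and seed are my own choices, not from the original):

```python
import random

def estimate_pi(n, seed=0):
    """Estimate pi by dropping n random points in the unit square
    and counting how many land inside the quarter circle x^2 + y^2 <= 1."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # fraction inside ~ pi/4, so multiply by 4
    return 4 * inside / n

print(estimate_pi(100_000))
```

Using a quarter circle in the unit square is equivalent to the full circle-in-square picture by symmetry, and it keeps the code to a single `random()` call per coordinate.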

#### Simulations

First, I wanted to show this somehow. I settled on simulating a progressively larger number of tosses and plotting the results. The gif below shows the first quadrant of a circle of radius 1 inside a square of side length 2 with more and more dots being generated. The error shown at the top is equal to 4*(fraction in circle)/pi - 1, which is just another way of writing the relative error in the estimate of pi.
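A sketch of that progressive run, checking the error at the same toss counts used later in the post (checkpoint values and variable names are my own):

```python
import math
import random

rng = random.Random(42)
checkpoints = [100, 1_000, 10_000, 100_000]
inside = 0
n = 0
errors = {}
for target in checkpoints:
    # keep tossing until we reach the next checkpoint
    while n < target:
        x, y = rng.random(), rng.random()
        inside += x * x + y * y <= 1.0
        n += 1
    estimate = 4 * inside / n
    # same error metric as the gif: 4*(fraction in circle)/pi - 1
    errors[n] = estimate / math.pi - 1
    print(f"{n:>7} tosses: pi ~ {estimate:.5f}, error = {errors[n]:+.5f}")
```

Reusing the running `inside` count between checkpoints means one continuous simulation is sampled at several sizes, which is exactly what the gif animates.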

A more interesting question is...how repeatable are these results? To find out, I ran the simulation 5000 times and tracked the error in each simulation after 100, 1000, 10000, and 100000 trials. The histograms showing the distributions of those errors are below:

Since each toss is independent, the errors are approximately normally distributed, as the central limit theorem predicts. Also, the spread in errors gets smaller as the number of trials goes up. Because the standard deviation of an average over n independent tosses scales as 1/(square root of n), you would expect this spread to go down with the square root of the number of trials. Comparing the ratios of our standard deviations, we find that:

| trial ratio | actual | expected |
| --- | --- | --- |
| 100 vs 1000 | 3.1498 | 3.1623 |
| 1000 vs 10000 | 3.2450 | 3.1623 |
| 10000 vs 100000 | 3.0867 | 3.1623 |

Each factor of 10 increase in number of trials gives you a drop of approximately 'square root of 10' in the standard deviation of the error distribution. You can use this to put bounds on how many trials you need to get a given precision. To see how that looks, I've included a picture with the run from the gif at the top plotted with the range implied by twice the standard deviation from the 5,000 simulations. Note that ~95% of the points should lie between the two standard deviation curves.
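The bound can also be worked out without simulating at all: each toss is a Bernoulli trial with success probability p = pi/4, so the fraction inside has standard deviation sqrt(p*(1-p)/n), and the relative error of the pi estimate has standard deviation 4*sqrt(p*(1-p)/n)/pi. Inverting that gives the trial count needed for a target precision (the function name and two-sigma convention here are my own framing):

```python
import math

P = math.pi / 4  # probability a random point lands inside the quarter circle

def error_std(n):
    """Standard deviation of the relative error after n tosses."""
    return 4 * math.sqrt(P * (1 - P) / n) / math.pi

def trials_for(target, sigmas=2):
    """Smallest n such that sigmas * error_std(n) <= target,
    i.e. ~95% of runs land within `target` relative error for sigmas=2."""
    return math.ceil((sigmas * 4 * math.sqrt(P * (1 - P)) / math.pi / target) ** 2)

print(trials_for(0.001))  # tosses needed for ~95% of runs within 0.1%
```

The 1/sqrt(n) scaling means each extra digit of precision costs a factor of 100 more tosses, which is why Monte Carlo is a fun demonstration but a terrible way to actually compute pi.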