One of my modules has recently involved writing a set of Monte Carlo models. I’d heard of these mystical things before, but never implemented one myself (or understood the statistics behind them). I’ve become fairly interested in how they work now, but one thing I didn’t understand was how the number of random numbers you use affects the final result. This seemed like a fairly easy thing to calculate and graph, so I bodged some outputs into my code and wrote a short Python script to do a few hundred runs and see what came out the other end.
What came out, I really wasn’t expecting. I assumed the uncertainty (or variance) would decrease as an exponential curve as I increased the iterations; what actually occurs can be seen in the graph below.
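For anyone wanting to poke at this themselves, here’s a toy sketch of the same experiment (this is *not* my actual model, just a stand-in that estimates π by throwing darts at a quarter circle): run the simulation many times at each iteration count and take the standard deviation of the estimates as the "uncertainty". For a well-behaved Monte Carlo estimator the standard error is expected to fall off like 1/√N rather than exponentially.

```python
import numpy as np

def mc_pi(n, rng):
    # Estimate pi by sampling n uniform points in the unit square
    # and counting the fraction that land inside the quarter circle.
    x, y = rng.random(n), rng.random(n)
    return 4.0 * np.mean(x * x + y * y <= 1.0)

def spread(n, trials, rng):
    # "Uncertainty" at a given iteration count: the standard
    # deviation of the estimator over repeated independent runs.
    return np.std([mc_pi(n, rng) for _ in range(trials)])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    for n in (100, 1_000, 10_000, 100_000):
        # Each tenfold increase in n should shrink the spread by
        # roughly sqrt(10), consistent with 1/sqrt(N) scaling.
        print(n, spread(n, trials=200, rng=rng))
```

Plotting `spread` against `n` from something like this gives the smooth tail of the curve; it doesn’t reproduce the wiggle I saw, which is part of why I suspect it’s something specific to my model.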
That horrible wiggly bit at the beginning was completely unexpected. I am now wondering if it’s a sign that my data hasn’t been thermalized properly.
Anyone out there with any experience of this want to comment?