Decisions Using Monte Carlo: More Samples May Not Be Better

The numerical method we call “Monte Carlo” (or MC for short) is considered the gold-standard tool for risk management. Tools that will perform a Monte Carlo analysis are numerous: some are quite expensive, some are designed to work alongside other tools such as Excel, and some are free. And if you don’t want to buy a tool at all, MC is easy to program yourself, even for a beginning programmer.

Experienced users of MC know that using it is not all that simple. Even many novice MC users notice that if they repeat their MC run, they get a different answer each time. This phenomenon is called “convergence error,” and there are hundreds if not thousands of papers published on it, but with no really clear answers to help in decision making. We do know rather conclusively that if you increase the number of MC samples, the standard error across repeated MC answers decreases. We also know that if you are using a random number generator to draw your MC samples, fixing the seed for the algorithm stops the variation in MC answers. This, however, merely freezes the error in place; it gives you no clue how big or small your MC error might be.
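Both effects are easy to see for yourself. The sketch below uses a toy problem invented for illustration (estimating the mean of X² for X uniform on [0, 1], whose true answer is 1/3, which is not a problem from the workshop): repeating an unseeded MC run gives a spread of answers that shrinks as the sample count grows, while fixing the seed makes the answer repeat exactly without making it any more correct.

```python
import numpy as np

# Toy MC problem (illustrative only): estimate E[X^2] for X ~ Uniform(0, 1).
# The true answer is 1/3, so we can see the MC error directly.
TRUE_VALUE = 1.0 / 3.0

def mc_estimate(n_samples, rng):
    """One Monte Carlo run: the average of n_samples draws of X^2."""
    x = rng.uniform(0.0, 1.0, size=n_samples)
    return float(np.mean(x ** 2))

if __name__ == "__main__":
    # Repeat the MC many times at two sample sizes and compare the
    # spread of the answers -- the "convergence error" in action.
    rng = np.random.default_rng()  # unseeded: answers vary run to run
    for n in (100, 10_000):
        estimates = [mc_estimate(n, rng) for _ in range(1_000)]
        print(f"n={n:6d}  spread (std) of repeated answers = "
              f"{np.std(estimates):.5f}")

    # Fixing the seed makes the answer repeatable, but the error is
    # still there -- frozen in place, just no longer visible as variation.
    rng_a = np.random.default_rng(seed=42)
    rng_b = np.random.default_rng(seed=42)
    print(mc_estimate(1_000, rng_a) == mc_estimate(1_000, rng_b))  # True
```

Running this, the spread of answers at 10,000 samples is roughly a tenth of the spread at 100 samples (the familiar 1/√N scaling), while the seeded runs agree to the last bit despite being no closer to 1/3 than any other single run.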

The problems in using MC for decision making always seem to come down to how many samples you need so that MC errors will not cause you to make a bad decision. Of all the things that can cause a bad decision, we really depend on our computers and algorithms not to be one of them. There is no clean answer to this dilemma, and the standard guidance to always use more samples may not be feasible; large numbers of MC samples for most real-world decision problems are very expensive and time consuming to generate. Worse yet, it is possible that by using more samples the MC error actually increases. (Most MC users don’t know that dirty little secret.) And even worse, you have no way to know whether the error increased or decreased when you used more samples.

In this workshop, we will go through the MC equations and explain how they work. We will explore just how big or small MC errors can be, and how the size of the error changes as a function of the problem and the number of samples used. To do this, we will look at 2.4 million repeats of 19 different MC problems for which we know the true answers. Then we will present a brand-new method to bound the size of MC errors. This leads to a decision process using MC results in which you can actually use fewer MC samples and yet always make good decisions.

Using more MC samples may not be better.

About The Speakers

Mark Powell

Consultant, Attwater Consulting

Mark Powell has practiced Risk Management for over 45 years in a wide variety of technical environments including defense, aerospace, energy, and commercial. His roles in these environments have included project manager, engineering manager, chief systems engineer, and research scientist.