In his magnum opus *Risk Analysis – A Quantitative Guide*, David Vose identifies four important probability theorems for risk analysis. Each of our heroes is connected with one of these theorems.

- Strong law of large numbers (Chebyshev’s Inequality)
- Central Limit Theorem (Laplace)
- Binomial Theorem (Bernoulli)
- Bayes' Theorem (Bayes)

Pafnuty Chebyshev was one of the founding fathers of Russian mathematics in the nineteenth century. The strong law of large numbers is the principle behind Monte Carlo simulation: the greater the number of iterations (or the sample size), the closer the risk analysis output distribution will be to the theoretical distribution. This is intuitively obvious, but it can also be proven mathematically.
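The convergence described above can be illustrated with a minimal Monte Carlo sketch (the uniform distribution and iteration counts below are chosen purely for illustration):

```python
import random

def monte_carlo_mean(n_iterations, seed=42):
    """Estimate the mean of a uniform(0, 1) distribution by random sampling."""
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(n_iterations)) / n_iterations

# By the law of large numbers the estimate drifts towards the
# theoretical mean of 0.5 as the number of iterations grows.
for n in (100, 10_000, 1_000_000):
    print(n, monte_carlo_mean(n))
```

The same effect applies to any output of a risk model: more iterations give a sampled distribution that sits closer to the underlying one.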

Pierre-Simon Laplace was a brilliant scholar, the French equivalent of Newton, with achievements in many spheres. In statistics he refined Bayes' interpretation of probability and developed the Central Limit Theorem, which states that the sum of many independent random variables tends towards a normal (or, for products, log-normal) distribution. Moreover, as more independent variables are averaged together, the variance of the average decreases.
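A quick numerical sketch of the Central Limit Theorem: sums of 30 uniform(0, 1) variables should cluster around a mean of 15 with variance 30/12 = 2.5, in a roughly bell-shaped pattern (the distribution and counts here are illustrative choices, not from Vose's text):

```python
import random
import statistics

def sum_of_uniforms(k, rng):
    """Sum of k independent uniform(0, 1) random variables."""
    return sum(rng.random() for _ in range(k))

rng = random.Random(1)
# Draw many such sums; by the CLT they are approximately normal with
# mean k/2 = 15 and standard deviation sqrt(k/12) ≈ 1.58.
samples = [sum_of_uniforms(30, rng) for _ in range(20_000)]
print(statistics.mean(samples), statistics.stdev(samples))
```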

The Bernoulli family produced many prominent mathematicians, including Jacob. Jacob Bernoulli described Bernoulli trials in his *Ars Conjectandi*, published in 1713, eight years after his death in 1705. The Binomial Theorem is used in probability analysis to calculate the results of Bernoulli trials. In risk management terms, a risk has one of two possible outcomes: it either impacts or it does not. This is a Bernoulli trial.
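The binomial calculation for a set of such trials can be sketched as follows (the example numbers, ten risks with a 30% impact probability each, are hypothetical):

```python
from math import comb

def binomial_pmf(n, k, p):
    """Probability of exactly k impacts in n independent Bernoulli
    trials, each with impact probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Probability that exactly 2 of 10 independent risks impact, p = 0.3:
print(binomial_pmf(10, 2, 0.3))
```

Summing the terms over all possible k recovers 1, which is exactly the Binomial Theorem applied to (p + (1 − p))^n.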

Rev. Thomas Bayes was an English Presbyterian minister in the eighteenth century. His theorem was published by the Royal Society after his death. Bayes' Theorem describes the probability of an event based on conditions that might be related to the event. It shows how the probability that a risk may occur is updated by a new piece of evidence.
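The update the theorem describes can be sketched directly; the probabilities below (a 10% prior, evidence seen 80% of the time when the risk is real and 20% when it is not) are hypothetical numbers chosen for illustration:

```python
def bayes_update(prior, p_evidence_given_risk, p_evidence_given_no_risk):
    """Posterior probability that the risk occurs, given the evidence,
    via Bayes' Theorem: P(R|E) = P(E|R)P(R) / P(E)."""
    numerator = p_evidence_given_risk * prior
    denominator = numerator + p_evidence_given_no_risk * (1 - prior)
    return numerator / denominator

# One piece of supporting evidence lifts the probability from 10% to ~31%.
print(bayes_update(0.10, 0.80, 0.20))
```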