Bayes' Theorem




Although Bayesian statistics is often regarded as something new, the Reverend Thomas Bayes' theorem was first published in 1763. It is quite simple. It is a way of calculating the odds of something happening if you already know something else. We use the principle, naturally and often subconsciously, all the time in making decisions about what to do.

Bayesian statistical methods are increasingly being used to make decisions about healthcare, particularly at an economic level. They are also coming into drug trials because they can reach conclusions more quickly, with fewer trial participants.

Bayesian statistics is based on an acknowledgment that we already know, or at least believe, something before we start a trial. This is known as the "prior", expressed as a probability distribution.

Probability is defined as a degree of individual belief, ranging from 0 (complete disbelief) to 1 (certainty). Any uncertainty is located philosophically in the human mind, not in the data itself. The trial then provides some new information. Bayes' theorem - a simple mathematical formula - is then used to synthesize the new information with the prior, forming the "posterior", also expressed as a probability distribution. The theorem uses simple addition and multiplication to apply probability laws familiar to any gambler. Bayesian statistical inferences are based on the posterior, which combines information from both the prior and the trial data.
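
The arithmetic involved is easy to show. Here is a minimal sketch in Python, using invented diagnostic-test numbers purely for illustration, that applies Bayes' theorem to update a prior belief about a disease after a positive test:

```python
# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B).
# All numbers below are invented for illustration; they are not
# figures from any real trial.

prior = 0.01            # prior belief: 1% of people have the disease
sensitivity = 0.95      # P(positive test | disease)
false_positive = 0.05   # P(positive test | no disease)

# Total probability of a positive test, P(B), summed over both possibilities.
p_positive = sensitivity * prior + false_positive * (1 - prior)

# Posterior belief: probability of disease given the positive test.
posterior = sensitivity * prior / p_positive
print(f"P(disease | positive test) = {posterior:.3f}")  # about 0.161
```

The posterior (about 16%) synthesizes the alarming test result with the reassuring prior, exactly the combination described above.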

Bayesian statistics is focused on answering the question "How does this new evidence change what we believe?" It explicitly allows all available evidence to be taken into account. Because it acknowledges that prior belief can vary, different forms of evidence can be included in or excluded from the prior, and given different weightings. Using a variety of different priors means the same data can be interpreted in many ways. This allows numerous models to be run, testing the effects of different assumptions and viewpoints on the observed data.
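
A minimal sketch of that idea, using the standard Beta-Binomial conjugate pair so that the update is just parameter arithmetic (the three priors below are invented for illustration):

```python
# Same data, different priors, different posteriors.
# A Beta(a, b) prior updated with h heads and t tails gives a
# Beta(a + h, b + t) posterior (the Beta-Binomial conjugate pair).

heads, tails = 7, 3  # observed data: 7 heads in 10 tosses

priors = {
    "uniform (no opinion)":   (1, 1),
    "strongly believes fair": (50, 50),
    "suspects heads bias":    (8, 2),
}

for name, (a, b) in priors.items():
    post_a, post_b = a + heads, b + tails
    mean = post_a / (post_a + post_b)  # posterior mean of P(heads)
    print(f"{name}: posterior mean P(heads) = {mean:.3f}")
```

Three observers watch the same ten tosses, yet finish with posterior means of roughly 0.62, 0.52 and 0.75, because they started from different priors.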

Although Bayes' theorem itself is simple, the mathematical models built up to incorporate all these different factors quickly become so complex as to be impenetrable. The computations involved in Markov chain Monte Carlo (MCMC) simulation are such that Bayesian statistics in practice becomes a "black box" exercise. Various inputs produce various outputs, and it is almost impossible to work out what is going on in the middle. This induces a mistrust of the process in those accustomed to working with the relatively simple, transparent and fixed formulae of conventional statistical tests.
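
To give a feel for what goes on inside that box, here is a deliberately tiny Metropolis sampler - a toy stand-in for the MCMC machinery, not what any real package does - estimating a coin's heads-probability from 7 heads in 10 tosses under a flat prior:

```python
import math
import random

heads, tails = 7, 3  # observed data

def log_posterior(p):
    """Log of (flat prior) x (binomial likelihood), up to a constant."""
    if not 0 < p < 1:
        return -math.inf  # zero prior probability outside (0, 1)
    return heads * math.log(p) + tails * math.log(1 - p)

random.seed(1)
p, samples = 0.5, []
for _ in range(20_000):
    proposal = p + random.gauss(0, 0.1)  # propose a small random step
    delta = log_posterior(proposal) - log_posterior(p)
    # Accept the step with probability min(1, posterior ratio).
    if random.random() < math.exp(min(0.0, delta)):
        p = proposal
    samples.append(p)

kept = samples[2_000:]  # discard the warm-up portion of the chain
print(f"posterior mean of P(heads) ~ {sum(kept) / len(kept):.3f}")  # near 0.667
```

A real model chains millions of such steps over dozens of parameters, which is why the overall process is so hard to inspect.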

There are two major differences between Bayesian and conventional frequentist statistics.

First, probability itself. Both Bayesian and conventional statistics consider probability as varying from 0 to 1. In Bayesian statistics, probability is defined as the degree of personal belief, where 1 equals certainty or truth.

Conventional frequentist probability is defined in terms of a converging frequency, based on an infinite number of repetitions of the event that could have occurred. For example, tossing a coin gives a probability of 0.5 for either heads or tails, but you would have to toss the coin many times to see the proportion of heads and tails converge toward that figure. The uncertainty is considered to reside in the data, as an imperfect sample, not in the mind of the observer. The observer is considered neutral and impartial at all times. He does not need to have any belief; he simply observes, objectively, and his place could be taken by any other neutral observer.
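
A quick simulation (a sketch assuming an exactly fair coin) shows how gradual that convergence is:

```python
import random

# Toss a simulated fair coin and report the running proportion of heads.
random.seed(0)
heads, tosses = 0, 0
for checkpoint in (10, 100, 1_000, 10_000, 100_000):
    while tosses < checkpoint:
        heads += random.random() < 0.5  # one toss: True counts as 1
        tosses += 1
    print(f"after {tosses:>6} tosses: proportion of heads = {heads / tosses:.4f}")
```

The proportion wanders noticeably at 10 or 100 tosses and only settles near 0.5 after many thousands.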

The difference between the two approaches can be illustrated by a two-headed coin. The Bayesian statistician would just look at the coin and say "it's a two-headed coin - I'm certain". The conventional statistician would refuse to cheat by using the prior information. He would just keep tossing it, getting head after head after head until he got bored with the repetition. He would then say that, if this were a normal coin, there was only a 0.00000-whatever chance that he would have got the number of heads he got. Of course, no real statistician, conventional or otherwise, would in reality be that daft, and both would simply state that this problem is not suitable for any form of statistical analysis.
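
For the record, the "0.00000-whatever" is trivial to compute: under the fair-coin hypothesis, the chance of n heads in a row is 0.5 raised to the power n.

```python
# Probability of an unbroken run of heads from a fair coin.
for n in (10, 20, 50):
    print(f"P({n} heads in a row | fair coin) = {0.5 ** n:.2e}")
```

Twenty consecutive heads is already about a one-in-a-million event; fifty is around 1 in 10^15.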

Second, the explicit incorporation of prior belief in the Bayesian method. Conventional statistics tries to stick to objective data, and does not attempt to incorporate any prior belief. Some elements of prior belief are, however, implied, for example when deciding whether to use a one-tailed or two-tailed test. The Bayesians point out that conventional statisticians, while deluding themselves that they are being objective, do in fact admit prior knowledge, but in a patchy, inconsistent way that is not formally acknowledged and quantified. It is generally agreed that conventional statistical tests have less inferential power, but are more repeatable and more objective. Scientists value objectivity and repeatability very highly. They are always suspicious, and often disparaging, of anything that smacks of the fudge factor.

The most fundamental criticism of the Bayesian method is that it is subjective. In particular, the prior belief is subjective. One person's prior belief differs from another's, so the posterior distribution will also vary from one person to another, making any inference derived from it subjective. Proponents of the Bayesian method retort that scientists are not as objective as they think; they merely pretend to be, and so they may as well use the more powerful method. In 2008, the tide is flowing in favor of the Bayesians.
