Introduction
This screen takes prior probabilities for a set of alternative hypotheses, conditional probabilities for several possible outcomes, and information about which outcome(s) occurred. It produces revised probabilities for the original hypotheses.
Bayes' Theorem provides a way to apply quantitative reasoning to what we normally think of as "the scientific method". When several alternative hypotheses are competing for our belief, we test them by deducing consequences of each one, then conducting experimental tests to observe whether or not those consequences actually occur. If a hypothesis predicts that something should occur, and that thing does occur, it strengthens our belief in the truth of the hypothesis. Conversely, an observation that contradicts the prediction weakens (or destroys) our confidence in the hypothesis.
In many situations, the predictions involve probabilities: one hypothesis might predict that a certain outcome has a 30% chance of occurring, while a competing hypothesis might predict a 50% chance of the same outcome. In these situations, the occurrence or non-occurrence of the outcome shifts our relative degree of belief from one hypothesis toward another. Bayes' Theorem provides a way to calculate these "degree of belief" adjustments.
In Bayes' Theorem terminology, we first construct a set of mutually exclusive and all-inclusive hypotheses and spread our degree of belief among them by assigning a "prior probability" (a number between 0 and 1) to each hypothesis. If we have no prior basis for assigning probabilities, we can simply spread our "belief probability" evenly among the hypotheses.
Then we construct a list of possible observable outcomes. This list should also be mutually exclusive and all-inclusive. For each hypothesis we specify the "conditional probability" of each possible outcome: the probability of observing that outcome if that particular hypothesis is true. For each hypothesis, the conditional probabilities across all the outcomes must sum to 1.
We then note which outcome actually occurred. Using Bayes' formula, we can then compute revised ("posterior") probabilities for the hypotheses. This page implements the rather messy-looking formula. You need only identify the hypotheses and outcomes, assign prior probabilities to the hypotheses and conditional probabilities to the outcomes, and indicate which outcome actually occurred; the JavaScript program will do the rest.
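For reference, the formula in question is the standard form of Bayes' Theorem for hypotheses H_1, ..., H_n and an observed outcome O (presumably the "messy-looking formula" the page implements):

P(H_i \mid O) = \frac{P(O \mid H_i)\, P(H_i)}{\sum_{j=1}^{n} P(O \mid H_j)\, P(H_j)}

Here P(H_i) is the prior probability you assigned to hypothesis i, and P(O | H_i) is the conditional probability of the observed outcome under that hypothesis; the denominator is just the overall probability of the observed outcome.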
Instructions (with example)
Note: Before using this page for the first time, make sure you read the JavaStat user interface guidelines for important information about interacting with JavaStat pages.
Suppose a woman is the daughter of a carrier of hemophilia, and therefore is known to have a 50/50 chance of being a carrier herself. If she subsequently has a normal child, how does this affect the likelihood that she is a carrier?
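To make the arithmetic concrete (reading "normal child" as an unaffected son, which is one common way this classic example is set up, since hemophilia is X-linked): the two hypotheses are "carrier" and "not a carrier", each with prior probability 0.5. If she is a carrier, a son has a 0.5 chance of being normal; if she is not a carrier, that chance is 1. Bayes' formula then gives

P(carrier | normal son) = (0.5)(0.5) / [(0.5)(0.5) + (0.5)(1)] = 0.25 / 0.75 ≈ 0.33,

so the single normal birth lowers her probability of being a carrier from 0.50 to about 0.33.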
You can also click the Reset button to reset all cells in the table to their default values, or click the Rev-to-Prior button to move the revised probabilities into the prior probability fields (useful for analyzing sequential outcomes).
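The same calculation, including the sequential-updating workflow, can be sketched in a few lines of Python. This is only an illustration of the arithmetic, not the page's actual JavaScript, and the hypothesis names, outcome labels, and the assumption that each observation is an unaffected son are ours:

def bayes_update(priors, cond_probs, observed):
    """Revise hypothesis probabilities after one observed outcome.

    priors     -- dict mapping hypothesis name to its prior probability
    cond_probs -- dict mapping hypothesis name to {outcome: P(outcome | hypothesis)}
    observed   -- the outcome that actually occurred
    """
    # Numerators of Bayes' formula: P(outcome | H) * P(H) for each hypothesis
    numerators = {h: cond_probs[h][observed] * priors[h] for h in priors}
    total = sum(numerators.values())  # denominator: overall probability of the outcome
    return {h: numerators[h] / total for h in priors}

# Hemophilia-carrier example (assuming each "normal child" is an unaffected son)
priors = {"carrier": 0.5, "not carrier": 0.5}
cond_probs = {
    "carrier":     {"normal son": 0.5, "affected son": 0.5},
    "not carrier": {"normal son": 1.0, "affected son": 0.0},
}

revised = bayes_update(priors, cond_probs, "normal son")
print(revised)   # carrier: 1/3, not carrier: 2/3

# Sequential outcomes: feed the revised probabilities back in as the new priors,
# which is what the Rev-to-Prior button does on this page.
revised2 = bayes_update(revised, cond_probs, "normal son")
print(revised2)  # carrier: 1/5, not carrier: 4/5

Each call plays the role of one compute-then-Rev-to-Prior cycle: after a second normal son, the probability that she is a carrier falls from 1/3 to 1/5.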
Reference: An Introduction to Scientific Research by E. Bright Wilson, Jr., McGraw-Hill (and Dover).