Two schools of thought dominate statistical inference: the Frequentist and the Bayesian approach. Using a simple coin flip example, this post walks through how each framework builds uncertainty intervals, what those intervals actually mean, and when to reach for one over the other.
Introduction
When making decisions based on probability and uncertainty, we can take one of two broad approaches: the Frequentist or the Bayesian interpretation. These frameworks differ fundamentally in how they interpret probability and uncertainty. Frequentists define probability as the long-run frequency of an event occurring, while Bayesians treat probability as a degree of belief that updates with new evidence.
A simple but illustrative example is the question of whether a coin is fair given some observed data. Suppose we flip a coin N = 10 times — what can we conclude about its fairness based on the observed outcome? This post explores how each approach answers that question.
Frequentist Approach
The frequentist method estimates the probability of heads, denoted as $$p$$, based solely on observed data. A common approach is to construct a confidence interval for $$p$$, which provides a range of plausible values based on the sample proportion.
Confidence Interval for Coin Fairness
Say we observe 7 heads out of 10 flips. The frequentist estimate of $$p$$ is simply:
\[\hat{p} = \frac{7}{10} = 0.70\]Using a 95% confidence interval for a binomial distribution:
\[CI = \hat{p} \pm Z_{0.025} \times \sqrt{\frac{\hat{p} (1 - \hat{p})}{n}}\]Substituting values ($$Z_{0.025} = 1.96$$, $$n = 10$$):
\[CI = 0.70 \pm 1.96 \times \sqrt{\frac{0.70 \times 0.30}{10}} = 0.70 \pm 0.284\]Rounding, we get the confidence interval: [0.416, 0.984], or approximately [0.42, 0.98]. A frequentist interpretation is that if we repeated this experiment many times, 95% of such intervals would contain the true value of $$p$$. It does not tell us the probability that $$p$$ lies in this specific range. (A caveat: this normal-approximation, or Wald, interval is known to be unreliable at small sample sizes like $$n = 10$$; we use it here because the arithmetic is simple.)
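As a sanity check, the Wald interval above can be reproduced in a few lines of Python (a minimal sketch; the variable names and hard-coded z-value are just this example's numbers):

```python
import math

# Wald (normal-approximation) 95% CI for the coin example: 7 heads in 10 flips.
heads, n = 7, 10
p_hat = heads / n
z = 1.96                                        # normal quantile for 95% coverage
half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)
lo, hi = p_hat - half_width, p_hat + half_width
print(f"[{lo:.3f}, {hi:.3f}]")                  # [0.416, 0.984]
```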
Bayesian Approach
The Bayesian method applies Bayes' theorem to update prior beliefs about $$p$$ based on observed data.
Bayes' Theorem
\[P(H | D) = \frac{P(D | H) \, P(H)}{P(D)}\]Where:
- $$P(H \mid D)$$ is the posterior probability of a hypothesis about $$p$$ given the data (7 heads out of 10 flips),
- $$P(D \mid H)$$ is the likelihood of observing that data under the hypothesis,
- $$P(H)$$ is the prior belief about $$p$$,
- $$P(D)$$ is the marginal probability of the data.
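This update can be sketched numerically with a simple grid approximation over $$p$$, assuming a uniform prior and the 7-heads-in-10-flips data (all names here are illustrative):

```python
import numpy as np

# Grid approximation of Bayes' theorem for the coin example: a uniform prior
# over p, updated with the likelihood of 7 heads in 10 flips.
p = np.linspace(0, 1, 1001)          # grid of candidate values for p
prior = np.ones_like(p)              # uniform prior: Beta(1, 1)
likelihood = p**7 * (1 - p)**3       # binomial likelihood (constant factor dropped)
posterior = likelihood * prior
posterior /= posterior.sum()         # normalize over the grid
post_mean = (p * posterior).sum()
print(round(post_mean, 3))           # matches the Beta(8, 4) mean, 8/12 ≈ 0.667
```

The normalizing step plays the role of dividing by $$P(D)$$: the unnormalized products are rescaled so the posterior sums to one.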
Choosing a Prior
A common prior for coin fairness is the Beta distribution: $$\text{Beta}(\alpha, \beta)$$, where $$\alpha$$ and $$\beta$$ represent prior "pseudo-counts" of heads and tails. A uniform prior $$\text{Beta}(1,1)$$ assigns equal probability density to all values of $$p$$ between 0 and 1, expressing complete uncertainty. This is mathematically equivalent to observing 1 hypothetical head and 1 hypothetical tail before seeing any real data.
Updating with Data
Given 7 heads in 10 flips, the posterior follows a $$\text{Beta}(\alpha + 7, \beta + 3)$$ distribution, or $$\text{Beta}(8,4)$$ with the uniform prior. The Bayesian credible interval can be directly computed from this posterior. For this example, the 95% equal-tailed credible interval is approximately [0.390, 0.891].
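Because the Beta prior is conjugate to the binomial likelihood, the posterior and its credible interval can be computed in closed form (a sketch assuming SciPy is installed):

```python
from scipy.stats import beta

# Conjugate update for the coin example: a uniform Beta(1, 1) prior plus
# 7 observed heads and 3 observed tails gives a Beta(8, 4) posterior.
a_post, b_post = 1 + 7, 1 + 3
lo, hi = beta.ppf([0.025, 0.975], a_post, b_post)   # equal-tailed 95% interval
print(f"95% credible interval: [{lo:.3f}, {hi:.3f}]")   # ≈ [0.390, 0.891]
```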
Unlike the frequentist approach, the Bayesian method allows direct probability statements: "Given our prior belief and the observed data, there is a 95% probability that $$p$$ lies within [0.390, 0.891]." This is a statement about $$p$$ itself, not about a repeated sampling procedure.
Comparing the Two Approaches
Key Differences
Confidence Intervals vs. Credible Intervals:
- Frequentist: The 95% confidence interval [0.42, 0.98] means that if you repeated this experiment infinitely many times and computed an interval each time, 95% of those intervals would contain the true $$p$$. You cannot say "there is a 95% probability that $$p$$ lies here." Once the experiment is done, $$p$$ is either in the interval or not.
- Bayesian: The 95% credible interval [0.390, 0.891] means that given your prior belief and the data, there is a 95% probability that $$p$$ lies in this range. This directly answers the question most people intuitively ask.
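The frequentist coverage claim can be checked empirically by simulation: generate many experiments with a known true $$p$$, build a 95% interval for each, and count how often the interval contains the truth (a sketch assuming NumPy; $$n = 1000$$ rather than 10 is used because the Wald interval's normal approximation is only trustworthy for large samples):

```python
import numpy as np

# Repeat the experiment many times with a known true p and measure how often
# the 95% Wald interval actually covers it.
rng = np.random.default_rng(0)
true_p, n, trials, z = 0.7, 1000, 5000, 1.96
heads = rng.binomial(n, true_p, size=trials)    # one head-count per experiment
p_hat = heads / n
half = z * np.sqrt(p_hat * (1 - p_hat) / n)
covered = (p_hat - half <= true_p) & (true_p <= p_hat + half)
print(f"empirical coverage: {covered.mean():.3f}")   # close to the nominal 0.95
```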
When to Use Each Approach:
- Use Frequentist methods when: you're in a standardized scientific field with established protocols, you want to avoid subjective prior choices, or you're testing whether an effect exists at all.
- Use Bayesian methods when: you have meaningful prior knowledge you want to incorporate, you need to make decisions under uncertainty, or you want direct probability statements about parameters.
Frequentist Confidence Interval Visualization
Use the sliders below to adjust the total number of flips and the observed number of heads. The visualization shows the point estimate and 95% Wilson confidence interval in real time. Notice how the interval narrows as the sample size grows, and how extreme outcomes (all heads or all tails) produce narrower intervals than intermediate ones. (The Wilson interval is used here rather than the Wald formula from earlier because it remains well-behaved at small sample sizes and near the boundaries 0 and 1.)
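For reference, the Wilson score interval the visualization computes can be sketched as follows (the function name `wilson_ci` is just illustrative):

```python
import math

def wilson_ci(heads, n, z=1.96):
    """Wilson score interval for a binomial proportion."""
    p_hat = heads / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return center - half, center + half

lo, hi = wilson_ci(7, 10)
print(f"[{lo:.3f}, {hi:.3f}]")   # [0.397, 0.892] — tighter than the Wald [0.416, 0.984]? No: narrower and shifted toward 0.5
```

Unlike the Wald interval, the Wilson interval never extends below 0 or above 1, and it stays informative even when the observed proportion is exactly 0 or 1.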
Bayesian Posterior Visualization
The visualization below lets you set your own prior via the $$\alpha$$ and $$\beta$$ sliders and observe how the posterior updates with observed data. Setting $$\alpha = \beta = 1$$ gives the uniform prior (complete ignorance), while higher values like $$\alpha = \beta = 10$$ express a strong belief that the coin is fair. The gray curve shows your prior, the blue curve shows the posterior after observing the data, and the shaded region marks the 95% credible interval.
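The effect of prior strength can also be seen with a one-line calculation: under a conjugate Beta prior, the posterior mean is $$(\alpha + \text{heads}) / (\alpha + \beta + \text{flips})$$, so a stronger fair-coin prior pulls the estimate toward 0.5 (a small sketch using this post's data):

```python
# Posterior mean under priors of different strength, via the conjugate
# Beta update: posterior mean = (alpha + heads) / (alpha + beta + flips).
heads, tails = 7, 3
means = {}
for a, b in [(1, 1), (10, 10)]:
    means[(a, b)] = (a + heads) / (a + b + heads + tails)
    print(f"Beta({a},{b}) prior -> posterior mean {means[(a, b)]:.3f}")
# Beta(1,1) gives 8/12 ≈ 0.667; the stronger Beta(10,10) prior pulls
# the estimate toward 0.5, giving 17/30 ≈ 0.567.
```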
Conclusion
The frequentist and Bayesian approaches offer fundamentally different perspectives on the same data. Neither is universally "correct."
Frequentists ask: "If this experiment were repeated many times, what range of values would contain the true parameter 95% of the time?" This provides a principled, reproducible framework free from subjective priors.
Bayesians ask: "Given my prior beliefs and the data I observed, what is the probability distribution over possible parameter values?" This directly answers what most practitioners intuitively want to know, but requires specifying prior assumptions.
Choose your approach based on your goals: use frequentist confidence intervals for standardized hypothesis testing and regulatory compliance; use Bayesian credible intervals when you have prior knowledge worth incorporating and need direct probability statements. In practice, both approaches coexist. They're asking different questions about the same problem.