Inverse probability

Inverse probability, an older term in probability theory, refers to the probability distribution of an unobserved variable; determining such a distribution is today part of inferential statistics. The approach is that of Bayesian probability: a probability distribution is assigned to the unobserved variable. The likelihood function gives the probability of the observed data as a function of that variable, but is not by itself a probability distribution over it. The posterior distribution, obtained by combining the likelihood with a prior distribution, supplies the probability distribution of the unobserved variable given the data.
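
In modern notation, writing \theta for the unobserved variable and x for the observed data (symbols chosen here purely for illustration), Bayes' theorem expresses the posterior distribution as

    p(\theta \mid x) = \frac{p(x \mid \theta)\, p(\theta)}{\int p(x \mid \theta')\, p(\theta')\, \mathrm{d}\theta'}

where p(\theta) is the prior and p(x \mid \theta) is the likelihood.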

The term "inverse probability" was first used by De Morgan in 1837, referencing Laplace's earlier work. Fisher discussed it in 1922, highlighting confusion between true values and estimates. Jeffreys later defended Bayesian methods using the term in 1939. The term "Bayesian" replaced "inverse probability" after its introduction by Fisher in 1950.

Historically, inverse probability was the dominant approach to statistical inference until frequentist methods, developed by Ronald Fisher, Jerzy Neyman, and Egon Pearson, displaced it in the early 20th century. By the 1950s the labels "frequentist" and "Bayesian" had become the standard way of distinguishing the two schools.

A classic inverse probability problem is inferring a star's true position from noisy observational data, a task that now belongs to inferential statistics (a brief sketch follows below). The shift in vocabulary from "direct" and "inverse probability" to "likelihood function" and "posterior distribution" took place in the mid-20th century, reflecting the broader change in statistical terminology and methodology.
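
The star-position example can be sketched in modern Bayesian terms. The following minimal Python sketch assumes a normal prior on the (one-dimensional) position, normal measurement noise of known scale, and uses the standard conjugate update; all names and numbers are illustrative assumptions, not part of the original account.

    import numpy as np

    # Prior belief about the star's (one-dimensional) position:
    # theta ~ N(mu0, tau0^2). Values assumed purely for illustration.
    mu0, tau0 = 10.0, 2.0

    # Measurement model: each observation x_i ~ N(theta, sigma^2),
    # with the noise level sigma assumed known.
    sigma = 0.5
    x = np.array([10.3, 9.8, 10.1, 10.4])  # hypothetical measurements

    # Normal prior with normal likelihood is conjugate, so the
    # posterior is again normal; precisions (inverse variances) add.
    n = len(x)
    post_var = 1.0 / (1.0 / tau0**2 + n / sigma**2)
    post_mean = post_var * (mu0 / tau0**2 + x.sum() / sigma**2)

    print(f"posterior: N(mean={post_mean:.3f}, var={post_var:.4f})")

As the number of measurements grows, the posterior concentrates around the data; this inference from observed data back to the unobserved quantity is precisely what the older term "inverse probability" named.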