In probability and statistics, a probability distribution assigns a probability to each measurable subset of the possible outcomes of a random experiment, survey, or procedure of statistical inference. Examples are found in experiments whose sample space is nonnumerical, where the distribution would be a categorical distribution; experiments whose sample space is encoded by discrete random variables, where the distribution can be specified by a probability mass function; and experiments with sample spaces encoded by continuous random variables, where the distribution can be specified by a probability density function. More complex experiments, such as those involving stochastic processes defined in continuous time, may demand the use of more general probability measures.
In applied probability, a probability distribution can be specified in a number of different ways, often chosen for mathematical convenience, for example by a cumulative distribution function, by a probability mass or density function, or by a characteristic function.
A probability distribution can either be univariate or multivariate. A univariate distribution gives the probabilities of a single random variable taking on various alternative values; a multivariate distribution (a joint probability distribution) gives the probabilities of a random vector—a set of two or more random variables—taking on various combinations of values. Important and commonly encountered univariate probability distributions include the binomial distribution, the hypergeometric distribution, and the normal distribution. The multivariate normal distribution is a commonly encountered multivariate distribution.
Introduction
The probability mass function (pmf) p(S) specifies the probability distribution for the sum S of counts from two dice. For example, the figure shows that p(11) = 1/18. The pmf allows the computation of probabilities of events such as P(S > 9) = 1/12 + 1/18 + 1/36 = 1/6, and all other probabilities in the distribution.
To define probability distributions for the simplest cases, one needs to distinguish between discrete and continuous random variables. In the discrete case, one can easily assign a probability to each possible value: for example, when throwing a fair die, each of the six values 1 to 6 has the probability 1/6. In contrast, when a random variable takes values from a continuum then, typically, probabilities can be nonzero only if they refer to intervals: in quality control one might demand that the probability of a "500 g" package containing between 490 g and 510 g should be no less than 98%.
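As an illustration of the discrete case (a minimal Python sketch, not part of the original text), the pmf of the sum S of two fair dice can be tabulated by enumerating all 36 equally likely outcomes; it reproduces the values p(11) = 1/18 and P(S > 9) = 1/6 quoted in the figure caption above.

from fractions import Fraction
from collections import Counter
from itertools import product

# Enumerate all 36 equally likely outcomes of two fair six-sided dice
# and count how often each sum S occurs.
counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))
pmf = {s: Fraction(n, 36) for s, n in counts.items()}

print(pmf[11])                                    # 1/18
print(sum(p for s, p in pmf.items() if s > 9))    # 1/6 = P(S > 9)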
The probability density function (pdf) of the normal distribution, also called the Gaussian or "bell curve", the most important continuous probability distribution. As notated on the figure, the probabilities of intervals of values correspond to the area under the curve.
If the random variable is real-valued (or more generally, if a total order is defined for its possible values), the cumulative distribution function (CDF) gives the probability that the random variable is no larger than a given value; in the real-valued case, the CDF is the integral of the probability density function (pdf), provided that this function exists.
Terminology
As probability theory is used in quite diverse applications, terminology is not uniform and sometimes confusing. The following terms are used for non-cumulative probability distribution functions:
The following terms are somewhat ambiguous as they can refer to non-cumulative or cumulative distributions, depending on authors' preferences:

Probability distribution function: continuous or discrete, non-cumulative or cumulative.

Probability function: even more ambiguous, can mean any of the above or other things.
Finally,

Probability distribution: sometimes the same as probability distribution function, but usually refers to the more complete assignment of probabilities to all measurable subsets of outcomes, not just to specific outcomes or ranges of outcomes.
Basic terms

Mode: for a discrete random variable, the value with highest probability (the location at which the probability mass function has its peak); for a continuous random variable, the location at which the probability density function has its peak.

Support: the smallest closed set whose complement has probability zero.

Head: the range of values where the pmf or pdf is relatively high.

Tail: the complement of the head within the support; the large set of values where the pmf or pdf is relatively low.

Expected value or mean: the weighted average of the possible values, using their probabilities as their weights; or the continuous analog thereof.

Median: the value such that the set of values less than the median has a probability of one-half.

Variance: the second moment of the pmf or pdf about the mean; an important measure of the dispersion of the distribution.

Standard deviation: the square root of the variance, and hence another measure of dispersion.

Symmetry: a property of some distributions in which the portion of the distribution to the left of a specific value is a mirror image of the portion to its right.

Skewness: a measure of the extent to which a pmf or pdf "leans" to one side of its mean.
Cumulative distribution function
Because a probability distribution Pr on the real line is determined by the probability of a scalar random variable X being in a half-open interval (−∞, x], the probability distribution is completely characterized by its cumulative distribution function:

F(x) = \Pr \left[ X \le x \right] \qquad \text{ for all } x \in \mathbb{R}.
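For instance (an illustrative sketch, not from the original text), the CDF of a fair six-sided die is a step function that can be evaluated directly: F(x) is the number of faces not exceeding x, divided by 6.

import math
from fractions import Fraction

def die_cdf(x):
    # F(x) = Pr[X <= x] for a fair die: 0 below 1, k/6 on [k, k + 1), and 1 from 6 upward.
    return Fraction(min(max(math.floor(x), 0), 6), 6)

print(die_cdf(3.7))   # 1/2
print(die_cdf(0.2))   # 0
print(die_cdf(9))     # 1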
Discrete probability distribution
The probability mass function of a discrete probability distribution. The probabilities of the singletons {1}, {3}, and {7} are respectively 0.2, 0.5, 0.3. A set not containing any of these points has probability zero.
The cdf of a discrete probability distribution, of a continuous probability distribution, and of a distribution which has both a continuous part and a discrete part.
A discrete probability distribution is a probability distribution characterized by a probability mass function. Thus, the distribution of a random variable X is discrete, and X is then called a discrete random variable, if

\sum_u \Pr(X=u) = 1
as u runs through the set of all possible values of X. It follows that such a random variable can assume only a finite or countably infinite number of values. For the number of potential values to be countably infinite even though their probabilities sum to 1 requires that the probabilities decline to zero fast enough: for example, if \Pr(X=n) = \tfrac{1}{2^n} for n = 1, 2, ..., we have the sum of probabilities 1/2 + 1/4 + 1/8 + ... = 1.
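The geometric decay in this example is easy to check numerically (a small sketch, assuming only the Python standard library): the partial sums of 1/2^n approach 1, with the tail after the first 50 terms equal to 2^{-50}.

from fractions import Fraction

# Partial sum of Pr(X = n) = 1 / 2**n over n = 1, ..., 50.
partial = sum(Fraction(1, 2**n) for n in range(1, 51))
print(1 - partial)     # 1/1125899906842624, i.e. 2**-50
print(float(partial))  # very close to 1; the full series sums to exactly 1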
Among the most well-known discrete probability distributions that are used for statistical modeling are the Poisson distribution, the Bernoulli distribution, the binomial distribution, the geometric distribution, and the negative binomial distribution. In addition, the discrete uniform distribution is commonly used in computer programs that make equal-probability random selections between a number of choices.
Measure theoretic formulation
A measurable function X \colon A \to B between a probability space (A, \mathcal A, P) and a measurable space (B, \mathcal B) is called a discrete random variable provided its image is a countable set and the preimage of each singleton set is measurable, i.e., X^{-1}(b) \in \mathcal A for all b \in B. The latter requirement induces a probability mass function f_X \colon X(A) \to \mathbb R via f_X(b) := P(X^{-1}(b)). Since the preimages of disjoint sets are disjoint,

\sum_{b \in X(A)} f_X(b) = \sum_{b \in X(A)} P(X^{-1}(b)) = P \left( \bigcup_{b \in X(A)} X^{-1}(b) \right) = P(A) = 1.
This recovers the definition given above.
Cumulative density
Equivalently to the above, a discrete random variable can be defined as a random variable whose cumulative distribution function (cdf) increases only by jump discontinuities—that is, its cdf increases only where it "jumps" to a higher value, and is constant between those jumps. The points where jumps occur are precisely the values which the random variable may take.
Deltafunction representation
Consequently, a discrete probability distribution is often represented as a generalized probability density function involving Dirac delta functions, which substantially unifies the treatment of continuous and discrete distributions. This is especially useful when dealing with probability distributions involving both a continuous and a discrete part.
Indicatorfunction representation
For a discrete random variable X, let u_{0}, u_{1}, ... be the values it can take with nonzero probability. Denote

\Omega_i = X^{-1}(u_i) = \{\omega : X(\omega) = u_i\}, \quad i = 0, 1, 2, \dots
These are disjoint sets, and by countable additivity together with the defining sum formula above,

\Pr\left(\bigcup_i \Omega_i\right)=\sum_i \Pr(\Omega_i)=\sum_i\Pr(X=u_i)=1.
It follows that the probability that X takes any value except for u_{0}, u_{1}, ... is zero, and thus one can write X as

X=\sum_i u_i 1_{\Omega_i}
except on a set of probability zero, where 1_A is the indicator function of A. This may serve as an alternative definition of discrete random variables.
Continuous probability distribution
A continuous probability distribution is a probability distribution that has a probability density function. Mathematicians also call such a distribution absolutely continuous, since its cumulative distribution function is absolutely continuous with respect to the Lebesgue measure λ. If the distribution of X is continuous, then X is called a continuous random variable. There are many examples of continuous probability distributions: normal, uniform, chi-squared, and others.
Intuitively, a continuous random variable is one that can take a continuous range of values, as opposed to a discrete distribution, where the set of possible values for the random variable is at most countable. While for a discrete distribution an event with probability zero is impossible (e.g., rolling 3½ on a standard die is impossible, and has probability zero), this is not so in the case of a continuous random variable. For example, if one measures the width of an oak leaf, the result of 3½ cm is possible; however, it has probability zero because there are uncountably many other potential values even between 3 cm and 4 cm. Each of these individual outcomes has probability zero, yet the probability that the outcome will fall into the interval (3 cm, 4 cm) is nonzero. This apparent paradox is resolved by the fact that the probability that X attains some value within an infinite set, such as an interval, cannot be found by naively adding the probabilities for individual values. Formally, each value has an infinitesimally small probability, which statistically is equivalent to zero.
Formally, if X is a continuous random variable, then it has a probability density function f(x), and therefore its probability of falling into a given interval, say [a, b], is given by the integral

\Pr[a\le X\le b] = \int_a^b f(x) \, dx
In particular, the probability for X to take any single value a (that is, a ≤ X ≤ a) is zero, because an integral with coinciding upper and lower limits is always equal to zero.
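A numerical check of both statements (a sketch assuming SciPy is available; not part of the original text): integrating the standard normal density over [−1, 1] gives roughly 0.6827, while an interval with coinciding endpoints has probability zero.

import math
from scipy.integrate import quad

def normal_pdf(x, mu=0.0, sigma=1.0):
    # Density of the normal distribution with mean mu and standard deviation sigma.
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

prob, _ = quad(normal_pdf, -1.0, 1.0)   # Pr[-1 <= X <= 1]
print(round(prob, 4))                   # about 0.6827

prob_point, _ = quad(normal_pdf, 0.5, 0.5)
print(prob_point)                       # 0.0, since the limits coincide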
The definition states that a continuous probability distribution must possess a density, or equivalently, that its cumulative distribution function be absolutely continuous. This requirement is stronger than simple continuity of the cumulative distribution function, and there is a special class of distributions, singular distributions, which are neither absolutely continuous nor discrete nor a mixture of those. An example is given by the Cantor distribution. Such singular distributions, however, are never encountered in practice.
Note on terminology: some authors use the term "continuous distribution" to denote the distribution with continuous cumulative distribution function. Thus, their definition includes both the (absolutely) continuous and singular distributions.
By one convention, a probability distribution \mu is called continuous if its cumulative distribution function F(x) = \mu(-\infty, x] is continuous and, therefore, the probability measure of singletons \mu\{x\} = 0 for all x.
Another convention reserves the term continuous probability distribution for absolutely continuous distributions. These distributions can be characterized by a probability density function: a non-negative Lebesgue-integrable function f defined on the real numbers such that

F(x) = \mu(-\infty, x] = \int_{-\infty}^x f(t)\,dt.
Discrete distributions and some continuous distributions (like the Cantor distribution) do not admit such a density.
Some properties

The probability distribution of the sum of two independent random variables is the convolution of their distributions (see the sketch after this list).

Probability distributions are not a vector space—they are not closed under linear combinations, as these do not preserve nonnegativity or total integral 1—but they are closed under convex combination, thus forming a convex subset of the space of functions (or measures).
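A small numerical sketch of the convolution property (not from the original text, assuming NumPy is available): convolving the pmf of one fair die with itself recovers the two-dice pmf shown in the introductory figure.

import numpy as np

die = np.full(6, 1 / 6)          # pmf of a single fair die on the values 1..6

# The pmf of the sum of two independent dice is the convolution of their pmfs;
# index k of the result corresponds to the sum k + 2.
two_dice = np.convolve(die, die)
print(two_dice[11 - 2])          # 0.0555... = 1/18, matching p(11) above
print(two_dice.sum())            # approximately 1.0: convolution preserves total probability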
Kolmogorov definition
In the measure-theoretic formalization of probability theory, a random variable is defined as a measurable function X from a probability space (\Omega, \mathcal{F}, \operatorname{P}) to a measurable space (\mathcal{X}, \mathcal{A}). The probability distribution of X is the pushforward measure X_*P of X, which is a probability measure on (\mathcal{X}, \mathcal{A}) satisfying X_*P = P \circ X^{-1}.
Random number generation
A frequent problem in statistical simulations (the Monte Carlo method) is the generation of pseudorandom numbers that are distributed in a given way. Most algorithms are based on a pseudorandom number generator that produces numbers X that are uniformly distributed in the interval [0,1). These random variates X are then transformed via some algorithm to create a new random variate having the required probability distribution.
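One standard such transformation is inverse-transform sampling: if U is uniform on [0, 1), then F^{-1}(U) has cumulative distribution function F. A minimal sketch (not from the original text) for the exponential distribution, whose inverse CDF is known in closed form:

import math
import random

def sample_exponential(rate=1.0):
    # Inverse-transform sampling: F(x) = 1 - exp(-rate * x), so F^{-1}(u) = -ln(1 - u) / rate.
    u = random.random()              # uniform variate on [0, 1)
    return -math.log(1.0 - u) / rate

samples = [sample_exponential(rate=2.0) for _ in range(100_000)]
print(sum(samples) / len(samples))   # close to the theoretical mean 1 / rate = 0.5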
Applications
The concept of the probability distribution, and of the random variables it describes, underlies the mathematical discipline of probability theory and the science of statistics. There is spread or variability in almost any value that can be measured in a population (e.g. height of people, durability of a metal, sales growth, traffic flow, etc.); almost all measurements are made with some intrinsic error; in physics many processes are described probabilistically, from the kinetic properties of gases to the quantum mechanical description of fundamental particles. For these and many other reasons, simple numbers are often inadequate for describing a quantity, while probability distributions are often more appropriate.
As a more specific example of an application, the cache language models and other statistical language models used in natural language processing to assign probabilities to the occurrence of particular words and word sequences do so by means of probability distributions.
Common probability distributions
The following is a list of some of the most common probability distributions, grouped by the type of process that they are related to. For a more complete list, see list of probability distributions, which groups by the nature of the outcome being considered (discrete, continuous, multivariate, etc.).
Note also that all of the univariate distributions below are singly peaked; that is, it is assumed that the values cluster around a single point. In practice, actually observed quantities may cluster around multiple values. Such quantities can be modeled using a mixture distribution, as in the sketch below.
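A minimal sketch of sampling from such a mixture (illustrative only; the component choices here are arbitrary): first pick a component according to the mixture weights, then draw from that component's distribution.

import random

def sample_mixture(weights, samplers):
    # Pick a component index with probability proportional to its weight,
    # then draw a value from that component's distribution.
    i = random.choices(range(len(samplers)), weights=weights)[0]
    return samplers[i]()

# A bimodal mixture: 30% of the mass around 0 and 70% around 5.
components = [lambda: random.gauss(0.0, 1.0), lambda: random.gauss(5.0, 1.0)]
values = [sample_mixture([0.3, 0.7], components) for _ in range(10)]
print(values)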
Related to real-valued quantities that grow linearly (e.g. errors, offsets)
Related to positive realvalued quantities that grow exponentially (e.g. prices, incomes, populations)
Related to real-valued quantities that are assumed to be uniformly distributed over a (possibly unknown) region
Related to Bernoulli trials (yes/no events, with a given probability)

Basic distributions:

Related to sampling schemes over a finite population:
Related to categorical outcomes (events with K possible outcomes, with a given probability for each outcome)
Related to events in a Poisson process (events that occur independently with a given rate)
Related to the absolute values of vectors with normally distributed components

Rayleigh distribution, for the distribution of vector magnitudes with Gaussian distributed orthogonal components. Rayleigh distributions are found in RF signals with Gaussian real and imaginary components (a sampling sketch follows this list).

Rice distribution, a generalization of the Rayleigh distribution for the case where there is a stationary background signal component. Found in Rician fading of radio signals due to multipath propagation and in MR images with noise corruption on nonzero NMR signals.
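A short sketch (not from the original text) of the Rayleigh case: the magnitude of a 2-D vector with independent zero-mean Gaussian components of common standard deviation sigma is Rayleigh distributed, with mean sigma * sqrt(pi / 2).

import math
import random

def sample_rayleigh(sigma=1.0):
    # Magnitude of a vector whose orthogonal components are independent
    # zero-mean Gaussians with standard deviation sigma.
    x = random.gauss(0.0, sigma)
    y = random.gauss(0.0, sigma)
    return math.hypot(x, y)

samples = [sample_rayleigh() for _ in range(100_000)]
print(sum(samples) / len(samples))   # close to sqrt(pi / 2) ≈ 1.2533 for sigma = 1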
Related to normally distributed quantities combined via sums of squares (for hypothesis testing)
Useful as conjugate prior distributions in Bayesian inference
See also
External links

Hazewinkel, Michiel, ed. (2001), "Probability distribution", Encyclopedia of Mathematics, Springer.