Bernoulli Distribution#
PMF and CDF of Bernoulli Distribution#
(Bernoulli Trials)
A Bernoulli trial is an experiment with two possible outcomes: success or failure, often denoted as 1 or 0 respectively.
The three assumptions for Bernoulli trials are:
Each trial has two possible outcomes: 1 or 0 (success or failure);
The probability of success, \(p\), is constant across trials, and so is the probability of failure, \(1-p\);
Each trial is independent: the outcome of previous trials has no influence on any subsequent trial.
(Bernoulli Distribution (PMF))
Let \(X\) be a Bernoulli random variable with parameter \(p\). Then the probability mass function (PMF) of \(X\) is given by

\[
\P \lsq X = x \rsq = \begin{cases} p & \text{if } x = 1, \\ 1 - p & \text{if } x = 0, \end{cases}
\]

or, more compactly, \(\P \lsq X = x \rsq = p^x (1-p)^{1-x}\) for \(x \in \{0, 1\}\), where \(0 \leq p \leq 1\) is called the Bernoulli parameter. A quick numerical check of this PMF appears after the conventions below.
A Bernoulli random variable is thus the numerical outcome of a single Bernoulli trial.
Some conventions:
We denote \(X \sim \bern(p)\) if \(X\) follows a Bernoulli distribution with parameter \(p\).
The states of \(X\) are \(x \in \{0,1\}\). This means \(X\) only has two (binary) states, 0 and 1.
We denote \(1\) as success and \(0\) as failure and consequently \(p\) as the probability of success and \(1-p\) as the probability of failure.
Bear in mind that \(X\) is defined over \(\pspace\), so when we write \(\P \lsq X=x \rsq\), we are really writing \(\P \lsq E \rsq\) for some event \(E \in \E\). Imagine a coin toss: \(E\) is the event that the coin lands on heads, which translates to \(E = \{X=1\}\).
Note further that a Bernoulli trial is a single experiment with only two possible outcomes. This is the main difference from the Binomial distribution, which we will meet later (i.e. sampling one trial vs. sampling \(n\) trials).
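To make the PMF concrete, the snippet below evaluates it with scipy.stats.bernoulli (standard SciPy, not part of this text's plot module) and checks it against the closed form \(p^x (1-p)^{1-x}\); this is a sketch, assuming SciPy is available.

from scipy.stats import bernoulli

p = 0.2
for x in (0, 1):
    closed_form = p**x * (1 - p) ** (1 - x)  # p^x (1-p)^(1-x)
    print(x, bernoulli.pmf(x, p), closed_form)
# both columns agree: 0.8 for x=0 and 0.2 for x=1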
(Bernoulli Distribution (CDF))
Let \(X\) be a Bernoulli random variable with parameter \(p\). Then the cumulative distribution function (CDF) of \(X\) is given by

\[
F_X(x) = \P \lsq X \leq x \rsq = \begin{cases} 0 & \text{if } x < 0, \\ 1 - p & \text{if } 0 \leq x < 1, \\ 1 & \text{if } x \geq 1, \end{cases}
\]

where \(0 \leq p \leq 1\) is called the Bernoulli parameter.
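The CDF is a step function; a quick check with scipy.stats.bernoulli.cdf (again standard SciPy, assumed here for illustration) shows the jumps at \(x = 0\) and \(x = 1\):

from scipy.stats import bernoulli

p = 0.2
for x in (-0.5, 0.0, 0.5, 1.0, 1.5):
    print(x, bernoulli.cdf(x, p))
# the CDF is 0 below x=0, jumps to 1-p=0.8 at x=0, and to 1 at x=1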
Plotting PMF and CDF of Bernoulli Distribution#
The code below plots the PMF for \(p=0.2\) and \(p=0.8\).
import matplotlib.pyplot as plt  # needed for plt.subplots and plt.show

from plot import plot_bernoulli_pmf

_fig, axes = plt.subplots(1, 2, figsize=(8.4, 4.8), sharey=True, dpi=125)
plot_bernoulli_pmf(p=0.2, ax=axes[0])
plot_bernoulli_pmf(p=0.8, ax=axes[1])
plt.show()
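The plot helpers live in the companion plot module, which is not reproduced here. A minimal sketch of what plot_bernoulli_pmf could look like, assuming a stem plot of the two PMF values, is:

import matplotlib.pyplot as plt
import numpy as np

def plot_bernoulli_pmf(p, ax=None):
    """Stem plot of the Bernoulli(p) PMF over the two states {0, 1}."""
    ax = ax or plt.gca()
    states = np.array([0, 1])
    pmf = np.array([1 - p, p])  # P[X=0] = 1-p, P[X=1] = p
    ax.stem(states, pmf)
    ax.set_xticks(states)
    ax.set_xlabel("$x$")
    ax.set_ylabel("$P[X=x]$")
    ax.set_title(f"Bernoulli($p={p}$)")
    return ax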
Next, we overlay empirical histograms on the PMF, using seed number 42 for reproducibility.
import matplotlib.pyplot as plt  # needed for plt.subplots and plt.show
import numpy as np

from plot import plot_bernoulli_pmf, plot_empirical_bernoulli

np.random.seed(42)  # seed number 42, assuming the helpers draw from NumPy's global RNG

fig, axes = plt.subplots(1, 2, figsize=(8.4, 4.8), sharey=True, dpi=125)
plot_bernoulli_pmf(p=0.2, ax=axes[0])
plot_empirical_bernoulli(p=0.2, size=100, ax=axes[0])

plot_bernoulli_pmf(p=0.2, ax=axes[1])
plot_empirical_bernoulli(p=0.2, size=1000, ax=axes[1])

fig.supylabel("relative frequency")
fig.suptitle("Histogram of Bernoulli($p=0.2$) based on $100$ and $1000$ samples.")
plt.show()
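Similarly, plot_empirical_bernoulli is not shown in the text. A plausible sketch, assuming it draws `size` samples and overlays their relative frequencies on the PMF axes, is:

import matplotlib.pyplot as plt
import numpy as np

def plot_empirical_bernoulli(p, size, ax=None):
    """Overlay relative frequencies of `size` Bernoulli(p) draws."""
    ax = ax or plt.gca()
    samples = np.random.binomial(n=1, p=p, size=size)  # 0/1 draws
    freqs = np.bincount(samples, minlength=2) / size   # relative frequencies
    ax.bar([0, 1], freqs, alpha=0.5, label=f"empirical ($n={size}$)")
    ax.legend()
    return ax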
Assumptions#
The three assumptions for Bernoulli trials are:
Each trial has two possible outcomes: 1 or 0 (success or failure);
The probability of success, \(p\), is constant across trials, and so is the probability of failure, \(1-p\);
Each trial is independent: the outcome of previous trials has no influence on any subsequent trial. A small simulation illustrating these assumptions follows this list.
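As a quick illustration, i.i.d. Bernoulli trials are straightforward to simulate. The sketch below uses NumPy (the variable names are ours, not from the text) and checks that the empirical success rate approaches \(p\):

import numpy as np

rng = np.random.default_rng(42)  # seed 42, as in the plots above
p = 0.2
trials = (rng.random(10_000) < p).astype(int)  # 10,000 i.i.d. Bernoulli(p) trials
print(trials[:10])    # a few 0/1 outcomes
print(trials.mean())  # empirical success rate, close to p = 0.2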
Expectation and Variance#
(Expectation of Bernoulli Distribution)
Let \(X \sim \bern(p)\) be a Bernoulli random variable with parameter \(p\). Then the expectation of \(X\) is given by

\[
\mathbb{E} \lsq X \rsq = p.
\]

Proof. The proof follows directly from the definition of expectation:

\[
\mathbb{E} \lsq X \rsq = \sum_{x \in \{0, 1\}} x \cdot \P \lsq X = x \rsq = 0 \cdot (1 - p) + 1 \cdot p = p.
\]
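A quick Monte Carlo sanity check (a NumPy sketch, not from the text) confirms that the sample mean approaches \(p\):

import numpy as np

rng = np.random.default_rng(42)
p = 0.3
samples = rng.binomial(n=1, p=p, size=100_000)
print(samples.mean())  # sample mean, close to E[X] = p = 0.3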
(Variance of Bernoulli Distribution)
Let \(X \sim \bern(p)\) be a Bernoulli random variable with parameter \(p\). Then the variance of \(X\) is given by

\[
\operatorname{Var} \lsq X \rsq = p(1-p).
\]

Proof. The proof follows from the definition of variance:

\[
\operatorname{Var} \lsq X \rsq = \mathbb{E} \lsq (X - \mathbb{E} \lsq X \rsq)^2 \rsq = (0-p)^2 (1-p) + (1-p)^2 p = p(1-p)\bigl(p + (1-p)\bigr) = p(1-p).
\]

It can also be shown using the second moment of \(X\): since \(X^2 = X\) for \(x \in \{0,1\}\), we have \(\mathbb{E} \lsq X^2 \rsq = p\), and therefore

\[
\operatorname{Var} \lsq X \rsq = \mathbb{E} \lsq X^2 \rsq - \mathbb{E} \lsq X \rsq^2 = p - p^2 = p(1-p).
\]
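The same Monte Carlo check works for the variance via the second moment (again a NumPy sketch, not from the text):

import numpy as np

rng = np.random.default_rng(42)
p = 0.3
samples = rng.binomial(n=1, p=p, size=100_000)
second_moment = (samples**2).mean()         # E[X^2], equals E[X] since X^2 = X
print(second_moment - samples.mean() ** 2)  # close to p(1-p) = 0.21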
Maximum Variance#
Minimum and Maximum Variance of Coin Toss#
This example is taken from [Chan, 2021], page 140.
Consider a coin toss, following a Bernoulli distribution. Define \(X \sim \bern(p)\).
Suppose we toss the coin \(n\) times; we ask ourselves what the minimum and maximum variance of a single coin toss can be as the bias \(p\) varies.
Recall from Definition 23 that the variance measures how much the outcomes deviate from the mean on average.
If the coin is biased at \(p=1\), the variance is \(0\) because the coin always lands on heads; the coin is "deterministic", and hence there is no variance at all. If the coin is biased at \(p=0.9\), there is a little variance, because the coin still lands on heads \(90\%\) of the time. If the coin is fair at \(p=0.5\), there is a lot of variance: the coin has a 50-50 chance of landing on heads or tails. Indeed, since \(\operatorname{Var} \lsq X \rsq = p(1-p)\) and \(\frac{d}{dp}\, p(1-p) = 1 - 2p\) vanishes at \(p = \frac{1}{2}\), the variance is minimized (at \(0\)) when \(p \in \{0, 1\}\) and maximized (at \(\frac{1}{4}\)) when \(p = \frac{1}{2}\). Though fair, the variance is maximum here.
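A short check of \(p(1-p)\) on a grid of biases (a NumPy sketch) makes the maximum at \(p = 0.5\) visible:

import numpy as np

ps = np.linspace(0, 1, 11)
for p, v in zip(ps, ps * (1 - ps)):
    print(f"p={p:.1f}  Var={v:.2f}")
# Var is 0.00 at p=0.0 and p=1.0, and peaks at 0.25 when p=0.5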
Further Readings#
Chan, Stanley H. “Chapter 3.5.1. Bernoulli random variable.” In Introduction to Probability for Data Science, 137-142. Ann Arbor, Michigan: Michigan Publishing Services, 2021.