The Bernoulli distribution, named after the Swiss mathematician Jacob Bernoulli, is a key idea in data science and statistics. It is fundamental to probability theory and a building block for more intricate statistical models, from machine learning algorithms to customer-behaviour prediction. In this article, we will discuss the Bernoulli distribution in detail.
Read on!
A Bernoulli distribution is a discrete probability distribution representing a random variable with only two possible outcomes. Usually, these outcomes are denoted by the terms “success” and “failure,” or, equivalently, by the numbers 1 and 0.
Let X be a random variable. Then, X is said to follow a Bernoulli distribution with success probability p, written X∼Bernoulli(p), if

P(X=1) = p and P(X=0) = 1−p, where 0 ≤ p ≤ 1.

Equivalently, the probability mass function of X is

f(x) = p^x (1−p)^(1−x) for x∈{0,1}.

This follows directly from the definition given above.
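As a quick sanity check, the PMF above can be evaluated with scipy.stats (p = 0.7 is an assumed example value, not from the definition):

```python
from scipy.stats import bernoulli

p = 0.7  # assumed example value for the probability of success

# f(x) = p**x * (1 - p)**(1 - x) for x in {0, 1}
prob_success = bernoulli.pmf(1, p)  # equals p
prob_failure = bernoulli.pmf(0, p)  # equals 1 - p

print(prob_success)
print(prob_failure)
```

The two probabilities always sum to 1, since success and failure are the only possible outcomes.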
The Cumulative Distribution Function (CDF) of a Bernoulli distribution gives the probability that the random variable X is less than or equal to a given value x. For a Bernoulli random variable X, the CDF is defined as:

F(x) = 0 for x < 0, F(x) = 1−p for 0 ≤ x < 1, and F(x) = 1 for x ≥ 1.

In particular, F(0) = 1−p and F(1) = 1.
You can compute the CDF of a Bernoulli distribution using the scipy.stats library:
from scipy.stats import bernoulli
# Given probability of success
p = 0.7
# CDF calculation
cdf_0 = bernoulli.cdf(0, p) # CDF at X = 0
cdf_1 = bernoulli.cdf(1, p) # CDF at X = 1
# Display results
print("CDF at X = 0 (F(0)): ", cdf_0)
print("CDF at X = 1 (F(1)): ", cdf_1)
Let X be a random variable following a Bernoulli distribution, X∼Bernoulli(p). Then, the mean or expected value of X is

E[X] = p.

Proof: The expected value is the probability-weighted average of all possible values:

E[X] = Σₓ x ⋅ P(X=x).

Since there are only two possible outcomes for a Bernoulli random variable, we have:

E[X] = 0 ⋅ (1−p) + 1 ⋅ p = p.
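The closed form E[X] = p can be checked numerically with scipy.stats; this is a small sketch with an assumed example value p = 0.7:

```python
import numpy as np
from scipy.stats import bernoulli

p = 0.7  # assumed example value

# Closed form: E[X] = p
theoretical_mean = bernoulli.mean(p)
print(theoretical_mean)

# Empirical check: the sample mean of many draws approaches p
samples = bernoulli.rvs(p, size=100_000, random_state=np.random.default_rng(0))
print(samples.mean())
```

With 100,000 draws, the sample mean lands very close to 0.7, illustrating the law of large numbers as well as the formula itself.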
Sources: https://en.wikipedia.org/wiki/Bernoulli_distribution#Mean.
Also read: End to End Statistics for Data Science
Let X be a random variable following a Bernoulli distribution, X∼Bernoulli(p). Then, the variance of X is

Var(X) = p(1−p).

Proof: The variance is the probability-weighted average of the squared deviation from the expected value across all possible values,

Var(X) = E[(X − E[X])²],

and can also be written in terms of the expected values:

Var(X) = E[X²] − (E[X])².  (Equation 1)

The mean of a Bernoulli random variable is

E[X] = p,  (Equation 2)

and the mean of a squared Bernoulli random variable is

E[X²] = 0² ⋅ (1−p) + 1² ⋅ p = p.  (Equation 3)

Combining Equations (1), (2) and (3), we have:

Var(X) = p − p² = p(1−p).
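The result Var(X) = p(1−p) can likewise be confirmed against scipy.stats (again with an assumed example value p = 0.7):

```python
from scipy.stats import bernoulli

p = 0.7  # assumed example value

closed_form = p * (1 - p)       # Var(X) = p(1 - p)
library_value = bernoulli.var(p)  # scipy's built-in variance

print(closed_form, library_value)
```

Both print 0.21 (up to floating-point rounding). Note that the variance is largest at p = 0.5 and shrinks to 0 as p approaches 0 or 1, where the outcome becomes certain.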
The Bernoulli distribution is a special case of the Binomial distribution where the number of trials n=1. Here’s a detailed comparison between the two:
| Aspect | Bernoulli Distribution | Binomial Distribution |
|---|---|---|
| Purpose | Models the outcome of a single trial of an event. | Models the outcome of multiple trials of the same event. |
| Representation | X∼Bernoulli(p), where p is the probability of success. | X∼Binomial(n,p), where n is the number of trials and p is the probability of success in each trial. |
| Mean | E[X]=p | E[X]=n⋅p |
| Variance | Var(X)=p(1−p) | Var(X)=n⋅p⋅(1−p) |
| Support | Outcomes are X∈{0,1}, representing failure (0) and success (1). | Outcomes are X∈{0,1,2,…,n}, representing the number of successes in n trials. |
| Special Case Relationship | A Bernoulli distribution is a special case of the Binomial distribution when n=1. | A Binomial distribution generalizes the Bernoulli distribution for n>1. |
| Example | If the probability of winning a game is 60%, the Bernoulli distribution can model whether you win (1) or lose (0) in a single game. | If the probability of winning a game is 60%, the Binomial distribution can model the probability of winning exactly 3 out of 5 games. |
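The special-case relationship in the table can be checked directly: a Binomial with n = 1 has the same PMF as a Bernoulli. A minimal sketch, using the table's example probability p = 0.6:

```python
from scipy.stats import bernoulli, binom

p = 0.6  # success probability from the table's example

# A Binomial with n = 1 has the same PMF as a Bernoulli
for x in (0, 1):
    print(bernoulli.pmf(x, p), binom.pmf(x, 1, p))

# Binomial example from the table: P(exactly 3 wins in 5 games)
prob_3_of_5 = binom.pmf(3, 5, p)
print(prob_3_of_5)
```

The first loop prints matching pairs for both outcomes, and the last line gives P(X=3) = C(5,3)⋅0.6³⋅0.4² = 0.3456.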
The Bernoulli distribution (left) models the outcome of a single trial with two possible outcomes: 0 (failure) or 1 (success). In this example, with p=0.6, there is a 40% chance of failure (P(X=0)=0.4) and a 60% chance of success (P(X=1)=0.6). The graph shows two bars, one for each outcome, whose heights correspond to the respective probabilities.
The Binomial distribution (right) represents the number of successes across multiple trials (in this case, n=5 trials). It shows the probability of observing each possible number of successes, ranging from 0 to 5. The number of trials n and the success probability p=0.6 determine the distribution’s shape. Here, the highest probability occurs at X=3, indicating that achieving exactly 3 successes out of 5 trials is most likely. The probabilities for fewer (X=0,1,2) or more (X=4,5) successes fall off on either side of the mean E[X]=n⋅p=3.
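A figure like the one described can be reproduced with matplotlib; this is a sketch in which the colours, figure size, and output filename are assumptions rather than details of the original plot:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend; drop this line (and use plt.show()) to display the window
import matplotlib.pyplot as plt
from scipy.stats import bernoulli, binom

p, n = 0.6, 5

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Left panel: Bernoulli(p) -- two bars, at x = 0 and x = 1
x_bern = [0, 1]
ax1.bar(x_bern, bernoulli.pmf(x_bern, p), color=['red', 'green'])
ax1.set_xticks(x_bern)
ax1.set_title(f'Bernoulli Distribution (p = {p})')
ax1.set_xlabel('Outcome')
ax1.set_ylabel('Probability')

# Right panel: Binomial(n, p) -- bars at x = 0..n, peaking near n*p = 3
x_binom = list(range(n + 1))
ax2.bar(x_binom, binom.pmf(x_binom, n, p))
ax2.set_title(f'Binomial Distribution (n = {n}, p = {p})')
ax2.set_xlabel('Number of successes')
ax2.set_ylabel('Probability')

plt.tight_layout()
plt.savefig('bernoulli_vs_binomial.png')
```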
Also read: A Guide To Complete Statistics For Data Science Beginners!
The Bernoulli distribution is widely used in real-world applications involving binary outcomes. In machine learning, it is essential for binary classification problems, where data must be assigned to one of two groups (for example, spam vs. not spam). The following worked example shows the distribution in action:
A factory produces light bulbs. Each light bulb has a 90% chance of passing the quality test (p=0.9) and a 10% chance of failing (1−p=0.1). Let X be the random variable that represents the outcome of the quality test:

P(X=1) = p = 0.9 and P(X=0) = 1−p = 0.1.

So, the probability of passing is 0.9 (90%).

Mean: E[X] = p. Here, p=0.9, so E[X] = 0.9. This means the average success rate is 0.9 (90%).

Variance: Var(X) = p(1−p). Here, p=0.9:

Var(X) = 0.9(1−0.9) = 0.9⋅0.1 = 0.09.

The variance is 0.09.
This example shows how the Bernoulli distribution models single binary events like a quality test outcome.
Now let’s see how this question can be solved in Python.
You need to install matplotlib and SciPy if you haven’t already:
pip install matplotlib scipy
Now, import the necessary packages for the plot and Bernoulli distribution.
import matplotlib.pyplot as plt
from scipy.stats import bernoulli
Set the given probability of success for the Bernoulli distribution.
p = 0.9
Calculate the probability mass function (PMF) for both the “Fail” (X=0) and “Pass” (X=1) outcomes.
probabilities = [bernoulli.pmf(0, p), bernoulli.pmf(1, p)]
Define the labels for the outcomes (“Fail” and “Pass”).
outcomes = ['Fail (X=0)', 'Pass (X=1)']
The expected value (mean) for the Bernoulli distribution is simply the probability of success.
expected_value = p # Mean of Bernoulli distribution
The variance of a Bernoulli distribution is calculated using the formula Var[X]=p(1−p)
variance = p * (1 - p) # Variance formula
Print the calculated probabilities, expected value, and variance.
print("Probability of Passing (X = 1):", probabilities[1])
print("Probability of Failing (X = 0):", probabilities[0])
print("Expected Value (E[X]):", expected_value)
print("Variance (Var[X]):", variance)
Output:
Create a bar plot for the probabilities of failure and success using matplotlib.
bars = plt.bar(outcomes, probabilities, color=['red', 'green'])
Set the title and labels for the x-axis and y-axis of the plot.
plt.title(f'Bernoulli Distribution (p = {p})')
plt.xlabel('Outcome')
plt.ylabel('Probability')
Add labels for each bar to the legend, showing the probabilities for “Fail” and “Pass”.
bars[0].set_label(f'Fail (X=0): {probabilities[0]:.2f}')
bars[1].set_label(f'Pass (X=1): {probabilities[1]:.2f}')
Show the legend on the plot.
plt.legend()
Finally, display the plot.
plt.show()
This step-by-step breakdown allows you to create the plot and calculate the necessary values for the Bernoulli distribution.
The Bernoulli distribution, a key idea in statistics, models scenarios with two possible outcomes: success or failure. It is employed in many different applications, such as quality testing, consumer behaviour prediction, and binary classification in machine learning. Key characteristics of the distribution, such as the probability mass function (PMF), expected value, and variance, aid in the comprehension and analysis of such binary events. By becoming proficient with the Bernoulli distribution, you can build more intricate models, like the Binomial distribution.
Q. Can a Bernoulli distribution have more than two outcomes?
Ans. No, it only handles two outcomes (success or failure). For more than two outcomes, other distributions, like the multinomial distribution, are used.
Q. What are some examples of Bernoulli trials?
Ans. Some examples of Bernoulli trials are:
1. Tossing a coin (heads or tails)
2. Passing a quality test (pass or fail)
Q. What is the Bernoulli distribution?
Ans. The Bernoulli distribution is a discrete probability distribution representing a random variable with two possible outcomes: success (1) and failure (0). It is defined by the probability of success, denoted by p.
Q. How is the Bernoulli distribution related to the Binomial distribution?
Ans. When the number of trials (n) equals 1, the Bernoulli distribution is a particular instance of the Binomial distribution. The Binomial distribution models several trials, whereas the Bernoulli distribution models just one.
Q. Is the Bernoulli distribution discrete or continuous?
Ans. The Bernoulli distribution is a discrete probability distribution because it deals with binary outcomes (0 or 1).
Q. How is the Bernoulli distribution used in machine learning?
Ans. The Bernoulli distribution is often used in classification problems, especially in binary classifiers like logistic regression. It can also model binary outcomes in data, such as predicting whether a customer will make a purchase.
Q. Why is the Bernoulli distribution important?
Ans. The Bernoulli distribution is fundamental in probability and statistics because it models binary outcomes, such as success/failure or yes/no scenarios. It forms the basis for more complex distributions like the Binomial and is widely used in machine learning, hypothesis testing, and decision-making processes.