Detailed Guide To Bayesian Decision Theory – Part 2

Chirag Goyal Last Updated : 25 May, 2021

This article was published as a part of the Data Science Blogathon

Introduction

This is Part-2 of the 4-part blog series on Bayesian Decision Theory.

In the previous article, we discussed the basics of Bayesian Decision Theory, including its prerequisites and how decisions are taken based on posterior probabilities computed with the help of Bayes' theorem. Towards the end, we also discussed the generalized form of Bayes' theorem for multiple features and classes.

Now, in this article, we will go through some of the more advanced, generalized concepts for taking decisions in Bayesian Decision Theory. To get a clearer understanding of this article, you may first want to read the article on Bayesian Decision Theory (Part-1).

 

How Do We Generalize Bayesian Decision Theory?

We will generalize our theory by expanding our assumptions in four ways, given below:

1. Allowing the use of more than one feature

2. Allowing the use of more than two states of nature

3. Allowing actions other than deciding on the state of nature

  • Allowing actions other than classification primarily introduces the possibility of rejection.
  • Rejection means refusing to make a decision in close or ambiguous cases.

4. Introducing a loss function that is more general than the probability of error.

  • The loss function specifies exactly how costly each action is.

 

Developments after Generalization

Feature Space: When we allow more than one feature, we move from a scalar x to a feature vector x. The feature vector lives in the d-dimensional Euclidean space Rd, which is also known as the feature space.

State of Nature: Allowing more than two states of nature gives us a useful generalization at the expense of only small notational changes.

Actions: Allowing actions other than classification opens up the possibility of rejection, i.e. refusing to make a decision in close cases. This is often a useful option when being indecisive is not too costly.

Loss function: The loss function states exactly how costly each action is and is used to convert a probability determination into a decision. It also lets us handle situations in which some classification errors are more costly than others, unlike the simpler case we usually discuss, in which all errors are equally costly.

Loss Function

Let there be c states of nature (categories) w1, w2, ..., wc, and let α1, α2, ..., αa be the set of possible actions. Then,

The loss function λ(αi | wj) is read as the loss incurred for taking action αi when the true state of nature is wj. As we discussed, x is the d-component vector of random variables in the feature space, and p(x | wj) is the class-conditional probability density function of x. Then, the posterior probability P(wj | x) can be computed as,

P(ωj|x)= p(x|ωj)P(ωj)/p(x) 

Evidence can be calculated by:

p(x) = Sum( j=1 to c): p(x|ωj)P(ωj)
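As a minimal sketch of this computation, the snippet below evaluates the posteriors for a one-dimensional, two-class problem. The Gaussian class-conditional densities and the prior values are assumptions chosen purely for illustration, not anything prescribed by the theory.

```python
import numpy as np
from scipy.stats import norm

# Two-class, one-dimensional illustration. The Gaussian class-conditional
# densities p(x|w_j) and the priors P(w_j) below are assumed values.
priors = np.array([0.6, 0.4])                           # P(w1), P(w2)
class_conditionals = [norm(0.0, 1.0), norm(2.0, 1.0)]   # p(x|w1), p(x|w2)

def posterior(x):
    """Return P(w_j|x) for every class j via Bayes' theorem."""
    likelihoods = np.array([d.pdf(x) for d in class_conditionals])  # p(x|w_j)
    evidence = np.sum(likelihoods * priors)                         # p(x)
    return likelihoods * priors / evidence                          # P(w_j|x)

print(posterior(0.5))   # the two posteriors, summing to 1
```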

Risk Function

If we observe an x that leads us to take action αi, and the true category is wj, then we incur a loss of λ(αi | wj). Since P(ωj|x) is the probability that the true state of nature is wj, the expected loss associated with taking action αi is given by

R(αi|x)= Sum(j=1 to c): λ(αi|ωj)P(ωj|x)

In the context of decision theory, this expected loss is termed the risk.

R(αi|x) is called the conditional risk. Whenever we observe an x, we can minimize our expected loss by choosing the action with the smallest conditional risk.
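A small sketch of the conditional-risk computation is given below; the loss matrix and the posterior values are made-up numbers used only to show the matrix-vector form of the sum above.

```python
import numpy as np

# Conditional risk R(a_i|x) = sum_j lambda(a_i|w_j) * P(w_j|x).
# The loss matrix and posterior values are illustrative assumptions.
loss = np.array([[0.0, 2.0],     # lambda(a1|w1), lambda(a1|w2)
                 [1.0, 0.0]])    # lambda(a2|w1), lambda(a2|w2)
posteriors = np.array([0.7, 0.3])   # P(w1|x), P(w2|x) for some observed x

conditional_risk = loss @ posteriors   # R(a_i|x) for every action a_i
print(conditional_risk)                # [0.6 0.7] -> action a1 has the lower risk
```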

Decision Rule

The primary aim of this article is to find the decision rule that minimizes the overall risk.

A general decision rule is a function α(x) that specifies the action to take for every possible feature vector; that is, for every x the decision function α(x) takes on one of the values α1, α2, ..., αa.

The overall risk R is the expected loss associated with a given decision rule, and R(αi|x) is the conditional risk associated with action αi. Since the decision rule specifies the action for every x, the overall risk is given by

R = ∫ R(α(x)|x) p(x) dx

where dx denotes a d-dimensional volume element and the integration extends over the entire feature space.

The decision rule α(x) is chosen so that the conditional risk R(α(x)|x) is as small as possible for every x; this in turn minimizes the overall risk.
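To make the integral concrete, here is a rough numerical sketch that approximates the overall risk on a one-dimensional grid, reusing the assumed Gaussian class-conditionals, priors, and loss matrix from the earlier snippets.

```python
import numpy as np
from scipy.stats import norm

# Assumed setup (same illustrative numbers as before).
priors = np.array([0.6, 0.4])
densities = [norm(0.0, 1.0), norm(2.0, 1.0)]
loss = np.array([[0.0, 2.0], [1.0, 0.0]])

xs = np.linspace(-6, 8, 2000)
dx = xs[1] - xs[0]
likelihoods = np.stack([d.pdf(xs) for d in densities])    # p(x|w_j), shape (2, N)
evidence = (priors[:, None] * likelihoods).sum(axis=0)    # p(x)
posteriors = priors[:, None] * likelihoods / evidence     # P(w_j|x)
cond_risk = loss @ posteriors                              # R(a_i|x) for each action

# Always taking the minimum-risk action and integrating over x gives an
# estimate of the (Bayes) overall risk for this assumed problem.
overall_risk = np.sum(cond_risk.min(axis=0) * evidence) * dx
print(overall_risk)
```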

Bayes Risk

Thus, according to the Bayes decision rule:

To minimize the overall risk, we compute the conditional risk, i.e.,

R(αi|x) = Sum(j=1 to c): λ(αi|ωj)P(ωj|x)

for i = 1, ..., a, and select the action for which R(αi|x) is minimum. The resulting minimum overall risk is called the Bayes risk, and it is the best performance that can be achieved.
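The sketch below applies this minimum-conditional-risk rule and also includes a third "reject" action, tying back to the rejection option discussed earlier. The particular loss values and the rejection cost of 0.25 are assumptions for illustration only.

```python
import numpy as np

# Minimum-risk rule with a reject option. Action indices 0 and 1 decide
# w1/w2; index 2 is "reject". The rejection cost 0.25 is an assumed value.
loss = np.array([[0.0,  1.0],    # decide w1
                 [1.0,  0.0],    # decide w2
                 [0.25, 0.25]])  # reject (same cost whatever the true class)

def decide(posteriors):
    risks = loss @ posteriors    # R(a_i|x) for each of the three actions
    return ["decide w1", "decide w2", "reject"][int(np.argmin(risks))]

print(decide(np.array([0.9, 0.1])))     # decide w1
print(decide(np.array([0.55, 0.45])))   # reject -- posteriors too close to call
```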

 

For better understanding, let’s consider the example of two-category classification.

Here we have action α1 corresponding to deciding that the state of nature is w1, and α2 for deciding w2.

The loss notation is λij = λ(αi|ωj), i.e. the loss incurred for deciding wi when the true state of nature is wj. We rewrite our conditional risks as

R(α1|x)= λ11P(ω1|x)+ λ12P(ω2|x)

R(α2|x)= λ21P(ω1|x)+ λ22P(ω2|x)

Getting back to obtaining a decision rule, we decide w1 if R(α1|x) < R(α2|x), i.e. we choose the action with the smaller risk.

Substituting the expressions above into R(α1|x) < R(α2|x), we get: decide w1 if

(λ21 − λ11)P(ω1|x) > (λ12 − λ22)P(ω2|x)

Using Bayes' formula, we can substitute the posteriors with class-conditional densities and priors, giving the decision rule: decide ω1 if

(λ21 − λ11)p(x|ω1)P(ω1) > (λ12 − λ22)p(x|ω2)P(ω2), and decide w2 otherwise.

We can also rewrite it as

p(x|ω1)/p(x|ω2) > [(λ12 − λ22)P(ω2)] / [(λ21 − λ11)P(ω1)]

Assuming that λ21 > λ11 (an incorrect decision costs more than a correct one), this form can be interpreted as choosing w1 whenever the above inequality holds.

Here, p(x|ω1)/p(x|ω2) is usually known as the likelihood ratio.

The Bayes decision rule can therefore be interpreted as deciding w1 if the likelihood ratio exceeds a threshold value, namely the right-hand side above, which is a constant independent of the observation x since the priors and the losses λ are fixed.
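As a small illustration of this likelihood-ratio form of the rule, the snippet below builds the threshold from assumed losses and priors and compares it against the ratio of two assumed Gaussian class-conditional densities.

```python
from scipy.stats import norm

# Likelihood-ratio form of the two-category Bayes rule. All numbers here
# (zero-one style losses, priors, Gaussian densities) are assumptions.
lam11, lam12, lam21, lam22 = 0.0, 1.0, 1.0, 0.0
prior1, prior2 = 0.6, 0.4
p1, p2 = norm(0.0, 1.0), norm(2.0, 1.0)      # p(x|w1), p(x|w2)

# Threshold = (lam12 - lam22) P(w2) / [(lam21 - lam11) P(w1)], constant in x.
threshold = (lam12 - lam22) * prior2 / ((lam21 - lam11) * prior1)

def classify(x):
    ratio = p1.pdf(x) / p2.pdf(x)            # likelihood ratio
    return "w1" if ratio > threshold else "w2"

print(classify(0.5), classify(1.5))          # w1 w2
```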

This completes all of our generalization cases!

Discussion Problem

Consider the following dataset:

Sample No | Width  | Height | Class
--------- | ------ | ------ | -----
1         | Small  | Small  | C1
2         | Medium | Small  | C2
3         | Medium | Large  | C2
4         | Large  | Small  | C1
5         | Medium | Medium | C1
6         | Large  | Large  | C1
7         | Small  | Medium | C2
8         | Large  | Medium | C1

Now, answer the following questions (use Bayesian Decision Theory):

1. Calculate the prior probabilities for both classes.

2. To which class does the sample (Width: Small, Height: Large) belong?

3. Calculate the probability of error in classifying the sample from part 2.

Try to solve the Practice Question and answer it in the comment section below.
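If you want to check part 1 of your answer, here is a minimal sketch that estimates the priors as the relative class frequencies in the table; treating these frequencies as priors is an assumption on my part, not something stated in the problem.

```python
from collections import Counter

# Dataset from the discussion problem: (Width, Height, Class) per sample.
samples = [
    ("Small",  "Small",  "C1"), ("Medium", "Small",  "C2"),
    ("Medium", "Large",  "C2"), ("Large",  "Small",  "C1"),
    ("Medium", "Medium", "C1"), ("Large",  "Large",  "C1"),
    ("Small",  "Medium", "C2"), ("Large",  "Medium", "C1"),
]

# Assumed approach: priors as relative class frequencies in the table.
counts = Counter(label for _, _, label in samples)
priors = {c: n / len(samples) for c, n in counts.items()}
print(priors)   # {'C1': 0.625, 'C2': 0.375}
```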

For any further queries feel free to contact me.

 

End Notes

Thanks for reading!

If you liked this and want to know more, go visit my other articles on Data Science and Machine Learning by clicking on the Link

Please feel free to contact me on Linkedin, Email.

Something not mentioned or want to share your thoughts? Feel free to comment below and I’ll get back to you.

About the author

Chirag Goyal

Currently, I am pursuing my Bachelor of Technology (B.Tech) in Computer Science and Engineering from the Indian Institute of Technology Jodhpur(IITJ). I am very enthusiastic about Machine learning, Deep Learning, and Artificial Intelligence.

I am a B.Tech. student (Computer Science major) currently in the pre-final year of my undergrad. My interest lies in the field of Data Science and Machine Learning. I have been pursuing this interest and am eager to work more in these directions. I feel proud to share that I am one of the best students in my class who has a desire to learn many new things in my field.

Responses From Readers


Joe

Thank you Chirag. I have enjoyed your articles. It would definitely help understanding if you gave practical worked examples of each issue, rather than theoretical abstractions from the start. It is not obvious why you could not immediately decide if a picture was of a mouse or an elephant by just two categories, tusks and trunk, say. If the picture lacks both, then it is more likely to be an elephant if there are only 2 possible answers. Why would it cause a theoretical loss if identified incorrectly? Your discussion problem is difficult to solve as the prior probabilities are unknown. You cannot say C1 prior is 5/8 just because there are only 8 samples and 5 are C1s. To which class the sample (Width- Small, Height- Large) belongs is not defined in the 8 samples. Small and Large data occurs 20% and 33% of the time in C1 and C2 only because there are more C1 samples. Without more information it could belong to either or neither. So the probability of error remains unknown.

lokesh

Hi, can you post the solution to the above problem?
