Bayes Optimal Classifier
Definition
Consider a classification problem on \(\mathcal X \times \mathcal Y\) with feature space \(\mathcal X = \mathbb R^D\) and \(K\) classes \(\mathcal Y = \{1, \ldots, K\}\); that is, an element of \(\mathcal{X} \times \mathcal{Y}\) is a pair \((\boldsymbol{x}, y) \in \mathbb{R}^D \times \{1, \ldots, K\}\).
Assume that the conditional distribution of \(X\) given the target \(Y\) is given by:
\[
X \mid Y = k \sim \mathbb{P}_k \quad \text{for } k = 1, \ldots, K,
\]
where \(\mathbb{P}_k\) is the class-conditional distribution of the features for class \(k\).
Then we can define a classifier \(h\) as a rule that maps an observation \(X = \boldsymbol{x}\) to an estimate \(h(\boldsymbol{x}) := \hat{y}\) of the true underlying class \(Y\). In other words, the classifier is just a hypothesis \(h: \mathbb{R}^D \to \{1, \ldots, K\}\) that assigns an observation \(\boldsymbol{x}\) to the class \(h(\boldsymbol{x})\).
The probability of misclassification, or true risk, of a classifier \(h\) is defined as:
\[
\mathcal{R}_{\mathcal{D}}(h) := \mathbb{P}_{(\mathbf{x}, y) \sim \mathcal{P}_{\mathcal{D}}}\left[h(\mathbf{x}) \neq y\right] = \mathbb{E}_{(\mathbf{x}, y) \sim \mathcal{P}_{\mathcal{D}}}\left[1_{h(\mathbf{x}) \neq y}\right],
\]
where \(1_{h(\mathbf{x}) \neq y}\) is the indicator function that equals 1 if \(h(\mathbf{x}) \neq y\) and 0 otherwise. Note that the event \(h(\mathbf{x}) \neq y\) corresponds to the loss function \(\mathcal{L}\) defined over a single pair \((\mathbf{x}, y)\) as \(\mathcal{L}(h(\mathbf{x}), y) = 1_{h(\mathbf{x}) \neq y}\)1. This loss indicates whether the classifier \(h\) makes a mistake on \((\mathbf{x}, y)\), so its expectation is the probability that \(h\) makes a mistake. This is exactly the same form as the generalization error in Definition 105.
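To make the definition concrete, here is a minimal sketch (assuming NumPy; the helper names `zero_one_loss` and `empirical_risk` are illustrative) that approximates the true risk by its Monte Carlo estimate, i.e. the mean 0-1 loss over a finite sample drawn from \(\mathcal{P}_{\mathcal{D}}\):

```python
import numpy as np


def zero_one_loss(y_pred: np.ndarray, y_true: np.ndarray) -> np.ndarray:
    """0-1 loss: 1 where the prediction is wrong, 0 where it is correct."""
    return (y_pred != y_true).astype(float)


def empirical_risk(h, X: np.ndarray, y: np.ndarray) -> float:
    """Monte Carlo estimate of R_D(h): the mean 0-1 loss over a sample
    (x_i, y_i) assumed to be drawn i.i.d. from P_D."""
    return float(np.mean(zero_one_loss(h(X), y)))
```

By the law of large numbers, this average converges to \(\mathcal{R}_{\mathcal{D}}(h)\) as the sample size grows.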
We can construct a classifier \(h^{*}\) that minimizes the true risk if we know the true distribution \(\mathcal{P}_{\mathcal{D}}\). Such a classifier is called the Bayes classifier, defined as:
\[
h^{*}(\boldsymbol{x}) := \arg\max_{k \in \{1, \ldots, K\}} \mathbb{P}\left(Y = k \mid X = \boldsymbol{x}\right),
\]
i.e. \(h^{*}\) assigns \(\boldsymbol{x}\) to the class with the largest posterior probability.
It can be shown that \(h^{*}\) minimizes the true risk \(\mathcal{R}_{\mathcal{D}}\) under the 0-1 loss among all classifiers.
This means that for any classifier \(h \in \mathcal{H}\), we always have
\[
\mathcal{R}_{\mathcal{D}}(h^{*}) \leq \mathcal{R}_{\mathcal{D}}(h).
\]
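Since \(h^{*}\) requires knowing \(\mathcal{P}_{\mathcal{D}}\), it can only be written down when the distribution is fully specified. Below is a minimal sketch under a hypothetical two-class setup with known priors and Gaussian class-conditionals (SciPy assumed); by Bayes' rule, \(\mathbb{P}(Y = k \mid X = \boldsymbol{x}) \propto \pi_k \, p_k(\boldsymbol{x})\), so taking the argmax over the unnormalized scores is enough:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical fully specified P_D: priors pi_k and class-conditionals
# X | Y = k ~ N(mu_k, Sigma_k) in D = 2 dimensions (classes indexed from 0 here).
priors = np.array([0.6, 0.4])
means = [np.array([0.0, 0.0]), np.array([2.0, 2.0])]
covs = [np.eye(2), np.eye(2)]


def bayes_classifier(x: np.ndarray) -> int:
    """h*(x) = argmax_k P(Y = k | X = x). The posterior is proportional
    to pi_k * p_k(x), so the normalizing constant p(x) can be dropped."""
    scores = [
        priors[k] * multivariate_normal.pdf(x, means[k], covs[k])
        for k in range(len(priors))
    ]
    return int(np.argmax(scores))


print(bayes_classifier(np.array([0.2, -0.1])))  # 0: closer to the class-0 mean
print(bayes_classifier(np.array([1.9, 2.3])))   # 1: closer to the class-1 mean
```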
(Bayes Optimal Classifier and Naive Bayes)
If the components of \(\mathbf{X}\) are conditionally independent given \(Y\), then the Bayes classifier is the same as the Naive Bayes classifier.
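As a sketch of why this holds (hypothetical per-feature Gaussian class-conditionals, SciPy assumed): conditional independence factorizes the class-conditional density as \(p_k(\boldsymbol{x}) = \prod_{d=1}^{D} p_{k,d}(x_d)\), so the Bayes rule argmax decomposes into a sum of per-feature log-likelihoods, which is exactly the Naive Bayes decision rule:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical per-feature class-conditionals X_d | Y = k ~ N(mu[k, d], sigma[k, d]),
# i.e. the components of X are conditionally independent given Y.
priors = np.array([0.6, 0.4])
mu = np.array([[0.0, 0.0], [2.0, 2.0]])
sigma = np.ones((2, 2))


def naive_bayes_classifier(x: np.ndarray) -> int:
    """argmax_k [log pi_k + sum_d log p_{k,d}(x_d)]: under conditional
    independence this coincides with the Bayes classifier h*."""
    log_scores = [
        np.log(priors[k]) + norm.logpdf(x, loc=mu[k], scale=sigma[k]).sum()
        for k in range(len(priors))
    ]
    return int(np.argmax(log_scores))
```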
(Some remarks)
Note that this definition merely says that the Bayes classifier achieves minimal 0-1 loss among all deterministic classifiers; it says nothing about the Bayes classifier achieving zero error.
Also note that this construction is of course not possible in the general case of a classifier \(h \in \mathcal{H}\), where the underlying distribution \(\mathcal{P}_{\mathcal{D}}\) is unknown.
Further Readings
Hal Daumé III. “Chapter 2.1. Data Generating Distributions.” In A Course in Machine Learning, January 2017.
1. This is exactly the 0-1 loss function.