Is the Learning Problem Solvable?
The learning problem at hand is to find a hypothesis \(h \in \mathcal{H}\) that minimizes the empirical risk \(\hat{\mathcal{R}}\) over the training sample \(\mathcal{S}\), which is drawn from the underlying, unknown true distribution \(\mathcal{D}\), such that the generalization risk/error satisfies \(\mathcal{R} \leq \hat{\mathcal{R}} + \epsilon\) with high probability.
This section is dedicated to answering the question: Is the learning problem solvable? More concretely, the learning problem is solvable if, for any hypothesis \(h \in \mathcal{H}\) and any small \(\epsilon > 0\), the following probability can be made sufficiently small:

\[
\mathbb{P}\left[\sup_{h \in \mathcal{H}} \left|\mathcal{R}(h) - \hat{\mathcal{R}}(h)\right| > \epsilon\right]
\]
That is, the probability that the least upper bound (the supremum \(\sup_{h \in \mathcal{H}}\)) of the absolute difference between \(\mathcal{R}\) and \(\hat{\mathcal{R}}\) exceeds a small value \(\epsilon\). If this probability is sufficiently small, that is, if there is very little chance that \(\hat{\mathcal{R}}\) differs much from \(\mathcal{R}\), then the learning problem is solvable (cited from Machine Learning Theory).
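The uniform deviation \(\sup_{h \in \mathcal{H}} |\mathcal{R}(h) - \hat{\mathcal{R}}(h)|\) can be observed numerically. The sketch below uses a hypothetical setup, not one from the text: a finite class of threshold classifiers on \([0, 1]\) with a noise-free labeling rule, chosen because the true risk has a closed form there. It shows the supremum of the gap between true and empirical risk shrinking as the sample size grows, which is exactly the sense in which the probability above becomes small.

```python
import random

random.seed(0)

# Hypothetical finite hypothesis class H: threshold classifiers h_t(x) = 1 iff x >= t.
thresholds = [i / 20 for i in range(21)]

# Assumed "true" distribution D: x ~ Uniform(0, 1), label y = 1 iff x >= 0.5
# (noise-free). Under 0-1 loss the true risk of h_t is then |t - 0.5|,
# the probability mass of the region where h_t disagrees with the labeling rule.
def true_risk(t):
    return abs(t - 0.5)

def empirical_risk(t, sample):
    # Fraction of training points on which h_t disagrees with the true label.
    return sum((x >= t) != (x >= 0.5) for x in sample) / len(sample)

sup_dev = {}
for n in (50, 500, 5000):
    sample = [random.random() for _ in range(n)]
    # sup over h in H of |R(h) - R_hat(h)|: the uniform deviation over the class.
    sup_dev[n] = max(abs(true_risk(t) - empirical_risk(t, sample))
                     for t in thresholds)
    print(n, round(sup_dev[n], 4))
```

Because the class is finite, a union bound over the 21 hypotheses plus Hoeffding's inequality already guarantees the printed deviation decays on the order of \(\sqrt{\log|\mathcal{H}| / n}\); infinite classes need the richer capacity measures discussed later.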