Empirical Risk Minimization

This chapter introduces the Empirical Risk Minimization (ERM) principle, a cornerstone of statistical learning theory that defines a family of learning algorithms with provable performance bounds. The core idea is that we cannot compute exactly how well a model will perform in practice (its true "risk"), because we do not know the true distribution of the data it will encounter. Instead, we measure its average loss on a known set of training data (the "empirical" risk) and choose the model that minimizes it.
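The idea above can be made concrete with a small sketch. Assuming a linear model and squared-error loss (choices made here for illustration, not fixed by the chapter), the empirical risk is just the mean loss over the training sample, and ERM picks the parameters that minimize it; for this particular loss and model class, the minimizer is ordinary least squares:

```python
import numpy as np

def empirical_risk(w, X, y):
    """Average squared-error loss of the linear predictor w on the sample (X, y)."""
    return np.mean((X @ w - y) ** 2)

# Hypothetical training sample: in practice we never see the true
# data-generating distribution, only draws like these.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.5, -2.0, 0.5])        # unknown in a real problem
y = X @ w_true + 0.1 * rng.normal(size=200)

# ERM: choose the hypothesis minimizing the empirical risk. For squared
# loss over linear predictors this minimizer is the least-squares solution.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

print(empirical_risk(w_hat, X, y))  # training-set (empirical) risk of the ERM solution
```

The empirical risk reported here is computed on the same data used to fit `w_hat`, so it is an optimistic estimate of the true risk, which is one of the central issues the theory in this chapter addresses.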

Learning Objectives

Further Reading

  • Jung, Alexander. “Chapter 4. Empirical Risk Minimization.” In Machine Learning: The Basics. Springer Nature Singapore, 2023.

  • Deisenroth, Marc Peter, Cheng Soon Ong, and Aldo A. Faisal. “Chapter 8.2. Empirical Risk Minimization.” In Mathematics for Machine Learning. Cambridge: Cambridge University Press, 2021.

  • Wikipedia: Empirical Risk Minimization

  • Wikipedia: Loss Function