Boosting#
Boosting is an ensemble technique in machine learning that combines many weak learners into a single strong learner. Unlike bagging, which trains its base models independently and in parallel, boosting trains them sequentially: each new model focuses on correcting the mistakes of the ensemble built so far, and the final prediction combines the outputs of all the models. Because each learner is fit to the errors left by its predecessors, boosting primarily reduces bias, and with suitable regularization (e.g., shrinkage) it can also help keep variance in check. The most popular family of boosting algorithms is gradient boosting, which is widely used for regression, classification, and ranking.
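The sequential error-correcting loop can be made concrete with a small sketch. The code below is a minimal from-scratch version of gradient boosting for squared-error regression, assuming scikit-learn and NumPy are available; the dataset, number of rounds, and learning rate are illustrative choices, not part of the original text. Each shallow tree is fit to the residuals left by the current ensemble, and its shrunken prediction is added to the running total.

```python
# Minimal sketch of gradient boosting for squared-error regression.
# Assumes scikit-learn and NumPy; hyperparameters are illustrative.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)

n_rounds = 100        # number of weak learners (boosting rounds)
learning_rate = 0.1   # shrinkage applied to each weak learner

# Start from a constant prediction (the mean of the targets).
prediction = np.full_like(y, y.mean(), dtype=float)
trees = []

for _ in range(n_rounds):
    residuals = y - prediction                     # negative gradient of squared error
    tree = DecisionTreeRegressor(max_depth=2)      # weak learner: a shallow tree
    tree.fit(X, residuals)                         # fit to the current mistakes
    prediction += learning_rate * tree.predict(X)  # update the ensemble
    trees.append(tree)

print("Final training MSE:", np.mean((y - prediction) ** 2))
```

Library implementations such as scikit-learn's GradientBoostingRegressor, XGBoost, and LightGBM follow the same sequential residual-fitting idea, adding regularization, subsampling, and more efficient tree construction on top.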
Further Readings#
Vincent Tan's notes.
Murphy, Kevin P. “Chapter .” In Probabilistic Machine Learning: An Introduction. MIT Press, 2022.
James, Gareth, Daniela Witten, Trevor Hastie, and Robert Tibshirani. “Chapter .” In An Introduction to Statistical Learning: With Applications in R. Boston: Springer, 2022.
Jung, Alexander. “Chapter .” In Machine Learning: The Basics. Singapore: Springer Nature Singapore, 2023.
Bishop, Christopher M. “Chapter .” In Pattern Recognition and Machine Learning. New York: Springer-Verlag, 2006.
Hal Daumé III. “Chapter .” In A Course in Machine Learning, January 2017.
https://github.com/goodboychan/goodboychan.github.io/tree/main/_notebooks