Boosting Models in Machine Learning, Simply Explained
AI, But Simple Issue #80

Hello from the AI, but simple team! If you enjoy our content, consider supporting us so we can keep doing what we do.
Our newsletter is no longer sustainable to run at no cost, so we’re relying on different measures to cover operational expenses. Thanks again for reading!
In machine learning, ensembling is a powerful technique that improves model generalization and performance by combining multiple weak models into a single stronger one.
Gradient boosting is a key ensembling technique that achieves state-of-the-art results in regression and classification, powering models like XGBoost, LightGBM, and CatBoost.
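To make this concrete, here is a minimal sketch of training a gradient boosting model for regression. It uses scikit-learn's GradientBoostingRegressor as a simpler stand-in for libraries like XGBoost or LightGBM, and the synthetic dataset and hyperparameters are purely illustrative choices, not a recommended setup.

```python
# Minimal sketch: gradient boosting for regression with scikit-learn.
# The data and hyperparameters below are illustrative assumptions only.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic regression dataset (placeholder for real data)
X, y = make_regression(n_samples=1000, n_features=10, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each of the 200 shallow trees is a weak learner added sequentially
model = GradientBoostingRegressor(n_estimators=200, learning_rate=0.1, max_depth=3)
model.fit(X_train, y_train)

preds = model.predict(X_test)
print("Test MSE:", mean_squared_error(y_test, preds))
```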
Gradient boosting is so performant that models implementing this technique have dominated Kaggle competitions and real-world supervised learning applications for many years.
It is important to note that gradient boosting is a supervised algorithm. For unsupervised applications, gradient boosting would not work—we would have to look at other ensembling techniques specific to unsupervised models.
The "gradient" in gradient boosting refers to its use of gradient descent to iteratively improve predictions. Gradient boosting builds multiple weak models into a strong model sequentially, with each new model correcting the errors of the previous models.
