Ensemble Models, Simply Explained

AI, But Simple Issue #74


Hello from the AI, but simple team! If you enjoy our content, consider supporting us so we can keep doing what we do.

Our newsletter is no longer sustainable to run at no cost, so we’re relying on different measures to cover operational expenses. Thanks again for reading!

Imagine you’re watching a boxing competition. The bell for the final round rings, and it goes to the judges’ scorecard. You notice that there is a panel of three judges. Why is it that there are usually at least three judges and never just one?

Well, judges can be biased, tired, or make errors and miss important details. Accordingly, we have a panel of judges to form a collective decision.

By combining their scores, we get a final decision that is more balanced and fairer than relying on any single judge's score.

In essence, this is exactly what ensembling does in machine learning. The individual predictions of multiple models are combined to produce one final, more accurate decision.

Every AI model learns differently. One model could potentially be overfit, meaning it has memorized the training data too closely, while another might underfit, meaning it has oversimplified the true complexity of the data.

Combining their outputs, whether by averaging, voting, or stacking them, means individual mistakes can cancel each other out to an extent, which tends to improve overall performance.
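The judge analogy above can be sketched in a few lines of code. This is a minimal illustration, not a production implementation: the three "models" are hypothetical toy classifiers (simple thresholds on a single number), and `majority_vote` implements hard voting, one of the combination strategies mentioned above.

```python
from collections import Counter

def majority_vote(models, x):
    """Hard voting: each model casts one vote, the most common label wins."""
    votes = [model(x) for model in models]
    return Counter(votes).most_common(1)[0][0]

def average_predictions(models, x):
    """Averaging: the mean of the models' numeric outputs (for regression)."""
    preds = [model(x) for model in models]
    return sum(preds) / len(preds)

# Three toy "judges" with slightly different decision thresholds,
# standing in for models that learned slightly different things.
model_a = lambda x: 1 if x > 0.4 else 0  # leans toward class 1
model_b = lambda x: 1 if x > 0.5 else 0
model_c = lambda x: 1 if x > 0.6 else 0  # leans toward class 0

models = [model_a, model_b, model_c]
print(majority_vote(models, 0.55))  # two of three vote 1, so the ensemble says 1
```

Notice that for an input of 0.55, `model_c` gets it "wrong" relative to the other two, but the panel outvotes it, just as a panel of judges outweighs one biased scorecard.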
