Principal Component Analysis (PCA), Simply Explained
AI, But Simple Issue #72

Hello from the AI, But Simple team! If you enjoy our content, consider supporting us so we can keep doing what we do.
Our newsletter is no longer sustainable to run at no cost, so we’re relying on different measures to cover operational expenses. Thanks again for reading!
Dealing with high-dimensional data is one of the biggest challenges in modern data science and machine learning. Images, text embeddings, genomic data, and sensor streams often contain thousands of features (input variables).
Such high-dimensional datasets carry a huge amount of information, but they also make computation more expensive, models harder to train, and results more difficult to interpret.
This is where dimensionality reduction comes in. The central idea is simple: can we represent the data with fewer variables while still retaining most of the important information?
One of the most powerful, widely used, and simple techniques for dimensionality reduction is Principal Component Analysis (PCA).
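To make this concrete, here is a minimal sketch of PCA-based dimensionality reduction using scikit-learn. The data here is random and purely illustrative; in practice it would come from one of the domains above, and the number of components kept would depend on how much variance you want to retain.

```python
# A minimal, illustrative sketch of dimensionality reduction with PCA.
# Assumes numpy and scikit-learn are installed; shapes and counts are hypothetical.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 1000))  # 500 samples, each with 1000 features

pca = PCA(n_components=50)         # keep the 50 directions of highest variance
X_reduced = pca.fit_transform(X)   # shape: (500, 50)

print(X_reduced.shape)
# Fraction of the original variance retained by the 50 components:
print(pca.explained_variance_ratio_.sum())
```

Here the 1000 original features are compressed into 50 new variables (the principal components), and `explained_variance_ratio_` reports how much of the original information, measured as variance, survives the compression.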
