
Neural Network Backpropagation, Mathematically Explained

AI, But Simple Issue #82

 



Imagine you're walking through a maze blindfolded. Each time you take a turn, you feel a tap on your shoulder telling you "warmer" or "colder," depending on whether you've moved closer to or further from the exit of the maze.

Although you still cannot see the maze, you use this feedback to understand how far you are from the exit, eventually finding the correct route out. This is effectively the process of backpropagation in neural networks.

As a neural network trains, it is never told what the ideal weights should be. All it learns is how "wrong" its current predictions are, and it makes corrections to reduce that error.
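To make that idea concrete, here's a toy sketch (our own illustrative example, not a specific model from this issue): gradient descent on a single weight w, learning to fit y = 3x from one training example. The target, learning rate, and step count are all illustrative assumptions.

```python
# A minimal sketch of "learn how wrong you were, then correct":
# gradient descent on one weight w. The target function y = 3x,
# the learning rate, and the step count are illustrative choices.
x, y_true = 2.0, 6.0   # one training example: we want w * x == y_true
w = 0.0                # initial guess for the weight
lr = 0.1               # learning rate

for step in range(20):
    y_pred = w * x              # forward pass: make a prediction
    error = y_pred - y_true     # how "wrong" the prediction is
    grad = 2 * error * x        # d/dw of the squared error (y_pred - y_true)**2
    w -= lr * grad              # correct the weight to shrink the error

print(w)  # approaches 3.0, the weight that drives the error to zero
```

Notice the network never sees the "answer" w = 3 directly; it only ever sees the error signal and the direction in which to reduce it.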

Backpropagation is the calculus machinery that converts each error signal into the information used to update the model. For every parameter (weight or bias) in every layer, it computes the gradient of the loss with respect to that parameter, quantifying how much that parameter contributed to the error.
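Here's a minimal end-to-end sketch of that process in NumPy. The architecture (2 → 3 → 1 with a sigmoid hidden layer), the mean-squared-error loss, and all variable names are our own illustrative assumptions; the point is that the backward pass applies the chain rule layer by layer to produce a gradient for every weight and bias.

```python
# A minimal, self-contained backpropagation sketch for a tiny
# two-layer network (2 -> 3 -> 1). Architecture, loss, and data
# are illustrative assumptions, not the article's exact setup.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: 4 samples with 2 features each, 1 target per sample.
X = rng.normal(size=(4, 2))
y = rng.normal(size=(4, 1))

# Parameters (weights and biases) for layers 1 and 2.
W1, b1 = rng.normal(size=(2, 3)), np.zeros((1, 3))
W2, b2 = rng.normal(size=(3, 1)), np.zeros((1, 1))

lr = 0.1
for step in range(1000):
    # ---- Forward pass: compute predictions and the loss ----
    z1 = X @ W1 + b1            # pre-activation, layer 1
    a1 = sigmoid(z1)            # activation, layer 1
    y_hat = a1 @ W2 + b2        # linear output layer
    loss = np.mean((y_hat - y) ** 2)

    # ---- Backward pass: chain rule, layer by layer ----
    # dL/dy_hat for MSE; the 2/N comes from differentiating the
    # mean of squared errors (N samples, one output each).
    dL_dyhat = 2.0 * (y_hat - y) / len(X)

    # Gradients for layer 2 parameters.
    dL_dW2 = a1.T @ dL_dyhat
    dL_db2 = dL_dyhat.sum(axis=0, keepdims=True)

    # Propagate the error signal back through layer 2, then
    # through the sigmoid nonlinearity of layer 1.
    dL_da1 = dL_dyhat @ W2.T
    dL_dz1 = dL_da1 * a1 * (1.0 - a1)   # sigmoid'(z1) = a1 * (1 - a1)

    # Gradients for layer 1 parameters.
    dL_dW1 = X.T @ dL_dz1
    dL_db1 = dL_dz1.sum(axis=0, keepdims=True)

    # ---- Update: every parameter moves against its own gradient ----
    W1 -= lr * dL_dW1
    b1 -= lr * dL_db1
    W2 -= lr * dL_dW2
    b2 -= lr * dL_db2

print(f"final loss: {loss:.6f}")
```

Every weight and bias gets its own gradient, and each is nudged against that gradient, which is exactly the "correction" step described above, carried out for every parameter in every layer at once.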
