
🌱 Beginner — AI Fundamentals

Chapter 3 of 24

📈 Chapter 3: How Neural Networks Learn

Forward pass, loss, and backpropagation

Neural networks learn by training: feed data in, predict an output, compare the prediction to the correct answer (that comparison is the loss), then adjust the weights so the loss goes down. The math that works out how to adjust each weight is backpropagation. One forward pass (prediction), one backward pass (gradients), and one update step together make a single training step, and that step is repeated millions of times.
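That cycle can be sketched in a few lines of plain Python. This is a deliberately tiny illustration, not a real network: a single weight `w` is trained so that `w * x` matches a target, and the data, learning rate, and step count are all made up.

```python
# Minimal sketch of forward pass, loss, backward pass, and update,
# using one parameter and a squared-error loss. Values are illustrative.
x, target = 3.0, 6.0   # we want w * x to equal target, so the ideal w is 2.0
w = 0.0                # the single "weight", starting from an arbitrary value
lr = 0.01              # learning rate: how big each update step is

for step in range(500):
    pred = w * x                     # forward pass: make a prediction
    loss = (pred - target) ** 2      # loss: how wrong was the prediction?
    grad = 2 * (pred - target) * x   # backward pass: d(loss)/dw by the chain rule
    w -= lr * grad                   # update: nudge w in the direction that lowers loss

print(round(w, 3))  # w has been pushed close to the ideal value 2.0
```

Real training does exactly this, except the model has billions of weights and the gradients for all of them are computed at once by backpropagation.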

Training loop (simplified)

1. Data: text (or pairs) the model learns from
2. Tokenize: turn the text into token IDs
3. Forward: the model predicts the next token
4. Loss: compare the prediction to the correct answer
5. Backward: compute gradients of the loss
6. Update: adjust billions of parameters
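To make the Loss step concrete, here is a toy version of how next-token loss is typically computed: the model's raw scores (logits) are turned into probabilities with a softmax, and the loss is the negative log of the probability assigned to the correct token. The vocabulary and scores below are invented for illustration.

```python
import math

# Hypothetical four-word vocabulary and made-up model scores for the next token.
vocab = ["the", "cat", "sat", "mat"]
logits = [1.0, 3.0, 0.5, 0.2]   # raw scores from the forward pass
correct = "cat"                  # the actual next token in the training text

# Softmax: turn raw scores into probabilities that sum to 1.
exps = [math.exp(z) for z in logits]
probs = [e / sum(exps) for e in exps]

# Cross-entropy loss: -log(probability given to the correct token).
# The loss is small when the model puts high probability on the right answer.
loss = -math.log(probs[vocab.index(correct)])
print(round(loss, 3))
```

Training then lowers this number: the Backward and Update steps change the weights so that, next time, the model assigns a bit more probability to the correct token.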

Repeat over huge datasets for many steps. "Parameters" are the numbers being updated; more parameters = more capacity to pick up patterns in the data.
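Where do the billions come from? A quick back-of-envelope count for a hypothetical two-layer fully connected network shows how parameter counts add up (the layer sizes here are made up):

```python
# Hypothetical sizes: 1000-dim input -> 4096 hidden units -> 1000-dim output.
d_in, d_hidden, d_out = 1000, 4096, 1000

# Each fully connected layer has a weight matrix plus a bias vector.
layer1 = d_in * d_hidden + d_hidden   # 1000*4096 weights + 4096 biases
layer2 = d_hidden * d_out + d_out     # 4096*1000 weights + 1000 biases
total = layer1 + layer2

print(f"{total:,}")  # about 8.2 million parameters for this small network
```

Even this toy network has millions of parameters; large language models stack many such layers, which is how the totals reach into the billions.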