[{"data":1,"prerenderedAt":74},["ShallowReactive",2],{"term-g\u002Fgradient-descent":3,"related-g\u002Fgradient-descent":60},{"id":4,"title":5,"acronym":6,"body":7,"category":40,"description":41,"difficulty":42,"extension":43,"letter":44,"meta":45,"navigation":46,"path":47,"related":48,"seo":54,"sitemap":55,"stem":58,"subcategory":6,"__hash__":59},"terms\u002Fterms\u002Fg\u002Fgradient-descent.md","Gradient Descent",null,{"type":8,"value":9,"toc":33},"minimark",[10,15,19,23,26,30],[11,12,14],"h2",{"id":13},"eli5-the-vibe-check","ELI5 — The Vibe Check",[16,17,18],"p",{},"Gradient Descent is how an AI learns — it's the algorithm that nudges the model's weights in the right direction after each mistake. Imagine you're blindfolded on a hilly landscape and you want to reach the lowest point. You feel which way is downhill and take a small step. Repeat millions of times and you reach the bottom. That 'downhill direction' is the gradient.",[11,20,22],{"id":21},"real-talk","Real Talk",[16,24,25],{},"Gradient descent is the optimization algorithm used to minimize a loss function by iteratively updating model parameters in the direction opposite to the gradient of the loss with respect to those parameters. Variants include SGD, Adam, AdaGrad, and RMSProp. The learning rate controls step size. Modern deep learning uses mini-batch gradient descent.",[11,27,29],{"id":28},"when-youll-hear-this","When You'll Hear This",[16,31,32],{},"\"The model optimizes via gradient descent.\" \u002F \"Adam is a variant of gradient descent with adaptive learning rates.\"",{"title":34,"searchDepth":35,"depth":35,"links":36},"",2,[37,38,39],{"id":13,"depth":35,"text":14},{"id":21,"depth":35,"text":22},{"id":28,"depth":35,"text":29},"ai","Gradient Descent is how an AI learns — it's the algorithm that nudges the model's weights in the right direction after each mistake.","advanced","md","g",{},true,"\u002Fterms\u002Fg\u002Fgradient-descent",[49,50,51,52,53],"Backpropagation","Loss Function","Learning Rate","Training","Weights",{"title":5,"description":41},{"changefreq":56,"priority":57},"weekly",0.7,"terms\u002Fg\u002Fgradient-descent","5268LMHuZomk4N_w6XNbqOFIcPmlOUqC8ZJNXXaNtE4",[61,64,68,71],{"title":49,"path":62,"acronym":6,"category":40,"difficulty":42,"description":63},"\u002Fterms\u002Fb\u002Fbackpropagation","Backpropagation is how errors flow backwards through a neural network during training.",{"title":50,"path":65,"acronym":6,"category":40,"difficulty":66,"description":67},"\u002Fterms\u002Fl\u002Floss-function","intermediate","The loss function is the AI's score of how badly it's doing.",{"title":52,"path":69,"acronym":6,"category":40,"difficulty":66,"description":70},"\u002Fterms\u002Ft\u002Ftraining","Training is the long, expensive process where an AI learns from data.",{"title":53,"path":72,"acronym":6,"category":40,"difficulty":66,"description":73},"\u002Fterms\u002Fw\u002Fweights","Weights are the numbers inside a neural network that determine what it knows and how it behaves — they're the AI's 'brain cells'.",1776518284786]