[{"data":1,"prerenderedAt":77},["ShallowReactive",2],{"term-o\u002Foverfitting":3,"related-o\u002Foverfitting":61},{"id":4,"title":5,"acronym":6,"body":7,"category":40,"description":41,"difficulty":42,"extension":43,"letter":44,"meta":45,"navigation":46,"path":47,"related":48,"seo":55,"sitemap":56,"stem":59,"subcategory":6,"__hash__":60},"terms\u002Fterms\u002Fo\u002Foverfitting.md","Overfitting",null,{"type":8,"value":9,"toc":33},"minimark",[10,15,19,23,26,30],[11,12,14],"h2",{"id":13},"eli5-the-vibe-check","ELI5 — The Vibe Check",[16,17,18],"p",{},"Overfitting is when your model gets TOO good at the training data and becomes useless on new data. It's like a student who memorizes every past exam question but can't solve any new problems. The model learned the noise and quirks of the training set instead of the actual underlying pattern. It aces training, flunks production.",[11,20,22],{"id":21},"real-talk","Real Talk",[16,24,25],{},"Overfitting occurs when a model learns the training data too closely, including noise and random variation, at the expense of generalization to unseen data. It manifests as low training loss but high validation\u002Ftest loss. Prevention strategies include regularization (dropout, weight decay), early stopping, data augmentation, and cross-validation.",[11,27,29],{"id":28},"when-youll-hear-this","When You'll Hear This",[16,31,32],{},"\"The model is overfitting — validation loss is going up.\" \u002F \"More data helps prevent overfitting.\"",{"title":34,"searchDepth":35,"depth":35,"links":36},"",2,[37,38,39],{"id":13,"depth":35,"text":14},{"id":21,"depth":35,"text":22},{"id":28,"depth":35,"text":29},"ai","Overfitting is when your model gets TOO good at the training data and becomes useless on new data.","intermediate","md","o",{},true,"\u002Fterms\u002Fo\u002Foverfitting",[49,50,51,52,53,54],"Underfitting","Bias","Variance","Epoch","Training","Regularization",{"title":5,"description":41},{"changefreq":57,"priority":58},"weekly",0.7,"terms\u002Fo\u002Foverfitting","WsFEMP61xWQL9lLCGwpmZyE4wbd1kdRdzw4eaNh9nPI",[62,65,68,71,74],{"title":50,"path":63,"acronym":6,"category":40,"difficulty":42,"description":64},"\u002Fterms\u002Fb\u002Fbias","In ML, bias means the model has systematic errors — it's consistently wrong in the same direction.",{"title":52,"path":66,"acronym":6,"category":40,"difficulty":42,"description":67},"\u002Fterms\u002Fe\u002Fepoch","An epoch is one complete pass through your entire training dataset. If you have 100,000 examples, one epoch means the model has seen all 100,000 once.",{"title":53,"path":69,"acronym":6,"category":40,"difficulty":42,"description":70},"\u002Fterms\u002Ft\u002Ftraining","Training is the long, expensive process where an AI learns from data.",{"title":49,"path":72,"acronym":6,"category":40,"difficulty":42,"description":73},"\u002Fterms\u002Fu\u002Funderfitting","Underfitting is the opposite of overfitting — the model hasn't learned enough and does badly on BOTH the training data AND new data.",{"title":51,"path":75,"acronym":6,"category":40,"difficulty":42,"description":76},"\u002Fterms\u002Fv\u002Fvariance","Variance in ML means your model is too sensitive to the specific training data it saw.",1776518299935]