[{"data":1,"prerenderedAt":69},["ShallowReactive",2],{"term-v\u002Fvariance":3,"related-v\u002Fvariance":59},{"id":4,"title":5,"acronym":6,"body":7,"category":40,"description":41,"difficulty":42,"extension":43,"letter":44,"meta":45,"navigation":46,"path":47,"related":48,"seo":53,"sitemap":54,"stem":57,"subcategory":6,"__hash__":58},"terms\u002Fterms\u002Fv\u002Fvariance.md","Variance",null,{"type":8,"value":9,"toc":33},"minimark",[10,15,19,23,26,30],[11,12,14],"h2",{"id":13},"eli5-the-vibe-check","ELI5 — The Vibe Check",[16,17,18],"p",{},"Variance in ML means your model is too sensitive to the specific training data it saw. Change the training data a little and the model changes a lot — it learned all the noise. High variance usually means overfitting. The bias-variance tradeoff is the fundamental tension in ML: simple models have high bias, complex models have high variance.",[11,20,22],{"id":21},"real-talk","Real Talk",[16,24,25],{},"Variance in the bias-variance tradeoff refers to the model's sensitivity to fluctuations in the training data. High variance indicates overfitting: the model captures noise rather than signal, and small changes in training data produce large changes in model behavior. Regularization, ensemble methods, and more data help reduce variance.",[11,27,29],{"id":28},"when-youll-hear-this","When You'll Hear This",[16,31,32],{},"\"High variance means the model changes a lot across different training splits.\" \u002F \"We're in the high-variance regime — add dropout.\"",{"title":34,"searchDepth":35,"depth":35,"links":36},"",2,[37,38,39],{"id":13,"depth":35,"text":14},{"id":21,"depth":35,"text":22},{"id":28,"depth":35,"text":29},"ai","Variance in ML means your model is too sensitive to the specific training data it saw.","intermediate","md","v",{},true,"\u002Fterms\u002Fv\u002Fvariance",[49,50,51,52],"Bias","Overfitting","Underfitting","Regularization",{"title":5,"description":41},{"changefreq":55,"priority":56},"weekly",0.7,"terms\u002Fv\u002Fvariance","IYyWlP4t_wnT30rS_PF-4xkJq1LQ2QDStS01PpgLbQ8",[60,63,66],{"title":49,"path":61,"acronym":6,"category":40,"difficulty":42,"description":62},"\u002Fterms\u002Fb\u002Fbias","In ML, bias means the model has systematic errors — it's consistently wrong in the same direction.",{"title":50,"path":64,"acronym":6,"category":40,"difficulty":42,"description":65},"\u002Fterms\u002Fo\u002Foverfitting","Overfitting is when your model gets TOO good at the training data and becomes useless on new data.",{"title":51,"path":67,"acronym":6,"category":40,"difficulty":42,"description":68},"\u002Fterms\u002Fu\u002Funderfitting","Underfitting is the opposite of overfitting — the model hasn't learned enough and does badly on BOTH the training data AND new data.",1776518321775]