[{"data":1,"prerenderedAt":74},["ShallowReactive",2],{"term-g\u002Fgolden-test":3,"related-g\u002Fgolden-test":63},{"id":4,"title":5,"acronym":6,"body":7,"category":45,"description":46,"difficulty":47,"extension":48,"letter":49,"meta":50,"navigation":51,"path":52,"related":53,"seo":57,"sitemap":58,"stem":61,"subcategory":6,"__hash__":62},"terms\u002Fterms\u002Fg\u002Fgolden-test.md","Golden Test",null,{"type":8,"value":9,"toc":38},"minimark",[10,15,19,23,31,35],[11,12,14],"h2",{"id":13},"eli5-the-vibe-check","ELI5 — The Vibe Check",[16,17,18],"p",{},"Golden tests compare output against a saved \"golden\" snapshot — when \"it looks right\" becomes automated and version-controlled. You run the code, approve the output once, and that output becomes the test. Next time it changes, the test fails. It's like taking a photo of something working correctly and using that photo to catch when it stops looking right. Great for rendered HTML, PDF output, API responses, or anything with complex visual structure.",[11,20,22],{"id":21},"real-talk","Real Talk",[16,24,25,26,30],{},"Golden tests (also called approval tests or baseline tests) work by serializing a function's output and comparing it to a stored reference file. Frameworks include Jest's ",[27,28,29],"code",{},"toMatchSnapshot()",", ApprovalTests.NET, and custom golden file comparators. Unlike assertion-based tests that describe expected values explicitly, golden tests capture the full output — which makes them easy to create but potentially noisy when outputs change legitimately.",[11,32,34],{"id":33},"when-youll-hear-this","When You'll Hear This",[16,36,37],{},"\"We use golden tests for the PDF generator — any layout change breaks it and flags for review.\" \u002F \"Golden tests found the regression immediately. Without them we'd have shipped broken HTML.\"",{"title":39,"searchDepth":40,"depth":40,"links":41},"",2,[42,43,44],{"id":13,"depth":40,"text":14},{"id":21,"depth":40,"text":22},{"id":33,"depth":40,"text":34},"testing","Golden tests compare output against a saved 'golden' snapshot — when 'it looks right' becomes automated and version-controlled.","intermediate","md","g",{},true,"\u002Fterms\u002Fg\u002Fgolden-test",[54,55,56],"Snapshot Testing","Test Runner","Flaky Test",{"title":5,"description":46},{"changefreq":59,"priority":60},"weekly",0.7,"terms\u002Fg\u002Fgolden-test","kwy_zzySFZQn5A5oN-4tQ6jLEmqfPoChZ9IKG-J5_hw",[64,67,71],{"title":56,"path":65,"acronym":6,"category":45,"difficulty":47,"description":66},"\u002Fterms\u002Ff\u002Fflaky-test","A flaky test is a test that sometimes passes and sometimes fails for no clear reason — even when nothing changed.",{"title":54,"path":68,"acronym":6,"category":45,"difficulty":69,"description":70},"\u002Fterms\u002Fs\u002Fsnapshot-testing","beginner","Snapshot testing is taking a picture of your component's output and saving it. Next time tests run, it takes a new picture and compares them.",{"title":55,"path":72,"acronym":6,"category":45,"difficulty":69,"description":73},"\u002Fterms\u002Ft\u002Ftest-runner","A test runner is the thing that actually runs your tests and tells you which ones passed and which ones failed.",1775560903080]