[{"data":1,"prerenderedAt":75},["ShallowReactive",2],{"term-t\u002Ftop-k":3,"related-t\u002Ftop-k":59},{"id":4,"title":5,"acronym":6,"body":7,"category":40,"description":41,"difficulty":42,"extension":43,"letter":44,"meta":45,"navigation":46,"path":47,"related":48,"seo":53,"sitemap":54,"stem":57,"subcategory":6,"__hash__":58},"terms\u002Fterms\u002Ft\u002Ftop-k.md","Top-k",null,{"type":8,"value":9,"toc":33},"minimark",[10,15,19,23,26,30],[11,12,14],"h2",{"id":13},"eli5-the-vibe-check","ELI5 — The Vibe Check",[16,17,18],"p",{},"Top-k limits the AI's word choices to the K most likely options. If K is 50, the AI only picks from the top 50 most probable words for each step. It's like telling someone they can only order from the top 50 items on the menu — still lots of choice, but you won't accidentally end up with the weird exotic stuff nobody orders.",[11,20,22],{"id":21},"real-talk","Real Talk",[16,24,25],{},"Top-k sampling restricts the token sampling pool to the K highest-probability tokens at each generation step. Unlike top-p, the number of candidates is fixed regardless of the probability distribution shape. Lower K values produce more focused output; higher K values allow more diversity. Often combined with temperature and top-p.",[11,27,29],{"id":28},"when-youll-hear-this","When You'll Hear This",[16,31,32],{},"\"Set top-k to 40 for more coherent generation.\" \u002F \"Top-k and top-p can be used together.\"",{"title":34,"searchDepth":35,"depth":35,"links":36},"",2,[37,38,39],{"id":13,"depth":35,"text":14},{"id":21,"depth":35,"text":22},{"id":28,"depth":35,"text":29},"ai","Top-k limits the AI's word choices to the K most likely options. If K is 50, the AI only picks from the top 50 most probable words for each step.","advanced","md","t",{},true,"\u002Fterms\u002Ft\u002Ftop-k",[49,50,51,52],"Temperature","Top-p","Token","Inference",{"title":5,"description":41},{"changefreq":55,"priority":56},"weekly",0.7,"terms\u002Ft\u002Ftop-k","KQlzLj6XUX0F5jtvEywGn9UYVc_gGuRJC6lUBf_zXH0",[60,64,67,72],{"title":52,"path":61,"acronym":6,"category":40,"difficulty":62,"description":63},"\u002Fterms\u002Fi\u002Finference","intermediate","Inference is when the AI actually runs and generates output — as opposed to training, which is when it's learning.",{"title":49,"path":65,"acronym":6,"category":40,"difficulty":62,"description":66},"\u002Fterms\u002Ft\u002Ftemperature","Temperature controls how creative (or chaotic) an AI's responses are. Low temperature (like 0.1) makes it boring, safe, and predictable — great for code.",{"title":51,"path":68,"acronym":6,"category":69,"difficulty":70,"description":71},"\u002Fterms\u002Ft\u002Ftoken","vibecoding","beginner","In AI-land, a token is a chunk of text — roughly 3\u002F4 of a word.",{"title":50,"path":73,"acronym":6,"category":40,"difficulty":42,"description":74},"\u002Fterms\u002Ft\u002Ftop-p","Top-p (also called nucleus sampling) is another dial that controls how an AI picks its next word.",1776518319587]