[{"data":1,"prerenderedAt":129},["ShallowReactive",2],{"term-t\u002Ftemperature":3,"related-t\u002Ftemperature":109},{"id":4,"title":5,"acronym":6,"body":7,"category":89,"description":90,"difficulty":91,"extension":92,"letter":93,"meta":94,"navigation":95,"path":96,"related":97,"seo":103,"sitemap":104,"stem":107,"subcategory":6,"__hash__":108},"terms\u002Fterms\u002Ft\u002Ftemperature.md","Temperature",null,{"type":8,"value":9,"toc":83},"minimark",[10,15,19,23,26,30,72,76,79],[11,12,14],"h2",{"id":13},"eli5-the-vibe-check","ELI5 — The Vibe Check",[16,17,18],"p",{},"Temperature controls how creative (or chaotic) an AI's responses are. Low temperature (like 0.1) makes it boring, safe, and predictable — great for code. High temperature (like 1.5) makes it creative, surprising, and occasionally unhinged — great for poetry. It's the AI's personality dial.",[11,20,22],{"id":21},"real-talk","Real Talk",[16,24,25],{},"Temperature is a hyperparameter that scales the logits (dividing them by T) before the softmax during token sampling. A temperature of 0 makes the model deterministic (it always picks the highest-probability token). Higher temperatures flatten the probability distribution, making lower-probability tokens more likely to be sampled, increasing diversity and creativity.",[11,27,29],{"id":28},"show-me-the-code","Show Me The Code",[31,32,37],"pre",{"className":33,"code":34,"language":35,"meta":36,"style":36},"language-python shiki shiki-themes material-theme-lighter material-theme material-theme-palenight","response = client.chat.completions.create(\n    model=\"gpt-4\",\n    messages=[{\"role\": \"user\", \"content\": \"Write a haiku about bugs\"}],\n    temperature=0.9  # creative mode\n)\n","python","",[38,39,40,48,54,60,66],"code",{"__ignoreMap":36},[41,42,45],"span",{"class":43,"line":44},"line",1,[41,46,47],{},"response = client.chat.completions.create(\n",[41,49,51],{"class":43,"line":50},2,[41,52,53],{},"    model=\"gpt-4\",\n",[41,55,57],{"class":43,"line":56},3,[41,58,59],{},"    messages=[{\"role\": \"user\", \"content\": \"Write a haiku about bugs\"}],\n",[41,61,63],{"class":43,"line":62},4,[41,64,65],{},"    temperature=0.9  # creative mode\n",[41,67,69],{"class":43,"line":68},5,[41,70,71],{},")\n",[11,73,75],{"id":74},"when-youll-hear-this","When You'll Hear This",[16,77,78],{},"\"Set temperature to 0 for deterministic outputs.\" \u002F \"Crank up the temperature if the responses feel too robotic.\"",[80,81,82],"style",{},"html .light .shiki span {color: var(--shiki-light);background: var(--shiki-light-bg);font-style: var(--shiki-light-font-style);font-weight: var(--shiki-light-font-weight);text-decoration: var(--shiki-light-text-decoration);}html.light .shiki span {color: var(--shiki-light);background: var(--shiki-light-bg);font-style: var(--shiki-light-font-style);font-weight: var(--shiki-light-font-weight);text-decoration: var(--shiki-light-text-decoration);}html .default .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .dark .shiki span {color: var(--shiki-dark);background: var(--shiki-dark-bg);font-style: var(--shiki-dark-font-style);font-weight: var(--shiki-dark-font-weight);text-decoration: var(--shiki-dark-text-decoration);}html.dark .shiki span {color: var(--shiki-dark);background: var(--shiki-dark-bg);font-style: var(--shiki-dark-font-style);font-weight: var(--shiki-dark-font-weight);text-decoration: var(--shiki-dark-text-decoration);}",{"title":36,"searchDepth":50,"depth":50,"links":84},[85,86,87,88],{"id":13,"depth":50,"text":14},{"id":21,"depth":50,"text":22},{"id":28,"depth":50,"text":29},{"id":74,"depth":50,"text":75},"ai","Temperature controls how creative (or chaotic) an AI's responses are. Low temperature (like 0.1) makes it boring, safe, and predictable — great for code.","intermediate","md","t",{},true,"\u002Fterms\u002Ft\u002Ftemperature",[98,99,100,101,102],"Top-p","Top-k","Token","Inference","LLM",{"title":5,"description":90},{"changefreq":105,"priority":106},"weekly",0.7,"terms\u002Ft\u002Ftemperature","6DbVicrw5dtOad72yMpB2jlbi49fI3w19H97kF7hTss",[110,113,118,122,126],{"title":101,"path":111,"acronym":6,"category":89,"difficulty":91,"description":112},"\u002Fterms\u002Fi\u002Finference","Inference is when the AI actually runs and generates output — as opposed to training, which is when it's learning.",{"title":102,"path":114,"acronym":115,"category":89,"difficulty":116,"description":117},"\u002Fterms\u002Fl\u002Fllm","Large Language Model","beginner","An LLM is a humongous AI that read basically the entire internet and learned to predict what words come next, really really well.",{"title":100,"path":119,"acronym":6,"category":120,"difficulty":116,"description":121},"\u002Fterms\u002Ft\u002Ftoken","vibecoding","In AI-land, a token is a chunk of text — roughly 3\u002F4 of a word.",{"title":99,"path":123,"acronym":6,"category":89,"difficulty":124,"description":125},"\u002Fterms\u002Ft\u002Ftop-k","advanced","Top-k limits the AI's word choices to the K most likely options. If K is 50, the AI only picks from the top 50 most probable words for each step.",{"title":98,"path":127,"acronym":6,"category":89,"difficulty":124,"description":128},"\u002Fterms\u002Ft\u002Ftop-p","Top-p (also called nucleus sampling) is another dial that controls how an AI picks its next word.",1776518317959]