[{"data":1,"prerenderedAt":75},["ShallowReactive",2],{"term-t\u002Ftoken":3,"related-t\u002Ftoken":59},{"id":4,"title":5,"acronym":6,"body":7,"category":40,"description":41,"difficulty":42,"extension":43,"letter":44,"meta":45,"navigation":46,"path":47,"related":48,"seo":53,"sitemap":54,"stem":57,"subcategory":6,"__hash__":58},"terms\u002Fterms\u002Ft\u002Ftoken.md","Token",null,{"type":8,"value":9,"toc":33},"minimark",[10,15,19,23,26,30],[11,12,14],"h2",{"id":13},"eli5-the-vibe-check","ELI5 — The Vibe Check",[16,17,18],"p",{},"In AI-land, a token is a chunk of text — roughly 3\u002F4 of a word. Every time you talk to an AI, your message gets chopped into tokens, processed, and you get tokens back. More tokens = more expensive. It's the currency of the AI world. 'Hello world' is 2 tokens, but emoji are weirdly expensive and non-English text uses more tokens per word.",[11,20,22],{"id":21},"real-talk","Real Talk",[16,24,25],{},"A token is the fundamental unit of text processing in LLMs. Tokenizers split text into subword units using algorithms like BPE (Byte Pair Encoding). Token count determines both cost (API pricing is per-token) and context window usage. English text averages ~1.3 tokens per word; code typically has a higher token-per-character ratio due to syntax.",[11,27,29],{"id":28},"when-youll-hear-this","When You'll Hear This",[16,31,32],{},"\"That prompt is 4,000 tokens — it's going to cost us.\" \u002F \"We optimized the system prompt from 2,000 to 800 tokens.\"",{"title":34,"searchDepth":35,"depth":35,"links":36},"",2,[37,38,39],{"id":13,"depth":35,"text":14},{"id":21,"depth":35,"text":22},{"id":28,"depth":35,"text":29},"vibecoding","In AI-land, a token is a chunk of text — roughly 3\u002F4 of a word.","beginner","md","t",{},true,"\u002Fterms\u002Ft\u002Ftoken",[49,50,51,52],"Context Window","Tokenizer","LLM","Prompt Engineering",{"title":5,"description":41},{"changefreq":55,"priority":56},"weekly",0.7,"terms\u002Ft\u002Ftoken","2oAsGaY8EpfFFVZlG5gzLb65kjeWERfKOX31RDj1fKM",[60,64,69,72],{"title":49,"path":61,"acronym":6,"category":40,"difficulty":62,"description":63},"\u002Fterms\u002Fc\u002Fcontext-window","intermediate","A context window is how much text an AI can 'see' at once — its working memory.",{"title":51,"path":65,"acronym":66,"category":67,"difficulty":42,"description":68},"\u002Fterms\u002Fl\u002Fllm","Large Language Model","ai","An LLM is a humongous AI that read basically the entire internet and learned to predict what words come next, really really well.",{"title":52,"path":70,"acronym":6,"category":40,"difficulty":62,"description":71},"\u002Fterms\u002Fp\u002Fprompt-engineering","Prompt engineering is the art of talking to AI so it actually does what you want.",{"title":50,"path":73,"acronym":6,"category":67,"difficulty":62,"description":74},"\u002Fterms\u002Ft\u002Ftokenizer","A tokenizer chops text into pieces that the AI model can understand — but not in ways humans would expect.",1776518319240]