Token Counter

Estimate token counts across GPT-4, Claude, Llama, Gemini, and other LLMs. Paste text, see counts per model side-by-side. Files never leave your browser.

[Interactive table: Model | Tokens | Input $/1M | Cost (this text)]
Estimates only. Real tokenizers (tiktoken, SentencePiece, etc.) may differ by ±10%. For exact counts, run the model's tokenizer locally.
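The ±10% caveat exists because an in-browser estimate typically relies on a characters-per-token heuristic rather than the model's real tokenizer. A minimal sketch of that approach — the ratios below are illustrative assumptions, not the tool's actual values:

```python
# Rough token estimation via characters-per-token ratios.
# These ratios are illustrative assumptions: English prose averages
# roughly 4 characters per token under most BPE-style tokenizers,
# but code and non-English text can deviate well beyond ±10%.
CHARS_PER_TOKEN = {
    "gpt-4": 4.0,
    "claude": 3.8,
    "llama": 3.9,
}

def estimate_tokens(text: str, model: str) -> int:
    """Heuristic token estimate; falls back to 4 chars/token for unknown models."""
    ratio = CHARS_PER_TOKEN.get(model, 4.0)
    return max(1, round(len(text) / ratio))
```

For exact counts, run the model's own tokenizer (e.g. tiktoken for OpenAI models) instead of a ratio like this.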

What is this for?

Every interaction with an LLM is metered in tokens — sub-word units that the model's tokenizer carves from your text. Tokens drive both context-window limits ("does this fit?") and pricing ("how much will this cost?"). The exact count depends on the model's tokenizer, which you usually don't have at hand. This tool gives you a fast estimate for every major model side-by-side, plus the dollar cost of the input text at each model's published price per million input tokens.
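The cost column is simple arithmetic: estimated tokens times the published input price, divided by one million. A sketch, where the price in the usage comment is a hypothetical figure rather than any model's current list price:

```python
def input_cost_usd(tokens: int, price_per_million: float) -> float:
    """Dollar cost of sending `tokens` input tokens at a given $/1M rate."""
    return tokens * price_per_million / 1_000_000

# e.g. 10,000 input tokens at a hypothetical $3.00 per 1M tokens
cost = input_cost_usd(10_000, 3.00)  # 0.03 dollars
```

Output prices are usually higher than input prices, so the same formula applies separately to completion tokens with the output rate.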

When to use it

Honesty about accuracy

Pricing notes

Common gotchas