Chat Thread Viewer
Paste an OpenAI or Anthropic messages array (JSON) and see it rendered as a readable chat thread. Colour-coded by role. Tool-call params expanded. Pure browser, no upload.
What is this for?
LLM applications log their conversations as JSON arrays of messages — that's what gets sent to the API and what you'll see in audit logs, evaluation traces, fine-tuning datasets, and SDK debug output. Reading those arrays as a human is awful: walls of escaped strings, tool-call arguments wrapped in escaped JSON-inside-JSON, system prompts mashed in with the rest of the flow. This tool gives you a quick chat-bubble render so you can scan the actual conversation, see which messages contained tool calls, and spot the one prompt that went sideways.
What formats it understands
- OpenAI Chat Completions. `[{role, content}, ...]` with optional `tool_calls` on assistant messages and `role: "tool"` for tool results. The most common shape.
- Anthropic Messages API. `[{role, content: [...]}]` where `content` is an array of blocks (`text`, `tool_use`, `tool_result`, `image`). The system prompt is usually top-level; paste it as a system message if you want it shown.
- LangChain message dumps. `[{type: "human" | "ai" | "system", content: ...}]`, an older LangChain shape still common in saved traces.
- Wrapper objects. If you paste `{"messages": [...]}` or `{"input": [...]}`, the wrapper gets unwrapped automatically.
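The detection logic can be sketched in a few lines. This is an illustrative reimplementation, not the tool's actual code; `detectFormat` and the returned labels are assumed names:

```javascript
// Sketch: guess which messages-array format was pasted.
// Checks are ordered so the most distinctive shape wins.
function detectFormat(input) {
  // Unwrap {"messages": [...]} or {"input": [...]} wrapper objects.
  const arr = Array.isArray(input) ? input : input?.messages ?? input?.input;
  if (!Array.isArray(arr) || arr.length === 0) return "unknown";

  // LangChain dumps carry a top-level "type" on each message.
  if (arr.some((m) => typeof m?.type === "string")) return "langchain";
  // Anthropic messages use an array of content blocks.
  if (arr.some((m) => Array.isArray(m?.content))) return "anthropic";
  // Plain {role, content} objects: OpenAI Chat Completions.
  if (arr.every((m) => typeof m?.role === "string")) return "openai";
  return "unknown";
}
```

Note the ordering: Anthropic messages also have a `role`, so the content-blocks check has to run before the plain `{role, content}` check.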
What gets rendered
- Role-coloured bubbles. System = grey centred, user = indigo right-aligned, assistant = neutral left-aligned, tool result = green.
- Tool calls. Expanded by default with pretty-printed arguments. Both OpenAI's `tool_calls` form and Anthropic's `tool_use` block style are handled. Tool result messages render as a separate bubble with the result content.
- Code fences and inline code. Triple-backtick blocks render as `<pre>` with monospace, single-backtick spans render as inline code. No syntax highlighting (we don't ship a tokenizer for that), but indentation is preserved.
- Image references. Anthropic image blocks render a small pill showing the source URL or media type; we don't actually load the image (keeps the tool offline).
- Stats line. Format detected, message count, tool-call count, and a rough token estimate using the same heuristic as our Token Counter (chars / 3.8 for English, 1 token per CJK char).
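The token heuristic mentioned above is simple enough to show inline. A minimal sketch, assuming the CJK check covers the common ideograph, kana, and Hangul ranges (the tool's exact character ranges may differ):

```javascript
// Rough token estimate: chars / 3.8 for non-CJK text,
// plus 1 token per CJK character.
function estimateTokens(text) {
  // CJK Unified Ideographs, Hiragana/Katakana, Hangul syllables.
  const cjk = (text.match(/[\u3040-\u30ff\u3400-\u9fff\uac00-\ud7af]/g) || []).length;
  const rest = text.length - cjk;
  return Math.round(rest / 3.8) + cjk;
}

estimateTokens("hello world"); // 11 chars / 3.8 ≈ 3
estimateTokens("你好");         // 2 CJK chars → 2
```

It's a scanning aid, not a real tokenizer; expect it to drift from actual API token counts, especially on code-heavy content.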
Common gotchas
- Trailing commas. Standard JSON doesn't allow them. If you copied from a debugger or REPL output, you may need to clean up `{...},]` → `{...}]` before pasting.
- Single quotes. Python's `repr` uses single quotes. Run it through `json.dumps` before pasting, or use a Python-literal-to-JSON converter.
- Anthropic system prompt. The system instruction in Anthropic's API is a top-level field, not a message. If your dump only has the messages array, the system prompt won't be in there; paste it as `{"role": "system", "content": "..."}` at the start to see it.
- Tool-call arguments as escaped JSON. OpenAI returns `arguments` as a string of JSON. We unescape and pretty-print it. If your JSON-in-string is malformed, the raw string is shown instead.
- Privacy. Nothing leaves the page. The whole render runs in JS on whatever JSON you paste. Don't paste anything you wouldn't paste into a notepad app.
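The "unescape and pretty-print, fall back to the raw string" behaviour for tool-call arguments amounts to a parse-with-fallback. A sketch with a hypothetical helper name:

```javascript
// Pretty-print a JSON-in-string tool-call arguments value.
// If the string isn't valid JSON, return it unchanged so the
// user still sees something rather than an error.
function prettyArguments(argString) {
  try {
    return JSON.stringify(JSON.parse(argString), null, 2);
  } catch {
    return argString; // malformed: show the raw string instead
  }
}
```

This is the same try/fallback shape you'd use anywhere you render untrusted JSON: never let one malformed field take down the whole thread render.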