Claude Code Generated the Broken Image — 50 Agent Calls, 3 Redesigns, and a Stealth SRI Bug
The broken glyphs were obvious the moment I opened the screenshot. Korean characters rendering as □□□□ across article cards on spoonai.me — labels like 총 규모 ("total scale"), OpenAI 지분 ("OpenAI stake"), and 구조 ("structure") reduced to empty boxes. It looked like a font loading issue. The actual cause was more embarrassing: Claude Code had generated the image itself, and the Korean glyphs never made it into the embedded font.
TL;DR The corrupted glyphs came from a Claude Code-generated infographic sitting in public/images/posts/. Found it, deleted it, deployed the fix in one commit. Same week: three sites redesigned, 50 Agent calls dispatched in parallel across two sessions, and Codex caught an SRI hash mismatch that slipped past visual review entirely.
The Font Renderer’s Confession
The prompt was direct:
“The broken fonts are from images Claude Code made directly — not pulled from external sources. Delete all self-generated images.”
Claude Code scanned the entire spoonai-site repo. The signal was the credit field in each post’s frontmatter: "spoonai" or "spoonai 정리" meant self-generated; "CNBC", "Anthropic", "TechCrunch" meant external source.
Full scan result: one self-generated image. public/images/posts/2026-05-06/openai-deployment-company-tpg-10b-01.jpg (58 KB). Two posts — the Korean and English versions of the OpenAI Deployment Company article — both referenced this file. Removed the `image:` block from both posts' frontmatter, deleted the JPG.
Commit 8b55047. Three files changed. Push → Vercel auto-deploy. The hollow boxes disappeared from the article cards; the hero fell back to a gradient.
The lesson here is relevant to any automated content pipeline: content generated by Claude Code is hard to trace after the fact. Without a metadata field like credit set at creation time, there’s no reliable way to distinguish self-generated assets from externally sourced ones. When your pipeline creates files, explicit provenance tracking isn’t optional — it’s the only way to clean up later without guessing.
50 Agent Calls: Five Design Variants in Parallel
Two large redesign requests landed this week, plus a smaller CTA improvement.
spoonai redesign — repositioned toward a card-news format. Five design variants generated in a single pass:
- `01-wire.html` — Editorial Magazine
- `02-quarterly.html` — Quarterly Report
- `03-masonry.html` — Masonry Feed
- `04-object.html` — Object-Oriented
- `05-brief.html` — Brief/Minimal
I selected 05-brief.html’s direction. That got refined into 05a-editorial-premium.html as the production target — the version that’s now heading into the actual codebase.
coffeechat.it.kr redesign — a mentoring platform for the Korean game industry. Five variants dispatched in parallel. Feedback after review: “doesn’t look professional enough.” Pivoted the visual direction toward platforms like Inflearn, Class101, and FastCampus — Korean edtech platforms known for authoritative, conversion-optimized UI. Wrote a new plan.md, dispatched another five variants in parallel. Ten agent calls for this project alone across two rounds.
daymoon photographer site — a smaller but concrete improvement. The top-nav Booking link needed to anchor into the redesigned booking section rather than the old contact form. Added four contact channel options as a card cluster: booking reservation link, KakaoChannel, Naver notice board, and Instagram DM. Two files touched: index.html and styles.css.
The efficiency case for parallel dispatch is straightforward. Generating five HTML prototypes sequentially means five round-trips — each waiting on the previous. Launching five frontend-implementer Agent instances simultaneously collapses that into one. Across two sessions (spoonai + coffeechat), Agent was invoked 50 times, accounting for 21% of 234 total tool calls.
The pattern pays off specifically when the subtasks are independent. Five design variants with no shared state between them is the ideal scenario. Each Agent gets its own direction, generates its own file, and the results come back in parallel. There’s nothing to coordinate.
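The fan-out pattern above can be sketched in a few lines. `generate_variant` here is a hypothetical stand-in for a single Agent invocation, not the real dispatch API:

```python
from concurrent.futures import ThreadPoolExecutor

def generate_variant(direction: str) -> str:
    """Stand-in for one Agent call; a real dispatch would hit the agent API."""
    return f"<!-- {direction} prototype -->"

directions = ["wire", "quarterly", "masonry", "object", "brief"]

# Five independent subtasks, no shared state: fan out, collect in input order.
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(generate_variant, directions))
```

Because `pool.map` preserves input order, each variant's output maps cleanly back to its design brief, and total wall time is bounded by the slowest subtask rather than the sum of all five.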
The Bug Codex Found That Visual Review Missed
During the coffeechat redesign, React was loaded via CDN in four of the five variants (V2 through V5). Each loaded react.production.min.js, but the SRI (Subresource Integrity) hashes attached to the <script> tags were generated for the .development.js build.
SRI is a browser security mechanism: the browser fetches the resource, hashes it, and compares against the integrity attribute. If there’s a mismatch, the script is blocked — silently, unless you have DevTools open. Four of the five variants had React not loading at all.
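The check the browser performs is mechanical, which is why an external pass can replicate it. A sketch, assuming the sha384 algorithm used by the hashes in these tags; verifying a real tag by hand means fetching the exact CDN file and comparing:

```python
import base64
import hashlib

def sri_sha384(body: bytes) -> str:
    """Compute an SRI integrity value the way the browser does: hash, then base64."""
    digest = hashlib.sha384(body).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

def sri_ok(body: bytes, integrity: str) -> bool:
    """The browser blocks the script unless the computed value matches the attribute."""
    return sri_sha384(body) == integrity
```

A hash minted from the .development.js bytes will never match the .production.min.js bytes, so the production script is silently refused.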
The design-reviewer agent passed all of them. Visually, the static HTML rendered — layout, typography, colors — everything looked correct. There was no obvious breakage, no console error visible in a screenshot, nothing that would flag in a visual pass.
Codex cross-verification caught it explicitly:
V2/V3/V4/V5 — loading react.production.min.js
with SRI hash for .development.js. Browser will block.
Swapped in the correct hashes for the production CDN URLs immediately.
This is why cross-verification is in the pipeline rather than optional. A model reviewing its own output develops blind spots — it pattern-matches over context it already processed rather than reading independently. An external pass catches different things. SRI attributes are a particular failure case: the attribute is on the tag, the page renders (the static HTML doesn’t need React to display), and the mismatch is only detectable by computing a SHA hash. Visual review structurally cannot catch this class of bug.
When the Hook Blocked Four File Reads
One session produced an unexpected dead end. The task was reading four medical and dental advertising analysis files:
Read(summary.json) → cancelled
Read(2026-05-10-daily-update.md) → cancelled
Read(rolling-knowledge-base.md) → cancelled
Read(source-index.md) → cancelled
Root cause: the orchestration hook runs a state.sh Bash call before processing the message. The workflow path for that session was misconfigured — the project slug didn’t match what state.sh expected. Bash failed. All four Read calls bundled in the same message were cancelled along with it.
Retrying hit the same failure mode. The session ended with “cannot summarize without file contents.”
The takeaway for composing multi-tool messages: don’t bundle Bash and Read calls in the same message when the Bash call might fail. The hook cancels everything in the group, not just the failing call. Let Bash resolve first, confirm it succeeded, then issue the Read calls separately. The failure mode isn’t obvious until you hit it in production.
This is a real cost of orchestration hooks: they add coordination overhead that can cascade into unrelated failures. The tradeoff is that they enforce workflow state discipline. The fix isn’t removing the hooks — it’s changing how messages are composed when Bash calls are involved.
Week Stats
| Metric | Count | Share |
|---|---|---|
| Total sessions | 4 | — |
| Total tool calls | 234 | 100% |
| Bash | 124 | 53% |
| Agent | 50 | 21% |
| Read | 17 | 7% |
| Write | 11 | 5% |
| Edit | 5 | 2% |
| Files created | 9 | — |
| Files modified | 5 | — |
Bash at 53% reflects the volume of state.sh state management, build verification, and git commit/push cycles running through the orchestration layer. That overhead is real — roughly half of all tool calls this week were coordination, not content.
Agent at 21% is elevated compared to a typical week. The redesign workload drove it: generating five independent HTML prototypes per project is exactly the kind of work parallel dispatch was built for. Sequential generation would have produced the same output at five times the latency.
The Read/Write/Edit numbers (17/11/5) are low, which makes sense — the actual content creation was delegated to Agent instances rather than happening in the main session.
Next: the spoonai editorial-premium direction is finalized. What’s left is migrating 05a-editorial-premium.html from a standalone prototype into the production Astro codebase.
More projects and build logs at jidonglab.com