TL;DR: As of April 30, the anthropic-beta: context-1m-2025-08-07 header stopped working on claude-sonnet-4-5 and claude-sonnet-4. Any request to those models exceeding 200k tokens is now rejected. The migration is two steps: remove the beta header, and change the model identifier to claude-sonnet-4-6 (or claude-opus-4-6 for reasoning-heavy work). Sonnet 4.6 includes 1M context at standard pricing ($3 input / $15 output per MTok) — no premium, no beta header.
What to do: Remove anthropic-beta: context-1m-2025-08-07 from all requests and update model identifiers from claude-sonnet-4-5/claude-sonnet-4 to claude-sonnet-4-6; if you haven't done this yet, long-context requests are already failing.
Why trust it: Official Anthropic platform release notes; confirmed by multiple third-party migration guides and the Anthropic Bedrock announcement listing Sonnet 4.6 as the replacement.
Skeptic check: —
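The two migration steps amount to a request rewrite: drop the retired beta header and swap the model identifier. A minimal sketch in Python — the header value and model names come from the announcement; the migrate_request helper itself is hypothetical, not an SDK function:

```python
# Hypothetical helper sketching the two migration steps: remove the retired
# beta header and map old model identifiers to the replacement. The header
# value and model names are from the release notes; the rest is illustrative.

RETIRED_BETA = "context-1m-2025-08-07"
MODEL_MAP = {
    "claude-sonnet-4-5": "claude-sonnet-4-6",
    "claude-sonnet-4": "claude-sonnet-4-6",
}

def migrate_request(headers: dict, model: str) -> tuple[dict, str]:
    """Return (headers, model) with the beta header dropped and the model updated."""
    headers = {
        k: v for k, v in headers.items()
        if not (k.lower() == "anthropic-beta" and RETIRED_BETA in v)
    }
    return headers, MODEL_MAP.get(model, model)

new_headers, new_model = migrate_request(
    {"anthropic-beta": "context-1m-2025-08-07", "content-type": "application/json"},
    "claude-sonnet-4-5",
)
# new_headers no longer carries the beta flag; new_model is the 4.6 identifier.
```

Note the sketch drops the whole anthropic-beta header when it contains the retired flag; if you stack multiple beta values in one header, strip only the retired one instead.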
Claude Security, now in public beta, scans code for vulnerabilities
TL;DR: Claude Security is now in public beta for Claude Enterprise customers. It scans codebases for vulnerabilities using Opus 4.7 and proposes targeted fixes. Access is available directly on the Claude Platform or through integration partners; Team and Max plan access is described as coming soon.
What to do: If you're a Claude Enterprise customer, try Claude Security on a security-sensitive component of your codebase; if you're not on Enterprise, watch the announcement page for Team/Max rollout timing.
Why trust it: Primary-source product announcement from Anthropic on April 30; independently covered by SiliconANGLE, SecurityWeek, and The New Stack on the same date, each confirming the launch details.
Skeptic check: No independent benchmarks on false-positive rates, patch quality, or comparison with established tools (Snyk, Semgrep) have been published yet; all capability claims come from Anthropic alone.
LLM 0.32 alpha ships typed response model
Source: Simon Willison (simonwillison.net), April 29, 2026
TL;DR: The LLM command-line tool and Python library (simonw/llm) shipped the first alpha of its biggest internal change: the old text-prompt → text-response model is replaced by typed multi-part messages. Responses are now a stream of typed parts — text, tool calls, tool outputs, images, audio — rather than a single text blob. This unblocks server-side tool execution and multi-modal output from the same response stream. A same-day follow-up (0.32a1) fixed a bug where tool-calling conversations couldn't be correctly serialized and reinflated from SQLite.
What to do: If you maintain an LLM plugin, read the 0.32 alpha release notes and update your plugin's response handling before the stable release ships; if you use LLM for tool-calling workflows with SQLite persistence, upgrade to 0.32a1 now to pick up the reinflation fix.
Why trust it: Author is the sole developer of the library; both alphas were published the same day (April 29) with detailed changelog notes. The alpha release strategy is explicitly designed to surface plugin compatibility issues in real-world use before stable ships.
Skeptic check: Alpha — the response model API may change before 0.32 stable lands if testing reveals design issues.
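Conceptually, the new shape is a stream of typed parts rather than a single string. A toy sketch of that idea — the part kinds match the post's description, but the class names and collect_text helper are illustrative, not LLM's actual API:

```python
# Illustrative model of a typed multi-part response stream, as described in
# the 0.32 alpha notes: a response is a sequence of typed parts (text, tool
# calls, tool outputs, ...) instead of one text blob. Not the library's real API.
from dataclasses import dataclass

@dataclass
class TextPart:
    text: str

@dataclass
class ToolCallPart:
    name: str
    arguments: dict

@dataclass
class ToolResultPart:
    name: str
    output: str

def collect_text(parts) -> str:
    """Recover old-style single-string behavior: join only the text parts."""
    return "".join(p.text for p in parts if isinstance(p, TextPart))

response = [
    TextPart("Looking that up. "),
    ToolCallPart("search", {"q": "llm 0.32 alpha"}),
    ToolResultPart("search", "release notes found"),
    TextPart("The alpha shipped on April 29."),
]
summary = collect_text(response)
```

Plugins that previously assumed one string now walk the parts, which is why the alpha exists: to surface exactly this kind of handling change before stable ships.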
Claude Code 2.1.123 fixes MCP server stalls and token-count hint
Source: Anthropic, Claude Code changelog, April 29, 2026
TL;DR: Version 2.1.123 ships three bug fixes. First: plugin-defined MCP servers that shared a URL with an unauthenticated claude.ai connector were getting stuck in "connecting" state on session start — fixed. Second: the idle-return "/clear to save X tokens" hint was showing cumulative session token totals instead of the current context window size, so the number could be badly overstated in long sessions — fixed. Third: OAuth authentication was looping on 401 responses when CLAUDE_CODE_DISABLE_EXPERIMENTAL_BETAS=1 was set — fixed.
What to do: Update Claude Code (npm install -g @anthropic-ai/claude-code or equivalent); the token-count fix in particular means the "/clear" hint now shows an accurate current-context size, so your context-management decisions are based on real numbers.
Why trust it: Official release notes from Anthropic's Claude Code GitHub repository.
Skeptic check: —
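The token-count fix is easiest to see with numbers: in a long session the cumulative total keeps growing even as old turns fall out of the window. A toy illustration of the two quantities (my own sketch of the described bug, not Claude Code's actual accounting; all numbers are made up):

```python
# Toy illustration of the 2.1.123 hint fix: the buggy "/clear to save X
# tokens" hint reported the cumulative session total, which only ever grows;
# the fix reports the current context window size, which is what /clear
# would actually free. Numbers are invented for the example.

turns = [40_000, 35_000, 30_000, 25_000]  # tokens consumed per turn
kept_in_context = 45_000                  # what actually sits in the window now

cumulative_session_total = sum(turns)     # the overstated old hint value
current_context_size = kept_in_context    # the corrected hint value

overstatement = cumulative_session_total - current_context_size
```

In this example the old hint would have claimed nearly three times the real savings, which is why context-management decisions made from it were unreliable.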
Sources used today: Anthropic platform release notes, Claude Blog (claude.com/blog), Simon Willison (simonwillison.net), Anthropic Claude Code GitHub releases.
Skipped: 1 hype take; 1 policy-only post (Zig anti-AI contribution policy, April 30 — in window but doesn't change how to configure or prompt a tool); 3 leaderboard announcements without methodology. Out-of-window items noted for reference: Claude Opus 4.7 (April 16), Memory for Managed Agents beta (April 23), Cursor 3 (April 2), MCP 2026 roadmap (March 9).
Coverage gaps: Direct fetch blocked (403) for all r.jina.ai-prefixed and bare-URL fetches against simonwillison.net, platform.claude.com, code.claude.com, github.com, and arxiv.org. All coverage relies on web-search snippets. No in-window posts confirmed from Latent Space, Hamel Husain, Chip Huyen, Eugene Yan, or Interconnects (Nathan Lambert). No Hacker News items met the 200-point threshold in the window (Zig anti-AI HN post had ~120 points).
Inaccessible links:
simonwillison.net/2026/Apr/29/llm/ — 403 — wanted the full release notes for the 0.32a0 response-model API changes and migration guidance for plugin authors.