
State of AI — Week of April 13, 2026

Claude Mythos is too dangerous to release, Cursor 3 reshapes the IDE, and vibe coding lands in the Harvard Gazette. Here's what actually matters this week.

The pace of meaningful news in this space is picking up. A few items from the past two weeks deserve more than a tweet thread.

The biggest story: Anthropic won’t release its most powerful model

On April 7, Anthropic announced Claude Mythos — and in the same breath announced it wouldn’t be releasing it to the public.

During internal testing, Mythos autonomously discovered and exploited zero-day vulnerabilities in every major operating system and web browser. Not in theory. Not in a benchmark. In practice. Anthropic concluded that releasing a model with those capabilities to anyone who wants it would not be responsible.

Instead, they launched Project Glasswing — a restricted consortium that gives AWS, Apple, Cisco, CrowdStrike, Google, Microsoft, and Nvidia access to Mythos Preview for defensive security work only. The companies share their findings with the broader industry rather than using the model commercially.

Why does this matter to you as a non-technical founder? A few reasons.

First, it marks the first time a frontier AI lab has explicitly withheld a model due to offensive capability risk. Previous “safety” decisions were mostly about harmful content. This is about genuine autonomous capability — a model that can break into systems by itself. That’s a different category.

Second, it’s a preview of the governance questions that are coming. The tools you’re using to build products run on AI models. As those models get more capable, the question of who controls access to them — and under what conditions — will shape the environment you’re building in. Mythos is a leading indicator.

Third, it reframes the “AI security” conversation for founders. The common concern is whether your AI-built app is secure enough. The Mythos story is a reminder that the attack surface is expanding from the other direction too. Not just your code, but the infrastructure your code runs on. If you’re not thinking about security as a product requirement, this is a prompt to start.

The story is still developing. Anthropic says it eventually wants to deploy Mythos-class models at scale with proper safeguards in place, but has given no timeline.

Cursor 3 launched — and it’s a bigger shift than the version number suggests

Cursor shipped version 3.0 on April 2. If you have a technical co-founder or a developer on your team, this is what they’re switching to (or already using).

The change isn’t the features — it’s the underlying bet about where software development is going. Cursor 3 is built on the assumption that most code will be written by AI agents, and that the developer’s job is increasingly to orchestrate agents rather than write lines of code. The new interface is organized around that assumption.

The headline addition is the Agents Window: a standalone hub where you spin up multiple AI agents, each working on a different task simultaneously — one refactoring a module, one writing tests, one updating documentation. You monitor their status and output in real time. You can trigger these agents from your phone, from Slack, from a GitHub comment, or from a Linear ticket. The agent keeps running whether your laptop is open or not.

How to think about this as a founder: the decision isn’t whether you should use Cursor. It’s whether the developers you’re working with are running modern tools or outdated ones. A developer using Cursor 3 is working in a fundamentally different paradigm than one still treating AI as an autocomplete feature. If you’re hiring or evaluating contractors, asking about their tooling tells you something real about how they work.

Cursor also shipped BugBot improvements this week: BugBot now supports MCP, meaning it can pull context from tools like Linear, Jira, or Notion when reviewing code. It also gained real-time self-improvement — it learns from reviewer feedback on pull requests and promotes patterns into standing review rules automatically. For teams tracking issues in Linear or Jira, this is the beginning of a tight loop between project management and code quality.
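For readers curious what “supports MCP” means in practice: MCP (Model Context Protocol) servers are declared in a small JSON config file — in Cursor today, that file is `.cursor/mcp.json`. A minimal sketch connecting Linear’s hosted MCP endpoint might look like this (whether BugBot reads this same file, and the exact keys it honors, are assumptions on my part, not something Cursor has documented):

```json
{
  "mcpServers": {
    "linear": {
      "url": "https://mcp.linear.app/sse"
    }
  }
}
```

Once a server like this is configured, an MCP-capable agent can pull live context — issues, tickets, docs — instead of reasoning from the code alone, which is what makes the project-management-to-code-review loop described above possible.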

Vibe coding made the mainstream press — and the Harvard Gazette

Two publications that don’t normally cover developer tools ran vibe coding features in the first week of April.

Bloomberg’s FOMO newsletter (April 5) framed it as a trend fueling anxiety among non-technical professionals who feel like they’re being left behind. The audience is Bloomberg Weekend readers — finance, business, policy — not developers. The framing is “you should probably understand what this is.”

The Harvard Gazette (April 1) ran a piece featuring a Harvard Graduate School of Education professor who taught a six-week course on vibe coding. Her argument: vibe coding is a window into something deeper about how humans and AI are learning to work together, not just a productivity trick for developers.

What this tells you: the cultural conversation about vibe coding has crossed from the developer world into the general professional world. That crossover changes the dynamics. The people evaluating you, hiring you, funding you, and buying from you are now encountering this concept. Some of them will want to understand it. Some will have opinions. A few will try it themselves.

If you’ve been building with these tools, you’re ahead of most of the people you deal with professionally. That’s worth being specific about in conversations — not as a flex, but as genuine expertise that has value.

The stack is converging — but not the way anyone predicted

Six months ago, most people assumed the AI coding space would consolidate: one tool would win, the others would die or get acquired. That hasn’t happened. What’s happening instead is composability.

Cursor 3, Claude Code, and OpenAI Codex are increasingly being used together as a layered stack. Cursor for daily IDE work. Claude Code for complex, multi-step agentic tasks from the terminal. Codex for cloud-based autonomous agents running in the background. OpenAI even published an official plugin that runs inside Claude Code, which is about as clear a signal of interoperability as you’ll get.

All three tools converged at $20/month for individual plans. The “standard” senior developer stack in 2026 is Cursor + Claude Code at roughly $40/month combined — which covers almost every scenario.

For non-technical founders, the practical implication is that the AI coding tool ecosystem is not a winner-take-all market. The tools specialize, and people who know the space use multiple tools depending on the task. That’s fine. It means no single tool becomes a dependency risk, and competition between them keeps quality improving.

What to watch next week

The Claude Mythos story will develop — Project Glasswing is new, and the consortium partners haven’t said much yet. Specifically worth watching: whether Anthropic publishes any technical details about what Mythos can do, and what the governance structure for Glasswing actually looks like.

Lovable is also quietly becoming a more interesting company. Their recent Aikido Security partnership (built-in pentesting for Lovable-built apps) is the clearest signal yet that they’re taking security seriously, not just adding a checkbox feature. Whether that moves the needle on enterprise adoption is the question.
