Build · founder · 8 min read
35 Security Holes in One Month: Why Vibe-Coded Apps Are Getting Riskier in 2026
35 new CVEs in March 2026 were traced to AI-generated code. Here's what happened and what founders need to do about it.
In March 2026, researchers logged 35 new CVEs — Common Vulnerabilities and Exposures — traced directly to AI-generated code. Not discovered in AI-generated code. Caused by it.
This is no longer an abstract risk. Georgia Tech’s Software Security Lab has been running a dedicated “Vibe Security Radar” since May 2025, tracking CVEs that originate in AI-written codebases. They’ve seen a steady acceleration. March 2026 was the worst month on record.
This piece is not about whether you should stop using AI coding tools. You shouldn’t, and the security risk doesn’t change that. It’s about understanding why this is happening, what patterns the vulnerabilities follow, and what a non-technical founder can actually do to reduce exposure without hiring a security engineer.
Why March was particularly bad
Three factors converged.
First, the volume of AI-generated code in production has exploded. Veracode’s 2026 State of Software Security report found that approximately 45% of all new code entering production was AI-assisted or AI-generated, up from around 12% in 2024. More AI code in production means more surface area for AI-specific failure patterns to show up as real vulnerabilities.
Second, the Moltbook incident put a spotlight on the category. Moltbook was an entirely vibe-coded social network that made international security news in February 2026 when a researcher found that user session tokens were being exposed in API responses — to every other user. The codebase was estimated to be 95% AI-generated. The researcher noted the vulnerability was exactly the kind that human code review catches as a matter of habit: an object was serialized with more fields than intended. No AI caught it. No test caught it.
Third, security researchers are now specifically looking for AI-generated code in the wild and testing it. This means more vulnerabilities are being found and disclosed. The CVE count is partially a function of scrutiny, not just of code quality degrading.
The patterns that keep appearing
Across the 35 March CVEs and the broader body of Vibe Security Radar data, four failure patterns dominate.
Object over-serialization
This is the Moltbook pattern. AI generates an API endpoint that returns a user object. The object has a password_hash field, or a session_token field, or a stripe_customer_id. The AI serializes the whole object and returns it in the JSON response because that’s what “returns user data” means to a model trained on code that works, not code that’s safe.
Human developers catch this through code review and habit — you look at what you’re returning and ask “should I be returning this?” AI doesn’t ask that question.
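The structural fix is to opt fields in rather than out. Below is a minimal sketch of that pattern; the `UserRecord` shape, field names, and `toPublicUser` mapper are illustrative, not from any specific framework:

```typescript
// Hypothetical user record with internal fields; names are illustrative.
interface UserRecord {
  id: string;
  email: string;
  displayName: string;
  passwordHash: string;      // must never leave the server
  sessionToken: string;      // must never leave the server
  stripeCustomerId: string;  // internal billing reference
}

// A public DTO type plus an explicit mapper: fields are opted IN,
// so adding a column later cannot silently leak it in a response.
interface PublicUser {
  id: string;
  displayName: string;
}

function toPublicUser(u: UserRecord): PublicUser {
  return { id: u.id, displayName: u.displayName };
}

const user: UserRecord = {
  id: "u_1",
  email: "ada@example.com",
  displayName: "Ada",
  passwordHash: "not-a-real-hash",
  sessionToken: "not-a-real-token",
  stripeCustomerId: "cus_123",
};

// The serialized response contains only the allowlisted fields.
console.log(JSON.stringify(toPublicUser(user)));
// {"id":"u_1","displayName":"Ada"}
```

The point of the dedicated `PublicUser` type is that the compiler, not reviewer vigilance, enforces what can appear in the response.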
Authentication logic that looks correct but isn’t
A surprisingly common CVE pattern: authentication middleware that runs correctly under normal circumstances but fails at the edges. An empty string passes validation. A null value bypasses a check. A JWT is verified for signature but not for expiry.
These bugs are hard to spot because the code reads correctly. The logic is almost right. The edge case isn’t tested because the prompt didn’t specify it, and the AI didn’t model the threat.
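Here is a condensed sketch of the failure mode, using a made-up `Session` shape rather than a real JWT library. The buggy version checks that a user id field exists but never consults expiry, and an empty string sails through:

```typescript
// Illustrative session object; field names are assumptions.
interface Session {
  userId: string | null;
  expiresAt: number; // unix epoch milliseconds
}

// The kind of check the article describes: reads correctly, but ""
// is still a string, and expiry is never consulted.
function isAuthenticatedBuggy(s: Session): boolean {
  return typeof s.userId === "string";
}

// Hardened version: reject empty/null ids AND expired sessions.
function isAuthenticated(s: Session, now = Date.now()): boolean {
  return typeof s.userId === "string"
    && s.userId.length > 0
    && s.expiresAt > now;
}

const expired: Session = { userId: "u_1", expiresAt: Date.now() - 1000 };
const anonymous: Session = { userId: "", expiresAt: Date.now() + 60_000 };

console.log(isAuthenticatedBuggy(expired)); // true — this is the bug
console.log(isAuthenticated(expired));      // false
console.log(isAuthenticated(anonymous));    // false
```

Note that both versions pass the happy-path test a prompt would naturally generate; only the edge-case inputs separate them.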
Dependency selection from stale training data
AI tools reach for libraries based on training data that may be months or years old. A package that was widely used in 2023 and appeared frequently in training data may have known CVEs disclosed in 2025. The AI doesn’t know. It recommends what it’s seen used, and what it’s seen used may be unsafe now.
Missing rate limiting and resource controls
AI-generated API endpoints routinely omit rate limiting, query limits, and file size restrictions. These aren’t the kind of security bugs that get disclosed as CVEs immediately — they show up as denial-of-service vectors or as financial attacks (for example, repeatedly hitting an endpoint that runs unbounded database queries). The AI wrote code that works correctly for a single user in a test environment. At production scale, under malicious traffic, it breaks in ways the AI never modeled.
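One cheap defense against the unbounded-query variant is to clamp any client-supplied pagination value before it reaches the database. A minimal sketch, with illustrative defaults and limits:

```typescript
// Clamp a client-supplied page size so a single request can't ask
// the database for an unbounded result set. Limits are illustrative.
const MAX_PAGE_SIZE = 100;
const DEFAULT_PAGE_SIZE = 25;

function clampPageSize(requested: unknown): number {
  const n = Number(requested);
  // Garbage, negative, or missing input falls back to a sane default.
  if (!Number.isFinite(n) || n < 1) return DEFAULT_PAGE_SIZE;
  return Math.min(Math.floor(n), MAX_PAGE_SIZE);
}

console.log(clampPageSize("25"));      // 25
console.log(clampPageSize(1_000_000)); // 100 — capped
console.log(clampPageSize("-5"));      // 25 — default
```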
The 1.7x statistic you should know
Veracode’s analysis of production codebases found that pull requests primarily authored by AI tools introduced issues at 1.7x the rate of human-authored PRs. The categories most overrepresented: cross-site scripting, insecure authentication, and broken object-level authorization (BOLA) — the technical category that includes the Moltbook pattern.
This doesn’t mean AI tools are bad. It means they have consistent blind spots, and those blind spots are different from where human developers tend to fail.
What founders can actually do
Most of this guidance assumes you have no security engineering background and limited budget. These are the interventions that deliver the highest risk reduction per hour spent.
Use a security-aware AI prompt from the start
Before you generate any code that touches user data, authentication, or payments, use a prompt that explicitly includes security constraints. Something like: “You are building [feature]. Before writing any code, list every data field that will be persisted or returned in API responses, and confirm whether each should be visible to the requesting user. Flag any input that is not validated before use. Flag any external package you recommend and note its current CVE status if you know it.”
This doesn’t guarantee safe code. It does force the AI to reason about security before generating, which tends to improve the output.
Audit what your APIs return
After building any feature that returns user data, manually inspect the API response. Not the code — the actual JSON your endpoint returns. Look at every field. Ask: should this be here? Is there anything in this response that a user shouldn’t see?
This takes 20 minutes per endpoint and catches the over-serialization pattern almost every time.
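If you want a head start on the manual read-through, a small script can flag keys whose names look like they shouldn’t be client-visible. This is a heuristic sketch, the key list is a starting point rather than anything exhaustive, and the `response` object here is a made-up example:

```typescript
// Flag any key in a saved API response whose name suggests it
// shouldn't leave the server. The pattern list is a heuristic.
const SUSPICIOUS = /hash|token|secret|password|api_?key|customer_?id/i;

function flagSensitiveKeys(obj: unknown, path = ""): string[] {
  if (obj === null || typeof obj !== "object") return [];
  return Object.entries(obj as Record<string, unknown>).flatMap(
    ([key, value]) => {
      const here = path ? `${path}.${key}` : key;
      const hits = SUSPICIOUS.test(key) ? [here] : [];
      return hits.concat(flagSensitiveKeys(value, here));
    },
  );
}

// Example response body, as you might paste it from your endpoint.
const response = {
  id: "u_1",
  displayName: "Ada",
  session_token: "abc",
  billing: { stripe_customer_id: "cus_123" },
};

console.log(flagSensitiveKeys(response));
// flags: session_token, billing.stripe_customer_id
```

A flagged key isn’t automatically a vulnerability, but every hit deserves the “should this be here?” question from the paragraph above.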
Run your dependencies through a vulnerability scanner
Before your first production deploy, run npm audit (or your package manager’s equivalent) and review the output. If you’re using a tool like Snyk or Socket.dev, run the free tier scan on your repository. These tools catch the “AI recommended a package with known CVEs” pattern automatically.
Add rate limiting before you go live
If you’re using an API framework, adding rate limiting is typically a middleware configuration change. Lovable, Bolt, and most full-stack AI builders either include it or have a standard way to add it. If yours doesn’t, ask the AI to add rate limiting to every public endpoint before launch.
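For context on what you’re asking the AI to add, here is a minimal fixed-window rate limiter. It is a single-process, in-memory sketch with illustrative limits; real deployments would use your framework’s middleware or a shared store like Redis so limits survive restarts and apply across instances:

```typescript
// Minimal fixed-window rate limiter (in-memory, single process).
class RateLimiter {
  private counts = new Map<string, { windowStart: number; count: number }>();

  constructor(
    private limit: number,    // max requests per window
    private windowMs: number, // window length in milliseconds
  ) {}

  allow(clientId: string, now = Date.now()): boolean {
    const entry = this.counts.get(clientId);
    // No entry yet, or the window has elapsed: start a fresh window.
    if (!entry || now - entry.windowStart >= this.windowMs) {
      this.counts.set(clientId, { windowStart: now, count: 1 });
      return true;
    }
    entry.count += 1;
    return entry.count <= this.limit;
  }
}

const limiter = new RateLimiter(3, 60_000); // 3 requests per minute
const t0 = Date.now();
console.log(limiter.allow("1.2.3.4", t0)); // true
console.log(limiter.allow("1.2.3.4", t0)); // true
console.log(limiter.allow("1.2.3.4", t0)); // true
console.log(limiter.allow("1.2.3.4", t0)); // false — over the limit
```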
Don’t skip authentication edge-case testing
For any authentication flow — login, signup, password reset, session management — manually test the edge cases: empty strings, null values, expired tokens used after logout, multiple simultaneous sessions. This is not something you need a security engineer for. It’s a 30-minute manual test session per feature.
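You can even turn that 30-minute session into a reusable test table. The `validateLogin` function below is a hypothetical stand-in for your real login handler; the case list is the part worth copying:

```typescript
// Hypothetical login check standing in for your real handler.
function validateLogin(
  email: string | null,
  password: string | null,
): boolean {
  return typeof email === "string" && email.trim().length > 0
    && typeof password === "string" && password.length >= 8;
}

// The edge cases from the checklist above, as a table:
// [email, password, expected result]
const cases: Array<[string | null, string | null, boolean]> = [
  ["user@example.com", "correct-horse", true],
  ["", "correct-horse", false],      // empty email
  [null, "correct-horse", false],    // null email
  ["   ", "correct-horse", false],   // whitespace-only email
  ["user@example.com", "", false],   // empty password
];

for (const [email, password, expected] of cases) {
  const ok = validateLogin(email, password) === expected;
  console.log(ok ? "pass" : "FAIL", JSON.stringify([email, password]));
}
```

Rerun the table after every AI-generated change to the auth flow; regressions in edge-case handling are exactly what these tools reintroduce.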
The realistic risk level
Being direct: if you’ve vibe-coded an app and shipped it to production without any security review, there’s a meaningful chance it has at least one vulnerability in the patterns described above.
This is not a reason to panic. Most vibe-coded apps are not high-value targets, and exploiting application vulnerabilities usually requires someone to target you specifically. The Moltbook incident was newsworthy partly because Moltbook attracted attention by being an entirely AI-coded social network, which made it an interesting research target.
But as more AI-generated apps process real payments, store real user data, and grow real user bases, the economic incentive to find and exploit these patterns grows. The March CVE data is a preview of a trend, not a summary of it.
The window to build these habits before you need them is now.
Further reading
- Our vibe coding security guide covers the foundational security checklist every AI-built app should go through before launch.
- Georgia Tech’s SSLab Vibe Security Radar publishes monthly summaries of AI-code CVEs.
- Veracode’s 2026 State of Software Security report has the underlying data on AI vs human PR issue rates.