Build · founder · 8 min read

Vibe Coding Security: What AI Gets Wrong (and How to Fix It)

45% of AI-generated code contains critical vulnerabilities. Here's what founders and PMs need to know before shipping AI-written code to production.

Published March 27, 2026 · security · best practices · AI code · vibe coding

A Veracode study made the rounds in early 2026 with a number that should give every non-technical founder pause: 45% of AI-generated code contains at least one critical vulnerability. Not a minor issue. Critical — as in, the kind that gets your users’ data exfiltrated or your app used as a spam relay.

This isn’t a reason to stop using AI coding tools. But it is a reason to stop treating “the AI wrote it” as a security review.

Here’s what you actually need to know.

Why AI-Generated Code Is Insecure by Default

AI coding assistants are trained to produce code that works, not code that’s safe. These are genuinely different objectives. A function that queries a database and returns results “works” whether or not it’s vulnerable to SQL injection. The tests pass. The demo looks good. The vulnerability ships.

The problems cluster in predictable places:

Input validation — AI tends to trust data it’s handed. It will write a form handler that accepts whatever the user types, passes it to a database query, and calls it done. In a real codebase, you sanitize everything at the boundary.
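To make that boundary concrete, here's a minimal sketch using Python's built-in sqlite3 module. The table and column names are hypothetical; the point is the difference between interpolating user input into SQL and passing it as a parameter:

```python
import sqlite3

def find_user(conn: sqlite3.Connection, email: str):
    # The pattern AI often produces, interpolating input straight into SQL:
    #   conn.execute(f"SELECT * FROM users WHERE email = '{email}'")
    # A parameterized query instead treats the value as data, not SQL,
    # so classic injection strings stop being executable.
    return conn.execute(
        "SELECT id, email FROM users WHERE email = ?", (email,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES ('alice@example.com')")

# An injection attempt matches nothing instead of dumping every row.
print(find_user(conn, "' OR '1'='1"))        # []
print(find_user(conn, "alice@example.com"))  # [(1, 'alice@example.com')]
```

The same parameter-binding idea applies to every mainstream database driver, not just sqlite3.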

Authentication and session handling — Token generation, session expiry, logout flows. These are fiddly to get right and AI often gets them wrong in subtle ways — short token lifetimes that aren’t enforced, logout that doesn’t invalidate server-side sessions, JWT implementations with weak algorithms.
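As one illustration, here's a stdlib-only sketch of a server-side session store that gets two of those details right: expiry is checked on every lookup, and logout actually deletes the server-side session. The store, TTL, and function names are all hypothetical:

```python
import secrets
import time

# Hypothetical in-memory store: token -> absolute expiry timestamp.
SESSIONS: dict[str, float] = {}
TTL_SECONDS = 15 * 60

def create_session() -> str:
    token = secrets.token_urlsafe(32)  # cryptographically unguessable
    SESSIONS[token] = time.monotonic() + TTL_SECONDS
    return token

def is_valid(token: str) -> bool:
    # Enforce the lifetime on every request, not just at creation time.
    expiry = SESSIONS.get(token)
    if expiry is None or time.monotonic() > expiry:
        SESSIONS.pop(token, None)  # purge expired tokens
        return False
    return True

def logout(token: str) -> None:
    # Invalidate server-side, not just by clearing the client's cookie.
    SESSIONS.pop(token, None)
```

In production you'd back this with Redis or a database rather than a process-local dict, but the two checks are the part AI-generated code tends to skip.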

Dependency choices — When an AI reaches for a library to solve a problem, it’s drawing on training data that may be years old. It will happily suggest packages with known CVEs because it doesn’t check the current vulnerability state of what it recommends.

Secrets management — Left to its own devices, AI will often hard-code API keys in environment-specific config files, or place them directly in source code. It’s doing what it was trained to do: make the code run. Keeping secrets out of git is a separate concern it doesn’t always surface.
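The safer pattern is small enough to sketch. The secret name here is a placeholder; the useful property is that the app fails loudly at startup when a credential is missing, instead of shipping a hard-coded one:

```python
import os

def get_secret(name: str) -> str:
    # The pattern AI often emits instead:
    #   API_KEY = "sk_live_..."   # hard-coded, and now in git forever
    # Reading from the environment keeps the value out of source control
    # and makes a missing credential an immediate, obvious error.
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing required secret: {name}")
    return value
```

Call `get_secret("STRIPE_API_KEY")` (or whatever your credential is named) once at startup so misconfiguration surfaces before any request is served.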

CORS and access control — Overly permissive CORS configs are everywhere in AI-generated backend code. Access-Control-Allow-Origin: * is the path of least resistance when the AI is trying to get a frontend and backend talking.
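A framework-agnostic sketch of the allowlist alternative; the origins are placeholders for your own domains:

```python
# Hypothetical allowlist: the frontends that may call this backend.
ALLOWED_ORIGINS = {"https://app.example.com", "https://staging.example.com"}

def cors_headers(request_origin: str) -> dict[str, str]:
    # Echo the origin back only if it's on the allowlist. Never use "*"
    # on endpoints that rely on cookies or auth headers.
    if request_origin in ALLOWED_ORIGINS:
        return {
            "Access-Control-Allow-Origin": request_origin,
            "Vary": "Origin",  # caches must not reuse this across origins
        }
    return {}  # no CORS headers: browsers block the cross-origin read
```

Most frameworks (Flask-CORS, Express's cors middleware, etc.) accept an origin list directly, so in practice this is a one-line config change rather than hand-rolled header logic.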

The Four Things That Actually Matter

You don’t need to become a security engineer. You need four habits.

1. Static analysis on every commit

Tools like Semgrep, Snyk, and CodeQL (free for public repos via GitHub) will catch the most common classes of vulnerabilities automatically. Checkmarx is the enterprise-grade option if you’re handling sensitive data at scale.

The practical move: connect Snyk or GitHub’s built-in code scanning to your repo. It takes about 20 minutes to set up. After that, it runs on every push and flags issues before they ever reach production.
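For CodeQL specifically, the setup is a single workflow file. This is one plausible shape, assuming a GitHub Actions setup; the "Set up code scanning" button in your repo's Security settings generates the canonical version for your stack:

```yaml
# .github/workflows/codeql.yml — sketch; adjust languages to your stack
name: CodeQL
on:
  push:
    branches: [main]
  pull_request:
  schedule:
    - cron: "0 6 * * 1"   # weekly rescan picks up newly added queries
jobs:
  analyze:
    runs-on: ubuntu-latest
    permissions:
      security-events: write
    steps:
      - uses: actions/checkout@v4
      - uses: github/codeql-action/init@v3
        with:
          languages: javascript-typescript
      - uses: github/codeql-action/analyze@v3
```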

Don’t wait until a security audit to learn you’ve been shipping SQL injection vulnerabilities for six months.

2. Dependency scanning

Your AI-written app almost certainly uses 50–100 third-party packages. Any one of them could have a disclosed vulnerability. npm audit (for Node projects) and pip-audit (for Python) are built-in tools that check your dependency tree against known CVE databases.

Run them. Schedule them in CI. When they surface something, update the package.
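One way to make the CI gate concrete is to parse pip-audit's JSON report and fail the build on any finding. The JSON shape assumed here (a "dependencies" list whose entries carry a "vulns" list) should be verified against the output of pip-audit -f json on your own project:

```python
import json

def count_vulns(audit_json: str) -> int:
    # Sum the known vulnerabilities across all scanned dependencies.
    # Assumed report shape; check your pip-audit version's JSON output.
    report = json.loads(audit_json)
    return sum(len(dep.get("vulns", []))
               for dep in report.get("dependencies", []))

# In CI: run `pip-audit -f json > audit.json`, then fail the job
# whenever count_vulns(open("audit.json").read()) > 0.
```

npm projects get the same effect for free: `npm audit --audit-level=high` exits nonzero when it finds something, which fails the CI step on its own.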

The alternative — which is most people’s default — is finding out about a critical vulnerability in a dependency after someone else does.

3. Treat AI output as untrusted code

This is the mindset shift. When a contractor writes code for your company, you review it before it ships. AI is a contractor who doesn’t know your threat model, doesn’t know what data you’re handling, and has never read your terms of service.

Before any AI-written code touches production:

  • Does it validate all inputs?
  • Does it have any hardcoded credentials or API keys?
  • Does it handle authentication in a way that matches how the rest of your app works?
  • If it creates an API endpoint, who can call it?

You don’t need to audit every line. You need to ask these questions and verify the answers.

4. Use environment variables, always

Every AI-generated project should have a .env file for secrets and a .gitignore entry that ensures .env is never committed. This is table stakes. Check your repo — right now — and search for any string that looks like an API key or database password that isn’t in a .env file.
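A rough sketch of that search in Python. The regexes are illustrative heuristics, not a real scanner; purpose-built tools like gitleaks or trufflehog do this job properly:

```python
import re
from pathlib import Path

# Heuristic patterns for "looks like a credential assignment".
PATTERNS = [
    re.compile(r"""(?i)(api[_-]?key|secret|password|token)\s*[:=]\s*["'][^"']{8,}["']"""),
    re.compile(r"sk_live_[0-9a-zA-Z]{16,}"),  # Stripe-style live key prefix
]

def scan_file(path: Path) -> list[str]:
    # Return "path:line: snippet" for every line matching a pattern.
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return []
    return [
        f"{path}:{i}: {line.strip()[:80]}"
        for i, line in enumerate(text.splitlines(), 1)
        if any(p.search(line) for p in PATTERNS)
    ]
```

Run it over everything tracked by git (skipping .env itself) and treat any hit as a credential to rotate.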

If you find one: rotate the credential immediately, then clean the git history. Not just delete the file — the history. GitHub’s documentation on removing sensitive data covers this.

The Tools Worth Knowing

Snyk — Developer-friendly vulnerability scanner. Free tier is generous. Integrates with GitHub, GitLab, and most CI pipelines. The best starting point for most founders. Covers both static code analysis and dependency scanning.

Semgrep — Open-source static analysis. More customizable than Snyk for teams that want to write their own rules. The community ruleset covers OWASP Top 10 patterns well.

Checkmarx — Enterprise-grade SAST (static application security testing). Referenced in most serious security coverage because it’s been the industry standard for years. Too heavy for a solo founder, but worth knowing when you’re hiring your first security-focused engineer.

GitHub Advanced Security / CodeQL — Free for public repos, paid for private. The static analysis is solid and it’s zero-friction if you’re already on GitHub. Turn it on in your repo settings.

1Password Secrets Automation — If your app handles secrets programmatically (not just at dev time), 1Password’s secrets management product keeps credentials out of your codebase entirely. Doppler is a simpler alternative for smaller teams.

The Honest Assessment

Vibe coding tools have genuinely lowered the barrier to building software. That’s real and it matters. But lower barrier to building means lower barrier to building insecure things at scale.

The founders who are going to get hurt aren’t the ones who never use AI tools — those people are building slowly and carefully regardless. The ones at risk are the ones who use AI to ship fast, assume the tool handled the security, and find out otherwise when a user’s data is compromised.

The gap between “AI-written code that works” and “AI-written code that’s safe to ship” is maybe three hours of setup — static analysis, dependency scanning, environment variable discipline. Do it once, make it automatic, and stop worrying about it.

The Veracode number isn’t an indictment of AI coding. It’s a benchmark. Know where you’re starting from and build the systems to do better.