When Things Break: A Debugging Guide for Non-Coders
What to do when your AI-built app stops working — a practical guide to diagnosing and fixing the most common vibe coding problems.
Something was working. Now it isn’t. You don’t know why. You don’t know how to read the error. And the AI that built the thing in the first place seems to be making it worse every time you ask it to fix something.
This is the vibe coding debugging experience. It happens to everyone. It’s not a sign that you chose the wrong tool or that software development isn’t for you. It’s just part of the process, and there are reliable ways to get through it.
The Golden Rule of Debugging
Before anything else: do not panic-prompt.
The worst thing you can do when something breaks is immediately ask the AI to “fix it” without giving it any information. The AI will guess. It will often guess wrong. It will change things that weren’t broken in the process. Now you have two problems.
The golden rule is: describe what happened, what you expected, and what you actually see. Always include the error message if there is one. The more specific your description, the better your chance of a fast fix.
Bad: “The login page is broken, please fix it.”
Good: “After I added the password reset feature, the login form stopped working. When I click ‘Sign in’, nothing happens — no error, no redirect. Before the password reset change it was working. The error in the browser console says: ‘TypeError: Cannot read properties of undefined (reading 'submit')’.”
That second prompt gives the AI three critical pieces of information: what changed before the break, what the symptom is, and the actual error message. It can work with that.
The 5 Most Common Failures
1. The AI Broke Something That Was Working
This is the most common problem. You asked for a new feature and the AI’s implementation touched a shared piece of code, breaking something unrelated.
Fix: In Lovable and similar tools, you can restore previous versions. Do this first — restore to the last working state before your most recent prompt. Then add the feature again with a more constrained prompt: “Add the export button to the top-right of the dashboard. Do not change any existing functionality. Only add the new button and its click handler.”
The phrase “do not change existing functionality” is your friend. Use it.
2. The UI Looks Wrong
The layout broke, elements are overlapping, or the mobile view is a mess. This is usually a CSS conflict — two sets of styling instructions fighting each other.
Fix: Take a screenshot and attach it to your prompt (most tools support this). Say: “The layout is broken as shown in the screenshot. The sidebar should be on the left at 240px wide, and the main content should fill the remaining space. On mobile, the sidebar should collapse to a hamburger menu.” Describe the intended layout explicitly — don’t just say “fix the layout.”
3. Data Isn’t Saving
You fill out a form, submit it, and the data doesn’t appear. Or it appears, then disappears on refresh. This is usually a database connection issue or a missing “save to database” step in the AI’s implementation.
Fix: Check these in order:
- Refresh the page after submitting — does the data appear then? If yes, it’s a display bug, not a save bug.
- Log out and log back in — is the data there? If yes, it was saving but not re-fetching.
- Create a test record, then navigate to a different page and come back — is it still there?
Describe what you find: “I create a record, it shows in the list, but when I refresh the page it’s gone. The data is not persisting to the database.”
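You don't need to write any code to fix this, but seeing why the symptom happens can help you describe it. A minimal sketch (the names here are illustrative, not your app's actual code): if the AI only updates the in-memory list the page displays and never performs the save step, the record exists until the page reloads and then vanishes.

```javascript
// Hypothetical sketch of "shows in the list, gone on refresh".
// The in-memory array lives only as long as the page; the store persists.

const store = new Map(); // stands in for a real database

let records = []; // what the page displays

function addRecordDisplayOnly(record) {
  records.push(record); // updates the visible list...
  // ...but never writes to the store -- this is the bug
}

function addRecordWithSave(record) {
  records.push(record);
  store.set(record.id, record); // the missing "save to database" step
}

function simulateRefresh() {
  // A page reload throws away in-memory state and re-reads the store
  records = [...store.values()];
}

addRecordDisplayOnly({ id: 1, name: "Ada" });
addRecordWithSave({ id: 2, name: "Grace" });
simulateRefresh();

console.log(records.map((r) => r.name)); // only the saved record survives
```

If your symptom matches the first function — visible until refresh, then gone — that's exactly the detail to put in your prompt.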
4. Authentication Is Broken
Users can’t sign up, can’t log in, are being logged out unexpectedly, or are seeing data that belongs to other users. Auth bugs are the most serious because they affect user trust and data security.
Fix: Auth problems usually fall into two categories: the flow is broken (users can’t log in) or the permissions are wrong (users can see each other’s data). Test both explicitly:
For flow issues: try the sign-up and login in a fresh browser window in incognito mode. This eliminates cached session issues.
For permissions: create two test accounts with different email addresses. Log in as Account A, create a record. Log out, log in as Account B. Can Account B see Account A’s record? It should not. If it can, describe this precisely to the AI.
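Under the hood, the fix is usually a one-line rule: every query for records must be scoped to the logged-in user. A hypothetical sketch of that rule (field names like ownerId are assumptions, not your app's actual schema — and in a real app this filtering belongs in the database itself, e.g. via row-level security, not only in the page's code):

```javascript
// Hypothetical permission rule: records are visible only to their owner.

function visibleRecords(allRecords, currentUserId) {
  return allRecords.filter((record) => record.ownerId === currentUserId);
}

const allRecords = [
  { id: 1, ownerId: "account-a", note: "A's private note" },
  { id: 2, ownerId: "account-b", note: "B's private note" },
];

// Logged in as Account B: Account A's record must not appear
console.log(visibleRecords(allRecords, "account-b"));
```

If your two-account test fails, telling the AI “records are not filtered by the logged-in user” points it straight at this rule.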
5. Deployment Fails
The app works in the preview but fails when deployed to a real URL. This is often an environment variable issue — configuration that exists in your development environment but wasn’t carried over to production.
Fix: In Lovable and similar tools, environment variables (like your Supabase database credentials) need to be explicitly set in the deployment settings, not just in the local preview. Look for a “Settings” or “Environment Variables” section in your tool’s dashboard. The error message from a failed deployment will usually tell you which variable is missing.
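The check itself is simple enough that you can ask the AI to add it on startup. A sketch of the idea (the variable names below are examples; your project's will differ):

```javascript
// Hypothetical startup check: report which required settings are absent.

function missingEnvVars(required, env) {
  return required.filter((name) => !env[name]);
}

// Simulate a production environment where one credential was never set
const productionEnv = { SUPABASE_URL: "https://example.supabase.co" };
const required = ["SUPABASE_URL", "SUPABASE_ANON_KEY"];

const missing = missingEnvVars(required, productionEnv);
if (missing.length > 0) {
  console.log("Missing environment variables: " + missing.join(", "));
}
```

A check like this turns a vague deployment failure into a message that names the exact missing variable.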
How to Read an Error Without Knowing Code
Error messages feel like gibberish at first. They aren’t. They follow patterns, and you can extract useful information from them even without coding knowledge.
The error type (at the start): TypeError, ReferenceError, SyntaxError. A TypeError means something expected one type of data but got another. A ReferenceError means something tried to use a variable that doesn’t exist. A SyntaxError means the code has a typo or formatting issue. You don’t need to deeply understand these — just know they’re different categories.
The message (the plain English bit): “Cannot read properties of undefined” means the code tried to access something that doesn’t exist yet. “is not a function” means the code tried to call something as a function that isn’t one. “Failed to fetch” means a network request failed.
The file and line number (if shown): This tells the AI exactly where to look. Always include it.
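To make these categories concrete, here is the situation behind the login error from the example earlier: code reaching for a property on something that doesn't exist. This is an illustration to read, not something you need to write.

```javascript
// Reproducing the classic "Cannot read properties of undefined" TypeError.

const form = undefined; // the code expected a form element but got nothing

try {
  form.submit(); // reaching for .submit on undefined throws immediately
} catch (error) {
  console.log(error.name); // "TypeError" -- the category
  console.log(error.message); // the plain-English part of the message
}
```

The error's name is the category, and its message is the part worth reading as plain English — together they are most of what the AI needs.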
The single most useful thing you can do with an error: copy the entire error message — don’t paraphrase it — and paste it into your AI tool’s chat with the context of what you were doing when it appeared.
The Fix-Forward vs. Start-Over Decision
Sometimes the right call is to restore a previous version and start the feature fresh. Sometimes it’s to keep debugging forward. Here’s a rough guide:
Fix forward when:
- You can identify exactly what changed that caused the break
- The AI’s most recent attempt made clear progress on the fix
- You’re within 2-3 prompts of a working state
- The rest of the app is unaffected
Start over (restore previous version) when:
- Multiple things are broken simultaneously
- You’ve been trying to fix the same issue for more than 30 minutes with no progress
- The AI is introducing new bugs faster than it’s fixing old ones
- You can’t identify what changed or when the break happened
Restoring a previous version isn’t failure — it’s good engineering practice. Every experienced developer does it. The version history in Lovable is there for exactly this purpose. Use it without guilt.
Preventing Breakage Before It Happens
The best debugging is the debugging you don’t have to do.
Test after every prompt. Don’t send five prompts in a row and test at the end. After each change, click through the affected feature and verify it still works. This makes it trivially easy to identify which prompt caused a problem.
Save working states explicitly. Before making a significant change, create a checkpoint. In Lovable, you can add a description to each version. “Working: login, contacts list, add contact” is a useful checkpoint label.
Make one change at a time. The more things a prompt asks the AI to change simultaneously, the higher the chance of an unexpected interaction. Small, focused prompts produce more predictable results.
Describe constraints, not just features. When asking for a new feature, tell the AI what it should not touch: “Add a delete button to the contact card. Do not modify the contact form, the contact list, or the database schema.”
Debugging is a skill, and like all skills it gets easier with practice. The non-coders who get through it fastest are the ones who stay methodical: reproduce the problem, describe it precisely, include the error, test the fix, verify nothing else broke. That process works regardless of your technical background.