Why vibe coding breaks after the first 30 messages
Replace the fear of AI with the excitement of building with AI.
With AI, the work shifts from how to build something to what to build. A vibe coder, then, is someone who tests what to build, working alongside a technical partner, AI, that takes care of the how.
It doesn’t really matter whether you have a technical or non-technical background—both can be an advantage or a disadvantage. What matters more is being naively optimistic about what AI can do, then pushing the tools to their limits to try to implement your ideas.
In this paradigm, the human's advantage is judgment quality, which determines what to build. Your judgment combined with AI's beats AI's judgment alone. At the same time, stay aware of the limits of your own thinking, and keep learning how AI works so you understand its limitations too.
Over the last year and a half, I’ve vibe coded 30+ ideas, ranging from projects implemented in a few prompts over ten minutes to ideas that required weeks of active evening vibe coding.
As the projects got more complex, I started running into problems.
The main problem is the context window, the AI's working memory: after 20–30 messages the model starts losing track of what you discussed early on. Even if you tell it to reread previous messages and the codebase, it can burn most of its tokens reading files and still take a while to identify the real issue.
Whether you keep working in one chat or split across multiple chats, the agent needs a stable reference for what’s going on—otherwise it starts acting like the main character in Memento, remembering only the last 15 minutes. AI also doesn’t know your preferences or taste. It has a generic idea of what humans like, but your taste is specific, so you have to guide it with references and constraints for what “good” looks like.
Since both humans and AI make assumptions about the world, you need a middle ground: a shared source of truth for you and the agent. That's what documentation and PRDs (Product Requirements Documents) are for: they provide a source of truth during development.
Phase 1: Idea Exploration
With multiple coding-agent tools available, it’s easy to test several approaches to the same problem. If you have a rough idea of what you want to build, use voice dictation (Wispr Flow, built-in voice in ChatGPT, Lovable, etc.) to offload your thinking.
You’ll get immediate feedback on the idea. You can copy-paste the same input into coding tools like Replit or Cursor to test quickly. Most of the time, I use ChatGPT (thinking mode) to discuss architecture, feasibility, constraints, and security. This helps shape the idea, see whether something similar exists, do basic competitor analysis, and clarify the unique pain your app solves.
Once the architecture exploration is done, look for visual references of what the app should look like (most projects end up as web apps). Collect references (see Useful links section below) from other apps, and attach screenshots to provide aesthetic context.
Some sites also let you download the actual design code, so attach that for the build phase. Give the AI real code snippets or ZIP files of similar designs if you want pixel-level fidelity.
Phase 2: Creating a Source of Truth
In classic software development, maybe 20–30% of time went into requirements and design, and the rest into implementation.
Now that “how to implement” is increasingly handled by AI, you should spend most of your time (around 80%) on planning and interacting with the agent, and only 20% on execution.
Before you start building, you need a blueprint: documentation such as PRDs and project docs. And yes, AI can help generate them. Once you've described the idea, attached relevant prior conversations, and are ready to implement, produce the docs: point the agent at the input files and ask it to generate the "must have" set first.
Must have:
README.md — Describes what the project is, what you need to run it, how to install dependencies, how to start it, how to run tests and builds, and how to configure it with environment variables and other settings. It is for anyone setting up, running, or debugging the project.
AGENTS.md — Defines how the AI agent should behave, including goals, boundaries, coding standards, quality expectations, what to check before making changes, and how to report completed work. It is the agent behavior contract.
project-overview.md — Gives a short high-level overview of the product intent, target user, and the core "why the project exists." Keep it short (≈100 words). It is designed to be inserted into every prompt to prevent drift. It also tells the agent to read all PRDs before acting and to list exactly what it changed so you can test it.
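As a sketch, a project-overview.md for a hypothetical app might look like this (the product, paths, and details are invented for illustration):

```markdown
# Project Overview

TrailLog is a web app for hikers who want to log routes without
spreadsheets. Target user: casual weekend hikers. Why it exists:
existing trackers are built for athletes and feel like overkill for
someone who just wants a simple trip journal.

Agent instructions: read all PRDs in /docs before acting. After each
change, list exactly which files you modified and what to test.
```

Because it is short, it can be pasted into every prompt without eating much of the context window.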
Nice to have:
implementation-plan.md — Describes the intended architecture and the build sequence. It should outline major components, key technical decisions, dependencies between parts, and the recommended order of implementation.
design-guidelines.md — Defines the UI and visual system rules the product should follow. It should specify typography, color usage, spacing, layout grid, elevation and shadows, opacity, component patterns, states, and interaction rules so the UI stays consistent and high quality.
user-journey.md — Documents the end-to-end user flow as the steps a user takes to reach the main outcome. It should cover entry points, signup and onboarding, core paths, decision points, errors and empty states, and what success looks like.
tasks.md — Serves as the single working checklist for execution. It should break work into actionable items, track status and priority, link to relevant specs, and stay updated as scope changes so the agent always knows what to do next.
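A minimal tasks.md sketch (the items and priority labels are invented for illustration) could look like:

```markdown
# Tasks

- [x] P0: Set up project skeleton and routing
- [ ] P0: Build signup flow (see user-journey.md)
- [ ] P1: Route logging form with validation
- [ ] P2: Empty states for the dashboard
```

Checked boxes and explicit priorities give the agent an unambiguous answer to "what should I do next," even in a fresh chat.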
Phase 3: Implementation
Once you provide the inputs and all the necessary documentation, your job is to steer the AI. It can largely do the work on its own; the human's role is to oversee its thinking and steer it in the right direction.
Since you already laid out the project goal in the project overview file, what is left is to make sure the AI doesn't drift from the mission, so focus on reading and following its reasoning.
That includes nudging it to capture new failures and learnings in AGENTS.md, so the same mistakes don’t repeat.
There will be bugs, of course, and your job is to test the solution and provide feedback. For most problems, dumping screenshots and comments about what isn't working is enough, but sometimes a more detailed problem description is needed. In that case, use the console tab in the browser developer tools: screenshot it or dump the log output, which resolves most issues. You can also prompt the AI to write additional tests and logs at each step so it can refer to those logs later.
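One way to make those feedback dumps richer, sketched below, is a tiny wrapper that records everything the app sends to console.error so you can paste the whole list back into the agent. This is an illustrative pattern, not a library API; the names (capturedErrors) are made up.

```javascript
// Minimal sketch: collect console.error output so a full error log
// can be pasted into a prompt as structured feedback for the agent.
const capturedErrors = [];
const originalError = console.error;

console.error = (...args) => {
  // Record a plain-text copy, then still print to the real console.
  capturedErrors.push(args.map(String).join(" "));
  originalError(...args);
};

// Example: any error the app logs is now recorded.
console.error("Failed to fetch /api/items:", 500);
// capturedErrors now holds the message, ready to copy into a prompt.
```

The same idea works for console.warn or window.onerror; the point is giving the agent exact error text instead of a description of a screenshot.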
If you're working in other tools like Cursor or Lovable, it can also help to download the repo and paste it into Codex, which usually solves the issue pretty quickly. If you get stuck, it's usually because you ignored the context window, didn't provide clear documentation, or weren't clear in your ask.
Conclusion
Vibe coding is about building with AI while keeping your standards high. As soon as projects grow beyond small prototypes, the main constraints become context, taste, and alignment. Clear documentation turns those constraints into a strength by keeping the agent focused and consistent. When you bring strong references and a solid source of truth, you move faster and the result is more reliable and more polished. Good luck building!
Useful links
UI/UX inspiration
📱 Mobbin — https://mobbin.com
🎨 Dribbble — https://dribbble.com
✨ Godly — https://godly.website
Productivity / build tools
🎙️ Wispr Flow — https://wisprflow.ai
🧠 Cursor — https://cursor.com
🛠️ Lovable — https://lovable.dev


