Vibe Coding Is Killing Your Side Project
You can save it without killing the vibe
Vibe coding. That’s the name of the trend among new developers to hand over the reins to AI tools and just ‘let them do their thing’.
While the concept is amusing, this trend has led to a concerning number of complaints about AI rendering codebases unusable by repeatedly fixing issues it previously introduced, only to generate new ones in the process.
Bugs in software development are unavoidable, but frustration is understandable when the tool that initially accelerated development becomes the reason the project grinds to a halt.
This phenomenon is shocking to everyone but senior developers, who are all too familiar with the traps of fast iteration at the expense of thorough planning.
The rise of AI coding tools is creating a new divide in software development: those who understand how to use them, and those who are blindly letting AI dictate their code, with no second thought other than ‘it works’.
Fortunately, there is a simple principle we can deduce to help us avoid falling into this trap ourselves. But to get there, we need to better understand the issue at hand.
AI Blind Spot: Lack Of Context
If you ask a bunch of programmers how a specific feature should be built within a codebase, you will quickly realize that code comes in as many flavours as 19th-century paintings.
Despite there being well-established ‘patterns’ and ‘best practices’, any developer would agree on two points:
There rarely is one single best approach to building software.
The best approach is always entirely dependent on the context.
To fully grasp the importance of both these points, let’s imagine a worst-case scenario: a feature that is built with a sub-optimal approach and without sufficient context.
Being built sub-optimally already implies that some part of the developed feature could have been done better. We could make fewer DB calls, perform stricter validation on the input data, or build the feature with an approach that exposes fewer vulnerabilities.
But even a well-implemented feature can only be assessed as such under the lens of the project’s context. The same implementation may be perfect for a simple, locally run project while being the source of an immediate server crash if deployed at scale.
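A toy illustration (the function names and schema here are made up): the exact same query pattern can be perfectly fine or disastrous depending on scale.

```python
import sqlite3

def all_users(db: sqlite3.Connection):
    # Fine for a hobby project with a few hundred rows: one call,
    # everything comfortably in memory.
    return db.execute("SELECT * FROM users").fetchall()

def iter_users(db: sqlite3.Connection):
    # At scale, fetchall() on a table with millions of rows can
    # exhaust memory; iterating the cursor streams rows instead.
    for row in db.execute("SELECT * FROM users"):
        yield row
```

Neither version is ‘the best approach’ in a vacuum; the project’s context decides which one is right.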
So we don’t want to just build with the best approach. We want to build with the best approach that accounts for as much relevant context as possible.
This might seem obvious at first, but relevant context includes things that are often hard to predict, like future business-logic changes, application load levels and feature-specific latency requirements.
The idea of context-dependent optimality is key to understanding the growing wave of AI-aided developer frustration posted daily on Reddit or Twitter. It all boils down to how new developers are using AI.
AI-Driven Mediocrity
There is a simple reason why the AI developer tool hype is led by juniors and non-technical folks while many senior developers remain sceptical: a magician can only impress you if you don’t know how the trick works.
If you have never built any production-level application, AI producing a working mobile app demo for a to-do tracker will always leave you baffled. After all, you have no idea what it takes to make such apps work, but you know that developers are often paid 6 figures to build them.
On the other hand, if you are aware that libraries like pygame allow you to build a fully functioning game in less than 100 lines of code, the magic of AI easily fades into the ‘cool, whatever’ category. It may be impressive, but certainly nothing to write home about.
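To make that concrete, here is a minimal sketch of such a game: a window with a square you steer with the arrow keys, in about twenty lines of pygame.

```python
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
clock = pygame.time.Clock()
x, y = 320, 240  # starting position of the square

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    keys = pygame.key.get_pressed()
    # Arrow keys move the square; the booleans subtract to -1, 0 or 1.
    x += (keys[pygame.K_RIGHT] - keys[pygame.K_LEFT]) * 5
    y += (keys[pygame.K_DOWN] - keys[pygame.K_UP]) * 5
    screen.fill((30, 30, 30))
    pygame.draw.rect(screen, (200, 50, 50), (x, y, 40, 40))
    pygame.display.flip()
    clock.tick(60)  # cap the loop at 60 FPS

pygame.quit()
```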
The idea that the excitement around these tools comes, at least partially, from a lack of understanding of how application development works gives us some insight. It helps us understand the thought process of a new developer trying to build something with Cursor or v0. Let’s try to put ourselves in their shoes.
Upon hearing about new developers creating businesses by putting an app together with AI, a beginner will choose an AI coding tool and type out their app idea:
‘I want to create a mobile app that helps students with their homework by letting them scan a sheet of paper with questions and providing all the answers’
Obediently, the LLM proceeds to spit out the most straightforward and (literally) most predictable response possible. The result appears to be a ‘pretty decent’ output for such a small amount of work.
‘At this rate, I will be done in a few hours’ they think.
Little do they know that with this first prompt, they have already doomed the project to failure. We will talk about why in a second.
Now, you might be thinking that even the most naive beginner would know to provide a little more detail. As we will see, the problem lies in the type of content of the prompt more than in its level of detail, and it applies at every abstraction level of the application.
After this first prompt, the output already looks pretty good! So the new developer continues to prompt away, making just enough progress at every step to keep their hopes up that by the end of the day, their app will be ready to deploy and monetize.
This process is hampered by the biggest Achilles’ heel of a developer in this position: their inability to critically assess the output.
As time goes on, the inevitable tragedy strikes and the LLM falls into what seems to be a development black hole: an application so complex and convoluted that even its creator is unable to continue working on it.
The frustrated junior keeps prompting, tries Claude Sonnet, o1 and even Mistral to bring their application back to a working state, but to no avail. With a broken codebase, no familiarity with it and no way to debug further, only one solution remains: post on Reddit or Twitter asking for help.
The Problem
The problem the new developer is facing can be captured in a simple statement: there are infinite ways to build software, but only a few of them will fulfil all your requirements.
To elaborate on this concept let’s focus on some crucial questions left unanswered by the prompt:
No business context: What grade are the students in? What class? What type of homework? How do students log in?
No development approach: iOS or Android, or both? What framework? What DB? How many concurrent users?
The initial reaction to these questions might be ‘It doesn’t matter, any students, any login method with any framework, just make it work’.
Besides the obvious technical complexities, there’s a deeper reason why ‘just make it work’ is not a viable long-term approach: it delegates responsibility and decision-making to the AI.
‘Just make it work’ is often a reaction that stems from frustration with the perceived level of complexity when you do not understand the requirements deeply enough. The tendency then becomes to let the LLM take the lead by predicting those requirements and building to them.
By delegating both the requirements and the implementation details to the AI models, we statistically ensure one of the following:
a) We build the wrong thing (not enough context provided)
b) We build the right thing in a way that we cannot extend or continue building (we’ve provided context about business but the AI made bad implementation decisions)
c) A mix of a) and b) where we build something that resembles what we initially wanted, but it’s too difficult to make any further changes.
One way to visualize what happens when LLMs take the lead is to imagine a tree-like structure representing the different decisions to be made throughout development. Every decision leads you to a different path but only some of the paths will lead to success. Only you know what the definition of success is, so how likely is it that the AI will get the decisions right often enough to go into one of the successful paths?
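A rough back-of-the-envelope calculation shows how fast those odds collapse: if the AI picks a viable branch at, say, 90% of the decision points, then after 20 independent decisions the chance of still being on a successful path is only 0.9^20 ≈ 12%.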
Traditionally, such decisions have been made by humans. At every point, the developer draws on the full context of their experience and asks themselves ‘How should this be built so that present and future requirements are fulfilled?’.
While this may sound like a simple question, imagine trying to capture the entire context that goes into such decisions in a single prompt. It would be unimaginably hard, and that’s exactly what good developers are paid to do.
This magic-like skill accumulates over years of frustration, anger and sheer persistence while trying to build software that works. Unfortunately, it also slows down development, often leads to long meetings and requires sign-off from stakeholders, making the idea of replacing it with AI-driven decisions very alluring. Yet this same skill is what prevents the disaster we so often see LLMs run into: it keeps technical debt manageable.
Beginners using AI have it the worst, since they do not know where any of these decision points come from (why do I even have to choose which DB we use?), and so their decisions get delegated by simple omission. Think about that: every time you fail to mention an important detail about a decision to be made, you are asking the AI to decide it for you.
The Solution
The discussion above already sheds some light on what the solution might be. And no, it’s not ignoring AI.
This point is important to underline: I would never say that the solution is to quit using AI code tools or ‘stop AI development’ as some famous people interestingly have suggested. As with most powerful tools, the effectiveness lies in the hands of the user, more than anything. So let’s see if we can find a solution that allows us to leverage this technology, without its obvious pitfalls.
Let’s start by asking ourselves the key question: how can I use AI coding tools so that a) the software fulfils my requirements and b) it does so in a way that is future-proof?
With the assumption that we will continue to use AI tools in their current form, we can rephrase this question to: How do I write my prompts so that the output complies with my exact requirements and adheres to basic software design principles for long-term development?
The beauty of this last question is that it answers itself: To get the most out of AI tools we need to provide exact requirements based on the totality of the context surrounding our application.
Or if we want to make a high-level first principle out of it:
Give AI instructions on HOW to do things, not WHAT things to do.
This is the essence of the beginner’s problem, and also the path to a solution.
When you are developing your first application, you have no idea how things ‘should’ be implemented. So the only way you can make progress is to prompt what to build instead of how to build it.
By doing so, you delegate all decisions to the LLM, which, as we saw above, will inevitably lead you to a point where either the app doesn’t do what you want, or it has amassed too much technical debt to continue.
If, on the other hand, you had started by explaining exactly how the implementation should be done, the decisions you delegate are minor and easy to change (colours, styling, etc.).
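To illustrate with the homework app from earlier (every specific choice below is hypothetical, picked only to show the shape of a ‘how’ prompt):

A ‘what’ prompt: ‘Build a mobile app that scans homework questions and shows the answers.’

A ‘how’ prompt: ‘Create the scan screen of our React Native app. Use expo-camera to capture the photo, upload it as multipart/form-data to our POST /scan endpoint, keep all network calls in api.ts, and show an error state if the upload fails.’

The second prompt still leaves the typing to the AI, but every decision that matters has already been made by you.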
Even better, by being specific about the how, you maintain some familiarity with the project, as you understand how and why features are implemented.
Finally, this approach forces you to learn and keep learning. It distinguishes you from other new developers who can prompt the same basic high-level idea but will most likely hit the same dead end you have.
So next time you find yourself prompting an AI to build a feature, stop and think ‘How should this be built?’.
If you have no idea, ask the AI to tell you what the different options are. LLMs are great at showing you what you don’t know, so they will very likely cover most of the possibilities.
Look at all the different options, assess them, and choose one. You will make mistakes but at least they will be your mistakes and you will have the knowledge to iterate on them.
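A hypothetical example of such an ‘options first’ prompt: ‘Before writing any code, list three ways to extract text from a photographed worksheet in a mobile app, and compare them on cost, accuracy and offline support.’ Pick one, then prompt for the implementation with that decision already made.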
If you want some more practical tips around providing context to LLMs, here are 3 easy wins:
Always have an idea of how something should be implemented before prompting AI to build it.
Before starting a project, write a README.md with specifics about the goal of the project and all the domain knowledge around it. Reference it when prompting the AI (see the sketch after this list).
If you are using Cursor, go to cursor.directory and paste the applicable rules into a .cursorrules file in your project.
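As an illustration, a README.md for the homework app might start like this (a made-up sketch, not a template):

```markdown
# Homework Scanner

## Goal
Let high-school students photograph a worksheet and receive
step-by-step answers.

## Users
Students in grades 9-12; login via school email.

## Stack
React Native (Expo) frontend, FastAPI backend, PostgreSQL.

## Constraints
- Must run on low-end Android devices
- Scans are processed server-side, target under 5 seconds per page
```

A .cursorrules file works the same way but holds plain-English instructions for the editor, such as ‘Prefer functional React components’ or ‘Never use the any type’.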
These 3 points will massively increase the amount of context your AI tool has, giving it specific guidance on how to build things and what to build.
Final Words
Although the tone of this post might not reflect it, I’m overwhelmingly positive about the productivity of AI tools in the hands of those who know how to use them. My anecdotal experience on my own projects is that seeing results faster keeps my motivation high and pushes me to try tech stacks I’m less familiar with, encouraging growth and novel experiences.
That is why my proposed solution still leverages AI. I believe it is possible to use it productively, but it requires thoroughness and proficiency from the user. This is not to discourage newcomers to the space, but rather to encourage them to spend more time learning and understanding what they are building and the best way to build it.
At the cost of some development speed, you will gain the proficiency required to leverage these tools. Soon enough, the risk of hitting the dreaded point of no return will be minimal, and you will see it as yet another opportunity to grow rather than the impossible roadblock it seems today.