Unless you’ve been living under a rock for the past couple of years, you’ve probably noticed that AI dominates your feeds on LinkedIn, YouTube, and just about every other platform. Everything is about AI—everyone’s talking about it, building with it, or trying to monetize it.
As much as I love AI—and I do use it, build systems with it, and follow the space closely—there’s one trend I keep seeing across platforms like X, Reddit, and LinkedIn that really worries me. It’s what people call vibe coding.
So, what is vibe coding?
Vibe coding is a new approach to software development that leverages large language models (LLMs) to generate code based on natural language prompts—essentially letting the AI handle the implementation while the developer focuses on describing what they want in plain language instead of code.
In other words, it’s not about writing code in the traditional sense anymore. It’s about typing prompts into an AI tool, copying whatever comes out, and hoping it works—without fully understanding what’s being built under the hood.
Platforms like Lovable, Bolt.new, and Replit make it extremely easy to spin up an app with AI and zero coding. While that’s great for rapid prototyping or learning, it encourages a mindset where understanding doesn’t matter as long as the app runs. This idea of building an application by typing what you want into a magic black box and magically getting 'something' back is extremely dangerous.
I love the quote, “a picture is worth a thousand words,” and nothing illustrates the danger better than a quick search on GitHub. If you type something like OPENAI_API_KEY= into the search bar, you'll instantly find thousands of leaked API keys. Why?
Because many vibe coders—especially those who rely heavily on AI tools—either blindly copy-paste the output into their codebase or use tools that inject the code for them automatically, often including sensitive information like API keys, database URIs, and credentials. They deploy it and move on—without understanding the risks.
This is the dark side of vibe coding: no fundamentals, no awareness of security, no understanding of secret management—just code being deployed by people with no real understanding of coding or proper development practices.
These vibe coders aren't aware that secrets should be stored in environment variables and protected using secure services like Azure Key Vault, AWS Secrets Manager, or Vault. They don’t know about .gitignore
or the consequences of exposing a key in a public repository.
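To make the contrast concrete, here’s a minimal sketch, assuming a Python project where the key lives in the environment (locally via a .env file that is listed in .gitignore, in production via a secret manager) rather than in the source:

```python
import os

# Never do this: the key ends up in the repo and in GitHub search results.
# api_key = "sk-REPLACE-ME"  # hard-coded secret (placeholder), do NOT do this

# Read the secret from the environment instead. Locally it can come from a
# .env file kept out of version control; in production, from a service like
# AWS Secrets Manager or Azure Key Vault that injects it at runtime.
api_key = os.environ.get("OPENAI_API_KEY")
if not api_key:
    raise RuntimeError(
        "OPENAI_API_KEY is not set; configure it in your environment "
        "or secret manager instead of hard-coding it."
    )
```

The snippet itself is trivial; the habit is the point. The secret never appears in the source tree, so it can’t leak when the repository goes public.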
Many of them started coding only after LLMs became widely available and skipped the foundational learning entirely. They rely solely on prompts, with little to no understanding of security, architecture, or even basic coding principles—which is extremely dangerous.
They’re just chasing a working demo with gradient backgrounds and emojis, not building a secure, reliable system.
I’m not against vibe coding—when it’s used for basic, boilerplate tasks you already understand but simply don’t want to spend hours doing. That’s totally fine.
But the idea of building entire systems or applications blindly, without understanding what’s actually happening under the hood, and expecting them to work long-term—that’s a recipe for disaster.
Vibe coding is cancerous
Vibe coding is harmless when confined to trivial boilerplate or an isolated sandbox.
The rot begins when entire features—or worse, full systems—reach production without anyone truly understanding the code.
If this starts spreading like cancer, the consequences will be severe. Entire teams will ship code they don’t understand, creating systems that are fragile, insecure, and impossible to maintain. Technical debt will explode. Debugging will become guesswork. Critical bugs and vulnerabilities will slip through because no one knows what the code is supposed to do in the first place.
Over time, we’ll have a generation of developers who can prompt but can’t code—who can build demos, but not systems.
And companies will be left with software that’s essentially a black box—one that breaks, exposes sensitive data, or collapses under real-world load with no one capable of fixing it.
The developer risks
First, the engineer who ships vibe-coded solutions often has no idea what’s inside the AI-generated black box. And when you don’t understand something, you can’t control it—just like mistaking an alligator for a puppy.
But the real danger is long-term: relying solely on vibe coding erodes your fundamentals—if you had any to begin with. You stop thinking in terms of design patterns, data structures, security, performance, and scalability. Over time, you become someone who can only prompt but not debug, only generate but not reason. You become completely dependent on these tools—and suddenly, you can't build anything on your own.
And when the AI fails—or the job market demands real engineering skills—what do you have to offer? You’re done.
The business risk
Unchecked vibe code in production can sink a company financially and reputationally unless a qualified engineer reviews every line. Imagine someone “prompt-engineering” a banking service and pressing Deploy. Here’s what can—and often does—go wrong:
Hard-coded secrets leak to GitHub, letting attackers drain accounts in minutes.
Missing input validation enables SQL injection or path-traversal attacks that expose customer data (a safer pattern is sketched after this list).
Broken authorization logic lets one user see or transfer another user’s funds.
Concurrency bugs (e.g., unsafely updating balances) cause double withdrawals or negative totals.
Unbounded retries / infinite loops spike cloud bills and DoS your own service.
Silent error-handling swallows exceptions, leaving corrupted data with no audit trail.
Regulatory breaches—like logging raw cardholder data—trigger multi-million-euro GDPR fines.
Untested edge cases crash mobile apps, torpedoing user trust and app-store ratings.
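To ground the injection and concurrency items, here’s a minimal sketch using Python’s built-in sqlite3 module and a hypothetical accounts table (not any specific banking stack) of a withdrawal that avoids both string-built SQL and the read-modify-write race that produces negative balances:

```python
import sqlite3

def withdraw(conn: sqlite3.Connection, account_id: int, amount: int) -> bool:
    """Try to withdraw `amount` (in cents); return False if funds are insufficient."""
    if amount <= 0:
        raise ValueError("amount must be positive")

    with conn:  # runs the statement inside a transaction
        # Parameterized query: user input is never concatenated into the SQL
        # string, so a crafted account_id or amount cannot inject statements.
        # The conditional UPDATE is atomic: the balance check and the deduction
        # happen in one statement, so two concurrent withdrawals cannot both
        # pass a separate "read balance, then write" check.
        cur = conn.execute(
            "UPDATE accounts SET balance = balance - ? "
            "WHERE id = ? AND balance >= ?",
            (amount, account_id, amount),
        )
    return cur.rowcount == 1
```

None of this is exotic. It’s exactly the kind of detail a working demo can skip, and a prompt won’t volunteer unless you already know to ask for it.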
Let’s say your vibe-coded solution makes it to production. If you don’t understand how it works, then even when things start breaking, you might not realize that the feature is the root cause. It could take days—or worse, weeks—to connect the dots.
And here’s the nightmare scenario: how do you debug something you don’t even understand?
You’re staring at code that looks like magic, with no idea what’s normal and what’s broken.
Good luck fixing a production issue under pressure when you weren’t the one who actually wrote the code.
Keep vibe coding on a short leash; once it slips into critical paths, it mutates from a time-saver into an existential threat.
Who profits from selling this illusion?
This whole mindset has been fueled by the big tech companies, who are investing tens—and even hundreds—of billions into AI, while aggressively pushing the narrative that “everyone can be a developer now.” But that’s one of the biggest and costliest lies the tech world has ever seen.
They promote ideas like the “prompt engineer” or the “AI pair programmer” by building tools such as GitHub Copilot, Claude Code, and Cursor—platforms where you can let AI generate large chunks of code with little to no understanding of what it’s actually doing. For many, it feels like magic—a black box that just works.
But that same code might as well launch a nuclear missile, and they wouldn’t know the difference.
Of course, I’m exaggerating (I hope) to make a point—but the danger is real.
Let’s be real now
AI is here to stay. It’s a major breakthrough and absolutely boosts productivity—that's a fact. But what AI is not is a self-sufficient software engineer that can build and maintain entire systems on its own.
As much as companies pushing this narrative want us to believe otherwise—mostly to justify the billions they've poured into AI, boost their stock prices, and keep investors happy—the reality is very different. There’s no AI system you can just turn on and expect it to build the next Netflix or Amazon.
Yes, there are incredible tools that help you build and ship faster. That’s true. But that’s where it ends.
I especially love when companies claim that “30%,” “50%,” or even “60%” of their code is now written by AI. It sounds impressive—until you realize what it actually means. It doesn’t mean there’s some evil AI in the basement writing production-grade systems while engineers sip mojitos. It simply means that some portion of the code pushed to production contains AI-generated snippets—most of which are (hopefully) still reviewed, edited, and understood by real developers.
Judgement Day
My advice to you: never be that guy.
Thanks, AI, for painting this so beautifully on the first try.
Until next time—write code, learn code, not just prompts.
Written by human and AI ❤️🤖