
The vibe coding problem: a flood of buggy apps = goldmine for hackers

Threat landscape


June 30, 2025

What’s vibe coding?

Originally coined by Andrej Karpathy, a founding member of OpenAI, the term vibe coding describes a way of building software that is as easy as typing an AI prompt and letting the machine do the rest. In short: nearly anyone capable of writing can now build a product.

Instead of starting with the app architecture, you can explain the concept of your app to an LLM in a few sentences, refine it, feed it to a tool such as Lovable, Cursor, or 0x, and watch your app being built live. This makes coding exceptionally easy, fast, and efficient: the user can skip the hoops of running checks, performing tests, maintaining conventions, and so on.

It works like a charm for prototyping, or as a demo for testing your software idea before you decide to invest heavily in it. Super easy and fast. But it can hit you back if you choose to commercialize the product and rely on an LLM as your main CTO…

Why this might be a risky deal

Patterns over convention: LLMs mainly reproduce correlation patterns learned from vast datasets derived from GitHub repositories, Reddit, tutorials, Stack Overflow, and countless other sources. These sources often contain more insecure examples than secure ones, so AI tools frequently generate “clean-looking” code that appears secure on the surface but hides real vulnerabilities and security gaps.

  • For example, a JWT validation snippet might check for the token’s presence but skip signature verification.
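
To make the pattern concrete, here is a hypothetical Express middleware in that spirit; the route shape and names are illustrative, not taken from any real tool output:

```javascript
// The anti-pattern: the token's presence is checked, but jwt.decode() only
// parses the payload and never verifies the signature, so any self-crafted
// token is accepted.
const jwt = require('jsonwebtoken');

function authenticate(req, res, next) {
  const token = (req.headers.authorization || '').split(' ')[1];
  if (!token) {
    return res.status(401).json({ error: 'Missing token' });
  }
  req.user = jwt.decode(token); // insecure: no signature verification
  // What it should do instead:
  // req.user = jwt.verify(token, process.env.JWT_SECRET);
  next();
}
```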

Understanding the broader landscape of threats posed by vibe coding…

We used Cursor and entered the following prompt:

"Create a Node.js/Express login route. I want to generate the token on login and validate it on protected routes"

Now let's see the output:
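
As a stand-in for the generated code, here is a representative sketch of what such a prompt tends to produce (illustrative rather than verbatim), with the three issues discussed next already baked in:

```javascript
const express = require('express');
const bcrypt = require('bcrypt');
const jwt = require('jsonwebtoken');

const app = express();
app.use(express.json());

const JWT_SECRET = 'super-secret-key-123'; // issue 1: hardcoded secret
const users = {};                          // in-memory user store, just for the sketch

app.post('/register', async (req, res) => {
  const { username, password } = req.body;
  users[username] = await bcrypt.hash(password, 8); // issue 3: low bcrypt cost
  res.sendStatus(201);
});

app.post('/login', async (req, res) => {
  const { username, password } = req.body;
  const hash = users[username];
  if (!hash || !(await bcrypt.compare(password, hash))) {
    return res.status(401).json({ error: 'Invalid credentials' });
  }
  const token = jwt.sign({ sub: username }, JWT_SECRET, { expiresIn: '1h' });
  res.json({ token });
});

app.get('/profile', (req, res) => {
  const token = req.headers.authorization.split(' ')[1]; // issue 2: assumes "Bearer <token>"
  try {
    const payload = jwt.verify(token, JWT_SECRET);
    res.json({ user: payload.sub });
  } catch {
    res.status(401).json({ error: 'Invalid token' });
  }
});

app.listen(3000);
```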

Let’s quickly run through three potential threats that an inexperienced developer, casually vibe coding on autopilot, might miss:

1. Hardcoded secrets (High)

Problem: Hardcoding secrets in source code is a serious security misconfiguration. The secret can easily leak through version control or error messages.

Solution: Use environment variables:
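
A minimal sketch, assuming the secret is named JWT_SECRET and dotenv is used for local development:

```javascript
// Load variables from a local .env file in development; in production, the
// platform's secret manager should inject them instead.
require('dotenv').config();

const JWT_SECRET = process.env.JWT_SECRET;
if (!JWT_SECRET) {
  throw new Error('JWT_SECRET is not set'); // fail fast instead of signing with undefined
}
```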

Make sure to load it securely using tools like dotenv or your deployment platform's secret management.

2. Token parsing without a prefix check (Medium)


What’s happening – The code assumes the Authorization header follows the format Bearer <token> and pulls out the token like this:
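
The extraction in question typically looks like the /profile route in the sketch above:

```javascript
// Assumes the header is exactly "Bearer <token>"; any other shape yields
// undefined, and a missing header makes the .split call throw a TypeError.
const token = req.headers.authorization.split(' ')[1];
```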

Why this could be risky – While this works if the header is well-formed, there's no check to make sure it actually starts with "Bearer". If someone sends a malformed header, say, without a space or with a completely different prefix, you can end up with undefined or unexpected behavior (and if the header is missing entirely, the split call itself throws). In JavaScript that might just surface as a harmless undefined, but in stricter languages like Go or Java, this kind of unchecked parsing could throw an exception or cause a crash.

That’s not a security vulnerability on its own, but it does open the door to avoidable bugs and brittle code.

A safer approach – add a simple check to confirm the header starts with what you expect:
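
One minimal way to do that (a sketch, not the only option):

```javascript
const authHeader = req.headers.authorization || '';
if (!authHeader.startsWith('Bearer ')) {
  return res.status(401).json({ error: 'Missing or malformed Authorization header' });
}
const token = authHeader.slice('Bearer '.length);
```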

This makes the logic clearer, avoids weird edge cases, and builds good habits if the codebase ever grows or moves to another language.

3. Weak brute-force protection on login

Problem – the bcrypt cost factor is set to 8. That is fine for local testing, but low by modern standards if you ship the app live.

A low cost factor makes every password guess cheap to check, so anyone can hammer the /login route repeatedly and eventually break in, and any stolen hashes become far easier to crack offline.

Solution – set the cost parameter to something like 12 if you want to go live with the app.
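
In bcrypt terms that is a one-line change; 12 follows the suggestion above and can be tuned to your hardware:

```javascript
// A cost of 12 makes each hash (and therefore each guess) noticeably slower to compute.
const passwordHash = await bcrypt.hash(password, 12); // was 8 in the generated sketch
// Note: this alone does not rate-limit /login; a middleware such as express-rate-limit
// is a common complement, though it is outside the scope of this post.
```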

Looks can deceive – substance is key

The problems aren’t limited to what you see on the surface, because the code an LLM outputs might look just perfect. The issues span far beyond how the code is written and which mechanisms are in place. There's always more.

For example:

  • Outdated or vulnerable third-party libraries: There’s no guarantee the LLM will use the most up-to-date version of third-party software, a check even experienced devs sometimes skip. This brings the risk of supply chain attacks and malware hidden in typosquatted libraries (e.g., requests vs requestr).

  • Developers might skip formal threat modeling or secure design reviews, trusting AI code as “ready to use” when it is bound to contain flaws.

It’s also worth noting that vibe-coded apps, at least today, tend to lack cohesive security event logging (e.g., login failures, authz denials), input validation logs, and alerts for suspicious activity.
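
Even a minimal, dependency-free logger for failed logins and authorization denials is rarely present in generated code. A sketch, with event names that are purely illustrative:

```javascript
// Minimal structured security logging: one JSON line per event to stderr,
// where a log shipper or the hosting platform can pick it up.
function logSecurityEvent(event, details = {}) {
  console.error(JSON.stringify({
    timestamp: new Date().toISOString(),
    event, // e.g. 'login_failure', 'authz_denied'
    ...details,
  }));
}

// Example usage inside a login route:
// logSecurityEvent('login_failure', { username, ip: req.ip });
```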

How to mitigate security risks

It’s all about setting constraints, being proactive, and having independent tools, skills, and human judgement to verify your code before it goes into production.

At the bare minimum, you can use tools like OWASP Dependency-Check and npm audit, and lock versions using package-lock.json, poetry.lock, or pip-tools.

You can also deploy your app on a dev domain and use Deepengine to check for vulnerabilities, misconfigs, or outdated software.

  1. Use SAST tools like Checkmarx or Aikido to catch vulns in the codebase and help maintain security.

  2. Don’t fall for the “Accept all” option; always check each output manually for flaws.

  3. Add security linters (e.g., ESLint security plugins, Semgrep); a minimal config sketch follows this list.

  4. Depending on your vibe coding tool, you may set rules to tell your AI agent to prioritize security in its output.

  5. Vibe code in phases, with structured validation, and integrate threat modeling into design and code review.

  6. Do not copy/paste code at all! Understand the underlying concept behind the AI’s code and implement it manually.
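
For item 3, even a tiny linter setup is a reasonable starting point. This sketch assumes the eslint-plugin-security package and the classic .eslintrc.js format; the exact config name varies between plugin versions, so check the plugin's README:

```javascript
// .eslintrc.js – enable the security plugin's recommended rules
module.exports = {
  plugins: ['security'],
  extends: ['plugin:security/recommended'], // newer plugin versions name this 'recommended-legacy'
  parserOptions: { ecmaVersion: 2022, sourceType: 'module' },
};
```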

Conclusion

LLMs generate code at the function level, without considering system-level architectural constraints such as session state, service interactions, or permission enforcement.

This lack of architectural thinking leads to broken access controls, missing state checks, and logic flaws that only become obvious in a broader context.

Vibe coding is all about pattern matching: it gives you speed but takes away convention. It’s an awesome tool for rolling out a product for testing purposes, say, to showcase to potential customers and see whether it comes close to product-market fit. But when it comes to shipping a real product that handles real, sensitive user data, it’s worth investing extra time and resources into staying conscious about security.

Spot security gaps in web apps

First target is free; find threats in under 5 minutes.