The future of coding has changed. Over the past two weeks, I experimented with vibe coding: using AI tools to rapidly generate code from high-level prompts. It’s been a mix of beautiful and urghhhhh.

It’s incredibly effective for building complex zero-to-one projects in remarkably short periods. I developed a Multi-Hazard Disaster Warning System, a Social Media Algorithm Simulator, an Image Analysis Tool, and an Anomaly Detection System, and refactored a Data Visualization and ML platform, all in under two weeks.

Vibe coding shines at delivering functional applications quickly, but it’s a double-edged sword. The results are often impressive on the surface yet flawed under the hood. Without reviewing the code, your application might seem to run perfectly while harboring missing logic, broken control flow, security vulnerabilities, incomplete features left as placeholders, and, of course, AI-generated hallucinations.

Developers who vibe code without fully understanding their requirements are setting themselves up for problems. Yes, the interface may look polished, but does it function correctly? Is it secure and reliable? Don’t rush to deploy a vibe-coded project without thorough review and rigorous testing.

AI is like a child: full of potential, but in need of guidance. As the parent, you must carefully steer it to ensure the final product meets your standards.
"Vibe Coding: A Double-Edged Sword for Developers"
More Relevant Posts
🚨BREAKING: Anthropic’s Sonnet 4.5 just set a new bar: 30 hours of nonstop coding. The INSANE part? It cloned Claude.ai and tested its own implementation.

Anyone who’s pushed AI models on real repos knows the failure mode: they scatter — shallow edits, lost threads, abandoned tasks. That’s the core weakness of LLMs in software engineering: context collapse. Once the model drifts, the project unravels.

Sonnet 4.5 feels different. It doesn’t just generate code — it tracks context, avoids distractions, and carries a task forward across hours of work, far beyond what previous models could handle. Paired with new upgrades:

📌 Checkpoints — save progress at key points so the model can pick up exactly where it left off.
📌 In-memory context tools — manage and update context on the fly, keeping track of variables, edits, and dependencies.

This model begins to act less like a “copilot” and more like a junior engineer who reliably keeps grinding through the weekend. The performance numbers back it up (82% on SWE-bench, 61.4% on OSWorld): it’s now the state-of-the-art coding model.

The race for the best coding agent is heating up, and the trajectory is insane! What are your thoughts on this major leap in AI coding?
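Anthropic hasn’t published the mechanics of those checkpoints, but the underlying idea is easy to picture. Here is a minimal, hypothetical Python sketch (my illustration, not Anthropic’s implementation) of an agent loop that snapshots a workspace after each verified step and rolls back when a step fails, which is what keeps a long-running task from unraveling:

```python
import copy

def run_agent(task_steps, workspace, apply_step, validate):
    """Hypothetical agent loop with checkpointing.

    workspace: dict of file path -> contents (in-memory stand-in for a repo).
    apply_step: callable that lets the model edit a workspace copy for one step.
    validate: callable that returns True if the workspace still passes its checks.
    """
    checkpoint = copy.deepcopy(workspace)  # last known-good state
    for step in task_steps:
        candidate = copy.deepcopy(checkpoint)
        apply_step(candidate, step)   # model edits a scratch copy, never the checkpoint
        if validate(candidate):
            checkpoint = candidate    # step verified: advance the checkpoint
        else:
            print(f"step {step!r} failed validation; discarding, keeping last checkpoint")
    return checkpoint
```

In a real agent the snapshot would likely be a git commit and `validate` a test suite, but the shape is the same: never build on unverified state.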
I’ve noticed a lot of negative sentiment around “Vibe Coding” recently, including from experienced developers. Part of this, I think, comes from a misunderstanding. When Andrej Karpathy introduced the phrase earlier this year, he meant something specific: getting into the flow, forgetting about the mechanics of code, and just typing or speaking to see what emerges. Over time, the term has been stretched to mean “any AI-generated code,” which misses the point.

Vibe Coding has its place. It’s well suited to prototypes, demos, and quick utilities where speed matters more than maintainability, scalability, or security. That is very different from engineering software with AI tools. For real systems, you still need conscious design and architecture. Tools like Cline, Roo, Cursor, Copilot, Aider, and Claude Code are improving quickly, and many have “ask” or “design” modes where you aren’t generating code but discussing design. A quick prompt to the default model is rarely enough. Choosing the right model, setting up roles and system prompts, and grounding the process in design documents all make a significant difference.

Looking ahead, we may see prompts become a new kind of source code, with LLMs acting more deterministically: you’ll simply request a new feature and regenerate the codebase in a consistent way. But we’re not there yet. For now, the distinction matters: vibe coding is about flow and speed, while AI-assisted engineering is about structure and quality. Both are valuable, but they should not be confused.

Am I right? Is the boundary starting to blur? Or do you draw a clear line between vibe coding and real software engineering with AI?
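To make the “roles and system prompts” point concrete, here is a minimal sketch of pinning a model into an ask/design role by grounding the system prompt in a design document. It assumes the openai Python package’s v1 chat interface; the file path, model choice, and question are placeholders:

```python
from pathlib import Path
from openai import OpenAI  # assumes the openai package; any chat API works similarly

client = OpenAI()  # reads OPENAI_API_KEY from the environment

design_doc = Path("docs/design.md").read_text()  # hypothetical design document

system_prompt = (
    "You are in DESIGN mode. Do not write code. "
    "Discuss architecture and trade-offs, grounded strictly in the design "
    "document below.\n\n--- DESIGN DOCUMENT ---\n" + design_doc
)

response = client.chat.completions.create(
    model="gpt-4o",  # choose the model deliberately; the default is rarely enough
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Where should rate limiting live in this architecture?"},
    ],
)
print(response.choices[0].message.content)
```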
Vibe coding was supposed to kill software developers. It’s not killing us. It’s creating more work than ever.

I’m an engineer caught in the thick of it. I believed AI tools would help me ship my side-project MVP in days. Instead, I spent two brutal weeks debugging hallucinations and wrestling with AI’s forgetfulness. Every session felt like a maze: juggling Claude, Copilot, and Cursor conversations, re-explaining context the AI had just forgotten, and patching up bugs I barely understood.

Here's what I learned living this chaos firsthand:
- AI-generated code isn’t a final product; it’s a rough draft demanding intense human oversight
- Developers aren’t replaced; they become AI’s janitors, context keepers, and planning guards
- Managing token limits and keeping scope in check without guardrails is a daily battle
- Success means shepherding AI’s messy brilliance into maintainable, scalable code

A couple of weeks trapped in this loop taught me that AI coding can’t deliver alone; it needs humans in the loop, especially humans who understand context. So no, developers aren’t going anywhere. We’re evolving into the newest role in tech: AI Coding Context Keeper.
I have had a similar revelation to Tony’s below 👇: vibe coding is, in my opinion, only for low-impact weekend projects, not for serious production code on complex problems. Natural language is just way too ambiguous. When converting a prompt to code, AI will fill in the gaps - but with what? LLMs have been trained on both good code and large amounts of bad code, so they often fill the gaps with poor code or bad assumptions. Correcting these things, in my experience, often ends up taking longer than if you had written the code yourself.

Instead, I prefer working iteratively and saving good, clear descriptions of the features (a spec) for the AI to use. I normally avoid doing too much in a single prompt, so the codebase never gets too far off track. Spec-driven development is more suitable for real work: the prompts you give are actually you clarifying your intent, and that is an important asset that should be saved with the code; one lightweight way to do that is sketched below.

Vibe coding can still be fun; just be careful where you use it. It is most useful when your need for code quality is low and your vision of what you need is less precise.
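One lightweight way to keep that spec with the code (my illustration of the idea, not the author’s exact workflow) is a doctest: the clarified intent is versioned alongside the implementation and stays executable, so the AI, and you, can check the code against it at any time. The function below is invented for the example:

```python
def split_full_name(full_name: str) -> tuple[str, str]:
    """Split a full name into (first, last).

    The spec lives with the code and runs as a doctest:
    >>> split_full_name("Ada Lovelace")
    ('Ada', 'Lovelace')
    >>> split_full_name("  Grace   Hopper ")
    ('Grace', 'Hopper')
    """
    parts = full_name.split()
    return parts[0], parts[-1]

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # the saved spec doubles as the test
```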
The shine is starting to wear off vibe coding. I know, because I’ve been there. One moment I’m in Cursor, impressed by what a large language model can produce. The next, I’m staring at the clock, realising I’ve lost hours because it couldn’t navigate the complexity of the codebase I’m working on - and I find myself back in PyCharm, questioning the value of the detour.

One answer that’s emerging is "Spec-First Development". You don’t toss vague instructions at an AI and hope for magic. Instead, you craft the clearest, most rigorous specification you can - and let the agent build from that. Better still, treat specifications as code artefacts: versioned, under source control, and continually refined with compacted, curated context.

For me, this isn’t a radical shift. I’ve long worked with test-driven development, where the unit test is the specification. And LLMs thrive on this. Unit tests aren’t just notes in prose; they’re executable specifications - formal, computable, and unforgiving in all the right ways.

That’s the deeper lesson: when you hand a spec to an LLM, the more formalisation, the better. Natural language is fine, but structure wins. Code loves tests. Maths loves notation. Business, however, has no equivalent - no unit test for strategy documents, compliance rules, or process design.

Enter ontologies and knowledge graphs. They give us a way to formalise business semantics, capturing domains in rigorous, computable detail. Paired with an LLM, they don’t just guide the generation - they also validate it.

The future of agentic coding - and of LLMs in business more broadly - won’t be built on “vibes.” It will be driven by how well we can formalise intent: transforming ambiguity into something structured, testable, and executable.

⭕ Spec-kit: https://lnkd.in/dPZCgUeq
⭕ What is an Ontology: https://lnkd.in/ePS7ha8z
⭕ What is a Knowledge Graph: https://lnkd.in/e5ed_f8g
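Here is what “the unit test is the specification” can look like in practice, as a small pytest sketch. The pricing module and discount rules are invented for illustration; the point is that the tests state the contract formally enough that an LLM asked to implement apply_discount has little room to fill gaps with bad assumptions:

```python
import pytest

from pricing import apply_discount  # hypothetical module the LLM is asked to write

def test_no_discount_below_threshold():
    # Orders under 100 are charged in full.
    assert apply_discount(total=99.99) == pytest.approx(99.99)

def test_ten_percent_at_threshold():
    # Orders of 100 or more get 10% off.
    assert apply_discount(total=100.00) == pytest.approx(90.00)

def test_negative_total_is_rejected():
    # A negative total is a caller error, never a refund.
    with pytest.raises(ValueError):
        apply_discount(total=-5.00)
```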
Ontologies and Knowledge graphs are the secret to unlocking the true potential of AI when your client doesn't know their data.
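A tiny sketch of that idea in Python, using rdflib (the mini ontology and facts are invented): encode the business facts as triples, then validate an LLM-generated claim with a SPARQL ASK query instead of taking the generated prose at face value:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/")  # hypothetical business ontology

g = Graph()
g.add((EX.acme, RDF.type, EX.Customer))
g.add((EX.acme, EX.hasRegion, Literal("EU")))
g.add((EX.order42, EX.placedBy, EX.acme))

# Claim produced by an LLM: "order42 was placed by an EU customer".
# Check it against the graph rather than trusting the text.
ask = """
ASK {
    <http://example.org/order42> <http://example.org/placedBy> ?cust .
    ?cust <http://example.org/hasRegion> "EU" .
}
"""
print(bool(g.query(ask)))  # True: the claim is supported by the graph
```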
https://lnkd.in/eNKPjDah "Vibe coding is the next evolutionary step in how generative AI is impacting coding and the software development lifecycle. Vibe coding, or AI-assisted development, lets a developer or less technical builder develop full-stack applications using an iterative series of AI prompts to establish and then improve the application’s design." #VibeCoding #AI #ArtificialIntelligence #GenerativeAI #AGI
Vibe coding is rewriting what it means to be a developer, whether you like it or not.

I didn’t believe it at first. Then I tried vibe coding: giving prompts, letting AI generate modules, and barely touching the code myself. It was fast. Almost magical. Until one day, production broke. And I had zero idea why.

That’s when I realized: vibe coding is powerful and dangerous, because it hides your assumptions, magnifies errors, and can make your code feel like magic even when it’s fragile. Recent research on vibe coding backs this up: developers describe it as a flow state where AI handles the logic, but the unpredictability and trust overhead create new kinds of technical debt.

Here’s what I learned the hard way:

1. You must verify everything AI gives you. A generated function might look correct, but small edge-case assumptions can wreck your whole system.
2. Don’t hide from your architecture. AI can’t know your entire system; you must frame prompts in context and verify integration.
3. Treat AI code like legacy code, because you’ll come back later trying to understand it. Give it comments, tests, and safety nets (see the sketch below).

Vibe coding isn’t here to replace developers. It’s here to reveal who understands their code and who’s just pasting magic. We’re entering a new era: those who can steer AI’s power, not just ride it, will lead.

Have you tried vibe coding yet? What surprised you the most?
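For point 3, one concrete habit is the classic legacy-code move: a characterization test that pins down what the AI-generated code currently does before you refactor it. A sketch, with invented names:

```python
from ai_generated import normalize_phone  # hypothetical module from an AI session

def test_pins_current_behaviour():
    # Expectations captured by running the function as-is, not from a spec.
    # If a refactor changes any of them, the suite tells us immediately.
    assert normalize_phone("(555) 123-4567") == "+15551234567"
    assert normalize_phone("555.123.4567") == "+15551234567"

def test_documents_surprising_edge_case():
    # Found while probing: empty input returns an empty string, not an error.
    # Keep this pinned until we decide whether that behaviour is intended.
    assert normalize_phone("") == ""
```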
Coding agents can help you do an hour’s work in 5 minutes… and 5 minutes’ work in an hour.

We ran Gemini 2.5 Pro, Claude Sonnet 4, and GPT-5 across SWE-bench and had professional engineers dissect every failure. One pattern kept emerging: models don’t just fail, they fail confidently. They lie to themselves. They don’t know how to backtrack.

In one case, a model hallucinated an entire class. That fiction spiraled into invented methods and outputs. After 39 turns and 693 lines of altered code, it gave up, still convinced it was right. Another model guessed wrong too — but when runtime errors pushed back, it re-investigated and eventually landed on the correct fix. A third took a different path: instead of guessing, it re-checked context, and solved the bug in one clean shot.

The difference wasn’t raw intelligence. It was epistemic humility:
> Do you notice what you don’t know?
> Do you verify guesses?
> Do you backtrack when assumptions break?

Leaderboards won’t show you this. Trajectories will. Full breakdown here 👉 https://lnkd.in/eEdD8sHW
Anthropic just declared war on every coding assistant. 🥊 Let that sink in. Their new Claude Sonnet 4.5 model is a monster, and they’re calling it the world’s best. Here’s the proof:

✦ Benchmark domination: the new model scored 61.4% on the OSWorld benchmark, a massive jump from 42.2% just four months ago.
✦ Insane autonomy: it can now code independently on complex projects for over 30 hours straight, a dramatic improvement on the seven hours possible with its predecessor. Not 7 hours. Thirty.
✦ Vibe coding is real: this is the future Andrej Karpathy talked about, developers describing what they want in natural language while AI handles the technical implementation.

But this isn’t just about better syntax. This is a fundamental shift. Some see this as the ultimate tool to create 10x engineers. Others see it as the beginning of the end for junior developers. This isn’t just a new model. It’s a new era of building.

I’m genuinely curious: will AI coding assistants create a new class of elite engineers, or will they replace the majority?
Vibe coding — the AI-assisted development wave that lets you prototype full stack apps with natural language prompts — might just reshape how software is made. But control, security, and governance must keep pace. Read more: https://bit.ly/4mxv7QI #VibeCoding #AIinDev #FutureOfSoftware