AI won’t solve your chaos. Your chaos is structural.

Every tech wave comes with the same promise: “This one will finally make us 10X more productive.” Today it’s AI coding assistants. Before that, it was low-code, no-code, microservices… Different tool, same fantasy.

Here’s the truth: most teams don’t have a productivity problem. They have a thinking problem. AI can write code 10X faster, but that’s not where the bottleneck is. The real work in software happens before the typing: aligning priorities, clarifying trade-offs, and deciding what actually matters.

When you skip that part, AI doesn’t solve your problem. It just accelerates confusion. It turns unclear thinking into working code. Fast.

That’s what I call The Kumbaya Paradox: everyone sings together about productivity and collaboration, but nobody does the painful alignment work that makes it real. So instead of harmony, you get noise. Beautifully automated noise.

AI doesn’t remove complexity. It amplifies it if your organization can’t handle clarity. You don’t need faster code generation. You need fewer illusions about where the real problems are.

And yes, speed works when you’re chasing your first 100M ARR, when chaos is a feature, not a bug. But for a real business, one that needs to scale, serve, comply, and sustain, chaos stops being creative. It becomes expensive. The playbook that fuels growth at 100M ARR kills companies at 1B. AI won’t change that. It will just make the gap appear sooner.

The next 12 months will see a record wave of CTO turnover. Not because CTOs failed, but because no one can bridge fantasy-level productivity expectations with real-world engineering constraints.

Let’s stop pretending. What do you think?
Julien Mangeard’s Post
More Relevant Posts
AI isn’t killing software. It’s killing the stuff nobody wanted to do anyway. The chores. The busywork. The 17 steps that should’ve been three.

Here’s what people get wrong: they think AI threatens software companies because it can replicate features. But software was never really about the tech. It was about workflow knowledge, customer relationships, and distribution that took years to build. AI can’t copy those overnight.

The real opportunity? Teams that use AI to eliminate the 95% of work users actively despise. That’s where the value is.

But there’s a bigger lesson here about how we work, and it comes down to the difference between being efficient and being effective.

We’ve spent decades optimizing for efficiency: doing things faster, cheaper, better. We celebrate being “busy.” We wear our packed calendars like badges of honor.

Effectiveness is different. It’s about doing the right things, not just doing things right. You can be incredibly efficient at tasks that don’t matter. You can perfectly optimize your way to nowhere. Being productive isn’t about clearing your inbox faster. It’s about knowing which emails to ignore in the first place.

AI gives us a chance to step back and ask: what actually moves the needle? What creates real value? What should I stop doing entirely?

AI-first organizations won’t be the teams who use AI to do more stuff. They’ll be the ones who use it to do the right stuff, and ditch the rest.
We recently started giving names to the AI agents on our team, just like we do with human teammates. Why? Their numbers are growing fast, and it’s the only way to keep track of who’s doing what, both for humans and for the machines talking to each other.

This “problem” comes from our work on automating engineering with AI. The result: a platform where you can design processes that involve dozens of agents collaborating like a real team.

Some highlights of how it works:

✨ 𝗡𝗼𝗻-𝘁𝗿𝗮𝗻𝘀𝗮𝗰𝘁𝗶𝗼𝗻𝗮𝗹 𝗯𝗲𝗵𝗮𝘃𝗶𝗼𝗿 – not just message → response. Agents listen for events, react when needed, or stay silent. One thread can span days and involve multiple humans and agents.

✨ 𝗙𝗹𝗲𝘅𝗶𝗯𝗹𝗲 𝗺𝗲𝗺𝗼𝗿𝘆 – different memory types with different access rules. Agents can share memory or keep their own.

✨ 𝗥𝗲𝘀𝗲𝗮𝗿𝗰𝗵 𝗮𝘁 𝘀𝗰𝗮𝗹𝗲 – one agent can coordinate dozens of others.

✨ 𝗜𝘀𝗼𝗹𝗮𝘁𝗲𝗱 𝘄𝗼𝗿𝗸𝘀𝗽𝗮𝗰𝗲𝘀 – agents can safely use shells, install packages, and run tasks.

✨ 𝗦𝗶𝗺𝗽𝗹𝗶𝗳𝗶𝗲𝗱 𝘁𝗼𝗼𝗹 𝗺𝗮𝗻𝗮𝗴𝗲𝗺𝗲𝗻𝘁 – a prebuilt library of everything engineering teams need, with support for custom images/packages.

What this gives us: workflows fully run by AI “employees” that complete tasks end-to-end. It finally feels like managing, not babysitting.

We’re going open source soon 🚀 If you want early access, drop a “𝘁𝗿𝘆” in the comments and I’ll share it with you.
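The “non-transactional” idea above — agents subscribing to events and choosing whether to act or stay silent — can be sketched in a few lines. This is a minimal illustration, not the platform’s actual API; all class and method names here are hypothetical.

```python
# Minimal sketch of event-driven, non-transactional agent behavior:
# agents subscribe to an event bus and decide per event whether to
# respond or stay silent. Names are illustrative, not a real API.

class Agent:
    def __init__(self, name, topics):
        self.name = name
        self.topics = set(topics)

    def on_event(self, event):
        # React only to topics this agent cares about; otherwise stay
        # silent. Returning None is a valid (non-)response.
        if event["topic"] not in self.topics:
            return None
        return f"{self.name}: handling {event['topic']}"


class EventBus:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, agent):
        self.subscribers.append(agent)

    def publish(self, event):
        # Collect reactions from agents that chose to respond.
        reactions = []
        for agent in self.subscribers:
            reply = agent.on_event(event)
            if reply is not None:
                reactions.append(reply)
        return reactions


bus = EventBus()
bus.subscribe(Agent("builder", ["build_failed"]))
bus.subscribe(Agent("reviewer", ["pr_opened"]))
```

The key difference from a chat loop: `publish` may return zero, one, or many reactions, and a long-running thread is just a sequence of such events over time.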
The problem with AI disruption? It's not entirely clear what it's disrupting.

Clayton Christensen's disruption theory tells us to:
- Identify safe vs. vulnerable parts of your business
- Defend or retreat accordingly
- Move upmarket as disruptors approach

But with software development, it's not clear which parts of which job sit where on the continuum of automation risk. Most major engineering roles are complex, with many kinds of creative and technical responsibilities marbled together. You can't simply execute a predetermined strategy when you don't know which capabilities will be disrupted, when, or how completely.

So at Lineate, we're taking a different approach: we've challenged AI to beat us. Not just testing AI capabilities, but actively trying to replace ourselves across coding, testing, documentation, planning, and client communication.

Where AI wins? We integrate it and evolve our role. Where humans still win? We understand why, and prepare for when that changes.

This systematic approach to discovering what can be automated is already reshaping how we think about the future of software consulting.

Full analysis in part 3 of our series: https://lnkd.in/es_CAPxc

What's your company's approach to AI-driven disruption? #DisruptionTheory #AI #BusinessStrategy #Innovation
I’ve been deep in the trenches building my first real AI system for small businesses, and a few lessons have become impossible to ignore.

Speed > Perfection: You don’t need a flawless product; you need one that solves the problem. Waiting to launch until everything’s polished is a trap. Businesses care about one thing: does this fix my problem? If it helps them stop losing leads today, they’ll listen. Ship it, get feedback, improve it.

Simplicity Wins: AI and automation sound intimidating to most business owners. The job isn’t to impress them with tech; it’s to make their life easier. Most small business owners just want something that works. They don’t need more data; they need clarity. My job is to handle the complexity behind the curtain so they can keep running their business without interruptions.

Fix the Leaks First: Businesses say, “I need more leads.” But most are bleeding the leads they already have: missed calls, slow follow-ups, manual chaos. That’s the real problem, and it’s where automation actually shines.

I’m not trying to build perfect tech. I’m building useful tech and refining it with real feedback from the people I want to help.

If you run a small business, what’s one process you wish you could automate without adding more tech headaches?
AI in Code: Hype vs. Reality

In conversations around AI and software engineering, we often hear bold claims like “AI now writes 30% of production code.” But when we look closely, the reality is more nuanced.

AI is a huge productivity gain; what once took a week can now take a fraction of that time. AI accelerates zero-to-one work, helping teams bootstrap new applications faster than ever. But when it comes to production-grade code at scale, the real figure is closer to 5–7%, not 30%.

And that’s the real story: AI doesn’t replace human talent; it amplifies it.

Coming from a background in accounting & finance, this resonates with me. In finance, precision and oversight are everything, and the same applies here. AI may handle repeatable tasks and accelerate outputs, but human judgment and expertise remain central to creating lasting business value.

As someone working at the intersection of finance-driven strategy and AI-first consulting at CodeNinja, I see this blend every day: AI delivers measurable gains, but it’s the human-in-the-loop that ensures quality, compliance, and scalability.

What’s your take? Are we overestimating AI’s role in production coding, or underestimating the way it’s reshaping productivity?
VentureBeat's recent article on AI code risks should be a wake-up call for every tech leader. With experts predicting AI will write a significant amount of code within 6 months, we're facing a fundamental shift in how software gets built.

But speed without oversight is dangerous. At one financial company, it was causing weekly outages.

Here's what 20+ years of custom development have taught us about managing AI-generated code:

🔍 Discovery matters more than ever: AI can't understand your business context, legacy dependencies, or complex architectures the way human developers can through proper discovery.

⚙️ Architecture trumps automation: AI excels at simple scripts but struggles with enterprise complexity, exactly where our Program Office methodology becomes critical.

🤝 Human judgment remains irreplaceable: Our collaborative approach ensures every line of code serves your business goals, whether it's AI-generated or hand-crafted.

The bottom line: AI can accelerate development, but it can't replace strategic thinking, business understanding, or the kind of partnership that prevents costly mistakes.

Read the full VentureBeat analysis: https://lnkd.in/eQKCXCFF

#AI #TechSolutions #CustomSoftware
"Yes", with AI, one engineer can now do the work of two (or more). "No", that doesn’t mean we’ll need fewer engineers or that demand for engineers will drop.

People often connect these two ideas as if they’ve uncovered some “productivity hack.” Throughout history, whenever production capacity increased, businesses made more money, and poured that money back into the business to… yes, make more money.

Now with AI, why do we assume that demand for engineers will drop if their production capacity has increased? Isn't that counterintuitive?

If the same number of engineers can produce better and bigger software systems, faster, that'd mean businesses:
- won't have to wait years to get their systems live
- can run various proofs of concept in parallel, choose the ideas that worked, and start investing in them right away
- can go live sooner, start seeing positive impact right away, and invest more to scale these systems globally across their business

Does any of these outcomes sound like something that'd cause businesses to hire fewer engineers? Not to me.

But yes, there are some cases where demand will drop:
- when companies lack a future vision for their product and don't know what to do after the initial idea comes to life
- when they fail to operationalise the systems engineers built, lose money on the investment, and shut down operations
- when they fail at marketing (the cause of most startup failures), realize they can't run their products, and shut down

What's the common pattern in these cases? They all lacked proper vision and thought software could solve their problems. When in fact, it was never the software. They needed to fix their operations and marketing first.

So do I think demand for software engineers is at risk anytime soon? I don't really think so. Though I wouldn't be surprised if things change in the next few months or years.
Building agents today means confronting a problem most teams still struggle with: agents that degrade over multiple turns. Here's a framework that helps:

🔍 PHASE 1: Recognizing the Real Problem
As teams scale from chatbots to complex agents, a pattern emerges: prompt engineering alone can't handle agents that loop through multiple inference turns, each generating more data. The breakthrough? Understanding that you need to manage an entire information ecosystem, not just write better prompts.

⚖️ PHASE 2: Identifying the Core Constraint
Context windows are finite. No matter how perfect your prompt, agents drown in extra tool outputs, old message history, and irrelevant external data. Context becomes your most critical resource to optimize.

🔄 PHASE 3: Building Cyclical Refinement
Unlike static prompts you write once, the system curates information at every inference cycle: What's essential now? What can be summarized? What can be dropped?

Key takeaway: The best AI teams aren't just writing better prompts anymore. They're building context management systems.

What challenges are you facing with agent reliability? I'd love to hear what strategies are working in your stack.
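The per-cycle curation step described in Phase 3 can be sketched concretely. This is a minimal illustration under stated assumptions, not any specific framework's implementation: the message format, the word-count token estimate, and the function names are all hypothetical.

```python
# Hypothetical sketch of per-turn context curation: before each inference
# cycle, trim the running message history to a token budget by dropping
# stale tool outputs first, then compacting the oldest surviving turns
# into a one-line summary stub.

def estimate_tokens(text):
    # Crude proxy: ~1 token per word. A real system would use a tokenizer.
    return len(text.split())

def curate_context(messages, budget):
    """Return a trimmed copy of `messages` that aims to fit `budget` tokens.

    Priority order (newest content survives longest):
    1. Always keep system messages.
    2. Drop old tool outputs before dropping conversation turns.
    3. Collapse the oldest remaining turns into a summary stub.
    """
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]

    def total(msgs):
        return sum(estimate_tokens(m["content"]) for m in msgs)

    # Pass 1: drop tool outputs, oldest first, until we fit.
    kept = list(rest)
    for m in rest:
        if total(system + kept) <= budget:
            break
        if m["role"] == "tool":
            kept.remove(m)

    # Pass 2: replace the oldest surviving turns with a summary stub.
    dropped = []
    while kept and total(system + kept) > budget:
        dropped.append(kept.pop(0))
    if dropped:
        stub = {"role": "system",
                "content": f"[summary of {len(dropped)} earlier messages]"}
        kept.insert(0, stub)

    return system + kept
```

In a real agent loop you would call something like this before every model invocation, with actual summarization in place of the stub.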
Most people want shortcuts. I’m building systems.

It’s not about working harder; it’s about building smarter. Over the past few months, I’ve learned that the real edge in AI, marketing, and career growth doesn’t come from finding the next hack (believe me, I’ve looked!). It comes from building repeatable frameworks that keep working long after the trend fades.

Shortcuts can get you quick wins. Systems will get you sustainable ones.

That’s what I’ve been doing: connecting tools like Make.com, Airtable, Supabase, and Loveable into something cohesive. Every connection compounds. Every automation teaches me something new.

AI moves fast. But if you have a system, you move faster and further. Because systems don’t just scale your work; they scale you.

Do you have a system you’ve built that keeps working even when you’re not?
We made an unconventional choice: we're not a software company. We're an AI implementation team. Here's why that matters.

The problem is distance. Traditional software companies build, ship, and move on. Long feedback loops. Surface-level integration. When things break, you're on your own. We kept seeing powerful AI tools collecting dust because nobody bridged the gap between capability and actual use.

So we borrowed Palantir's playbook. We deploy engineers directly with clients: on-site, in meetings, handling requests in real time. They embed with your team, learn your workflows, and adapt as needs evolve.

Because implementing AI isn't just technical. It's human. There are a thousand subtleties in how teams actually work. Being there closes the gap faster than any ticket system ever could.

Is it harder? Yes. Our engineers must be technical experts AND translators, bridging AI capabilities and business needs daily. But real value doesn't come from models alone. It comes from integration, feedback loops, and understanding what actually needs to happen.

We're not selling software. We're embedding to make AI work.

What's been your experience implementing AI? I'd love to hear what worked, and what didn't.
Principal Software Engineer
2w
That is so true. And even before writing the first line of code, the meta-problem is finding market fit for the product or feature being built. No-code or low-code tools like Lovable won't help with that: we will just produce more products, faster, that will never find an audience.