Your AI field manual 2 – Project management for the damned

You’re now in the trenches of AI PM

Let’s skip the motivational fluff. That was Part 1. I know you’re not here for vision decks or synergy workshops. You’re here because you’ve seen what happens when a PowerPoint-wielding exec says “Let’s do AI together”. And by “together” he meant “you”. And you’re no newbie, because you’ve smelled the burn of budgets on fire from the previous hype cycle, and you yourself have worked with project management models that work beautifully in books like the PMBOK but die in real life, rather like mayflies.

Because you know by now that an AI project is not a real project.

It’s a campaign.

And your usual project management playbook isn’t of much use here. The traditional PM world loves predictability and milestones, and it wants to draw color-coded Gantt charts. But I’m telling you, AI doesn’t care, because AI is a living organism with ADHD that evolves, forgets, hallucinates, and occasionally sets your project on fire just to see what happens.

So yes, it’s a bit like me.

This means managing it requires something between military discipline and controlled madness. You’ll need resilience, lots of coffee, and the ability to smile when upper management explains “agile” to you as if it’s a new discovery.

I haven’t written this field guide for you as a set of “best practices”, because that would take a whole book. No, this piece is about surviving long enough to deliver something that works, and ideally doesn’t end up as an internal postmortem or a McKinsey case study titled “Lessons learned from the damned and the unemployed”.

So sit back, but don’t relax, grab your data pipeline, and clutch a change management grenade.

Welcome to the trenches of managing an AI project.


More rants after the messages:


  1. Connect with me on LinkedIn 🙏
  2. Subscribe to TechTonic Shifts to get your daily dose of tech 📰
  3. Please comment, like or clap the article. Whatever you fancy.



Rules of engagement for the terminally hopeful

Before you start marching, tattoo these four AI project truths somewhere you can see them whenever your optimism resurfaces:

→ You gotta embrace uncertainty

An AI project is a research project with a deadline and an audience of executives who think “data” means Excel. You will not know if your data is usable. You will not know which model will work. Your “plan” is a hypothesis, and your mission isn’t to control uncertainty (if that’s what you were thinking) - it is to de-risk it before it eats your timeline.

→ Know that data = ammo

Code is king in regular software projects, but in AI, code is decoration. Data is the warhead. Clean, relevant, and unbiased data is the difference between a precision strike and friendly fire. If your data pipeline leaks, your model will hallucinate in your PowerPoints and gaslight you in sprint reviews.

→ The machine learning model is not the product

That beautiful .pkl file that you and your mates just trained. . . yeah well, safe to say it’s not the product, despite what every project manager I meet seems hell-bent on believing. No, it is a liability, because the real product is the pipeline. Yes - as we say in Dutch, “tie this into a knot in your ears” - it’s about the self-sustaining machine that retrains, validates, and deploys without drama. You are building a factory that makes models that survive, can adapt, and can prove they did so by design. That means versioning every dataset, every feature, every experiment, and every dependency. It means retraining triggered by drift detection, CI/CD pipelines that redeploy automatically, and monitoring systems that catch bias creep (well, at least before the regulators do).

The mature AI product is an operational ecosystem. In real-world MLOps terms, this includes model registries (think MLflow or the Vertex AI Model Registry), orchestration tools (Kubeflow, Airflow), and monitoring stacks (Arize, WhyLabs, Evidently).

A .pkl file is a snapshot in time. It is guaranteed to decay the moment your data distribution changes. A true AI system is alive. It logs decisions, tracks lineage, and retrains itself when reality shifts.
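
To make “version everything” concrete, here is a minimal sketch of what experiment lineage looks like with MLflow. Treat it as an illustration, not gospel: the dataset path, target column, and registered model name are placeholders I made up.

```python
# Minimal experiment-lineage sketch with MLflow (paths and names are
# hypothetical; adapt to your own stack).
import mlflow
import mlflow.sklearn
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("data/churn_v3.csv")  # a versioned dataset snapshot
X, y = df.drop(columns=["churned"]), df["churned"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    mlflow.log_param("dataset_version", "churn_v3")  # lineage: which data
    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    # Registering the model gives you a versioned, auditable artifact
    # instead of a loose .pkl on someone's laptop.
    mlflow.sklearn.log_model(model, "model", registered_model_name="churn-model")
```

The point is not the classifier. The point is that every run leaves a trail: which data, which parameters, which score, which artifact.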

→ Yes, us humans are still in the loop (for now)

“Fully autonomous AI” doesn’t exist. That’s a myth. Humans label, monitor, interpret, and clean up after the algorithm, and your job as PM is to choreograph that collaboration between human interpretation and the machine’s logical judgement without starting a war.

Keep these in mind. They are your safety rails, and they’re your last defense against the spreadsheet optimists upstairs.


Assembling the misfit platoon

No one wins an AI war on their own. You will need a diverse and, most likely, dysfunctional geek squad, occasionally drunk on having consumed too much data. Lookie lookie at what you’re gonna have to invest in . .

→ The project manager (that’s of course you)

You are the translator and the human firewall that prevents your team from burning up. You convert corporate wishlists into reality and reality back into slide decks. You remove blockers, absorb the blame, and remind leadership that AI is not a verb.

→ The data scientist (can be you, not required, but it would help manage expectations)

The brain of the operation: 80% janitor, 20% magician. They spend most of their time cleaning data (still true) and arguing with pandas (learn this word and try to understand their language; for some strange reason, pandas speak fluent Python), and they’re the ones pretending to enjoy meetings about “business alignment”. People say it’s the end of the data scientist, because gen AI can do it for you. But nah, as I said before - 80% janitor - AI can help with the 20%, but the rest is still a human task.

→ The ML engineer (definitely not you)

Your bridge between (Jupyter) notebooks and production. They can code, containerize (though I’ve heard them say that’s going out of fashion), deploy, of course, and curse Kubernetes in three languages (another dead end). Finding a good one is harder than finding an unbiased dataset. And luckily, the peeps who work in the trenches read my shit - dunno why - yet they do, so have a go at it and make them an offer they can’t refuse.

→ The data engineer (don’t even think about it)

This is the quiet logistics officer who builds your data pipelines. Without them, the entire system starves. If they quit, your project becomes a Netflix documentary. The good ones are hard to find, but I’m pretty sure they’re secretly reading this as well. So, if you’re a pipeline guru and you speak fluent SQL - props to you, my friend!

→ The domain expert (nah, cause tomorrow you’ll be working in another one)

Your local guide through the swamp of context. Usually part-time, always critical, and chronically underappreciated. If you’re reading this, do drop an applause in the comments for the ever-underrated element of your project!

And if you’re venturing into generative AI territory, you’ll need the new weirdos: (system-) prompt engineers, alignment specialists, and one AI ethicist who quotes Asimov unironically for some reason.

Get them early, arm them with laptops with NPUs and enough RAM to play Call of Duty while their model trains. Oh yeah, also give them clear roles.

And always keep in mind that without this crew, you are nothing but a PowerPoint.


The nine circles of AI hell

In the 14th century, Durante di Alighiero degli Alighieri (Dante) descended into Hell and mapped nine circles. Each of those circles was reserved for a particular flavor of human failure.

He named one Limbo, for the confused but well-intentioned. Another he baptized Lust, for those who chase desire without direction. My favorite, of course, is Gluttony, for the ones who can’t stop consuming. Then Greed for the hoarders, Wrath for the perpetually angry, Heresy for the deluded, Violence for the destructive, and Fraud for the clever liars. And for the worst kind he reserved Treachery, for those who betray what they once swore to protect.

And here’s the trick. Just swap sinners for stakeholders and the flames for Slack threads, and you have built yourself the average enterprise AI project.

And yes, each circle has its punishment - I think you saw that one coming, didn’t you? - a tailor-made torment for each member of your project, corporate edition. The optimism you had at the onset of your project gets charred in the slow-burning furnace called bureaucracy. Budgets freeze in procurement purgatory, waiting for signatures that never come. Deliverables are gnawed by the undead jaws of compliance: re-written, re-scoped, and ritualistically sacrificed at every steering committee.

And in this particular inferno, the damned have titles instead of sins.

Data scientists crawl across the desert of barren data lakes, begging for a clean and juicy dataset. Product owners are condemned to eternal “alignment meetings” that produce slides instead of progress. Project managers wander endless loops of governance checklists, cursed to update Jira tickets that no one reads. Developers push hotfixes that vanish, and their commits are found too light by the Temple of Anubis, also known as GitHub, where their code is judged by the automated gods of CI/CD, weighed against the feather of unit testing, and tossed into the abyss of failed builds. Security officers clutch their audit logs as if they were holy relics, truly convinced they can ward off the regulators with documentation alone. Legal teams chant GDPR clauses as incantations, summoning paperwork thicker than a bible.

End users wander the ninth circle, frozen in confusion, forced to click “Accept All Cookies” just to escape the eternal pop-ups. They curse the AI you built because it doesn’t “feel intuitive”, and then proudly revert to Excel to recreate the same shit, but slower and wronger (is that even a word?). . .

Stakeholders hover above them in eternal purgatory, forever requesting “one small change” that always ends up rewriting the entire data model. They feast on status updates and vanish the moment something actually ships.

And last, and certainly least, are the consultants, floating serenely in Limbo, found neither guilty nor innocent. They’re like neutrinos - they look like matter, but they don’t interact with anything, and therefore can never be caught doing anything wrong.

Descending through the nine realms

And so, your journey doesn’t end with the damned: the data scientists and their barren lakes, the PMs circling the governance drain, the developers pleading before the Temple of Anubis. Their punishments are only the beginning of something worse.

Beneath them lie the Nine Realms, the strata of the AI underworld. Here, the deliverables are the souls, Jira is scripture, and every milestone you celebrate actually marks another layer of descent.

Down here in Dante’s AI hell, each phase of your project is a realm unto itself: Limbo for the clueless scoping sessions, Lust for the endless chase for data, Greed for the metrics that never satisfy. Every realm tests a different vice - hubris, haste, denial, and of course the eternal belief that “this time, the model will work in production”.

But the thing is that you will pass through them all.

This is no simple project lifecycle.

This is the descent through the nine realms of AI hell where project teams go to suffer, and every success must first be purified in flames.

Every AI project has to move through these phases.

Here’s how it goes . . .

1 → Mission scoping (Limbo = The Delusion Stage)

Executives say “We are building AI to improve customer experience” and everyone nods like it means something, but in fact no one knows what they want. It starts with you trapped in Limbo: endless ideation, no clarity, just vaporware and lots of hype. My advice: define success in metrics, not metaphors. “Increase retention by 10%” beats “optimize engagement”. But few make it out alive.


2→ Data recon (Lust = The endless search for more)

Everyone wants their data to be morer (I’m liking it!), fresher, richer, and bigger. You’ll spend months chasing it through spreadsheets, APIs, and forgotten SharePoint graveyards. Half your datasets are duplicates, the other half corrupted. You call it “data collection”, but it’s really data lust - endless desire, zero satisfaction.


3→ Data cleaning (Gluttony = The feast of filth)

Here begins the pig trough of preprocessing. You gorge yourself on CSVs, impute missing values, and strip PII until your soul leaves your body. Every outlier you remove spawns two more, which you will have to exorcize. You will eat filth for weeks and call it “feature engineering”. No one thanks you. No one even knows you exist, because you sit very close to the database that rests in the fiery pits of the data center.
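
For the uninitiated, here is roughly what a day in this circle looks like, as a hedged pandas sketch. Every column name and path is invented for illustration:

```python
# A minimal data-cleaning sketch in pandas (all columns are hypothetical).
import pandas as pd

df = pd.read_csv("data/raw_customers.csv")

# Deduplicate first: every outlier spawns two more, but duplicates go quietly.
df = df.drop_duplicates(subset=["customer_id"])

# Impute missing values: median for numbers, a sentinel for categories.
df["age"] = df["age"].fillna(df["age"].median())
df["segment"] = df["segment"].fillna("unknown")

# Strip PII before it strips you (in a GDPR audit).
df = df.drop(columns=["email", "phone", "full_name"], errors="ignore")

# Clip the outliers you can defend; log the ones you can't.
df["monthly_spend"] = df["monthly_spend"].clip(
    lower=0, upper=df["monthly_spend"].quantile(0.99)
)

df.to_csv("data/clean_customers_v1.csv", index=False)
```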


4→ Model training (Greed = The worship of metrics)

Finally, some visible progress! Charts! Accuracy scores! Confusion matrices! Then - as always - human greed takes over. Every stakeholder wants “just a bit more accuracy”. You chase diminishing returns like a coke addict down to his last line of white. “Why is it only 93%?” they ask. Because physics, Karen.


5→ Alignment & fine-tuning (Wrath = The therapy phase)

This is where the model starts fighting back. Especially in Generative AI. It hallucinates, offends, refuses, argues. You fine-tune, prompt-tweak, and RLHF until your patience combusts. You become part scientist, part therapist, part exorcist. You yell at your laptop, and it yells back. Wrath is mutual.


6→ Deployment (Heresy = The great betrayal)

Everything worked in staging. Then you pushed to prod, and the system manifested the devil. Latency spikes, APIs choke, containers die, and of course marketing tweets “AI is live!” while you’re still SSH-ing into servers trying to stop the bleeding. You start to question your faith in agile. You heretic.


7→ Monitoring (Violence = Against time, sanity, and data)

You thought it was over? No. Now the model drifts. The data shifts. And the dashboards lie. You build more dashboards to check the first dashboards. You’re locked in a constant battle with entropy, and it’s winning.
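
If you want to fight entropy with something sturdier than another dashboard, the workhorse is a distribution comparison between your training data and live data. Here is a minimal sketch using a two-sample Kolmogorov-Smirnov test; the feature names, file paths, and threshold are assumptions to tune for your own data:

```python
# Minimal feature-drift check: compare live data to the training baseline
# with a two-sample Kolmogorov-Smirnov test (threshold is illustrative).
import pandas as pd
from scipy.stats import ks_2samp

baseline = pd.read_csv("data/train_baseline.csv")  # hypothetical snapshot
live = pd.read_csv("data/last_7_days.csv")         # hypothetical live slice

DRIFT_P_VALUE = 0.01  # below this, the distributions likely differ

drifted = []
for col in ["age", "monthly_spend", "sessions_per_week"]:
    stat, p_value = ks_2samp(baseline[col].dropna(), live[col].dropna())
    if p_value < DRIFT_P_VALUE:
        drifted.append((col, round(stat, 3)))

if drifted:
    # In a real pipeline this would page someone or trigger retraining,
    # not just print.
    print(f"Drift detected in {drifted} - consider retraining.")
```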


8→ Continuous retraining (Fraud = The Sisyphean loop)

You automate retraining “to stay current”. Well, theoretically. In reality, you keep reusing half-broken scripts because you don’t have time to fix them. You pretend your MLOps pipeline is “self-healing”, but it’s mostly duct-taped together. Fraud isn’t always intentional. Sometimes it’s just policy.
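
The honest version of this loop is a champion/challenger check: retrain on fresh data, but only promote the new model if it actually beats the incumbent on the same held-out set. A sketch under made-up paths and a made-up metric choice:

```python
# Champion/challenger retraining sketch: promote the new model only if it
# beats the incumbent on a held-out set (paths and names are hypothetical).
import joblib
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

holdout = pd.read_csv("data/holdout.csv")
X_hold, y_hold = holdout.drop(columns=["churned"]), holdout["churned"]

champion = joblib.load("models/champion.pkl")
champ_auc = roc_auc_score(y_hold, champion.predict_proba(X_hold)[:, 1])

fresh = pd.read_csv("data/last_quarter.csv")
X_new, y_new = fresh.drop(columns=["churned"]), fresh["churned"]
challenger = RandomForestClassifier(n_estimators=200, random_state=42)
challenger.fit(X_new, y_new)
chall_auc = roc_auc_score(y_hold, challenger.predict_proba(X_hold)[:, 1])

# No silent regressions: the duct tape stays visible in the logs.
if chall_auc > champ_auc:
    joblib.dump(challenger, "models/champion.pkl")
    print(f"Promoted challenger: AUC {chall_auc:.3f} > {champ_auc:.3f}")
else:
    print(f"Kept champion: AUC {champ_auc:.3f} >= {chall_auc:.3f}")
```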


9→ Change management (Treachery = The human rebellion)

You’ve made it this far. You ship. You celebrate. Then users revolt. They don’t trust it. They don’t like it. Someone says the AI “feels condescending”, and others attribute negative outcomes (for them) to “hallucination”. Adoption flatlines. The same executives who blessed you now turn on you in all-hands meetings. You are Judas in your own story, betrayed by the very people you tried to save from their spreadsheets.


And that’s the full descent into the nine circles of AI hell.


The art of not dying early

Yes, AI project failure can be prevented. The patterns are so predictable they deserve their own Darwin Awards.

You can see them forming from miles away: the same mistakes and the same optimism sold in PowerPoints. And it is your task as the AI project leader to learn these, tattoo them on your heart, and perhaps you will survive long enough to deploy something that isn’t immediately embarrassing upon its release.

Here are seven patterns I collected:

  • The science project (a.k.a. model pilot) → Technically brilliant, but totally business-useless. The classic rookie sin. The model performs beautifully in Jupyter, with 99% accuracy and gorgeous ROC curves, but it solves a problem that nobody actually cares about. The team proudly presents it at a conference, where geeks applaud, but the business quietly cuts the funding. How to fix: Define ROI before code. Tie every experiment to a measurable outcome (revenue increase, retention, fraud detection).


  • The data swamp → You drowned in raw data, and it never saw daylight. You thought you had a “data lake”. Turns out it’s a swamp. Man-o-man, it is one unstructured mess: unclean and haunted by duplicates. Multiple data ingestion pipelines collapse under their own complexity, and every new source promises more clarity yet delivers more chaos. How to fix: Start with a small, clean slice. Build a single trustworthy dataset, prove value, and scale slowly. Data governance is life support, but it only yields something tangible after one to two years.


  • The throw-it-over-the-wall → Scientists build, engineers rewrite, and time flies. Data scientists prototype in notebooks. Engineers rebuild in production. They argue about dependencies, containers, and “why the model won’t serialize”. Weeks become months, and the model decays before it ever sees daylight. How to fix: Embed MLOps and DevOps from day one. Create joint ownership and put scientists and engineers in the same pipeline, using the same tools. If your teams aren’t cohabiting, they are in fact . . . competing.


  • The zombie model → Deployed once, never retrained, now it wanders dead in production. The dashboard looks fine until it doesn’t. The model’s logic is outdated, its assumptions fossilized. It keeps making predictions (all wrong) and nobody notices until the quarterly review. How to fix: Monitor, retrain, repeat. Automate drift detection. Track performance decay like you’d track revenue loss. A model that doesn’t evolve is already dead. Stop paying for the electricity to keep it twitching like a fresh corpse.


  • The Frankenstein stack → Every new problem spawns a new tool. Soon you have a graveyard of platforms: TensorFlow for one model, PyTorch for another, five tracking tools, and a pipeline stitched together by Slack threads and Excel attachments in email. When something breaks, nobody knows which monster part to fix. How to fix: Standardize early. Pick one stack, document everything, and resist the urge to chase new frameworks like a magpie in a shiny GPU store.


  • The pilot that never grew up → Perpetual proof-of-concept purgatory. The team delivers a successful pilot, but no one funds the scale-up. Why is that, I hear you say? Because no one planned for operationalization, governance, or maintenance. Your AI project becomes a case study in “potential unrealized”. How to fix: Plan the rollout from the start. Secure post-pilot funding before you write the first line of code. If it can’t scale, it’s not innovation (it’s a hobby).


  • The compliance quicksand → Auditors arrive, and you realize you have no lineage, no logs, and no idea who trained what. Suddenly, you’re rewriting history under subpoena-level pressure. How to fix: Build compliance into the workflow, not as an afterthought. Version everything (datasets, models, parameters, etc). If you can’t reproduce your own results, neither can the regulator, and that’s not a gamble you will win.

Most failures in AI projects are managerial self-owns, created by bad scoping, zero-to-worse communication, and no change management whatsoever.

Real success in AI projects comes from brutally clear problem definition, cross-functional discipline, continuous retraining, and documentation like your future job depends on it (it does).


Change management for the shell-shocked

Let’s talk about the real enemy of your AI project → yes, humans.

AI doesn’t scare me.

People do.

Especially middle managers. They have survived every wave of automation since the fax machine and have developed evolutionary resistance to change. They will attend your AI literacy training, nod, then quietly tell their teams to “wait this one out”.

I have written about them in yesterday’s field manual.

You have to understand their fear is rational. Studies show 64% of employees worry AI will make them irrelevant. 44% of managers expect pay cuts, 41% expect layoffs. And they are right because AI is eating every boring ritual that made them feel important.

And your job is to convert fear into participation. Make them part of the design, not the casualties.

Buy them lunch. Listen. Build reverse mentoring pairs: young AI-savvy staff teach older managers the tech, and older managers teach them how to survive office politics.

Everyone wins. Especially you, my friend.

Never sell AI as “cost reduction”. Nope-sure-ee! Because that’s office code for “panic”. Sell it as “automating boring and repetitive tasks”, or some other flavor of tedium. Always make it personal. “AI will help you skip the parts of your job you already hate” is the only sales pitch that lands.

Because if you don’t manage their fear, they will manage you.


The boring salvation of governance

Now, the part nobody wants to hear: compliance.

You can have the sexiest model alive, but without governance, you’re one headline away from a regulatory mugshot. The EU AI Act, the FTC, ISO 42001 — they all want receipts. Auditability. Explainability. Version control so precise it makes your data scientist weep.

Here’s your triage kit:

  • AI governance committee that governs. Cross-functional, not decorative. Legal, IT, data, HR, security, and the business owner meet monthly to approve models, assess risk, and stop pet projects before they go feral.
  • Responsible AI team (your internal conscience). Tests for bias, drift, fairness, and compliance. Runs impact assessments, model cards, and documentation checks before launch. Kills your bad ideas early.
  • Model risk manager (MRM). Tracks the model inventory, runs validation cycles, and keeps version history. This person will demand independent review and reproducibility. Finance already mandates it, and others will follow.
  • Continuous monitoring infrastructure. Tools like Arize, Truera, or WhyLabs (whatever you fancy), or built-in cloud versions like Azure ML Monitor, SageMaker Clarify, and Vertex AI Monitoring, track drift, bias, and performance in real time. Trigger retraining automatically when thresholds go kaputt.
  • Version and document everything. Datasets, features, hyperparameters, environments. Use MLflow, DVC, or Weights & Biases for lineage tracking. Automate documentation so you can reproduce yesterday’s decision on demand (a minimal sketch follows below).
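
And if the heavyweight tooling isn’t in place yet, even a hand-rolled manifest that fingerprints the exact dataset and parameters behind each run buys you reproducibility. A stopgap sketch, with invented paths (MLflow or DVC do this properly):

```python
# Hand-rolled lineage manifest: fingerprint the exact data and parameters
# behind a training run (a stopgap sketch; all paths are hypothetical).
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

manifest = {
    "run_at": datetime.now(timezone.utc).isoformat(),
    "dataset": "data/clean_customers_v1.csv",
    "dataset_sha256": sha256_of("data/clean_customers_v1.csv"),
    "hyperparameters": {"n_estimators": 200, "random_state": 42},
    "code_version": "fill with `git rev-parse HEAD` from CI",
}

Path("runs").mkdir(exist_ok=True)
Path("runs/manifest_latest.json").write_text(json.dumps(manifest, indent=2))
```

When the auditor asks “who trained what, on which data?”, this file is the difference between an answer and a subpoena.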

Yes, my friend, I know. It is tedious. Yes, it’s expensive. But so is your legal defense. A full AI compliance audit can cost more than the system itself, and - you gotta love this one - the fines can reach 7% of global turnover. Good governance averages 20 to 30% of project cost, but in the end it’s still cheaper than a reputational firing squad.


Tip: Run a pre-mortem

Before you start, run a pre-mortem. Imagine it’s a year later and your project has failed spectacularly. Document what killed it like you’re Hercule Poirot. Was it missing data, overambitious goals, political sabotage, or simply the butler?

Write it all down.

Then build your plan to avoid each trap.

This single exercise is worth more than a thousand dashboards because it gives you awareness.

I do these exercises all the time. They give me insight into potential risks. But don’t put your death scenario into a PowerPoint, or it will take on a life of its own and people will start to believe you’re a doom-thinker (which you are).


Final marching orders for my comrades in AIrms

By now, you understand that AI project management is not about control, but containment. It is crisis management, just presented with prettier charts.

You will be fighting in the trenches against bureaucracy, fear, and entropy, and your only weapon is PowerPoint.

Deliver something small, reproducible, and defensible. Then log every win, every loss, and every tweak, until your audit trail reads like a survival diary, because when the regulators show up, or - heaven forbid - the CFO, or even the press come a-knockin’ at your door, you will have evidence to show them.

That’s how you win in this war.

Now get back to the trenches, you grunt!

Te saluto. Vale,

Marco


I build AI by day and warn about it by night. I call it job security. Big Tech keeps inflating its promises, and I just bring the pins and clean up the mess.

👉 Think a friend would enjoy this too? Share the newsletter and let them join the conversation. LinkedIn, Google, and the AI engines reward your likes by showing my articles to more readers.

To keep you doomscrolling 👇



  1. I may have found a solution to Vibe Coding's technical debt problem | LinkedIn
  2. Shadow AI isn’t rebellion it’s office survival | LinkedIn
  3. Macrohard is Musk’s middle finger to Microsoft | LinkedIn
  4. We are in the midst of an incremental apocalypse and only the 1% are prepared | LinkedIn
  5. Did ChatGPT actually steal your job? (Including job risk-assessment tool) | LinkedIn
  6. Living in the post-human economy | LinkedIn
  7. Vibe Coding is gonna spawn the most braindead software generation ever | LinkedIn
  8. Workslop is the new office plague | LinkedIn
  9. The funniest comments ever left in source code | LinkedIn
  10. The Sloppiverse is here, and what are the consequences for writing and speaking? | LinkedIn
  11. OpenAI finally confesses their bots are chronic liars | LinkedIn
  12. Money, the final frontier. . . | LinkedIn
  13. Kickstarter exposed. The ultimate honeytrap for investors | LinkedIn
  14. China’s AI+ plan and the Manus middle finger | LinkedIn
  15. Autopsy of an algorithm - Is building an audience still worth it these days? | LinkedIn
  16. AI is screwing with your résumé and you’re letting it happen | LinkedIn
  17. Oops! I did it again. . . | LinkedIn
  18. Palantir turns your life into a spreadsheet | LinkedIn
  19. Another nail in the coffin - AI’s not ‘reasoning’ at all | LinkedIn
  20. How AI went from miracle to bubble. An interactive timeline | LinkedIn
  21. The day vibe coding jobs got real and half the dev world cried into their keyboards | LinkedIn
  22. The Buy Now - Cry Later company learns about karma | LinkedIn
