Your AI field manual 2 – Project management for the damned
You’re now in the trenches of AI PM
Let’s skip the motivational fluff. That was Part 1. I know you’re not here for vision decks or synergy workshops. You’re here because you’ve seen what happens when a PowerPoint-wielding exec says “Let’s do AI together”. And by “together” he meant “you”. And you’re no newbie, because you’ve smelled the burn of budgets on fire from the previous hype, and you yourself have worked with project management models that work beautifully in books like the PMBOK, but die in real life, rather like mayflies.
Because you know by now that an AI project is not a real project.
It’s a campaign.
And your usual project management playbook isn’t of much use here. The traditional PM world loves predictability and milestones, and wants to draw color-coded Gantt charts, but I’m telling you, AI doesn’t care, because AI is a living organism with ADHD that evolves, forgets, hallucinates, and occasionally sets your project on fire just to see what happens.
So yes, it’s a bit like me.
This means managing it requires something between military discipline and controlled madness. You’ll need resilience, lots of coffee, and the ability to smile when upper management explains “agile” to you as if it’s a new discovery.
I haven’t written this field guide for you as a set of “best practices”, because that would take a whole book. No, this piece is about surviving long enough to deliver something that works, and ideally doesn’t end up as an internal postmortem or a McKinsey case study titled “Lessons learned from the damned and the unemployed”.
So sit back, but don’t relax, grab your data pipeline, and clutch a change management grenade.
Welcome to the trenches of managing an AI project.
More rants after the messages:
Rules of engagement for the terminally hopeful
Before you start marching, tattoo these four AI project truths somewhere you can see them whenever your optimism resurfaces,
→ You gotta embrace uncertainty
An AI project is a research project with a deadline and an audience of executives who think “data” means Excel. You will not know if your data is usable. You will not know which model will work. Your “plan” is a hypothesis, and your mission isn’t about controlling uncertainty (if that was what you were thinking) - it is to de-risk it before it eats your timeline.
→ Know that data = ammo
Code is king in regular software projects, but in AI, code is decoration. Data is the warhead. Clean, relevant, and unbiased data is the difference between a precision strike and friendly fire. If your data pipeline leaks, your model will hallucinate in your PowerPoints, and it will gaslight you in the sprint reviews.
→ The machine learning model is not the product
That beautiful .pkl file that you and your mates just trained . . . yeah well, safe to say it’s not the product, despite what every project manager I meet seems to be hell-bent on believing. No, it is a liability, because the real product is the pipeline. Yes, because - as we say in Dutch, “tie this into a knot in your ears” - it’s about the self-sustaining machine that retrains, validates, and deploys without drama. You are building a factory that makes models that survive, can adapt, and can prove they did so by design. That means versioning every dataset, every feature, every experiment, and every dependency. It means retraining that fires automatically on drift detection, CI/CD pipelines that redeploy, and monitoring systems that catch bias creep (well, at least before the regulators do).
The mature AI product is an operational ecosystem. In real-world MLOps terms, this includes things like model registries (think MLflow or Vertex Model Registry), orchestration tools (Kubeflow, Airflow), and monitoring stacks (Arize, WhyLabs, Evidently).
A .pkl file is a snapshot in time. It is guaranteed to decay at the moment your data distribution changes. A true AI system is alive. It logs decisions, tracks lineage, and it retrains itself when reality shifts.
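To make “retrains itself when reality shifts” concrete, here is a minimal sketch of the kind of drift check that tools like Evidently or Arize productize. The Population Stability Index is a standard drift metric; the 0.2 threshold, the feature names, and the toy data are my assumptions for illustration, not gospel.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between training data and live data.
    Rule of thumb (an assumption, calibrate per use case):
    < 0.1 stable, 0.1-0.2 keep watching, > 0.2 retrain."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    actual = np.clip(actual, edges[0], edges[-1])   # keep strays inside the bins
    e = np.histogram(expected, edges)[0] / len(expected)
    a = np.histogram(actual, edges)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)  # avoid log(0)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(7)
train_feature = rng.normal(0, 1, 10_000)   # what the model learned on
live_feature = rng.normal(1, 1, 10_000)    # what production looks like now

if psi(train_feature, live_feature) > 0.2:
    print("drift detected: kick off the retraining pipeline")
```

In a real stack you would run this per feature on a schedule (Airflow, say) and have the alert trigger your CI/CD retraining job instead of a print statement.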
→ Yes, us humans are still in the loop (for now)
“Fully autonomous AI” doesn’t exist. That’s a myth. Humans label, monitor, interpret, and clean up after the algorithm, and your job as PM is to choreograph that collaboration between human interpretation and the machine’s logical judgement without starting a war.
Keep these in mind. They are your safety rails, and they’re your last defense against the spreadsheet optimists upstairs.
Assembling the misfit platoon
No one wins an AI war on their own. You will need a diverse and, most likely, dysfunctional geek squad, occasionally drunk on having consumed too much data. Lookie lookie at what you’re gonna have to invest in . . .
→ The project manager (that’s of course you)
You are the translator and the human firewall that prevents your team from burning up. You convert corporate wishlists into reality and reality back into slide decks. You remove blockers, absorb the blame, and remind leadership that AI is not a verb.
→ The data scientist (can be you, not required, but it would help manage expectations)
The brain of the operation: 80% janitor, 20% magician. They spend most of their time cleaning data (still true) and arguing with pandas (learn this word, and try to understand their language; for some strange reason, pandas speak fluent Python), and they’re the ones pretending to enjoy meetings about “business alignment”. People say it’s the end of the data scientist, because gen AI can do it for you. But nah, as I said before - 80% janitor - the 20% AI can help with, but the rest is still a human task.
→ The ML engineer (definitely not you)
Your bridge between (Jupyter) notebooks and production. They can code, containerize (but that’s going out of fashion, I’ve heard them say), of course deploy, and curse Kubernetes in three languages (another dead end). Finding a good one is harder than finding an unbiased dataset. And luckily them peeps who work in the trenches read my shit - dunno why - yet they do, so have a go at it and make them an offer they can’t refuse.
→ The data engineer (don’t even think about it)
This is the quiet logistics officer who builds your data pipelines. Without them, the entire system starves. If they quit, your project becomes a Netflix documentary. The good ones are hard to find, but I’m pretty sure they’re secretly reading this as well. So, if you’re a pipeline guru and you speak fluent SQL - props to you, my friend!
→ The domain expert (nah, cause tomorrow you’ll be working in another one)
Your local guide through the swamp of context. Usually part-time, always critical, and chronically underappreciated. If you’re reading this, do drop an applause in the comments for the ever-underrated element of your project!
And if you’re venturing into Generative AI territory, you’ll need the new weirdos like (system-) prompt engineers, alignment specialists, and one AI ethicist who, for some reason, always quotes Asimov unironically.
Get them early, arm them with laptops with NPUs and enough RAM so they can play Call-of-Duty while their model calculates. Oh yeah, also give them clear roles.
And always keep in mind that without this crew, you are nothing but a PowerPoint.
The nine circles of AI hell
In the 14th century, Durante di Alighiero degli Alighieri (Dante) descended into Hell and mapped nine circles. And each of those circles was reserved for a particular flavor of human failure.
He named one Limbo for the confused but well-intentioned. Another one he baptized Lust, for those who chase desire without direction. Of course, my favorite is Gluttony for the ones who can’t stop consuming. And also Greed for the hoarders, Wrath for the perpetually angry, Heresy for the deluded, Violence for the destructive, Fraud for the clever liars. And for the worst kind he reserved Treachery for those who betray what they once swore to protect.
And here’s the trick. Just swap Sinners for stakeholders, the flames for Slack threads, and you have built yourself the average enterprise AI project.
And yes, each circle has its punishment - I think you saw that one coming, didn’t you? - a tailor-made torment for each member of your project, corporate edition. The optimism you had at the onset of your project gets charred in the slow-burning furnace called bureaucracy. Budgets freeze in procurement purgatory, waiting for signatures that never come. Deliverables are gnawed by the undead jaws of compliance: rewritten, re-scoped, and ritualistically sacrificed at every steering committee.
And in this particular inferno, the damned have titles instead of sins.
Data scientists crawl across the desert of barren data lakes, begging for a clean and juicy dataset. Product owners are condemned to eternal “alignment meetings” that produce slides instead of progress. Project managers wander endless loops of governance checklists, cursed to update Jira tickets that really no one reads. Developers push hotfixes that vanish, and their commits are found too light by the Temple of Anubis, also known as GitHub, where their code is judged by the automated gods of CI/CD, weighed against the feather of unit testing, and tossed into the abyss of failed builds. Security officers hold on to their audit logs as if they were holy relics, truly convinced they can ward off the regulators with documentation alone. Legal teams chant GDPR clauses as incantations, summoning paperwork thicker than a bible.
End users wander the ninth circle, frozen in confusion, where they have to constantly click to “Accept All Cookies” just to escape the eternal pop-ups. They curse the AI you built because it doesn’t “feel intuitive”, and then proudly revert back to using Excel to recreate the same shit, but slower and mostly wronger (that even a word?). . .
Stakeholders hover above them in eternal purgatory, forever requesting “one small change” that always ends up rewriting the entire data model. They feast on status updates, and vanish the moment something actually ships.
And last, and certainly least, are the consultants, floating serenely in Limbo, found neither guilty nor innocent. They’re like neutrinos - they look like matter, but they don’t interact with anything, and therefore cannot get caught doing anything wrong.
Descending through the nine realms
And so, your journey doesn’t end with the damned, the data scientists and their barren lakes, the PMs circling the governance drain, the developers pleading before the Temple of Anubis. Their punishments are only the beginning of something worse.
Beneath them lie the Nine Realms, the strata of the AI underworld. Here, the deliverables are the souls, Jira is scripture, and every milestone you celebrate actually marks another layer of descent.
Down here in Dante’s AI hell, each phase of your project is a realm unto itself: Limbo for the clueless scoping sessions, Lust for the endless chase for data, Greed for the metrics that never satisfy. Every realm tests a different vice: hubris, haste, denial, and of course the eternal belief that “this time, the model will work in production”.
But the thing is that you will pass through them all.
This is no simple project lifecycle.
This is the descent through the nine realms of AI hell where project teams go to suffer, and every success must first be purified in flames.
Every AI project has to move through these phases.
Here’s how it goes . . .
1 → Mission scoping (Limbo = The delusion stage)
Executives say “We are building AI to improve customer experience” and everyone nods like it means something, but in fact no one knows what they want. You start trapped in Limbo: endless ideation, no clarity, just vaporware and lots of hype. My advice: define success in metrics, not metaphors. “Increase retention by 10%” beats “optimize engagement”. But few make it out alive.
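Since “increase retention by 10%” only counts if someone can actually compute it, here’s a toy sketch of pinning the metric down in code. The table and column names are invented for illustration; your warehouse will look messier.

```python
import pandas as pd

# Hypothetical activity log: one row per customer per active month.
activity = pd.DataFrame({
    "customer_id": [1, 2, 3, 1, 3, 1, 2],
    "month":       ["2025-01", "2025-01", "2025-01",
                    "2025-02", "2025-02", "2025-03", "2025-03"],
})

def retention(df, prev_month, this_month):
    """Share of customers active in prev_month who are still active in this_month."""
    prev = set(df.loc[df["month"] == prev_month, "customer_id"])
    curr = set(df.loc[df["month"] == this_month, "customer_id"])
    return len(prev & curr) / len(prev)

print(retention(activity, "2025-01", "2025-02"))  # 2 of the 3 January customers came back
```

Agree on this definition (calendar months? rolling 30 days? which events count as “active”?) before training anything, or the 10% target will be relitigated at every steering committee.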
2 → Data recon (Lust = The endless search for more)
Everyone wants their data to be morer (I’m liking it!), fresher, richer and bigger. You’ll spend months chasing it through spreadsheets, APIs, and forgotten SharePoint graveyards. Half your datasets are duplicates, the other half corrupted. You call it “data collection,” but it’s really data lust — endless desire, zero satisfaction.
3 → Data cleaning (Gluttony = The feast of filth)
Here begins the pig trough of preprocessing. You gorge yourself on CSVs, impute missing values, and strip PII until your soul leaves your body. Every outlier you remove spawns two more which you will have to exorcize. You will eat filth for weeks and call it “feature engineering”. No one thanks you. No one even knows you exist because you sit very close to the database that rests in the fiery pits of the data center.
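For the uninitiated, the feast of filth looks roughly like this. A deliberately tiny, made-up example of the ritual (dedup, impute, clip, strip PII) in pandas; real pipelines do the same things, just with more screaming.

```python
import pandas as pd

# Toy "filth": a duplicate row, a missing value, one absurd outlier, stray PII.
# All names and numbers are invented for the sketch.
df = pd.DataFrame({
    "customer_email": ["a@x.com", "a@x.com", "b@x.com", "c@x.com"],
    "monthly_spend":  [120.0, 120.0, None, 99_999.0],
    "tenure_months":  [12, 12, 8, 4],
})

df = df.drop_duplicates()                                   # leftovers from circle 2
df["monthly_spend"] = df["monthly_spend"].fillna(df["monthly_spend"].median())
low, high = df["monthly_spend"].quantile([0.01, 0.99])
df["monthly_spend"] = df["monthly_spend"].clip(low, high)   # tame the outliers
df = df.drop(columns=["customer_email"])                    # strip PII before training
```

Every one of these lines encodes a judgment call (is that outlier an error or your best customer?), which is why this phase eats weeks, not hours.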
4 → Model training (Greed = The worship of metrics)
Finally, some visible progress! Charts! Accuracy scores! Confusion matrices! Then - as always - human greed takes over. Every stakeholder wants “just a bit more accuracy”. You chase diminishing returns like a coke addict down to his last line of white. “Why is it only 93%?” they ask. Because physics, Karen.
5 → Alignment & fine-tuning (Wrath = The therapy phase)
This is where the model starts fighting back. Especially in Generative AI. It hallucinates, offends, refuses, argues. You fine-tune, prompt-tweak, and RLHF until your patience combusts. You become part scientist, part therapist, part exorcist. You yell at your laptop, and it yells back. Wrath is mutual.
6 → Deployment (Heresy = The great betrayal)
Everything worked in staging. Then you pushed to prod, and the system manifested the devil. Latency spikes, APIs choke, containers die, and of course marketing tweets “AI is live!” while you’re still SSH-ing into servers trying to stop the bleeding. You start to question your faith in agile. You heretic.
7 → Monitoring (Violence = Against time, sanity, and data)
You thought it was over? No. Now the model drifts. The data shifts. And the dashboards lie. You build more dashboards to check the first dashboards. You’re locked in a constant battle with entropy, and it’s winning.
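If you want a taste of the battle with entropy, here is a bare-bones sketch of a rolling-window performance monitor. The class, window size, and tolerance are all assumptions; real stacks like Arize or WhyLabs do this with far more statistics and, yes, far more dashboards.

```python
from collections import deque

class DriftAlarm:
    """Rolling-window accuracy monitor: a minimal sketch, not a product.
    Thresholds are assumptions; calibrate them on your own traffic."""

    def __init__(self, baseline_acc, window=500, tolerance=0.05):
        self.baseline = baseline_acc      # accuracy measured at deployment
        self.tolerance = tolerance        # how much decay you tolerate before paging
        self.hits = deque(maxlen=window)  # recent correct/incorrect outcomes

    def record(self, prediction, actual):
        self.hits.append(prediction == actual)

    def degraded(self):
        if len(self.hits) < self.hits.maxlen:
            return False                  # not enough evidence yet
        return sum(self.hits) / len(self.hits) < self.baseline - self.tolerance
```

Feed it each (prediction, actual) pair as ground truth trickles in; when `degraded()` flips, page a human rather than silently retraining on poisoned data.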
8 → Continuous retraining (Fraud = The Sisyphean loop)
You automate retraining “to stay current”. Well, theoretically. In reality, you keep reusing half-broken scripts because you don’t have time to fix them. You pretend your MLOps pipeline is “self-healing”, but it’s mostly duct taped together. Fraud isn’t always intentional. Sometimes it’s just policy.
9 → Change management (Treachery = The human rebellion)
You’ve made it this far. You ship. You celebrate. Then users revolt. They don’t trust it. They don’t like it. Someone says the AI “feels condescending”, and others attribute negative outcomes (for them) to “hallucination”. Adoption flatlines. The same executives who blessed you now turn on you in all-hands meetings. You are Judas in your own story, betrayed by the very people you tried to save from their spreadsheets.
And that’s the full descent into the nine circles of AI hell.
The art of not dying early
Yes, AI project failure can be prevented. The patterns are so predictable they deserve their own Darwin Awards.
You can see them forming from miles away: the same mistakes and the same optimism sold in PowerPoints. And it is your task as the AI project leader to learn these, tattoo them on your heart, and perhaps you will survive long enough to deploy something that isn’t immediately embarrassing upon its release.
Here are seven patterns I collected,
Most failures in AI projects are managerial self-owns, born of bad scoping, zero (or worse) communication, and no change management whatsoever.
Real success in AI projects comes from brutally clear problem definition, cross-functional discipline, continuous retraining, and documentation like your future job depends on it (it does).
Change management for the shell-shocked
Let’s talk about the real enemy of your AI project → yes, humans.
AI doesn’t scare me.
People do.
Especially middle managers. They have survived every wave of automation since the fax machine and have developed evolutionary resistance to change. They will attend your AI literacy training, nod, then quietly tell their teams to “wait this one out”.
I have written about them in yesterday’s field manual.
You have to understand their fear is rational. Studies show 64% of employees worry AI will make them irrelevant. 44% of managers expect pay cuts, 41% expect layoffs. And they are right because AI is eating every boring ritual that made them feel important.
And your job is to convert fear into participation. Make them part of the design, not the casualties.
Buy them lunch. Listen. Build reverse-mentoring pairs: young AI-savvy staff teach older managers the tech, and older managers teach them how to survive office politics.
Everyone wins. Especially you, my friend.
Never sell AI as “cost reduction”. Nope-sure-ee! Because that’s office code for “panic”. Sell it as “automating boring and repetitive tasks”, or some other cure for tedium. Always make it personal. “AI will help you skip the parts of your job you already hate” is the only sales pitch that lands.
Because if you don’t manage their fear, they will manage you.
The boring salvation of governance
Now, the part nobody wants to hear → compliance.
You can have the sexiest model alive, but without governance, you’re one headline away from a regulatory mugshot. The EU AI Act, the FTC, ISO 42001 — they all want receipts. Auditability. Explainability. Version control so precise it makes your data scientist weep.
Here’s your triage kit,
Yes, my friend, I know. It is tedious. Yes, it’s expensive. But so is your legal defense. A full AI compliance audit actually costs more than the system itself, and - you gotta love this one - the fines can reach 7% of global turnover. Good governance averages 20 to 30% of project cost but in the end it’s still cheaper than a reputational firing squad.
Tip: Run a pre-mortem
Before you start, run a pre-mortem. Imagine it’s a year later and your project has failed spectacularly. Document what killed it like you’re Hercule Poirot. Was it missing data, overambitious goals, political sabotage, or simply the butler?
Write it all down.
Then build your plan to avoid each trap.
This single exercise is worth more than a thousand dashboards because it gives you awareness.
I do these exercises all the time. They give me insight into potential risks. But don’t put your death scenario into a PowerPoint, else it will take on form and people will start to believe you’re a doom-thinker (which you are).
Final marching orders for my comrades in AIrms
By now, you understand that AI project management is not about control, but containment. It is crisis management, but you present it with prettier charts.
You will be fighting in the trenches against bureaucracy, fear, and entropy and your only weapon is PowerPoint.
Deliver something small, reproducible, and defensible. Then log every win, every loss, every tweak, until your audit trail reads like a survival diary, because when the regulators show up, or - heaven forbid - the CFO, or even the press come a-knockin’ at your door, you will show them evidence.
That’s how you win in this war.
Now get back to the trenches, you grunt!
Te saluto. Vale,
Marco
I build AI by day and warn about it by night. I call it job security. Big Tech keeps inflating its promises, and I just bring the pins and clean up the mess.
👉 Think a friend would enjoy this too? Share the newsletter and let them join the conversation. LinkedIn, Google and the AI engines appreciate your likes by making my articles available to more readers.