Generative AI has taken the enterprise world by storm—but behind the excitement, many companies are stumbling. In this episode of Today in Tech, host Keith Shaw sits down with Prem Natarajan, Executive Vice President, Chief Scientist, and Head of Enterprise AI at Capital One, to uncover what organizations often get wrong about AI readiness, and how to do it right.

Drawing on decades of experience, Prem explains why conversational fluency doesn’t equal capability, and how real success comes from long-term investments in data quality, tech infrastructure, and talent. He breaks down Capital One’s platform-first approach, its early move to the public cloud, and the importance of governance, explainability, and human-in-the-loop frameworks.

Other key topics include:
* The critical distinction between outputs vs. outcomes
* How Capital One built its own agentic AI platform for real-world applications
* Why trust in AI must be earned, not assumed
* What AI can learn from home assistants, and how that maps to enterprise trust
* Prem’s hopes and fears for the future of AI, including its impact on human connection

This episode offers a grounded, strategic view of what it takes to move beyond pilots and into real enterprise impact.
Keith Shaw: We keep hearing that many AI projects are failing — but is this really the case? Are we moving from pilots to actual platforms and projects, especially at larger companies?
On this episode of Today in Tech, we’re going to look at how AI is shifting everything in the workplace.
Hi everybody, welcome to Today in Tech. I'm Keith Shaw. Joining me on the show today is Prem Natarajan. He is the Executive Vice President, Chief Scientist, and Head of Enterprise AI at Capital One. Welcome to the show, Prem.

Prem Natarajan: Delighted to be here, Keith. Thank you for inviting me.

Keith: Three titles. That’s pretty cool.

Prem: I know — it reminds me of when I went into academia and realized that titles are not scalars; they’re vectors. So there you go.

Keith: On a lot of these AI episodes, I like to start with an icebreaker.
Do you remember the first time AI felt magical to you? What was it? Was it generative AI specifically, or earlier experiences with machine learning?
Prem: If we’re just talking about generative AI, the magical moment is probably the same as for many of us around the world. It’s hard to forget the first time you tried ChatGPT — the flexibility, the agility.
It felt like, “My God, have we passed the Turing test?” And then we started debating whether the Turing test is even enough to define human cognition.
I remember spending so much time just playing with it, asking questions — about everything from complex topics to fun ones, like imagining interactions between iconic Bollywood characters.
And the responses were not only entertaining, but also coherent in a way that was truly unforgettable — especially for someone like me who has been in this field for over three decades.
Keith: I’ve always wondered if those who’ve been in AI for decades feel the same awe that newcomers do. I mean, as a tech journalist, I can be cynical about hype versus true innovation.
Prem: You used the words “wonder” and “awe.” I think the wonder was shared broadly, but for those of us who’ve been in the field a long time, the awe might have been even greater.
We’ve lived through years of rigid generations of technology — even as we made progress in speech recognition, language modeling, and translation, things were still relatively brittle. The idea that a system could now carry on a fluid, open-ended conversation? That was awe-inspiring.
I think many of us transitioned from skepticism to awe within just minutes of interacting with these systems.

Keith: Right.
And now that it’s been a couple of years since the release of generative AI tools, what are some of the biggest misconceptions you see from the public, executives, or even the media?
Prem: One of the biggest misconceptions is that people equate conversational fluency with general intelligence. When someone speaks articulately, we naturally impute other cognitive abilities to them. And that’s understandable — Alan Turing’s test was based on this premise.
But generative AI, while highly fluent, doesn’t inherently possess the reasoning or problem-solving capabilities people assume it has. Just because it can talk about the theory of relativity doesn’t mean it can schedule a meeting or answer a financial query reliably.

Keith: Exactly.
There’s this assumption that if it can talk, it can do. But then we hear stories of it losing a chess match to an Atari cartridge. People assume we’re already at AGI.
So, bringing it back to enterprise AI — especially at Capital One and other financial firms — are we finally past the “kicking the tires” phase?
Prem: Oh, we’re well past it. If you want to use a car metaphor, we’re not just test-driving anymore — we’re taking weekend family trips. But that’s only possible because of a few key precursors.
First, we’ve spent years caring for and curating our data — its quality, availability, and discoverability. Second, we modernized our tech stack, moving fully to the public cloud years ago. Third, we invested in talent — building an ecosystem that brings all three together.
Keith: So what happens when companies haven’t made those investments?

Prem: They can run into trouble in two ways. First, they fall behind competitively. It’s like not preparing for the final exam — you don’t get detention, but you don’t move forward either. Second, they face more immediate challenges.
Without the right tech, talent, or governance, they may deploy irresponsibly. I often see that companies who didn’t invest in the tech didn’t invest in the culture and maturity either.
Keith: At Capital One, did you take a top-down approach or build centers of excellence? How did you manage company-wide adoption?

Prem: Great question. We’re a platform-centric company, and that approach came directly from our CEO’s vision. If we build reusable, robust platforms, we scale innovation, governance, and talent.
So from the beginning, we said: "You can build — but build on this platform." That allowed us to maintain security and speed.
The alternative is buying fragmented point solutions. But that leads to operational headaches — reviewing each one for risk, monitoring them individually, applying different governance layers. A unified platform means shared observability, shared compliance, shared optimization.
Keith: Does that help as you move into agentic AI — AI that can act and make decisions?

Prem: Absolutely. We’ve built our own agentic workflow platform internally — code-named MACA. It allows different agents to be strung together for real business outcomes.
That’s the key: moving from just generating outputs to delivering outcomes. And that requires access to enterprise knowledge, APIs, and legacy systems.
One of our first agentic applications is Chat Concierge, which supports our auto lending business. It helps customers on dealer websites get information, compare options, and even schedule meetings — all without waiting for a human.
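[Editor's note: To make "stringing agents together for an outcome" concrete, here is a minimal, hypothetical Python sketch. It is not Capital One's MACA platform or Chat Concierge; the agent names, data, and orchestration are invented for illustration, with real model calls and enterprise APIs stubbed out.]

```python
# Hypothetical sketch of an agentic workflow: independent "agents" are chained
# by a simple orchestrator so the end result is an outcome (a scheduled visit),
# not just a generated text output. All names and data are illustrative only.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Context:
    """Shared state handed along the agent chain."""
    customer_query: str
    data: dict = field(default_factory=dict)

def inventory_agent(ctx: Context) -> Context:
    # Stand-in for a call to dealer inventory APIs / enterprise knowledge.
    ctx.data["matches"] = ["2022 Model A", "2023 Model B"]
    return ctx

def financing_agent(ctx: Context) -> Context:
    # Stand-in for retrieving pre-qualification terms from a lending system.
    ctx.data["est_monthly_payment"] = {"2022 Model A": 412, "2023 Model B": 488}
    return ctx

def scheduling_agent(ctx: Context) -> Context:
    # Stand-in for booking a dealership appointment via a calendar API.
    ctx.data["appointment"] = "Saturday 10:00 at the Main St. dealership"
    return ctx

def run_workflow(ctx: Context, steps: list[Callable[[Context], Context]]) -> Context:
    """Run each agent in order, passing the accumulated context forward."""
    for step in steps:
        ctx = step(ctx)
    return ctx

if __name__ == "__main__":
    result = run_workflow(
        Context(customer_query="I want a used SUV under $30k and a test drive"),
        [inventory_agent, financing_agent, scheduling_agent],
    )
    print(result.data)  # the "outcome": options, terms, and a booked appointment
```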
Keith: That sounds powerful. But do agents need to be fast and accurate? What about trust?

Prem: Yes — speed and accuracy are table stakes. But the real value comes from relieving cognitive burden on humans.
AI should take on the heavy lifting, especially for routine or repetitive tasks, freeing humans to focus on higher-value work.
Keith: Where does “human in the loop” still matter?

Prem: It’s still very relevant. In software development, for example, humans will still review and oversee AI-generated code. In agentic systems, we’re starting to give users visibility into an agent’s plan before execution. That builds trust and allows for collaboration.
The goal is for AI to remove friction — not add it. But the technology needs to earn trust. That’s our design challenge as technologists.
Keith: Are there places where AI should not make autonomous decisions yet?

Prem: Yes. Decisions like credit approvals are too impactful and still require human judgment — and legal frameworks. We’re not there yet, but I do believe we’ll get there when the systems prove themselves.
Keith: What about companies that skip governance, explainability, and bias mitigation?

Prem: Governance is essential. It’s the “vegetables” of AI — often neglected, but vital. At Capital One, we embed governance during design and development, not just after deployment. That way, we avoid the rollback headlines you see elsewhere.
Keith: Last question.
What’s your hope for the future of AI — and your biggest fear?

Prem: My hope is that AI can truly transfer cognitive burden from human to system, democratize access to learning, and elevate daily life — from planning vacations to improving education.

My fear is that we become too dependent on AI, and that it reduces meaningful human-to-human interaction. We already see signs of this with screen time and social isolation.
Keith: I feel the same way. Growing up without the internet, we had to go outside and play. I worry about what future generations might miss out on. Prem, thank you again for this incredible conversation. We’ll definitely have you back.

Prem: Thanks, Keith. I can’t believe that was nearly an hour — it flew by.

Keith: That’s going to do it for this week’s Today in Tech. Be sure to like the video, subscribe to the channel, and leave your thoughts in the comments. Thanks for watching!