From AI hype to tangible value - Innovation Roundtable insights

Beyond the buzzwords

Every boardroom right now is buzzing about AI, but many leaders quietly worry - is it all hype and no payoff? 

In our latest Innovation Roundtable, a select group of business leaders and digital consultants cut through the buzzwords over lunch (and possibly some wine) at 20 Stories, Manchester.  

The verdict was clear. Whilst technologies like AI are advancing at breakneck speed, a human-centred technology strategy and a strong innovation-focussed culture are imperative for longer-term business success. 

The key themes we discussed:

  • The widening gap between AI hype and meaningful business value.
  • Aligning innovation efforts with core business outcomes (and avoiding “shiny object” syndrome).
  • Building a culture of experimentation and psychological safety.
  • The rise of “Agentic AI” (think AutoGPT or Sintra AI) - and what autonomous agents actually mean for businesses.
  • The ethics of AI - governance, risk, bias, and maintaining brand trust.
  • Drawing inspiration from other sectors and “fringe” innovations.
  • Why sustainability and purpose have become central to innovation strategies.

In short, we argued that while AI and tech are evolving rapidly, it’s the human side of innovation - clear purpose, thoughtful design, and a fearless culture - that ultimately future-proofs a business. 

Mind the Gap | AI hype vs. actual business value

There’s no denying the excitement around AI. Nearly three-quarters of companies say they’re now using generative AI in some form (emergingtechbrew.com). 

Funding for AI startups is soaring, and tech headlines have declared 2025 “the year of the AI agent” (alvarezandmarsal.com). Yet beneath the hype, most organisations aren’t yet seeing significant ROI from AI. 

In fact, four in five companies using gen AI report no tangible bottom-line impact at the enterprise level so far (emergingtechbrew.com). Gartner analysts have predicted that only 15% of AI initiatives succeed - let alone deliver positive ROI (venturebeat.com). That leaves an awful lot of AI pilots, chatbots, and automation projects that either never make it out of the lab or fail to move the needle.

Why the disconnect? 

Often, businesses have jumped on the AI bandwagon without a clear strategy, or they expected a magic wand. “AI is a tool, not a strategy,” as one of our roundtable participants put it. The novelty of AI led some teams to deploy it everywhere possible last year - only to find diminishing returns. As Bryce Hall, a McKinsey associate partner, said:

“Executives are rightfully looking for a return on their AI investments. In many cases, they are paring back their strategies from trying to apply GenAI everywhere to prioritising the domains that have the greatest potential.” (emergingtechbrew.com)

In other words, the initial hype is giving way to a more sober, targeted approach. Perhaps we’re finally inching from the “peak of inflated expectations” to the “slope of enlightenment.”

Basically, don’t buy the buzz without a business case. It’s telling that a recent KPMG survey found nearly 45% of organisations invested in AI because they felt their competitors were doing it (mooncamp.com). 

Peer pressure is not a strategy. Instead, focus on why and where AI (or any new tech) can truly solve a problem or create value for your business. The companies that are getting results with AI tend to be more selective and strategic in its application - and they pair tech with process changes and upskilled people. In the sections below, we’ll explore exactly how to do that.

While cloud computing and analytics are nearly ubiquitous, only about one-third of companies have fully implemented AI solutions so far (mooncamp.com). Many firms are still just piloting or considering AI - a sign that practical adoption is lagging behind the hype.

Innovation with purpose | Aligning tech with business outcomes

One theme echoed loud and clear at our roundtable: innovation isn’t about chasing cool tech - it’s about advancing your core business goals. Too often, companies fall into the trap of “innovation theater,” playing with trendy technologies with no clear link to strategy or KPIs. 

It’s no surprise that 70% of digital transformation projects flop, frequently because they lack strategic alignment or a clear value target (cdotimes.com). As the old saying (updated for 2025) goes: “Culture eats strategy for breakfast,” and bad strategy eats innovation for lunch…

So how do we better connect innovation to outcomes? First, start with the business problem, not the technology. 

Are you trying to reduce customer churn, speed up your supply chain, improve employee productivity? Let that clarity guide your innovation investments. 

Leaders at the roundtable shared war stories of past tech projects that fizzled because, in hindsight, they weren’t solving a real pain point or didn’t have executive buy-in to change the status quo. 

On the other hand, when innovation efforts zero in on a defined business outcome - say, improving net promoter score or cutting manufacturing downtime by 20% - the chance of success shoots up.

And the data backs this up. 

A recent study found 68% of companies cite modernisation of operations as the top reason for digital transformations, rather than “because everyone’s doing it” (mooncamp.com). 

And when transformations succeed, the rewards are real. Some 63% of organisations saw improved performance from their digital initiatives in the past two years, and a majority even saw profitability upticks of over 10% (mooncamp.com). 

The difference between winners and losers often comes down to strategic focus. Successful innovators ensure every tech project has a “line of sight” to a core business metric. They also kill projects that don’t - a courageous but necessary discipline in an age where it’s easy to get distracted by the next shiny object.

Finally, aligning innovation to outcomes requires leadership commitment. Innovation cannot be an isolated skunkworks in IT; it needs to be championed by business executives and integrated into the overall strategy. 

One practical step is to establish an innovation governance or steering group that includes both tech and business stakeholders, who regularly vet projects for strategic fit. In short, treat innovation as a business process, not a playground. That means setting goals, measuring results, and asking “how does this drive our mission or margin?” at every turn.

Culture | The secret sauce of innovation

If strategy alignment is the brain of innovation, culture is the heart. Our roundtable guests were aligned on this point… You can pour millions into new tech, but it will stall without a supportive culture. In fact, research shows that cultural resistance is the number one barrier to digital transformation success (cdotimes.com). As one McKinsey report put it, cultural factors often “contribute more to successful outcomes than technological readiness” (cdotimes.com).

What does an innovation-friendly culture look like? It’s one that encourages experimentation, rewards learning, and makes it safe to fail fast. People need psychological safety to propose bold ideas or flag problems without fear. Google’s famed Project Aristotle study found that psychological safety was the top driver of high-performing teams (bcg.com). Conversely, as Harvard’s Amy Edmondson has noted, “When teams lack psychological safety, innovation is one of the first things to suffer.” Why? Because if your culture punishes mistakes, employees will stick to the status quo and innovation will wither on the vine.

Creating a culture of innovation starts at the top. Leaders must model vulnerability and curiosity - admitting when things don’t work, sharing lessons, and celebrating small wins. 

One mantra from the roundtable was “small, iterative wins drive real transformation.” 

Instead of betting the farm on a grand project, effective innovators run many small experiments and scale up the ones that show promise. This agile approach not only delivers quick wins that build momentum, but also normalises the idea that some experiments won’t pan out - and that’s okay.

Crucially, employees need to feel included in the journey. Front-line teams often have the best insights into customer pain points or process inefficiencies, but they’ll only speak up if the culture welcomes it. 

One attendee described how his company launched an “innovation champions” program - volunteers from different departments who get training and time to work on innovative solutions - which not only generated new ideas but boosted morale. 

Empower your people to be innovators, not just implementers. When workers see that their ideas can shape the company’s future, magic happens. And as a bonus, you’ll retain talent. Companies with strong cultures of innovation and psychological safety enjoy significantly higher employee engagement and retention.

Culture isn’t the soft stuff - it’s the soil in which innovation either thrives or dies. Cultivate a culture that gives your people permission to experiment and a purpose to rally around. A strong sense of purpose (like sustainability or social impact) can also energise innovation efforts. 

Agentic AI | Hype, reality, and opportunity

One of the hottest - and most hyped - topics in tech right now is “Agentic AI”: AI systems that can act autonomously to accomplish goals. Ever since experimental tools like AutoGPT burst onto the scene, there has been talk of AI agents that could function like virtual employees or “co-pilots” handling complex tasks - and businesses are intrigued. According to one report, over 95% of developers are now actively experimenting with AI agents (alvarezandmarsal.com), and major cloud providers (from AWS to Microsoft) are racing to offer agent-building frameworks.

So what exactly is an AI agent, and is it ready for prime time? 

An AI agent is more than just a chatbot answering FAQs. Agents combine large language models (LLMs) with reasoning, memory, and the ability to take actions (via code, APIs, etc.) to achieve objectives (alvarezandmarsal.com). 

For example, a generative AI like ChatGPT might draft an email if you prompt it - but an AI agent could autonomously draft the email and then schedule it to send, follow up based on responses, maybe even update your CRM, all with minimal human input. The promise is a “set it and forget it” mode of automation. Give an AI agent a high-level goal and it figures out the steps and executes them.
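To make the pattern concrete, here is a minimal, purely illustrative sketch of the “plan, act, observe” loop that most agent frameworks share. Every name in it (the call_llm stub, the tool list, the canned plan) is a hypothetical stand-in for illustration, not any vendor’s actual API.

```python
# Illustrative "agent loop": plan -> act -> observe, repeated until done.
# Every function and tool name here is a hypothetical stand-in, not a real API.

def call_llm(prompt: str) -> str:
    """Stand-in for a call to a large language model."""
    # A real system would call a hosted model; here we return a canned plan.
    return "draft_email; schedule_send; update_crm; done"

# The only actions the agent is allowed to take (a simple whitelist guardrail).
TOOLS = {
    "draft_email":   lambda ctx: ctx.update(email="Hi Sam, following up on our call..."),
    "schedule_send": lambda ctx: ctx.update(send_at="Monday 09:00"),
    "update_crm":    lambda ctx: ctx.update(crm_logged=True),
}

def run_agent(goal: str, max_steps: int = 10) -> dict:
    """Give the agent a high-level goal; it decides and executes the steps itself."""
    context: dict = {"goal": goal}
    plan = call_llm(f"Break this goal into tool calls: {goal}")   # plan
    for i, step in enumerate(s.strip() for s in plan.split(";")):
        if step == "done" or i >= max_steps:        # guardrail: bounded number of steps
            break
        action = TOOLS.get(step)
        if action is None:                          # guardrail: refuse unknown actions
            print(f"Skipping unrecognised step: {step}")
            continue
        action(context)                             # act
        print(f"Executed: {step}")                  # observe / simple audit trail
    return context

if __name__ == "__main__":
    print(run_agent("Follow up with the client and log it in the CRM"))
```

As the sketch hints, the hard engineering work isn’t the loop itself - it’s the guardrails: bounding the steps, restricting which actions the agent may take, and keeping a record of what it did and why.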

If that sounds a bit like science fiction, it partly is - today. Our roundtable discussion on agentic AI noted that we’re early in the maturity curve. Yes, there are startups like Sintra AI advertising “AI employees that never sleep” to handle routine business tasks. And early enterprise pilots have shown impressive efficiency gains - 50% improvement in some functions using AI agents, according to one study (alvarezandmarsal.com). 

But there have also been reality checks. One innovation lead candidly shared how their attempt to use AutoGPT to automate a workflow “that I thought would take 30 minutes ended up taking days” to get right. Anyone who’s played with these agents knows they can be glitchy, prone to looping or getting stuck, and overly confident in wrong answers. (Indeed, AutoGPT - once hyped as the future - has largely fizzled due to such reliability issues and endless loops.)

The truth is, today’s agentic AI is powerful but needs careful engineering and oversight. LLM-driven agents tend to hallucinate facts, and chaining multiple AI decisions amplifies the risk of error if you don’t put guardrails in place. They also can be resource-hungry; running a complex multi-agent workflow on the latest GPT-4 model can be slow and expensive. None of these are insurmountable challenges - they’re just signs that the tech is immature. We’re probably at the “awkward teenager” stage of agentic AI.

So, what’s a business leader to do? My advice is to proceed with both optimism and caution. 

The roundtable consensus was that agentic AI will be transformative in the long run - if we apply it thoughtfully. Here are some guidelines distilled from the discussion and industry best practices:

  • Start with narrow, low-risk use cases - Look for repetitive processes where an AI agent can save time (e.g. generating weekly reports, triaging customer inquiries), but where mistakes won’t be catastrophic. This lets you pilot the tech and learn. “Launching small-scale pilot programs allows teams to validate effectiveness and build internal confidence before scaling up” - great advice from an Alvarez & Marsal report.
  • Keep a human in the loop - For now, Agentic AI works best with human oversight, not as a fully hands-off system. Have people monitor agent outputs, double-check critical results, and be ready to step in. Not only does this catch errors, it also builds user trust - which tends to be low when decisions come from a “black box” AI. Over time, as the AI proves itself (or as agents get safer), you can dial up autonomy.
  • Choose the right partners/tools - The ecosystem is evolving fast. Whether you use an open-source framework or a vendor platform, do your due diligence. Evaluate reliability, security, and compliance. Make sure the tool allows you to set necessary constraints (for example, can you define what the agent is not allowed to do or access?). As one participant noted, “just because it’s an ‘autonomous’ agent doesn’t mean you abdicate responsibility.”
  • Plan for integration and maintenance - Agents don’t operate in a vacuum - they need to tie into your databases, software, and workflows. That often means integration work and ongoing maintenance as things change. Budget time for your IT or R&D team to properly integrate the agent and keep it up to date. And establish clear accountability: if the agent makes a bad call, who or what process catches it? One useful tactic is an “AI audit log” where agents document their steps and reasoning, so humans can review how a decision was made if needed (a minimal sketch of this idea follows below).
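
On the “human in the loop” and “AI audit log” points above, here is one way such a safeguard might look in code. It is a sketch under assumed names (AuditEntry, run_with_oversight, a hard-coded list of high-risk actions), not a prescription; a real implementation would live inside whatever agent framework and review tooling you already use.

```python
# Illustrative oversight wrapper: every agent action is logged, and high-risk
# actions are routed to a human reviewer. All names here are hypothetical.

import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Callable

@dataclass
class AuditEntry:
    action: str
    reasoning: str
    outcome: str
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

AUDIT_LOG: list[AuditEntry] = []
REQUIRES_HUMAN_REVIEW = {"issue_refund", "delete_record"}   # assumed high-risk actions

def run_with_oversight(action: str, reasoning: str, execute: Callable[[], str]) -> str:
    """Execute an agent action, recording what it did and why, and escalating
    high-risk actions to a human instead of running them automatically."""
    if action in REQUIRES_HUMAN_REVIEW:
        outcome = "queued_for_human_review"     # human in the loop for risky steps
    else:
        outcome = execute()                     # low-risk: let the agent proceed
    AUDIT_LOG.append(AuditEntry(action, reasoning, outcome))
    return outcome

if __name__ == "__main__":
    run_with_oversight("send_weekly_report", "Scheduled Monday summary", lambda: "sent")
    run_with_oversight("issue_refund", "Customer reported a duplicate charge", lambda: "refunded")
    print(json.dumps([asdict(entry) for entry in AUDIT_LOG], indent=2))
```

The design choice is deliberate: the agent still does the routine work, but every decision leaves a reviewable trail, and the genuinely risky calls wait for a person.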

When done right, Agentic AI can indeed be a game-changer - automating complex tasks 24/7, surfacing insights, even collaborating with human teams. But it’s not a plug-and-play miracle in 2025. Businesses that tread thoughtfully - blending automation with human judgment - will find the real value amidst the hype. Those that don’t… well, they may end up as cautionary tales, perhaps in court.

Ethics and AI | Earning trust in an algorithmic world

It’s been said that “with great power comes great responsibility,” and AI is no exception. As companies deploy AI in products and decisions, ethical pitfalls abound - from biased algorithms and privacy breaches to outright brand-damaging blunders. The roundtable tackled this head-on. How can organisations innovate with AI responsibly, maintaining trust with customers, employees, and society?

First, the trust problem… Public confidence in AI is shaky. A recent Gallup survey found 77% of Americans do not trust businesses to use AI responsibly (news.gallup.com). Even among highly tech-savvy folks, skepticism is high - nearly 7 in 10 of those “extremely knowledgeable” about AI say they have little to no trust in corporate AI use (news.gallup.com). 

In the UK, only 42% of people are willing to trust AI systems at all (techuk.org), with misinformation and bias being top concerns. This widespread distrust is a huge red flag: if customers and employees don’t trust your AI, they won’t use it - and they might even revolt against it.

What drives this distrust? For one, many AI systems have exhibited bias - from facial recognition that works better on white males than others, to recruitment algorithms that unintentionally filtered out female candidates. People are (rightfully) concerned about “black box” AI making unfair or inscrutable decisions. There’s also fear of job displacement and data misuse. And frankly, companies haven’t helped their case when they rush AI products to market without adequate testing. We’ve seen examples of AI gone wrong, from chatbots spewing offensive remarks to an airline’s AI customer service giving out false information. (In one case, Air Canada’s chatbot provided incorrect info about bereavement fares, leading to a customer lawsuit - and the court held the airline accountable. Ouch.)

So how do we bridge the trust gap? AI ethics and governance must move from PowerPoint to practice. Here are some concrete actions discussed at the roundtable and echoed by industry leaders:

  • Establish clear AI principles and policies - Many companies are now creating an “AI manifesto” or a code of conduct for AI usage. This typically covers things like fairness (commitment to avoid biased outcomes), transparency (letting people know when they’re interacting with AI or how AI-driven decisions are made), privacy (safeguarding data), and accountability (AI systems will have human oversight and clear escalation paths). Writing it is the easy part - the key is enforcing it. Bake these principles into your product development lifecycle and vendor selection. Make ethics a feature, not an afterthought.
  • Invest in AI governance and risk management - Treat advanced AI models with the same rigor as other major risks. This might mean forming an AI ethics committee, conducting regular bias audits of models, and stress-testing AI systems for worst-case scenarios. For example, if you deploy an AI to screen loan applications, periodically review its decisions for disparate impact on protected groups (a simple version of this check is sketched after this list). If you use a generative AI in customer service, have humans review a sample of its chats to ensure it’s not going rogue. According to techUK, 80% of people believe regulation is needed to rein in AI misuse (techuk.org) - regulators are certainly moving in, from the EU’s upcoming AI Act to the FTC’s warnings in the US. Smart companies will get ahead of this by self-regulating before someone does it for them.
  • Educate and involve your workforce - Employees need guidance on how to use AI tools responsibly. Alarming stat: 54% of UK workers admit they’ve made mistakes at work due to AI tools, and 39% have even pasted confidential company info into public AI (like ChatGPT). That’s a recipe for compliance nightmares. It’s crucial to train staff on the dos and don’ts (e.g. don’t feed sensitive data into unapproved AI services, double-check AI outputs, etc.). Some companies have created internal “AI SWAT teams” or centers of excellence that employees can consult when building an AI solution. Also, encourage employees to speak up if they notice an AI behaving oddly or unethically - a blameless reporting culture can catch issues early.
  • Be transparent with customers and stakeholders - If you’re using AI in ways that affect people (say, an AI-assisted hiring process or an algorithmically personalized pricing), be open about it. Offer opt-outs or human alternatives where feasible. And communicate the steps you’re taking to ensure the AI is fair and secure. Transparency can mitigate the knee-jerk fear by showing people you have nothing to hide. It also forces your team to continuously earn trust through actions, not just words.
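
To show what a basic bias audit can look like in practice, here is a small sketch of the “four-fifths rule”, a common first-pass test for disparate impact in selection decisions: each group’s selection rate should be at least 80% of the highest group’s rate. The data and group names are invented for illustration; a real audit would use your model’s actual decisions plus proper statistical and legal review.

```python
# Illustrative first-pass bias audit using the "four-fifths rule": flag adverse
# impact if any group's selection rate falls below 80% of the highest group's.
# The decision data below is invented purely for illustration.

from collections import defaultdict

decisions = [  # (group, approved) pairs, e.g. sampled from an AI screening model
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rates(records):
    """Share of positive decisions per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        approved[group] += int(ok)
    return {group: approved[group] / totals[group] for group in totals}

def four_fifths_check(rates, threshold=0.8):
    """True for each group whose rate is at least `threshold` of the best rate."""
    best = max(rates.values())
    return {group: (rate / best) >= threshold for group, rate in rates.items()}

rates = selection_rates(decisions)
print("Selection rates:", rates)                       # group_a: 0.75, group_b: 0.25
print("Passes 4/5 rule:", four_fifths_check(rates))    # group_b fails -> investigate
```

A failed check is not proof of discrimination, but it is a clear signal to investigate before the model keeps making decisions.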

The upside of doing AI ethics right is brand advantage. In an era of low trust, companies that can honestly say “our AI is audited, fair, and accountable” will stand out. They’ll attract customers who are on the fence about AI and talent who want to work at a place with values. Conversely, one high-profile AI scandal can deeply damage brand trust. The stakes are high. As one of our roundtable CEOs put it, “We’re not just competing on products and services anymore - we’re competing on trust.”

Cross-pollinating innovation | Ideas from the fringe

Another insight from our roundtable: if you want to drive truly game-changing innovation, look beyond your own backyard. Industries tend to develop tunnel vision - banks copy ideas from other banks, retailers benchmark other retailers. But breakthroughs often come from the fringes or from cross-sector inspiration. As innovation consultant Duncan Wardle quipped, “Some of the most groundbreaking ideas have emerged from looking beyond industry borders.” (duncanwardle.com)

Examples of cross-industry inspiration are everywhere once you start looking. One classic tale: the invention of the roll-on deodorant. Back in the 1950s, deodorant was a messy cream until a clever product developer, Helen Diserens, borrowed the idea of the ballpoint pen for a smooth roll-on applicator. The result created an entirely new product category. Fast forward to more recent times: when Disney was trying to solve long queues at theme parks, they drew inspiration from a high-tech pharmacy in Tokyo that used RFID tags to manage pickups.

That led to Disney’s MagicBand - the now-famous RFID wristband that acts as your ticket, hotel key, and payment method, dramatically cutting wait times and improving guest experience. Or consider how Speedo designed shark-skin-inspired swimsuits to reduce drag, an innovation so effective that it helped athletes break world records (before regulators intervened). The common thread is “innovation through analogy” - applying a solution from one domain to a problem in another.

So how can business leaders systematically tap into cross-sector and fringe innovation? A few suggestions that emerged from our conversation:

  • Encourage external learning and curiosity - Too many corporate teams have blinders on, only attending their industry conferences or reading their sector news. Shake that up. Send your fintech engineers to a healthcare innovation summit. Invite a speaker from the gaming industry to talk to your retail team. Create an internal forum for sharing cool innovations people have seen in unrelated fields. The goal is to spark “Ah-ha, we could try a version of that here!” moments.
  • Diversity helps innovation - And not just demographic diversity (though that’s important), but diversity of backgrounds and expertise. Cross-functional teams - mixing, say, a biologist with a data scientist with a marketer - can collide insights in creative ways. Even hiring people from outside your industry can inject fresh perspectives. As one attendee noted, when they brought an automotive engineer into their consumer electronics firm, she applied lean manufacturing ideas that dramatically cut assembly time for their devices. Fresh eyes see new solutions.
  • Scan the fringes - Pay attention to startups, academia, and even art/design for nascent ideas. Today’s fringe tech (like quantum computing or brain-computer interfaces) might seem irrelevant to your business - until suddenly it isn’t. The roundtable group emphasised setting aside a small portion of time/budget for exploratory projects or partnerships that might not pay off immediately, but could leapfrog the competition if they do. Think of it as an innovation portfolio: a mix of core improvements and some moonshots or “crazy” ideas. The crazy ones sometimes yield the biggest breakthroughs (or as one leader laughed, “today’s crazy is tomorrow’s competitive necessity”).

Remember, innovation loves intersections. When you cross-pollinate ideas from different domains, you often get novel solutions that pure focus within one silo would never produce. In practical terms, you might start an “Innovation Exchange” where you regularly convene folks from different industries or departments to swap challenges and ideas. Or simply make it a habit in meetings to ask, “Has anyone seen how another industry tackles this problem?” It’s a simple question that can spark powerful answers.

Purpose-driven innovation | Sustainability as a North Star

In 2025, innovation isn’t just about profit - it’s about purpose. A striking theme from our roundtable (and indeed, from many conversations with leaders lately) is how sustainability and social purpose have moved from the periphery to the centre of innovation agendas. Companies are recognising that solving big problems - climate change, resource constraints, societal inequities - is the new frontier for value creation. And stakeholders are pushing them in that direction too.

Consider these statistics - 78% of consumers expect brands to contribute to a sustainable future (forbes.com), and more than three-quarters of Americans say a sustainable lifestyle is important to them (mckinsey.com). 

In practice, 46% of consumers report actively buying more sustainable products to reduce their carbon footprint, even if it costs a bit more (greenplaces.com). Gen Z and millennials, in particular, vote with their wallets - they are ~27% more likely to purchase from sustainable brands than older generations (greenplaces.com). Employees care too - 67% of job seekers are more willing to work for a company they see as environmentally sustainable (greenplaces.com). 

This isn’t tree-hugging fluff either - it’s market demand. Sustainable products and services are driving growth - in fact, products making ESG (environmental, social, governance) claims accounted for 56% of all growth in the consumer goods sector over the past five years (greenplaces.com). Companies ignoring this will get left behind by both consumers and talent.

Businesses are responding. A 2023 KPMG survey of 2,100 executives found that 48% had made ESG goals a central priority for their technology teams in the next two years (mooncamp.com). In other words, nearly half of digital transformation initiatives are now directly tied to sustainability and social impact targets. This is a huge shift from even a few years ago. We’re seeing everything from banks using AI to optimise for carbon-efficient investment portfolios, to manufacturers overhauling processes to be circular (reducing waste and recycling materials), to retailers innovating with sustainable packaging and supply chains. Sustainability is no longer a side project handled by CSR - it’s becoming a lens through which core business is done.

Our roundtable experts stressed that purpose-driven innovation actually fuels financial performance in the long run. Why? Because it opens new markets and mitigates risks. For example, the transition to a low-carbon economy is spawning whole new industries (renewable energy tech, electric mobility, carbon capture). Companies innovating in these areas are not only doing good but positioning themselves for huge future revenue streams. Conversely, those failing to innovate around sustainability may find their business models disrupted or face regulatory and reputational risks. As one attendee succinctly put it, “If you’re not part of the solution, you’ll be part of the fallout.”

So how can organisations put purpose at the core of innovation? A few ideas:

  • Tie innovation metrics to sustainability metrics - Just as you track ROI, track carbon reduction or community impact of new initiatives. This sends a message internally that success is measured on multiple bottom lines (financial, environmental, social). Some firms now even include sustainability KPIs in executives’ performance reviews to drive accountability.
  • Leverage partnerships for purpose - Innovating for big issues often requires collaboration across sectors. We heard about public-private partnerships delivering smart city solutions, and competitors forming pre-competitive alliances to set eco-standards. Don’t go it alone - partner with startups, NGOs, universities, or even rivals where it makes sense to move the needle on systemic challenges.
  • Empower employees to innovate with purpose - Many employees, especially younger ones, are passionate about making a difference. Tap into that. Hackathons or innovation challenges themed around sustainability can generate both enthusiasm and new ideas. One company mentioned at the roundtable started an internal “Green Shark Tank,” inviting employees to pitch eco-innovation ideas with seed funding for winners. The program unearthed several efficiency improvements that saved money and emissions - and made employees feel great about contributing.

Purpose-driven innovation isn’t just morally gratifying - it builds brand trust and resilience. Brands that stand for something bigger tend to inspire stronger loyalty. And focusing on sustainability often forces creative thinking that yields operational benefits (for instance, designing a product to use less energy or materials often makes it cheaper too). As we look to the future, the line between doing good and doing well will continue to blur. Innovation that matters - to customers, to society, to the planet - is the innovation that will ultimately drive enduring business value.

Future-proofing | Through people and purpose

Circling back to our initial question - How can businesses cut through the hype and ensure their innovation investments actually pay off? 

The insights from our Innovation Roundtable can be distilled into a simple message - keep it human. Yes, technology will keep changing at a dizzying pace - AI will get more powerful, new tools will emerge - but the companies that thrive will be those that marry high-tech with high-purpose and high-trust.

While your competitors chase the next big thing, you can differentiate by creating the right conditions for innovation to flourish:

  • Align every innovation with a strategy and outcome - No more tech for tech’s sake. Pick your bets based on clear business value and set success metrics up front. This strategic discipline will spare you wasted pilots and “me too” projects.
  • Set an “AI Manifesto” and governance framework - Establish principles for how your organisation will (and won’t) use AI and automation. Ensure bias checks, privacy safeguards, and human oversight are baked into your AI projects. By proactively self-regulating, you build trust and avoid costly missteps.
  • Build a culture of innovation and safety - Invest in leadership training and team practices that foster psychological safety. Encourage experimentation, celebrate wins and learnings, and destigmatise failure. When employees feel safe to speak up and try new ideas, you unlock their full creativity - and that is a competitive superpower.
  • Think cross-sector and embrace the fringe - Don’t silo your innovation efforts. Cross-pollinate ideas from different industries and disciplines to find creative solutions. Keep an eye on emerging technologies and fringe trends - today’s “out there” idea could be your industry’s next big disruption. Cultivate networks and curiosity beyond your usual circles.
  • Embed sustainability and purpose at the core - Let purpose be a guiding star for innovation. Use your R&D and tech prowess to tackle meaningful problems - whether it’s reducing your carbon footprint or making your product more accessible. Purpose-driven innovation energises your team, appeals to customers, and opens new avenues for growth. It’s innovation with soul, and it’s what the world needs now.

Finally, remember that transformation is a journey, not a destination. As I often remind my clients, “small iterative wins drive real transformation.” By consistently aligning tech with strategy, nurturing a fearless culture, staying ethical, and keeping your eyes and heart open to the world beyond your office walls, you will create a business that not only keeps up with change but helps drive it. The future will undoubtedly be full of shiny new tools - but it’s the human touch, the clarity of purpose and the strength of culture, that will turn those tools into true business value. 

That’s how you future-proof your organisation in an age of endless disruption. By elevating the human elements that no AI can replicate - vision, trust, and the courage to innovate with purpose.

Disclaimer: AI supported me in writing this article! :)
