The live poll, conducted across Okta's Oktane on the Road events in Sydney and Melbourne, captured responses from hundreds of technology and security executives. The findings show that AI adoption and awareness are accelerating, but that governance frameworks and identity systems now need to mature at the same pace.
Key findings
- 41% of respondents said no single person or function currently owns AI security risk in their organisation.
- Only 18% said they were confident they could detect if an AI agent acted outside its intended scope.
- Shadow AI (unapproved or unmonitored tools) was identified as the top security blind spot (35%), followed by data leakage through integrations (33%).
- Just 10% said their identity systems were fully equipped to secure non-human identities such as AI agents, bots, and service accounts, while 52% said they were partially equipped.
- Board awareness is improving, with 70% saying their boards are aware of AI-related risks, but only 28% said boards are fully engaged in oversight.
The findings highlight Australia's enthusiasm for AI and the growing recognition that security and governance must evolve in parallel.
"Australian organisations are embracing AI with real momentum, and that's a positive sign," said Mike Reddie, Vice President and Country Manager, Okta ANZ. "We are seeing a shift from early experimentation to responsible, strategic adoption. The next step is ensuring governance and security evolve at the same pace."
"Securing AI isn't about slowing progress; it's about starting with the right foundation. When identity is strong, trust follows, and that's what enables innovation to scale safely and sustainably," Reddie added.
The results show that most organisations already view identity as central to building AI trust. However, many are still adapting traditional access controls to the new risks posed by AI agents and automation.
Okta's recent AI at Work 2025 and Customer Identity Trends reports found that 91% of organisations globally are already using or experimenting with AI agents, yet fewer than 10% have a strategy to secure them.
As AI becomes more mainstream, organisations must apply the same discipline to securing AI agents as they do to human users, ensuring every agent has a verified identity, defined permissions, and full auditability.
Okta AI Security Poll Results – Australia 2025
Overview
New data from Okta's Oktane on the Road events in Sydney and Melbourne reveals that Australian organisations are leading the charge in AI adoption but remain in the early stages of building governance and security frameworks to match.
The poll, conducted live among hundreds of IT and security executives, highlights a clear opportunity: while confidence in AI's potential is high, visibility, accountability, and governance are still catching up.
Key Findings
- AI risk ownership remains fragmented: 41% of respondents said no single person or function currently owns AI security risk in their organisation. Only 34% said the CISO or security function holds accountability.
- Detection capability is low: just 18% of organisations are confident they could detect if an AI agent acted outside its intended scope. 40% said they are not confident, and 22% do not currently monitor AI agent activity.
- Shadow AI is the top concern: 35% identified Shadow AI (unapproved or unmanaged tools) as their greatest AI security blind spot. Data leakage followed closely behind at 33%.
- Identity systems are lagging AI innovation: only 10% said their identity and access management (IAM) systems are fully equipped to secure AI agents and bots. 52% said their systems are only partially equipped, revealing a significant maturity gap.
- Board engagement is rising but uneven: 70% said their boards are aware of AI-related risks, but only 28% said boards are fully engaged in governance. 21% reported limited awareness, and 8% said AI has not yet been discussed at board level.
- Most organisations are balancing innovation and governance: 58% described their AI approach as balanced, innovating with governance in mind. 22% said they prioritise speed and innovation, while 15% are cautious and 5% have paused or restricted AI use.
Summary
- Australian organisations are moving rapidly to embrace AI, but governance and identity readiness are still catching up.
- Leaders recognise the opportunity of AI yet acknowledge critical gaps in ownership, visibility, and detection.
- Boards are becoming more engaged, but the findings reveal a clear need for stronger accountability frameworks and identity-first security foundations.
- Identity remains the critical missing link in AI governance. Organisations that strengthen identity controls for both human and non-human users will be better positioned to innovate confidently and protect trust in the AI era.
