Trust, Attitudes and Use of Artificial Intelligence | prepared by KPMG in collaboration with The University of Melbourne
This report examines the evolving landscape of AI adoption, public trust, and organizational deployment strategies, highlighting the nuanced interplay between technological capability, regulatory frameworks, and societal perception. Drawing on survey data from over 5,400 participants across Australia, the United Kingdom, and the United States, and interviews with 87 senior executives, the study situates AI not merely as a tool but as a socio-technical system in which trust, governance, and ethical alignment directly shape adoption rates and value creation. The research frames AI deployment within three interdependent dimensions: organizational readiness, user perception, and accountability mechanisms, revealing systemic disparities across sectors and geographies that influence both uptake and efficacy.
Quantitative findings underscore the critical role of trust in AI adoption: 64% of respondents expressed cautious optimism about AI’s benefits, yet only 38% indicated strong confidence in existing oversight mechanisms. Among organizations actively deploying AI, 71% reported integrating ethical review boards or AI governance committees, but only 29% had formalized explainability protocols for end-users. Cross-sector analysis shows that highly regulated industries such as finance and healthcare demonstrate higher trust scores (averaging 58%) than consumer tech and retail, where confidence falls to 41%. Moreover, AI literacy emerged as a significant predictor of acceptance: participants with moderate to high AI familiarity were 2.3 times more likely to endorse AI-assisted decision-making in critical operations, emphasizing the interaction among competence, transparency, and trust.
The report introduces a conceptual framework of “Trust-Enabled AI,” integrating perceptions, governance structures, and adoption metrics into a single evaluative model. Pilot case studies illustrate that firms embedding continuous monitoring, algorithmic transparency, and stakeholder engagement in their AI systems achieve adoption rates 22–27% higher than peers with minimal governance interventions. Additionally, public sentiment analysis reveals that narrative framing and proactive communication about AI limitations can improve perceived legitimacy, thereby mitigating reputational and operational risks.
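The report presents “Trust-Enabled AI” as a conceptual model and does not publish a scoring formula. Purely to illustrate what a “single evaluative model” could look like in practice, the sketch below computes a weighted composite of trust-related signals. All field names, weights, and input values here are hypothetical; the inputs only loosely echo the survey percentages cited above.

```python
from dataclasses import dataclass

@dataclass
class TrustSignals:
    """Illustrative trust inputs on a 0-1 scale; names are hypothetical."""
    perceived_benefit: float      # e.g., share of users expressing optimism
    oversight_confidence: float   # confidence in governance mechanisms
    explainability: float         # maturity of end-user explainability
    engagement: float             # stakeholder-engagement maturity

def trust_enabled_score(s: TrustSignals,
                        weights=(0.3, 0.3, 0.2, 0.2)) -> float:
    """Weighted composite of the signals; weights are assumed, not sourced."""
    components = (s.perceived_benefit, s.oversight_confidence,
                  s.explainability, s.engagement)
    return sum(w * c for w, c in zip(weights, components))

# Example: a deployment with strong governance participation but weak
# end-user explainability (values are illustrative, not survey data).
score = trust_enabled_score(TrustSignals(0.64, 0.38, 0.29, 0.71))
print(f"Composite trust score: {score:.2f}")
```

One design choice worth noting: a weighted linear composite makes the trade-offs between dimensions explicit and auditable, which mirrors the report’s emphasis on transparency, but any real implementation would need empirically validated weights.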
In summary, AI adoption and efficacy are inextricably linked to the triad of trust, competence, and governance. Building resilient AI ecosystems requires organizations to implement robust oversight, enhance user literacy, and align algorithmic decision-making with societal values. The findings signal that without deliberate investment in trust infrastructure, AI risks engendering skepticism and uneven uptake, undermining its transformative potential across both public and private sectors.