Global Influence of AI Governance

Explore top LinkedIn content from expert professionals.

  • View profile for Eugina Jordan

CEO and Founder, YOUnifiedAI | 8 granted patents / 16 pending | AI Trailblazer Award Winner

    40,899 followers

The G7 Toolkit for Artificial Intelligence in the Public Sector, prepared by OECD.AI and UNESCO, provides a structured framework for guiding governments in the responsible use of AI and aims to balance the opportunities and risks of AI across public services.

✅ A resource for public officials seeking to leverage AI while balancing risks.
✅ Emphasizes ethical, human-centric development with appropriate governance frameworks, transparency, and public trust.
✅ Promotes collaborative, flexible strategies to ensure AI's positive societal impact.
✅ Will influence policy decisions as governments aim to make public sectors more efficient, responsive, and accountable through AI.

Key insights and recommendations:

𝐆𝐨𝐯𝐞𝐫𝐧𝐚𝐧𝐜𝐞 & 𝐍𝐚𝐭𝐢𝐨𝐧𝐚𝐥 𝐒𝐭𝐫𝐚𝐭𝐞𝐠𝐢𝐞𝐬
➡️ Underscores the importance of national AI strategies that integrate infrastructure, data governance, and ethical guidelines.
➡️ G7 countries adopt diverse governance structures: some opt for decentralized governance, while others have a single leading institution coordinating AI efforts.

𝐁𝐞𝐧𝐞𝐟𝐢𝐭𝐬 & 𝐂𝐡𝐚𝐥𝐥𝐞𝐧𝐠𝐞𝐬
➡️ AI can enhance public services, policymaking efficiency, and transparency, but governments must address concerns around security, privacy, bias, and misuse.
➡️ AI usage in areas like healthcare, welfare, and administrative efficiency demonstrates its potential; ethical risks such as discrimination or lack of transparency remain a challenge.

𝐄𝐭𝐡𝐢𝐜𝐚𝐥 𝐆𝐮𝐢𝐝𝐞𝐥𝐢𝐧𝐞𝐬 & 𝐅𝐫𝐚𝐦𝐞𝐰𝐨𝐫𝐤𝐬
➡️ Focuses on human-centric AI development while ensuring fairness, transparency, and privacy.
➡️ Some members have adopted additional frameworks, such as algorithmic transparency standards and impact assessments, to govern AI's role in decision-making.

𝐏𝐮𝐛𝐥𝐢𝐜 𝐒𝐞𝐜𝐭𝐨𝐫 𝐈𝐦𝐩𝐥𝐞𝐦𝐞𝐧𝐭𝐚𝐭𝐢𝐨𝐧
➡️ Provides a phased roadmap for developing AI solutions, from framing the problem, prototyping, and piloting to scaling up and monitoring outcomes.
➡️ Engagement and stakeholder input are critical throughout this journey to ensure user needs are met and trust is built.

𝐄𝐱𝐚𝐦𝐩𝐥𝐞𝐬 𝐨𝐟 𝐀𝐈 𝐢𝐧 𝐔𝐬𝐞
➡️ Use cases include AI tools in policy drafting, public service automation, and fraud prevention. The UK's Algorithmic Transparency Recording Standard (ATRS) and Canada's AI impact assessments serve as examples of operational frameworks.

𝐃𝐚𝐭𝐚 & 𝐈𝐧𝐟𝐫𝐚𝐬𝐭𝐫𝐮𝐜𝐭𝐮𝐫𝐞
➡️ Calls on G7 members to open up government datasets and ensure interoperability.
➡️ Countries are investing in technical infrastructure to support digital transformation, such as shared data centers and cloud platforms.

𝐅𝐮𝐭𝐮𝐫𝐞 𝐎𝐮𝐭𝐥𝐨𝐨𝐤 & 𝐈𝐧𝐭𝐞𝐫𝐧𝐚𝐭𝐢𝐨𝐧𝐚𝐥 𝐂𝐨𝐥𝐥𝐚𝐛𝐨𝐫𝐚𝐭𝐢𝐨𝐧
➡️ Stresses collaboration across G7 members and with international bodies like the EU and the Global Partnership on Artificial Intelligence (GPAI) to advance responsible AI.
➡️ Governments are encouraged to adopt incremental approaches, using pilot projects and regulatory sandboxes to mitigate risks and scale successful initiatives gradually.

  • View profile for Katharina Koerner

AI Governance & Security | Trace3: All Possibilities Live in Technology | Innovating with risk-managed AI: Strategies to Advance Business Goals through AI Governance, Privacy & Security

    44,262 followers

This pre-print of Marco Almada's chapter "The EU AI Act in a Global Perspective" discusses the ambition and potential limitations of the EU AI Act as a global regulatory standard. Almada positions the AI Act as an important initiative in AI governance but suggests its influence may be less pervasive globally than that of other EU digital regulations such as the GDPR.

Considerations against the global influence of the EU AI Act include:

- Institutional specificity: The AI Act is closely tied to the unique features and political compromises of the EU's institutional framework, which may not be replicable or desirable in other jurisdictions.

- Complex horizontal and vertical regulatory structures:
-- Horizontally, the AI Act mandates intricate cooperation across the AI supply chain and relies on European Standardization Organizations to establish technical standards, which might be difficult to replicate elsewhere.
-- Vertically, it requires a robust network of state oversight, including market surveillance authorities and centralized regulation by the EU Commission, demanding significant coordination that may not seamlessly integrate with other nations' legal systems.

- Risk-based regulatory approach: The AI Act's risk-based approach to regulation could serve as both an asset and a barrier. While it aims to manage AI's risks effectively within the EU, other countries might struggle to adopt its extensive requirements and institutional arrangements.

* * *

While the author argues that the global influence of the EU AI Act might be less than expected, he still outlines several compelling reasons why it could achieve significant global recognition:

- The EU's pioneering role in AI regulation provides a first-mover advantage that sets a precedent for others to follow.
- The economic power and size of the EU market make compliance with the AI Act appealing for global AI providers who wish to access this lucrative market ("market access effect").
- Multinational companies may adopt EU standards globally for economic efficiency ("Brussels Effect").
- The Act's extraterritorial provisions mean that non-EU entities interacting with the EU market must comply, extending its influence beyond EU borders.
- The EU's active participation in international negotiations and its reputation as a regulatory leader in digital matters also contribute to the potential for the AI Act to shape global AI regulations, setting a benchmark that other nations might emulate or adapt to their contexts.

* * *

The author also mentions alternative regulatory approaches. Specifically, jurisdictions like Brazil and the Council of Europe's Framework Convention on AI emphasize individual rights more than the EU AI Act's product safety approach. This indicates that some countries might prefer regulatory models that place a higher priority on individual freedoms, as opposed to the EU's framework, which is oriented towards managing product safety and market compliance.

  • View profile for Peter Slattery, PhD

    Lead at the MIT AI Risk Repository | MIT FutureTech

    63,085 followers

    "The rapid evolution and swift adoption of generative AI have prompted governments to keep pace and prepare for future developments and impacts. Policy-makers are considering how generative artificial intelligence (AI) can be used in the public interest, balancing economic and social opportunities while mitigating risks. To achieve this purpose, this paper provides a comprehensive 360° governance framework: 1 Harness past: Use existing regulations and address gaps introduced by generative AI. The effectiveness of national strategies for promoting AI innovation and responsible practices depends on the timely assessment of the regulatory levers at hand to tackle the unique challenges and opportunities presented by the technology. Prior to developing new AI regulations or authorities, governments should: – Assess existing regulations for tensions and gaps caused by generative AI, coordinating across the policy objectives of multiple regulatory instruments – Clarify responsibility allocation through legal and regulatory precedents and supplement efforts where gaps are found – Evaluate existing regulatory authorities for capacity to tackle generative AI challenges and consider the trade-offs for centralizing authority within a dedicated agency 2 Build present: Cultivate whole-of-society generative AI governance and cross-sector knowledge sharing. Government policy-makers and regulators cannot independently ensure the resilient governance of generative AI – additional stakeholder groups from across industry, civil society and academia are also needed. Governments must use a broader set of governance tools, beyond regulations, to: – Address challenges unique to each stakeholder group in contributing to whole-of-society generative AI governance – Cultivate multistakeholder knowledge-sharing and encourage interdisciplinary thinking – Lead by example by adopting responsible AI practices 3 Plan future: Incorporate preparedness and agility into generative AI governance and cultivate international cooperation. Generative AI’s capabilities are evolving alongside other technologies. Governments need to develop national strategies that consider limited resources and global uncertainties, and that feature foresight mechanisms to adapt policies and regulations to technological advancements and emerging risks. This necessitates the following key actions: – Targeted investments for AI upskilling and recruitment in government – Horizon scanning of generative AI innovation and foreseeable risks associated with emerging capabilities, convergence with other technologies and interactions with humans – Foresight exercises to prepare for multiple possible futures – Impact assessment and agile regulations to prepare for the downstream effects of existing regulation and for future AI developments – International cooperation to align standards and risk taxonomies and facilitate the sharing of knowledge and infrastructure"

  • View profile for Glen Cathey

    Advisor, Speaker, Trainer; AI, Human Potential, Future of Work, Sourcing, Recruiting

    66,642 followers

Check out this massive global research study into the use of generative AI, involving over 48,000 people in 47 countries - excellent work by KPMG and the University of Melbourne!

Key findings:

𝗖𝘂𝗿𝗿𝗲𝗻𝘁 𝗚𝗲𝗻 𝗔𝗜 𝗔𝗱𝗼𝗽𝘁𝗶𝗼𝗻
- 58% of employees intentionally use AI regularly at work (31% weekly/daily)
- General-purpose generative AI tools are most common (73% of AI users)
- 70% use free public AI tools vs. 42% using employer-provided options
- Only 41% of organizations have any policy on generative AI use

𝗧𝗵𝗲 𝗛𝗶𝗱𝗱𝗲𝗻 𝗥𝗶𝘀𝗸 𝗟𝗮𝗻𝗱𝘀𝗰𝗮𝗽𝗲
- 50% of employees admit uploading sensitive company data to public AI
- 57% avoid revealing when they use AI or present AI content as their own
- 66% rely on AI outputs without critical evaluation
- 56% report making mistakes due to AI use

𝗕𝗲𝗻𝗲𝗳𝗶𝘁𝘀 𝘃𝘀. 𝗖𝗼𝗻𝗰𝗲𝗿𝗻𝘀
- Most report performance benefits: efficiency, quality, innovation
- But AI creates mixed impacts on workload, stress, and human collaboration
- Half use AI instead of collaborating with colleagues
- 40% sometimes feel they cannot complete work without AI help

𝗧𝗵𝗲 𝗚𝗼𝘃𝗲𝗿𝗻𝗮𝗻𝗰𝗲 𝗚𝗮𝗽
- Only half of organizations offer AI training or responsible use policies
- 55% feel adequate safeguards exist for responsible AI use
- AI literacy is the strongest predictor of both use and critical engagement

𝗚𝗹𝗼𝗯𝗮𝗹 𝗜𝗻𝘀𝗶𝗴𝗵𝘁𝘀
- Countries like India, China, and Nigeria lead global AI adoption
- Emerging economies report higher rates of AI literacy (64% vs. 46%)

𝗖𝗿𝗶𝘁𝗶𝗰𝗮𝗹 𝗤𝘂𝗲𝘀𝘁𝗶𝗼𝗻𝘀 𝗳𝗼𝗿 𝗟𝗲𝗮𝗱𝗲𝗿𝘀
- Do you have clear policies on appropriate generative AI use?
- How are you supporting transparent disclosure of AI use?
- What safeguards exist to prevent sensitive data leakage to public AI tools?
- Are you providing adequate training on responsible AI use?
- How do you balance AI efficiency with maintaining human collaboration?

𝗔𝗰𝘁𝗶𝗼𝗻 𝗜𝘁𝗲𝗺𝘀 𝗳𝗼𝗿 𝗢𝗿𝗴𝗮𝗻𝗶𝘇𝗮𝘁𝗶𝗼𝗻𝘀
- Develop clear generative AI policies and governance frameworks
- Invest in AI literacy training focusing on responsible use
- Create psychological safety for transparent AI use disclosure
- Implement monitoring systems for sensitive data protection
- Proactively design workflows that preserve human connection and collaboration

𝗔𝗰𝘁𝗶𝗼𝗻 𝗜𝘁𝗲𝗺𝘀 𝗳𝗼𝗿 𝗜𝗻𝗱𝗶𝘃𝗶𝗱𝘂𝗮𝗹𝘀
- Critically evaluate all AI outputs before using them
- Be transparent about your AI tool usage
- Learn your organization's AI policies and follow them (if they exist!)
- Balance AI efficiency with maintaining your unique human skills

You can find the full report here: https://lnkd.in/emvjQnxa

All of this is a heavy focus for me within Advisory (AI literacy/fluency, AI policies, responsible & effective use, etc.). Let me know if you'd like to connect and discuss. 🙏

#GenerativeAI #WorkplaceTrends #AIGovernance #DigitalTransformation

  • View profile for Prukalpa ⚡

    Founder & Co-CEO at Atlan | Forbes30, Fortune40, TED Speaker

    45,947 followers

The EU just said "no brakes" on AI regulation.

Despite heavy pushback from tech giants like Apple, Meta, and Airbus, the EU pressed forward last week with its General-Purpose AI Code of Practice.

Here's what's coming:
→ General-purpose AI systems (think GPT, Gemini, Claude) need to comply by August 2, 2025.
→ High-risk systems (biometrics, hiring tools, critical infrastructure) must meet regulations by 2026.
→ Legacy and embedded tech systems will have to comply by 2027.

If you're a Chief Data Officer, here's what should be on your radar:

1. Data Governance & Risk Assessment: Clearly map your data flows, perform thorough risk assessments similar to those required under GDPR, and carefully document your decisions for audits.
2. Data Quality & Bias Mitigation: Ensure your data is high-quality, representative, and transparently sourced. Responsibly manage sensitive data to mitigate biases effectively.
3. Transparency & Accountability: Be ready to trace and explain AI-driven decisions. Maintain detailed logs and collaborate closely with legal and compliance teams to streamline processes.
4. Oversight & Ethical Frameworks: Implement human oversight for critical AI decisions, regularly review and test systems to catch issues early, and actively foster internal AI ethics education.

These new regulations won't stop at Europe's borders. Like GDPR, they're likely to set global benchmarks for responsible AI usage.

We're entering a phase where embedding governance directly into how organizations innovate, experiment, and deploy data and AI technologies will be essential.

  • View profile for Branka Panic

    AI for Peace Founder | Human-Centered AI | AI for Good | Peacebuilding | Human Rights | Democracy | Human Security

    9,513 followers

📚 I've been teaching Foreign Policy & AI to diplomats across the world, and I always start with that now-famous 2017 moment when Putin said: "𝘞𝘩𝘰𝘦𝘷𝘦𝘳 𝘭𝘦𝘢𝘥𝘴 𝘪𝘯 𝘈𝘐 𝘸𝘪𝘭𝘭 𝘳𝘶𝘭𝘦 𝘵𝘩𝘦 𝘸𝘰𝘳𝘭𝘥."

Naturally, people ask: So where has Russia been on the AI front since then? 🤔

Russia's AI ambitions are not dead - they've just found a new stage. Moscow has turned to the BRICS bloc, whose founding members include #Brazil, #Russia, #India, #China, and #SouthAfrica, to build a parallel AI ecosystem.

Here's what I've been reflecting on:

🤖 Russia adopted its 2021 National Security Strategy, emphasizing the role of advanced technologies, including #AI, in strengthening #nationaldefense and #economic resilience.
🪆 The Ministry of Foreign Affairs' 2023 Concept of the Foreign Policy highlights AI growth and deeper BRICS cooperation.
🧠 Russia sees AI as a pillar of its long-term global strategy. Despite sanctions and brain drain, it's doubling down on #AI via #BRICS cooperation.
🌐 BRICS has become Moscow's AI sandbox. What began as a geopolitical bloc is morphing into a tech and governance alliance, with AI at the center.
📈 BRICS now makes up 35% of the global economy, and with new members like the UAE, Iran, Egypt, and Ethiopia, it's evolving into a parallel AI ecosystem beyond Western influence.

BRICS has introduced some significant AI governance efforts:
🔹 Established an AI Study Group to "develop AI governance frameworks and standards"
🔹 Russia led the creation of the BRICS AI Alliance, a strategic initiative promoting joint research and regulation.
🔹 Advocated for BRICS adoption of "Russia's Code of AI Ethics", signaling clear Russian leadership ambitions in the AI governance space.
🔹 Building partnerships to deploy Russian and Chinese AI infrastructure in the Global South.
🔹 Encouraging BRICS nations to shift away from OpenAI and U.S.-centric models - 100 of the largest companies in BRICS nations are shifting away from Western models like OpenAI toward emerging Chinese, Russian, and Emirati models.

💥 Recent moves include:
👉 A Russia–China Joint Declaration on AI Cooperation.
👉 A strategic AI pact with Iran.
👉 BRICS' own AI Study Group, with ambitions to define global standards.

💡 BRICS is no longer just a diplomatic club. It's a strategic AI force, and we need to treat it as such.

#AI #ForeignPolicy #BRICS #AIforPeace #Geopolitics #AIgovernance #TechDiplomacy #Russia #China #GlobalSouth #ArtificialIntelligence #InternationalRelations #DigitalSovereignty #DemocracyAndTech #AIAlliance

  • View profile for Patrick Sullivan

    VP of Strategy and Innovation at A-LIGN | TEDx Speaker | Forbes Technology Council | AI Ethicist | ISO/IEC JTC1/SC42 Member

    9,799 followers

❗Enterprise AI Assurance: The Market Is Moving Faster Than Regulation❗

AI governance isn't waiting for regulators to catch up. Enterprise adoption of ISO42001 is proving that market-driven AI assurance will outpace regulatory mandates.

AWS and Google Cloud have both achieved ISO42001 certification for their AI management systems (AIMS). Microsoft has gone a step further, integrating ISO42001-based AI governance requirements into its supplier security and privacy program (SSPADPR v10). These aren't just compliance exercises; they're market-driven moves to standardize AI risk management across global supply chains.

The distinction is critical. Regulations create a baseline, but markets set the standard.

➡️ Regulatory vs. Market Pressure: What's Driving AI Assurance?
🔹 Regulatory Pressure: Governments are defining AI laws, but enforcement is staggered. The EU AI Act is rolling out compliance deadlines over the next two years. California is introducing AI audit requirements. Other regions are still debating how far they'll go.
🔹 Market Pressure: Enterprise customers aren't waiting for laws to dictate AI assurance. Major cloud providers are already making ISO42001 a prerequisite for AI services and vendor relationships. Companies that want to sell into enterprise markets will need structured AI governance whether regulations require it or not.

The reality? Market-driven AI assurance will extend beyond regulatory borders. ISO42001 isn't tied to a single country's AI laws; it's a globally recognized framework. That means a company operating outside the EU AI Act's jurisdiction will still feel the pressure to align with ISO42001 if it wants to do business inside enterprise ecosystems.

➡️ Why Market Pressure Influences More Than Regulation
✔ Regulations define legal obligations; market expectations define operational reality. Enterprise customers expect AI governance beyond minimum legal thresholds, and they need consistent, certifiable AI assurance frameworks across regions.
✔ Market-driven AI governance moves faster than legislation. The EU AI Act was first proposed in 2021 and took three years to pass. Meanwhile, AWS, Google, and Microsoft have already made ISO42001 a working standard for AI assurance.
✔ Market adoption scales globally. A company that achieves ISO42001 certification isn't just proving compliance in one region; it's building AI assurance into its business model. That makes it easier to sell into regulated and unregulated markets alike.

➡️ What This Means for AI Vendors, Suppliers, and Enterprises
Companies that provide, produce, or use AI in enterprise environments will need structured AI governance.
🔸 Suppliers: If your customers require ISO42001 compliance, you won't be able to sidestep AI assurance.
🔸 AI Service Providers: Offering ISO42001-certified AI solutions is a competitive advantage.
🔸 Enterprises: Market-driven AI governance reduces business risk beyond regulatory mandates; expect to see ISO42001 in procurement requirements soon.

  • View profile for Victoria Beckman

    Associate General Counsel - Cybersecurity & Privacy

    31,336 followers

The OECD published the paper "Assessing potential future AI risks, benefits and policy imperatives," summarizing insights from a survey of its #artificialintelligence Expert Group and discussing the top 10 priorities in each category.

Priority risks:
- Facilitation of increasingly sophisticated malicious #cyber activity
- Manipulation, #disinformation, fraud and resulting harms to democracy and social cohesion
- Races to develop and deploy #AIsystems causing harms due to a lack of sufficient investment in AI safety and trustworthiness
- Unexpected harms resulting from inadequate methods to align #AI system objectives with human stakeholders' preferences and values
- Power concentrated in a small number of companies or countries
- Minor to serious AI incidents and disasters occurring in critical systems
- Invasive surveillance and #privacy infringement that undermine human rights
- Governance mechanisms and institutions unable to keep up with rapid AI evolution
- AI systems lacking sufficient explainability and interpretability, eroding accountability
- Exacerbated inequality or poverty within or between countries

Priority benefits:
- Accelerated scientific progress
- Better economic growth, productivity gains and living standards
- Reduced inequality and poverty
- Better approaches to address urgent and complex issues
- Better decision-making, sense-making and forecasting through improved analysis of present events and future predictions
- Improved information production and distribution, including new forms of #data access and sharing
- Better healthcare and education services
- Improved job quality, including by assigning dangerous or unfulfilling tasks to AI
- Empowered citizens, civil society, and social partners
- Improved institutional transparency and governance, instigating monitoring and evaluation

Policy priorities to help achieve desirable AI futures:
- Establish clearer rules for AI harms to remove uncertainties and promote adoption
- Consider approaches to restrict or prevent certain "red line" AI uses (uses that should not be developed)
- Require or promote the disclosure of key information about some types of AI systems
- Ensure risk management procedures are followed throughout the lifecycle of AI systems
- Mitigate competitive race dynamics in AI development and deployment that could limit fair competition and result in harms
- Invest in research on AI safety and trustworthiness approaches, including AI alignment, capability evaluations, interpretability, explainability and transparency
- Facilitate educational, retraining and reskilling opportunities to help address labor market disruptions and the growing need for AI skills
- Empower stakeholders and society to help build trust and reinforce democracy
- Mitigate excessive power concentration
- Take targeted actions to advance specific future AI benefits

Annex B contains the matrices with all identified risks, benefits and policy imperatives (not just the top 10).

  • View profile for Dewey Murdick

    Professor | Researcher | Data Scientist | Advisor

    4,600 followers

Fascinating insights from last week's CSET panel on the global state of AI governance in 2024! Here's what caught my attention:

1️⃣ Three distinct governance approaches emerged this year:
- US: Decentralized and currently led by executive order, with approaches balancing innovation and safety
- EU: Comprehensive AI Act legislation, with many implementation details still to be worked out
- China: Strong state control, highly iterative, with increased international outreach

2️⃣ The EU's AI Act was meant to set global standards, but implementation challenges are making other nations think twice about following its framework.

3️⃣ "Sovereign AI" is gaining traction across Asia-Pacific. Countries like South Korea, Singapore, and India are racing to develop their own AI capabilities - will this lead to fragmentation or new collaborative opportunities?

4️⃣ China's strategic engagement with the Global South on AI development could be a game-changer, especially in developing smaller, locally adapted AI models. This long-term investment might reshape international AI norms.

Want to learn more from Owen J. Daniels, Mina Narayanan, Mia Hoffmann, and Cole McFaul? Watch their discussion here: https://lnkd.in/efAVrajc

#AIGovernance #GlobalTech #TechPolicy #ArtificialIntelligence

  • View profile for James Manyika

    SVP, Google-Alphabet

    89,539 followers

For the past year, I've had the privilege of co-chairing, together with Carme Artigas, the UN's High-level Advisory Body on AI, which included 38 members from 33 countries. We were tasked with developing a blueprint for sharing AI's transformative potential globally, while identifying and addressing the risks and filling the gaps that limit participation.

Following our interim report in Dec 2023, today we're sharing our final report, which outlines our key findings and recommendations to enhance global cooperation on AI governance. The report was informed by extensive consultation, including more than 2,000 participants from all regions, 18 deep dives with 500 expert participants, 250 written submissions, and 100+ virtual discussions, as well as research and surveys.

AI has the potential to assist people in everyday tasks and in their productive and creative endeavors, enable entrepreneurs and businesses small and large, transform sectors from healthcare to agriculture, power economic growth, advance science in ways that benefit society, and contribute to achieving the UN's Sustainable Development Goals. At the same time, as with any powerful technology, it poses risks, challenges and complexities ranging from bias, misapplication and misuse, and impact on work, to potentially widening global inequities. Our work highlighted many of these themes as well as key gaps in governance and in the capacity for all to fully benefit from AI.

To harness AI's potential and mitigate its risks, we need a truly inclusive and international effort - and current governance structures are missing too many voices. Our recommendations focus on these and other findings, and I encourage you to read the report.

Thank you to the UN's Tech Envoy Amandeep Gill and his team, my co-chair Carme Artigas, and my fellow members of the advisory body -- from whom I learned a lot -- for their expertise and diverse views and vantage points, partnership, persistence and commitment to governing and harnessing AI's potential benefits for all of humanity.

https://lnkd.in/gFhFWWEh

Carme Artigas, Anna Christmann, Anna Abramova, Omar Sultan AlOlama, @Latifa Al-Abdulkarim, Estela Aranha, Ran Balicer, Paolo Benanti, Abeba Birhane, Ian Bremmer, Natasha Crampton, Nighat Dad, Vilas Dhar, Virginia Dignum, @Arisa Ema, @mohamed farahat, Wendy Hall, Rahaf Harfoush, Hiroaki Kitano, Haksoo Ko, Andreas Krause, Maria Vanina Martinez, Seydina M. Ndiaye, @Moussa Ndiaye, Mira Murati, Petri Myllymäki, Alondra Nelson, Nazneen Rajani, Craig Ramlal, @Ruimin He, Emma Ruttkamp-Bloem, Marietje Schaake, @Sharad Sharma, @Jaan Tallinn, Ambassador Philip Thigo, MBS, Jimena Viveros LL.M., Yi Zeng, @Zhang Linghan
