“In 2025, the global trend surrounding AI significantly shifted course from regulation toward the enhancement of competitiveness. Amid this geopolitical dynamism, Japan is building a unique position under the consistent banner of becoming the ‘most AI-friendly country in the world.’” In a new report, Hiroki Habuka breaks down Japan's AI strategy. He explores Japan's AI Promotion Act and analyzes how the country is emerging as a trusted global nexus in AI governance. 🔗 Read the full report here: https://lnkd.in/ePxGhPeB
CSIS Wadhwani AI Center
Think Tanks · Washington, District of Columbia · 725 followers
CSIS's Wadhwani AI Center conducts research on the governance, geopolitics, and national security implications of AI
About us
The Wadhwani AI Center at CSIS conducts high-impact research on the policy implications of artificial intelligence, with a focus on national security, geopolitics, economic competitiveness, and global governance. Since 2023, the Wadhwani AI Center has been shaping the global conversation on AI policy through expert analysis and as a trusted source of insights for the U.S. government as well as international allies and partners. Our team frequently advises U.S. government officials and civil servants on a bipartisan basis, including through Congressional testimony, private briefings, and published reports. As part of our research, we regularly engage with leading AI companies and academic researchers.
- Website: https://www.csis.org/programs/wadhwani-ai-center
- Industry: Think Tanks
- Company size: 2-10 employees
- Headquarters: Washington, District of Columbia
- Type: Nonprofit
- Founded: 2023
Locations
- Primary: 1616 Rhode Island Ave NW, Washington, District of Columbia 20036, US
Updates
-
The UN Global Dialogue on AI Governance is a powerful signal of political will to ensure AI benefits for the many rather than the few. But its potential impact is limited by U.S. rejection of any and all multilateral AI governance. What does this mean for the future of AI governance? Dr. Laura Caroli and Matt Mande unpack the Global Dialogue as a barometer of shifting geopolitical dynamics in AI governance. Read here: https://lnkd.in/ezbSFy8Q
-
In a new report, Kateryna Bondar analyzes the crucial role of digitization in sustaining Ukraine's hybrid defense ecosystem, one that strikes a balance between agile grassroots innovation and the institutional structures necessary to wage war at scale. 🔗 Read the full report below: https://lnkd.in/esDKpjew
My new CSIS report — How and Why Ukraine’s Military Is Going Digital — examines how Ukraine is reengineering its wartime defense ecosystem to move beyond improvisation toward institutional capacity and scale. After three years of full-scale war, Ukraine’s decentralized innovation system, built on startups, volunteers, and rapid battlefield feedback, has proven remarkably agile but also chaotic. This famous “zoo of technologies” might deliver quick wins, yet it cannot be scaled or sustained. The report explores how the Ministry of Defence is addressing this by digitizing procurement, integrating soldier feedback directly into development cycles, and creating digital platforms like Army+ and DOT-Chain Defence to connect frontline needs with national capability building. This marks a turning point: Ukraine is transforming its wartime agility into a model of coordinated, digital-era defense governance. Read here: https://lnkd.in/eWiuZqUK #Ukraine #DefenseInnovation #DigitalTransformation #MilitaryTechnology #AI #CSIS
-
🎙️ New on the AI Policy Podcast: Joseph Majkut, Director of the CSIS Energy Security and Climate Change Program, joins Gregory C. Allen for a deep dive into energy and AI, including: 📌 The current state of the U.S. electrical grid 📌 Bottlenecks in scaling AI data centers 📌 How U.S. energy efforts compare to China’s buildout 🎧 Listen: https://lnkd.in/ees34Gag 📺 Watch: https://lnkd.in/eNMpHcTD
-
🎙️New AI Policy Podcast episode is out! In this week's news roundup, we discuss: 📌How today's massive AI infrastructure investments compare to the Manhattan Project 📌China's Nvidia chip ban and implications for export control policy 📌Anthropic's $1.5 billion copyright settlement 📌Recent multibillion-dollar AI investments by Nvidia and ASML Tune in here: https://lnkd.in/ecyQmz9c Watch the conversation here: https://lnkd.in/evf5h7YE
Is China Done with Nvidia’s AI Chips?
-
This week on the AI Policy Podcast: Why is China's AI Sector Booming? 📈 Join Gregory C. Allen for a deep dive into China's AI industry, including: 💡China's focus on AI adoption 💡Underlying factors driving investor enthusiasm 💡National security implications for the U.S. Listen to the episode: https://lnkd.in/eBQPgZhu
-
"[The Code of Practice] is a way that you can facilitate compliance and show that [AI] is a trustworthy technology that can actually be assessed for risk." This week, Marietje Schaake joins us on the AI Policy Podcast to unpack the EU AI Act Code of Practice. As Chair of the Working Group on Internal Risk Management and Governance of General-Purpose AI Providers, Schaake played a critical role in drafting the Code's Safety and Security chapter. We discuss: 📝 Development and drafting of the EU AI Act and Code of Practice ✅ The Code's role in helping companies like OpenAI and Google demonstrate compliance with the AI Act ☣️ Systemic risks the AI Act is seeking to address Listen to the episode: https://lnkd.in/e6y2jTM8 Watch the conversation: https://lnkd.in/eXy_GTwc
-
New on the AI Policy Podcast: In this episode, Gregory C. Allen and Brielle Hill dive into two big developments shaping U.S.-China tech competition: 💡 The Trump administration’s $8.9B deal for a 9.9% stake in Intel and what it means for U.S. industrial policy 💡 Why Nvidia abruptly halted H20 chip production for China and what’s next with a Blackwell-based chip design 🎧 Listen: https://lnkd.in/e-VGJ9QD 📺 Watch: https://lnkd.in/ebdnWkHN
U.S. Takes 10% Stake in Intel and Nvidia Halts H20 Production for China
-
Upcoming Event: Inside Europe’s AI Strategy. Lucilla Sioli, Director of the EU AI Office and lead policymaker on European AI policy, joins Dr. Laura Caroli for a discussion unpacking: 🏛️ The latest developments in EU AI regulation and innovation 📰 The releases of the EU AI Code of Practice and the EU AI Continent Action Plan ⏰ What's next for AI in Europe 📅 August 28 at 10:30 AM ET 📍 Virtual livestream Register and watch the event here: https://lnkd.in/ehK4YuGR Center for Strategic and International Studies (CSIS) European Commission
-
🧬New report on AI and bioterrorism: As AI capabilities grow, the barriers to designing and deploying engineered pathogens are rapidly falling, and U.S. biosecurity policy is struggling to keep up. In a new report, Georgia Adamson and Gregory C. Allen explore how large language models and biological design tools could lower the threshold for bioterrorism and highlight how current safeguards may soon be outpaced by AI-generated threats. The report outlines key policy recommendations, including increased funding for NIST and CAISI, systematic testing of biological design tools, and the development of an AI-enabled screening system that can detect novel biological agents. 🔗 Read the full report: https://lnkd.in/ekujabja