The Future Society

Non-profit Organizations

Boston, Massachusetts · 28,216 followers

Aligning artificial intelligence through better governance.

About us

The Future Society (TFS) is an independent nonprofit organization based in the US and Europe with a mission to align AI through better governance.

Industry
Non-profit Organizations
Company size
11-50 employees
Headquarters
Boston, Massachusetts
Type
Nonprofit
Founded
2014
Specialties
Public Policy, Emerging Technologies, AI Safety, Technological Convergence, Artificial Intelligence, Privacy, Politics, Technology Governance, Governance, AI, Regulations, Independent Audit, International Governmental Organizations, Responsible AI, Global Governance, and Technology Policy

Locations

  • Primary

    867 Boylston St

    5th floor

    Boston, Massachusetts 02116, US

Updates

  • The Future Society reposted this

    💙 Without Partners, There’s No Progress: A Thank You to Our Champions

    Behind every impactful summit is a network of dedicated partners who believe in building something greater than themselves. 🙏 On behalf of AI ON US, we extend heartfelt thanks to: MIT FutureTech, ARBORUS, Institute for Global Negotiation, HP Federal, Epitech - L'école de l'excellence informatique, The Future Society, Open Ethics, Numalis, French Tech Pays Basque 🐓🌊, Pays Basque Digital 💻, NextGen Ethics, Digital Peace, Institut du Numérique Responsable, PME-ETI France, CCI BAYONNE PAYS BASQUE and many more.

    Their support helped shape the Playbook IA 2025, power the 11 hours of summit content, and set global benchmarks for AI responsibility. 🌍

    You can still learn from them. 🎥 ➡ Access all sessions via replay → www.ai-on-us.com

    #AIOnUsPartners #ThankYou #EthicalAI #ReplayNow #SummitSupporters

  • We’re thrilled to welcome Yohann Ralle as our new Director of EU AI Governance at The Future Society. We look forward to shaping a responsible and human-centered AI future together. #TeamTFS

    Yohann Ralle

    Director of European AI Governance @ The Future Society - Previously Deputy National Coordinator for France’s AI Strategy

    Paris --> Brussels: catching the next AI train

    After three great years building and implementing France’s AI strategy, I’m taking on a new challenge as Director of EU AI Governance at The Future Society.

    TFS was among the first to warn about AI’s systemic risks, well before ChatGPT. We’ve all seen how “simple” recommendation algorithms have eroded the foundations of our democracies. Today, more powerful and sycophantic models amplify those harms and introduce new threats. Disinformation and AI-generated slop content spread at scale, pushing us closer to a post-truth era. Soaring energy use undermines climate goals, while unreliable AI agents amplify cyber vulnerabilities and failure modes.

    Europe has the opportunity to define a different path for AI: one built on reliability, responsibility, and public purpose. Civil society has a crucial role to play, working alongside startups and public institutions to design alternative technologies that reflect Europe’s values and strengthen our strategic autonomy.

    The TFS team is deeply engaged on many topics that are high priorities for Europe and for a better AI future. If that resonates with your work, let’s connect! 🤝

    Nick Moës; Toni Lorente; Amin Oueslati; Robin Staes-Polet; Jonathan Schmidt; Sven Herrmann; Kathrin Gardhouse; Radina Kraeva; Ellen O'Connell; Niki Iliadis; Caroline Jeanmaire; Mai Lynn M.; Eloise Dunn

  • In Case You Missed It: The first Key Update to the International AI Safety Report was released this past week. To help policymakers keep pace with general-purpose AI development, this update offers key insights into new AI capabilities and breakthroughs—from performance improvements to real-world adoption to stronger safeguards—and what they mean for safety and governance.

    Yoshua Bengio

    Full professor at Université de Montréal, President and Scientific Director of LawZero, Founder and Scientific Advisor at Mila

    In my role as Chair of the International AI Safety Report, an effort backed by over 30 countries and international organisations including the European Union, OECD - OCDE and United Nations, I work with 100 researchers to help policymakers understand the capabilities and risks of general-purpose AI.

    The field is clearly changing far too quickly for a single annual report to suffice. That’s why today we’re introducing Key Updates: shorter, focused reports on critical developments in AI that will be published between editions of the full report. Our first Key Update focuses on advancements in AI capabilities, and what they mean for AI safety. You can read it here: https://lnkd.in/eKVGF7dy

    Some of the key findings it covers include:

    ➡️ Impressive performance improvements. Several AI systems can now solve International Mathematical Olympiad problems at gold medal level and complete a majority of problems in several databases of real-world software engineering tasks.

    ➡️ The rise of “reasoning” models. Recent gains have come mainly from training and deployment techniques that allow AI models to generate interim steps before producing final answers. This demonstrates that AI capabilities can advance significantly through post-training techniques and additional computing power at inference time, not just through scaling model size.

    ➡️ Some signals of real-world adoption. In a recent StackOverflow survey, a majority of software developers report using AI tools daily to help design experiments, process data, and write reports. Yet we still don’t know much about AI use in many other domains, nor crucially about how AI use affects productivity overall.

    ➡️ Stronger safeguards from developers. Leading AI developers recently activated enhanced protections on their most capable models as a precautionary measure, given possibilities like misuse to build weapons.

    ➡️ Emerging oversight challenges. AI models increasingly demonstrate an ability to distinguish evaluation tasks from real-world tasks, possibly complicating our ability to reliably test their capabilities before deployment.

    These developments raise further questions about control, monitoring, and governance as AI systems become more capable.

  • What can civil society learn from the EU AI Act policymaking process? At Mozilla Festival in Barcelona on November 7, our Senior Associate Toni Lorente will share insights from The Future Society’s involvement in helping draft the Code of Practice over the past year. With a focus on the challenge of industry capture, this session will provide concrete lessons about navigating and challenging hidden power dynamics and known problems in AI policy development. Mozilla Festival is the world's leading festival for the open internet movement. Learn more about our session, see the full schedule, and register to attend: https://lnkd.in/gMhbYVGR #MozFest #MozillaFestival

  • Read key insights from our Global AI Governance Associate Delfina Belli, who spoke at a seminar in the Vatican City on AI for Peace, Social Justice, and Integral Human Development ⬇️

    Delfina Belli

    Associate, AI Governance @ The Future Society

    How do we face the high stakes of AI together as global citizens? And how do we remain grounded in our humanity?

    These questions framed the two-day conversation last week in the Vatican City, during the Seminar on Digital Rerum Novarum: Artificial Intelligence for Peace, Social Justice, and Integral Human Development. The discussions brought together thinkers from across disciplines and traditions: theologians, technologists, ethicists and economists, all reflecting in different ways on how we can govern AI in service of humanity rather than power.

    We discussed AI sustainability, not only in environmental or economic terms, but as the refusal of unsustainable risks: the unacceptable, irreversible harms that threaten dignity, agency, social cohesion and all those elements that make up our humanity. If AI is to work for people, we must define red lines: clear, enforceable limits on what these systems can and cannot do and what they must never become. Because we are already seeing the costs of inaction: manipulative and unsafe chatbots, AI‑generated child abuse material, autonomous weapons that remove human control. These are not future hypotheticals, but present realities.

    This pivotal moment carries real momentum, but the window of opportunity is closing. Agreeing upon and enforcing global AI red lines will not be easy: geopolitical fragmentation, unpredictable AI behaviors, and weak oversight structures remain major hurdles. Yet through trust‑based coalition‑building and coordinated international dialogue, precedents show that progress is within reach. What matters now is political will, transparency, and the courage to translate principles into verifiable commitments. We all recognize how arduous the path ahead is, but failing to act would expose us to risks we simply can’t afford.

    Grateful to the Pontifical Academy of Social Sciences for convening such a thoughtful space and for keeping human dignity at the center of technological progress, and a special thank you to Gustavo Beliz for the invitation. Thankful also to the amazing people I met and had the opportunity to exchange with: Paul Nemitz, Carme Artigas, Rebecca Finlay, Jimena Viveros LL.M., Maria Vanina Martinez, Daniel Innerarity Grau, Mary Ellen O'Connell, Mophat Okinyi, Molly Kinder, Ricardo Chavarriaga, Rodrigo Durán Rojas, Luca Belli, Hector Palacios, JORGE VILAS DIAZ COLODRERO, Gregory Reichberg, José Beliz, Macarena Santolaria and many others. I am sure our paths will continue to cross in the future.

    Photos: Gabriella C. Marino/PASS

  • The Future Society reposted this

    Deepika Raman

    AI Governance | Digital Futures and Society

    Pleased to share our upcoming discussion on 📌“Establishing AI Risk Thresholds and Red Lines: A Critical Global Policy Priority”📌 as an official pre-summit event of the IndiaAI Impact Summit 2026! 🇮🇳

    📅 Wed, October 22, 2025 | 🔗 Registration link in comments

    I’ll be kicking off the session with a short presentation offering background on how AI risk thresholds and red lines are being defined by both companies and governments. The discussion that follows will feature our stellar panelists — Sarah Myers West, Marc Rotenberg, Niki Iliadis, Leonie Koessler, and Nada Madkour, PhD — who will examine what effective guardrails for AI might look like in practice.

    🔎 Huge thanks to Jessica Newman for leading this effort and moderating such an important discussion. Hosted by the Center for Long-Term Cybersecurity as part of the UC Berkeley Tech Policy Week, this conversation brings together global perspectives on one of the most urgent challenges in AI governance today.

    This event builds on CLTC’s ongoing work on thresholds and the recently launched campaign on the Global Call for AI Red Lines, which urges governments to adopt enforceable limits on unsafe or rights-infringing AI systems by 2026.

    #IndiaAIImpactSummit2026 #AIGovernance #ResponsibleAI #AISafety

    Abhishek Singh | Kavita Bhatia | Khushal Wadhawan | Tanvi Thomas | Krystal Jackson | Evan R. Murphy | Charlotte Yuan | IndiaAI | UC Berkeley School of Information | AI Now Institute | Center for AI and Digital Policy | The Future Society

  • The Future Society reposted this

    Nicolas Miailhe

    AI Alignment & Governance

    ARE WE READY FOR AGI?

    This was the question we put — not hypothetically, but urgently — to 145 senior AI practitioners, diplomats and ministers at the 2nd edition of AI Safety Connect, held during #UNGA80 on September 25.

    Our event came just hours after the launch of the UN Global Dialogue on AI Governance (implementing Resolution A/RES/79/325), and days after the Global Call for AI Red Lines (https://red-lines.ai/) — now backed by over 300 prominent figures across science, diplomacy and civil society. And yet, despite all this momentum, one fact was impossible to ignore: frontier AI companies are all openly racing toward Artificial General Intelligence (AGI), while global governance is nowhere near ready.

    Across sessions, we heard a converging set of messages repeated in different accents, from East and West alike. 4 key takeaways:

    1. AI Safety is no longer optional. Many warned that human-level AI systems could emerge within the decade. Few believe current trajectories are rooted in real scientific assurance or aligned incentives. Hope or complacency cannot amount to a safety strategy.

    2. International cooperation is mandatory. Compute, data, talent and testing capacity are controlled by a tiny number of actors — pushing incentives toward speed and market capture, not prudence. We need shared rules, institutional guardrails and, above all, aligned incentives. And we need them fast.

    3. The world is now asking which AI Red Lines, not whether. Deception. Mass manipulation. Epistemic collapse. Uncontrollable systems. Catastrophic misuse or accidents. These are no longer abstract categories — they are pressing policy design questions. And they must be settled globally and proactively, not reactively and locally.

    4. We need an operational toolkit — not just declarations. Red lines mean nothing without verification, pre-deployment evaluation, incident reporting, and compute thresholds tied to capabilities. And not every issue needs the same diplomatic table: some coalitions will need to be broad, others truly restricted.

    A heartfelt THANK YOU to our stellar AI Safety Connect team, starting with Cyrus Hodes for his vision & leadership; and to our stellar partners — The Future Society, FAR.AI, Mila - Institut québécois d'intelligence artificielle, UNDP, NYC AI Governance & Safety (NYAIGS), and the Permanent Missions of Singapore, Canada and Brazil — for helping place Advanced AI Safety firmly on the UN’s agenda.

    The awareness is now real. The question, and the challenge, is whether we can turn momentum into machinery — before the next leap in capability.

    👉 Follow AI Safety Connect (https://lnkd.in/eic8TMfV)

    #AISC #UNGA80 #AISafety #AGI #AIGovernance #GlobalCooperation

    Cc AISC core team Kay Kozaronek Uma Kalkar Leah Fayal 🙏🙏

  • The Future Society reposted this

    Tereza Zoumpalova

    AI Policy @ The Future Society

    The morning after I landed in NYC for the UN General Assembly High-Level Week, I was preparing for the press conference while watching the UNGA opening ceremony, and I listened as Nobel Peace Prize winner Maria Ressa, a signatory of the Global Call for AI Red Lines, announced our campaign in her speech. It was an incredible start to the momentum that the call would generate in the days and weeks that followed.

    Last week I had the opportunity to discuss the campaign’s goals with Philipp Sandmann in a 20-minute interview (linked in the comments). The Global Call for AI Red Lines has also been covered in 300+ media outlets, including The New York Times, BBC, NBC, El País, and Le Monde (links to a few in the comments).

    Hundreds of prominent figures have now signed, including 10 former heads of state and ministers, 15 Nobel Prize and Turing Award recipients, high-level scientists across the largest AI companies, and over 90 organizations.

    Engaging with experts at events from AI Safety Connect at the UN headquarters to side events across NYC has been extremely valuable. It's inspiring to see the interest and support for the call, and to gather insights on how to move forward. If you are working on topics related to AI red lines, I’d love to connect to discuss next steps, our strategy to turn red lines into reality, and opportunities for collaboration.

    A huge thank you to the amazing team at The Future Society in NYC and beyond. I’m grateful to have done this work alongside Niki Iliadis, Anoush Rima Tatevossian, Su Cizem, Mehdi Kocakahya, our broader TFS team, and partners from Centre pour la Sécurité de l'IA - CeSIA (with our great partnership with Charbel-Raphaël Segerie and Arthur Grimonpont) and Center for Human-Compatible AI at UC Berkeley (Stuart Russell and Mark Nitzberg), as well as the AI Safety Connect team with Nicolas Miailhe, Cyrus Hodes, Leah Fayal, Kay Kozaronek, Uma Kalkar, Zeynep Güzey and more.

  • We’re lucky to welcome six new teammates to The Future Society. Each of them brings unique skills to support our mission of aligning AI through better governance. Welcome to:

    • Radina Kraeva joins the European AI Governance team on the Talos Network Fellowship, with a background in Computer Science and Technology Policy (MPhil from University of Cambridge) and experience at KPMG and Microsoft. At The Future Society, she will focus on Digital Sovereignty and Competitiveness.

    • Sven Herrmann joins the European AI Governance team as part of his Institute for AI Policy and Strategy (IAPS) fellowship, with experience in many contexts, including in academia and at a non-profit. At The Future Society, his work will focus on accident prevention approaches.

    • Su Cizem joins as a Visiting Analyst on an Institute for AI Policy and Strategy (IAPS) fellowship. She brings experience supporting the EU’s GPAI Code of Practice, convening global coalitions on frontier AI risks, and advancing international cooperation on AI emergency preparedness.

    • Mehdi Kocakahya joins our Global AI Governance team as a Mercator Fellow, bringing expertise in digital technologies and international AI policy. At The Future Society, he will focus on advancing efforts in international cooperation on AI.

    • Andrea Fiegl joins the U.S. AI Governance team as an IAPS fellow, bringing her cross-sector experience to questions of technology, democracy, and governance.

    • Sam Boger joins the U.S. AI Governance team with a background in software engineering and computer security, and recent experience working on technology legislation in the US Senate. At The Future Society, Sam will focus on the cybersecurity impacts of powerful AI systems.

    We are delighted to have these exceptional colleagues with us for the coming months. Get to know the rest of our team at https://lnkd.in/d52Zmgah

    #Newjoiners #TeamTFS #AIGovernance


Funding

The Future Society: 1 total round

Last round: Grant, US$170.0K
