Contrails AI

Software Development

Solving online safety problems using cutting-edge AI

About us

We help organizations detect deepfakes, prevent scams, and address high-risk safety challenges using advanced AI and agentic workflows. Our proprietary multi-modal detection engine analyzes video, audio, and images to accurately identify synthetic media, deepfakes, misinformation, and policy violations.

Industry
Software Development
Company size
2-10 employees
Type
Privately Held
Founded
2023
Specialties
AI, Trust and Safety, Artificial Intelligence, Content Moderation, and Synthetic Media Detection

Updates

  • Contrails AI reposted this

    Redefining Online Safety with AI

    Contrails AI has closed its seed funding round, co-led by IAN Group and Huddle Ventures. The startup is accelerating its mission to make the internet safer with a cyber forensic engine that detects deepfakes, misinformation, and synthetic media threats quickly and accurately. Entrepreneurs like Ami Kumar, Founder of Contrails AI, are driving a revolution in digital trust. Watch him share the journey of building a world-class AI startup from India and transforming online safety globally. Digvijay Singh Mayank Agarwal #IANGroup #PortfolioSuccess #AI #Innovation #Entrepreneurship

  • Contrails AI reposted this

    View organization page for Huddle Ventures

    20,608 followers

    Excited to announce our investment in Contrails AI, led by Digvijay Singh and Ami Kumar.

    In an era that is experiencing its most profound transformation since the rise of social media, Generative AI has democratized creativity, making it possible for anyone to alter & morph videos, voices, images, and text at scale. But with this power comes an unrelenting wave of harmful use cases: deepfakes deployed in politics, AI scams extracting billions across industries, synthetic nudity and harassment spreading unchecked.

    Contrails AI is tackling exactly this unsolved pain point emerging with the advancement of GenAI. At its core is the Deepfake Intelligence Toolkit (DIT), a multimodal, real-time platform that detects, classifies, and labels content before it reaches users.

    Ishaan | Sanil | Rishiraj | Sarthak IAN Group Ajai Chowdhry

    News Flash: https://lnkd.in/gcaAUu64 Read more about our thesis here:

  • View organization page for Contrails AI

    1,197 followers

    🚀 Here we go! This one’s a big milestone for us.

    We’re proud to share that Contrails AI has raised $1M in pre-seed funding, led by Huddle Ventures and IAN Group, with support from Ajai Chowdhry, Co-Founder of HCL.

    This funding helps us deepen our research, scale our detection framework, and work alongside tech platforms that share our belief that the future of the internet depends on what (and who) we can trust. The goal remains simple: make the internet safe again.

    Grateful to our investors, team, and partners who believe that digital safety deserves deep tech, not just policy.

    🌍 Onwards and upwards! We're excited to be building the trust layer for the generative era.

    #ContrailsAI #TrustAndSafety #AIsecurity #DeepfakeDetection

  • Contrails AI reposted this

    View profile for Ami Kumar

    Driving AI and Trust in Safety Solutions | Co Founder - Contrails.ai

    🌍 Stanford University Trust & Safety Research Conference – What an Experience!

    This conference is truly a league apart. 🔥 From authentic presentations to companies openly sharing advancements (and vulnerabilities!), it was packed with real field data and grounded insights into the realities of Trust & Safety.

    💡 It was incredibly refreshing to meet so many brilliant minds, all driven by the same mission – making the internet safer. For us at Contrails AI, it was a massive learning experience that will directly shape how we align our roadmap to combat emerging threats.

    So inspiring to discuss the future of online safety with Julie Inman Grant, Matthew Soeth, Rodrigo Tamellini, Avi Jager, PhD, Susan B., and so many more. Grateful to be part of this global gathering of practitioners passionate about changing the status quo. 🚀

    #TrustAndSafety #Stanford #OnlineSafety #EmergingThreats #Contrails #AI #SaferInternet Dave Emmanuel Arnika Maria Vaishnavi David

  • 🛡️ Safer platforms don’t come from one perspective. They’re built through the daily work of operators, the insights of researchers, and the input of product leaders, policymakers, and civil society. The 4th Annual Trust & Safety Research Conference brings all of them together.

    Our Co-Founder, Ami Kumar, will be at Stanford University's Frances C. Arrillaga Alumni Center. He will be joining conversations on how platforms can handle the toughest Trust & Safety challenges. From deepfakes to fraud to harmful content, we look forward to discussing threats already testing the limits of credibility online.

    It’s two days packed with research talks, workshops, panels, and even a poster session. If you’re working in Trust & Safety, this is where the future is debated: over coffee, in workshops, even at the happy hour. Be sure to catch Amitabh there.

    Get a seat here: http://bit.ly/4nhAVim
    📆 September 25–26
    📍 Stanford University

    #TSRConference #Stanford #DeepfakeDetection #Trust&Safety #ContrailsAI

  • Marketplace Risk New York 🗽 2025 was one for the books.

    There’s something about New York that makes conversations sharper. This year’s Marketplace Risk Conference had that energy from start to finish. You could feel it in the packed rooms and the hallway debates about what it will really take to keep trust intact online.

    Some risks look theoretical from a distance. And then there are the ones you can feel in the room because every operator, product lead, and risk manager is already living them. Deepfakes are firmly in that second camp.

    The highlight for us was our Co-Founder, Ami Kumar, joining Abhi Chaudhuri from LinkedIn and Bharath Teja Rapolu from Grubhub for a panel on deepfakes. It was called “Deepfake Detection: Safeguarding Trust in the Age of Synthetic Media,” but the reality is it’s about far more than detection. The conversation ran long, and the questions didn’t stop. That urgency is what makes this community so important. That kind of raw engagement is the clearest signal we could get: this problem is here, now, and growing.

    Deepfakes are warping product listings, reviews, and even digital identities. They are multiplying fast and hitting marketplaces where it hurts most: trust. At Contrails AI, our mission is simple but bold: make the internet safer. We’re working hand-in-hand with leading marketplaces to make that a reality.

    📸 A few highlights from an unforgettable week. Big thanks to the Marketplace Risk team for creating a space where tough conversations actually happen.

    #MRNYC25 #MarketplaceRisk #DeepfakeDetection #ContrailsAI #DigitalTrust Alice Kristin Garrett Jeff Josh Caroline Will Vaishnavi Christopher Alexandros Shegun Aidas Ashish Jaiman (we missed you on the panel, but will catch you on the podcast soon)

  • 🍎 Don’t we all love some Big Apple energy!

    Our Co-Founder, Ami Kumar, is hosting a panel at the Marketplace Risk New York Conference 2025. The panel will bring a range of perspectives to the table on "Deepfake Detection: Safeguarding Trust in the Age of Synthetic Media."

    What you’ll learn:
    • Frameworks for detecting and explaining deepfakes
    • Real-world success metrics for high-stakes AI
    • Challenges and strategies for scaling AI responsibly

    Deepfakes are no longer hypothetical. With incidents up 900% since 2022, they affect product listings, user reviews, and platform credibility. This panel will show how AI and human expertise intersect to safeguard trust.

    Join Ami in the Sittercity Room alongside:
    • Abhi Chaudhuri, Principal Product Manager, LinkedIn
    • Ashish Jaiman, Director of Product Management, X Microsoft
    • Bharath Teja Rapolu, Manager, Fraud & Risk Strategy, Grubhub

    📆 Sept 18
    🕥 10:45–11:15 AM

    If you're in New York, this is a session you won’t want to miss. Join us and be part of the conversation that's shaping the future of digital safety. Book your seat here - Marketplace Risk

    #MRNYC25 #MarketplaceRisk #TrustAndSafety #DeepfakeDetection #ContrailsAI

  • Contrails AI reposted this

    View profile for Ami Kumar

    Driving AI and Trust in Safety Solutions | Co Founder - Contrails.ai

    🚨 The Take It Down Act just raised the stakes for platforms.

    A felony case in Eau Claire County is testing a new state law: six charges filed for AI-generated child abuse images, entirely synthetic. This is the kind of case that sets a precedent. I’ve been digging into how the Take It Down Act changes the game. Here’s what matters:

    • Platforms must remove AI-generated “digital forgeries” of minors within 48 hours of a valid takedown request, or face FTC enforcement.
    • The law doubles the stakes: individuals who publish face criminal charges; platforms must build real removal workflows or risk regulatory action.
    • It’s not only for US-based platforms; if you're serving US users, you’re in the crosshairs too.

    Add to that the reality on the ground: the Internet Watch Foundation confirmed 1,286 AI-generated CSAM videos in just the first half of 2025, up from just 2 last year, and over 1,000 were category A (the worst of the worst).

    The law hinges on reactive takedowns. It’s step one. We also need real-time AI-centric detection tools, built-in escalation, and workflows that reflect this legal train that's already left the station. If you haven’t already, hit that launch button on your detection roadmap. The future of platform liability just got much more urgent.

    #TrustAndSafety #AICompliance #DeepfakeDetection #ContentModeration #ChildSafety

  • You've been asking for this. Michigan just delivered.

    For years, we've been fighting a battle against deepfakes 🎭 with one hand tied behind our backs. Now, things are finally shifting. 🇺🇸 Michigan just became the 48th state to pass a deepfake law, and it's a huge win.

    This law has teeth. It punishes creators 🧑💻 with prison time and huge fines 💰. And if you suffer a financial loss because of a deepfake? It's a felony 🚨.

    It’s inspiring to see the legal system finally catching up to the tech. The fight is tough, but with laws like this, we're one step closer to winning.

    Shout out to 404 Media! You guys are brilliant!
