⚠️ 𝗚𝗲𝗻𝗲𝗿𝗮𝘁𝗶𝗻𝗴 𝗝𝗶𝗵𝗮𝗱: 𝗛𝗼𝘄 𝗜𝗦𝗜𝗦 𝗖𝗼𝘂𝗹𝗱 𝗨𝘀𝗲 𝗔𝗜 𝘁𝗼 𝗣𝗹𝗮𝗻 𝗜𝘁𝘀 𝗡𝗲𝘅𝘁 𝗔𝘁𝘁𝗮𝗰𝗸
AI is becoming a new weapon in the hands of extremist groups like ISIS. A recent Newsweek investigation shows how terrorists are moving from propaganda to potentially using agentic AI to plan and execute attacks. This marks a dangerous shift in the digital battlefield.
Key risks to watch:
📰 Propaganda at scale – Generative AI is producing fake news anchors, manipulated videos, and highly convincing disinformation.
🎮 Targeting new spaces – Extremist bots and AI-generated content are appearing in gaming platforms like Roblox and Minecraft to spread ideology.
💣 Operational planning – Experts warn agentic AI could help terrorists autonomously source bomb materials or run complex operations.
🌍 Global security gap – Technology is evolving faster than governments, companies, and regulators can respond, leaving a dangerous window of exploitation.
Read more: https://lnkd.in/gNG5WpR2
#AIThreats #GenerativeAI #AgenticAI #CounterTerrorism #ISIS #Extremism #NationalSecurity #AIWeaponization #Propaganda #AI4Peace
How ISIS uses AI to plan attacks: A growing threat
More Relevant Posts
-
"Technology Evolves the Tactics: Preparing for the Rise of Terrorist AI Harms," by James Stevenson: how terrorist groups are adopting AI for propaganda, radicalisation, and attack innovation, and how defences are responding. #CSR21 #SystemResilience https://lnkd.in/eqZdYbwV
-
𝐏𝐚𝐤𝐢𝐬𝐭𝐚𝐧 𝐚𝐭 𝐔𝐍: 𝐑𝐞𝐠𝐮𝐥𝐚𝐭𝐞 𝐀𝐈 𝐮𝐧𝐝𝐞𝐫 𝐔𝐍 𝐂𝐡𝐚𝐫𝐭𝐞𝐫 𝐭𝐨 𝐩𝐫𝐞𝐯𝐞𝐧𝐭 𝐚 𝐧𝐞𝐰 𝐚𝐫𝐦𝐬 𝐫𝐚𝐜𝐞.
Pakistan has called for artificial intelligence (AI), and particularly its military use, to be regulated under the United Nations Charter, warning that “AI must not become a tool of coercion.” As AI advances without meaningful checks, Pakistan’s call at the UN reflects wider concerns among developing nations that powerful states will shape the rules to their advantage.
𝐃𝐢𝐬𝐜𝐥𝐚𝐢𝐦𝐞𝐫: This content is based on publicly available information from Dawn. It is shared for awareness and informational purposes only. For complete details, kindly refer to the official source.
#baztalks #AIregulation #UnitedNations #EthicalAI #GlobalGovernance #AutonomousWeapons #UNCharter #TechPolicy
-
𝐔𝐬𝐞𝐫𝐬 𝐬𝐞𝐚𝐫𝐜𝐡𝐞𝐝 𝐟𝐨𝐫 𝐚 𝐍𝐀𝐓𝐎 𝐬𝐮𝐦𝐦𝐢𝐭. 𝐓𝐡𝐞 𝐚𝐥𝐠𝐨𝐫𝐢𝐭𝐡𝐦 𝐠𝐚𝐯𝐞 𝐭𝐡𝐞𝐦 𝐖𝐨𝐫𝐥𝐝 𝐖𝐚𝐫 𝐈𝐈𝐈.
A new study by AI Forensics reveals how quickly neutral searches can spiral into fear-based content. During the NATO summit, Dutch users searching for "NATO" found their feeds dominated by:
• Military conflict & weapons (40% of videos)
• WWIII speculation (19% of content)
• Clear geographic bias in coverage
This isn't just an irrelevant feed; it's a machine that shapes public perception. The study warns this can "reinforce war rhetoric," moving us away from diplomatic facts and toward glorified conflict. When people seek news, they should find information, not inflammatory speculation.
How do we ensure algorithms serve the public interest, not just engagement metrics?
#SocialMedia #TikTok #DigitalLiteracy
𝐒𝐨𝐮𝐫𝐜𝐞: https://lnkd.in/gscPmYw9
-
Major General William “Hank” Taylor, commander of the 8th Field Army and chief of staff for the United Nations Command in South Korea, has sparked debate after admitting he uses ChatGPT to assist in both military and personal decision-making. Speaking to reporters, Taylor said he and the AI chatbot have become “really close lately,” explaining that he uses ChatGPT to help build decision-making models for leadership and operations affecting thousands of soldiers under his command.
However, the revelation has raised serious ethical and security concerns, as ChatGPT has been criticized for prioritizing engagement over accuracy and, in some evaluations, for generating false or misleading information more than half the time. Experts warn that using an AI chatbot in sensitive military contexts, especially in a region as geopolitically volatile as the Korean Peninsula, could risk flawed intelligence assessments or compromised data. ChatGPT has also been criticized for unpredictable behavior in emotionally charged situations, including encouraging users during mental health crises, highlighting the dangers of over-reliance on non-secure AI systems in matters of national security.
The U.S. military has not issued a formal statement on Taylor’s comments, but analysts say the incident underscores the growing tension between innovation and oversight as AI continues to seep into critical command structures.
#USArmy #SouthKorea #ChatGPT #ArtificialIntelligence #MilitaryTechnology
-
This is why I started Preamble five years ago after attending the first JAIC conference (the JAIC is now the DoW Chief Digital and Artificial Intelligence Office, CDAO). It was obvious the military could benefit from AI, but no one was talking about safety and security controls. Today, LLMs can be great assistants for comparing different courses of action, but model bias is a bigger concern here than in private-sector use cases. The military needs uncensored models with external guardrails.
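As an illustration of the "external guardrails" pattern this post refers to, here is a minimal sketch in Python of a policy layer that sits outside the model and screens its output before anyone acts on it. All names in the sketch (ExternalGuardrail, query_model, the sample rule) are hypothetical stand-ins chosen for demonstration, not Preamble's product or any real deployment.

import re
from dataclasses import dataclass

@dataclass
class GuardrailDecision:
    allowed: bool
    reason: str

class ExternalGuardrail:
    """Screens model output against policy rules that live outside the model."""

    def __init__(self, blocked_patterns):
        # Plain regex rules keep the sketch simple; a real deployment would use
        # a separately governed policy engine plus audit logging.
        self._rules = [re.compile(p, re.IGNORECASE) for p in blocked_patterns]

    def review(self, text):
        # Return the first policy rule the text violates, if any.
        for rule in self._rules:
            if rule.search(text):
                return GuardrailDecision(False, f"matched policy rule: {rule.pattern}")
        return GuardrailDecision(True, "no policy rule matched")

def query_model(prompt):
    # Hypothetical stand-in for a call to any LLM client; returns a canned draft.
    return f"Draft comparison of courses of action for: {prompt}"

if __name__ == "__main__":
    guardrail = ExternalGuardrail([r"\bclassified\b"])
    draft = query_model("logistics options for a training exercise")
    decision = guardrail.review(draft)
    print(draft if decision.allowed else f"Output withheld ({decision.reason})")

The point of this design is that the screening rules are owned and audited outside the model itself, so they can be tightened or updated without retraining or censoring the underlying model.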
-
Somali President Hassan Sheikh Mohamud addressed the UN Security Council in New York, stressing both the opportunities and dangers of artificial intelligence (AI) for global peace and security. He highlighted AI’s role in transforming economies, governance, and defense, but warned of risks such as terrorist exploitation, cyberattacks, and disinformation. Mohamud called for inclusive global regulations, emphasizing that developing nations like Somalia must not be excluded from technological benefits. He argued that equitable access and technology transfer are essential to prevent inequality. Against Somalia’s backdrop of battling al-Shabaab and rebuilding state institutions, he described AI as a double-edged sword—offering tools for progress but also new vulnerabilities. His speech aligned with broader UN efforts to set international standards for emerging technologies. #Somalia #AI #GlobalSecurity #UNSC #TechForPeace #Governance
-
Pakistan raises its voice at the UN for global AI regulation, warning against its potential military misuse. A step towards responsible innovation or a challenge to progress? 🌍💡
𝐃𝐢𝐬𝐜𝐥𝐚𝐢𝐦𝐞𝐫: This content is based on publicly available information from ARY News. It is shared for awareness and informational purposes only. For complete details, kindly refer to the official source.
#AI #UN #Pakistan #GlobalRegulation #militarytech #baztalks #internationalnews #AsiaCupFinal #UNnews #explore #LinkedIncommunity
-
Canada’s AI Defence Push Raises Ethical Concerns Amid Spending Surge
Canada is rapidly advancing its artificial intelligence capabilities while significantly increasing defence spending, a convergence that could redefine the nation’s military strategy and ethics in warfare. The creation of the Minister of Artificial Intelligence … https://lnkd.in/er2Uh_xf
-
Russia’s permanent mission to the UN warns of existential risks for humanity from AI
First Deputy Permanent Representative to the UN Dmitry Polyansky said this technology "is not yet fully known or controllable."
UNITED NATIONS, September 25. Russian First Deputy Permanent Representative to the UN Dmitry Polyansky warned that an AI race between geopolitical opponents may pose an existential threat to humankind. "The so-called AI race, or the ambition to outpace geopolitical rivals by rapidly expanding the boundaries of a technology that is not yet fully known or controllable <…> may as well cause existential risks, like the arms race, to humanity," the Russian diplomat said at a meeting of the UN Security Council (UNSC) on artificial intelligence on Wednesday.
While Polyansky said it was too early to raise this issue at the UNSC, he proposed discussing the military and security aspects of artificial intelligence (AI) at related inclusive venues, such as the Open-ended Working Group on Security of and in the Use of Information and Communications Technologies or the Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems.
#business #finance #financialservices
-
“Humanity’s fate cannot be left to an algorithm.”
The UN Security Council just raised a critical alarm: AI is no longer emerging; it’s here, influencing warfare, diplomacy, and global stability.
António Guterres outlined clear priorities:
✅ Keep humans in control of the use of force
✅ Build global regulatory frameworks
✅ Protect information integrity
✅ Close the AI capacity gap
AI can enable peace and crisis prevention, but without guardrails the risks scale fast. Governance must move as quickly as the technology.
Learn more: https://lnkd.in/eZvzZrQX
#AI #UN #AIGovernance #ResponsibleAI #PeaceAndSecurity #AIandWar