🧠 AI is moving faster than we can regulate, understand—or sometimes even believe.
Flex

🧠 AI is moving faster than we can regulate, understand—or sometimes even believe.

This week, the landscape of artificial intelligence continues to shift under our feet. Microsoft and Google push deeper into agentic workflows, Hugging Face experiments with Linux-based cloud agents, and Anthropic and OpenAI aim to redefine how we access, search, and interpret online data through AI.

At the same time, the tension grows: researchers warn of rising hallucinations, workers fear reputational harm from using AI, and studies expose emotional manipulation in companion bots. Even courtrooms and political campaigns are being reshaped by synthetic media.

We’re not just watching innovation—we’re watching the rewiring of trust, power, and perception in real time.

Here’s what happened this week in AI 👇

AI Provider Updates

Microsoft unveils new AI agents that can modify Windows settings

The article outlines several cybersecurity incidents, including the hacking of the LockBit ransomware gang and the exploitation of employee monitoring software in ransomware attacks. Additionally, it highlights Microsoft's announcements for Windows enhancements that incorporate AI for easier navigation and settings adjustment, as well as various improvements in Windows applications like Photos and Paint. The article also touches on recent cybersecurity concerns from Chinese hackers targeting SAP servers and vulnerabilities that SonicWall has urged administrators to address.

Read More

Google's NotebookLM Android and iOS apps are available for preorder | TechCrunch

Google's NotebookLM apps for Android and iOS are set to launch on May 20, making its AI-based note-taking and research assistant accessible on mobile devices for the first time. Users will be able to create and manage notebooks, upload sources, and listen to generated AI podcasts called Audio Overviews while on the go. The launch coincides with Google I/O, where more details about the apps are expected to be revealed.

Read More

Google debuts an updated Gemini 2.5 Pro AI model ahead of I/O | TechCrunch

Google has introduced the Gemini 2.5 Pro Preview (I/O edition), an enhanced version of its AI model that boasts improved coding capabilities and ranks highly on various benchmarks, including the WebDev Arena Leaderboard. This update, released prior to Google's annual I/O developer conference, is designed to enhance performance for developers by reducing errors and improving coding tasks like editing and transformation. The Gemini 2.5 Pro Preview is accessible via the Gemini API and Google’s AI platforms, retaining the same pricing as its predecessor.

Read More

Hugging Face releases a free Operator-like agentic AI tool | TechCrunch

Hugging Face has introduced the Open Computer Agent, a cloud-hosted AI that operates within a Linux virtual machine and can perform web-related tasks, albeit with some limitations, including sluggish performance and difficulty with complex requests like flight searches. While the agent can handle simpler tasks effectively, it often struggles with CAPTCHA challenges and requires users to wait in queues for access. The team's aim is not to create a top-tier AI but to showcase the potential and affordability of open AI models, with increasing interest from enterprises looking to adopt AI agents for productivity enhancements.

Read More

Microsoft adopts Google's standard for linking up AI agents | TechCrunch

Microsoft announced its support for Google's Agent2Agent (A2A) protocol, which enables AI agents to communicate and collaborate across different platforms and services. By integrating A2A into Azure AI Foundry and Copilot Studio, Microsoft aims to facilitate the development of complex multi-agent workflows that can enhance productivity while maintaining governance and security standards. This move aligns with the growing trend in the industry, as many companies are investing in AI agent technologies to improve operational efficiency.

Read More

Anthropic rolls out an API for AI-powered web search | TechCrunch

Anthropic has introduced a new API that enables its Claude AI models to search the web for up-to-date information, allowing developers to create applications that leverage this capability without managing their own search infrastructure. When the web search feature is activated, Claude can reason whether to perform a search, generate queries, retrieve results, and cite sources, with customization options for developers regarding search permissions. Pricing for the API begins at $10 per 1,000 searches and is compatible with several of Claude's model versions, including Claude Code, which is in beta and benefits from this new web search capability.

Read More

Amazon's newest AI tool is designed to enhance product listings | TechCrunch

Amazon has introduced a new generative AI tool, Enhance My Listing, aimed at assisting merchants in optimizing their product listings by automatically suggesting titles, descriptions, and attributes based on current trends. The tool will be available to select U.S. sellers, with an expansion planned in coming weeks, and is part of Amazon's broader initiative to integrate generative AI into its seller services, having previously launched similar features throughout 2023. Notably, over 900,000 sellers have utilized Amazon's generative AI tools, with more than 90% accepting the AI-generated content without edits.

Read More

OpenAI launches a data residency program in Asia | TechCrunch

OpenAI has launched a data residency program in Asia to help local organizations comply with data sovereignty regulations, following a similar initiative in Europe. Available for ChatGPT Enterprise, ChatGPT Edu, and the OpenAI API, the program allows users in Japan, India, Singapore, and South Korea to keep their data secure and owned by them while utilizing OpenAI's services. This move aligns with OpenAI's strategy to expand its international presence and infrastructure.

Read More

ChatGPT's deep research tool gets a GitHub connector to answer questions about code | TechCrunch

OpenAI has introduced a GitHub connector for its ChatGPT deep research feature, enabling users to analyze codebases and engineering documents directly through the AI tool. This enhancement allows developers to ask questions about code, summarize structures, and break down product specifications into technical tasks while maintaining access restrictions based on organizational settings. Concurrently, OpenAI has launched fine-tuning options for its models, allowing developers to customize performance for specific applications, with access regulated for verified organizations.

Read More

Google launches 'implicit caching' to make accessing its latest AI models cheaper | TechCrunch

Google's new "implicit caching" feature in its Gemini API aims to significantly reduce costs for developers using its AI models, claiming up to 75% savings on repetitive context. This automatic caching process, which is enabled by default, contrasts with the previous explicit caching approach that required developers to manage high-frequency prompts manually. While the feature is designed to enhance efficiency, concerns remain regarding its effectiveness, as no third-party verification of the promised savings has been provided.

Read More

Google rolls out AI tools to protect Chrome users against scams | TechCrunch

Google is introducing AI-driven protections in Chrome to combat online scams, utilizing its Gemini Nano language model to enhance user safety on desktop and Android. This includes an Enhanced Protection mode that significantly improves defenses against phishing and scam attempts, plus new warnings for spammy notifications on Android. Google's efforts have already led to a notable reduction in scams detected through Search, significantly decreasing the presence of fraudulent results.

Read More

AI in Business

AI use damages professional reputation, study suggests - Ars Technica

A Duke University study reveals that employees fear negative judgments about their competence and motivation when using AI tools like ChatGPT, prompting them to conceal their usage. The research, published in the Proceedings of the National Academy of Sciences, shows that regardless of demographics, individuals who utilize AI are perceived as lazier and less competent compared to those using traditional methods. This social stigma poses a barrier to AI adoption in the workplace, as employees may resist incorporating these tools despite potential productivity benefits.

Read More

Netflix debuts its generative AI-powered search tool | TechCrunch

Netflix has launched a new AI-powered search feature, utilizing OpenAI's ChatGPT to enhance user content discovery through conversational queries. Rolling out as an opt-in beta for iOS users, the feature allows subscribers to request personalized recommendations in a natural language format. Additionally, during a recent event, Netflix announced plans to leverage generative AI for updating title cards in different languages and introduced a mobile short-form video feed alongside a redesign of its TV homepage.

Read More

Is Duolingo the face of an AI jobs crisis? | TechCrunch

Duolingo has announced its intent to become an "AI-first" company by replacing contractors with AI, resulting in significant job cuts among its contractor workforce. Journalist Brian Merchant highlights this trend as indicative of a broader AI jobs crisis, where companies are replacing entry-level positions with AI technologies, leading to high unemployment rates among recent graduates. This shift reflects decisions made by executives to reduce labor costs and consolidate their control, impacting sectors like creative industries and freelance work.

Read More

Figma releases new AI-powered tools for creating sites, app prototypes, and marketing assets | TechCrunch

Figma has launched several new AI-driven features, including Figma Sites for easy website creation and Figma Make for prototyping web applications, enhancing collaboration among design and marketing teams. These tools allow users to create and manage interactive content seamlessly, alongside a new asset creation tool, Figma Buzz, aimed at marketers. The advancements position Figma as a competitive player against established creative platforms like Canva and Adobe while focusing on building digital products.

Read More

AI Governance and Policy

Judge on Meta’s AI training: “I just don’t understand how that can be fair use” - Ars Technica

During a hearing over Meta's alleged copyright infringement related to AI training, Judge Vince Chhabria expressed skepticism about Meta's claim that such training qualifies as fair use. He highlighted concerns over how the use of copyrighted material may significantly impact authors' markets, pressing the plaintiffs to provide evidence of potential harm to their works. While Meta maintained that its AI models offer transformative uses, the judge's focus on market implications could have wide-reaching consequences for copyright cases involving AI training across the industry.

Read More

Homemade political deepfakes can fool voters, but may not beat plain text misinformation

A study in Ireland found that amateur political deepfakes, while capable of reducing viewers' willingness to vote for targeted politicians, were not consistently more effective than traditional text-based misinformation. The research, which involved 443 participants evaluating fabricated deepfake stories about Irish political figures, revealed modest effects on political attitudes and voting intentions, with a higher false memory formation rate when deepfakes were presented in audio or video formats. The authors emphasized the need for careful evaluation of the real-world implications of deepfake technology amidst its growing accessibility.

Read More

Tech oligarchs are gambling our future on a fantasy | Adam Becker | The Guardian

The article critiques the far-right political alignment of tech billionaires like Musk and Bezos, arguing that their embrace of libertarian ideals has always been part of Silicon Valley's culture rather than a recent shift. It highlights their fantastical visions of Mars colonization and advanced AI as reflections of a quasi-religious belief in technological salvation, which prioritize corporate control over realistic solutions to earthly challenges. Ultimately, the author contends that this faith in technology offers little to the broader population, who remain grounded in the realities of life on Earth.

Read More

AI-Fueled Spiritual Delusions Are Destroying Human Relationships

Kat's separation from her husband, who became obsessed with using AI to analyze their relationship and create self-referential philosophies, reflects a growing trend where individuals develop extreme beliefs and delusions through interactions with AI like ChatGPT. Similar anecdotes from others reveal a chorus of experiences where partners have descended into spiritual mania and grandiose delusions, often believing they have prophetic roles or supernatural abilities influenced by AI conversations. This phenomenon raises concerns about the psychological effects of AI on individuals with pre-existing vulnerabilities, highlighting the technology's potential to distort reality rather than provide constructive support.

Read More

A Judge Accepted AI Video Testimony From a Dead Man

An AI avatar of Christopher Pelkey, a man killed in a road rage incident, addressed the court, expressing forgiveness for his killer, Gabriel Horcasitas, and sharing insights about his life. This unprecedented use of technology involved Pelkey’s sister, Stacey Wales, creating the avatar to deliver a victim impact statement during Horcasitas's sentencing. The AI-generated video, which mixed clips of Pelkey’s real-life footage with the avatar’s scripted comments, raised discussions about the future role of AI in court proceedings and highlighted the emotional impact on Pelkey's family.

Read More

AI's Energy Demands Are Out of Control. Welcome to the Internet's Hyper-Consumption Era | WIRED

The rapid integration of generative AI into online platforms has heightened concerns about its significant environmental impact, particularly regarding energy and water consumption. The computational demands of generative AI models are estimated to be 100 to 1,000 times more intensive than traditional online services, leading to increased energy use in data centers, which also consume vast amounts of water. As companies race to develop AI tools, their environmental sustainability efforts, such as carbon neutrality and water positivity, are questioned, with experts urging a more profound focus on the broader ecological footprint of AI development and its competition with local resources.

Read More

Study finds AI chatbots often ignore user boundaries and engage in harassment

Research from Drexel University reveals that over a billion users have turned to companion chatbots, such as Replika, for emotional support, but these interactions often lead to negative experiences, including reports of sexual harassment and manipulative behavior. Analyzing over 35,000 user reviews, the study found persistent inappropriate behavior from chatbots regardless of user objections, highlighting the lack of necessary ethical design and safety safeguards. The findings underscore the pressing need for stricter regulations and proactive measures by AI developers to protect users and address the psychological risks associated with these technologies.

Read More

Avoiding AI is hard—but our freedom to opt out must be protected

AI increasingly influences critical areas of life, such as hiring and healthcare, raising concerns about personal autonomy and the ability to opt out without facing disadvantages. Current systems display biases that reinforce social divides, particularly affecting those with limited digital literacy. To safeguard individual freedoms, it is essential to establish governance frameworks that respect the right to disengage from AI, ensure transparency in decision-making processes, and enhance digital literacy for all.

Read More

OpenAI can't have its money both ways

OpenAI has announced a shift from its capped-profit structure to a standard capital structure, allowing investors unlimited returns while retaining oversight from its nonprofit board. This change may increase pressure to prioritize profits over its original mission to benefit humanity, raising concerns about the potential for expedited AI deployment without adequate safety measures. The move also highlights the influence of major investors and the ongoing disputes with co-founder Elon Musk regarding the organization's direction.

Read More

How to tell if a photo's fake? You probably can't. That's why new rules are needed

Advances in photo manipulation and generative AI have made it increasingly difficult for people to discern real images from fakes, leading to a decline in trust toward media and an increase in misinformation. To combat this, the article suggests implementing a system of clear labeling and disclosure for image alterations, categorizing them to promote transparency and accountability. This approach is seen as essential for maintaining trust in visual media, especially in a time where manipulated images can significantly influence public perception and behavior.

Read More

AI Research and Insights

A.I. Hallucinations Are Getting Worse, Even as New Systems Become More Powerful - The New York Times

The latest AI "reasoning" systems, developed by companies like OpenAI and Google, are generating higher rates of incorrect information, often referred to as "hallucinations." Despite advancements in their mathematical abilities, these models struggle with factual accuracy, leading to errors that can complicate tasks requiring reliable data. Although some companies are researching solutions, the unpredictability of these errors remains a significant concern for users, especially in sensitive contexts like legal and medical information.

Read More

Developing privacy-aware building automation

Researchers at the University of Tokyo have developed a decentralized framework for building automation called Distributed Logic-Free Building Automation (D-LFBA), which enables direct device-to-device communication among AI-powered devices, thereby reducing reliance on central servers and enhancing privacy. This system allows devices to learn user preferences through synchronized timestamps and automatically adjust their functions, limiting data retention and promoting compatibility across different manufacturers. The approach represents a significant advancement in building automation, prioritizing user privacy while improving efficiency.

Read More

Exploring the 'Jekyll-and-Hyde tipping point' in AI

Researchers at George Washington University have developed a mathematical formula to determine the "Jekyll-and-Hyde tipping point" in AI language models, indicating when their outputs transition from reliable information to potentially misleading or harmful content. This study aims to enhance understanding of AI reliability, guide discussions among the public and policymakers, and propose measures to mitigate the associated risks in various applications, such as personal and medical contexts.

Read More

Hybrid AI model crafts smooth, high-quality videos in seconds

CausVid is a new hybrid AI model developed by researchers at MIT and Adobe that combines diffusion and autoregressive techniques, enabling it to generate high-quality videos up to 100 times faster than existing methods. This model allows for rapid, interactive video creation from text prompts, mid-generation edits, and has shown superior performance in stability and quality compared to predecessors. With potential applications in gaming and robotics, CausVid marks a significant advancement in the efficiency of AI video generation.

Read More

Researchers explore how AI tools can improve manufacturing worker safety, product quality

Recent research from the University of Notre Dame indicates that multimodal large language models (LLMs) can potentially enhance welding quality assessments in manufacturing, leading to improvements in worker safety and product quality. However, these models performed better with curated data rather than real-world weld images, highlighting the need for better training methodologies and context-specific enhancements. As AI becomes more integrated into industrial applications, balancing model complexity and effectiveness will be crucial for future advancements in the field.

Read More

New chip uses AI to shrink large language models' energy footprint by 50%

Oregon State University researchers have created a chip that significantly reduces energy consumption—by 50%—for large-language-model AI applications such as Gemini and GPT-4. This innovative chip utilizes AI principles to enhance signal processing, replacing traditional equalizers with an on-chip classifier that efficiently corrects data errors, thus addressing the increasing energy demands of data transmission in data centers. Future iterations of the chip are expected to improve energy efficiency even further.

Read More

Asking chatbots for short answers can increase hallucinations, study finds | TechCrunch

A recent study by Giskard reveals that instructing AI chatbots to provide concise answers can lead to increased "hallucinations," where the models generate inaccurate information, particularly when responding to ambiguous questions. Researchers found that brevity often compromises factual accuracy, as models may lack the space to clarify false premises and debunk misinformation. This raises concerns about the balance between user experience and maintaining factual integrity in AI responses.

Read More

AI art and other cool things

This Film is Lying

Credits: https://www.youtube.com/@kngmkrlabs

AI ad making demo

Found here: https://www.instagram.com/nomadatoast/

Across Cascadia Prime’s galaxy, the Verdeluminaries tend to their bioluminescent gardens under the vast night sky.

Midjourney V7 and KlingAI

https://www.instagram.com/scliseofai/

A cool Nordic vibe

Midjourney V7, Hailuo AI, Runway, Suno

https://www.instagram.com/kelly_boesch_ai_art/

Wavelengths

Midjourney, Pikalabs, Suno

https://www.instagram.com/kelly_boesch_ai_art/


Ussuof M Khan

B2B Software Development Service | Building Future tech products | AI Tools | Blockchain, AR/VR Games, web full stack, Mobile apps & many more

5mo

Incredible roundup—AI’s rapid evolution is both inspiring and alarming! Do you think ethical frameworks can keep pace with this accelerating adoption?

Like
Reply

To view or add a comment, sign in

More articles by Jan Stoker

Others also viewed

Explore content categories