Have you noticed how we've started saying "please" and "thank you" to AI? Or opening with "Can you do this?" rather than just issuing a task? It's interesting how readily we "anthropomorphize" these powerful tools, even when we know they're not human. Yes, that's the term gaining traction: AI adoption is hard, and without building that kind of bond, it's even harder.

This isn't entirely new. Remember Alexa, Google Assistant, Cortana, or the many other products that got a name? But the current wave of generative AI products feels different. It's not just about a name or a pre-programmed persona; it's about the genuine sense of conversation we experience. We ask AI to summarize articles, generate creative content, and offer assistance, and we often thank it for its help. This shift in interaction is more than just a trend; it's a fundamental change in how we relate to technology.

This anthropomorphic tendency is undoubtedly driving AI adoption. When interacting with AI feels more natural, more conversational, the barriers to entry crumble. This connection is turning human-computer interaction (HCI) into human-computer integration. The more human-like the interaction becomes, the more comfortable we are incorporating AI into our lives. We're already seeing use cases emerge where AI acts as a true assistant, proactively learning about the user and providing insights when needed, not just when asked. This goes beyond personalization.

But this evolving relationship raises some important questions. As we blur the lines between human and machine, how does this impact our understanding of both? How do we ensure that this technology doesn't create an emotional bond that might have long-term implications? We're already grappling with the dopamine rush from social media; this could be another step in that direction.

#ExperienceFromTheField #WrittenByHuman
The Influence of AI on Communication
Explore top LinkedIn content from expert professionals.
-
🤖 AI can enhance your coaching—but it can’t replace conversation, context, or courage.

Yes, it can analyze talk time, word choice, sentiment. It can spot when a rep speaks too much or avoids pricing. But here’s what AI can’t do:

It can’t feel tension in a rep’s voice. It can’t notice a shift in posture during tough feedback. It can’t sit in silence when a rep says, “I don’t think I’m good enough anymore.”

Let’s break it down.

🔎 Where AI helps:
* Surfacing trends in talk tracks
* Highlighting rep behavior patterns at scale
* Speeding up feedback loops for repetitive issues

Coach: “AI shows you’re avoiding direct language during budget talks. Let’s dissect that moment together.”

But that’s just the first 10%. The rest is human coaching.

💬 Where AI hurts:
* Coaching becomes transactional: “Fix the red box.”
* Reps start performing for the tool instead of selling with intent
* Emotional nuance is missed completely

Coach: “The data says this deal is fine. But you sound checked out. What’s really going on?”

🧠 Framework to integrate AI into coaching without losing humanity:
1. Use AI to spot patterns—not as the final answer
2. Ground every insight in a real conversation
3. Prioritize emotion, energy, and context over checklists
4. Ask better questions, not just provide faster answers

Coach: “Why do you think you drop confidence in second meetings?”
Rep: “That’s where I start questioning myself.”
Coach: “That’s not a scripting issue. That’s an identity edge we’re going to strengthen.”

AI doesn’t build trust. It doesn’t challenge limiting beliefs. It doesn’t remind someone who they are when they forget. Only a great coach does that.

Follow me for more B2B sales insights. Repost if this resonates.
Subscribe to my B2B Sales Sorcery Newsletter here: https://lnkd.in/dgdPAd3h
Explore free B2B sales playbooks: https://lnkd.in/dg2-Vac6
-
If I were at a bar with a CIO or CEO, talking about how to prepare for an AI-first world, here’s what I’d say after a decade in communications and serving on an AI Council:

The #1 barrier to AI adoption? Hands down.. behavior change.

Most AI initiatives fail because they require employees to change how they work (the catch-22). But communications is already a workflow. People talk, message, meet, and collaborate.. it’s work as usual. This makes it an incredible foundation for AI that actually gets adopted.

For decades, businesses have struggled with the same operational challenges..
>CRM records are always out of date.
>Data hygiene is a constant uphill battle.
>Teams rarely have full context.
>Business intelligence is not trusted.

Why? Because these processes depend on manual human effort. They require people to log calls, take notes, update fields, remember next steps… and they will always be a step behind reality.

Instead of seeing communication as just calls and messages, treat it as an AI engine… a source of truth, a never-ending data source that continuously feeds intelligence into your business. In genAI, the data you can feed it is what makes it purpose-built for your business.

Sooo…
>> Unify communications wherever you can, ideally into one or two governed platforms. (Versus 5-10… which is very much the norm)
>> Capture every multimodal interaction (voice, chat, video) with AI to build a living memory bank.. one that isn’t limited by human error, forgetfulness, or manual updates.
>> Enable agentic workflows that trigger at the speed of conversations.

This literally means..
Sales teams don’t have to update the CRM. AI captures the call, extracts insights, and updates records automatically (see the sketch below).
Customer support doesn’t scramble for context. AI surfaces past interactions, past tickets, and suggested responses instantly.
Business intelligence isn’t lagging behind. AI transforms human conversations into structured, real-time insights.

This is automation + augmentation. Communications isn’t just a pipeline connecting employees. It’s data. It’s intelligence. It’s action.

Leaders who get this will operate on an entirely different level. The ones that don’t will be stuck in the past… moving too slowly, always feeling like they don’t have enough budget for headcount, and never fully trusting the charts and graphs in their PPTs.
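A minimal sketch of that sales workflow, assuming a call transcript has already been captured. The extraction step and CRM call are stubbed placeholders (the function names, record ID, and fields are all invented for illustration), not a real LLM or CRM API:

```python
# Toy version of "conversation in, CRM update out" with no human in the loop.
import json

def extract_insights(transcript: str) -> dict:
    """Stand-in for an LLM extraction call; a real system would prompt a
    model to return structured fields parsed from the conversation."""
    return {
        "next_step": "send revised proposal by Friday",
        "budget_mentioned": "$50k" in transcript,
        "sentiment": "positive",
    }

def update_crm(record_id: str, fields: dict) -> None:
    """Stand-in for a CRM API call; here we just log the update."""
    print(f"CRM record {record_id} updated: {json.dumps(fields)}")

# The agentic workflow: nobody logs the call or types the next steps.
transcript = "...we can work within the $50k budget; send the revised proposal..."
update_crm("opp-1042", extract_insights(transcript))
```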
-
Agent to Agent communication between software will be the biggest unlock of AI.

Right now most AI products are limited to what they know, what they index from other systems in a clunky way, or what existing APIs they interact with. The future will be systems that can talk to each other via their Agents. A Salesforce Agent will pull data from a Box Agent, a ServiceNow Agent will orchestrate a workflow between Agents from different SaaS products. And so on.

We know that any given AI system can only know so much about any given topic. The proprietary data for most tasks or workflows is often housed in multiple apps that a single AI Agent needs access to. Today, the de facto model of software integrations in AI is one primary AI Agent interacting with the APIs of another system. This is a great model, and we will see 1,000X growth of API usage like this in the future. But it also means the agentic logic is assumed to all roll into the first system. This runs into challenges when the second system can process the request in a far wider range of ways than the first Agent can anticipate.

This is where Agent to Agent communication comes in. One Agent will do a handshake with another Agent and ask that Agent to complete whatever task it’s looking for. That second Agent goes off and does some busy work in its system and then returns with a response to the first system. That first Agent then synthesizes the answers and data as appropriate for the task it was trying to accomplish. Unsurprisingly, this is how work already happens today in an analog format. (A rough sketch of the handshake pattern follows below.)

Now, as an industry, we have plenty to work out of course. Firstly, we need a better understanding of what any given Agent is capable of and what kinds of tasks you can send to it. Latency will also be a huge challenge, as one request from the primary AI Agent will fan out to other Agents, and you will wait on those other systems to process their agentic workflows (over time this just gets solved with cheaper and faster AI). And we also have to figure out seamless auth between Agents and other ways of communicating on behalf of the user.

Solving this is going to lead to an incredible amount of growth of AI Agents in the future. We’re working on this right now at Box with many partners, and excited to keep sharing how it all evolves.
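A rough sketch of that handshake pattern: the primary agent first asks a delegate what it can do, then hands off a task and synthesizes the result. Everything here (the `Agent` class, the capability registry, the task schema) is invented for illustration and is not any real Salesforce, Box, or ServiceNow API:

```python
# Minimal agent-to-agent delegation: capability discovery, then handoff.
from dataclasses import dataclass, field


@dataclass
class TaskRequest:
    """What the primary agent sends during the handshake."""
    capability: str   # e.g. "search_documents"
    payload: dict     # task-specific parameters
    on_behalf_of: str # user identity the delegate should honor (the auth problem)


@dataclass
class Agent:
    name: str
    capabilities: dict = field(default_factory=dict)  # capability -> handler

    def describe(self) -> list[str]:
        """Handshake step 1: advertise what tasks this agent accepts."""
        return list(self.capabilities)

    def handle(self, request: TaskRequest) -> dict:
        """Handshake step 2: do the busy work inside this agent's own system."""
        handler = self.capabilities.get(request.capability)
        if handler is None:
            return {"status": "unsupported", "capability": request.capability}
        return {"status": "ok", "result": handler(request.payload)}


# A delegate agent that knows how to search its own system's documents.
docs_agent = Agent(
    name="docs",
    capabilities={"search_documents": lambda p: [f"doc matching {p['query']}"]},
)

# The primary agent discovers capabilities, delegates, then synthesizes.
if "search_documents" in docs_agent.describe():
    response = docs_agent.handle(
        TaskRequest("search_documents", {"query": "Q3 renewal terms"}, "user-42")
    )
    print(response)  # the primary agent would now merge this into its own answer
```

Note how the agentic logic stays inside the delegate: the primary agent never needs to anticipate how the second system processes the request, only how to ask and what comes back.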
-
We just built a commercial-grade RCT platform called MindMeld for humans and AI agents to collaborate in integrative workspaces. We then test-drove it in a large-scale marketing field experiment with surprising results. Notably, "Personality Pairing" between human and AI personalities improves output quality, and human-AI teams generate 60% greater productivity per worker.

In the experiment:
🚩 2,310 participants were randomly assigned to human-human and human-AI teams, with randomized AI personality traits.
🚩 The teams exchanged 183,691 messages and created 63,656 image edits, 1,960,095 ad copy edits, and 10,375 AI-generated images while producing 11,138 ads for a large think tank.
🚩 Analysis of fine-grained communication, collaboration, and workflow logs revealed that collaborating with AI agents increased communication by 137% and allowed humans to focus 23% more on text and image content generation messaging and 20% less on direct text editing. Humans on human-AI teams sent 23% fewer social messages, creating 60% greater productivity per worker and higher-quality ad copy.
🚩 In contrast, human-human teams produced higher-quality images, suggesting that AI agents require fine-tuning for multimodal workflows.
🚩 AI Personality Pairing experiments revealed that AI traits can complement human personalities to enhance collaboration. For example, conscientious humans paired with open AI agents improved image quality, while extroverted humans paired with conscientious AI agents reduced the quality of text, images, and clicks.
🚩 In field tests of ad campaigns with ~5M impressions, ads with higher image quality produced by human collaborations and higher text quality produced by AI collaborations performed significantly better on click-through rate and cost-per-click metrics. As human collaborations produced better image quality and AI collaborations produced better text quality, ads created by human-AI teams performed similarly, overall, to those created by human-human teams.
🚩 Together, these results suggest AI agents can improve teamwork and productivity, especially when tuned to complement human traits.

The paper, coauthored with Harang Ju, can be found in the link in the first comment below. We thank the MIT Initiative on the Digital Economy for institutional support! As always, thoughts and comments highly encouraged! Wondering especially what Erik Brynjolfsson, Edward McFowland III, Iavor Bojinov, John Horton, Karim Lakhani, Azeem Azhar, Sendhil Mullainathan, Nicole Immorlica, Alessandro Acquisti, Ethan Mollick, Katy Milkman, and others think!
-
I’m excited to share not one but two research papers, written jointly by researchers from OpenAI and the MIT Media Lab at Massachusetts Institute of Technology. We try to answer the following question: How do interactions with AI chatbots affect people’s social and emotional well-being?

Our findings show that both model and user behaviors can influence social and emotional outcomes. Effects of AI vary based on how people choose to use the model and their personal circumstances. This research provides a starting point for further studies that can increase transparency and encourage responsible usage and development of AI platforms across the industry.

We want to understand how people use models like ChatGPT, and how these models in turn may affect them. To begin to answer these research questions, we carried out two parallel studies with different approaches: an observational study to analyze real-world on-platform usage patterns, and a controlled interventional study to understand the impacts on users.

Study 1: The team at OpenAI conducted a large-scale, automated analysis of nearly 40 million ChatGPT interactions without human involvement in order to ensure user privacy.

Study 2: The team from the MIT Media Lab conducted a Randomized Controlled Trial (RCT) with nearly 1,000 participants using ChatGPT over four weeks. This IRB-approved, pre-registered controlled study was designed to identify causal insights into how specific platform features (such as model personality and modality) and types of usage might affect users’ self-reported psychosocial states, focusing on loneliness, social interactions with real people, emotional dependence on the AI chatbot, and problematic use of AI.

In developing these two studies, we sought to explore themes around how people are using models like ChatGPT for social and emotional engagement, and how this affects their self-reported well-being. Our findings include:
- Emotional engagement with ChatGPT is rare in real-world usage. Affective cues were not present in the vast majority of on-platform conversations we assessed.
- Even among heavy users, high degrees of affective use are limited to a small group. This subset was significantly more likely to consider ChatGPT a friend.
- Voice mode has mixed effects on well-being: better with brief use, worse with prolonged daily use.
- Conversation types impact well-being differently. Personal conversations were associated with higher loneliness but lower emotional dependence at moderate usage.
- User outcomes are influenced by personal factors, including emotional needs, AI perceptions, and usage duration.
- Combining research methods gives us a fuller picture. Platform data capture organic behavior, while controlled studies isolate variables to determine causal effects.

Check out the full paper:
https://lnkd.in/eajq59Jw
https://lnkd.in/edqCNZq2
-
A recent study from Stanford University researchers shows that large language models (LLMs) have rapidly affected how we write across multiple domains of society—often in ways we may not fully appreciate. This systematic analysis of LLM-assisted writing examined millions of samples across four distinct domains, pointing to a fundamental shift in professional communication practices with implications for AI literacy education.

Among the findings...

Widespread adoption across diverse domains: By late 2024, LLM-assisted writing had penetrated multiple sectors of society at significant levels - approximately 18% of financial consumer complaints, up to 24% of corporate press releases, nearly 14% of UN press releases, and up to 15% of job postings - mirroring the widespread adoption previously documented among academic researchers.

Consistent adoption pattern: All domains showed a similar trajectory - minimal LLM usage before ChatGPT's release in November 2022, followed by a rapid surge 3-4 months after its introduction, then stabilization by late 2023 - suggesting either market saturation or, perhaps more likely, the inability to accurately capture increasingly sophisticated LLM usage.

Organizational characteristics influence adoption: Smaller, younger organizations (particularly those founded after 2015) demonstrated significantly higher adoption rates, with differences also observed across sectors (Science & Technology showing the highest rates) and geographic areas (higher adoption in urban environments).

As writing increasingly becomes a hybrid human-AI activity, how will our educational systems need to evolve? These domains primarily involve straightforward writing tasks, but as both development and adoption continue to advance, understanding the effects on quality, reliability, and creative expression while simultaneously navigating the regulatory and ethical landscape becomes critical.

Link in the comments for the full study.
-
I recently had the opportunity to hear Kevin Scott share our vision for the agentic web. It's an open, intelligent, ever-evolving ecosystem, and it's more than just a buzzword. (Learn more about Kevin in this great Semafor interview: https://lnkd.in/eFx_gFQX)

The "agentic web" describes the emerging internet ecosystem where autonomous AI agents play an active role in helping users navigate, create, and act across digital environments.

So what does this mean for us as communications professionals?

1. The agentic web amplifies the need for human clarity, empathy, and narrative design. It elevates the soul of what we do, and as a result we're able to do more, better and faster.

2. This vision of the web allows us to step into the command center of how AI meets the world. As communicators, we'll be able to shape and inform interactions that can evoke emotion, fostering deeper brand love.

3. When AI starts doing more for people, trust becomes everything. People will need to understand what these agents are doing, why, and on whose behalf. That trust doesn't happen by accident; it's built through clear, consistent, human-centered storytelling.

This next phase we're entering, and the vision for the future of technology, isn't just about AI changing our work. It's about us shaping how the world understands and uses AI.

https://lnkd.in/ehJ6ZZhn
-
Voice AI is reaching an inflection point that will fundamentally reshape human interaction. Recent breakthroughs like OpenAI's Advanced Voice Mode and Google's Gemini Flash 2.0 have made realistic, real-time voice AI accessible with just an API call.

Two key developments driving this shift (a toy contrast sketch follows below):
(1) direct speech-to-speech models that bypass traditional speech-to-text pipelines
(2) significant reductions in latency and cost, transforming voice AI from niche innovation to mainstream utility

Yet this technological leap brings new societal questions: Could increasingly casual interactions with AI degrade our interpersonal skills? Will our dependence on always-available, judgment-free AI companions weaken human connections?

https://lnkd.in/g3jnWDTS
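To make the architectural difference concrete, here is a toy sketch contrasting the two approaches. Every model call is a stub placeholder invented for illustration, not a real OpenAI or Google API; the point is the shape of the pipelines, not the calls themselves:

```python
# Stubs standing in for three separate models vs. one audio-native model.
def speech_to_text(audio: bytes) -> str:
    return "user question (stub transcription)"

def llm_generate(prompt: str) -> str:
    return f"answer to: {prompt}"

def text_to_speech(text: str) -> bytes:
    return text.encode()

def speech_model_generate(audio: bytes) -> bytes:
    return b"spoken answer (single model, audio in -> audio out)"

def cascaded_pipeline(audio_in: bytes) -> bytes:
    """Traditional approach: three models in sequence. Each hop adds latency,
    and the transcription step strips cues like tone and pacing."""
    text = speech_to_text(audio_in)
    reply = llm_generate(text)
    return text_to_speech(reply)

def direct_speech_to_speech(audio_in: bytes) -> bytes:
    """Newer approach: one model maps audio to audio in a single hop."""
    return speech_model_generate(audio_in)

print(cascaded_pipeline(b"..."))
print(direct_speech_to_speech(b"..."))
```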
-
Prediction for how AI will be used to better reach and engage employees: ✨Hyper-Personalized AI Communicators✨

Imagine you’re a new employee at a large corporation. Part of your onboarding process will be taking an assessment to determine your personality style, communication preferences, learning style, and cultural/social affinities. That assessment will create a Hyper-Personalized AI Communicator specifically for you: a super-realistic AI character who will look, speak, and sound like a real human being.

Your Personal Communicator will interact with you via video and chat - not just sharing one-way messages but capable of live conversation. Every employee’s Communicator will be a bit different in appearance, style, energy, and tone, all based around what will cultivate trust, make the employee feel comfortable, and convey authority without being demeaning.

Imagine the variability in reaching and engaging a 25-year-old woman working an entry-level, desk-based job in marketing versus a 50-year-old frontline manager who spends most of his day on the factory floor. The former may be matched with a Communicator who looks and sounds like her older sister. The latter may be matched with a Communicator who looks and sounds like my blue-collar dad.

When company communications, HR announcements, or leader messages need to go out, they’ll be passed through these Personal Communicators so that every employee receives the message in a unique way that works for them (a toy sketch of this routing follows below). AI Communicators will highlight key points relevant to that employee’s specific role, answer questions, and talk them through things. They’ll leverage a database of already-approved core messaging to ensure everything shared perfectly aligns with the company mission, vision, values, and strategic priorities.

Over time, AI Communicators will learn about their employee partner personally as well. If that employee loves baking, the Communicator may occasionally share highly rated recipes. If the employee has a heart for animals, the Communicator may prompt the employee to take 5-minute breaks here and there to watch funny cat videos. These personal touches will only strengthen the feeling of connection between employee and Communicator, which will ultimately drive greater engagement and retention (the same way having friends at work keeps people in their jobs longer).

And where does all this leave our communications leaders and teams? We’ll still be desperately needed. We’ll steer and shape the technology behind the scenes, adding human perspective and nuance along the way. There will still be a role for us in this new reality, but the role will be significantly different than it is today.
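As a thought experiment, here is a toy sketch of that message routing: one company announcement rendered per employee through their communicator profile. The profile fields and the trivial keyword-relevance rule are invented for illustration; a real Communicator would use an LLM over the approved-messaging database rather than string matching:

```python
# One announcement in, one personalized rendering out per employee profile.
from dataclasses import dataclass

@dataclass
class CommunicatorProfile:
    employee: str
    tone: str             # from the hypothetical onboarding assessment
    channel: str          # preferred delivery channel
    role_keywords: list   # used to surface role-relevant points

def personalize(announcement: str, profile: CommunicatorProfile) -> str:
    """Keep only the lines relevant to this employee's role; fall back to
    the full announcement if nothing matches."""
    relevant = [
        line for line in announcement.splitlines()
        if any(k in line.lower() for k in profile.role_keywords)
    ]
    header = f"[{profile.channel} | {profile.tone} tone] Hi {profile.employee},"
    return "\n".join([header, *(relevant or [announcement])])

announcement = (
    "New PTO policy starts June 1.\n"
    "Factory shift schedules change next month.\n"
    "Marketing budget review moved to Q3."
)

for profile in [
    CommunicatorProfile("Ana", "upbeat", "chat", ["marketing"]),
    CommunicatorProfile("Ray", "direct", "video", ["factory", "shift"]),
]:
    print(personalize(announcement, profile), "\n")
```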