AI's Impact on Human Relationships

Experts warn that while AI can provide companionship, it may hinder real human relationships by making it easier to withdraw from social interactions. The potential benefits of AI in reducing loneliness must be balanced against the risks of diminishing social skills and emotional connections. As AI technology evolves, establishing limits on its use is crucial to preserve genuine human relationships.


Could AI do more harm than good to relationships, from romance to friendship?
Will artificial intelligence help or hamper real-life human interactions? Relationship experts
see reason for concern
Published: Sept 6, 2023, 11:23 a.m. MDT

Illustration: Eliza Anderson, Deseret News

Artificial intelligence could be the best thing — or the very worst — for those who are lonely. And
while chatbots and other AI-powered devices can provide what feels like friendship — or even
romance — experts warn that they could make warm human exchanges even harder to find or
nurture.

Human-AI relationships are not real. What is real is the risk that AI will make it easier to withdraw
from human companionship, which is rife with complexity, complications and rewards, experts say.

“This really concerns me,” Heather Dugan, relationship expert and award-winning author of “The
Friendship Upgrade” and “Date Like a Grownup,” told the Deseret News.

Dugan, who calls herself a “huge tech fan,” notes that AI chatbots “could be good for people who
want to practice for job interviews or who are struggling with basic social interactions. They could
help people remember how to engage and remind them that it feels good to have contact with other
people.”

But AI relationships could also be used in place of those human relationships — and diminish the
ability to find real connections.

When Lisa Bahar, a licensed psychotherapist from Newport Beach, California, attended the 2023
Milken Institute Global Conference, which focuses on leadership and influencing positive change,
the positives and pitfalls of artificial intelligence were big topics of conversation, she said.

Development and use of AI are escalating fast, and little is known about how to put parameters around the technology to keep people emotionally and physically safe, said Bahar, who also has doctoral degrees in philosophy and global leadership and change.

And AI is changing what were once exclusively human interactions, as people are “learning and being conditioned to have a relationship with technology as a form of intimacy,” Bahar said.

AI raring to go

AI-human relationships are already being shaped in some realms, but experts say every aspect of AI will continue to accelerate and expand, including its ability to influence relationships.

A study published in the journal Social Science & Medicine found that “AI and robotic technologies
are transforming the relationships between people and machines in new affective, embodied and
relational ways.” The researchers, from the United Kingdom, note “emerging relationships that go
beyond the conceptual divisions between humans and machines.”

Health care and caregiving are areas expected to benefit from artificial intelligence in many ways,
including direct, human-like interactions. Making AI part of those caregiving settings includes efforts
to create systems for “sensing, recognizing, categorizing and reacting to human emotion,” per the
study.

Already, AI technology categorized as socially assistive robots interacts with people using “emotional” responses, fostering emotional attachment and companionship, among other effects.

“Without developing a detailed understanding of the fundamental transformations in (artificial) intelligence in practice, where humans and machines form the new ecosystem of health and care, we will not be able to ascertain what is lost and gained, by and for whom, or therefore to exercise agency in crafting our future relationships of health and care in transparent and equitable ways,” the study warns.

Separate research published in Human Communication Research looked at human-AI friendships.


“Use of conversational artificial intelligence (AI), such as human-like social chatbots, is increasing,”
the researchers, from Norway, reported.

While more people will have intimate relationships with social chatbots, they noted, “friendships with
AI may alter our understanding of friendship itself.”

That small study consisted of 19 detailed interviews with people who use the social chatbot Replika
to see “how they understand and perceive their friendship and how it compares to human friendship.”

While they found AI-human friendships have similarities with human-to-human friendships, they also noted that artificial friendship with a chatbot “alters the notion of friendship in multiple ways, such as allowing for a more personalized friendship tailored to the user’s needs.”

There are reasons to think that could be bad. Human relationships can be complicated. A chatbot, on the other hand, can agree with you all the time, listen to long-winded stories without tiring, respond as you’d like, never call you out on mean statements or untruths, and more.

In other words, an AI friendship can be an echo chamber that erodes social filters, weakens the ability to read social cues and limits personal growth.

What else could be lost

Dugan’s list of negatives, should relationships with AI supersede human interactions, is fairly long. It includes the potential loss of one’s social filters and of the ability to have constructive discussions and disagreements, since an AI companion is apt to agree with you more often, or even always, than a human pal or loved one would. That removes the need to think about how to justify what you say, even if it’s offensive, argumentative or untrue.

While the fact that AI won’t get tired if you’re repetitive or wallowing in a bad space might feel
comforting, Dugan said, that artificial buddy also won’t call you out or encourage you to move past
something that’s got you stuck in that bad space.

And since there’s no eye contact, facial reaction, vocal tone or body language, the ability to interpret those cues can be lost, because such skills require frequent practice, said Dugan.

Virtual relationships could reinforce people getting by on basic social skills without providing an incentive to “work those muscles in real time and real life,” she said, pointing to awkward situations, new jobs and the teenage years as moments that can be uncomfortable but provide personal growth and resilience that serve one well throughout life. “That’s how we learn and how we get better,” said Dugan.

“A fake partner will not help you remember to use filters,” she notes. “It could reinforce controlling
behaviors. I see potential for reinforcing things that lead to abusive relationships in real life.”

Because AI can be what you want it to be, there’s a risk, too, of forming an emotional affair and
“increasingly disappearing” from one’s real partner, friendships and other relationships.

“Well-being suffers if we are not building real relationships,” Dugan said.

Prioritizing people

Bahar also sees potential for good from AI, such as using it to decrease isolation and ease depression symptoms in people with dementia. She hopes it will be used as a bridge to connect people with life enhancements, like gardening, animals or other people, when they need more of that kind of connection.

Tech has certainly helped some people find and form relationships, through dating apps and online
groups, among other avenues.

But priority has to be given to preserving real human relationships, Bahar said.

“It’s your room and your elephant,” said Dugan. “Take a look at what you’re avoiding.”

She said if people are honest about what draws them to relationships based on AI, they can set
some parameters about what’s acceptable and what’s not and then keep any benefits offered by AI.

Probably the biggest issue, experts have told the Deseret News, is figuring out how to put limits on
AI.

“How far are we going to allow AI to go?” Bahar asks. “I don’t think we have a good handle on that.”

Dugan’s a strong proponent of real connections and has tried to model healthy relationships for her
children. Over the years, they’ve seen that she builds into her calendar time to be with other people
and makes it a priority. She also founded a group for women to meet and find friends, to connect in
real time.

While AI has potential to diminish human relationships, AI tools could also help make more time for them, said Dugan. The ready availability of data means less time searching, and technology has freed people up in a lot of different ways. Using technology, including AI, could translate into more time to do human things with real people.

Bahar’s advice for folks is to engage the five senses as much as possible. Get out into the natural
environment. Remove tech from your life for 10 minutes, an hour, a day, two days. Then grow the
amount of time. Reduce technology at all levels and include alternatives. Connect with real people.
Stop going to church online. Do a Bible study with real people or sit in the pews beside them, she
said.

“Start to see tech as an external part of you that you have control over,” Bahar said.

(https://www.deseret.com/2023/9/6/23841752/ai-artificial-intelligence-chatgpt-relationships-real-life/)

Hope or horror? The great AI debate dividing its pioneers
Dan Milmo Global technology editor

CEO of DeepMind is ‘not a pessimist’ but warns of threat from AI and says we must be active in
shaping ‘a middle way’
Tue 24 Oct 2023 14.00 CEST

Demis Hassabis says he is not in the “pessimistic” camp about artificial intelligence. But that did not
stop the CEO of Google DeepMind signing a statement in May warning that the threat of extinction
from AI should be treated as a societal risk comparable to pandemics or nuclear weapons.

That uneasy gap between hope and horror, and the desire to bridge it, is a key reason why Rishi
Sunak convened next week’s global AI safety summit in Bletchley Park, a symbolic choice as the
base of the visionary codebreakers – including computing pioneer Alan Turing – who deciphered
German communications during the second world war.

“I am not in the pessimistic camp about AI obviously, otherwise I wouldn’t be working on it,” Hassabis
tells the Guardian in an interview at Google DeepMind’s base in King’s Cross, London.

“But I’m not in the ‘there’s nothing to see here and nothing to worry about’ [camp]. It’s a middle way.
This can go well but we’ve got to be active about shaping that.”

Hassabis, a 47-year-old Briton, co-founded UK company DeepMind in 2010. It was bought by Google in 2014 and has achieved stunning breakthroughs in AI under his leadership. The company is now known as Google DeepMind after merging with the search firm’s other AI operations, with Hassabis at the helm as CEO.

His unit is behind the AlphaFold program that can predict the 3D shapes of proteins in the human
body – as well as nearly all catalogued proteins known to science. This is a revolutionary
achievement that will help achieve breakthroughs in areas such as discovering new medicines
because it maps out the biological building blocks of life. This year Hassabis was jointly awarded one
of the most prestigious prizes in science, the Lasker basic medical research award, for the work on
AlphaFold. Many winners of the award go on to win a Nobel prize.

Last month Hassabis’ team released AlphaMissense, which uses the same AI protein program to
spot protein malformations that could cause disease.

Hassabis says he would have preferred the May statement to contain references to AI’s potential
benefits. “I would have had a line saying about all the incredible opportunities that AI is going to
bring: medicine, science, all the things help in everyday life, assisting in everyday life.”

He says AI advances will trigger “disruption” in the jobs market – skilled professions such as law,
medicine and finance are at risk, according to experts – but he says the impact will be “positive
overall” as the economy adapts. This has also led to talk among AI professionals of the technology
funding a universal basic income or even a universal basic service, which provides services such
as transport and accommodation for free.

“Some kind of sharing of the upsides would be needed in some form,” says Hassabis.


But the OECD, an influential international organisation, says jobs at the highest risk from AI-driven
automation are highly skilled and represent about 27% of employment across its 38 member
countries, which include the UK, Japan, Germany, the US, Australia and Canada. No wonder the
OECD talks of an “AI revolution which could fundamentally change the workplace”.

Nonetheless, the summit will focus on threats from frontier AI, the term for cutting-edge systems that could cause significant loss of life. These include the ability to make bioweapons, mount sophisticated cyber-attacks and evade human control. The latter issue refers to fears about artificial general intelligence, or “god-like” AI, meaning a system that operates at or beyond human levels of intelligence.

The pessimistic camp that voices these fears has strong credentials. Geoffrey Hinton, a British
computer scientist often described as one of the “godfathers” of modern AI, quit his job at Google
this year in order to voice his fears about the technology more freely.

Hinton told the Guardian in May of his concerns that AI firms are trying to build intelligences with
the potential to outthink humanity.

“My confidence that this wasn’t coming for quite a while has been shaken by the realisation that
biological intelligence and digital intelligence are very different, and digital intelligence is probably
much better.”

Stuart Russell, another senior British computer scientist, has warned of a scenario where the UN
asks an AI system to create a self-multiplying catalyst to de-acidify the oceans, with the safety
instruction that the outcome is non-toxic and that no fish are harmed. But the result uses up a quarter
of the oxygen in the atmosphere and subjects humans to a slow and painful death.

Both Hinton and Russell are attending the summit along with Hassabis, world politicians, other tech
CEOs and civil society figures.

Referring to AGI, Hassabis says “we’re a long time before the systems become anywhere on the
horizon” but says future generation systems will carry risks. Hence the summit.

Critics of the summit argue that the focus on existential risk ignores short-term problems such as
deepfakes.

The government seems to have acknowledged the immediate concerns, with the agenda for the summit referring to election disruption and AI tools producing biased outcomes. Hassabis argues that there are three categories of risk, all “equally important,” which need to be worked on simultaneously.

The first is near-term risks such as deepfakes and bias. “Those types of issues … obviously are
very pressing issues, especially with elections next year,” he said. “So there … we need solutions
now.” Google DeepMind has already launched a tool that watermarks AI-generated images.

The second risk category is rogue actors accessing AI tools, via publicly available and adjustable
systems known as open source models, and using them to cause harm.

“How does one restrict access to bad actors, but somehow enable all the good use cases? That’s a
big debate.”

The third is AGI, which is no longer discussed as a fantastical possibility. Hassabis says super
powerful systems could be a “decade away plus” but the thinking on controlling them needs to start
immediately.

There are also alternative views in this field. Yann LeCun, the chief AI scientist at Mark Zuckerberg’s
Meta and a respected figure in AI, said last week that fears AI could wipe out humanity were
“preposterous”.

Nonetheless, a concern among those worried about superintelligent systems is the notion that they
could evade control.

“Can it exfiltrate its own code, can it extract its own code, improve its own code,” says Hassabis. “Can
it copy itself unauthorised? Because these would all be undesirable behaviours, because if you want
to shut it down, you don’t want it getting around that by copying itself somewhere else. There’s a lot
of behaviours like that, that would be undesirable in a powerful system.”

He said tests would have to be designed to head off the threat of such autonomous behaviour.
“You’ve got to actually develop a test to test that … and then you can mitigate it, and maybe even
legislate against it at some point. But the research has to be done first.”

The situation is further complicated by the fact that highly capable generative AI tools – technology that produces plausible text, images and voice from simple human prompts – are already out there, while the framework to regulate them is still being built.

Signs of a framework are emerging, such as commitments to AI safety signed by major western tech
firms at the White House in July. But the commitments are voluntary.

Hassabis talks of starting with an IPCC-style body before moving eventually to an entity “equivalent to” the International Atomic Energy Agency, the nuclear non-proliferation watchdog, although he stresses that none of the regulatory analogies are “directly applicable” to AI. This is new territory.

If you are in the pessimistic camp, it could take years to build a solid regulatory framework. And as
Hassabis says, work on safety needs to start “yesterday”.

(https://www.theguardian.com/technology/2023/oct/24/hope-or-horror-the-great-ai-debate-dividing-its-pioneers)
