
US010616398B2

United States Patent                                  (10) Patent No.: US 10,616,398 B2
Agrawal et al.                                        (45) Date of Patent: *Apr. 7, 2020

(54) COMMUNICATION SESSION MODIFICATIONS BASED ON A PROXIMITY CONTEXT

(71) Applicant: Motorola Mobility LLC, Chicago, IL (US)

(72) Inventors: Amit Kumar Agrawal, Bangalore (IN); Rachid M. Alameh, Crystal Lake, IL (US); Giles Tucker Davis, Downers Grove, IL (US)

(73) Assignee: Motorola Mobility LLC, Chicago, IL (US)

(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 0 days. This patent is subject to a terminal disclaimer.

(21) Appl. No.: 16/544,441

(22) Filed: Aug. 19, 2019

(65) Prior Publication Data: US 2019/0373104 A1, Dec. 5, 2019

Related U.S. Application Data

(63) Continuation of application No. 15/904,160, filed on Feb. 23, 2018, now Pat. No. 10,432,779.

(51) Int. Cl.
    H04M 3/42 (2006.01)        H04M 11/00 (2006.01)
    H04M 11/10 (2006.01)       G06F 15/173 (2006.01)
    H04M 1/725 (2006.01)       H04W 76/10 (2018.01)
    G08B 3/10 (2006.01)        G06K 9/00 (2006.01)

(52) U.S. Cl.
    CPC: H04M 1/72569 (2013.01); H04W 76/10 (2018.02); G06K 9/00288 (2013.01); G06K 2009/00322 (2013.01); G08B 3/1016 (2013.01); H04M 2250/12 (2013.01)

(58) Field of Classification Search
    CPC: H04M 1/72569; H04M 2250/12; G06K 2009/00322; G06K 9/00288; G08B 3/1016
    USPC: 455/414.1
    See application file for complete search history.

(56) References Cited

    U.S. PATENT DOCUMENTS
    8,614,733 B2        12/2013    Kato
    9,231,845 B1        1/2016     Goldstein et al.
    9,571,628 B1        2/2017     Bostick et al.
    10,432,779 B2       10/2019    Agrawal et al.
    2008/0192977 A1     8/2008     Gruenhagen et al.
    2008/0242265 A1     10/2008    Cohen et al.
    2009/0097671 A1     4/2009     Paradiso et al.
    2013/0078962 A1     3/2013     Clarke et al.
    2013/0150117 A1     6/2013     Rodriguez et al.
    2013/0225199 A1     8/2013     Shaw
    2014/0055553 A1     2/2014     Lee et al.
    2015/0009278 A1     1/2015     Modai
    2015/0172462 A1     6/2015     Cudak et al.
    2016/0301373 A1     10/2016    Herman et al.
    2017/0097413 A1     4/2017     Gillian et al.
    2019/0268460 A1     8/2019     Agrawal et al.

    OTHER PUBLICATIONS
    "Final Office Action", U.S. Appl. No. 15/904,160, dated Jul. 10, 2019, 34 pages.
    "Final Office Action", U.S. Appl. No. 15/904,160, dated Nov. 27, 2018, 37 pages.
    "Non-Final Office Action", U.S. Appl. No. 15/904,160, dated Mar. 27, 2019, 33 pages.
    "Non-Final Office Action", U.S. Appl. No. 15/904,160, dated Aug. 14, 2018, 33 pages.
    "Notice of Allowance", U.S. Appl. No. 15/904,160, dated Jul. 29, 2019.

Primary Examiner - German Viana Di Prisco
Assistant Examiner - Mark G. Pannell
(74) Attorney, Agent, or Firm - SBMC

(57) ABSTRACT

Techniques described herein provide modifications to a communication session based on a proximity context. Various implementations establish a communication session between a local communication device and a remote communication device. In response to establishing the communication session, one or more implementations determine a proximity context associated with an area around the local device, such as by detecting a proximity of various objects to the local device. Upon determining the proximity context, various embodiments alter various operating parameters associated with the communication session, such as by reducing a speaker volume and/or announcing the presence of a person within proximity to the local device.

20 Claims, 13 Drawing Sheets

[Representative drawing (see FIG. 8): remote audio "When is the surprise party for Gladys?" received at communication device 102, which announces "Gladys is within proximity of hearing" near person 114.]
[FIG. 1 (Sheet 1): Example environment 100. Communication device 102 (communication module 110, device assistant module 116, range detection module 118, identity detection module 120) conducts communication session 104 over communication cloud 108 with remote communication device 106; speaker 112, non-call participant 114, and laser signals 122 are shown.]

[FIG. 2 (Sheet 2): Example devices smartphone 102-1, laptop 102-2, home assistant device 102-3, desktop 102-4, tablet 102-5, and smart watch 102-6, and components of communication device 102: processor(s) 200, computer-readable media 202 (memory media 204, storage media 206), communication module 110, device assistant module 116, range detection module 118, identity detection module 120, display device 208, audio input module 210, audio output module 212, and input/output sensor(s) 214.]

[FIG. 3 (Sheet 3): Environment 300, identifying a proximity context using range detection: communication device 102, waveforms 302, boundaries 304/306/308, person 114 at distance 310, person 312 at distance 314.]

[FIG. 4 (Sheet 4): Environments 400 and 408, identifying a proximity context based on input audio: person 402 generating background noise 404, microphone 406, person 410 conducting conversation 412.]

[FIG. 5 (Sheet 5): Call modification based on a proximity context: audio level 500 at speaker 112 attenuated to audio level 502 when person 114 moves within proximity of communication device 102.]

[FIG. 6 (Sheet 6): Example lookup tables 600 and 606.]

    Lookup table 600 (distance range 602, attenuation 604):
        0-2 meters      20 dB
        2-4 meters      15 dB
        4-6 meters      10 dB
        6-8 meters      5 dB
        8+ meters       No attenuation

    Lookup table 606 (user age 608, attenuation factor 610):
        10-15           (illegible)
        15-25           0.9
        25-45           1
        45-65           0.8
        65+             0.5

[FIG. 7 (Sheet 7): Example call alert 704 displayed at communication device 102: "Gladys is within a proximity to the phone where call content may be overheard."]

[FIG. 8 (Sheet 8): Example call alert for communication session 700 with remote home assistant 702: remote audio 800 ("When is the surprise party for Gladys?") containing user name 802, and audible alert 804 ("Gladys is within proximity of hearing") played at speakers 112 and 806.]

[FIG. 9 (Sheet 9): Example alerts generated in response to proximity context detection: visual alert 900 and audible alert 902, "Unable to lower volume. This conversation is not private."]

[FIG. 10 (Sheet 10): Proximity Context Settings user interface 1000: navigable tabs 1006a-1006d (Distance, Context, User Identity, Miscellaneous), four Attenuation (dB) and Distance (m) field pairs (text fields 1008/1010), Default and User-Defined controls (1012/1014), Enabled/Disabled controls (1002/1004), and OK/Cancel buttons.]

[FIG. 11 (Sheet 11): Method 1100. 1102: establish, at a local communication device, a communication session with a remote communication device. 1104: determine a proximity context associated with an area surrounding the local communication device. 1106: analyze the proximity context to determine one or more characteristics associated with a non-call participant within the area. 1108: automatically modify the communication session based on the proximity context. 1110: communication session in progress? If yes, continue; if no, 1112: exit.]

[FIG. 12 (Sheet 12): Method 1200. 1202: establish, at a local communication device, a communication session with a remote communication device. 1204: generate a proximity context to detect a presence of a non-call participant. 1206: presence detected? If no, return to 1204. 1208: attempt to determine an identity of the non-call participant. 1210: identity resolved? If no, 1212: modify the communication session based on range detection information; if yes, 1214: modify the communication session based on the identity.]

[FIG. 13 (Sheet 13): Electronic device 1300: communication transceiver(s) 1302, device data 1304, data input port(s) 1306, processor system 1308, processing and control 1310, memory device(s) 1312, device applications 1314, operating system 1316, communication module 1318, device assistant module 1320, range detection module 1322, identity detection module 1324, input/output sensor(s) 1326, audio system 1328, display system 1330, media data port 1332, and audio/video processing.]
COMMUNICATION SESSION MODIFICATIONS BASED ON A PROXIMITY CONTEXT

RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 15/904,160, filed Feb. 23, 2018, entitled "Communication Session Modifications Based on a Proximity Context", the disclosure of which is hereby incorporated by reference herein in its entirety.

BACKGROUND

Computing devices provide users with the ability to exchange real-time audio with one another during a communication session. For example, a user can initiate an audio call to a co-worker using a mobile communication device in locations outside of a work environment. By providing the ability to conduct communication sessions, mobile devices oftentimes place users in environments where conversations can be overheard by people in the surrounding area, such as patrons at a coffee shop, shoppers in a store, diners at a restaurant, and so forth. In turn, the user may unintentionally divulge sensitive information to these people.

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

While the appended claims set forth the features of the present techniques with particularity, these techniques, together with their objects and advantages, may be best understood from the following detailed description taken in conjunction with the accompanying drawings of which:

FIG. 1 is an overview of a representative environment that includes an example of modifying a call based on a proximity context in accordance with one or more implementations;
FIG. 2 illustrates a more detailed example of various devices capable of providing call modifications based on a proximity context in accordance with one or more implementations;
FIG. 3 illustrates an example of identifying a proximity context using range detection in accordance with one or more implementations;
FIG. 4 illustrates examples of identifying a proximity context based on input audio in accordance with one or more implementations;
FIG. 5 illustrates an example of a call modification based on a proximity context in accordance with one or more implementations;
FIG. 6 illustrates example lookup tables that can be used to derive audio attenuation in accordance with one or more implementations;
FIG. 7 illustrates an example call alert in accordance with one or more implementations;
FIG. 8 illustrates an example call alert in accordance with one or more implementations;
FIG. 9 illustrates an example alert generated in response to proximity context detection in accordance with one or more implementations;
FIG. 10 illustrates an example user interface that can be used to customize call parameter modifications based on proximity context detection in accordance with one or more implementations;
FIG. 11 illustrates a flow diagram that describes an example of modifying a communication session based on a proximity context in accordance with one or more implementations;
FIG. 12 illustrates a flow diagram that identifies externally playing audio during moments of interest in accordance with one or more implementations; and
FIG. 13 is an illustration of an example device in accordance with one or more implementations.

DETAILED DESCRIPTION

Turning to the drawings, wherein like reference numerals refer to like elements, techniques of the present disclosure are illustrated as being implemented in a suitable environment. The following description is based on embodiments of the claims and should not be taken as limiting the claims with regard to alternative embodiments that are not explicitly described herein.

Techniques described herein provide modifications to a communication session based on a proximity context. Various implementations establish a communication session between a local communication device and a remote communication device. In response to establishing the communication session, one or more implementations determine a proximity context associated with an area around the local device, such as by detecting a proximity of various objects to the local device. Upon determining the proximity context, various embodiments alter various operating parameters associated with the communication session, such as by reducing a speaker volume and/or announcing the presence of a person within proximity to the local device.

Consider now an example environment in which various aspects as described herein can be employed.

Example Environment

FIG. 1 illustrates an example environment 100 in accordance with one or more implementations. Environment 100 includes a communication device 102 in the form of a mobile communication device that is capable of conducting a communication session with another device. Accordingly, in environment 100, communication device 102 conducts a communication session 104 with remote communication device 106, where communication device 102 represents a local device associated with a user conducting the communication session. Similarly, remote communication device 106 represents a device that is remote from communication device 102 and is associated with a different user participating in the communication session.

Communication session 104 generally represents a real-time communication exchange between multiple communication devices. While environment 100 illustrates the participating communication devices as mobile communication devices, alternate or additional implementations include any number of communication devices of any type in the communication session. A real-time communication exchange can include the exchange of real-time audio and/or the exchange of video. Here, the phrase "real-time" is used to signify an exchange of audio and/or video between devices in a manner that mimics real-world exchanges. For example, the processing and propagation of signals used to exchange audio and/or video can sometimes encounter delays due to real-world properties of the electronic components and/or the communication channels. However, the same delay (in general) is applied to the whole of the audio and/or video such that once the delay is encountered at a receiving device, the exchange continues on with little to no delay. In other words, the delay is generally a constant such that once an initial delay is encountered to exchange the audio, users see the video and/or hear the audio as they would in the real world. Alternately or additionally, the communication session can include the exchange of finite clips of prerecorded video, such as the exchange of a video clip over text messaging.
Communication cloud 108 generally represents a communication network that facilitates a bi-directional link between computing devices. This can include multiple interconnected communication networks that comprise a plurality of interconnected elements, such as a wireless local area network (WLAN) with Ethernet access, a wireless telecommunication network interconnected with the Internet, a wireless (Wi-Fi) access point connected to the Internet, a Public Switched Telephone Network (PSTN), and so forth. Accordingly, communication cloud 108 provides connectivity between communication device 102 and remote communication device 106.

To facilitate communications, communication device 102 includes communication module 110, which provides the ability to conduct a communication session. Accordingly, communication module 110 generally represents any suitable combination of hardware, software, and/or firmware used to facilitate the exchange of audio and/or video, as well as other information. For instance, communication module 110 can include one or more protocol stacks associated with a network over which the communication session is conducted, client software that supplies a user interface used to initiate and/or terminate the communication session, firmware that drives hardware to generate signals and/or process messages used in maintaining the communication session, and so forth. Various implementations of communication module 110 receive audio input from a microphone associated with communication device 102 (not illustrated here), and forward the audio input to remote communication device 106 as part of communication session 104. Alternately or additionally, communication module 110 forwards audio received from remote communication device 106 over the communication session to a speaker 112 for projection. In some implementations, communication module 110 receives video input (e.g., synchronized images and audio) from a camera associated with communication device 102, and forwards the video input to remote communication device 106. Thus, communication module 110 enables communication device 102 to send and/or receive various types of information in various formats over a communication session (e.g., audio, video, protocol messaging, etc.).

Environment 100 includes person 114, who generally represents a non-call participant. In other words, person 114 represents a person who is located within an arbitrary proximity to communication device 102, but is not a participant in communication session 104. In various implementations, the proximity of person 114 can pose a risk to the information exchanged over communication session 104. Depending upon how close person 114 is, some of the information exchanged over the communication session can be unintentionally exposed to person 114 via the audio output generated by speaker 112. To mitigate the risk of person 114 overhearing information, communication device 102 includes device assistant module 116, range detection module 118, and identity detection module 120.

Device assistant module 116 identifies an operating context associated with communication device 102, and provides recommendations based on that operating context, a proximity context, and/or a combination of the proximity context and the operating context as further described herein. Alternately or additionally, device assistant module 116 performs various actions associated with the recommendations without user intervention. As one example, device assistant module 116 analyzes a proximity context generated by range detection module 118 and/or identity detection module 120 while a communication session is in progress, and modifies the communication session based on the analysis, such as audio levels, proximity notifications, and so forth.

As another example, various implementations determine an operating context that indicates the communication device is conducting a communication session in a private mode, rather than speaker mode. The phrase "private mode" denotes an audio output mode that directs audio to an earpiece, and/or plays audio at an output level configured for a speaker physically located next to a user's ear, to reduce the projection path of the audio and increase the privacy of the audio. Conversely, the phrase "speaker mode" denotes an audio output mode that directs the audio to, and/or plays audio at an output level configured for, a speaker away from the user's ear to increase the audio projection path and allow multiple people access to the audio. Some implementations reduce and/or attenuate the audio output level associated with speaker 112 in response to detecting a proximity of person 114 and identifying the private mode operating context. Conversely, if the communication device is in speaker mode, various implementations determine to not modify a communication session based on a proximity context as further described herein. Alternately or additionally, some implementations amplify and/or revert back to an original audio output level in response to detecting that person 114 has moved from being within a predefined proximity to outside of the predefined proximity, that the communication device has transitioned to a speaker mode, and so forth. Accordingly, device assistant module 116 analyzes a proximity context and/or operating context, and makes determinations on how to manage communication session 104 based on these analyses. To determine how to manage communication session features, device assistant module 116 communicatively couples to, or interfaces with, range detection module 118 and/or identity detection module 120.
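As a concrete illustration of the behavior just described, the following Python sketch attenuates the speaker only when the device is in private mode and a non-call participant is within a predefined proximity, and otherwise leaves or restores the original level. It is not code from the patent; the threshold, attenuation amount, and data shapes are assumptions chosen for the example.

```python
# Illustrative sketch (not from the patent) of the device assistant decision:
# attenuate speaker output only when the device is in private mode and a
# non-call participant is within a predefined proximity; otherwise keep or
# restore the original output level.
from dataclasses import dataclass
from typing import Optional

PREDEFINED_PROXIMITY_M = 2.0   # assumed proximity threshold, in meters
ATTENUATION_DB = 15.0          # assumed attenuation amount

@dataclass
class ProximityContext:
    nearest_person_distance_m: Optional[float]  # None if no person detected

def manage_output_level(mode: str, ctx: ProximityContext,
                        original_level_db: float) -> float:
    """Return the speaker output level to use for the current conditions."""
    if mode != "private":
        # Speaker mode: the described behavior is to leave the session unmodified.
        return original_level_db
    person_nearby = (ctx.nearest_person_distance_m is not None
                     and ctx.nearest_person_distance_m <= PREDEFINED_PROXIMITY_M)
    if person_nearby:
        return original_level_db - ATTENUATION_DB
    # Person moved away (or none detected): revert to the original level.
    return original_level_db

print(manage_output_level("private", ProximityContext(1.2), 60.0))  # 45.0
print(manage_output_level("speaker", ProximityContext(1.2), 60.0))  # 60.0
```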
Range detection module 118 maps an area surrounding the communication device to provide a proximity context. Accordingly, range detection module 118 generally represents any combination of hardware, software, and/or firmware used to determine characteristics about a surrounding area. In some implementations, range detection module 118 includes, and/or interfaces with, a depth sensor, such as an infrared (IR) imager, a stereo depth sensor, a time-of-flight sensor, and so forth, that transmits electromagnetic waveforms (e.g., laser signals 122) outward from communication device 102. In turn, the signals reflect off of objects in the area, such as person 114, to generate return signals that are processed to obtain characteristics about the surrounding area as further described herein. Some implementations of range detection module 118 configure how the signals are transmitted out (e.g., what propagation pattern is used) as a way to change the signals for a particular detection purpose. For instance, range detection module 118 can configure the signals for object presence detection over a wide area, configure the signals to concentrate on a particular area to obtain detailed information for identification purposes, and so forth. This can include managing different types of sensors, such as a laser-based detection sensor, an image-based detection sensor, a radio frequency (RF) based detection sensor, and so forth. Thus, range detection module 118 generates information about an area surrounding communication device 102 to generate a proximity context.
Identity detection module 120 uses proximity context information generated by range detection module 118 to authenticate and/or identify a particular person from other people. Alternately or additionally, identity detection module 120 analyzes input audio from a microphone and/or communication session 104 for identification purposes (e.g., a particular person, key words, etc.). To demonstrate, identity detection module 120 can include facial recognition algorithms as a way to characterize various facial features of person 114 using information generated from the return signals and/or an image capture from a camera sensor. In turn, identity detection module 120 maps the characterized facial features to a particular user identity, such as by comparing the characterized facial features to known facial features. This can include using images tagged or associated with a particular user as a baseline for the known facial features. Various implementations of identity detection module 120 apply voice recognition algorithms and/or speech recognition algorithms to audio input as a way to identify a particular person and/or keywords. Device assistant module 116 can query the identity detection module for the identification, and/or the identity detection module can push this information to the device assistant module. Further, any audio can be analyzed, such as audio transmitted over communication session 104, audio captured by a microphone of communication device 102, etc.
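To make the feature-to-identity mapping concrete, here is a hypothetical sketch that compares a characterized feature vector against stored baselines using a distance threshold. The feature vectors, names, and threshold are invented; producing the features themselves (facial recognition proper) is assumed to happen elsewhere.

```python
# Hypothetical sketch of matching characterized facial features against known
# baselines. The vectors and threshold are made up; a real system would derive
# the features from a trained facial recognition model.
import math

KNOWN_FACES = {                      # baseline feature vectors per identity
    "Gladys": [0.11, 0.82, 0.40],
    "Marco":  [0.67, 0.22, 0.90],
}
MATCH_THRESHOLD = 0.25               # assumed maximum distance for a match

def identify(features):
    """Return the closest known identity, or None if nothing is close enough."""
    best_name, best_dist = None, float("inf")
    for name, baseline in KNOWN_FACES.items():
        dist = math.dist(features, baseline)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= MATCH_THRESHOLD else None

print(identify([0.10, 0.80, 0.42]))  # "Gladys"
print(identify([0.95, 0.95, 0.05]))  # None (identity not resolved)
```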
FIG. 2 illustrates an expanded view of communication device 102 of FIG. 1 with various non-limiting example devices including: smartphone 102-1, laptop 102-2, home assistant device 102-3, desktop 102-4, tablet 102-5, and smart watch 102-6. Accordingly, communication device 102 represents any mobile device, mobile phone, client device, wearable device, tablet, computing, communication, entertainment, gaming, media playback, and/or other type of electronic device that incorporates call management based on proximity context as further described herein. A wearable device may include any one or combination of a watch, armband, wristband, bracelet, glove or pair of gloves, glasses, jewelry items, clothing items, any type of footwear or headwear, and/or other types of wearables.

Communication device 102 includes processor(s) 200 and computer-readable media 202, which includes memory media 204 and storage media 206. Applications and/or an operating system (not shown) embodied as computer-readable instructions on computer-readable media 202 are executable by processor(s) 200 to provide some, or all, of the functionalities described herein. For example, various embodiments access an operating system module that provides high-level access to underlying hardware functionality by obscuring implementation details from a calling program, such as protocol messaging, register configuration, memory access, and so forth.

Computer-readable media 202 also includes communication module 110, device assistant module 116, range detection module 118, and identity detection module 120 of FIG. 1. While communication module 110, device assistant module 116, range detection module 118, and identity detection module 120 are illustrated here as residing on computer-readable media 202, they can alternately or additionally be implemented using hardware, firmware, software, or any combination thereof.

Communication device 102 optionally includes display device 208 that can be used to render content. In response to proximity context information, various implementations display notifications and/or alerts via display device 208 to notify a user about people and/or objects within a predetermined proximity of communication device 102 as determined by range detection module 118 and/or identity detection module 120.

Audio input module 210 represents functionality that captures sound external to communication device 102, such as a microphone, and converts the sound into various formats and/or representations that can be processed by communication device 102. Accordingly, various implementations of audio input module 210 forward the captured audio to range detection module 118 and/or identity detection module 120 for the identification of keywords, a particular person's identity, and/or background noise characteristics as further described herein.

Audio output module 212 represents any suitable type of device that can be used to project audible sounds, tones, and/or information, such as speaker 112 of FIG. 1. Various implementations of audio output module 212 use combinations of hardware, firmware, and/or software to output the audible sound, such as a device driver that programmatically controls and/or drives hardware. In some implementations, device assistant module 116 manages an audio output level associated with audio output module 212 such that device assistant module 116 can amplify and/or attenuate the sounds generated by audio output module 212.

Communication device 102 also includes input/output sensors 214 to generate proximity context information about a surrounding area. The input/output sensors can include any combination of hardware, firmware, and/or software used to capture information about external objects. In some implementations, the input/output sensors include a light output module, such as a laser source, to project light outward. In turn, the input/output sensors can include input modules to capture reflected signals, such as image-capture elements (e.g., pixel arrays, CMOS or CCD photo sensor arrays, photodiodes, photo sensors, single detector arrays, multi-detector arrays, etc.). Alternately or additionally, input/output sensors 214 include other types of input/output sensors to transmit and/or receive various types of electromagnetic waveforms, such as a camera, a proximity detector, an infrared sensor, an audio detector, a radio frequency (RF) based detector, an antenna, and so forth. As an example, various implementations use time-of-flight sensors to obtain a depth map of the surrounding area.

Having described an example operating environment in which various aspects of call modifications based on proximity context can be utilized, consider now a discussion of identifying a proximity context in accordance with one or more implementations.

Identifying a Proximity Context

Various computing devices provide users with the ability to establish communication sessions with other devices, such as voice and/or video call functionality. When the computing device is a portable device, such as a mobile communication device, a user can conduct communication exchanges in environments that include other people. Oftentimes, the user is unaware or lackadaisical about who is within hearing distance, and unintentionally exposes sensitive and/or private information.

Various implementations determine a proximity context of a communication device, and use the proximity context to determine whether to make modifications to a call that is in progress. Alternately or additionally, some implementations identify an operating context of the communication device that is used in determining what modifications to perform.
To demonstrate, consider FIG. 3 that illustrates an environment 300 in which a communication device determines a proximity context in accordance with one or more embodiments. Environment 300 includes communication device 102 of FIG. 1 engaged in a communication session with another communication device (not illustrated here), where the communication device operates in a private mode. While the communication session is active, communication device 102 identifies a proximity context by scanning a surrounding area, such as by way of range detection module 118 and/or identity detection module 120. Here, the surrounding area corresponds to an area around the communication device where the communication device can successfully transmit and receive various types of signals as further described herein. Alternately or additionally, the communication device can define the surrounding area using a predetermined shape of a predetermined size. Any suitable event can trigger a scan of the surrounding area, such as the user initiating the communication session, a successful establishment of the communication session, an audio input level exceeding a predetermined threshold, an identified audio output level exceeding a predetermined threshold, an identified operating context (e.g., a private mode), and so forth. The communication device can scan the area in any suitable manner, such as continuously, periodically, for a predetermined number of times, and so forth. Various implementations automatically initiate the scanning without user intervention.

In environment 300, communication device 102 transmits and/or receives various waveforms 302. For example, input/output sensors 214 of FIG. 2 transmit out electromagnetic waves (e.g., radio frequency (RF) signals, infrared light, gamma rays, microwave signals, etc.), and receive reflected signals that are analyzed to characterize the surrounding area. Since the electromagnetic waves adhere to various wave and particle properties, the waves behave in known manners, such as constructive interference, destructive interference, reflection, refraction, and so forth. As the transmitted waves reflect off of objects in the area, the communication device receives the reflected waves, and performs various analyses using knowledge of these properties to obtain characteristics about objects in the surrounding area. As an example, some implementations use a time-of-flight ranging system that transmits a light signal, and measures the time-of-flight for the light signal to an object as a way to determine the object's distance. Accordingly, various implementations analyze transmitted and/or return waveforms to determine a size, a shape, a distance, a velocity, and so forth, of objects in a surrounding area. Accordingly, waveforms 302 generally represent the transmission and/or the reception of information (e.g., electromagnetic waves) by communication device 102.
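The time-of-flight calculation mentioned above reduces to a one-line formula: distance is half the round-trip time multiplied by the propagation speed. A small worked example, with an illustrative round-trip time:

```python
# Time-of-flight ranging sketch: a light pulse travels to an object and back,
# so the one-way distance is (round-trip time * speed of light) / 2.
SPEED_OF_LIGHT_M_S = 299_792_458.0

def distance_from_time_of_flight(round_trip_s: float) -> float:
    return round_trip_s * SPEED_OF_LIGHT_M_S / 2.0

# A roughly 21.3 ns round trip corresponds to an object about 3.2 meters away.
print(distance_from_time_of_flight(21.3e-9))  # approximately 3.19 m
```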
In environment 300, communication device 102 identifies three surrounding regions, where each region has a circular shape with a boundary corresponding to the respective circle's radius: boundary 304, boundary 306, and boundary 308. While the regions are described as being circular, any other shape, size, or metric can be used to characterize the area. Some implementations alternately or additionally characterize the surrounding area by determining the respective distances of various objects to the communication device. Here, the communication device determines that person 114 is at a distance 310 from the communication device, where distance 310 represents an arbitrary value. Similarly, the communication device determines that person 312 is at a distance 314 from the communication device, where distance 314 represents an arbitrary value. In this example, the communication device also determines that person 114 is located in the region that resides between boundary 304 and boundary 306, and that person 312 is located in the region that resides between boundary 306 and boundary 308. However, any other suitable combination and/or type of characteristics about a surrounding area can be identified for determining what modifications to make to a communication session. Various implementations identify other characteristics corresponding to the surrounding area, such as user identity, user age, user direction, velocity, and so forth.

To illustrate, various implementations of communication device 102 identify that person 114 corresponds to a young woman whose face is pointed in a direction towards the communication device, and that person 114 is closer to the communication device than person 312. Alternately or additionally, the communication device identifies that person 312 is an older man whose face is pointing away from the communication device, and is farther from the communication device relative to person 114. These various characteristics can then be used to determine the call modifications to perform as further described herein. For instance, some implementations select person 114 to base various call modifications on, since person 114 is located closer to the communication device than person 312. As another example, various implementations base the call modifications on characteristics of person 114 since she is facing towards the device, and person 312 is facing away from the device.

In various embodiments, the communication device reconfigures waveforms 302 to obtain additional information. For example, the area identification process can transmit a first set of signals that provide object presence detection, and then reconfigure the signals to transmit a second set of signals that provide information that can be used to identify a particular user identity, user gender, user age, and so forth. Alternately or additionally, multiple different types of sensors can be used. For instance, an RF-based detection system can be used to first identify the presence of a user and, upon detecting the presence of a user, obtain an image from a second sensor (e.g., a camera-based depth sensor) that is processed using facial recognition algorithms. Accordingly, multiple different sensors can be used in combination and/or at various stages to provide a proximity context. This can include determining a proximity context based on input audio.

To further illustrate, now consider FIG. 4 that illustrates various examples of identifying a proximity context in accordance with one or more embodiments. In some scenarios, FIG. 4 can be considered a continuation of one or more examples described with respect to FIGS. 1-3. The upper portion of FIG. 4 illustrates an environment 400 that includes communication device 102 of FIG. 1. In environment 400, the communication device actively conducts a voice call with a remote device. Environment 400 also includes a non-call participant: person 402. Instead of participating in the communication session, person 402 resides in the background playing a guitar that generates background noise 404. Various implementations determine a proximity context associated with communication device 102 by identifying the audio level of audio external to the communication session (e.g., background noise 404). For instance, communication device 102 can determine a decibel (dB) level of the background noise by using microphone 406 to capture background noise 404, and process the audio to identify a background noise audio level as part of a proximity context.
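One generic way to express a captured noise level in decibels, shown here only as an illustration and not as the patent's method, is to take the RMS of normalized microphone samples and convert it to dBFS. The sample values below are made up.

```python
# Generic sketch of deriving a background-noise level (in dBFS) from microphone
# samples via their root-mean-square value. Samples are assumed to be
# normalized to the range -1.0..1.0.
import math

def noise_level_dbfs(samples):
    """Return the RMS level of normalized samples in dBFS."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(max(rms, 1e-12))  # guard against log(0) for silence

quiet = [0.001, -0.002, 0.001, 0.0]
guitar = [0.4, -0.35, 0.5, -0.45]
print(round(noise_level_dbfs(quiet), 1))   # -58.2
print(round(noise_level_dbfs(guitar), 1))  # -7.4
```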
The determination to identify background noise can sometimes be based on previously identified context information. For instance, some implementations first identify the presence of person 402 using various object sensing techniques described herein. In response to identifying the presence of a person, communication device 102 activates microphone 406 to capture and analyze background noise 404. Accordingly, in the upper portion of FIG. 4, the communication device uses multiple sensors to determine a proximity context, one of which includes a microphone for audio capture and/or background noise analysis. While described as a sequential process in which the sensors are utilized at different times, other implementations utilize the sensors in parallel and/or concurrently.
Other types of proximity context information can be identified from sound as well. Moving to the lower portion of FIG. 4, consider now environment 408 that includes communication device 102 of FIG. 1 and person 410. In environment 408, communication device 102 is actively conducting a communication session with a remote device, while person 410 represents a non-call participant within a predetermined proximity to the communication device. Accordingly, since person 410 is not a participant of the communication session, she conducts a separate conversation 412. In turn, communication device 102 determines to characterize a proximity context associated with background noise, and captures conversation 412 via microphone 406. After capturing portions of conversation 412, communication device 102 processes the audio, such as by applying voice recognition algorithms to determine an identity of person 410 and/or speech algorithms that identify one or more keywords from the audio (e.g., a name, a location, a business name, etc.). In turn, communication device 102 can make call modifications to the communication session based on this proximity context, as further described herein.
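A minimal sketch of the keyword-spotting step described above, assuming a speech recognizer has already produced a transcript; the keyword list is hypothetical.

```python
# Hypothetical keyword-spotting sketch: given text produced by a speech
# recognizer (transcription itself not shown), flag any configured keywords
# such as names, locations, or business names.
import re

KEYWORDS = {"gladys", "surprise party", "acme corp"}   # assumed watch list

def find_keywords(transcript: str) -> set:
    text = transcript.lower()
    return {kw for kw in KEYWORDS
            if re.search(r"\b" + re.escape(kw) + r"\b", text)}

print(find_keywords("When is the surprise party for Gladys?"))
# {'gladys', 'surprise party'} (set ordering may vary)
```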
Having described aspects of identifying a proximity context during a communication session, consider now a discussion of modifying a communication session based on a proximity context in accordance with one or more implementations.

Modifying a Communication Session Based on a Call Proximity

The portability of devices enables users to conduct audio and/or video calls in varying locations. This provides the user with more flexibility relative to fixed landlines since the user is able to conduct calls at any moment and at any location. However, these varying locations oftentimes include people, thus creating a potential risk of the user inadvertently divulging information to these people. Various implementations modify a communication session based on an identified proximity context.

To demonstrate, consider now FIG. 5 that illustrates communication device 102 of FIG. 1. In various implementations, FIG. 5 represents a continuation of one or more of the examples described with respect to FIGS. 1-4. In the upper portion of FIG. 5, the communication device is engaged in a communication session with a remote device (not illustrated here). As part of the communication session, the remote device generates audio, and transmits the audio to the communication device. In turn, communication device 102 projects the audio out of speaker 112 at audio level 500, which represents a maximum level supported by the communication device 102. This is further indicated through the display of five bars, where each respective bar corresponds to an arbitrary unit corresponding to audio levels (e.g., 3 dB per bar, 5 dB per bar, etc.). Audio level 500 can be set in any suitable manner, such as through a user-defined audio level, a default audio level, a communication session default audio level, and so forth. During the communication session, communication device 102 (by way of range detection module 118 and/or identity detection module 120) scans the surrounding environment to identify a proximity context and/or operating context, examples of which are provided herein. The scanning can occur continuously, periodically, for a predetermined number of times, and so forth. Accordingly, various implementations update the proximity context over time.

Continuing on, the lower portion of FIG. 5 represents an arbitrary point in time during the communication session in which person 114 moves within a predetermined proximity to communication device 102. As further described herein, the communication device identifies various characteristics associated with person 114 using the proximity context information, such as a distance the user is from the communication device, a region in which the user is located, an identity of the user, a direction in which the user is facing, an age of the user, and so forth. In response to the characteristics identified via the proximity context, communication device 102 attenuates the audio output from speaker 112 to audio level 502. While illustrated here as an attenuation of two audio level units, any other attenuation can be applied. To determine what attenuation to apply, various implementations use the proximity context information in combination with lookup tables.

Consider now FIG. 6 that illustrates various examples of lookup tables. In some implementations, FIG. 6 represents a continuation of one or more of the examples described with respect to FIGS. 1-5. The upper portion of FIG. 6 includes lookup table 600 that maps distance ranges to attenuation values, where closer distances correspond to more attenuation. For instance, contemplate now a scenario in which the proximity context identifies that person 114 is 3.2 meters away from the communication device. Various implementations access lookup table 600 and determine that the identified distance falls into range 602. Since range 602 corresponds to attenuation 604, the communication device applies 15 dB attenuation to the audio output at speaker 112. While lookup table 600 presents attenuation information relative to identified distances, other types of lookup tables and/or information can be utilized as well.

Moving to the lower portion of FIG. 6, lookup table 606 maps a user's age to an attenuation factor. In various implementations where the proximity context identifies a user's age, a lookup table, such as lookup table 606, can be used singularly or in combination with other lookup tables to identify the attenuation factor. Here, the proximity context determines that person 114 has an age that falls within age range 608, which corresponds to an attenuation factor 610. Various implementations apply the attenuation factor to either an existing audio level at which a speaker currently operates, or an attenuation amount identified from the proximity context. Recall the scenario in which the communication device determines to apply a 15 dB attenuation based on the user's distance. Using the combination of distance and age proximity context information, the communication device applies the attenuation factor identified in lookup table 606 to the 15 dB attenuation indicated by lookup table 600, resulting in an attenuation of 15 dB * 0.8 = 12 dB. Accordingly, some implementations use multiple proximity context parameters to determine whether to attenuate or amplify an audio output level, where the information obtained from the lookup table(s) is aggregated using various algorithms to weight and/or combine the information.
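The FIG. 6 tables and the combination just described (15 dB scaled by a 0.8 age factor to give 12 dB) can be written directly as data plus a small lookup helper. This is only a sketch of the described calculation; boundary handling and the treatment of the 10-15 age band (not legible in the source figure) are assumptions.

```python
# Sketch of the FIG. 6 lookup tables and the combination described above:
# a distance-based attenuation (lookup table 600) scaled by an age-based
# attenuation factor (lookup table 606), e.g. 15 dB * 0.8 = 12 dB.
from typing import Optional

DISTANCE_ATTENUATION_DB = [   # (upper distance bound in meters, attenuation in dB)
    (2.0, 20.0), (4.0, 15.0), (6.0, 10.0), (8.0, 5.0),
]
AGE_ATTENUATION_FACTOR = [    # (upper age bound, factor); the 10-15 value is not
    (25, 0.9), (45, 1.0), (65, 0.8), (200, 0.5),  # legible in the source figure
]

def lookup(table, key, default):
    for upper_bound, value in table:
        if key <= upper_bound:
            return value
    return default

def attenuation_db(distance_m: float, age: Optional[int] = None) -> float:
    base = lookup(DISTANCE_ATTENUATION_DB, distance_m, 0.0)   # 8+ m: no attenuation
    factor = lookup(AGE_ATTENUATION_FACTOR, age, 1.0) if age is not None else 1.0
    return base * factor

print(attenuation_db(3.2))                 # 15.0 dB (range 602 -> attenuation 604)
print(round(attenuation_db(3.2, 50), 2))   # 12.0 dB (15 dB * 0.8)
```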
It is to be appreciated that the lookup tables described herein are for discussion purposes, and are not intended to be limiting. For instance, a lookup table can reference multiple proximity context parameters of any suitable type, such as background noise levels (e.g., noisy environments apply less attenuation to the audio output level than quiet environments), what direction a user is facing, a particular user identity, and so forth. As one example, in response to identifying a particular user, some implementations use the identity to obtain other types of information about the user instead of deriving the information from the sensors, such as referencing stored information about the user's age, hearing loss information, and so forth. The obtained information can then be used to determine an attenuation level, such as by using a lookup table based on hearing loss information. As another example, if a user is facing away from the communication device, various implementations access a lookup table based on directional information, where the proximity context identifies a rotation angle at which the user's face is positioned relative to the front. In turn, the communication device can access a lookup table that relates the rotation angle to an attenuation amount (e.g., less attenuation applied for away-facing directions, more attenuation applied for towards-facing directions).
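Along the same lines, a facing-direction table could map the rotation angle of the user's face (relative to facing the device) to an attenuation amount. The angle bands and values below are invented purely for illustration.

```python
# Invented example of a facing-direction lookup: smaller rotation angles
# (facing toward the device) get more attenuation, larger angles less.
ANGLE_ATTENUATION_DB = [(30.0, 15.0), (90.0, 10.0), (180.0, 5.0)]  # (max degrees, dB)

def attenuation_for_angle(rotation_deg: float) -> float:
    for max_angle, db in ANGLE_ATTENUATION_DB:
        if abs(rotation_deg) <= max_angle:
            return db
    return 0.0

print(attenuation_for_angle(10.0))   # 15.0 -> facing the device
print(attenuation_for_angle(150.0))  # 5.0  -> facing away
```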
Consider now FIG. 7 that illustrates another example of making communication session modifications based on a proximity context. In various implementations, FIG. 7 can be considered a continuation of one or more examples described with respect to FIGS. 1-6. In the upper portion of FIG. 7, communication device 102 engages in a communication session 700 with remote home assistant 702. During the communication session, communication device 102 scans the surrounding area to determine a proximity context as further described herein. In the upper portion of FIG. 7, the proximity context determines that no users are in the surrounding area.

Moving to the lower portion of FIG. 7, the proximity context identifies at a later point in time that person 114 has moved within a predetermined proximity/distance to the communication device, and has also identified person 114 as a particular user, such as through the use of facial recognition algorithms, voice recognition algorithms, etc. In response to the identified proximity context information, communication device 102 modifies communication session 700 by displaying alert 704 on a display associated with communication device 102. Alternately or additionally, communication device 102 transmits messages and/or commands over communication session 700 that invoke the display of alert 704 at remote home assistant 702. Accordingly, communication device 102 can visibly display an alert at the communication device and/or invoke the display of an alert at a remote device. Some implementations generate an audio alert that is output at communication device 102 and/or is transmitted over communication session 700 for output at remote home assistant 702. The audible alert can either be in combination with a displayed alert, or instead of a displayed alert.

To further demonstrate, consider now FIG. 8 that includes communication device 102 of FIG. 1, and remote home assistant 702 of FIG. 7. In various implementations, FIG. 8 can be considered a continuation of one or more examples described with respect to FIGS. 1-7. In the upper portion of FIG. 8, the communication device and the remote home assistant conduct communication session 700 as described with respect to FIG. 7. Similarly, person 114 remains within proximity to communication device 102, where the user's identity has been determined. Based on the proximity context, communication device 102 has knowledge that person 114 is within hearing proximity, and additionally knows the identity of person 114. At some arbitrary point during the communication session, remote home assistant 702 transmits audio 800 across the communication session, where the audio includes user name 802 that corresponds to the identity of person 114. In other words, since the communication device knows the identity of person 114, various implementations use this information to scan for associated keywords (e.g., the user's name). Thus, proximity context can include keyword identification in content transmitted across the communication session.

Moving to the lower portion of FIG. 8, communication device 102 determines to modify communication session 700 by generating an audible alert 804. Here, communication device 102 transmits the audible alert 804 across the communication session in a format that enables remote home assistant 702 to consume the audible alert (e.g., play out the alert at a speaker 806). Alternately or additionally, communication device 102 plays the audible alert 804 at speaker 112 at an audio output level determined using various techniques described herein.

In some scenarios, a communication device determines an attenuation amount that renders the output audio inaudible. For instance, consider an instance in which the communication device conducts a private mode communication session with the current audio output level set to 25% of the maximum supported audio output level. At some arbitrary point in time during the communication session, the communication device determines (via the proximity context) that a non-call participant is located within the surrounding area. In response to this determination, the communication device identifies an attenuation level to apply to the current audio output level. However, prior to applying the attenuation level, the communication device additionally identifies that, based on the current audio output level, applying the identified attenuation level would result in inaudible audio (e.g., below an audible sound threshold). Accordingly, instead of applying the identified attenuation level, various implementations provide an alert that indicates the conversation is not private and/or the audio output level has been adjusted to a minimum audible level instead of an audio output level based on the identified attenuation.
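A short sketch of the safeguard just described: if the identified attenuation would push the output below an audible threshold, alert the user and clamp to a minimum audible level instead. The threshold and minimum level are assumed values.

```python
# Sketch of the inaudible-output safeguard (values are assumed): apply the
# identified attenuation unless the result would fall below an audible
# threshold, in which case alert the user and clamp the output to a minimum
# audible level instead.
AUDIBLE_THRESHOLD_DB = 20.0     # assumed lowest useful output level
MINIMUM_AUDIBLE_DB = 25.0       # assumed clamped output level

def apply_attenuation(current_db: float, attenuation_db: float):
    target = current_db - attenuation_db
    if target < AUDIBLE_THRESHOLD_DB:
        alert = "Unable to lower volume. This conversation is not private."
        return MINIMUM_AUDIBLE_DB, alert
    return target, None

print(apply_attenuation(60.0, 15.0))   # (45.0, None)
print(apply_attenuation(25.0, 15.0))   # (25.0, 'Unable to lower volume. ...')
```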
To illustrate, consider now FIG. 9 that demonstrates example alerts in accordance with one or more implementations. In various scenarios, FIG. 9 can be considered a continuation of one or more examples described with respect to FIGS. 1-8. The upper portion of FIG. 9 includes communication device 102 of FIG. 1, where the communication device has determined that the identified attenuation level would render audio output at speaker 112 inaudible. Accordingly, communication device 102 displays alert 900 to indicate to the user that the current conversation is not private and/or is at risk of being overheard. Alternately or additionally, the communication device reduces the audio output level to a minimum audible output level, rather than an audio output level with the identified attenuation level. The minimum audible output level can be determined in any suitable manner, such as through a default value, a lookup table, and so forth. Some implementations play an audible alert that indicates the communication session is not private, such as alert 902 output on speaker 112 illustrated in the lower portion of FIG. 9. While the visual alert and the audible alert are illustrated separately in FIG. 9, other implementations display an alert and output an audible alert concurrently and/or transmit the alert(s) to the remote communication device participating in the communication session.

By automatically scanning a surrounding area to determine a proximity context, a communication device can help protect information exchanged over the session by automatically modifying the communication session when perceived risks are identified. This can include the communication device automatically accessing lookup tables and/or default values to help determine what parameters to change and how. However, sometimes the user desires to have more control over how a communication session gets modified. Accordingly, various implementations provide the user with access to enter user-defined configuration settings to drive what modifications are applied based on a proximity context.
Consider now FIG. 10 that illustrates an example user interface in accordance with one or more implementations. In various scenarios, FIG. 10 illustrates a continuation of one or more examples described with respect to FIGS. 1-9. FIG. 10 includes example user interface 1000 that represents any suitable type of user interface displayed by a communication device, such as communication device 102 of FIG. 1. Here, user interface 1000 provides the user with an ability to enter user-defined configuration settings associated with a proximity context. As an example, user interface 1000 includes selectable control 1002 and selectable control 1004, which provide the user with the ability to enable and disable modifications based on a proximity context. By activating selectable control 1002 (labeled "Enabled"), the user directs the communication device to modify a communication session based on a proximity context as further described herein. Conversely, by activating selectable control 1004 (labeled "Disabled"), the user directs the communication device to ignore and/or disallow modifications to a communication session based on a proximity context.

As another example, user interface 1000 includes navigable tab 1006a (labeled "Distance"), navigable tab 1006b (labeled "Context"), navigable tab 1006c (labeled "User Identity"), and navigable tab 1006d (labeled "Miscellaneous"). When selected, each navigable tab exposes user-defined configuration settings. For example, navigable tab 1006a includes text fields that enable the user to designate what attenuation level is applied for various distance locations of non-call participants. Thus, data entered into text field 1008 corresponds to an attenuation amount, such as 10, 5, 20, etc., which user interface 1000 further denotes by displaying a unit of measure ("dB") for each respective input field. Similarly, data entered into text field 1010 corresponds to a distance associated with the attenuation amount entered in text field 1008, which user interface 1000 denotes by displaying a unit of measure ("m") at each respective input field. In turn, the communication device can use the values entered in each of these fields to determine attenuation levels as further described herein. For example, some implementations generate a new lookup table with the user-defined information. Alternately or additionally, the user can activate control 1012 to indicate that the communication device should use default values for an attenuation-to-distance lookup table, or activate control 1014 to indicate that the communication device should use the user-defined settings.
As another example, user interface 1000 includes navi device initiates the communication session . Various imple
gable tab 1006a (labeled “ Distance ” ), navigable tab 1006b mentations include multiple remote communication devices
( labeled “ Context”), navigable tab 1006c ( labeled “ User in the communication session, such as a conference call.
Identity ” ), and navigable tab 10060 ( labeled “ Miscella- 35 At 1104 , various implementations determine a proximity
neous” ). When selected , each navigable tab exposes user context associated with an area surrounding the local com
defined configuration settings . For example , navigable tab munication , such as by using a depth sensor to transmit
1006a includes text fields that enable the user to designate and /or receive electromagnetic waveforms over a surround
what attenuation level is applied for various distance loca ing area as further described herein . Alternately or addition
tions of non -call participants. Thus, data entered into text 40 ally , some implementations determine an operating context
field 1008 corresponds to an attenuation amount, such as 10 , of the local communication device , such as whether the
5 , 20 , etc., which user interface 1000 further denotes by communication device is operating in a private mode or a
displaying a unit of measure (“ dB ” ) for each respective input speaker mode, where modifications to a communication are
field . Similarly , data entered into text field 1010 corresponds applied when the local communication device operates in
to a distance associated with the attenuation amount entered 45 the private mode , but are not applied when the local com
in text field 1008 , which user interface 1000 denotes by munication device operates in the speaker mode. Determin
displaying a unit of measure (“ m ” ) at each respective input ing the proximity context can include using a single sensor,
field . In turn , the communication device can use the values ormultiple sensors as further described herein , where some
entered in each of these fields to determine attenuation levels implementations perform multiple iterations of transmitting
as further described herein . For example, some implemen- 50 waveforms, receiving waveforms, and analyzing the wave
tations generate a new lookup table with the user-defined formswith a same sensor or different sensors for respective
information . Alternately or additionally,the user can activate iterations
control 1012 to indicate that the communication device
.
Upon determining a proximity context, various imple
should use default values for an attenuation -to - distance mentations analyze the proximity context to determine one
lookup table, or activate control 1014 to indicate that the 55 ormore characteristics associated with a non -call participant
communication device should use the user -defined settings . within the area surrounding the local communication device
Other types of settings can be exposed through user at 1106. Some scenarios determine a presence , a distance , an
interface 1000 as well, such as user-defined proximity events identity, an age , a direction of movement, a direction in
through navigable tab 1000b (e.g., an identified velocity, a which the user faces relative to the local communication
number of identified non -call participants, an identified key 60 device, identifying key words, and so forth . This can include
word , etc. ). Navigable tab 1006c provides access to user applying facial recognition algorithms, voice recognition
defined information about a particular user , such as an algorithms, speech recognition algorithms, and so forth .
identified hearing loss, an indication to ignore communica Alternately or additionally , some implementations identify
tion session modifications for a particular user, etc. Navi whether to apply modifications and /or whatmodifications to
gable tab 1006d provides access to miscellaneous user- 65 apply based on the proximity context, such as by accessing
defined customizations, such as default sensors to enable for a lookup table to determine attenuation levels as further
proximity context detection , proximity scanning periodicity , described herein .
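To make the attenuation-to-distance lookup concrete, the following minimal Python sketch shows one way user-defined distance and attenuation pairs (such as those entered through text fields 1008 and 1010) might be stored and queried, falling back to default values when the user selects control 1012. This is an illustrative sketch only; the data values, function names, and fallback behavior are assumptions rather than details taken from the implementations described herein.

# Hypothetical sketch of an attenuation-to-distance lookup, assuming distances
# in meters and attenuation in dB as denoted by user interface 1000.

DEFAULT_TABLE = [(1.0, 20.0), (3.0, 10.0), (5.0, 5.0)]  # (distance_m, attenuation_db)

def build_table(user_pairs=None):
    """Return a lookup table, preferring user-defined pairs (control 1014)
    over default values (control 1012)."""
    return sorted(user_pairs) if user_pairs else list(DEFAULT_TABLE)

def attenuation_for_distance(table, distance_m):
    """Pick the attenuation for the closest matching distance bucket:
    a nearer non-call participant gets a larger attenuation."""
    for max_distance, attenuation_db in table:
        if distance_m <= max_distance:
            return attenuation_db
    return 0.0  # participant is farther away than any configured bucket

if __name__ == "__main__":
    table = build_table([(2.0, 15.0), (4.0, 8.0)])   # user-defined settings
    print(attenuation_for_distance(table, 1.5))       # -> 15.0
    print(attenuation_for_distance(table, 6.0))       # -> 0.0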
In response to analyzing the proximity context, various implementations automatically modify the communication session based on the proximity context at 1108, such as by displaying an alert at the local communication device, transmitting an audible alert to the remote communication device over the communication session, attenuating the output audio level, amplifying the output audio level, and so forth.

Upon making the modification(s), various implementations proceed to 1110 to determine whether the communication session is still in progress. When it has been determined that the communication session has ended, the method proceeds to 1112. However, if the communication session is still in progress, some implementations return to 1104 to determine a new proximity context and/or whether to make any new modifications. Accordingly, various aspects scan the surrounding area over the duration of the communication session to make modifications based on new proximity context information in real time. In turn, updates to the proximity context can influence how modifications are selected.

To illustrate, consider now FIG. 12, which illustrates a method 1200 that switches how modifications based on a proximity context are selected in accordance with one or more implementations. The method can be performed by any suitable combination of hardware, software, and/or firmware. In at least some embodiments, aspects of the method can be implemented by one or more suitably configured hardware components and/or software modules, such as communication module 110, device assistant module 116, range detection module 118, and/or identity detection module 120 of FIG. 1. While the method described in FIG. 12 illustrates these steps in a particular order, it is to be appreciated that any specific order or hierarchy of the steps described here is used to illustrate an example of a sample approach. Other approaches may be used that rearrange the ordering of these steps. Thus, the order of the steps described here may be rearranged, and the illustrated ordering of these steps is not intended to be limiting.

At 1202, various implementations establish a communication session between a local communication device and a remote communication device. This can include any suitable type of communication session, such as a real-time voice communication session, a real-time video communication session, and so forth. In some implementations, the local communication device initiates the communication session, while in other implementations the remote communication device initiates the communication session. Various implementations include multiple remote communication devices in the communication session, such as a conference call.

In response to establishing the communication session, some implementations generate a proximity context to detect a presence of a non-call participant at 1204. Alternately or additionally, various implementations identify various characteristics about the non-call participant as further described herein. Generating the proximity context can include using a depth sensor, an audio sensor, a camera, and so forth. Some implementations transmit electromagnetic waves, and analyze captured return waves to determine the presence of the non-call participant. Accordingly, at 1206, the method determines whether a presence of a non-call participant is detected by analyzing the proximity context. If no presence is detected, the method returns to 1204, and continues to scan for the presence of a non-call participant. However, if a presence is detected, the method proceeds to 1208.

At 1208, various implementations attempt to determine an identity of the non-call participant, examples of which are provided herein. Accordingly, the method proceeds to 1210 to determine whether an identity of the non-call participant has been resolved. If the identity has not been resolved, the method proceeds to 1212 for range-based modifications. Conversely, if the identity has been resolved, the method proceeds to 1214 for identity-based modifications.

At 1212, various implementations modify the communication session based on range detection information, such as distance, directional information (e.g., which way the non-call participant is facing, what direction the non-call participant is moving in), and so forth. This can include modifications based on other types of proximity context information as well, such as background noise levels, proximity alerts based on distance (e.g., "there is a person located within hearing distance"), and so forth. However, since the identity has not been resolved, these modifications do not include identity-based modifications.

However, at 1214, in response to resolving the identity, various implementations modify the communication session based on the identity as further described herein. For example, as further described herein, some implementations store user-defined configuration settings for a particular user, such as "Ignore modifications for Gladys", "Send audible alert when Gladys is identified", and so forth. In turn, the user-defined configuration settings are used to determine how to modify the communication session in lieu of other modifications. Alternately or additionally, some implementations combine the identity-based modifications with distance-based modifications.

When the communication session has been modified, such as at 1212 or 1214, the method returns to 1204 to repeat the process and generate a new proximity context that can be used to monitor the presence of the non-call participant and/or other characteristics associated with the detected non-call participant. This allows for real-time updates such that the communication device can discern when the non-call participant moves closer or further away. In turn, this can change how the communication device identifies and/or applies modifications to the communication session. For example, consider a scenario in which the non-call participant is initially too far away for the communication device to resolve an identity. In such a scenario, the communication device would apply range-based modifications to the communication session since the identity was unresolved. However, as the non-call participant moves closer, the communication device is able to resolve an identity, and switches to a different detection system (e.g., switches from a range-based detection system to an identity-based detection system). Accordingly, by continuously monitoring for a presence of a non-call participant (as well as generating a proximity context), a communication device can make modifications in real time. In turn, this helps the communication device protect information without active user input during the communication session.

Having described various examples of communication session modifications based on a proximity context, consider now a discussion of an example device that can be used for various implementations.
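As a rough illustration of the switching behavior described with respect to FIG. 12, the following Python sketch loops for the duration of a communication session, applies range-based modifications while the non-call participant's identity is unresolved, and switches to identity-based modifications once the identity is resolved. The callback and function names are hypothetical and merely stand in for modules such as range detection module 118 and identity detection module 120; this is a sketch of the control flow, not an implementation of the described techniques.

# Hypothetical sketch of the FIG. 12 flow; detector callbacks stand in for
# range detection and identity detection modules.

def monitor_session(session_active, detect_presence, resolve_identity,
                    apply_range_mods, apply_identity_mods):
    while session_active():                      # loop until the call ends
        context = detect_presence()              # 1204/1206: scan surrounding area
        if context is None:
            continue                             # no non-call participant detected
        identity = resolve_identity(context)     # 1208/1210
        if identity is None:
            apply_range_mods(context)            # 1212: distance/direction based
        else:
            apply_identity_mods(identity, context)   # 1214: user-defined settings

# Example usage with canned results: the participant is detected twice,
# first too far away to identify, then close enough to resolve an identity.
if __name__ == "__main__":
    states = iter([{"distance_m": 6.0}, {"distance_m": 1.5}])
    ids = iter([None, "Gladys"])
    remaining = [2]

    def session_active():
        remaining[0] -= 1
        return remaining[0] >= 0

    monitor_session(
        session_active,
        detect_presence=lambda: next(states),
        resolve_identity=lambda ctx: next(ids),
        apply_range_mods=lambda ctx: print("range-based mods", ctx),
        apply_identity_mods=lambda who, ctx: print("identity-based mods for", who),
    )

In practice, the detection callbacks would be driven by sensors and recognition algorithms such as those described below rather than by canned values.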
Example Device

FIG. 13 illustrates various components of an example electronic device 1300, such as communication device 102 of FIG. 1, which can be utilized to implement various aspects as further described herein. Electronic device 1300 can be, or include, many different types of devices capable of implementing communication session modifications based on a proximity context in accordance with one or more implementations.

Electronic device 1300 includes communication transceivers 1302 that enable wired or wireless communication of device data 1304, such as received data and transmitted data. While referred to as a transceiver, it is to be appreciated that communication transceivers 1302 can additionally include separate transmit antennas and receive antennas without departing from the scope of the claimed subject matter. Example communication transceivers include Wireless Personal Area Network (WPAN) radios compliant with various Institute of Electrical and Electronics Engineers (IEEE) 802.15 (Bluetooth™) standards, Wireless Local Area Network (WLAN) radios compliant with any of the various IEEE 802.11 (WiFi™) standards, Wireless Wide Area Network (WWAN) radios for cellular telephony (3GPP-compliant), wireless metropolitan area network radios compliant with various IEEE 802.16 (WiMAX™) standards, and wired Local Area Network (LAN) Ethernet transceivers.

Electronic device 1300 may also include one or more data-input ports 1306 via which any type of data, media content, and inputs can be received, such as user-selectable inputs, messages, music, television content, recorded video content, and any other type of audio, video, or image data received from any content or data source. Data-input ports 1306 may include Universal Serial Bus (USB) ports, coaxial-cable ports, and other serial or parallel connectors (including internal connectors) for flash memory, Digital Versatile Discs (DVDs), Compact Disks (CDs), and the like. These data-input ports may be used to couple the electronic device to components, peripherals, or accessories such as keyboards, microphones, or cameras.

Electronic device 1300 of this example includes processor system 1308 (e.g., any of application processors, microprocessors, digital-signal processors, controllers, and the like) or a processor and memory system (e.g., implemented in a system-on-chip), which processes computer-executable instructions to control operation of the device. A processing system may be implemented at least partially in hardware, which can include components of an integrated circuit or on-chip system, a digital-signal processor, an application-specific integrated circuit, a field-programmable gate array, a complex programmable logic device, and other implementations in silicon and other hardware. Alternatively, or in addition, the electronic device can be implemented with any one or combination of software, hardware, firmware, or fixed-logic circuitry that is implemented in connection with processing and control circuits, which are generally identified as processing and control 1310. Although not shown, electronic device 1300 can include a system bus, crossbar, interlink, or data-transfer system that couples the various components within the device. A system bus can include any one or combination of different bus structures, such as a memory bus or memory controller, a data protocol/format converter, a peripheral bus, a universal serial bus, a processor bus, or a local bus that utilizes any of a variety of bus architectures.

Electronic device 1300 also includes one or more memory devices 1312 that enable data storage, examples of which include random access memory (RAM), non-volatile memory (e.g., read-only memory (ROM), flash memory, EPROM, EEPROM, etc.), and a disk storage device. Memory devices 1312 are implemented at least in part as a physical device that stores information (e.g., digital or analog values) in storage media, which does not include propagating signals or waveforms. The storage media may be implemented as any suitable types of media such as electronic, magnetic, optic, mechanical, quantum, atomic, and so on. Memory devices 1312 provide data storage mechanisms to store the device data 1304, other types of information or data, and various device applications 1314 (e.g., software applications). For example, operating system 1316 can be maintained as software instructions within memory devices 1312 and executed by processor system 1308.

In some aspects, memory devices 1312 include communication module 1318, device assistant module 1320, range detection module 1322, and identity detection module 1324. While these modules are illustrated and described as residing within memory devices 1312, other implementations of these modules can alternately or additionally include software, firmware, hardware, or any combination thereof.

Communication module 1318 manages various aspects of a communication session, such as initiating and/or managing protocol messaging used to establish a communication session over a network. Alternately or additionally, communication module 1318 interfaces with various sensors, such as an audio input sensor and/or a video input sensor, to obtain audio and/or video to transmit over the communication session. Various implementations alternately or additionally output audio and/or video received over the communication session as further described herein.

Device assistant module 1320 provides call management capabilities, such as the ability to modify various parameters associated with a communication session as further described herein. This can include analyzing a proximity context associated with an area surrounding electronic device 1300 and/or an operating context to determine what modifications to apply while the communication session is in progress.

Range detection module 1322 maps an area surrounding the communication device to provide a proximity context, such as through the use of a depth sensor as further described herein. In turn, the proximity context can be used by device assistant module 1320 and/or identity detection module 1324 to determine what modifications to apply to the communication session. Identity detection module 1324 provides authentication of a user's identity, such as through facial recognition algorithms and/or voice recognition. Alternately or additionally, identity detection module 1324 identifies various key words through speech recognition algorithms applied to audio as further described herein.

Electronic device 1300 also includes input/output sensors 1326, which generally represent any combination of sensors that can be used to determine a proximity context, examples of which are provided herein. Various implementations of range detection module 1322 and/or identity detection module 1324 interface with, configure, and/or control various features of input/output sensors 1326 to determine a proximity context and/or an identity of a non-call participant, such as when a sensor is enabled/disabled, what direction electromagnetic waves are transmitted in, a radiation pattern, and so forth.
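The division of labor described above can be illustrated with a small, hypothetical sketch of how a device assistant module might combine a proximity context (produced with the aid of range detection and identity detection) with the device's operating context to select a modification. The class, field, and threshold values below are invented for illustration, assuming the private-mode/speaker-mode behavior described with respect to method 1100.

# Hypothetical sketch of how a device assistant module might combine a
# proximity context with the device's operating context to choose a modification.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ProximityContext:
    non_call_participant_present: bool
    distance_m: Optional[float] = None
    identity: Optional[str] = None

def choose_modification(ctx: ProximityContext, operating_mode: str,
                        ignore_list=("Gladys",)):
    """Return a description of the modification to apply, if any."""
    if operating_mode != "private":          # speaker mode: no modifications
        return None
    if not ctx.non_call_participant_present:
        return None
    if ctx.identity in ignore_list:          # user-defined per-person setting
        return None
    if ctx.distance_m is not None and ctx.distance_m < 2.0:
        return "attenuate-output"            # someone is within hearing range
    return "display-alert"

if __name__ == "__main__":
    ctx = ProximityContext(True, distance_m=1.2, identity=None)
    print(choose_modification(ctx, "private"))   # -> attenuate-output
    print(choose_modification(ctx, "speaker"))   # -> None

This sketch is deliberately simplified; an actual device assistant could weigh many more characteristics (age, direction of movement, identified key words) as described herein.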
Electronic device 1300 also includes audio and video processing system 1328 that processes audio data and passes through the audio and video data to audio system 1330. Audio system 1330 and display system 1332 may include any modules that process, display, or otherwise render audio, video, display, or image data. Display data and audio signals can be communicated to an audio component and to a display component via a radio frequency link, S-video link, HDMI, composite video link, component-video link, digital video interface, analog-audio connection, or other similar communication link, such as media-data port 1334. In some implementations, audio system 1330 and display system 1332 are external components to electronic device 1300. Alternatively, or additionally, audio system 1330 and/or display system 1332 can be an integrated component of the example electronic device 1300, such as part of an integrated speaker and/or an integrated display and touch interface.

Display system 1332 represents any suitable system that can be used to render images, such as an organic light-emitting diode (OLED) display, a Liquid Crystal Display (LCD), a light-emitting diode display (LED), an electroluminescent display (ELD), a plasma display panel (PDP), and so forth. In some implementations, display system 1332 includes touch input capabilities, where input can be received through physical interactions with the display device (e.g., fingers, styluses, etc.). Various implementations use combinations of hardware, firmware, and/or software to generate a device capable of rendering content.

In view of the many possible aspects to which the principles of the present discussion may be applied, it should be recognized that the implementations described herein with respect to the drawing figures are meant to be illustrative only and should not be taken as limiting the scope of the claims. Therefore, the techniques as described herein contemplate all such implementations as may come within the scope of the following claims and equivalents thereof.
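The claims that follow recite, among other things, scanning communications from the remote device for a keyword associated with an identified non-call participant and, in response, forwarding an audible alert indicating that the session is not private. Purely as an illustrative sketch, and not as the claimed implementation, that flow might resemble the following Python fragment; the keyword mapping, function names, and alert text are assumptions.

# Hypothetical sketch of keyword-triggered alerting: once a non-call
# participant's identity is resolved, incoming speech from the remote device
# is scanned for keywords linked to that identity (e.g., the person's name),
# and an audible "not private" alert is sent back when one is heard.

KEYWORDS_BY_IDENTITY = {"Gladys": {"gladys", "birthday"}}

def scan_for_keyword(transcript_words, identity):
    keywords = KEYWORDS_BY_IDENTITY.get(identity, set())
    return any(word.lower() in keywords for word in transcript_words)

def handle_remote_audio(transcript_words, identity, send_audible_alert):
    if identity and scan_for_keyword(transcript_words, identity):
        # Automatically notify the remote party that the session is not private
        # and that the identified non-call participant is present.
        send_audible_alert(f"This conversation is not private; {identity} is nearby.")

if __name__ == "__main__":
    handle_remote_audio(["is", "Gladys", "there"], "Gladys", print)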
We claim:

1. A method comprising:
establishing, at a local communication device, a communication session with a remote communication device;
identifying, using the local communication device and while the communication session is in progress, a presence of a non-call participant in an area surrounding the local communication device;
determining an identity of the non-call participant;
in response to determining the identity of the non-call participant, scanning communications from the remote computing device for a keyword associated with the identity; and
in response to identifying the keyword in the communications from the remote computing device, automatically, and without human intervention, forwarding an audible alert to the remote computing device that indicates that the communication session is not private and indicates the presence of the non-call participant.

2. The method as recited in claim 1, wherein said determining comprises determining the identity by using a facial recognition algorithm.

3. The method as recited in claim 1, wherein said determining comprises determining the identity by using a voice recognition algorithm.

4. The method as recited in claim 1, further comprising:
using the identity to determine an age associated with the non-call participant;
determining an attenuation to apply to an audio output level associated with the communication session based, at least in part, on the age; and
applying the attenuation to audio output at the local communication device.

5. The method as recited in claim 4, wherein said determining the attenuation further comprises determining the attenuation based on range detection information.

6. The method as recited in claim 4, further comprising:
determining that the non-call participant has moved outside of the area surrounding the local communication device; and
amplifying the audio output level to revert back from said applying the attenuation to the audio output.

7. The method as recited in claim 1, further comprising modifying the communication session based on the identity of the non-call participant by using user-defined settings to determine how to perform said modifying.

8. A local communication device comprising:
one or more sensors;
one or more processors; and
one or more processor-executable instructions that, responsive to execution by the one or more processors, enable the communication device to perform operations including:
establishing, at the local communication device, a communication session with a remote communication device;
identifying, using the one or more sensors while the communication session is in progress, a presence of a non-call participant in an area surrounding the local communication device;
determining an identity of the non-call participant;
in response to determining the identity of the non-call participant, scanning communications from the remote computing device for a keyword associated with the identity; and
in response to identifying the keyword in the communications from the remote computing device, automatically, and without human intervention, forwarding an audible alert to the remote computing device that indicates that the communication session is not private and indicates the presence of the non-call participant.

9. The communication device as recited in claim 8, wherein said determining further comprises determining the identity by using a facial recognition algorithm.

10. The communication device as recited in claim 8, said operations further including:
using the identity to determine an age associated with the non-call participant;
determining an attenuation to apply to an audio output level associated with the communication session based, at least in part, on the age; and
applying the attenuation to audio output at the local communication device.

11. The communication device as recited in claim 10, said operations further including:
determining that the non-call participant has moved outside of the area surrounding the local communication device; and
amplifying the audio output level to revert back from said applying the attenuation to the audio output.

12. The communication device as recited in claim 10, wherein said determining the attenuation further comprises determining the attenuation based on range detection information.

13. The communication device as recited in claim 8, said operations further including modifying the communication session based on the identity of the non-call participant by using user-defined settings to determine how to perform said modifying.

14. The communication device as recited in claim 8, wherein said one or more sensors include a depth sensor.

15. The communication device as recited in claim 8, wherein said determining further comprises determining the identity by using a voice recognition algorithm.

16. A local communication device comprising:
a communication module, implemented at least in part in hardware, to establish a communication session with a remote communication device;
an identity detection module, implemented at least in part in hardware, to identify, while the communication session is in progress, an identity of a non-call participant present in an area surrounding the local communication device and to, in response to determining the identity of the non-call participant, scan communications from the remote computing device for a keyword associated with the identity; and
a device assistant module, implemented at least in part in hardware, to, in response to identification of the keyword in the communications from the remote computing device, automatically, and without human intervention, modify the communication session by forwarding an audible alert to the remote computing device that indicates that the communication session is not private and indicates the presence of the non-call participant.

17. The communication device as recited in claim 16, the device assistant module being further to use the identity to determine an age associated with the non-call participant, determine an attenuation to apply to an audio output level associated with the communication session based, at least in part, on the age, and apply the attenuation to audio output at the local communication device.

18. The communication device as recited in claim 17, the device assistant module being further to determine that the non-call participant has moved outside of the area surrounding the local communication device, and amplify the audio output level to revert back from said application of the attenuation to the audio output.

19. The communication device as recited in claim 17, wherein to determine the attenuation is to determine the attenuation based on range detection information.

20. The communication device as recited in claim 16, the device assistant module being further to modify the communication session based on the identity of the non-call participant by using user-defined settings to determine how to perform said modification.