The Struggle over YouTube’s Recommendation Algorithm

A Sociotechnical Research Paper


presented to the faculty of the
School of Engineering and Applied Science
University of Virginia

by

Sarah Snow

April 6, 2021

On my honor as a University student, I have neither given nor received unauthorized aid
on this assignment as defined by the Honor Guidelines for Thesis-Related Assignments.

Sarah Snow

Sociotechnical advisor: Peter Norton, Department of Engineering and Society


The Struggle over YouTube’s Recommendation Algorithm

The online video-sharing platform YouTube has fundamentally changed the way that

users interact with digital content. In early 2012, engineers at the company “encourage[d] people

to spend more time watching, interacting, and sharing” in order to “increase the amount of time

that the viewer will spend watching videos on YouTube” (Meyerson). YouTube CEO Susan

Wojcicki prioritized growth above profit, supporting the major shift from channel-based

subscriptions to a system that revolved around recommendations (Nicas, 2016). By 2017, the

company was celebrating a significant milestone in its efforts to “make YouTube a more

engaging place” with viewers watching over “a billion hours of YouTube’s incredible content

every single day” (Goodrow). However, the deliberately engaging nature of the platform has

been the center of intense controversy. Over the past decade, critics and defenders of the

YouTube recommendation algorithm have utilized various strategies to advance their agendas.

Influential technology reporters, content consumers, academic researchers, and public interest

groups have played an essential role in shaping the algorithm’s evolution. Intense pressure from

reporters and consumers initially prompted YouTube to change its policies, while researchers

and interest groups held the company accountable for making modifications to the

recommendation algorithm.

Review of Research

In the early days of YouTube, research showed that over-consumption of online media

could negatively affect well-being. Shaw and Black showed that “excessive or inappropriate use

of computers and the Internet has been the subject of increased attention in the professional

literature and popular media.” Researchers at the time “seem[ed] to agree that it involves

problematic computer usage that is time consuming and causes distress or impairs one’s

functioning in important life domains” (2008). This work suggests that internet addiction was a

familiar phenomenon, something YouTube engineers should have considered. Kuss and

Griffiths noted that “[social networking site] addiction treatment cannot be total abstinence from

using the Internet” but rather “controlled use of the Internet and its respective functions,

particularly social networking applications” (2011). Internet use was inevitable, but platforms

could reduce the addictive nature of social networking sites.

The increased use of recommender systems raised concerns regarding content diversity.

Pariser feared that recommendation systems would accelerate the “filter bubble effect” and

create ideological echo chambers (2011). Davidson et al. revealed the YouTube recommendation

system utilized “a user’s personal activity (watched, favorited, liked videos) as seeds and

expanded the set of videos by traversing a co-visitation based graph of videos” (2010). Previous

use patterns would be a prominent factor in recommendations. In an early study on “the potential

for online personalization to effectively isolate people from a diversity of viewpoints or content,”

Nguyen et al. found credible evidence that “recommender systems expose users to a slightly

narrowing set of items over time” (2014). O’Callaghan et al. “identif[ied] the existence of an

[extreme right] ideological bubble” in YouTube’s recommendations and “suggest[ed] that it [was]

possible for a user to be immersed in this content following a short series of clicks” (2015).
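
To illustrate the mechanism Davidson et al. describe, the sketch below builds a co-visitation graph from watch sessions and expands a user’s recently watched “seed” videos into candidate recommendations. It is a minimal illustration written for this paper, not YouTube’s code: the function names, toy sessions, and simple pair-counting scheme are all assumptions.

    # A minimal sketch of co-visitation-based candidate generation in the spirit of
    # Davidson et al. (2010). Illustrative only; YouTube's production system is
    # proprietary, and these function names and data are invented for the example.
    from collections import Counter, defaultdict
    from itertools import combinations

    def build_covisitation_graph(sessions):
        """Count how often each pair of videos appears in the same watch session."""
        graph = defaultdict(Counter)
        for session in sessions:
            for a, b in combinations(set(session), 2):
                graph[a][b] += 1
                graph[b][a] += 1
        return graph

    def recommend(graph, seed_videos, k=10):
        """Expand the user's seed videos one hop through the co-visitation graph."""
        scores = Counter()
        for seed in seed_videos:
            for related, count in graph[seed].items():
                if related not in seed_videos:
                    scores[related] += count
        return [video for video, _ in scores.most_common(k)]

    # Toy usage: three watch sessions, then recommendations seeded by one video.
    sessions = [["v1", "v2", "v3"], ["v2", "v3", "v4"], ["v1", "v4"]]
    print(recommend(build_covisitation_graph(sessions), seed_videos={"v2"}))

Because prior activity seeds the traversal, heavily co-watched neighbors of a user’s history dominate the output, which is why researchers worried that such systems could narrow the set of recommended items over time.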

The implementation of deep learning techniques radically changed the YouTube

recommendation system. In 2016, YouTube researchers announced “a fundamental paradigm

shift towards using deep learning as a general-purpose solution for nearly all learning problems”

and an accompanying emphasis on watch time over click-through rate. Engineers working on the recommendation

system would be closely aligned with Google Brain, the artificial intelligence research team

(Covington et al.). YouTube’s goal of increased user engagement was straightforward, but the

recommendation algorithm was becoming increasingly complex.
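
The shift Covington et al. describe can be illustrated with a toy example of watch-time weighting: rather than training a pure click predictor, positive examples are weighted by how long the viewer actually watched. The sketch below is a simplified stand-in, using an off-the-shelf logistic regression instead of a deep network, and its synthetic data and variable names are assumptions made for illustration.

    # A simplified sketch of watch-time weighting in the spirit of Covington et al.
    # (2016): clicked impressions are up-weighted by observed watch time so the
    # model favors videos people watch longer, not merely videos they click.
    # Synthetic data and a plain logistic regression stand in for the real system.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))                    # stand-in (user, video) features
    clicked = rng.integers(0, 2, size=1000)           # 1 if the impression was clicked
    watch_minutes = rng.exponential(5.0, size=1000) * clicked  # 0 when not clicked

    # Positives weighted by watch time; negatives keep unit weight.
    weights = np.where(clicked == 1, watch_minutes, 1.0)
    model = LogisticRegression().fit(X, clicked, sample_weight=weights)

Under such a weighting, two videos with identical click rates are no longer interchangeable: the one that holds viewers longer receives more training weight, which is precisely the engagement emphasis that later drew criticism.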

Influential Media Coverage Shapes Policy Decisions

Critical investigative reporting has been a powerful force in influencing YouTube content

moderation policies. Days after a deadly shooting in Las Vegas, BuzzFeed reporter Charlie

Warzel showed that “conspiratorial content high in search results” on YouTube could cause users to

inadvertently “stumble down an algorithm-powered conspiracy video rabbit hole” (2017). Within

twenty-four hours of the article’s publication, a YouTube spokesperson acknowledged the

company’s effort to “promot[e] more authoritative sources in search results” (Nicas, 2017). Less

than a month after the shooting, a blog post by independent journalist James Bridle showed

verified channels “using YouTube to systematically frighten, traumatize, and abuse children”

(2017). The post went viral, receiving over 175,000 likes. Within two weeks, YouTube publicly

announced a new approach to “protect families on YouTube” that included the application of

“machine learning technology and automated tools to quickly find and escalate [videos] for

human review” (Wright, 2017). Despite these increased efforts, problems surrounding the sexual

exploitation of children on the platform persisted. A viral YouTube video and Reddit post by

Matt Watson on the r/Drama subreddit reignited the debate (Alexander, 2019). Watson showed

the recommendation algorithm was “facilitating pedophiles’ ability to connect with each-other,

trade contact info, and link to actual CP in the comments” (MattsWhatItIs). Again, YouTube was

quick to respond. Policy changes to disable comments and limit monetization were the result of

“swift action” to “keep minors and the creator ecosystem safe” (YouTube Team, 2019b).

In the beginning, mainstream media coverage of the recommendation algorithm was

favorable and led to a positive public perception of the platform. During an interview,

YouTube’s engineering director Cristos Goodrow announced the company’s objective to maximize watch

time and “help viewers find the videos that they would enjoy watching.” The interviewer

commended the company’s effort to “increas[e] the quality of its content and promot[e] the right

videos to the right people” (D’Onfro, 2015). Technology columnist Casey Newton remarked that

the recommendations “started to seem weirdly good” and “not only personalized but deadly

accurate” (2017). YouTube CPO Neal Mohan proudly announced that mobile users spent over an

hour during average watch sessions “because of what our recommendation engines are putting in

front of you” (Solsman, 2018). Quartz reporter Ashley Rodriguez noted that the platform was “a

master of getting you to watch videos you didn’t know existed” and applauded how “the

algorithms are constantly evolving to get smarter” (2018). Meanwhile, Wojcicki announced five

top priorities for the company and, while “looking forward to YouTube’s best, most transparent and

most exciting year yet,” made no mention of the recommendation system (2018).

Despite initially positive coverage, concern over the algorithm escalated after a former

YouTube engineer publicly criticized the platform’s potential to spread radicalized content.

Guillaume Chaslot worked on the recommendation system at YouTube for three years. He

became highly critical of the company for “not optimizing for what is truthful, or balanced, or

healthy for democracy” (P. Lewis, 2018). Chaslot used computer simulations to model a

YouTube user’s behavior and found that “YouTube systematically amplifies videos that are

divisive, sensational and conspiratorial” (P. Lewis, 2018). The company was quick to refute

Chaslot’s claims, stating that it “strongly disagree[d] … with the methodology, data and, most important, the

conclusions made in [his] research” (P. Lewis, 2018). However, the story gained traction on

Twitter among many influential technology journalists. Prominent academic writer Zeynep

Tufekci called it “a fascinating, important in-depth investigation of how YouTube’s

recommendation algorithm apparently functioned during the 2016 election” (2018a). New York

Times reporter Sheera Frenkel told her followers: “If you want to read something that’ll rattle

you … read this Guardian story on the rabbit holes YouTube sends you down” (2018). Another

verified account commended Chaslot’s team for going “above and beyond to conduct their own

research and dispute every claim that came from YouTube/Google’s representatives” (Fishkin,

2018). With criticism escalating, YouTube backtracked and updated the statement to include an

appreciation for the “work to shine a spotlight on this challenging issue” (P. Lewis &

McCormick, 2018).

Public pressure continued to build, and the company released a series of reactionary and

ambiguous statements. An investigation by The Wall Street Journal found “recommendations

often lead users to channels that feature conspiracy theories, partisan viewpoints, and misleading

videos, even when those users haven’t shown interest in such content” (Nicas, 2018). This time,

YouTube executives acknowledged the need to “help prevent the spread of blatantly misleading,

low-quality, offensive or downright false information” but did not announce any specific

changes to the algorithm (Nicas, 2018). In an op-ed, Tufekci dubbed YouTube “the great

radicalizer.” She hypothesized the platform “may be one of the most powerful radicalizing

instruments of the 21st century” (2018b). In response to Tufekci’s claim, Wojcicki disclosed that

YouTube was “figur[ing] out how [they] can continue to diversify the content you’re seeing,

continue to improve recommendations, and rely on the authoritativeness of the publishers”

(Thompson, 2018). Wojcicki did not elaborate on how YouTube would determine the

“authoritativeness of a publisher.”

The media continued its effort to hold YouTube accountable, and another compelling

investigation led to the first official policy change regarding the algorithm. In a simulation of the

typical viewer experience, BuzzFeed employees performed 147 YouTube searches and

continuously clicked on the top-recommended video. In one specific viewing session, “the list of

consecutively recommended videos … goes from a BBC News clip to a series of QAnon

conspiracy videos after 10 jumps” (O’Donovan et al., 2019). However, the results were

inconclusive, and the user experience was inconsistent. Repeated searches of identical queries

would often result in different recommendations (O’Donovan et al., 2019). One researcher was

“not sure anyone — perhaps even many inside the company — truly understands YouTube’s

recommendation algo” (Warzel, 2019). The system was flawed, and the goal of improving

recommendations was unfulfilled. Shortly after the simulation results were published, YouTube

announced an official effort to “begin reducing recommendations of borderline content and

content that could misinform users in harmful ways,” regardless of whether the videos directly

violate YouTube’s community guidelines (YouTube Team, 2019a). Although significant, the

statement from YouTube “did not reveal much about how it would determine which videos

would be excluded from recommendations,” and skepticism surrounding the algorithm remained

(Wakabayashi, 2019).
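
The audit procedure described above, searching and then repeatedly clicking the top recommendation, can be expressed as a simple crawl loop. The sketch below is a hypothetical reconstruction of that procedure rather than BuzzFeed’s actual tooling: top_recommendation is a placeholder for whatever collection method an auditor uses, and the toy lookup table exists only to make the example runnable.

    # A hypothetical sketch of the "rabbit hole" audit procedure described above:
    # start from a search result and repeatedly follow the top recommendation,
    # recording the chain of videos reached. top_recommendation is a placeholder,
    # since YouTube's interfaces and BuzzFeed's tooling are not public.
    def follow_rabbit_hole(start_video_id, top_recommendation, depth=10):
        """Return the chain of video IDs reached by always taking the top pick."""
        chain = [start_video_id]
        for _ in range(depth):
            next_id = top_recommendation(chain[-1])
            if next_id is None or next_id in chain:  # stop on dead ends or loops
                break
            chain.append(next_id)
        return chain

    # Toy usage: a fixed lookup table stands in for the live recommender.
    fake_recs = {"search_hit": "v1", "v1": "v2", "v2": "v3"}
    print(follow_rabbit_hole("search_hit", lambda vid: fake_recs.get(vid)))

Repeating such chains across many starting queries, as the BuzzFeed team did with its 147 searches, is what revealed that identical queries could still yield different recommendation paths.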

YouTube Users Share Their Dissatisfaction

With limited access to large-scale user data, compelling stories from individual YouTube

viewers became a necessary tool for analysis. In 2019, a profile of Caleb Cain was on the front

page of the New York Times. Cain described his experience “falling deeper and deeper” into a

community of far-right YouTube personalities. His entire YouTube viewing history was

analyzed, and “the bulk of his media diet came from far-right channels … exploring a part of

YouTube with a darker, more radical group of creators” (Roose, 2019b). Another user was

recommended transphobic videos and said the site “will always be a place that reminds LGBT

individuals that they are hated” (Cook, 2019). A crowdsourced campaign by the Mozilla

Foundation highlighted the abundance of these incidents. One viewer’s recommended feed “just

kept feeding [them] paranoia, fear and anxiety one video after another” (Mozilla, 2019). One

child using the site progressed from watching “Thomas the Tank Engine” videos to a

“compilation that contained graphic depictions of train wrecks” (Mozilla, 2019). These stories

are extreme, but they are not unique. The dangerous and addictive nature of the YouTube

algorithm has impacted many users.

Viewers were vocal on support forums and message boards about their dissatisfaction

with the quality of recommendations. In a widely shared Reddit post, users complained that “the

algorithm is way too reactive,” with a top commenter admitting that he “had to start pulling

[him]self away from amateur political commentaries … that’ll suck you in and keep you glued to

the site” (UnspecifiedIndex). YouTube’s support forum was full of complaints from disgruntled

users. A post entitled “Why is this garbage showing up in my recommended videos?” speculated

that “either the algorithm is broken or [they are] being hacked” (EvilAvocado). The post

received over 1,200 upvotes and more than 300 written responses. One commenter noted that

recommendations were “not just irrelevant but outright offensive … most of the time they have

absolutely NOTHING to do with your current video and are blatantly agendist” (EvilAvocado).

In response to user concerns, YouTube employees suggested “some tips … to help our systems

understand what sorts of videos you actually enjoy” (TeamYouTube, 2019). However, the

suggested tips involved extensive manual flagging and watch history manipulation, and were only

available to users with an account.

Some of YouTube’s most influential users, its own employees, began to speak out against

the company. Multiple employees at the company “wanted to flag troubling videos” or “track

them in a spreadsheet to chart their popularity” but were turned down by their superiors (Bergen,

2019). Company insiders were not immune to the adverse effects. One employee had to restrict

his daughter from accessing YouTube.com after she “was recommended a clip that featured both

a Snow White character drawn with exaggerated sexual features and a horse engaged in a sexual

act” (Bergen, 2019). Lawyers at the company reportedly discouraged employees from

investigating harmful content. “The company would have a bigger liability if there was proof

that staffers knew and acknowledged those videos existed” (Garun, 2019). Meanwhile, Mohan

was publicly referring to the “rabbit hole” effect as “purely a myth” and remarking that “it’s

equally — depending on a user’s behavior — likely that you could have started on a more

extreme video and actually moved in the other direction” (Roose, 2019a). Not everyone agreed

to stay silent, and “at least five senior employees have left YouTube over its unwillingness to

tackle the issue,” bringing attention to the internal turmoil (Garun, 2019).

Dissent Among Academic Researchers

Multiple studies found credible evidence that the recommendation algorithm featured

highly polarized and extremist content. Data & Society published an extensive report on “the

Alternative Influence Network (AIN): an assortment of scholars, media pundits, and internet

celebrities who use YouTube to promote a range of political positions” (R. Lewis, 2018). The

study was not exclusively focused on the recommendation system but found that “members of

the AIN are experiencing great success, with a countless number of their videos showing up in

search results and video recommendations” (R. Lewis, 2018). Researchers had difficulty

identifying “the role of the recommender system in the radicalization process.” However, they

found that “even without personalization, [they] were still able to find a path in which users

could find extreme content from large media channels” (Ribeiro et al., 2020). Another study

specifically focused on personalization found “an overall pattern of opinion reinforcement and

polarization after exposure to algorithm-recommended content” with “the potential to solidify

personal political convictions and encourage polarized opinions” (Cho et al., 2020). Over fifteen

months, a group of researchers analyzed eight million recommendations; the data “indicate[d] that

YouTube experienced a conspiracy boom at the end of 2018,” and the team then “monitored a consistent

decrease in conspiratorial recommendations until the beginning of June 2019” (Faddoul et al.,

2020). The findings suggest that YouTube’s January 2019 promise to “improve the

recommendations experience on YouTube” had tangible consequences for viewers.

Despite these results, contradictory research failed to identify conclusive evidence of

radicalized recommendations. Munger and Phillips attributed the popularity of alt-right content

to “affordances that make content creation easy for fringe political actors who tap into an

existing base of disaffecting individuals” (2019). They did not find statistically significant

evidence of a radicalization pipeline. Another 2019 study went a step further, suggesting

“YouTube’s recommendation algorithm actively discourages viewers from visiting radicalizing or

extremist content … favor[ing] mainstream media and cable news content over independent

YouTube channels” (Ledwich & Zaitsev). This research was one of the first studies to support

deradicalization through recommendations. A more recent study supported “the existence of

distinct political news ‘echo chambers’ on YouTube” but found “little evidence that the

YouTube algorithm is responsible for these trends” (Hosseinmardi et al., 2020).

The proprietary nature of the algorithm contributed to conflicting methodologies and

dissent among academic researchers. Munger and Phillips criticized the Ribeiro study for

“fail[ing] to demonstrate that the algorithm has a noteworthy effect on the audience for Alt-Right

content” (2019). Ribeiro responded to their critique, noting that “their ‘Supply and Demand’

approach is unable to explain the phenomena that we have the most evidence about: user

migration” (2019a). Ribeiro addressed further criticism on his blog, clarifying that “the paper

says migration from these communities did happen, but does not really answer why this

happen[s]” (2019b). The study by Ledwich and Zaitsev was also heavily scrutinized. Princeton

professor Arvind Narayanan “wanted to call it wrong, but that would give the paper too much

credit.” His Twitter thread denouncing the study received nearly 5,000 likes. Amidst the

discussion, Tufekci expressed frustration “that, at the moment, only the companies can fully

study phenomenon such as the behavior of recommendation algorithms” (2019). Rebecca Lewis

joined in the discussion, adding that “quantitative methods are often ill-suited to studying

radicalization on YouTube via the algorithm” (2019). Ribeiro disagreed with Ledwich and

Zaitsev’s methodologies but acknowledged “the research challenges associated with large-scale

measurement and analysis of social media” (2019c). Ledwich and Zaitsev defended the use of

anonymous recommendations and noted the lack of “any solutions around this problem that

would present a representative sample and provide enough data” (2019). The lack of

transparency from YouTube continued to restrict comprehensive independent analysis of the

platform.

Formation of Internet Regulation Advocacy Groups

Many researchers established public interest groups to bring additional awareness to the

controversial nature of the algorithm. TransparencyTube, a website run by Mark Ledwich and

Sam Clark, aimed to address the “absence of reliable data when it comes to the internal and

external working of YouTube.” Their intuitive visualizations “fill[ed] this data vacuum to help

journalists, researchers, and the curious better understand YouTube’s political landscape”

(2020). The project exposed the severity of false election claims, showcasing that “unfounded

claims of widespread election fraud garnered about 137 million views between Nov. 3 and 10”

(Telford). Chaslot started a website dedicated to “spread[ing] the word about YouTube content

amplification by sharing any borderline or polarizing content that YouTube is recommending”

(Our Manifesto). The site outlines eight critical factors in the fight for algorithmic transparency.

The Anti-Defamation League is involved in numerous efforts to reduce online hate, including a

study of YouTube in 2021 (Center for Technology and Society). They found “the audience for

videos from alternative or extremist channels is dominated by people who already have high

levels of racial resentment” (Chen et al.). In addition to the #YouTubeRegrets campaign, the

Mozilla Foundation released a comprehensive list of recommendations for YouTube. They

requested “access to meaningful data,” “better simulation tools,” and “tools that empower, not

limit, large-scale research and analysis” (Geurkink, 2019). These groups leveraged collective

power to advocate for meaningful change.

Conclusion

The evolution of the YouTube recommendation algorithm was not simply a result of

technological advancements. Decisions to modify the algorithm resulted from persistent and calculated

advocacy from users affected by the system. The controversy surrounding YouTube resulted

from a much larger problem within the technology industry. Personalized systems that make

decisions based on user data can fundamentally affect consumer behavior, and the social

implications of these systems deserve attention. It is increasingly common for technology

executives to be motivated by growth and incentivized to keep users engaged with their

platforms at all costs. While users do not have the power to amend these systems directly, they

can be vocal and unapologetic in their demands. Big Tech is listening.

References

Alexander, J. (2019, February 19). YouTube still can’t stop child predators in its comments. The
Verge. https://www.theverge.com/2019/2/19/18229938/youtube-child-exploitation-
recommendation-algorithm-predators

Bergen, M. (2019, April 02). YouTube Executives Ignored Warnings, Letting Toxic Videos Run
Rampant. Bloomberg. https://www.bloomberg.com/news/features/2019-04-02/youtube-
executives-ignored-warnings-letting-toxic-videos-run-rampant

Bridle, J. (2017, November 06). Something is wrong on the internet. Medium.
https://medium.com/@jamesbridle/something-is-wrong-on-the-internet-c39c471271d2

Center for Technology and Society. Anti-Defamation League. https://www.adl.org/who-we-
are/our-organization/advocacy-centers/center-for-technology-and-society

Chen, A., Nyhan, B., Reifler, J., Robertson, R. E., & Wilson, C. (2021). Exposure to Alternative
& Extremist Content on YouTube. Anti-Defamation League.
https://www.adl.org/resources/reports/exposure-to-alternative-extremist-content-on-
youtube

Cho, J., Ahmed, S., Hilbert, M., Liu, B., & Luu, J. (2020). Do Search Algorithms Endanger
Democracy? An Experimental Investigation of Algorithm Effects on Political
Polarization. Journal of Broadcasting & Electronic Media, 64(2).
https://doi.org/10.1080/08838151.2020.1757365

Cook, J. (2019, October 15). Hundreds Of People Share Stories About Falling Down YouTube’s
Recommendation Rabbit Hole. HuffPost. https://www.huffingtonpost.ca/entry/youtube-
recommendation-rabbit-hole-mozilla_n_5da5c470e4b08f3654912991?utm_hp_ref=ca-
tech

Covington, P., Adams, J., & Sargin, E. (2016). Deep Neural Networks for YouTube
Recommendations. Proceedings of the 10th ACM Conference on Recommender Systems.

Davidson, J., Liebald, B., Liu, J., Nandy, P., Van Vleet, T., Gargi, U., Gupta, S., He, Y.,
Lambert, M., Livingston, B., & others. (2010). The YouTube video recommendation
system. Proceedings of the Fourth ACM Conference on Recommender Systems, 293–296.

D’Onfro, J. (2015, July 03). The ‘terrifying’ moment in 2012 when YouTube changed its entire
philosophy. Business Insider. https://www.businessinsider.com/youtube-watch-time-vs-
views-2015-7

[EvilAvocado]. (2019, November 01). Why is this garbage showing up in my recommended
videos? [Online forum post]. YouTube Help.
https://support.google.com/youtube/thread/18280900?hl=en

Faddoul, M., Chaslot, G., & Farid, H. (2020). A longitudinal analysis of YouTube’s promotion
of conspiracy videos. https://arxiv.org/pdf/2003.03318.pdf

Fishkin, R. [@randfish]. (2018, February 03) Some thoughts on
https://theguardian.com/technology/2018/feb/02/how-youtubes-algorithm-distorts-truth,
one of the better pieces of reporting I’ve read on algorithmic recommendations (and a
must-read. [Tweet]. Twitter. https://twitter.com/randfish/status/959865452657754112

Frenkel, S. [@sheeraf]. (2018, February 02) If you want to read something that’ll rattle you don’t
read the memo, read this Guardian story on the rabbit. [Tweet]. Twitter.
https://twitter.com/sheeraf/status/959479975295434752

Garun, N. (2019, April 02). YouTube reportedly discouraged employees from reporting fake,
toxic videos. The Verge. https://www.theverge.com/2019/4/2/18292530/youtube-toxic-
conspiracy-video-employees-internal-report

Geurkink, B. (2019, October 14). Our Recommendation to YouTube. Mozilla.
https://foundation.mozilla.org/en/blog/our-recommendation-youtube/

Goodrow, C. (2017, February 27). You know what’s COOL? A billion hours. YouTube Official
Blog. https://blog.youtube/news-and-events/you-know-whats-cool-billion-hours/

Hosseinmardi, H., Ghasemain, A., Clauset, A., Rothschild, D. M., Mobius, M., & Watts D. J.
(2020). Evaluating the scale, growth, and origins of right-wing echo chambers on
YouTube. https://arxiv.org/abs/2011.12843

Kuss, D. J., & Griffiths, M. D. (2011). Online Social Networking and Addiction—A Review of the
Psychological Literature. International Journal of Environmental Research and Public
Health, 8(9), 3528-3552. https://doi.org/10.3390/ijerph8093528

Ledwich, M., & Clark, S. (2020). A Window into Culture and Politics on YouTube.
Transparency.tube. https://transparency.tube/

Ledwich, M., & Zaitsev, A. (2019, December 30). Response to critique on our paper
“Algorithmic Extremism: Examining YouTube’s Rabbit Hole of Radicalization.”
Medium. https://anna-zaitsev.medium.com/response-to-critique-on-our-paper-
algorithmic-extremism-examining-youtubes-rabbit-hole-of-8b53611ce903

Ledwich, M., & Zaitsev, A. (2020). Algorithmic extremism: Examining YouTube’s rabbit hole
of radicalization. First Monday. https://doi.org/10.5210/fm.v25i3.10419

Lewis, R. (2018). Alternative Influence. Data & Society.
https://datasociety.net/library/alternative-influence/

Lewis, R. [@beccalew]. (2019, December 29) Fantastic thread on why quantitative methods are
often ill-suited to studying radicalization on YouTube via the algorithm. [Tweet
attached]. [Tweet]. Twitter. https://twitter.com/beccalew/status/1211270945672843265

Lewis, P. (2018, February 02). ‘Fiction is outperforming reality’: how YouTube’s algorithm
distorts the truth. The Guardian.
https://www.theguardian.com/technology/2018/feb/02/how-youtubes-algorithm-distorts-
truth

Lewis, P., & McCormick, E. (2018, February 02). How an ex-YouTube insider investigated its
secret algorithm. The Guardian.
https://www.theguardian.com/technology/2018/feb/02/youtube-algorithm-election-
clinton-trump-guillaume-chaslot

MattsWhatItIs. (2019, February 17). Youtube is Facilitating the Sexual Exploitation of Children,
and It’s Being Monetized (2019) [Video]. YouTube.

Meyerson, E. (2012, August 10). YouTube now: Why we focus on watch time. YouTube Official
Blog. https://blog.youtube/news-and-events/youtube-now-why-we-focus-on-watch-time/

Mozilla. (2019). YouTube Regrets. https://foundation.mozilla.org/en/campaigns/youtube-regrets/

Munger, K., & Phillips, J. (2019). A supply and demand framework for YouTube politics. Penn
State Political Science. https://osf.io/73jys

Narayanan, A. [@random_walker]. (2019, December 29) A new paper has been making the
rounds with the intriguing claim that YouTube has a *de-radicalizing* influence.
https://arxiv.org/abs/1912.11211 Having [Tweet]. Twitter.
https://twitter.com/random_walker/status/1211262124724510721

Newton, C. (2017, August 30). How YouTube Perfected the Feed. The Verge.
https://www.theverge.com/2017/8/30/16222850/youtube-google-brain-algorithm-video-
recommendation-personalized-feed

Nguyen, T. T., Hui, P.-M., Harper, F. M., Terveen, L. G., & Konstan, J. A. (2014). Exploring the
filter bubble: The effect of using recommender systems on content diversity. WWW, 677–
686. https://doi.org/10.1145/2566486.2568012

Nicas, J. (2016, June 08). YouTube’s Susan Wojcicki on transforming the video service. The
Wall Street Journal. https://www.wsj.com/articles/youtubes-susan-wojcicki-on-
transforming-the-video-service-1465358465?mod=article_inline

Nicas, J. (2017, October 05). YouTube Tweaks Search Results as Las Vegas Conspiracy
Theories Rise to Top. The Wall Street Journal. https://www.wsj.com/articles/youtube-
tweaks-its-search-results-after-rise-of-las-vegas-conspiracy-theories-1507219180

Nicas, J. (2018, February 07). How YouTube Drives People to the Internet’s Darkest Corners.
The Wall Street Journal. https://www.wsj.com/articles/how-youtube-drives-viewers-to-
the-internets-darkest-corners-1518020478

O’Callaghan, D., Greene, D., Conway, M., Carthy, J., & Cunningham, P. (2015). Down the
(white) rabbit hole: The extreme right and online recommender systems. Social Science
Computer Review, 33(4), 459–478.

O’Donovan, C., Warzel, C., McDonald, L., Clifton, B., & Woolf, M. (2019, January 24). We
Followed YouTube’s Recommendation Algorithm Down The Rabbit Hole. BuzzFeed
News. https://www.buzzfeednews.com/article/carolineodonovan/down-youtubes-
recommendation-rabbithole

Our Manifesto. AlgoTransparency. https://www.algotransparency.org/our-manifesto.html

Pariser, E. (2011). The filter bubble: What the Internet is hiding from you. Penguin UK.

Ribeiro, M. H. (2019a, October 07). Comments on “A Supply and Demand Framework for
YouTube Politics.” Manoel Horta Ribeiro.
https://manoelhortaribeiro.github.io/posts/2019/10/comments-supply-and-demand

Ribeiro, M. H. (2019b, November 20). Comments on “Auditing Radicalization Pathways on
YouTube.” Manoel Horta Ribeiro.
https://manoelhortaribeiro.github.io/posts/2019/08/radicalization-youtube

Ribeiro, M. H. (2019c, December 29). Comments on “Algorithmic Extremism: Examining
YouTube’s Rabbit Hole of Radicalization.” Manoel Horta Ribeiro.
https://manoelhortaribeiro.github.io/posts/2019/12/algorithmic-extremism

Ribeiro, M. H., Ottoni, R., West, R., Almeida, V. A. F., & Meira, W. (2020). Auditing
radicalization pathways on YouTube. Proceedings of the 2020 Conference on Fairness,
Accountability, and Transparency. https://doi.org/10.1145/3351095.3372879

Rodriguez, A. (2018, January 13). YouTube’s recommendations drive 70% of what we watch.
Quartz. https://qz.com/1178125/youtubes-recommendations-drive-70-of-what-we-watch/

Roose, K. (2019a, March 29). YouTube’s Product Chief on Online Radicalization and
Algorithmic Rabbit Holes. The New York Times.
https://www.nytimes.com/2019/03/29/technology/youtube-online-extremism.html

Roose, K. (2019b, June 9). The Making of a YouTube Radical. The New York Times.
https://www.nytimes.com/interactive/2019/06/08/technology/youtube-radical.html

Solsman, J. (2018, January 10). YouTube’s AI is the puppet master over most of what you
watch. CNET. https://www.cnet.com/news/youtube-ces-2018-neal-mohan/

Shaw, M., & Black, D. (2008). Internet Addiction. CNS Drugs, 22(5), 353-365.
https://doi.org/10.2165/00023210-200822050-00001

[TeamYouTube]. (2019, February 19). [YouTube Recommendations] Ask us anything! YouTube
Team will be here Friday February 08 [Online forum post]. YouTube Help.
https://support.google.com/youtube/thread/1456096?hl=en

Telford, T. (2020, December 09). YouTube removes 8,000 channels promoting false election
claims. The Washington Post.
https://www.washingtonpost.com/business/2020/12/09/youtube-false-2020-election-
claims/

Thompson, N. (2018, March 15). Susan Wojcicki on YouTube’s Fight Against Misinformation.
Wired. https://www.wired.com/story/susan-wojcicki-on-youtubes-fight-against-
misinformation/

Tufekci, Z. [@zeynep]. (2018a, February 02) A fascinating, important in-depth investigation of
how YouTube’s recommendation algorithm apparently functioned in the 2016 election:
no matter which political [Article attached]. [Tweet]. Twitter.
https://twitter.com/zeynep/status/959454862646874112

Tufekci, Z. (2018b, March 10). YouTube, the Great Radicalizer. The New York Times Opinion.
http://www.nytimes.com/2018/03/10/opinion/sunday/youtube-politics-radical.html

Tufekci, Z. [@zeynep]. (2019, December 29) Yep, that “paper” isn’t even wrong. One tragedy
of all this is that, at the moment, only the companies can [Tweet attached]. [Tweet].
Twitter. https://twitter.com/zeynep/status/1211333765051670529

[UnspecifiedIndex]. (2018, March 21). The YouTube recommendations algorithm is way too
reactive. I watched one Jordan Peterson video and this is my home page [Online forum
post]. Reddit.
https://www.reddit.com/r/youtube/comments/861xkl/the_youtube_recommendations_alg
orithm_is_way_too/?sort=top

Wakabayashi, D. (2019, January 25). YouTube Moves to Make Conspiracy Videos Harder to
Find. The New York Times. https://www.nytimes.com/2019/01/25/technology/youtube-
conspiracy-theory-videos.html

Warzel, C. (2017, October 04). Here’s How YouTube Is Spreading Conspiracy Theories About
The Vegas Shooting. BuzzFeed News.
https://www.buzzfeednews.com/article/charliewarzel/heres-how-youtube-is-spreading-
conspiracy-theories-about

Warzel, C. [@cwarzel]. (2019, January 24). we ran 147 total “down the rabbit hole” searches
for 50 unique terms, resulting in a total of 2,221 videos [Tweet attached]. [Tweet].
Twitter. https://twitter.com/cwarzel/status/1088543397890027520

Wojcicki, S. (2018, February 02). My Five Priorities for Creators in 2018. YouTube Official
Blog. https://blog.youtube/inside-youtube/my-five-priorities-for-creators-in-2018_1/

Wright, J. (2017, November 22). 5 ways we’re toughening our approach to protect families on
YouTube and YouTube Kids. YouTube Official Blog. https://blog.youtube/news-and-
events/5-ways-were-toughening-our-approach-to

YouTube Team (2019a, January 25). Continuing our work to improve recommendations on
YouTube. YouTube Official Blog. https://blog.youtube/news-and-events/continuing-our-
work-to-improve

YouTube Team (2019b, February 22). Update on our actions related to the safety of minors on
YouTube. YouTube Help. https://support.google.com/youtube/thread/1805616
