EPQ Project Self-Review and Evaluation
Project Title: Should AI be given human-like intelligence?
1. Evaluation of Project Aims and Outcomes
Initially, my project goal was a cross-disciplinary assessment of whether humanity should attempt to create artificially intelligent beings that simulate human thought. By any confident measure, this goal was met and exceeded. The project went beyond an expected overview of literature and technology, extending to a cross-disciplinary integration of arguments from computer science, ethics, sociology, philosophy and policy development. My dissertation therefore not only provided a well-reasoned response to the central research question, a confident and justified 'no', but also offered a justified 'yes' to something entirely different: the development of Intelligence Augmentation (IA), the position that humankind should develop AI only as a tool to assist human intelligence, never as an attempt to replicate it. This indicates that the project's success was more than an assessment of literature; it was a conclusive, sustained and recommended answer to the question.
2. Analysis of Project Methodology and Interpretations
Methodology:
The methodology I chose was a literature review, which was necessary given the breadth of the question. The significant strength of this approach is that I was able to gather decades' worth of thinking, from the preliminary developments in computing outlined in Alan Turing's 1950 paper to the policy decisions surrounding AI use today, such as those set out in the EU's AI Act. By surveying work across the years, I was able to note the intersections of thought, e.g. the "AI Winters," the "alignment problem," and the debate between functionalism and embodied intelligence, which contributed to the nuance of my argument and established a clear trajectory of a now-emergent, interdisciplinary field, not one relegated solely to philosophy or computer science.
The only major limitation of my approach is this very breadth. By incorporating the work of various fields of study, the resulting project could not attain the specialised depth that a more focused inquiry would have achieved. For example, while the paper briefly introduces the "black box" problem, an in-depth computer science investigation would cite a specific neural network architecture and analyse its layers to understand one form of abstraction. Likewise, the philosophical works I connect are appropriately related to my argument, but each could constitute a dissertation on its own. Thus, while this breadth of perspective allows for a more holistic and interdisciplinary approach to the question, it sacrifices depth in many critical areas.
Interpretations:
My central interpretation throughout is that the ethical, social, and existential dangers of trying to create AI that is indistinguishable from humans are far too great and outweigh any potential benefits. I support this claim with much of the discourse, citing relevant arguments from Bostrom, Zuboff and Searle. The connections drawn between various fields of inquiry, extending beyond philosophy to sociology, ethics, and technology, are both nuanced and well-established. For example, I took particular interest in relating a discussion of consciousness, often an esoteric philosophical issue, to developments such as surveillance capitalism and socio-political accountability: ethical issues with tangible consequences.
The primary limitation of my interpretation is that it risks confirmation bias. I am staunchly opposed to the pursuit of Artificial General Intelligence (AGI). While I thoroughly defend this position, I do less to "steel-man" the contrary viewpoint. For example, the section dedicated to the hopeful AGI projections of Ray Kurzweil and Demis Hassabis should have cited their claims in greater detail before dismissing them. Doing so would have demonstrated a further level of intellectual integrity and could have strengthened my conclusion.
3. Justification of Alternative Approaches (What I Would Do
Differently)
If I were to conduct similar research in future studies, there are two key
features I would change based on the deficiencies above.
First, to narrow the scope of the analysis while still producing impactful research, I would employ a case-study approach. Instead of using wide-ranging examples to illustrate the potential of AI development, I would focus on the humanisation of AI in one specific field where its impact is especially consequential, concentrating solely on AI already in use in the criminal justice system or in medical diagnostics. With a focused research object, I could cite the specific algorithms used, the datasets generated, the associated legal loopholes and pitfalls, and the impact on human life (i.e., on a person's freedom or health). My findings would then be grounded in real-life situations rather than hypothetical ones, and I could analyse the results from a more qualitative, rather than purely risk-based, perspective.
Second, to reduce bias and strengthen my interpretative justification, I would conduct primary research via expert interviews. While the literature review provided a cohesive outlook, gathering original perspectives from researchers studying AI, as well as from policymakers who disagree with the consensus, would have offered new arguments or rebuttals to my findings. This would lend my project original credibility, almost like a journalistic endeavour, and allow my conclusion to be tested against those working in the field.
4. Perceptive Conclusions on the Research Process and Future
Learning
This was my first foray into independent academic research, so the most challenging aspect and the most valuable lesson stem from the experience of the project itself. I had to synthesise a vast, disconnected array of sources into a linear argument. Learning how to use JSTOR to find journal articles, assess an author's authority (which I noted in my annotated bibliography alongside any bias or lack thereof), and ultimately incorporate such varying perspectives into a cohesive piece was invaluable. This was also the most critical lesson in writing a literature review to support a persuasive research essay: understanding the broader conversations within a field and identifying the gap my argument could fill.
These skills will integrate seamlessly into future university studies. Interdisciplinary problem-solving, critically analysing how thinking has developed across decades, and defending a single position with substantial evidence are cornerstones of any humanities or social sciences undergraduate or graduate career, and even ethically informed, scientifically driven degrees would benefit from such skill sets. Furthermore, my EPQ stands me in good stead for university work in Law (acknowledging and mediating conflicting values), Politics (deconstructing persuasive language), Philosophy (where many courses require students to engage with ethical situations), and Computer Science (where ethics features increasingly in curricula). My EPQ has answered a question I once thought unanswerable and equipped me with the confidence to take on university-level research.