COURSE TITLE: INTRODUCTION TO ARTIFICIAL INTELLIGENCE
MODULE TITLE: UNDERSTANDING THE NON-STARTER APPLICATION
Chapter 15 – UNDERSTANDING THE NON-STARTER APPLICATION
This book's previous chapters looked at what AI is and isn't, as well as which problems it can effectively solve and which ones remain out of reach. Even with all of this information, it is easy to come up with a potential application that will never see the light of day because AI simply cannot satisfy that particular requirement. AI also has a tendency to spawn solutions to problems that do not actually exist. Although such a solution may look impressive, nobody will purchase it unless it addresses a genuine need; technologies succeed only when they satisfy requirements that users are willing to pay for. The last sections of this chapter look at these solutions to problems that don't exist.
Limits of AI
You might not realize you're talking to a machine when you talk to Alexa. The machine doesn't understand what you're saying, doesn't know you as a person, and doesn't really want to talk to you; it only does what the algorithms you created for it and the data you feed it tell it to do. Nonetheless, the outcomes are remarkable. Without realizing it, it is easy to view the AI as an extension of a human-like entity and anthropomorphize it. However, an AI lacks the essentials described in the following sections.
Creativity
An AI's supposedly creative output can be found in an endless variety of articles, websites, music, art, and writing. The issue is that AI cannot actually create anything. Think about thought patterns when you think about creativity. Beethoven, for instance, had a distinct perspective on music. Even if you aren't familiar with all of Beethoven's works, you can still recognize a characteristic piece because the music follows a particular pattern shaped by how Beethoven thought.
An AI can produce another Beethoven-like piece by viewing his thought process mathematically, which the AI does by learning from examples of Beethoven's music.
Mathematical principles form the basis for the new composition. In fact, thanks to the mathematics of patterns, you can hear an AI compose in the style of the Beatles at https://techcrunch.com/2016/04/29/paul-mccartificial-intelligence/. The issue with equating creativity with mathematics is that mathematics is not creative. To be creative is to come up with a novel way of thinking that no one has used before (for more information, see https://www.csun.edu/vcpsy00h/creativity/define.htm). Creativity isn't simply the act of thinking something new; it's the process of creating an entirely new category of thought.
If you insist on the mathematical perspective, creativity also entails developing a different perspective, which basically means defining a different kind of dataset. An AI can work only with the data you give it. It is unable to generate its own data; it can only alter the data it already has, which is the data it learned from. This concept of perspective is elaborated upon in Chapter 13's sidebar titled "Understanding teaching orientation." A human must decide which data orientation to provide for an AI to learn something novel, different, or amazing.
Whether it's music, art, writing, or any other activity that produces something others can see, hear, touch, or otherwise interact with, creating means defining something real. Imagination, as an abstraction of creation, falls even further outside the capabilities of AI. You can imagine things that are not real and can never be real. Imagination is the mind's play with what might be if the rules weren't in the way. A strong imagination is frequently the source of true creativity.
Just as it cannot develop new data or thought patterns without utilizing existing resources, an AI must also exist within the confines of reality. As a result, it is highly unlikely that anyone will ever create an imaginative AI. Imagination cannot be achieved without creative or intrapersonal intelligence, and an AI possesses neither.
Like many human characteristics, imagination is emotional, and AI lacks feelings. In fact, when comparing the capabilities of an AI to those of a human, it frequently pays to simply ask whether the task requires emotion.
Original ideas
To develop an idea is to imagine something, make something real from what was imagined, and then use that real-world example of something that never existed before. Humans require high levels of creative, interpersonal, and intrapersonal intelligence to successfully generate an idea. Creating something new is fine if you want to define one-of-a-kind versions of something or entertain yourself. However, for it to become an idea, you must share it with others in a way that allows them to see it as well.
DEFICIENCIES IN THE DATA
The "Considering the Five Mistruths in Data" section of Chapter 2 explains the data issues that an AI must overcome in order to carry out the tasks for which it was designed. The drawback is that, unless an accompanying wealth of example data free of these mistruths is available, an AI typically cannot quickly identify data errors. Humans, on the other hand, are often able to spot the lies quickly. Through imagination and creativity, a human can spot the lies because humans have seen more examples than any AI ever will. An AI is stuck in reality, so it cannot imagine the lie in the way a human can.
There are so many ways that mistruths creep into data that it is impossible to list them all. People frequently add these lies without even thinking about it. In fact, perspective, bias, and frame of reference all play a role in the occurrence of mistruths, making them difficult to avoid at times. Because an AI cannot detect all the lies, the data used to make decisions will always be deficient in some way. Whether the deficiency affects the AI's capacity to produce useful output depends on its nature and severity, as well as the capabilities of the algorithms.
However, the most bizarre form of data deficiency to consider is when a human actually desires a false output. The only way to overcome this particular human problem is through the subtle communication provided by interpersonal intelligence, which an AI lacks. This situation occurs more frequently than most people believe. Someone, for instance, buys new clothes. They look awful, at least to you (and clothes can be incredibly subjective). However, if you're wise, you'll say that the clothes look great. The person is not seeking your impartial opinion; rather, the person is seeking your support and approval. The question is no longer "How do these clothes look?" which is what an AI would hear, but rather "Do you support me?" or "Will you back my decision to buy these clothes?" You might be able to partially address the issue by suggesting accessories that go well with the clothes or by using other techniques, such as subtly hinting that the person might not even wear the clothes in public.
Speaking hurtful truths is another problem that an AI cannot handle because it lacks emotion. A hurtful truth is one in which the recipient receives information that harms their mental, physical, or emotional well-being rather than providing anything useful. A child might not be aware, for instance, that one parent was unfaithful to the other. Because both parents have passed away, the information is no longer relevant, so it would be best to let the child remain happy. However, someone comes along and discusses the unfaithfulness in detail, ensuring that the child's memories are ruined. The child suffers but gains nothing. An AI could cause the same kind of harm by reviewing family information in ways that the child would never consider. After discovering the unfaithfulness through a combination of police reports, hotel records, store receipts, and other sources, the AI tells the child about it, again causing harm by telling the truth. The AI's lack of emotional intelligence (empathy) is what leads to the disclosure; the AI does not comprehend the child's need to remain blissfully unaware of the parents' fidelity. Sadly, even when a dataset contains enough accurate and truthful information for an AI to produce a useful result, the outcome may be more harmful than beneficial.
Applying AI incorrectly
The limits of AI define the realm of opportunity for applying AI correctly. Even within this realm, however, you can get an unexpected or useless result. You could, for instance, give an AI a number of inputs and then ask it for the probability that certain events will take place based on those inputs. When there is sufficient data, the AI can produce a result that matches the mathematical foundation of the input data. The AI, however, is unable to generate new data, devise solutions based on that data, envision novel methods of working with that data, or offer suggestions for putting a solution into action. All of these activities live within the human domain. A probability prediction is all you should expect.
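As a minimal sketch of the kind of probability prediction an AI can reasonably be expected to produce, the following hypothetical Python example trains a classifier on a small made-up dataset and outputs only an event probability. The feature names and numbers are invented for illustration, and scikit-learn and NumPy are assumed to be available.

# A minimal sketch: an AI-style model returns only a probability,
# not a plan of action. The dataset and feature names are invented.
from sklearn.linear_model import LogisticRegression
import numpy as np

# Hypothetical inputs: [hours of rain, soil moisture] -> did flooding occur?
X = np.array([[0.5, 0.2], [1.0, 0.4], [2.5, 0.7], [3.0, 0.8],
              [4.0, 0.9], [0.2, 0.1], [1.5, 0.5], [3.5, 0.85]])
y = np.array([0, 0, 1, 1, 1, 0, 0, 1])  # 1 = flooding occurred

model = LogisticRegression().fit(X, y)

# The model answers "how likely?" for new inputs and nothing more.
new_conditions = np.array([[2.0, 0.6]])
probability = model.predict_proba(new_conditions)[0, 1]
print(f"Estimated probability of flooding: {probability:.2f}")
# Deciding what to do about that probability remains a human task.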
There is also the question of whether the dataset contains opinions, which are much more common than you might think. An opinion is not the same as a fact: a fact can be proven, and everyone agrees that it is true (at least, everyone with an open mind). Opinions arise when there aren't enough scientific facts to support the data, and they also arise when feelings are involved. Some people prefer to rely on opinion rather than fact, even when confronted with conclusive evidence to the contrary, because the opinion makes them feel comfortable and the fact does not. When opinion is involved, AI will almost always fail in someone's eyes; even with the best algorithm available, someone will be dissatisfied with the results.
The preceding sections of this chapter discuss the problems that arise when an AI is applied in less-than-real circumstances or is expected to perform tasks beyond its reach. Sadly, humans do not always appear to understand that AI will never be able to perform the kinds of tasks that many of us believe it can. These unreasonable expectations have numerous sources, including the following:
» Media: The goal of all forms of media is to elicit an emotional response from us. But that emotional response is the very thing that leads to high expectations. We imagine that an AI can perform certain tasks when, in reality, it cannot.
» Human anthropomorphism: In addition to the emotions that the media evokes, humans have a tendency to form attachments to everything. People frequently name their vehicles, converse with them, and wonder whether the cars are feeling bad when they break down. An AI can't believe, can't comprehend, can't communicate (really), and can do nothing other than crunch numbers, lots and lots of numbers. When the expectation is that the AI will suddenly develop feelings and behave humanly, the outcome is doomed to failure.
» Undefined problem: An AI can tackle a defined problem, but not an undefined one. Given a set of potential inputs, a human can use extrapolation to generate a matching question. Imagine that most of a series of tests fails, but some test subjects achieve the desired outcome. An AI might attempt to improve test results through interpolation: finding new test subjects with characteristics similar to those of the survivors. A human, by contrast, can use extrapolation to improve test results by asking why some test subjects succeeded and locating the cause, regardless of whether the cause lies in the subjects' characteristics (perhaps the subjects' attitude changed or the environment changed). For an AI to solve a problem at all, a human must be able to express the problem in a way that the AI can comprehend. An AI simply cannot solve undefined problems, those that represent something outside of human experience. (The sketch after this list illustrates the interpolation/extrapolation distinction.)
» Inadequate technology: You'll find instances in this book where a lack of technology prevented a solution from being found. It isn't practical to ask an AI to take on a problem when the technology is inadequate. For instance, self-driving cars would not have been possible in the 1960s due to a lack of sensors and processing power; today's technological advancements have made them feasible.
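To make the interpolation point in the list above concrete, here is a small, hypothetical Python sketch (all values invented; NumPy assumed available) in which a model fitted to data from a limited range gives sensible answers inside that range but unreliable ones outside it, which is where human questioning and extrapolation have to take over.

# Hypothetical sketch: a model fitted on a narrow range interpolates well
# but extrapolates poorly. All numbers are invented for illustration.
import numpy as np

# Training data only covers doses from 1 to 5 (arbitrary units).
dose = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
response = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

# Fit a simple quadratic; it matches the observed range closely.
coeffs = np.polyfit(dose, response, deg=2)
model = np.poly1d(coeffs)

print("Interpolation (dose 3.5):", round(model(3.5), 2))   # plausible
print("Extrapolation (dose 20):", round(model(20.0), 2))   # not trustworthy
# The model happily returns a number for dose 20, but nothing in the data
# supports it. Asking why some subjects responded differently is a human job.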
When scientists and others make promises about the benefits of AI that fail to materialize within a predetermined time frame, the result is an AI winter: a period in which funding for AI dries up and research moves at a glacial pace. There have been two AI winters worldwide since 1956, and the world is currently experiencing its third AI summer. The causes, effects, and outcomes of an AI winter are discussed in greater detail in the following sections.
Understanding the AI winter
It's difficult to say exactly when AI started. After all, even the ancient Greeks had the idea of creating mechanical men, like those in the Greek myths of Hephaestus and of Galatea from Pygmalion, and we can assume that these mechanical men would be intelligent. One could therefore argue that the first AI winter actually occurred sometime between the fall of the Roman empire and the Middle Ages, when people dreamed of an alchemical method for transforming mind into matter, such as the Takwin of Jabir ibn Hayyan, the Homunculus of Paracelsus, and the Golem of Rabbi Judah Loew. However, these endeavors are unfounded tales, not the scientific efforts that would emerge in 1956 with the establishment of government-funded artificial intelligence research at Dartmouth College.
One aspect of the problem with overpromising capabilities is that early advocates of AI believed that all human thought could be formalized as algorithms. Two outcomes followed: it turned out to be impossible to formalize all mathematical reasoning, but in the areas where formalization is possible, the reasoning that forms the foundation of AI can indeed be automated.
Overoptimism is another aspect of the problem with overpromising. In the early days of artificial intelligence, computers solved algebraic word problems, proved geometry theorems, and learned to speak English. The first two outputs are reasonable when you consider that the computer is merely parsing input and transforming it into a form that it can manipulate. The third output is the source of the issue. The computer was not speaking English at all; instead, it was converting textual data into digital patterns that were in turn converted to analog signals and output as something that sounded like speech, but wasn't. The computer had no understanding of English or any other language. The scientists heard English, but the computer merely processed 0s and 1s in a particular pattern that it did not perceive as language.
Demonstrations were frequently staged so that the computer appeared to be doing more than it actually was, deceiving even the researchers. For instance, Joseph Weizenbaum's ELIZA appeared to listen to input and then respond intelligently. In reality, the responses were canned; the application was not hearing, understanding, or saying anything. Even so, ELIZA was the first chatterbot, however small a step forward it represented. The issue that AI faces today is that the hype has simply been far greater than the actual technology. Scientists and promoters continue to set themselves up for failure by displaying glitz rather than real technology, because people are disappointed when they discover that the hype is false.
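To show how little machinery "canned" responses actually require, here is a hypothetical Python sketch in the spirit of ELIZA. It is not Weizenbaum's actual program or rule set, just an illustration of pattern-matched, pre-written replies; the patterns and replies are invented.

# A toy, ELIZA-like responder: keyword matching plus pre-written replies.
# No understanding is involved; the patterns and replies are invented here.
import re

CANNED_RULES = [
    (re.compile(r"\bi feel (.*)", re.IGNORECASE),
     "Why do you feel {0}?"),
    (re.compile(r"\bmy (mother|father)\b", re.IGNORECASE),
     "Tell me more about your {0}."),
    (re.compile(r"\byes\b", re.IGNORECASE),
     "I see. Please go on."),
]
DEFAULT_REPLY = "Can you elaborate on that?"

def respond(user_text: str) -> str:
    """Return the first canned reply whose pattern matches the input."""
    for pattern, template in CANNED_RULES:
        match = pattern.search(user_text)
        if match:
            return template.format(*match.groups())
    return DEFAULT_REPLY

print(respond("I feel tired today"))   # -> "Why do you feel tired today?"
print(respond("My mother called me"))  # -> "Tell me more about your mother."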
Predictions like these brought about the first AI winter:
» H. A. Simon stated that "digital computers will be the world's chess champion within ten years" (1958) and that "machines will be capable, within twenty years, of doing any work a man can do" (1965).
» Allen Newell stated that a significant new mathematical theorem would be discovered and proved by a digital computer within ten years (1958).
» Marvin Minsky stated that "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved" (1967) and that "in from three to eight years we will have a machine with the general intelligence of an average human being" (1970).
Looking at these claims now, it is easy to understand why governments withdrew funding. Chapter 5's "Considering the Chinese Room argument" section provides an example of one of the many arguments that people outside the AI community used to refute these predictions.
The second AI winter was caused by the same issues that led to the first: overpromising, overexcitement, and overoptimism. In this instance, the boom began with the expert system, an AI program that follows logical rules to solve problems. The Japanese also got involved with their Fifth Generation Computer project, a computer system built around massive parallel processing. The goal was to build a computer that could do many things at once, like a human brain. Finally, John Hopfield and David Rumelhart revived connectionism, a theory that depicts mental processes as networks of interconnected simple units.
The end came like the bursting of an economic bubble. Even when run on specialized computer systems, the expert systems proved brittle. Newer, more common computer systems could replace the specialized systems easily and at significantly lower cost, and the specialized systems turned out to be economic sinkholes. In fact, this economic bubble also brought down the Japanese Fifth Generation Computer project, which proved extremely costly to build and maintain.
Rebuilding expectations with new objectives
An AI winter need not be disastrous. On the contrary: these times can be viewed as an opportunity to step back and reflect on the various issues that arose during the frantic effort to create something extraordinary. During the first AI winter, two major schools of thought gained ground, in addition to minor benefits in other areas:
» Programming logic: The goal of this school of thought is to present a set of sentences that express facts and rules about a particular problem domain in a logical manner (executed as an application). Prolog, Answer Set Programming (ASP), and Datalog are examples of programming languages that employ this paradigm. Rule-based programming, on which expert systems are based, is one form of this approach (a minimal rule-based sketch appears after this list).
» Reasoning from common sense: This school of thought attempts to simulate the human capacity to predict the outcome of a sequence of events based on an object's properties, purpose, intentions, and behavior. Commonsense reasoning is an essential part of AI because it affects a wide range of fields, such as computer vision, robotic manipulation, taxonomic reasoning, action and change, temporal reasoning, and qualitative reasoning.
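As a minimal, hypothetical illustration of the facts-plus-rules style described in the programming-logic bullet (written in Python rather than Prolog, with made-up facts and a single made-up rule), a forward-chaining loop might look like this:

# Toy forward-chaining rule engine: facts and rules stated as data,
# conclusions derived by repeatedly applying the rules. Everything here
# (facts, rules, domain) is invented purely for illustration.
facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

# Each rule: if all premises (with variables) match known facts,
# add the conclusion. Variables are strings starting with '?'.
rules = [
    # parent(X, Y) and parent(Y, Z) => grandparent(X, Z)
    ([("parent", "?x", "?y"), ("parent", "?y", "?z")],
     ("grandparent", "?x", "?z")),
]

def substitute(term, bindings):
    return tuple(bindings.get(t, t) for t in term)

def match(premise, fact, bindings):
    """Try to unify one premise with one fact; return updated bindings or None."""
    new = dict(bindings)
    for p, f in zip(premise, fact):
        if p.startswith("?"):
            if new.setdefault(p, f) != f:
                return None
        elif p != f:
            return None
    return new

def forward_chain(facts, rules):
    """Apply rules until no new facts can be derived."""
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Collect every variable binding that satisfies all premises.
            partial = [dict()]
            for premise in premises:
                partial = [b2 for b in partial for f in facts
                           if (b2 := match(premise, f, b)) is not None]
            for bindings in partial:
                new_fact = substitute(conclusion, bindings)
                if new_fact not in facts:
                    facts.add(new_fact)
                    changed = True
    return facts

print(forward_chain(set(facts), rules))
# Derives ("grandparent", "alice", "carol") from the two parent facts.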
Additional changes brought about by the second AI winter have contributed to AI's current prominence. These changes included the following:
» Using standard hardware: Expert systems and other applications of AI once required specialized hardware because standard hardware lacked the necessary memory and processing power. However, these custom systems proved extremely fragile when confronted with unusual circumstances, as well as difficult to program and costly to maintain. Standard hardware is designed for general use and is less prone to such problems (for more information, see the "Creating Solutions in Search of a Problem" section later in this chapter).
» Recognizing the need to learn: Expert systems and other early forms of AI were extremely rigid because they required specialized programming to meet each requirement. It soon became clear that computers would need to learn from their data, their sensors, and their environment.
» Creating a flexible environment: Between the first and second AI winters, the systems that could do useful work followed strict guidelines, and they were prone to grotesque output errors when the inputs did not quite meet expectations. It became clear that any new systems would need to handle real-world data, which is frequently formatted incorrectly, riddled with errors, and incomplete.
» Using fresh strategies: Imagine that you work for a government that has promised a variety of amazing AI-based results, none of which appear to come to pass. That was the issue with the second AI winter: various governments had invested in AI's promises in a variety of ways. When it became clear that the existing strategies weren't working, these same governments began looking for alternative approaches to advancing computing. Some of these approaches have led to interesting outcomes, such as advancements in robotics.
The point is that AI winters are not always detrimental. In fact, these occasions are crucial for taking stock of the progress, or lack thereof, of current strategies. Such thoughtful moments are difficult to take when everyone is rushing headlong toward the next hoped-for achievement.
When considering AI winters and the resulting renewal of AI with updated ideas and objectives, it is worth remembering the adage coined by the American scientist and futurist Roy Charles Amara, also known as Amara's law: we tend to overestimate a technology's short-term impact and underestimate its long-term impact. There is always a point in time when people cannot clearly see the long-term impact of a new technology and the revolutions it brings with it, despite all the hype and disappointment. No matter how many winters it still has to endure, AI is here to stay and will alter our world, for better and for worse.
Creating Solutions in Search of a Problem
Two people are examining a jumble of what appear to be junk-like objects: wires, wheels, metal bits, and other oddities. The first person asks the second, "What does it do?" The second responds, "What doesn't it do?" Yet the invention that appears to do everything usually accomplishes nothing at all. The media offers numerous examples of this solution in search of a problem. We laugh because everyone has encountered such a solution before. Even when they work, these solutions are a waste of time because they don't address a pressing need. The following sections discuss the AI solution in search of a problem in greater detail.
Defining a gadget
AI-related gadgets abound. Some of them are actually useful, others aren't, and a few fall somewhere in between. For instance, Alexa comes with many useful features, but it also comes with many gadgets that make you wonder why you'd ever use them. Although John Dvorak's article "Just Say No to Amazon's Echo Show" (https://www.pcmag.com/commentary/354629) may appear overly pessimistic, it offers some food for thought regarding the Alexa features.
An AI gizmo is any application that appears to do something interesting at first glance but ultimately fails to perform useful tasks. When deciding whether something is merely a gadget, look for the following characteristics. The first letters of the bullets spell CREEP, a reminder not to create a creepy AI application:
» Cost-effective: An AI application must demonstrate that it costs the same as or less than existing solutions before anyone will agree to use it. Everyone is seeking a bargain; paying more for the same thing won't get anyone's attention.
» Repeatable: An AI application's output must be reproducible, even when the circumstances of the task change. In contrast to procedural solutions, people expect an AI to adapt and learn from experience, yet they still expect reproducible results from it.
» Efficient: Users look for alternatives when an AI solution suddenly consumes a lot of resources. Businesses in particular have become extremely focused on completing tasks with the fewest resources possible.
» Effective: Providing a practical, cost-effective, and efficient benefit is not enough; an AI must also offer a complete answer to a problem. Effective solutions let someone hand the job to the automation without constantly rechecking the results or propping up the automation.
» Practical: A useful application must provide a practical advantage. The benefit has to be something that the end user needs, such as access to a map or medication reminders.
A surefire sign that an AI application will fail is when it needs an infomercial to enthrall potential users. Oddly enough, the applications that succeed with the least effort are those whose purpose and design are clear from the start. A voice-recognition program's purpose is obvious: you speak, and the computer responds to what you say with useful actions. You don't have to convince anyone of the value of voice-recognition software. This book describes a number of these truly useful applications, none of which require the infomercial or hard-sell approach. When people start asking what something does, the project needs to be rethought.
Understanding when humans are better at something
This chapter focuses on keeping humans in the loop when using AI. You've seen sections describing things we can do better than AI, assuming an AI can even attempt them. Humans are best suited to handle anything requiring imagination, creativity, truth-seeking, opinion management, or idea generation. Oddly enough, the limits of AI leave plenty of room for humans to excel. Many of these areas go unexplored right now because humans spend too much time doing the same boring things that an AI could do instead.
In the future, AI may serve as a human assistant, and this application of AI will become increasingly common over time. The best AI applications will be those that aim to assist, rather than replace, people. Yes, robots will take over from humans in dangerous situations. However, humans will still need to decide how to avoid making the situation worse, so they will need to be in a safe location from which to direct the robot. Humans and technology working together make this happen.