Artificial Intelligence
Artificial intelligence (AI), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. It is a field of research in computer science that develops and studies methods and software that enable machines
to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals.[1] Such
machines may be called AIs.
Some high-profile applications of AI include advanced web search engines (e.g., Google Search); recommendation systems (used
by YouTube, Amazon, and Netflix); interacting via human speech (e.g., Google Assistant, Siri, and Alexa); autonomous
vehicles (e.g., Waymo); generative and creative tools (e.g., ChatGPT, Apple Intelligence, and AI art); and superhuman play and analysis
in strategy games (e.g., chess and Go).[2] However, many AI applications are not perceived as AI: "A lot of cutting edge AI has filtered into general
applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore."[3][4]
Alan Turing was the first person to conduct substantial research in the field that he called machine intelligence.[5] Artificial intelligence was founded
as an academic discipline in 1956,[6] by those now considered the founding fathers of AI: John McCarthy, Marvin Minsky, Nathaniel Rochester,
and Claude Shannon.[7][8] The field went through multiple cycles of optimism,[9][10] followed by periods of disappointment and loss of funding,
known as AI winter.[11][12] Funding and interest vastly increased after 2012 when deep learning surpassed all previous AI techniques,[13] and after
2017 with the transformer architecture.[14] This led to the AI boom of the early 2020s, with companies, universities, and laboratories
overwhelmingly based in the United States pioneering significant advances in artificial intelligence.[15]
The growing use of artificial intelligence in the 21st century is influencing a societal and economic shift towards increased automation, data-driven
decision-making, and the integration of AI systems into various economic sectors and areas of life, impacting job
markets, healthcare, government, industry, education, propaganda, and disinformation. This raises questions about the long-term effects, ethical
implications, and risks of AI, prompting discussions about regulatory policies to ensure the safety and benefits of the technology.
The various subfields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research
include reasoning, knowledge representation, planning, learning, natural language processing, perception, and support for robotics.[a] General
intelligence—the ability to complete any task performable by a human at an at least equal level—is among the field's long-term goals.[16]
To reach these goals, AI researchers have adapted and integrated a wide range of techniques, including search and mathematical
optimization, formal logic, artificial neural networks, and methods based on statistics, operations research, and economics.[b] AI also draws
upon psychology, linguistics, philosophy, neuroscience, and other fields.[17]
Goals
The general problem of simulating (or creating) intelligence has been broken into subproblems. These consist of particular traits or capabilities that
researchers expect an intelligent system to display. The traits described below have received the most attention and cover the scope of AI
research.[a]
Reasoning and problem-solving
Early researchers developed algorithms that imitated the step-by-step reasoning that humans use when they solve puzzles or make
logical deductions.[18] By the late 1980s and 1990s, methods were developed for dealing with uncertain or incomplete information, employing
concepts from probability and economics.[19]
Many of these algorithms are insufficient for solving large reasoning problems because they experience a "combinatorial explosion": They become
exponentially slower as the problems grow.[20] Even humans rarely use the step-by-step deduction that early AI research could model. They solve
most of their problems using fast, intuitive judgments.[21] Accurate and efficient reasoning is an unsolved problem.
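The combinatorial explosion described above can be made concrete with a short sketch (illustrative only, not from the article): counting the nodes in a complete search tree shows why each extra step of look-ahead multiplies the work by the branching factor.

```python
# Illustration of the "combinatorial explosion" in exhaustive search:
# a complete search tree with branching factor b and depth d contains
# 1 + b + b^2 + ... + b^d nodes, so work grows exponentially with depth.

def tree_size(branching: int, depth: int) -> int:
    """Total number of nodes in a complete search tree of the given depth."""
    return sum(branching ** level for level in range(depth + 1))

# With a modest branching factor of 10, each additional level of
# look-ahead multiplies the number of states to examine by roughly 10x.
for depth in (2, 4, 8):
    print(depth, tree_size(10, depth))
```

This is why practical systems rely on pruning, heuristics, or the fast approximate judgments mentioned above rather than exhaustive step-by-step deduction.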
Knowledge representation
An ontology represents knowledge as a set of concepts within a domain and the
relationships between those concepts
Knowledge representation and knowledge engineering[22] allow AI programs to answer questions intelligently and make deductions about real-
world facts. Formal knowledge representations are used in content-based indexing and retrieval,[23] scene interpretation,[24] clinical decision
support,[25] knowledge discovery (mining "interesting" and actionable inferences from large databases),[26] and other areas.[27]
A knowledge base is a body of knowledge represented in a form that can be used by a program. An ontology is the set of objects, relations,
concepts, and properties used by a particular domain of knowledge.[28] Knowledge bases need to represent things such as objects, properties,
categories, and relations between objects;[29] situations, events, states, and time;[30] causes and effects;[31] knowledge about knowledge (what we
know about what other people know);[32] default reasoning (things that humans assume are true until they are told differently and will remain true
even when other facts are changing);[33] and many other aspects and domains of knowledge.
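One common way to realize such a knowledge base is as a set of (subject, relation, object) triples over which a program can make simple deductions. The following is a minimal hypothetical sketch (the facts and function names are invented for illustration, not taken from any particular system), showing category membership deduced by following "is_a" links transitively:

```python
# A toy knowledge base: facts are (subject, relation, object) triples.
facts = {
    ("Tweety", "is_a", "canary"),
    ("canary", "is_a", "bird"),
    ("bird", "has", "wings"),
}

def is_a(kb, thing, category):
    """Deduce membership in a category by following 'is_a' links transitively."""
    if (thing, "is_a", category) in kb:
        return True  # directly stated fact
    # Otherwise, check every intermediate category that 'thing' belongs to.
    return any(is_a(kb, mid, category)
               for (subj, rel, mid) in kb
               if subj == thing and rel == "is_a")

print(is_a(facts, "Tweety", "bird"))  # deduced via canary, not stated directly
```

Real knowledge representation systems (for example, description-logic ontologies) add typed relations, constraints, and defeasible defaults on top of this basic idea.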
Among the most difficult problems in knowledge representation are the breadth of commonsense knowledge (the set of atomic facts that the
average person knows is enormous);[34] and the sub-symbolic form of most commonsense knowledge (much of what people know is not
represented as "facts" or "statements" that they could express verbally).[21] There is also the difficulty of knowledge acquisition, the problem of
obtaining knowledge for AI applications.