AI AGENTS, ENVIRONMENTS by Pardon Toda, Tinotenda Maposa and Grace Kuchekenya
AI is important because of its potential to transform how we live and work. It has been used effectively in business to automate tasks once done by humans, such as customer service, lead generation, fraud detection, and quality control. In many areas AI outperforms humans, for example in analyzing large numbers of legal documents quickly and with minimal errors, thanks to its ability to process massive datasets.
Generative AI tools are expanding rapidly across fields from education to marketing.
What is an AI Agent?
In artificial intelligence, an agent is a computer program or system that is designed to perceive its
environment, make decisions, and take actions to achieve a specific goal or set of goals. The agent
operates autonomously, meaning it is not directly controlled by a human operator. An agent can be
viewed as perceiving its environment through sensors and acting upon that environment through
actuators. Every agent can perceive its own actions, but not always their effects.
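The perceive-decide-act cycle described above can be sketched in code. The two-square vacuum world below is an illustrative example, not one from this text; the percepts, actions, and rules are assumptions chosen to keep the sketch minimal.

```python
# Minimal sense-decide-act loop for a two-location vacuum world.
# The world layout, percepts, and rules are illustrative assumptions.

def perceive(world, location):
    """Sensor: the agent sees only its current square's status."""
    return (location, world[location])

def decide(percept):
    """Agent function: map the current percept to an action."""
    location, status = percept
    if status == "dirty":
        return "suck"
    return "right" if location == "A" else "left"

def act(world, location, action):
    """Actuator: apply the chosen action to the environment."""
    if action == "suck":
        world[location] = "clean"
        return location
    return "B" if action == "right" else "A"

world = {"A": "dirty", "B": "dirty"}
location = "A"
for _ in range(4):  # run the loop a few steps
    percept = perceive(world, location)
    action = decide(percept)
    location = act(world, location, action)

print(world)  # both squares end up clean
```

Note that the agent never inspects the other square directly; it only ever acts on what its sensor reports at the current instant.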
The life cycle of an intelligent agent
1. Design: During the design phase, the objectives and requirements of the intelligent agent are defined. This stage involves determining the goals the agent is expected to achieve, understanding its interaction with the environment, and specifying its capabilities and limitations.
2. Development: In the development stage, the intelligent agent is implemented based on the
design specifications. This involves writing code, integrating algorithms for perception and
decision-making, and testing the functionality of the agent.
3. Training: Training is a crucial stage in the life cycle of an intelligent agent, especially in
machine learning-based agents. During training, the agent learns from data or simulations to
improve its performance and decision-making abilities. This process often involves
reinforcement learning or supervised learning techniques.
4. Deployment: Once trained and tested, the intelligent agent is deployed into its operational
environment. Deployment involves integrating the agent into existing systems or platforms
where it can interact with users or other software components to fulfill its intended purpose.
5. Evaluation: Periodic evaluation of the intelligent agent is necessary to assess its impact on
achieving desired outcomes, its efficiency in decision-making, and its overall effectiveness in
fulfilling its designated tasks. Evaluation helps identify areas for improvement or optimization.
6. Evolution: As technology advances and requirements change, intelligent agents may need to
evolve over time to adapt to new challenges or opportunities. Evolution may involve retraining
models, updating algorithms, or expanding capabilities to meet evolving needs.
The structure of an AI agent
1. Architecture: This refers to the machinery that the agent executes on, such as the sensors and actuators found in robotic cars, cameras, or PCs.
2. Agent Program: This is the implementation of an agent function, which is a map from
the percept sequence to an action.
Key terms
The following terms are used when describing intelligent agents:
Performance Measure of Agent: This refers to the criteria that determine how
successful an agent is in achieving its objectives.
Behavior of Agent: It signifies the actions that an agent performs based on the sequence
of percepts it receives.
Percept: These are the inputs received by the agent from its environment at a given
instance.
Percept Sequence: It represents the history of all percepts that an agent has encountered
so far.
Agent Function: This is a mapping from percept sequences to actions.
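The agent function, a mapping from percept sequences to actions, can be realized most literally as a lookup table over the entire percept history. The percepts and actions below are invented for illustration; real agents replace the table with a program, since the table grows impossibly large.

```python
# A table-driven agent: the agent function is literally a lookup table
# keyed by the full percept sequence. Percepts/actions are invented.

def make_table_driven_agent(table):
    percept_sequence = []          # the agent's percept history so far

    def agent(percept):
        percept_sequence.append(percept)
        # Look up the action for the entire history seen so far;
        # fall back to a no-op for histories not in the table.
        return table.get(tuple(percept_sequence), "noop")

    return agent

table = {
    ("dirty",): "suck",
    ("dirty", "clean"): "move",
}
agent = make_table_driven_agent(table)
print(agent("dirty"))   # suck
print(agent("clean"))   # move
```

The point of the sketch is the signature: the action depends on the whole percept sequence, not just the latest percept.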
Rationality
Types of Agents
Simple Reflex Agents: Simple reflex agents operate based on the current percept without
considering the percept history. They follow condition-action rules and are suitable for
fully observable environments. However, they lack intelligence and struggle in partially
observable environments.
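Condition-action rules of the kind a simple reflex agent uses can be sketched as an ordered rule list. The toy thermostat here, including its thresholds, is an invented example; the essential property is that only the current percept is consulted.

```python
# Simple reflex agent: condition-action rules applied to the current
# percept only, with no memory. The thermostat rules are invented.

RULES = [
    (lambda t: t < 18, "heat_on"),   # too cold
    (lambda t: t > 24, "cool_on"),   # too hot
    (lambda t: True, "idle"),        # default rule
]

def reflex_agent(percept):
    # Fire the first rule whose condition matches the current percept.
    for condition, action in RULES:
        if condition(percept):
            return action

print(reflex_agent(15))  # heat_on
print(reflex_agent(21))  # idle
```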
Model-Based Reflex Agents: Model-based reflex agents maintain an internal state that is
updated with each percept received. By using a model of the world, they can handle
partially observable environments more effectively than simple reflex agents, tracking the
situation even when it is not fully visible. A model-based agent has two important parts:
the model, which is knowledge about "how things happen in the world," and the internal
state, which is a representation of the current state based on the percept history. Updating
the internal state requires knowing how the world evolves on its own and how the agent's
actions affect the world.
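The internal-state idea can be sketched as a small class: the agent folds each percept into its state and chooses actions from that state, not from the raw percept. The corridor world is an invented example.

```python
# Model-based reflex agent: internal state updated from the percept
# history plus a simple model of action effects. World is invented.

class ModelBasedAgent:
    def __init__(self):
        self.state = {"position": 0, "seen_wall": False}

    def update_state(self, percept):
        # Model: fold the new percept into the picture of the world.
        if percept == "wall":
            self.state["seen_wall"] = True

    def choose_action(self):
        # Even if the wall is not visible *now*, the internal state
        # remembers it -- this is what handles partial observability.
        return "turn" if self.state["seen_wall"] else "forward"

    def step(self, percept):
        self.update_state(percept)
        action = self.choose_action()
        if action == "forward":
            self.state["position"] += 1   # model of the action's effect
        return action

agent = ModelBasedAgent()
print(agent.step("clear"))  # forward
print(agent.step("wall"))   # turn
print(agent.step("clear"))  # still turn: the wall is remembered
```

A simple reflex agent given the third percept ("clear") would have moved forward; the remembered state is the difference.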
Goal-Based Agents: Knowledge of the current state of the environment is not always
enough to decide what to do, so goal-based agents expand the capabilities of model-based
agents with "goal" information describing desirable situations. They make decisions based
on how close they are to achieving their goals, choosing each action to reduce the distance
from the desired goal state. Because they may have to consider a long sequence of possible
actions before deciding whether the goal can be achieved, these agents require an explicit
representation of knowledge and typically involve searching and planning, which makes
them proactive.
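The searching-and-planning step can be sketched with a breadth-first search over a tiny state graph. The graph of rooms is an invented example; the point is that the agent reasons about a sequence of future states before acting.

```python
# Goal-based agent: search for an action sequence that reaches the
# goal state. The small graph of rooms is invented for illustration.

from collections import deque

GRAPH = {
    "hall": ["kitchen", "study"],
    "kitchen": ["pantry"],
    "study": [],
    "pantry": [],
}

def plan(start, goal):
    """Breadth-first search: returns a path of states to the goal,
    or None if the goal is unreachable."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in GRAPH[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

print(plan("hall", "pantry"))  # ['hall', 'kitchen', 'pantry']
```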
Utility-Based Agents: Utility-based agents select actions based on preferences, or utilities,
associated with different states. They are similar to goal-based agents but consider not only
reaching a goal but also the best way to reach it: the utility function maps each state to a
real number reflecting how desirable that state is (the agent's "happiness" or satisfaction
level), and the agent acts to maximize expected utility. This extra measure of success makes
utility-based agents useful when there are multiple possible alternatives and the agent has
to choose the best action among them.
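A utility function and the "pick the best alternative" step can be sketched as follows. The routes, travel times, and weightings are invented numbers; any function mapping states to real numbers would do.

```python
# Utility-based agent: when several actions lead to acceptable states,
# pick the one whose resulting state has the highest utility.
# The routes and the utility weights are invented assumptions.

def utility(state):
    """Map each state to a real number (higher is better)."""
    travel_time, toll = state
    return -travel_time - 2.0 * toll   # weights are assumptions

ACTIONS = {
    "highway": (30, 5.0),    # (minutes, toll in dollars)
    "backroad": (45, 0.0),
    "shortcut": (25, 8.0),
}

def choose(actions):
    # All three routes reach the goal; utility picks among them.
    return max(actions, key=lambda a: utility(actions[a]))

print(choose(ACTIONS))  # highway
```

A goal-based agent would treat all three routes as equally acceptable; the utility function is what breaks the tie.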
Learning Agents: Learning agents can learn from past experiences and improve their
performance over time. Such an agent starts with basic knowledge and then adapts
automatically through learning. It has four conceptual components: the learning element,
which is responsible for making improvements by learning from the environment; the
critic, which gives the learning element feedback on how well the agent is doing with
respect to a fixed performance standard; the performance element, which is responsible
for selecting external actions; and the problem generator, which suggests actions that will
lead to new and informative experiences.
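The four components can be laid out as a skeleton class. The two-action task, its rewards, and the running-average update rule are invented for illustration; the component names follow the text.

```python
# Learning agent skeleton with the four components named above.
# The bandit-style task and its reward numbers are invented.

import random

class LearningAgent:
    def __init__(self, actions):
        self.values = {a: 0.0 for a in actions}   # learned estimates
        self.counts = {a: 0 for a in actions}

    def performance_element(self):
        """Select the external action currently believed to be best."""
        return max(self.values, key=self.values.get)

    def critic(self, reward):
        """Feedback against a fixed performance standard."""
        return reward

    def learning_element(self, action, feedback):
        """Improve estimates using the critic's feedback
        (incremental running average)."""
        self.counts[action] += 1
        n = self.counts[action]
        self.values[action] += (feedback - self.values[action]) / n

    def problem_generator(self):
        """Suggest an exploratory action to gain new experience."""
        return random.choice(list(self.values))

random.seed(0)  # fixed seed so the run is reproducible
agent = LearningAgent(["a", "b"])
# Simulate: action "b" consistently pays more than "a".
for _ in range(50):
    action = agent.problem_generator()    # explore
    reward = 1.0 if action == "b" else 0.2
    agent.learning_element(action, agent.critic(reward))
print(agent.performance_element())  # b
```

After exploration, the performance element settles on the better action, which is exactly the "starts with basic knowledge, then adapts" behavior described above.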
Multi-Agent Systems (MAS): MAS involve multiple interacting agents working
together to achieve common goals. These systems can be homogeneous (agents with
similar capabilities) or heterogeneous (agents with diverse capabilities). MAS can exhibit
cooperative, competitive, or mixed behaviors depending on the context.
Components of an AI agent
1. Environment: The environment refers to the area or domain in which an AI agent operates. It
can be a physical space, like a factory floor, or a digital space, like a website.
2. Sensors: Sensors are the tools that an AI agent uses to perceive its environment. These can be
cameras, microphones, or any other sensory input that the AI agent can use to understand what is
happening around it.
3. Actuators: Actuators are the tools that an AI agent uses to interact with its environment. These
can be things like robotic arms, computer screens, or any other device the AI agent can use to
change the environment.
4. Decision-making mechanism: A decision-making mechanism is the brain of an AI agent. It
processes the information gathered by the sensors and decides what action to take using the
actuators. The decision-making mechanism is where the real magic happens. AI agents use
various decision-making mechanisms, such as rule-based systems, expert systems, and neural
networks, to make informed choices and perform tasks effectively.
5. Learning system: The learning system enables the AI agent to learn from its experiences and
interactions with the environment. It uses techniques like reinforcement learning, supervised
learning, and unsupervised learning to improve the performance of the AI agent over time.
Agent environments
An environment can be described in terms of three elements:
States: It represents the current configuration of the environment at a given time.
Actions: These are possible changes that the agent can make to the environment.
Rewards: These are the feedback signals that indicate how well the agent is performing in its environment.
Environments in AI can vary from entirely artificial settings like computer systems to rich,
complex domains with real-time decision-making requirements. Software agents may operate in
both real and artificial environments, adapting to different scenarios.
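The three elements above can be shown in a tiny artificial environment. The line-world task, its actions, and its reward numbers are invented for illustration.

```python
# A tiny environment exposing the three elements above: states,
# actions, and rewards. The line world is an invented example.

class LineWorld:
    """Agent walks a line from position 0 to 4; reaching 4 pays a reward."""
    def __init__(self):
        self.state = 0                     # current configuration

    def actions(self):
        return ["left", "right"]           # changes the agent can make

    def step(self, action):
        if action == "right":
            self.state = min(self.state + 1, 4)
        else:
            self.state = max(self.state - 1, 0)
        reward = 1.0 if self.state == 4 else 0.0   # feedback signal
        return self.state, reward

env = LineWorld()
for _ in range(4):
    state, reward = env.step("right")
print(state, reward)  # 4 1.0
```

The same state/action/reward interface is what reinforcement-learning agents are trained against.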
A fully observable environment is one in which the agent can access the complete state of the
environment at each point in time through its sensors. In contrast, a partially observable
environment is one in which the agent does not have access to the complete state of the
environment. Instead, the agent has access to only partial information about the state of the
environment.
For example, in a game of chess, the state of the environment is fully observable because the
agent (i.e., the player) can see the entire board and all of the pieces on the board. In contrast, in a
game of poker, the state of the environment is partially observable because the agent can only
see its own cards and not the cards held by the other players.
A deterministic environment is one in which the next state of the environment is
completely determined by the current state and the action taken by the agent. In contrast,
a stochastic environment is one in which the next state is not completely determined by
the current state and the agent's action; instead, the next state is determined
probabilistically.
For example, in a game of chess, the environment is deterministic because the next state of the
environment (i.e., the next board position) is completely determined by the current state (i.e., the
current board position) and the action taken by the agent (i.e., the move made by the player). In
contrast, in a game of craps, the environment is stochastic because the next state of the
environment (i.e., the outcome of the roll of the dice) is determined probabilistically.
A competitive environment is one in which the agent competes against another agent to optimize
its output. In contrast, a collaborative environment is one in which multiple agents cooperate to
produce the desired output.
For example, in a game of chess, the environment is competitive because the two players are
competing against each other to win the game. In contrast, in a team of self-driving cars, the
environment is collaborative because the cars are working together to reach their destinations
efficiently and safely.
A single-agent environment is one in which there is only one agent interacting with the
environment. In contrast, a multi-agent environment is one in which there are multiple agents
interacting with the environment.
For example, a person navigating a maze is in a single-agent environment because there is only
one agent (i.e., the person) interacting with the environment. In contrast, a team of robots
working together to assemble a car is in a multi-agent environment because there are multiple
agents (i.e., the robots) interacting with the environment.
A static environment is one in which the environment does not change while the agent is
deliberating. In contrast, a dynamic environment is one in which the environment can
change over time, independently of the agent's actions.
For example, a room with fixed objects and no external influences is a static environment
because the state of the environment does not change over time. In contrast, a city street with
moving cars and pedestrians is a dynamic environment because the state of the environment
changes over time.
A discrete environment is one in which the state of the environment can be described using a
finite set of values. In contrast, a continuous environment is one in which the state of the
environment can be described using an infinite set of values.
For example, a game of chess is a discrete environment because the state of the environment can
be described using a finite set of values (i.e., the positions of the pieces on the board). In
contrast, the state of a self-driving car is a continuous environment because the state of the
environment can be described using an infinite set of values (i.e., the positions and velocities of
all of the objects in the environment).
An episodic environment is one in which the agent’s actions are divided into atomic incidents or
episodes, and there is no dependency between current and previous incidents. In contrast, a
sequential environment is one in which the agent’s previous decisions can affect all future
decisions.
For example, a pick-and-place robot is in an episodic environment because the robot’s decisions
about each part are independent of its decisions about previous parts. In contrast, a game of
checkers is in a sequential environment because the player’s previous moves can affect all future
moves.
A known environment is one in which the output for all probable actions is given. In contrast, an
unknown environment is one in which the agent must gain knowledge about how the
environment works in order to make a decision. For example, a simulation of a car driving on a
track is a known environment because the output of all actions is known in advance. In contrast,
a real-world self-driving car is an unknown environment because the car must learn about the
environment through experience in order to make decisions.
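The dimensions discussed above can be gathered into a small lookup that classifies each example task. The labels follow the examples in the text; the dictionary layout itself is just an illustrative convenience.

```python
# Classifying the text's example tasks along the environment
# dimensions discussed above. The table layout is illustrative.

PROPERTIES = ["observability", "determinism", "episodicity",
              "dynamics", "state space", "agents"]

ENVIRONMENTS = {
    "chess":
        ["fully", "deterministic", "sequential", "static", "discrete", "multi"],
    "poker":
        ["partially", "stochastic", "sequential", "static", "discrete", "multi"],
    "self-driving car":
        ["partially", "stochastic", "sequential", "dynamic", "continuous", "multi"],
    "pick-and-place robot":
        ["partially", "stochastic", "episodic", "dynamic", "continuous", "single"],
}

def describe(task):
    """Return a property-name -> value dict for one task."""
    return dict(zip(PROPERTIES, ENVIRONMENTS[task]))

print(describe("chess")["determinism"])   # deterministic
print(describe("poker")["observability"]) # partially
```

Classifying a task this way is usually the first step in choosing an agent design: a simple reflex agent may suffice for a fully observable episodic task, while a partially observable sequential one calls for internal state and planning.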
Applications of AI Agents
1. Healthcare Industry: AI agents are revolutionizing healthcare by enhancing patient
care, optimizing workflows, diagnosing medical conditions through advanced image
recognition, providing personalized treatment plans, and streamlining administrative
tasks.
2. Financial Industry: In finance, AI agents excel at tasks like fraud detection and
customer service enhancement. They analyze financial data to identify patterns and
anomalies for secure transactions and provide personalized interactions with customers
through natural language processing.
3. Data Analysis: AI agents excel at real-time data analysis across industries such as finance
and healthcare. They sift through vast datasets to identify trends, market fluctuations, and
potential risks, empowering organizations to make informed decisions swiftly.
4. Multilingual Chatbots: Businesses can expand their global reach using multilingual
chatbots powered by AI agents. These chatbots facilitate communication with customers
in multiple languages, enhancing user experience and engagement on a global scale.