
Liu, J. & Wu, J., "Toward Cooperative Control," in Multi-Agent Robotic Systems, Boca Raton: CRC Press LLC, 2001.
2 Toward Cooperative Control

Partnership is an essential characteristic of sustainable communities. The cyclical exchanges of energy and resources in an ecosystem are sustained by pervasive cooperation. Indeed, we have seen that since the creation of the first nucleated cells over two billion years ago, life on Earth has proceeded through ever more intricate arrangements of cooperation and coevolution. Partnership – the tendency to associate, establish links, live inside one another, and cooperate – is one of the hallmarks of life.1
Fritjof Capra

The cooperation of robots in unknown settings poses a complex control problem. Solutions are required to guarantee an appropriate trade-off in task objectives within and among the robots. Centralized approaches to this problem are neither efficient nor broadly applicable because of their inherent limitations, for instance, the requirement of global knowledge about an environment and of a design precise enough to consider all possible states [PM00]. Distributed approaches, on the other hand, are more appealing because they scale better and are more reliable.

1 The Web of Life, Harper Collins Publishers, Great Britain, 1996, p. 293.

2.1 Cooperation-Related Research


An overview of approaches and issues in cooperative robotics can be found in [Ark98, CFK97, Mat95b]. Parker [Par99] has demonstrated multi-robot target observation using the ALLIANCE architecture [Par94], where action selection consists of inhibition (through motivational behaviors). In contrast to ALLIANCE, Pirjanian and Mataric [PM00] have developed an approach to multi-robot coordination in the context of cooperative target acquisition. Their approach is based on multiple-objective behavior coordination extended to multiple cooperative robots. They have provided a mechanism for distributed command fusion across a group of robots to pursue multiple goals in parallel. The mechanism enables each robot to select actions that benefit not only itself but also the group as a whole. Hirata et al. [HKA+99] have proposed a decentralized control algorithm for multiple robots handling a single object in coordination. The motion command is given to one of the robots, referred to as the leader; the other robots, referred to as followers, estimate the motion of the leader from the motion of the object and handle the object based on the estimated reference.
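As an illustration of the leader-follower idea, the sketch below is a minimal one-dimensional kinematic toy, not Hirata et al.'s actual algorithm: the leader's velocity command is hidden from the followers, each of which estimates the object's motion from successive observations (a simple finite difference here) and servos its hand to a fixed grasp offset. All names, gains, and dimensions are assumptions made for the example.

DT = 0.1  # control period in seconds

class Follower:
    def __init__(self, grasp_offset):
        self.offset = grasp_offset   # where this robot holds the object
        self.prev_obj_pos = None
        self.hand_pos = grasp_offset

    def step(self, obj_pos):
        # Estimate the leader-induced object velocity from observation only.
        if self.prev_obj_pos is None:
            est_vel = 0.0
        else:
            est_vel = (obj_pos - self.prev_obj_pos) / DT
        self.prev_obj_pos = obj_pos
        # Feed forward the estimated reference plus a proportional
        # correction toward the desired grasp point.
        target = obj_pos + self.offset
        self.hand_pos += est_vel * DT + 0.5 * (target - self.hand_pos)
        return self.hand_pos

obj_pos = 0.0
leader_cmd = 0.2                       # m/s, known only to the leader
followers = [Follower(-0.5), Follower(0.5)]
for _ in range(50):
    obj_pos += leader_cmd * DT         # the leader moves the object
    hands = [f.step(obj_pos) for f in followers]
print("object at %.2f, hands at %s"
      % (obj_pos, ["%.2f" % h for h in hands]))

In this toy, the only shared signal is the object itself, which is what makes the scheme decentralized.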
Studies on cooperation in multi-agent robotic systems have benefited from a
number of distinct fields such as social sciences, life sciences, and engineering.
According to Cao et al. [CFK97], the disciplines that are most critical to the
development of cooperative robotics include distributed artificial intelligence,
distributed systems, and biology.

2.1.1 Distributed Artificial Intelligence


Grounded in traditional symbolic AI and the social sciences, DAI comprises two major areas of study: Distributed Problem Solving (DPS) and Multi-Agent Systems (MAS) [Ros93]. DPS considers how the task of solving a particular problem can be divided among agents that cooperate in dividing and sharing knowledge about the problem and its evolving solutions. One important assumption in DPS is that the agents are predisposed to cooperate. Cao et al. [CFK97] advocate DPS research on "developing frameworks for cooperative behavior between willing agents" rather than "developing frameworks to enforce cooperation between potentially incompatible agents." MAS research studies the collective behavior of a group of possibly heterogeneous agents with potentially conflicting goals [CFK97]. Durfee et al. [DLC89] define a MAS as "a loosely coupled network of agents that work together to solve problems that are beyond their individual capabilities."



2.1.2 Distributed Systems
The field of distributed systems is a natural source of ideas and solutions for studying multi-robot systems. Some researchers have noted that distributed computing can contribute to the theoretical foundations of cooperative robotics [CFK97, FMSA99]. In [CFK97], distributed control is considered a promising framework for the cooperation of multiple robots, i.e., distributed control methods realize many advantages (flexibility, adaptability, robustness, etc.) as the population of robots increases. Given these similarities with distributed computing, theories pertaining to deadlock, message passing, and resource allocation, as well as combinations of these as primitives, can be applied to cooperative robotics.
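As a small illustration of such a primitive in a robotic setting, the following sketch applies token-ring mutual exclusion, a textbook distributed-systems technique, to robots sharing a narrow passage; holding the single circulating token grants access, which provides mutual exclusion without deadlock. The scenario and all names are invented for the example.

from collections import deque

class Robot:
    def __init__(self, name, crossings_needed):
        self.name = name
        self.remaining = crossings_needed

    def use_passage_if_needed(self):
        # Only the token holder may enter the shared passage.
        if self.remaining > 0:
            self.remaining -= 1
            print("%s crosses the passage (%d crossings left)"
                  % (self.name, self.remaining))

# One token circulates around a logical ring of robots.
ring = deque([Robot("r1", 2), Robot("r2", 1), Robot("r3", 3)])
while any(r.remaining for r in ring):
    ring[0].use_passage_if_needed()  # front of the deque holds the token
    ring.rotate(-1)                  # pass the token to the next robot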

2.1.3 Biology
The majority of existing work in cooperative robotics has cited biological systems as inspiration or justification [Bal94]. The well-known collective behaviors of ants, bees, and other eusocial insects provide striking proof that systems composed of simple agents can accomplish sophisticated tasks in the real world [CFK97]. Although the cognitive capabilities of these insects are very limited, the interactions between the agents, in which each individual obeys a few simple rules, can result in the emergence of complex behaviors. Thus, rather than following traditional AI in modeling robots as deliberative agents, some researchers in cooperative robotics have taken a bottom-up approach in which individual agents are more like ants: they follow simple reactive rules [Mat94a, BHD94, SB93, DGF+91, BB99, DMC96]. The behavior of insect colonies can generally be characterized as self-organizing.
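The following toy simulation, written in the spirit of ant-inspired clustering models (cf. [DGF+91]) rather than reproducing any cited system, shows how two reactive rules and no global knowledge can produce global order: an agent picks up isolated items and drops them where other items are dense, so items end up gathered into clusters. All probabilities and sizes are arbitrary choices for the sketch.

import random

random.seed(1)
N = 60
grid = [random.random() < 0.3 for _ in range(N)]  # True = item present

def local_density(i):
    # Fraction of occupied cells in a small neighborhood around cell i.
    nbrs = [grid[(i + d) % N] for d in (-2, -1, 1, 2)]
    return sum(nbrs) / len(nbrs)

pos, carrying = 0, False
for _ in range(20000):
    pos = (pos + random.choice((-1, 1))) % N       # random walk
    d = local_density(pos)
    if not carrying and grid[pos] and random.random() < (1 - d):
        grid[pos], carrying = False, True          # pick up lone items
    elif carrying and not grid[pos] and random.random() < d:
        grid[pos], carrying = True, False          # drop near other items

# The agent may still be carrying one item when the loop ends; the
# printout nonetheless shows items gathered into a few clusters.
print("".join("#" if c else "." for c in grid))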

2.2 Learning, Evolution, and Adaptation


An important goal in the development of multi-agent robotic systems is to design a distributed control infrastructure to enable robots to perform their tasks over a problem-solving period without human supervision. These lifelong robotic systems must be capable of dealing with dynamic changes occurring over time, such as unpredictable changes in an environment or incremental variations in their own performance capabilities.
Learning, evolution, and adaptation endow an agent in a multi-agent system with the ability to improve its likelihood of survival within an environment through appropriate competition or cooperation with other agents. Learning is a strategy for an individual agent to adapt to its environment: through its experience of interacting with the environment, the agent forms the cognition needed to apply a specific behavior, incorporating certain aspects of the environment into its internal structure. Evolution, on the other hand, is a strategy for a population of agents to adapt to the environment. Adaptation refers to an agent's learning by making adjustments with respect to its environment. As identified by Colombetti and Dorigo [CD98], two kinds of adaptation are relevant to multi-agent robotics: evolutionary adaptation and ontogenetic adaptation. The former concerns the way in which species adapt to environmental conditions through evolution; the latter is the process by which an individual adapts to its environment during its lifetime. As far as behavior is concerned, ontogenetic adaptation is a result of learning. Adaptability allows agents to deal with noise in their internal and external sensors as well as with inconsistencies in the behaviors of their environment and other agents.
In the opinion of Nolfi and Floreano [NF99], evolution and learning are two forms of biological adaptation that differ in space and time. Evolution is "a process of selective reproduction and substitution" based on the existence of a distributed population of individuals. Learning, on the other hand, is "a set of modifications taking place within each individual during its own lifetime." Evolution and learning operate on different time scales. Evolution is "a form of adaptation capable of capturing relatively slow environmental changes that might encompass several generations," whereas learning "allows an individual to adapt to environmental changes that are unpredictable at the generational level." Learning may include a variety of mechanisms that produce adaptive changes in an individual during its lifetime, such as physical development, neural maturation, and synaptic plasticity.
Although evolution and learning are two distinct kinds of change that occur in two distinct types of entities, Parisi and Nolfi [PN96] argue that the two strategies may influence each other. The influence of evolution on learning is not surprising. Evolution causes changes in the genotype:

Each individual inherits a genome that is a cumulative result at the level of the individual of the past evolutionary changes that occur at the level of a population.

The individual's genome partially specifies the resulting phenotypic individual; it constrains how the individual will behave and what it will learn. The way is thus open for an influence of evolution on learning. On the other hand, learning can also influence evolution:

Evolution can converge to a desired genome more quickly than if learning is absent, although it remains true that learned changes are not inherited. If evolution is unaided by learning, its chances of success are restricted to the case that the single desired genome suddenly emerges because of the chance factors operating at reproduction. Learning can accelerate the evolutionary process both when learning tasks are correlated with the fitness criterion and when random learning tasks are used.

From an evolutionary perspective, learning has several adaptive functions. It allows individuals to adapt to changes in the environment that occur within the lifespan of an individual or across a few generations. Learning thus supplements evolution, since it enables an individual to adapt to changes in the environment that happen too quickly to be tracked by evolution. In summary, learning can help and guide evolution [FU98, NF99].
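A compact way to see this guidance effect is a toy rendition of Hinton and Nowlan's classic needle-in-a-haystack experiment, sketched below under assumed sizes and rates rather than any parameters from the text: without learning, fitness is zero unless the exact target genome appears by chance, whereas lifetime learning gives near-miss genomes partial credit and thereby creates a gradient that evolution can climb.

import random

random.seed(0)
L, POP, TRIALS = 16, 50, 200
TARGET = [1] * L

def fitness(genome, learn):
    if genome == TARGET:
        return 1.0
    if not learn:
        return 0.0  # no partial credit without learning
    wrong = [i for i in range(L) if genome[i] != TARGET[i]]
    for t in range(TRIALS):  # lifetime learning: guess the wrong bits
        if all(random.random() < 0.5 for _ in wrong):
            return 1.0 - t / TRIALS  # early success -> higher fitness
    return 0.0

def evolve(learn, max_gens=300):
    pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]
    for gen in range(max_gens):
        if TARGET in pop:
            return gen
        pop.sort(key=lambda g: fitness(g, learn), reverse=True)
        parents = pop[: POP // 2]  # truncation selection plus mutation
        pop = [[b ^ (random.random() < 0.02) for b in random.choice(parents)]
               for _ in range(POP)]
    return max_gens  # not found within the budget

print("generations needed with learning:   ", evolve(learn=True))
print("generations needed without learning:", evolve(learn=False))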
Although the distinction between learning and adaptation is not always clear, Weiss [Wei96] has shown that multi-robot learning can usually be distinguished from multi-robot adaptation by the extent to which new behaviors and processes are generated. Typically, in multi-robot learning, new behaviors or behavior sequences are generated, or functions are learned, giving a robot team radically new capabilities. Frequently, the learning takes place in an initial phase, during which performance is not of importance. In multi-robot adaptation, the robot team starts with a control policy that gives reasonable results for the initial situation and gradually improves its performance over time. The emphasis in multi-robot adaptation is the ability to change the control policy online, while the team is performing its mission, in response to changes in the environment or in the robot team.
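A minimal sketch of this online flavor of adaptation follows; the toy error model, the mid-mission disturbance change, and the perturb-and-keep update rule are all invented for illustration. The point is only that the policy parameter is adjusted while the system keeps running, rather than in a separate training phase.

import random

random.seed(3)
gain = 1.0                  # initial policy: reasonable but not tuned
disturbance = 0.4           # unknown to the robot, and it will change

def step_error(gain, disturbance):
    # Toy closed-loop error: minimized when gain matches the disturbance.
    return abs(gain - disturbance) + random.gauss(0, 0.02)

for t in range(400):
    if t == 200:
        disturbance = 1.3   # the environment changes mid-mission
    # Online adaptation: probe a small perturbation, keep it if better.
    candidate = gain + random.gauss(0, 0.05)
    if step_error(candidate, disturbance) < step_error(gain, disturbance):
        gain = candidate

print(round(gain, 2))       # has tracked the new disturbance (~1.3)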
In multi-agent robotics, evolutionary algorithms have been widely used to evolve controllers [HHC92, HHC+96, Har96, Har97, HHCM97]. Generally speaking, controllers that become well adapted to environmental conditions during evolution may not perform well when the conditions change. Under these circumstances, it is necessary to carry out an additional evolutionary process, which, as Urzelai and Floreano [UF00] have noted, can take a long time. The integration of evolution and learning, on the other hand, may offer a viable solution to this problem by providing richer adaptive dynamics than when parameters are entirely genetically determined.
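To ground the discussion, here is a deliberately small sketch of evolving controller parameters, in this case just two gains of a noisy phototaxis controller in a toy 2-D world, rather than the neural controllers used in the cited works. The final lines illustrate the re-adaptation problem raised above: the evolved controller performs well under training conditions but fails when the conditions change (here, a sensor miscalibration invented for the example).

import math
import random

random.seed(2)

def run_trial(gains, sensor_sign=1.0, steps=60):
    # One episode: the robot starts at the origin; its controller is just
    # two gains applied to a noisy light-direction sensor.
    kx, ky = gains
    light = (5.0, 3.0)
    x = y = 0.0
    for _ in range(steps):
        sx = sensor_sign * (light[0] - x) + random.gauss(0, 0.5)
        sy = sensor_sign * (light[1] - y) + random.gauss(0, 0.5)
        x += 0.1 * kx * sx          # controller: velocity = gain * sensing
        y += 0.1 * ky * sy
    return -math.hypot(light[0] - x, light[1] - y)  # fitness: closeness

def evolve(generations=30, pop_size=20):
    pop = [(random.uniform(-2, 2), random.uniform(-2, 2))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=run_trial, reverse=True)   # noisy but adequate here
        elite = pop[: pop_size // 4]
        pop = elite + [(p[0] + random.gauss(0, 0.2),
                        p[1] + random.gauss(0, 0.2))
                       for p in random.choices(elite, k=pop_size - len(elite))]
    return pop[0]

best = evolve()
print("fitness under training conditions:", round(run_trial(best), 2))
# A controller well adapted during evolution can fail once conditions
# change; here the sensor polarity is miscalibrated after "deployment":
print("fitness after the conditions change:",
      round(run_trial(best, sensor_sign=-1.0), 2))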

2.3 Design of Multi-Robot Control


Finding the precise values for control parameters that lead to a desired cooperative behavior in multi-robot systems can be a difficult, time-consuming task for a human designer. Harvey et al. [HHC+96] point out at least three major problems that a designer may encounter:

1. It is not clear how a robot control system should be decomposed.

2. The interactions between separate subsystems are not limited to directly visible connecting links; interactions are also mediated via the environment.

3. As system complexity grows, the number of potential interactions between the components of the system grows exponentially.

As Harvey [Har96] indicates, classical approaches to robotics have often assumed a primary decomposition into perception, planning, and action modules. Brooks [Bro86], on the other hand, acknowledges problems 2 and 3 in his subsumption architecture, and he advocates the careful design of a robot control system layer by layer by hand. An obvious alternative approach is to abandon hand design and explicitly use evolutionary techniques to incrementally evolve complex robot control systems.

FIGURE 2.1. A general model of robotic control: a sensorimotor controller maps the sensed state and the current goal to actions.
The control in a multi-agent robotic system determines the system's capacity to achieve tasks and to react to various events. The controllers of autonomous robots must possess both decision-making and reactive capabilities, and the robots must react to events in a timely fashion. Figure 2.1 presents a general model of robot controllers; a minimal interface sketch is given after the list below. Generally speaking, a robot controller should demonstrate the following characteristics [Ark98, ACF+98, BH00]:

1. Situatedness: The robots are entities situated in and surrounded by the real world. They do not operate upon abstract representations of reality, but rather upon reality itself.

2. Embodiment: Each robot has a physical presence (a body). This spatial reality has consequences in its dynamic interactions with the world (including other robots).

3. Programmability: A useful robotic system cannot be designed only for a single environment or task. It should be able to achieve multiple tasks described at an abstract level, and its functions should be easily combined according to the task to be executed.

4. Autonomy and adaptability: The robots should be able to carry out their actions and to refine or modify the task and their own behavior according to the current goal and execution context as perceived.

5. Reactivity: The robots have to take into account events with time bounds compatible with the correct and efficient achievement of their goals (including their own safety).

6. Consistent behavior: The reactions of the robots to events must be guided by the objectives of their tasks.

7. Robustness: The control architecture should be able to exploit the redundancy of the processing functions. Robustness requires the control to be decentralized to some extent.

8. Extensibility: Integration of new functions and definition of new tasks should be easy. Learning capabilities are important to consider here; the architecture should make learning possible.

9. Scalability: The approach should easily scale to any number of robots.

10. Locality: The behaviors should depend only on the local sensors of each robot.

11. Flexibility: The behaviors should be flexible enough to support many social patterns.

12. Reliability: Each robot should act correctly in any given situation over time.
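To make the general model of Figure 2.1 concrete, the minimal interface sketch below maps a sensed state and a current goal to an action. The types and the toy navigation-with-obstacle rule are invented for illustration; they also hint at two of the requirements above (reactivity and consistent behavior).

from dataclasses import dataclass

@dataclass
class State:
    position: float
    obstacle_ahead: bool

@dataclass
class Goal:
    target_position: float

def controller(state: State, goal: Goal) -> str:
    # Reactivity: safety-critical events preempt goal pursuit.
    if state.obstacle_ahead:
        return "stop"
    # Consistent behavior: otherwise, actions serve the task objective.
    if abs(goal.target_position - state.position) < 0.1:
        return "idle"
    return "forward" if goal.target_position > state.position else "backward"

print(controller(State(0.0, False), Goal(5.0)))  # -> forward
print(controller(State(0.0, True), Goal(5.0)))   # -> stop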
