Autonomous Weapon Systems Overview

The document outlines the agenda for the Disarmament and International Security Committee (DISEC) at SFSMUN, focusing on the development and implications of Lethal Autonomous Weapons Systems (LAWS). It discusses the current lack of binding international agreements on LAWS, the role of existing frameworks, and the ethical considerations surrounding their use. The document also highlights the importance of international cooperation and the need for regulation to ensure global peace and security in the context of autonomous weapon systems.


Table Of Contents

Letter from the Executive Board
Committee Overview
Introduction to the Agenda
International and Regional Framework
Role of International System
Existing Autonomous and Semi-Autonomous Weapon Systems
Limitations of Autonomous Weapon Systems
Accountability of Autonomous Weapon Systems
Ethical and Humanitarian Aspects of Autonomous Weapon Systems
Bloc Positions
QARMA
Bibliography

Letter from the Executive Board

Dear delegates,

Congratulations to all delegates on getting the splendid opportunity to participate in
this edition of SFSMUN. It is an honour to be appointed as the Executive Board of the
Disarmament & International Security Committee.

MUN is a unique platform which enables us to delve into the depths of diplomacy and
foreign policy. The Disarmament and International Security Committee, the first
committee of the General Assembly, discusses various matters regarding disarmament
and international security. This committee will help in developing a keen interest in
various spheres of human development, ranging from autonomous weapon systems
and modern technology to foreign policy.

The development of Artificial Intelligence has indeed led to remarkable progress for
mankind in the field of technology. The agenda ‘Development of Autonomous
Weapon System’ gives a unique view of the recent developments in autonomy. It will
help delegates understand the political views of a country on the agenda.

The Executive Board hopes to keep you on your toes, teach you to react to crisis
developments, face international challenges, and mark your progress in addressing
these challenges, all whilst having fun. Co-operation, a good understanding of
international law and active participation in the committee will certainly be well
rewarded. The following pages will give you handy information about the agenda, as
well as useful links to help you in your research. The guide will provide a bird’s eye
view of the agenda and the important sub-topics discussed. It will also provide a
chronological observation of the agenda.

Looking forward to an enriching and promising debate!

Regards,
Executive Board,
[email protected]

Committee Overview
DISEC stands for Disarmament and International Security Committee. It
is the first committee of the General Assembly and deals with issues
relating to disarmament and threats to peace that affect the international
community and recommends solutions to the challenges in the
international security system.

It considers all disarmament and international security matters within the
scope of the Charter or relating to the powers and functions of any other
organ of the United Nations; the general principles of cooperation in the
maintenance of international peace and security, as well as principles
governing disarmament and the regulation of armaments; promotion of
cooperative arrangements and measures aimed at strengthening stability
through lower levels of armaments.

The committee works closely with the United Nations Security Council to
take steps in promoting the fundamental goal of the UN, the promotion of
international peace and security. All nations that are part of the
United Nations and have agreed to the United Nations Charter are
entitled to membership of DISEC.

The final resolutions that are agreed upon in the DISEC are
communicated to the General Assembly (GA) and the Security Council
(SC). DISEC’s decisions are highly regarded because it is the part of the
General Assembly where every nation gets equal representation in voting
procedures, while in the Security Council, the voting procedure mainly
depends upon the votes of the permanent five members.

DISEC does not have the power to impose sanctions on other nations.
Delegates, please remember that DISEC is recommendatory in nature and
does not have the power to authorize anything without the approval of the
United Nations Security Council. The chief aim of DISEC is to address
the problems that threaten international peace and security with great
care and urgency.

Introduction to the Agenda
The topic of Lethal Autonomous Weapons Systems (LAWS) does not have a long
history within the international arena because LAWS have only become a policy issue
over the last few years as technology has evolved. Thus, because of the burgeoning
nature of the topic, there are no binding agreements specifically targeting LAWS.
Current international and regional frameworks relevant to the use of LAWS, such as
the Geneva Conventions and their Additional Protocols, focus on international
humanitarian law (IHL) and international human rights law. However, in the past
three years, treaty bodies such as the one which oversees the Convention on Certain
Conventional Weapons (CCW), have held meetings of experts on LAWS to begin
discussions on pre-emptive moves to address LAWS. Moreover, numerous civil
society organizations (CSOs) have been working together and in conjunction with the
United Nations (UN) to promote awareness of the potential impact of LAWS and to
take definitive action in prohibiting their manufacture and implementation. LAWS go
by many names, such as Lethal Autonomous Robotics (LARs), Fully Autonomous
Weapon Systems (FAWS), remotely piloted aerial systems, or even “Killer Robots,”
and their definition is just as ambiguous.

The UN Special Rapporteur’s report to the Human Rights Council on autonomous
weapon systems – or ‘Lethal Autonomous Robots’ – provides the definition:

Autonomous weapon system: “Lethal Autonomous Robotics (LARs) refers to
robotic weapon systems that, once activated, can select and engage targets without
further intervention by a human operator. The important element is that the robot has
an autonomous ‘choice’ regarding selection of a target and the use of lethal force.”

Human Rights Watch has also provided some definitions according to the level of
human input and supervision in selecting and attacking targets:

Human-in-the-Loop Weapons: “Robots that can select targets and deliver force only
with a human command”;
Human-on-the-Loop Weapons: “Robots that can select targets and deliver force
under the oversight of a human operator who can override the robots’ actions”;
Human-out-of-the-Loop Weapons: “Robots that are capable of selecting targets
and delivering force without any human input or interaction.”

Autonomous weapon system: “The term ‘fully autonomous weapon’ refers to both
out-of-the-loop weapons and those that allow a human on the loop, but that are
effectively out-of-the-loop weapons because the supervision is so limited.” The ICRC
has also raised some general definitions in its 2011 report on IHL and challenges in
contemporary armed conflicts:

Automated weapon system: “An automated weapon or weapons system is one that
is able to function in a self-contained and independent manner although its
employment may initially be deployed or directed by a human operator. Although
deployed by humans, such systems will independently verify or detect a particular
type of target object and then fire or detonate.”

Autonomous weapon system: “An autonomous weapon system is one that can
learn or adapt its functioning in response to changing circumstances in the
environment in which it is deployed. A truly autonomous system would have artificial
intelligence that would have to be capable of implementing IHL.” Common to all the
above definitions is the inclusion of weapon systems that can independently select and
attack targets, with or without human oversight. This includes both weapon systems
that can adapt to changing circumstances and ‘choose’ their targets and
weapon systems that have pre-defined constraints on their operation and potential
targets or target groups. However, the distinction between autonomous and automated
weapon systems is not always clear since both have the capacity to independently
select and attack targets within the bounds of their human-determined programming.
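
These levels of human control can be pictured as a simple gate on the firing decision.
The following Python sketch is purely illustrative; the mode names and gating logic
are assumptions for exposition, not drawn from any real system:

    from enum import Enum

    class HumanControl(Enum):
        IN_THE_LOOP = 1       # a human command is required for every engagement
        ON_THE_LOOP = 2       # the system may fire; a human supervises and can override
        OUT_OF_THE_LOOP = 3   # the system fires without any human input or interaction

    def may_engage(mode, human_approved=False, human_vetoed=False):
        """Return True if the system is permitted to deliver force in the given mode."""
        if mode is HumanControl.IN_THE_LOOP:
            return human_approved        # force only with a human command
        if mode is HumanControl.ON_THE_LOOP:
            return not human_vetoed      # autonomous unless a human overrides
        return True                      # fully autonomous engagement

As the definitions above suggest, the practical boundary between the last two branches
blurs whenever supervision is too limited to amount to a real veto.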

Though some level of automation in weapon systems is distinguishable in current
military and law enforcement applications, no state actors have implemented lethal
fully autonomous systems yet. In fact, some governments are already concerned with
the use of existing technology and have tried to curb its use. The debate on LAWS
currently centers on three main aspects: technical specifications and functions, legal
implications, and moral, ethical, and humanitarian concerns.

International and Regional Framework


Currently, no treaties or resolutions specifically target the development and use of
LAWS. International and regional agreements that would concern LAWS, such as
the Fourth Geneva Convention and Additional Protocols I and II, deal with the
issue only indirectly, focusing on the protections afforded to combatants and
civilians that LAWS may violate.

In addition to these historical documents, the United Nations (UN) Convention on
Certain Conventional Weapons (CCW) is the epicentre of the global debate on
autonomy in weapons systems. The CCW's purpose “is to ban or restrict the use of
specific types of weapons that are considered to cause unnecessary or unjustifiable
suffering to combatants or to affect civilians indiscriminately”. In CCW parlance, the
weapon autonomy issue is called “emerging technologies in the area of lethal
autonomous weapons systems” (LAWS).

In November 2019, CCW States Parties decided to, once again, continue their
deliberations on LAWS. For the first time, however, these talks, which had previously
been conducted between 2014 and 2016 in informal meetings and since 2017 within
the framework of an expert subsidiary body called a Group of Governmental Experts
(GGE), were mandated to produce a specific outcome. For ten days in 2020 and for an
as-yet-unknown number of days in 2021 (when the CCW's next Review Conference is
due), the GGE was and is tasked with debating and fleshing out “aspects of the
normative and operational framework” on LAWS. In addition, in Annex III of their
2019 report, States Parties adopted eleven guiding principles to take into account
going forward.

After the first five-day meeting of 2020 was postponed and then conducted in a hybrid
format due to the current global COVID-19 pandemic, the second meeting had to be
shelved, and it is currently unclear when and how the talks can resume. While some
States – most prominently Russia – have displayed no interest in producing new
international law in the CCW, arguing that “concerns regarding LAWS can be
addressed through faithful implementation of the existing international legal
norms”,4 others – such as Germany – claim that nothing short of “an important
milestone” has already been reached with the 2019 report cited above, even
describing the adopted eleven guiding principles as a “politically binding regulation”.

Role of International System


LAWS can be understood by breaking down the name. ‘Lethal’ means ‘capable of
causing death.’ ‘Weapons system’ refers not only to the actual weapon but also to its
storage, transportation, and delivery mechanisms. ‘Autonomous’ refers to
technological autonomy, i.e. the ability of software to operate without human
involvement.

The United Nations has initiated discussions on the topic ‘Lethal Autonomous
Weapon Systems’ or ‘LAWS’ in the past few years. Owing to the powers given to the
First Committee of the General Assembly by the United Nations Charter, LAWS fall
under its jurisdiction, not only because they are weapons but also because they pose a
threat to international peace and security. The First Committee works with the UN
Disarmament Commission (UNDC) and the Conference on Disarmament (CD) in
discussing how international disarmament issues relate to LAWS.

The United Nations Secretary-General has stated that “machines with the power and
discretion to take lives without human involvement are politically unacceptable,
morally repugnant and should be prohibited by international law.” Additionally, the
first clause of General Assembly Resolution 61/55, adopted on 6 December 2006,
‘Affirms that scientific and technological progress should be used for the benefit of
all mankind to promote the sustainable economic and social development of all States
and to safeguard international security, and that international cooperation in the use of
science and technology through the transfer and exchange of technological know-how
for peaceful purposes should be promoted’ – a principle that stands in direct
opposition to the development of LAWS (Lethal Autonomous Weapon Systems).

The European Parliament’s resolution 2018/2752, clause 4, states: ‘Stresses, in this
light, the fundamental importance of preventing the development and production of
any lethal autonomous weapon system lacking human control in critical functions
such as target selection and engagement;’

The first clause of the General Assembly Resolution 63/36 states, ‘Reaffirms that
effective measures should be taken to prevent the emergence of new types of weapons
of mass destruction;’ which is violated by the development of Lethal Autonomous
Weapon Systems.

The UN Institute for Disarmament Research (UNIDIR) has published a series of
documents considering legal and ethical issues of the development and use of LAWS,
as well as the application of international human rights, humanitarian, and criminal
law on LAWS.

Although UN bodies have acknowledged the need for discussion and have begun to
address LAWS in international and regional forums, civil society organizations
(CSOs) have been far more active in promoting the topic.

Existing Autonomous and Semi-Autonomous Weapon Systems
Weapon systems that, once deployed, can independently detect, identify, track, select
and potentially attack targets without human involvement do not belong to a distant
future; in fact, some have been used for decades. These include some types of:
(a) air defence systems; (b) active protection systems; (c) robotic sentry weapons; (d)
guided munitions; and (e) loitering weapons.

Air defence systems

Definition and characteristics

Air defence systems are weapon systems that are specifically designed to nullify or
reduce the effectiveness of hostile air action. These can be sorted into different
categories depending on their end-use—for example, missile defence systems, anti-
aircraft systems and close-in weapon systems (CIWSs). All these systems operate in
the same way: they use a radar to detect and track incoming threats (missiles, rockets
or enemy aircraft), and a computer-controlled fire system that can prioritize, select
and potentially autonomously attack these threats.

Functions and capabilities

Autonomy in air defence systems has no other function than supporting targeting. The
aim is to detect, track, prioritize, select and potentially engage incoming air threats
more rapidly and more accurately than a human possibly could. Two examples that
highlight the performance of such systems are the S-400 Triumf and the Rapier. The
S-400 Triumf, a Russian-made air defence system, can reportedly track more than 300
targets and engage more than 36 targets simultaneously, at a distance of up to 250
kilometres. The Rapier, which is produced by MBDA and is the UK’s primary air
defence system, takes 6 seconds from target detection to missile launch. The
technology behind air defence systems has not fundamentally changed since the
invention of the Mark 56. The performance of radar and fire control systems has
certainly improved, but the operating principle remains the same.

Target detection/identification:- Air defence systems typically use a radar system to
detect potential targets. The radar system emits radio-frequency signals and detects
targets based on the return of the reflected signals. Incoming threats are generally
identified using two simple criteria: trajectory and velocity.

Target prioritization:- When several incoming threats are detected, systems typically
proceed to a threat assessment to determine which target to engage first. Once again,
the assessment is made based on preset parameters. In the case of CIWSs, they
generally engage the target that represents the most imminent threat to the ship or
specific location that they are supposed to protect. For missile defence systems, such
as the Iron Dome, the parameter very much depends on the operational scenario, but
the assessment works in the same way: the system assesses where the incoming
missile or rocket is likely to land and evaluates accordingly whether it is worth
deploying countermeasures.

Target engagement:- The fire control systems of air defence systems have two modes
of engagement: human-in-the-loop and human-on-the-loop. In the human-in-the-loop
mode, the operator must always approve the launch, and there are one or several
‘decision leverage points’ where operators can give input on and control the
engagement process. In the human-on-the-loop mode, the system, once activated and
within specific parameters, can deploy countermeasures autonomously if it detects a
threat. However, the human operator supervises the system’s actions and can always
abort the attack if necessary.
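
To make the prioritization step concrete, the sketch below ranks incoming tracks by
imminence, estimated from range and closing velocity. All names, fields and example
numbers are illustrative assumptions; real threat-assessment parameters are classified:

    from dataclasses import dataclass

    @dataclass
    class Track:
        track_id: str
        range_m: float       # current distance to the defended asset, in metres
        closing_mps: float   # closing velocity in m/s; positive means inbound

    def time_to_impact(t):
        return t.range_m / t.closing_mps if t.closing_mps > 0 else float("inf")

    def prioritize(tracks):
        """Rank inbound tracks so the most imminent threat is engaged first."""
        inbound = [t for t in tracks if t.closing_mps > 0]   # ignore receding contacts
        return sorted(inbound, key=time_to_impact)

    # Example: two inbound rockets and one receding aircraft
    tracks = [Track("r1", 5000, 800), Track("r2", 2000, 300), Track("a1", 9000, -50)]
    print([t.track_id for t in prioritize(tracks)])   # ['r1', 'r2']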

Human control

Existing systems seem to be governed by different rules of engagement, but
information is too scarce to make detailed comparisons between them. It is assumed,
however, that CIWSs, because they work as a last line of defence, are likely to operate
on a human-on-the-loop mode as standard. In the case of missile defence systems,
such as the Patriot system or Iron Dome, selection of the mode of engagement will
depend on the nature and imminence of the threats and the operational circumstances.
For the Aegis Combat System, which is an integrated combat system that can conduct
both defensive and offensive missions, the use of full automatic mode is reserved for
self-defence against anti-ship cruise missiles.

All air defence systems are intended to operate under human supervision. The
decision to activate the system is retained by a commander who also maintains
oversight during the operation and can stop the weapon at any time. However, history
has shown that direct human control and supervision is not always a remedy to the
problems that emerge with the use of advanced autonomy in the targeting process.
One tragic example is the human failure that led to the destruction of a commercial
aircraft—Iran Air Flight 655—on 3 July 1988 by the Aegis Combat System on the
USS Vincennes, a US Navy warship. It was reported that the Aegis Combat System
accurately detected Flight 655 and notified the crew that it was emitting signals on a
civilian frequency and climbing. However, the crew on the USS Vincennes mistook
the airliner for an attacking combat aircraft and decided to shoot it down. According
to reports, the commanding officers were under stress when assessing the information
provided by the Aegis Combat System and had a preconceived notion that the airliner
was a combat aircraft descending to attack. As a result, they took the decision to
respond, believing that they were defending themselves. This incident illustrates that
human supervision is no intrinsic guarantee of reliable use; rather, it may be a source
of problems if personnel are not properly trained, or if the information interface
provided by the system is too complex for a trained operator to handle in an urgent
situation.

Active protection systems

Definition and characteristics

Active protection systems (APSs) are weapon systems that are designed to protect
armoured vehicles against incoming anti-tank missiles or rockets. APSs operate on the
same basic principle as air defence systems. They combine a sensor system, typically
a radar, IR or ultraviolet (UV) detection sensor that detects incoming projectiles, with
a fire control system that tracks, evaluates and classifies the incoming threats. The
systems then launch the appropriate countermeasures (hard-kill or soft-kill) at the
optimal location and point in time. Hard-kill countermeasures usually consist of firing
rockets or shotgun blasts at the incoming projectiles to (a) alter the angle at which
they approach the armoured vehicle; (b) decrease the chances of penetration; (c)
trigger a premature or improper initiation of the warhead; or (d) destroy the outer shell.
Soft-kill measures include using IR jammers, laser spot imitators or radar jammers to
prevent the guided munitions from remaining locked onto the vehicle that the APS is
meant to protect.

Functions and capabilities

There are 17 different APS models (note that only hard-kill APSs were considered):
seven are in use, three are still under development, six have been developed but never
formally acquired or used, and one has been retired. All these systems operate more or
less in the same way, but there are variations in terms of actual capabilities.

Target detection:- APSs function by detecting incoming projectiles in various ways
(radar, IR or UV detectors). The area that an APS can cover varies. The first
operational APS—the Drozd (Russia)—could only cover the forward 60 degrees of a
tank’s gun turret; the tank crew had to move the turret to change the tank’s protective
profile. The APS that currently equips Russia’s newest tank, the T-14 Armata,
reportedly covers only threats that are lateral to the turret, which means that it cannot
protect the tank against air-launched guided missiles or projectiles that use a
top-attack mode.

Target identification:- The way APSs identify and classify incoming threats is very
similar to that of CIWSs: their sensors evaluate the speed and trajectory of the incoming
threats. Some systems, such as Israel’s Trophy, include additional advanced features
that allow the system to also calculate the shooter’s location.

Target prioritization:- The ability to simultaneously detect and track multiple targets
seems to be a standard feature of APSs. The soon-to-be-deployed Afghanit will
supposedly be capable of detecting and tracking up to 40 ground targets and 25 aerial
targets. As with CIWSs, the parameters that APSs use to prioritize targets are
classified information but are very likely to be a combination of risk variables such as
time until impact and nature of the incoming projectiles.

Target engagement:- Each of the 17 models of APS identified is designed to execute
the entire process of detecting, identifying, tracking and selecting incoming projectiles
in complete autonomy. This is due to the fact that APSs are supposed to act within a
time frame that is far too short to allow human authorization or supervision of the
target engagement. An APS’s reaction time is its key performance measure. One
recent report noted that an APS with a reaction time of 300 milliseconds would only
be able to intercept a typical anti-tank missile if it were launched from at least 400
metres away.
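
That figure can be sanity-checked with back-of-the-envelope arithmetic. In the sketch
below, the missile speed and the minimum standoff distance are assumed values chosen
only to illustrate how a 400-metre minimum could arise; they are not taken from the
report:

    REACTION_TIME_S = 0.300     # APS reaction time cited above
    MISSILE_SPEED_MPS = 300.0   # assumed speed of a typical anti-tank missile
    MIN_STANDOFF_M = 310.0      # assumed closest range at which a countermeasure can still act

    def interceptable(launch_distance_m):
        """Can the APS act before the threat closes inside its minimum standoff?"""
        distance_when_ready = launch_distance_m - MISSILE_SPEED_MPS * REACTION_TIME_S
        return distance_when_ready >= MIN_STANDOFF_M

    print(interceptable(400.0))   # True: 400 - 90 = 310 m remain when the APS is ready
    print(interceptable(200.0))   # False: the threat arrives before the APS can respond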

Human control

Once activated, APSs are supposed to function in complete autonomy. However,
when they are mounted on a manned vehicle, humans inside the vehicle can override
or manually shut down the system in case of problems. There is less open-source
information available on how the functioning of APSs on unmanned systems is
supervised and managed by human operators. Published details on Israel’s Trophy
APS suggest that when it is mounted on an unmanned system, it will shut down in the
event of a communication failure with the remote operator. As APSs have seen only
limited use in combat, little is known about the doctrines that govern their use and the
effects that their use might have on civilians and friendly forces. Reportedly, the use
of the Trophy APS during the 2014 Gaza–Israel conflict did not result in any civilian
casualties.

Robotic Sentry Weapons

Definition and characteristics


Robotic sentry weapons are gun turrets that can automatically detect, track and
(potentially) engage targets. They can be used as stationary weapons or be mounted
on various types of vehicles. They resemble CIWSs but they use smaller calibre
rounds and are usually employed as anti-personnel weapons. Robotic sentry weapons
remain relatively rare. As per the latest UN report, there are only three different
models, namely Samsung’s SGR-A1 (South Korea), Rafael’s Sentry Tech (Israel)
and DODAAM’s Super aEgis II (South Korea). The development of each of these
systems was completed only very recently: 2006 for the SGR-A1, 2007 for the Sentry
Tech and 2010 for the Super aEgis II. The Super aEgis II and the Sentry Tech are
currently in use; the SGR-A1 is already retired.
Israel and South Korea are the only two countries that currently produce and sell anti-
personnel sentry weapons. Both countries initiated the development of these systems
for border security purposes. Israeli armed forces used the Sentry Tech for protecting
Israel’s border along the Gaza Strip. South Korea invested in the development of the
SGR-A1 and Super aEgis II for potential deployment in the Demilitarized Zone
(DMZ)—the buffer zone at the border between North and South Korea. The Korean
War Armistice Agreement of 1953 prohibits the deployment of weapons in the zone,
so these systems have never been fielded in the DMZ. The South Korean Army has,
however, deployed the SGR-A1 on an experimental basis outside South Korea,
notably in Afghanistan and Iraq.

Functions and capabilities

As they are currently employed, robotic sentry weapons might be more accurately
described as weaponized autonomous surveillance systems. Autonomy serves
primarily to guarantee that they are keeping a sharp and unblinking eye on the
perimeters under their protection.

Target detection:- All three robotic sentry weapon systems commonly use a
combination of digital cameras and IR cameras to detect targets within a relatively large
perimeter. The Super aEgis II, for instance, can supposedly detect and lock on to
human-sized targets at a distance of up to 2.2 km at night and 3 km in daylight.

Target identification:- Robotic sentry weapons recognize targets based chiefly on heat
and motion patterns. They are therefore unable to distinguish between ‘civilian’ and
‘military’ human targets. They do, however, include some features that allow them to
detect more than simple human presence. The SGR-A1 can reportedly recognize
surrender motions (arms held high to indicate surrender), while the Super aEgis II can
sense whether a human target is carrying explosives under his or her outer clothing.

Target engagement:- The SGR-A1, the Sentry Tech and the Super aEgis II each
feature different modes of target engagement. The SGR-A1 and the Sentry Tech
reportedly only have the possibility of alerting an operator to the presence of a human
in the surveillance zone; at that point, a human operator takes control over the system.
The operator then uses the video and audio equipment mounted on the system to
establish communication and issue a warning to a person or people that the system has
detected. Depending on the target’s reaction, the human operator might decide to fire
or not to fire the weapon.

In its original design, the Super aEgis II was intended to execute all the steps in the
process fully autonomously. It was built with a speech interface that allows it to
interrogate and warn detected targets. Prospective users of the system reportedly
expressed concern that it might make mistakes and requested the introduction of
safeguards.

DODAAM therefore revised the system to include three modes: human-in-the-loop
(the human operator must enter a password to unlock the robot’s firing ability and
give the manual input that permits the robot to shoot);
human-on-the-loop (a human operator supervises and can override the actions of the
system);
human-out-of-the-loop (the system is fully autonomous and not supervised in real-
time by a human operator). According to DODAAM, all the current users have
configured the system to human-in-the-loop mode.
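
These three modes amount to a small state check on the firing chain. The sketch below
mirrors the password unlock that DODAAM describes for the human-in-the-loop mode;
everything else in it is an illustrative assumption:

    def sentry_may_fire(mode, password_ok=False, manual_fire_cmd=False, supervisor_veto=False):
        """Illustrative gate over the Super aEgis II's three engagement modes."""
        if mode == "human_in_the_loop":
            # the operator must unlock firing with a password and give a manual input
            return password_ok and manual_fire_cmd
        if mode == "human_on_the_loop":
            return not supervisor_veto   # autonomous unless the supervisor intervenes
        if mode == "human_out_of_the_loop":
            return True                  # fully autonomous, no real-time supervision
        return False                     # unknown mode: fail safe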

Human control

As they are currently employed, robotic sentry weapons hand over control to a human
command-and-control centre once targets are detected. The SGR-A1, for instance,
reportedly requires a minimum of two people to operate each robot, one operator and
one commander. The question of whether the use of a robotic sentry weapon in a fully
autonomous mode would be lawful is still a matter of contention. Some have argued
that the system’s inability to distinguish between civilian and military targets and
make proportionality assessments would make the use of fully autonomous mode
necessarily unlawful. Others have argued that the legality of the system is very much
context-based and that using the system in human-out-of-the-loop mode would not be
legally problematic as long as it is deployed in an area where (a) it is reasonable to
assume there would be no civilian presence; and (b) circumstances would make the
use of force proportionate (e.g. the DMZ).

Guided munitions

Definition and characteristics

Guided munitions—also called smart bombs or precision-guided munitions—are
explosive projectiles that can actively correct for initial-aiming or subsequent errors
by homing in on their targets or aim-points after being fired, released or launched. It
is also debatable whether guided munitions may be described as autonomous weapon
systems, as they are assigned targets in advance by human operators. They only use
autonomy to travel to, track or engage the pre-assigned target. However, analysis of
their development and use provides some interesting insights into the advance of
targeting technology and human control. Guided munitions encompass a wide range
of systems—missiles, torpedoes, sensor-fused munitions and encapsulated torpedo
mines—which may feature very different characteristics and properties.

Functions and capabilities

As previously mentioned, the vast majority of guided munitions use autonomy only to
find, track and hit targets or target locations that have been pre-assigned by humans.

In that sense, autonomy does not support the target selection process; it solely
supports the execution of the attack. The few guided munitions with some target
selection autonomy include the Long-Range Anti-Ship Missile (LRASM) (USA),
Dual-Mode Brimstone (UK) and the Naval Strike Missile/Joint Strike Missile
(NSM/JSM) (Norway). These are all missile systems.


In contrast to regular guided missiles, the Dual-Mode Brimstone and the NSM/JSM
are not assigned a specific target; rather, they are assigned a target area, where they
will have the task of finding targets that match a predefined target type.

Mobility:- Before launch, the missiles are assigned a specific area they are allowed to
engage. The operator has to assess whether within that area there is a risk of hitting
friendly forces or civilians or civilian objects, and program the systems accordingly.
The operator also sets parameters such as altitude and minimum time of flight.

Target detection/identification:- The missiles are programmed to search for specific
target types, automatically rejecting targets that do not fit their assigned signature.
The Dual-Mode Brimstone only targets armoured vehicles and reportedly can identify
buses, cars and buildings as invalid targets through high-resolution radar images.
The NSM/JSM targets ships. Kongsberg, the company that develops the system,
reports that operators use photographs of ships to semi-automatically create
silhouettes and 3-D IR models, which are stored in the NSM/JSM Target Library
System, providing a database of potential targets.

Target prioritization/engagement:- These systems are likely to include parameters
that allow them to prioritize between targets, but this is classified information.
Regarding target engagement, the NSM/JSM reportedly can decide whether to engage
on the ‘tactical situation/scene data’ following criteria assigned by the operator. These
criteria include the zones it can fly and engage in, altitude, minimum time of flight,
‘target approach heading’ and ‘minimum detection by target’. The criteria are
reprogrammable to meet different rules of engagement.
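
Putting these elements together, the following hedged sketch shows how operator-set
constraints and a target-type match might combine into a single validity check. The
field names and the matching step are assumptions, since the actual parameters are
classified:

    from dataclasses import dataclass

    @dataclass
    class MissionParameters:            # set by the operator before launch
        zone: tuple                     # (lat_min, lat_max, lon_min, lon_max), illustrative
        min_time_of_flight_s: float     # the missile must fly at least this long before arming
        valid_types: set                # e.g. {"armoured_vehicle"} for the Dual-Mode Brimstone

    def is_valid_target(p, detected_type, lat, lon, time_of_flight_s):
        """Reject any detection that falls outside the operator-assigned constraints."""
        lat_min, lat_max, lon_min, lon_max = p.zone
        in_zone = lat_min <= lat <= lat_max and lon_min <= lon <= lon_max
        armed = time_of_flight_s >= p.min_time_of_flight_s
        matches = detected_type in p.valid_types   # signature/library match (radar or IR)
        return in_zone and armed and matches

    # Example: a bus inside the zone is rejected because it is not a valid target type
    p = MissionParameters((10.0, 10.5, 20.0, 20.5), 30.0, {"armoured_vehicle"})
    print(is_valid_target(p, "bus", 10.2, 20.3, 45.0))                # False
    print(is_valid_target(p, "armoured_vehicle", 10.2, 20.3, 45.0))   # True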

Human control

The Dual-Mode Brimstone is the only guided munition featuring target selection
autonomy that is currently operational. It works like a fire-and-forget missile. Once
launched, the missile operates in full autonomy; it does not include a human-in-the-
loop mode. However, it can be optionally guided with an external laser as well,
providing control to the operator if needed. The precise nature of the human–systems
command-and-control relationships used by the NSM/JSM and the LRASM remains
unclear.

Loitering weapons

Definition and characteristics


Loitering weapons—also labelled ‘loitering munitions’ or ‘suicide drones’—are a
hybrid type of weapon system, which fits a niche between guided munitions and
unmanned combat aerial systems (UCASs). Loitering weapons combine the purpose
and attack mode of guided munitions (loitering weapons dive-bomb their targets) with
the maneuverability of UCASs. They can loiter for an extended time to find and strike
targets on the ground. Their operational utility lies in the fact that they (a) are not
aimed at a predefined target but rather a target area (in contrast to guided munitions);
and (b) are disposable. They can conduct offensive and defensive missions.

The large majority of loitering weapons operate under remote control. Only four
operational systems have been identified that can find, track and attack targets in
complete autonomy once launched: the Orbiter 1K ‘Kingfisher’, the Harpy, the Harop
and the Harpy NG (all from Israel). As previously noted, Germany, the UK and the
USA all started development on loitering weapons with a fully autonomous
engagement mode.

Examples of these systems include (a) the Low Cost Autonomous Attack System
(LOCAAS) (USA); (b) the Non-Line-of-Sight Launch System (NLOS-LS) (USA); (c)
the Taifun/TARES (Germany); and (d) the Battlefield Loitering Artillery Direct
Effect (BLADE) (UK).

None of these systems went beyond the R&D phase. Besides technical and cost
issues, a key reason for the cancellation of these programmes was the controversy
around the use of autonomy for targeting. The US Air Force, for instance, was
reportedly reluctant to have a weapon system that it could not control at all times.

Functions and capabilities

The Harpy, which is the oldest system, operates in complete autonomy. The Harop
and the Harpy NG, which are upgrades of the Harpy, as well as the Orbiter 1K,
include both a human-in-the-loop and fully autonomous mode. However, the fully
autonomous mode seems to be reserved only for suppression of enemy air
defences (SEAD) missions. In such
circumstances, they operate very much like an anti-radiation missile.

Once launched, the loitering weapon flies to the predetermined geographical area in
which it is allowed to engage, using GPS coordinates or pre-programmed flight
routes. Upon arrival, it activates its anti-radar seeker to search for and locate potential
targets. It may have programmed rules to prioritize between targets. If it cannot find
the prioritized targets, it is supposed to move on to, and engage with, secondary
targets.
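
This search-and-fallback behaviour can be sketched as a simple priority loop; the names
and rules below are hypothetical, as the actual prioritization logic is not public:

    def select_target(detections, priority_order):
        """Pick the highest-priority emitter found by the seeker, falling back to secondary types."""
        for target_type in priority_order:        # e.g. fire-control radars before search radars
            for d in detections:
                if d["type"] == target_type:
                    return d
        return None   # nothing matched: keep loitering, return, or self-destruct by design

    # Example: a fire-control radar outranks a search radar even if detected later
    found = select_target(
        [{"type": "search_radar", "id": "B"}, {"type": "fire_control_radar", "id": "A"}],
        ["fire_control_radar", "search_radar"],
    )
    print(found)   # {'type': 'fire_control_radar', 'id': 'A'}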

The human-in-the-loop mode seems to be preferred for operations against high-value
targets such as armoured vehicles. In such cases, loitering weapons use optical and IR
sensors to search for, locate and monitor predefined target types. A human operator
supervises the system and retains the ability to abort the attack up until a few seconds
before impact.

The Harop was recently used in armed conflict. Azerbaijan’s armed forces used the
system in Nagorno–Karabakh in April 2016 to hit six Armenian military targets,
including a bus full of volunteers, artillery systems, air defence systems and a military
runway. Azerbaijan’s armed forces reportedly used the human-in-the-loop mode.

A meaningful assessment of human control over loitering weapons would need to look
at the specific operational circumstances, but it would also have to take into
consideration variations in models of command-and-control for swarm operations.

Limitations of Autonomous Weapon Systems


There are several technical aspects of LAWS currently under inquiry, including their
targeting systems, the level of human control present, and what exactly constitutes
autonomy in weapon systems.

We know that lethal autonomous weapons can locate and attack a person without
any human intervention, and their targets may include innocent civilians. They can be
mass produced and made so small that they can kill without being identified. Beyond
ethical issues, replacing human decision-making with algorithms is highly
destabilizing: cheap mass production and copying of the algorithms behind lethal
autonomous weapons would fuel proliferation to both state and non-state actors, who
could simply copy the code and develop their own AWS (Autonomous Weapon
Systems).

Autonomous weapons have been described as the third revolution in warfare, after
gunpowder and nuclear arms, but the era of relying completely on AI (Artificial
Intelligence) has not yet arrived. This suggests that fully autonomous weapon systems
should be replaced with semi-autonomous weapon systems in which a human gives
the command to shoot or attack.

Human Rights Watch reserves the description “robots that can select targets and
deliver force only with a human command” for human-in-the-loop weapons. The most
important thing to note is that genuinely autonomous weapons do not simply follow a
predetermined path programmed by humans; rather, they acquire, interpret, and react
to data on their own, according to the algorithms on which their systems operate, i.e.
they react to their surroundings.

Some argue that because robots can react to a situation far more quickly than humans
can, human-on-the-loop oversight is pointless, as the operator would be unable to
cancel an attack deemed disproportionate or indiscriminate before the robot executes it.
Finally, regardless of the amount of human control within a weapon system, there is a
question of where to place blame for law violations if or when they occur.

Accountability of Autonomous Weapon Systems

The advent of artificial intelligence (AI) and the emergence of Lethal Autonomous
Weapon Systems (LAWS) have made it necessary to re-contextualise many of the
established principles of international humanitarian law (IHL). While AI represents
the maturing of critical technologies around data collection, computing power and
algorithmic decision-making, the conversation around LAWS has begun to engage
deeper issues around accountability and ethics in the conduct of warfare, and around
democratic decision-making itself.

Traditionally, where humans would take the decision to use force, this could lead to
prosecutions, disciplinary action or the need to pay compensation. The question arises,
however, of what happens where humans do not exercise meaningful human control
over the use of force during armed conflict or law enforcement, but delegate it to
computers.

The underlying assumption of this question is that autonomous weapon systems are
not illegal weapons – that they may be used under certain circumstances. There is of
course a view according to which they are illegal weapons under existing law and/or
should be declared as such by new law. This is based on arguments, for example, that
their use cannot meet the requirements of IHL that protect the lives of civilians (such
as distinction and proportionality) in the case of armed conflict, or that they cannot
meet the requirements of IHL that protect those against whom force may be used in
the context of law enforcement.

It has also been argued that delegating decisions over life and death to
machines is inherently wrong, whether this is done in conformity with the formal
requirements of IHL or IHRL, or not. What is at stake here is not just the protection of
the lives of those mentioned, but also the human dignity of anyone at the receiving
end of the use of such autonomous force (including combatants and suspected
perpetrators who may otherwise lawfully be targeted).

On this view, to use such weapons under any circumstances would be illegal, and any
use should lead to accountability; these weapons should also be banned formally
because they violate the public conscience. However, even assuming they are not
illegal weapons and may be used under certain circumstances, things can still go
wrong: there may be a malfunction; the machines may learn things they were not
supposed to learn; or there could be other unexpected results.

Normally humans are held accountable based on the control they exercised in making
decisions, but humans are by definition out of the loop where machines
are used that take autonomous, and in many cases unpredictable, decisions. It clearly
makes no sense to punish a machine for its autonomous decisions. The question arises
of whether there will be an accountability vacuum. This will not be acceptable

because it will mean that the underlying values – the protection of humanitarian
values and the rights to life and dignity – are in effect rendered without protection.

Ethical and Humanitarian Aspects of Autonomous Weapon Systems
Ethical and humanitarian issues are at the heart of the debate about the acceptability
of autonomous weapon systems. Anxiety about the loss of human control over
weapon systems and the use of force goes beyond questions of the compatibility of
autonomous weapon systems with our laws to encompass fundamental questions of
their acceptability to our values.
ethical debate has been a focus on autonomous weapon systems that are designed to
kill or injure humans, rather than those that destroy or damage objects, which are
already employed to a limited extent.

The fundamental ethical question is whether the principles of humanity and the
dictates of the public conscience can allow human decision-making on the use of
force to be effectively substituted with computer-controlled processes, and life-and-
death decisions to be ceded to machines.

It is clear that ethical decisions by States, and by society at large, have preceded and
motivated the development of new international legal constraints in warfare, including
constraints on weapons that cause unacceptable harm. In international humanitarian
law, notions of humanity and public conscience are drawn from the Martens Clause.
As a potential marker of the public conscience, opinion polls to date suggest a general
opposition to autonomous weapon systems—with autonomy eliciting a stronger
response than remote-controlled systems.

Another concern in the debate on ethics and humanity is that while unmanned
weapons open the possibility to attack an enemy who cannot fight back, the enemy
will often compensate for their inability to attack appropriate targets by attacking
innocent people. Additionally, the possibility of terrorist organizations obtaining the
technology poses a threat to international peace and security, thus highlighting the
humanitarian aspect of LAWS. Because legislation most often develops in response to
new technology, it is important to create an ethical structure on which to base the
legal framework now, while the use of unmanned robots is still nascent and their
implications are uncertain. The International Covenant on Civil and Political Rights
states, “Every human being has the inherent right to life. This right shall be protected
by law. No one shall be arbitrarily deprived of his life.” Allowing robots to make the
decision to kill makes those deaths arbitrary because robots lack the capacity to judge
and interpret their targets the way humans can interpret and review subjects in
consideration of existing laws.

The report by the UN Panel of Experts on Libya indicates that a Kargu-2 kamikaze
drone manufactured by Turkey’s state-owned company STM was likely used in

March 2020 in clashes between the forces of the Turkish-backed Government of
National Accord and the Libyan National Army of eastern warlord Khalifa Hifter
following the latter’s besiegement of Tripoli. STM describes Kargu-2 as a loitering
rotary-wing attack drone with real-time image processing capabilities and embedded
machine learning algorithms; it is also equipped with swarming capabilities that
allow up to 20 drones to work together. Along with Kargu-2, the Alpagu fixed-wing
loitering munition system and the Togan autonomous multi-rotor reconnaissance
drone — both also developed by STM — stand out as examples of advanced
autonomous capabilities in the Turkish defense industry. According to the company,
all three unmanned aerial vehicles use computer imaging for targeting and are
programmed with machine learning algorithms to optimize target classification,
tracking and attack capabilities without the need for a GPS connection.
While such technologies sound like a revolutionary step in warfare, a global
debate has been simmering since the early 2000s on whether lethal autonomous
weapon systems should be regulated or banned, given ethical concerns over their
ability to select and hit targets without human intervention. The release of the UN
report on Libya has rekindled the debate, which had been largely hypothetical thus far.

Bloc Positions
A report published in April 2019 by PAX, an international peace organization,
detailed that the US, China, the Russian Federation, UK, Israel, South Korea, and
France have the most complex autonomous weapon technology to date.
Correspondingly, according to the Campaign to Stop Killer Robots, 28 member states
are in favour of banning LAWS entirely. Some of these countries included Algeria,
Argentina, Austria, Bolivia, Brazil, Chile, Colombia, Costa Rica, Cuba, Djibouti,
Ecuador, Egypt, El Salvador, Ghana, Guatemala, Iraq, Mexico, Morocco, Nicaragua,
Pakistan, Peru, Uganda, Venezuela, and Zimbabwe. However, 12 countries have
explicitly protested negotiating an additional treaty about LAWS. These include
Australia, Belgium, France, Germany, Israel, The Republic of Korea, Russia, Spain,
Sweden, Turkey, The United States of America, and the United Kingdom.

United States of America - On the whole, the US has proven that it is a global leader
in the autonomous weapon sector. Most prominently, the US and the
UK are the only countries that have begun the process of formulating policies about
lethal autonomous weapons. In November 2012, the US Department of Defense
passed a directive that widened the possibility for a legal framework surrounding the
development and deployment process of autonomous weapons while emphasizing the
protection of innocent citizens. The US also possesses the MQ-1 Predator Drone,
which contains a multitude of offensive capabilities (such as Hellfire missiles). This
unmanned aircraft has been used in Afghanistan, Pakistan, Bosnia, Serbia, Iraq,
Yemen, Libya, and Somalia to date. Given their past efforts in developing and
deploying LAWS, it is unlikely that the United States would support banning
them altogether.

China - China’s foreign policy is exceptional: while it has expressed interest in
negotiating an additional protocol to “ban the use of fully autonomous lethal weapons
systems,” it is against outright preventing the development of LAWS. In other words,
although China is currently not interested in using LAWS in combat, it wants to
continue developing them. In 2018, Ding Xiangrong, the Deputy Director of
the General Office at the Chinese Central Military Commission, also explained their
objective of engaging in the “ongoing military revolution… centred on information
technology and intelligent technology”, alluding to their intention to use them in the
future.

European Union - Most European countries have publicly declared their intention to
ban LAWS in light of international humanitarian obligations. The
European Parliament has also expressed an interest in enforcing review and approval
processes for new weapons. The United Kingdom has diverged from this policy,
recognizing the various benefits that LAWS can offer; it has produced several of its
own autonomous weapon systems and has opposed internationally binding bans on
LAWS.

African Union - Algeria, Egypt, Ghana, Uganda, Zimbabwe, and South Africa
were among the first countries to advocate for a ban on LAWS. Moreover, on April 9,
2018, member states of the African Group called for a legally binding framework on
LAWS, stating that “fully autonomous weapons systems or LAWS that are not under
human control should be banned.” These countries include: Algeria, Angola, the
Central African Republic, Republic of the Congo, the Democratic Republic of the
Congo, Egypt, Kenya, Liberia, Libya, Mauritania, Mauritius, Morocco, Niger, Nigeria,
Rwanda, Senegal, Somalia, South Africa, South Sudan, Sudan and Zimbabwe, among
others.
In reality, African states are just as threatened by LAWS in the context of national
security as other global powers. The looming presence of ethnic violence, civil wars,
terrorism, and various insurgencies makes LAWS an even bigger threat in countries
like Nigeria, Somalia, and South Sudan, among others. Given that several non-state
actors have already gained access to autonomous weapons without the presence of
any legal and regulatory frameworks, it is evident that further action is promptly
needed. Their possession of such dangerous weapons is not only a symbolic threat
but also a threat to international safety, as non-state actors cannot easily be held
accountable for atrocities committed in warfare, especially given the ceaseless
turmoil in the areas in which they operate.

QARMA (Questions a Resolution Must Answer)


1. What will be the definition of LAWS that is accepted worldwide?

2. What are the technical challenges to overcome in developing fully autonomous
weapons systems, particularly with regard to the identification of targets?

3. Responsibility and accountability are core aspects of IHL. Is the uninterrupted
accountability chain within an armed force challenged by increasing autonomy in
weapons systems?

4. Where do LAWS pose challenges in terms of compliance with IHL?
(Distinction, proportionality or precautions in attack)

5. Should a separate UN body be established whose sole purpose would be to
monitor LAWS? If yes, how will it function? If no, which UN body will monitor
the development of LAWS?

6. What would be the impact of the development and deployment of LAWS on
human rights, in particular the right to life and the right to dignity?

7. What ethical questions arise from the development and deployment of LAWS?

8. Should the trade and deployment of LAWS be stopped?

Bibliography
https://research.un.org/en/docs/ga/quick/regular/61

https://undocs.org/en/A/RES/61/55&Lang=E

https://www.armscontrol.org/act/2019-03/features/autonomous-weapons-systems-laws-war

https://autonomousweapons.org/

http://www.lyonmun.com/wp-content/uploads/2018/05/UNSC.Def_.pdf

https://news.un.org/en/story/2019/03/1035381

https://static1.squarespace.com/static/57b632432994cab0b44562ae/t/5e33e57c95862d328a707b16/1580459396212/DISEC%2BA+Backgrounder.pdf

https://international-review.icrc.org/articles/stepping-back-from-brink-regulation-of-autonomous-weapons-systems-913

https://link.springer.com/article/10.1007/s43154-020-00024-3

https://static1.squarespace.com/static/51e9d93ee4b0c80288b6ff52/t/5c0836566d2a7321cc0e7dce/1544042076751/GA+1+CR+1+%28HIAMUN+%2719%29.pdf

https://www.nmun.org/assets/documents/conference-archives/new-york/2015/NY15_BGG_GA1.pdf

https://www.law.upenn.edu/institutes/cerl/conferences/ethicsofweapons/
