Autonomous Weapon Systems Overview
Table of Contents
Committee Overview
Introduction to the Agenda
International and Regional Framework
Bloc Positions
QARMA
Bibliography
Letter from the Executive Board
Dear delegates,
The Executive Board hopes to keep you on your toes, teach you to react to crisis developments and face international challenges, and assess how you address those challenges, all while having fun. Co-operation, a good understanding of international law and active participation in the committee will certainly be well rewarded. The following pages will give you handy information about the agenda, as well as useful links to help you in your research. The guide will provide a bird’s-eye view of the agenda and the important sub-topics discussed, along with a chronological overview of the agenda.
Regards,
Executive Board,
[email protected]
Committee Overview
DISEC stands for the Disarmament and International Security Committee. It is the First Committee of the General Assembly: it deals with issues relating to disarmament and threats to peace that affect the international community, and recommends solutions to the challenges in the international security system.
The committee works closely with the United Nations Security Council to promote the fundamental goal of the UN, the promotion of international peace and security. All States that are members of the United Nations and have agreed to the United Nations Charter are entitled to membership of DISEC.
The final resolutions that are agreed upon in the DISEC are
communicated to the General Assembly (GA) and the Security Council
(SC). DISEC’s decisions are highly regarded because it is the part of the
General Assembly where every nation gets equal representation in voting
procedures, while in the Security Council, the voting procedure mainly
depends upon the votes of the permanent five members.
DISEC does not have the power to impose sanctions on other nations. Delegates, please remember that DISEC is recommendatory in nature and cannot authorize anything without the approval of the United Nations Security Council. Its chief aim is to address problems that threaten international peace and security with great care and urgency.
Introduction to the Agenda
The topic of Lethal Autonomous Weapons Systems (LAWS) does not have a long
history within the international arena because LAWS have only become a policy issue
over the last few years as technology has evolved. Thus, because of the burgeoning
nature of the topic, there are no binding agreements specifically targeting LAWS.
Current international and regional frameworks relevant to the use of LAWS, such as
the Geneva Conventions and their Additional Protocols, focus on international
humanitarian law (IHL) and international human rights law. However, in the past
three years, treaty bodies, such as the one which oversees the Convention on Certain Conventional Weapons (CCW), have held meetings of experts on LAWS to begin
discussions on pre-emptive moves to address LAWS. Moreover, numerous civil
society organizations (CSOs) have been working together and in conjunction with the
United Nations (UN) to promote awareness of the potential impact of LAWS and to
take definitive action in prohibiting their manufacture and implementation. LAWS go
by many names, such as Lethal Autonomous Robotics (LARs), Fully Autonomous
Weapon Systems (FAWS), remotely piloted aerial systems, or even “Killer Robots,”
and their definition is just as ambiguous.
Human Rights Watch has also provided some definitions according to the level of
human input and supervision in selecting and attacking targets:
Human-in-the-Loop Weapons: “Robots that can select targets and deliver force only
with a human command”;
Human-on-the-Loop Weapons: “Robots that can select targets and deliver force
under the oversight of a human operator who can override the robots’ actions”;
Human-out-of-the-Loop Weapons: “Robots that are capable of selecting targets
and delivering force without any human input or interaction.”
Fully autonomous weapon: “The term ‘fully autonomous weapon’ refers to both out-of-the-loop weapons and those that allow a human on the loop, but that are effectively out-of-the-loop weapons because the supervision is so limited.”
The ICRC has also offered some general definitions in its 2011 report on IHL and challenges in contemporary armed conflicts:
Automated weapon system: “An automated weapon or weapons system is one that is able to function in a self-contained and independent manner although its employment may initially be deployed or directed by a human operator. Although deployed by humans, such systems will independently verify or detect a particular type of target object and then fire or detonate.”
Autonomous weapon system: “An autonomous weapon system is one that can
learn or adapt its functioning in response to changing circumstances in the
environment in which it is deployed. A truly autonomous system would have artificial
intelligence that would have to be capable of implementing IHL.” Common to all the
above definitions is the inclusion of weapon systems that can independently select and
attack targets, with or without human oversight. This includes both weapon systems
that can adapt to changing circumstances and ‘choose’ their targets and
weapon systems that have pre-defined constraints on their operation and potential
targets or target groups. However, the distinction between autonomous and automated weapon systems is not always clear, since both have the capacity to independently select and attack targets within the bounds of their human-determined programming.
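To make these categories concrete, the short Python sketch below models the three Human Rights Watch loop categories and the “effectively out-of-the-loop” test, purely for illustration. None of these names come from real weapon-control software; they are invented for this guide.

from enum import Enum

class ControlMode(Enum):
    # HRW's three categories of human involvement in targeting
    HUMAN_IN_THE_LOOP = "in"        # force delivered only on a human command
    HUMAN_ON_THE_LOOP = "on"        # system acts; a human supervises and can override
    HUMAN_OUT_OF_THE_LOOP = "out"   # no human input or interaction at all

def requires_human_command(mode: ControlMode) -> bool:
    # Only the in-the-loop mode makes force delivery conditional on a human order.
    return mode is ControlMode.HUMAN_IN_THE_LOOP

def is_fully_autonomous(mode: ControlMode, supervision_is_meaningful: bool) -> bool:
    # HRW's 'fully autonomous weapon' covers out-of-the-loop systems and
    # on-the-loop systems whose supervision is so limited that it does not matter.
    if mode is ControlMode.HUMAN_OUT_OF_THE_LOOP:
        return True
    return mode is ControlMode.HUMAN_ON_THE_LOOP and not supervision_is_meaningful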
In November 2019, CCW States Parties decided to, once again, continue their
deliberations on LAWS. For the first time, however, these talks, which had previously
been conducted between 2014 and 2016 in informal meetings and since 2017 within
the framework of an expert subsidiary body called a Group of Governmental Experts
(GGE), were mandated to produce a specific outcome. For ten days in 2020 and for an
as-yet-unknown number of days in 2021 (when the CCW's next Review Conference is
due), the GGE was and is tasked with debating and fleshing out “aspects of the
normative and operational framework” on LAWS. In addition, in Annex III of their 2019 report, States Parties adopted eleven guiding principles to take into account going forward.
After the first five-day meeting of 2020 was postponed and then conducted in a hybrid
format due to the current global COVID-19 pandemic, the second meeting had to be
shelved, and it is currently unclear when and how the talks can resume. While some
States – most prominently Russia – have displayed no interest in producing new
international law in the CCW, arguing that “concerns regarding LAWS can be
addressed through faithful implementation of the existing international legal
norms”, others – such as Germany – claim that nothing short of “an important
milestone” has already been reached with the 2019 report cited above, even
describing the adopted eleven guiding principles as a “politically binding regulation”.
The United Nations has initiated discussions on the topic of ‘Lethal Autonomous Weapon Systems’, or ‘LAWS’, in the past few years. Under the powers given to the First Committee of the General Assembly by the United Nations Charter, LAWS fall within its jurisdiction, not only because of their nature as weapons but also because they pose a threat to international peace and security. The First Committee works with the UN Disarmament Commission (UNDC) and the Conference on Disarmament (CD) in discussing how international disarmament issues relate to LAWS.
The United Nations Secretary-General has stated that “machines with the power and
discretion to take lives without human involvement are politically unacceptable,
morally repugnant and should be prohibited by international law.” Additionally, the first clause of General Assembly Resolution 61/55, adopted on 6 December 2006, which ‘Affirms that scientific and technological progress should be used for the benefit of all mankind to promote the sustainable economic and social development of all States and to safeguard international security, and that international cooperation in the use of science and technology through the transfer and exchange of technological know-how for peaceful purposes should be promoted’, stands in direct opposition to the development of LAWS (Lethal Autonomous Weapon Systems).
Clause 4 of the European Parliament’s resolution 2018/2752 ‘Stresses, in this light, the fundamental importance of preventing the development and production of any lethal autonomous weapon system lacking human control in critical functions such as target selection and engagement;’
The first clause of the General Assembly Resolution 63/36 states, ‘Reaffirms that
effective measures should be taken to prevent the emergence of new types of weapons
of mass destruction;’ which is violated by the development of Lethal Autonomous
Weapon Systems.
The UN Institute for Disarmament Research (UNIDIR) has published a series of
documents considering legal and ethical issues of the development and use of LAWS,
as well as the application of international human rights, humanitarian, and criminal
law to LAWS.
Although UN bodies have acknowledged the need to address LAWS and have begun to discuss them in international and regional forums, civil society organizations (CSOs) have been far more active in promoting the topic.
Air defence systems are weapon systems that are specifically designed to nullify or reduce the effectiveness of hostile air action. They can be divided into different categories depending on their end-use—for example, missile defence systems, anti-aircraft systems and close-in weapon systems (CIWSs). All these systems operate in the same way: they use radar to detect and track incoming threats (missiles, rockets or enemy aircraft), and a computer-controlled fire system that can prioritize, select and potentially autonomously attack these threats.
Autonomy in air defence systems has no other function than supporting targeting. The
aim is to detect, track, prioritize, select and potentially engage incoming air threats
more rapidly and more accurately than a human possibly could. Two examples that
highlight the performance of such systems are the S-400 Triumf and the Rapier. The S-400 Triumf, a Russian-made air defence system, can reportedly track more than 300 targets and engage more than 36 targets simultaneously, at a distance of up to 250 kilometres. The Rapier, which is produced by MBDA and is the UK’s primary air defence system, takes 6 seconds from target detection to missile launch. The technology behind air defence systems has not fundamentally changed since the invention of the Mark 56. The performance of radar and fire control systems has certainly improved, but the operating principle remains the same.
Target prioritization: When several incoming threats are detected, systems typically
proceed to a threat assessment to determine which target to engage first. Once again,
the assessment is made based on preset parameters. In the case of CIWSs, they
generally engage the target that represents the most imminent threat to the ship or
specific location that they are supposed to protect. For missile defence systems, such
as the Iron Dome, the parameter very much depends on the operational scenario, but
the assessment works in the same way: the system assesses where the incoming
missile or rocket is likely to land and evaluates accordingly whether it is worth
deploying countermeasures.
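A minimal Python sketch of this preset-parameter assessment might rank tracked threats by estimated time to impact and discard those predicted to land outside the defended area. The real parameters of systems such as the Iron Dome are classified; every field name and rule below is an assumption made for illustration.

from dataclasses import dataclass

@dataclass
class Track:
    track_id: int
    time_to_impact_s: float           # estimated seconds until impact
    threatens_defended_area: bool     # predicted impact point inside the protected zone?

def prioritize(tracks: list[Track]) -> list[Track]:
    # Engage only threats predicted to hit the defended area,
    # most imminent first, mirroring the 'worth intercepting' test.
    worth_engaging = [t for t in tracks if t.threatens_defended_area]
    return sorted(worth_engaging, key=lambda t: t.time_to_impact_s)

# Example: the rocket predicted to land in an open field is ignored.
queue = prioritize([
    Track(1, time_to_impact_s=12.0, threatens_defended_area=True),
    Track(2, time_to_impact_s=6.0, threatens_defended_area=False),
])
assert [t.track_id for t in queue] == [1]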
Target engagement: The fire control systems of air defence systems have two modes
of engagement: human-in-the-loop and human-on-the-loop. In the human-in-the-loop
mode, the operator must always approve the launch, and there are one or several
‘decision leverage points’ where operators can give input on and control the
engagement process. In the human-on-the-loop mode, the system, once activated and
within specific parameters, can deploy countermeasures autonomously if it detects a
threat. However, the human operator supervises the system’s actions and can always
abort the attack if necessary.
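The difference between the two modes can be reduced to a simple decision gate: in-the-loop blocks until an operator approves, while on-the-loop fires unless the supervising operator aborts in time. The sketch below is an illustrative simplification, not real fire-control logic.

def authorize_engagement(mode: str, operator_approved: bool, operator_aborted: bool) -> bool:
    # Decide whether a detected threat may be engaged under each mode.
    if mode == "human-in-the-loop":
        # Launch happens only on an explicit human command.
        return operator_approved
    if mode == "human-on-the-loop":
        # The system engages autonomously within its parameters,
        # unless the supervising human aborts in time.
        return not operator_aborted
    raise ValueError(f"unknown engagement mode: {mode}")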
Human control
All air defence systems are intended to operate under human supervision. The
decision to activate the system is retained by a commander who also maintains
oversight during the operation and can stop the weapon at any time. However, history
has shown that direct human control and supervision is not always a remedy to the
problems that emerge with the use of advanced autonomy in the targeting process.
One tragic example is the human failure that led to the destruction of a commercial aircraft—Iran Air Flight 655—on 3 July 1988 by the Aegis Combat System on the USS Vincennes, a US Navy warship. It was reported that the Aegis Combat System accurately detected Flight 655 and notified the crew that it was emitting signals on a civilian frequency and climbing. However, the crew of the USS Vincennes mistook the airliner for an attacking combat aircraft and decided to shoot it down. According to reports, the commanding officers were under stress when assessing the information provided by the Aegis Combat System and had a preconceived notion that the airliner was a combat aircraft descending to attack. As a result, they took the decision to respond, believing that they were defending themselves. This incident illustrates that human supervision is no intrinsic guarantee of reliable use; rather, it may be a source of problems if personnel are not properly trained, or if the information interface provided by the system is too complex for a trained operator to handle in an urgent situation.
Active protection systems (APSs) are weapon systems that are designed to protect
armoured vehicles against incoming anti-tank missiles or rockets. APSs operate on the
same basic principle as air defence systems. They combine a sensor system, typically
a radar, IR or ultraviolet (UV) detection sensor that detects incoming projectiles, with
a fire control system that tracks, evaluates and classifies the incoming threats. The
systems then launch the appropriate countermeasures (hard-kill or soft-kill) at the
optimal location and point in time. Hard-kill countermeasures usually consist of firing
rockets or shotgun blasts at the incoming projectiles to (a) alter the angle at which
they approach the armoured vehicle; (b) decrease the chances of penetration; (c)
trigger a premature or improper initiation of the warhead; or (d) destroy the outer shell.
Soft-kill measures include using IR jammers, laser spot imitators or radar jammers to
prevent the guided munitions from remaining locked onto the vehicle that the APS is
meant to protect.
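The hard-kill/soft-kill choice can be sketched as a simple mapping from threat characteristics to a countermeasure class. Real APSs fuse radar, IR and UV data under classified decision rules; the rule below is an invented simplification of the principle described above.

def select_countermeasure(threat_is_guided: bool, seeker_locked_on: bool) -> str:
    # Illustrative APS logic: jam or decoy a guided munition's seeker if possible,
    # otherwise physically intercept the projectile.
    if threat_is_guided and seeker_locked_on:
        return "soft-kill: IR jammer, laser spot imitator or radar jammer"
    return "hard-kill: rockets or shotgun blasts at the computed intercept point"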
There are 17 different APS models (note that only hard-kill APSs were considered):
seven are in use, three are still under development, six have been developed but never
formally acquired or used, and one has been retired. All these systems operate more or
less in the same way, but there are variations in terms of actual capabilities.
Target identification: The way APSs identify and classify incoming threats is very similar to that of CIWSs: their sensors evaluate the speed and trajectory of the incoming threats. Some systems, such as Israel’s Trophy, include additional advanced features that allow the system to also calculate the shooter’s location.
Target prioritization: The ability to simultaneously detect and track multiple targets
seems to be a standard feature of APSs. The soon-to-be-deployed Afghanit will
supposedly be capable of detecting and tracking up to 40 ground targets and 25 aerial
targets. As with CIWSs, the parameters that APSs use to prioritize targets are
classified information but are very likely to be a combination of risk variables such as
time until impact and nature of the incoming projectiles.
Human control

Robotic sentry weapons
Three robotic sentry weapon systems are considered here: Israel’s Sentry Tech and South Korea’s SGR-A1 and Super aEgis II, the last of which entered development in 2010. The Super aEgis II and the Sentry Tech are currently in use; the SGR-A1 is already retired.
Israel and South Korea are the only two countries that currently produce and sell anti-
personnel sentry weapons. Both countries initiated the development of these systems
for border security purposes. Israeli armed forces used the Sentry Tech for protecting
Israel’s border along the Gaza Strip. South Korea invested in the development of the
SGR-A1 and Super aEgis II for potential deployment in the Demilitarized Zone
(DMZ)—the buffer zone at the border between North and South Korea. The Korean
War Armistice Agreement of 1953 prohibits the deployment of weapons in the zone,
so these systems have never been fielded in the DMZ. The South Korean Army has,
however, deployed the SGR-A1 on an experimental basis outside South Korea,
notably in Afghanistan and Iraq.
As they are currently employed, robotic sentry weapons might be more accurately
described as weaponized autonomous surveillance systems. Autonomy serves
primarily to guarantee that they are keeping a sharp and unblinking eye on the
perimeters under their protection.
Target detection: All three robotic sentry weapon systems commonly use a combination of digital cameras and IR cameras to detect targets within a relatively large
perimeter. The Super aEgis II, for instance, can supposedly detect and lock on to
human-sized targets at a distance of up to 2.2 km at night and 3 km in daylight.
Target identification: Robotic sentry weapons recognize targets based chiefly on heat
and motion patterns. They are therefore unable to distinguish between ‘civilian’ and
‘military’ human targets. They do, however, include some features that allow them to
detect more than simple human presence. The SGR-A1 can reportedly recognize
surrender motions (arms held high to indicate surrender), while the Super aEgis II can
sense whether a human target is carrying explosives under his or her outer clothing.
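That identification logic can be sketched as a gate that knows only whether a human is present, plus the reported extra cues. Nothing in the available signals distinguishes a civilian from a soldier, and the invented names below are meant to make that limitation explicit.

from dataclasses import dataclass

@dataclass
class Detection:
    human_heat_signature: bool   # IR camera sees a human-sized heat source
    human_motion_pattern: bool   # movement consistent with a person
    surrender_gesture: bool      # e.g. the SGR-A1's reported arms-raised recognition

def classify(d: Detection) -> str:
    # The sentry can establish presence, not combatant status.
    if not (d.human_heat_signature and d.human_motion_pattern):
        return "no human target"
    if d.surrender_gesture:
        return "human detected, surrender posture: hold fire, alert operator"
    return "human detected: alert operator"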
Target engagement: The SGR-A1, the Sentry Tech and the Super aEgis II each
feature different modes of target engagement. The SGR-A1 and the Sentry Tech
reportedly only have the possibility of alerting an operator to the presence of a human
in the surveillance zone; at that point, a human operator takes control over the system.
The operator then uses the video and audio equipment mounted on the system to
establish communication and issue a warning to a person or people that the system has
detected. Depending on the target’s reaction, the human operator might decide to fire
or not to fire the weapon.
In its original design, the Super aEgis II was intended to execute all the steps in the
process fully autonomously. It was built with a speech interface that allows it to
interrogate and warn detected targets. Prospective users of the system reportedly
expressed concern that it might make mistakes and requested the introduction of
safeguards.
DODAAM therefore revised the system to include three modes: human-in-the-loop
(the human operator must enter a password to unlock the robot’s firing ability and
give the manual input that permits the robot to shoot);
human-on-the-loop (a human operator supervises and can override the actions of the
system);
human-out-of-the-loop (the system is fully autonomous and not supervised in real-
time by a human operator). According to DODAAM, all the current users have
configured the system to human-in-the-loop mode.
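A rough model of this revised three-mode design, including the reported password unlock of the firing ability, might look like the sketch below. DODAAM’s actual interface is not public, so every detail here is an assumption made for illustration.

import hashlib

class SentryConfig:
    MODES = ("human-in-the-loop", "human-on-the-loop", "human-out-of-the-loop")

    def __init__(self, mode: str = "human-in-the-loop"):
        assert mode in self.MODES
        self.mode = mode
        self._firing_unlocked = False

    def unlock_firing(self, password: str, expected_sha256: str) -> None:
        # In in-the-loop mode the operator must authenticate before the
        # robot's firing ability is enabled at all.
        if hashlib.sha256(password.encode()).hexdigest() == expected_sha256:
            self._firing_unlocked = True

    def may_fire(self, operator_command: bool, operator_abort: bool) -> bool:
        if self.mode == "human-in-the-loop":
            return self._firing_unlocked and operator_command
        if self.mode == "human-on-the-loop":
            return not operator_abort
        return True   # out-of-the-loop: fully autonomous, unsupervised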
Human control
As they are currently employed, robotic sentry weapons hand over control to a human
command-and-control centre once targets are detected. The SGR-A1, for instance,
reportedly requires a minimum of two people to operate each robot, one operator and
one commander. The question of whether the use of a robotic sentry weapon in a fully
autonomous mode would be lawful is still a matter of contention. Some have argued
that the system’s inability to distinguish between civilian and military targets and
make proportionality assessments would make the use of fully autonomous mode
necessarily unlawful. Others have argued that the legality of the system is very much
context-based and that using the system in human-out-of-the-loop mode would not be
legally problematic as long as it is deployed in an area where (a) it is reasonable to
assume there would be no civilian presence; and (b) circumstances would make the
use of force proportionate (e.g. the DMZ).
Guided munitions
As previously mentioned, the vast majority of guided munitions use autonomy only to
find, track and hit targets or target locations that have been pre-assigned by humans.
In that sense, autonomy does not support the target selection process; it solely
supports the execution of the attack. The few guided munitions with some target
selection autonomy include the Long-Range Anti-Ship Missile (LRASM) (USA),
Dual-Mode Brimstone (UK) and the Naval Strike Missile/Joint Strike Missile
(NSM/JSM) (Norway). These are all missile systems.
In contrast to regular guided missiles, the Dual-Mode Brimstone and the NSM/JSM
are not assigned a specific target; rather, they are assigned a target area, where they
will have the task of finding targets that match a predefined target type.
Mobility: Before launch, the missiles are assigned a specific area they are allowed to
engage. The operator has to assess whether within that area there is a risk of hitting
friendly forces or civilians or civilian objects, and program the systems accordingly.
The operator also sets parameters such as altitude and minimum time of flight.
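The assigned engagement area amounts to a geofence: the munition may only engage targets whose coordinates fall inside the operator-programmed polygon. A standard point-in-polygon (ray-casting) test, sketched below, illustrates the principle; the actual onboard implementation is not public.

def inside_engagement_area(point: tuple[float, float],
                           area: list[tuple[float, float]]) -> bool:
    # Ray-casting test: count how many polygon edges a ray from the
    # candidate target crosses; an odd count means the point is inside.
    x, y = point
    inside = False
    for i in range(len(area)):
        x1, y1 = area[i]
        x2, y2 = area[(i + 1) % len(area)]
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside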
Human control
The Dual-Mode Brimstone is the only guided munition featuring target selection
autonomy that is currently operational. It works like a fire-and-forget missile. Once
launched, the missile operates in full autonomy; it does not include a human-in-the-
loop mode. However, it can be optionally guided with an external laser as well,
providing control to the operator if needed. The precise nature of the human–systems
command-and-control relationships used by the NSM/JSM and the LRASM remains
unclear.
Loitering weapons
The large majority of loitering weapons operate under remote control. Only four operational systems were identified that can find, track and attack targets in complete autonomy once launched: the Orbiter 1K ‘Kingfisher’, the Harpy, the Harop and the Harpy NG (all from Israel). As previously noted, Germany, the UK and the USA all started development on loitering weapons with a fully autonomous engagement mode.
Examples of these systems include (a) the Low Cost Autonomous Attack System
(LOCAAS) (USA); (b) the Non-Line-of-Sight Launch System (NLOS-LS) (USA); (c)
the Taifun/TARES (Germany); and (d) the Battlefield Loitering Artillery Direct
Effect (BLADE) (UK).
None of these systems went beyond the R&D phase. Besides technical and cost
issues, a key reason for the cancellation of these programmes was the controversy
around the use of autonomy for targeting. The US Air Force, for instance, was
reportedly reluctant to have a weapon system that it could not control at all times.
The Harpy, which is the oldest system, operates in complete autonomy. The Harop and the Harpy NG, which are upgrades of the Harpy, as well as the Orbiter 1K, include both a human-in-the-loop and a fully autonomous mode. However, the fully autonomous mode seems to be reserved for suppression of enemy air defences (SEAD) missions. In such circumstances, they operate very much like an anti-radiation missile.
Once launched, the loitering weapon flies to the predetermined geographical area in
which it is allowed to engage, using GPS coordinates or pre-programmed flight
routes. Upon arrival, it activates its anti-radar seeker to search for and locate potential
targets. It may have programmed rules to prioritize between targets. If it cannot find
the prioritized targets, it is supposed to move on to, and engage with, secondary
targets.
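That search-and-fallback behaviour can be sketched as a loop over a programmed priority order. The structure below is an illustrative guess at the logic just described; the real targeting rules of the Harpy family are not public, and all names are invented.

def loiter_and_select(detected_emitters: list[dict],
                      priority_types: list[str],
                      secondary_types: list[str]):
    # Work through the programmed priority order and engage the first match;
    # if no priority target is found, fall back to secondary target types.
    for wanted in priority_types + secondary_types:
        for emitter in detected_emitters:
            if emitter["type"] == wanted:
                return emitter     # hand off to the terminal attack phase
    return None                    # nothing found: keep loitering

# Example: no fire-control radar is found, so a search radar is selected.
target = loiter_and_select(
    [{"type": "search-radar", "position": (40.0, 47.0)}],
    priority_types=["fire-control-radar"],
    secondary_types=["search-radar"],
)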
The human-in-the-loop mode seems to be preferred for operations against high-value
targets such as armoured vehicles. In such cases, loitering weapons use optical and IR
sensors to search for, locate and monitor predefined target types. A human operator
supervises the system and retains the ability to abort the attack up until a few seconds
before impact.
The Harop was recently used in armed conflict. Azerbaijan’s armed forces used the
system in Nagorno–Karabakh in April 2016 to hit six Armenian military targets,
including a bus full of volunteers, artillery systems, air defence systems and a military
runway. Azerbaijan’s armed forces reportedly used the human-in-the-loop mode.
Assessing the lawfulness of such use would depend on the circumstances, but it would also have to take into consideration variations in models of command-and-control for swarm operations.
We know that lethal autonomous weapons can locate and attack a person without any human intervention, and their targets may include innocent civilians. They can be mass-produced and made so small that they could kill without being detected. Beyond ethical issues, replacing human decision-making with algorithms is highly destabilizing. Cheap mass production and/or copying of the algorithms behind lethal autonomous weapons would fuel proliferation to both state and non-state actors, who could simply copy the code and develop their own autonomous weapon systems (AWS).
Autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms, but the era of relying completely on AI (Artificial Intelligence) has not yet arrived. Thus, this immaturity means that completely autonomous weapon systems must be replaced with semi-autonomous weapon systems in which a human gives the command to shoot or attack.
Some argue that because robots can react to a situation far more quickly than humans can, human-on-the-loop oversight is pointless: the operator would be unable to cancel an attack deemed disproportionate or indiscriminate before the robot executes it.
Finally, regardless of the amount of human control within a weapon system, there is a
question of where to place blame for law violations if or when they occur.
Accountability of Autonomous Weapon System
Traditionally, where humans would take the decision to use force, this could lead to
prosecutions, disciplinary action or the need to pay compensation. The question arises,
however, of what happens where humans do not exercise meaningful human control
over the use of force during armed conflict or law enforcement, but delegate it to
computers.
The underlying assumption of this question is that autonomous weapon systems are
not illegal weapons – that they may be used under certain circumstances. There is of
course a view according to which they are illegal weapons under existing law and/or
should be declared as such by new law. This is based on arguments, for example, that
their use cannot meet the requirements of IHL that protect the lives of civilians (such
as distinction and proportionality) in the case of armed conflict, or that they cannot
meet the requirements of IHL that protect those against whom force may be used in
the context of law enforcement.
It has also been argued that delegating decisions over life and death to machines is inherently wrong, whether this is done in conformity with the formal
requirements of IHL or IHRL, or not. What is at stake here is not just the protection of
the lives of those mentioned, but also the human dignity of anyone at the receiving
end of the use of such autonomous force (including combatants and suspected
perpetrators who may otherwise lawfully be targeted).
On this view, to use such weapons under any circumstances would be illegal, and any use should lead to accountability; these weapons should also be formally banned because they violate the public conscience. However, even assuming they are not illegal weapons and may be used under certain circumstances, things can still go wrong: there may be a malfunction; the machines may learn things they were not supposed to learn; there could be other unexpected results.
Normally humans are held accountable based on the control they exercised in making
decisions, but humans are by definition out of the loop where machines
are used that take autonomous, and in many cases unpredictable, decisions. It clearly
makes no sense to punish a machine for its autonomous decisions. The question arises
of whether there will be an accountability vacuum. This will not be acceptable
because it will mean that the underlying values – the protection of humanitarian values and the rights to life and dignity – are in effect left without protection.
The fundamental ethical question is whether the principles of humanity and the
dictates of the public conscience can allow human decision-making on the use of
force to be effectively substituted with computer-controlled processes, and life-and-
death decisions to be ceded to machines.
It is clear that ethical decisions by States, and by society at large, have preceded and
motivated the development of new international legal constraints in warfare, including
constraints on weapons that cause unacceptable harm. In international humanitarian
law, notions of humanity and public conscience are drawn from the Martens Clause.
As a potential marker of the public conscience, opinion polls to date suggest a general
opposition to autonomous weapon systems—with autonomy eliciting a stronger
response than remote-controlled systems.
Another concern in the debate on ethics and humanity is that while unmanned
weapons open the possibility to attack an enemy who cannot fight back, the enemy
will often compensate for their inability to attack appropriate targets by attacking
innocent people. Additionally, the possibility of terrorist organizations obtaining the
technology poses a threat to international peace and security, thus highlighting the
humanitarian aspect of LAWS. Because legislation most often develops in response to
new technology, it is important to create an ethical structure on which to base the
legal framework now, while the use of unmanned robots is still nascent and their
implications are uncertain. The International Covenant on Civil and Political Rights
states, “Every human being has the inherent right to life. This right shall be protected
by law. No one shall be arbitrarily deprived of his life.” Allowing robots to make the
decision to kill makes those deaths arbitrary because robots lack the capacity to judge
and interpret their targets the way humans can interpret and review subjects in
consideration of existing laws.
The report by the UN Panel of Experts on Libya indicates that a Kargu-2 kamikaze
drone manufactured by Turkey’s state-owned company STM was likely used in
March 2020 in clashes between the forces of the Turkish-backed Government of
National Accord and the Libyan National Army of eastern warlord Khalifa Hifter
following the latter’s besiegement of Tripoli. STM describes the Kargu-2 as a loitering rotary-wing attack drone with real-time image processing capabilities and embedded machine learning algorithms, which is also equipped with swarming capabilities that allow up to 20 drones to work together. Along with the Kargu-2, the Alpagu fixed-wing
loitering munition system and the Togan autonomous multi-rotor reconnaissance
drone — both also developed by STM — stand out as examples of advanced
autonomous capabilities in the Turkish defense industry. According to the company,
all three unmanned aerial vehicles use computer imaging for targeting and are
programmed with machine learning algorithms to optimize target classification,
tracking and attack capabilities without the need for a GPS connection.
While such technologies sound like a revolutionary step in warfare, a global
debate has been simmering since the early 2000s on whether lethal autonomous
weapon systems should be regulated or banned, given ethical concerns over their
ability to select and hit targets without human intervention. The release of the UN
report on Libya has rekindled the debate, which had been largely hypothetical thus far.
Bloc Positions
A report published in April 2019 by PAX, an international peace organization,
detailed that the US, China, the Russian Federation, UK, Israel, South Korea, and
France have the most complex autonomous weapon technology to date.
Correspondingly, according to the Campaign to Stop Killer Robots, 28 member states are in favour of banning LAWS entirely. Some of these countries include Algeria,
Argentina, Austria, Bolivia, Brazil, Chile, Colombia, Costa Rica, Cuba, Djibouti,
Ecuador, Egypt, El Salvador, Ghana, Guatemala, Iraq, Mexico, Morocco, Nicaragua,
Pakistan, Peru, Uganda, Venezuela, and Zimbabwe. However, 12 countries have explicitly opposed negotiating an additional treaty on LAWS. These include
Australia, Belgium, France, Germany, Israel, The Republic of Korea, Russia, Spain,
Sweden, Turkey, The United States of America, and the United Kingdom.
In other words, although China is currently not interested in using LAWS in combat, it wants to continue developing them. In 2018, Ding Xiangrong, the Deputy Director of the General Office of the Chinese Central Military Commission, also explained China’s objective of engaging in the “ongoing military revolution… centred on information technology and intelligent technology”, alluding to an intention to use LAWS in the future.
African Union - Algeria, Egypt, Ghana, Uganda, Zimbabwe, and South Africa
were among the first countries to advocate for a ban on LAWS. Moreover, on April 9,
2018, member states of the African Group called for a legally binding framework on
LAWS, stating that “fully autonomous weapons systems or LAWS that are not under
human control should be banned.” These countries include Algeria, Angola, the
Central African Republic, Republic of the Congo, the Democratic Republic of the
Congo, Egypt, Kenya, Liberia, Libya, Mauritania, Mauritius, Morocco, Niger, Nigeria,
Rwanda, Senegal, Somalia, South Africa, South Sudan, Sudan and Zimbabwe, among
others.
In reality, African states are just as threatened by LAWS in the context of national
security as other global powers. The looming presence of ethnic violence, civil wars,
terrorism, and various insurgencies make LAWS an even bigger threat in countries
like Nigeria, Somalia, and South Sudan, among others. Given that several non-state
actors have already gained access to autonomous weapons without the presence of
any legal and regulatory frameworks, it is evident that further action is promptly
needed. Not only is their possession of such dangerous weapons a symbolic threat, but
also a threat to international safety as they cannot be easily held accountable for
atrocities committed in warfare— especially given ceaseless turmoil in the areas they
operate.
QARMA (Questions a Resolution Must Answer)
4. Where do LAWS pose challenges in terms of compliance with IHL (distinction, proportionality, and precautions in attack)?
7. What ethical questions arise from the development and deployment of LAWS?
Bibliography
https://research.un.org/en/docs/ga/quick/regular/61
https://undocs.org/en/A/RES/61/55&Lang=E
https://www.armscontrol.org/act/2019-03/features/autonomous-weapons-systems-laws-war
https://autonomousweapons.org/
http://www.lyonmun.com/wp-content/uploads/2018/05/UNSC.Def_.pdf
https://news.un.org/en/story/2019/03/1035381
https://static1.squarespace.com/static/57b632432994cab0b44562ae/t/5e33e57c95862d328a707b16/1580459396212/DISEC%2BA+Backgrounder.pdf
https://international-review.icrc.org/articles/stepping-back-from-brink-regulation-of-autonomous-weapons-systems-913
https://link.springer.com/article/10.1007/s43154-020-00024-3
https://static1.squarespace.com/static/51e9d93ee4b0c80288b6ff52/t/5c0836566d2a7321cc0e7dce/1544042076751/GA+1+CR+1+%28HIAMUN+%2719%29.pdf
https://www.nmun.org/assets/documents/conference-archives/new-york/2015/NY15_BGG_GA1.pdf
https://www.law.upenn.edu/institutes/cerl/conferences/ethicsofweapons/