Mastering automation is a foundational skill for safely
operating any modern, state-of-the-art aircraft. Under normal
circumstances, automation reduces workload, creates
efficiencies, and is dependable to a fault. On rare
occasions, however, these highly reliable systems present pilots with an
unexpected, obscure, or highly complex scenario that, if
mismanaged, may quickly deteriorate and jeopardize the safety
of flight.
The challenge for professional pilots is not only to understand
their own human cognitive limitations but also to fully comprehend
their aircraft’s automated systems and how the two interact. From a
practical operational standpoint, those same pilots must
maintain proficiency, become experts at monitoring, combat
complacency, adhere to standard operating procedures (SOP),
and stay mentally ahead of these aircraft during all modes
of operation.
Safety reports are littered with automation interface issues that
pit the pilot against the aircraft. Often these events begin with
an automation error that, when not trapped, may evolve into
additional errors or an undesired aircraft state (UAS).
Automation errors have led to numerous accidents, categorized
as loss of control in-flight (LOC-I), controlled flight into
terrain (CFIT), approach and landing (ALA), or runway
excursion (RE) accidents. The list of common automation
errors contributing to these accidents is long and varied and
may include data entry errors, mode confusion, loss of mode
awareness, unexpected mode reversion, and inappropriate use
of automation.
Mismanaged automation errors are often compounded by
additional errors from the pilot monitoring (PM), such as missed
cross-checking or verification steps. Poor pilot monitoring skills across
the industry have been recognized as a threat to aviation safety.
When these errors are not trapped, flightpath and/or energy
management may be compromised. Below is an example of an
early accident that combined an automation management error
with poor active pilot monitoring. Considered a
“watershed” moment, this accident put the industry “on notice”
that there was a problem managing highly automated aircraft.
Nearly 26 years ago, a simple flight management system (FMS)
entry error led to the loss of a Boeing 757 near Cali, Colombia.
During this event, on an arrival from the north, ATC offered a
straight-in VOR/DME Runway 19 approach, which the crew
accepted. The actual clearance was to fly the ROZO 1 arrival
for the VOR/DME Runway 19. The captain subsequently
requested clearance from ATC to proceed direct to the ROZO NDB, which
was identified as “R” on the approach chart. ATC denied this request and
reiterated the original clearance for the ROZO 1 arrival, with
additional instructions to report their position at 21 DME from
the airport. The crew misunderstood these instructions. Still in a
descent, the pilots selected and executed “R” in the FMS; the entry,
however, retrieved a different beacon with the same identifier near
Bogotá, and the aircraft turned approximately 90 degrees to the left,
departing the desired lateral course. As the aircraft passed through 9,000 feet,
the GPWS “terrain” warning activated and, despite the crew’s
efforts to recover from the event, the aircraft impacted a
mountainside, killing 159 people and severely injuring another
four.
Verifying the FMS selection and cross-checking the aircraft’s
navigation display (it would have indicated a 90-degree turn)
may have helped prevent this accident. Several threats were
associated with this accident, including language barriers,
mountainous terrain, a late-night flight, a poorly coded (“R”)
waypoint, and a similarly named procedure; together they created
considerable confusion. The industry has learned a great deal from
this accident, but similar automation-related accidents continue to
occur, so there is still a long way to go.
The Unevolved Brain
The problem with automation management lies with the human,
the machine, and how the two interface with each other.
According to IATA’s study on FMS Data Entry and Error
Prevention, the human brain has changed little in hundreds, if
not thousands, of years. Remarkably, even in this “unevolved”
state, the brain has assimilated well into the
complex world of aviation by adapting to new environments and
accumulating countless new skills.
In simple terms, the brain has two channels. One channel
involves conscious thought; this is the cognitive channel where
things like problem-solving and decision-making take place. The
other, the subconscious channel, is trained through repetition
and handles complex movement sequences such as tying a shoe,
ballroom dancing, or flying an airplane.
According to the IATA study, “The trouble is that each of these
channels are vulnerable.” Channel One—the cognitive one—
has limited capabilities and is prone to overload in times of
stress. Furthermore, this cognitive channel is easily misled by
confusing or contradictory inputs and does a poor job of
recognizing its own errors.
So, when a NASA Aviation Safety Reporting System submission
reads: “Needless to say, confusion was in abundance. There
are just too many different functions that control airspeed and
descent rates, all of which can control the altitude capture” or
“We missed the crossing altitude by 1,000 feet,” those pilots
were possibly overloaded, stressed, or having trouble keeping
up due to cognitive limitations.
Concerns about over-reliance on automation and an erosion
of manual flying skills relate to Channel Two, the subconscious
channel. These skills fade with a lack of practice and can be further
compromised by unfamiliar circumstances or when called upon at an
inappropriate time.
As aircraft have become more sophisticated and automated, the role
of the pilot has shifted from flying to mostly monitoring and
observing. Pilots (as humans) are poor monitors because they
are vulnerable to fatigue, distractions, boredom, complacency,
illness, and stress—all things that negatively impact
concentration.
Failing to monitor an aircraft’s flightpath and energy state is
problematic and has been a causal factor in several accidents.
Monitoring airspeed is a fundamental skill acquired early in
a pilot’s training, yet monitoring-related accidents continue to
occur at an alarming rate.
In 2005, a Cessna Citation 560 crashed while on approach to
Pueblo, Colorado. Two pilots and six passengers were killed.
Approaching from the east, the crew initially planned to overfly
the airport and land on Runway 8L. Upon checking in with
Pueblo Approach Control, the crew was advised that they would
land on Runway 26R. The aircraft was on autopilot during the
descent and arrival into Pueblo. According to the cockpit voice
recorder transcript, the flight crew noted the change in runway
assignment and immediately tuned the navigational radios and
inbound course for Runway 26. According to the NTSB,
however, there was an approximate 5-minute delay in
conducting the approach briefing. Minutes later the crew began
to intercept the localizer and glideslope and to slow and
configure the aircraft for landing. During this time, the pilots
continued to brief the approach. Moments later the first officer
recognized the need to “run the deice boots” and indicated that
the aircraft had slowed to Vref. The aircraft continued to slow,
resulting in an aerodynamic stall from which the crew failed to
recover.
Four years later, in February 2009, Colgan Air Flight 3407, a
Bombardier Dash 8-Q400, crashed outside Buffalo, New York,
killing 49 people aboard and one person on the ground.
During this flight, while on approach to Runway 23 at KBUF, the
crew failed to recognize a loss of 50 knots of airspeed over a
period of 22 seconds. The result was an aerodynamic stall that
led to a fatal LOC-I accident.
These two accidents demonstrate how distractions, fatigue,
stress, and potentially complacency (all human vulnerabilities)
can affect concentration and the ability to monitor the energy
state of an aircraft.
Human-machine Interface
Beyond the pilot, some culpability likely lies with the design of the
aircraft, or machine, and how it interfaces with the pilot, or
human. The concept of a machine is broadly defined as a
device that people interface with, such as a mobile phone, a
laptop, or, in this case, an aircraft.
A productive discussion on human-machine interface must
begin with two questions. (1) How do we communicate with the
machine? and (2) How does the machine communicate with
us? This two-way communication in an aircraft is accomplished
using controls, displays, audio cues, etc. The design of these
items must consider ergonomics (physical aspects) and must
align with the user’s mental model (usage architecture, logic,
and intuitiveness) to be effective. The basis for this interface
between machine and human is no different from any other form
of communication: it’s a two-way conversation.
Third-generation air transport aircraft (the first automated
aircraft with FMS and glass cockpits) introduced into service
during the 1980s incorporated crude FMS and flight mode
annunciator (FMA) displays. Early FMS incorporated
monochromatic displays with an alpha-numeric interface, while
FMA displays used symbology and (often truncated)
nomenclature that included multiple non-intuitive sub-modes.
This understandably created several challenges for pilots.
Fortunately, each new generation of aircraft has shown
improvement, although, as demonstrated by the Boeing 737
Max saga, there is still significant room for improvement when
it comes to automation and the many subsystems associated
with complex aircraft. As described below, long before the Max,
there was a scenario on 737NGs in which a single radar altimeter
(RA) failure could force the autothrottle system into a “retard”
mode (logic indicating the aircraft was in the landing flare) while
still on approach.
In February 2009, a Turkish Airlines Boeing 737-800 stalled
while on approach and crashed short of Runway 18R at Schiphol
Airport in Amsterdam. During this event, a single RA failure
caused the autothrottle system to enter the “retard” mode,
which the flight crew failed to recognize. As a result, during the
approach, the thrust went to idle, allowing the airspeed to
decrease to 83 knots (40 knots below Vref). The stickshaker
activated at 495 feet and the captain attempted to recover with
full power. Without enough altitude or airspeed to recover, the
aircraft struck the ground tail first (at 95 knots) and broke into
three pieces. Six passengers and three flight crew were killed.
At the time of the accident, Turkish Airlines SOPs required
both autopilots (a two-channel system) to be engaged during an
approach. This action would add redundancy to many critical
autoflight-related systems, including the autothrottle. During this
flight, however, an inexperienced first officer (the pilot flying)
failed to engage the second autopilot system. System logic
reverted to a single autopilot channel (left only) and relied on
the captain’s, or left, RA for autothrottle input. When the left RA
failed, its indication dropped from a valid reading to -8 feet,
causing the autothrottle system to enter the “retard,” or landing,
mode.
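To make that failure chain concrete, below is a minimal sketch, in Python, of how mode logic driven by a single radar altimeter can command a flare-power mode while still at altitude. The function, the 27-foot threshold, and the two-channel cross-check are illustrative assumptions for this example only; they are not Boeing’s actual autothrottle logic.

```python
# Illustrative sketch only -- NOT Boeing's actual autothrottle logic.
# Shows how mode logic fed by a single (left) radar altimeter can command
# a "retard" (flare) mode when that sensor fails to a low value.

from dataclasses import dataclass

@dataclass
class RadarAltimeter:
    altitude_ft: float   # indicated radio altitude
    valid: bool = True   # validity flag (not consulted by the simplified logic below)

def autothrottle_mode(left_ra, right_ra, dual_channel, flare_threshold_ft=27.0):
    """Hypothetical mode selection for this example.
    With two channels engaged, a gross disagreement between the altimeters
    is trapped; with a single channel, only the left sensor is consulted."""
    if dual_channel and abs(left_ra.altitude_ft - right_ra.altitude_ft) > 100:
        return "SPEED"    # disagreement detected -- remain in speed mode
    if left_ra.altitude_ft <= flare_threshold_ft:
        return "RETARD"   # logic concludes the aircraft is in the landing flare
    return "SPEED"

# Failure scenario similar to the Schiphol event: the left RA drops to -8 feet
left = RadarAltimeter(altitude_ft=-8.0, valid=False)
right = RadarAltimeter(altitude_ft=1950.0)
print(autothrottle_mode(left, right, dual_channel=False))  # -> RETARD (inappropriate)
print(autothrottle_mode(left, right, dual_channel=True))   # -> SPEED (mismatch trapped)
```

The point of the sketch is the architecture, not the numbers: a single-sensor dependency leaves no independent value to vote against the failed one, which is why the SOP calling for both autopilot channels mattered.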
Mismanaged automation errors and poor flightpath and energy
state monitoring sit at the crossroads of many accident types
(CFIT, LOC-I, ALA, and RE). Operators must reinforce their
automation philosophies and protocols through training and SOPs.
According to the Flight Safety Foundation (FSF) Approach-and-
Landing Accident Reduction Toolkit (Briefing Note 1.2 –
Automation), the safe and efficient use of the autoflight system
(AFS) and FMS is based on the following three-step method.
Anticipate: Understand system operation and the results of any
action, be aware of modes being armed or selected, and seek
concurrence with the other flight crewmembers.
Execute: Perform the action on the AFS control panel or on the
FMS control display unit (CDU).
Confirm: Cross-check armed modes, selected modes, and
target entries on the FMA, primary flight display and navigation
display, and FMS CDU.
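The “Confirm” step is, at its core, a comparison between what the crew intended to select and what the aircraft is annunciating. The short Python sketch below expresses that cross-check in code form; the field names, mode strings, and the confirm_selection function are invented for illustration and are not tied to any particular FMS or FMA.

```python
# A minimal sketch of the "Confirm" cross-check: compare the modes and targets
# the crew intended to select against what the FMA actually annunciates.
# Field names and mode strings are illustrative only.

def confirm_selection(intended: dict, annunciated: dict) -> list:
    """Return a list of mismatches between intended and annunciated state."""
    mismatches = []
    for item, expected in intended.items():
        actual = annunciated.get(item)
        if actual != expected:
            mismatches.append(f"{item}: selected {expected!r}, FMA shows {actual!r}")
    return mismatches

# Example: the crew expects VNAV PATH, but the FMA shows a reversion to V/S.
intended = {"lateral": "LNAV", "vertical": "VNAV PATH", "target_alt_ft": 10000}
annunciated = {"lateral": "LNAV", "vertical": "V/S", "target_alt_ft": 10000}
for problem in confirm_selection(intended, annunciated):
    print("CHECK:", problem)
```

In the flight deck this comparison is, of course, performed by the pilots’ eyes and callouts rather than by software; the sketch simply shows why an unexpected mode reversion only becomes a threat when the comparison is skipped.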
Likewise, pilots must engage in an active monitoring role to
identify and correct flightpath or energy state deviations. In
addition to the ALAR Toolkit, the FSF has published “A
Practical Guide for Improving Flight Path Monitoring.” This
document is the go-to resource for best practices related to flightpath
monitoring; it’s a must-read.
For pilots, the guide outlines accepted practices that promote
effective monitoring and clearly defines the role of each pilot
during various flight phases. Likewise, there are discussions on
workload/task management and how best to manage
distractions and interruptions. For the operator, the guide
provides an outline to create effective SOPs and enhance
training profiles to promote better pilot monitoring skills.