Cognitive Readiness in Automated Flight Decks: Bridging Training and Human-System Integration

Giacomo Belloni and Fabrizio Interlandi

The operational environment of commercial aviation represents a paradigmatic example of a highly integrated socio-technical system, where stringent demands on safety and efficiency preclude even minimal tolerance for human or systemic error (Belloni, 2020). As a domain marked by extreme complexity and interdependence—often conceptualised as hypercomplex (Dalla Quercia, 2017)—it necessitates the seamless coordination of human cognition, automated technologies, and institutional protocols (Salas et al., 2010; Adriaensen et al., 2019). Within this framework, cognitive psychology highlights the crucial role of mental adaptability and real-time information processing, as flight crews must continually adjust to shifting conditions, ambiguous stimuli, and dynamic task demands (Adriaensen et al., 2019).

While beneficial to overall system performance, advances in automation and digital avionics have significantly changed the cognitive landscape for pilots, reshaping mental workload and altering attentional strategies (Lim et al., 2018). Contrary to common assumptions, however, automation has not produced a net reduction in cognitive workload. Although it has relieved pilots of specific manual and navigational duties, the "freed" cognitive capacity is quickly absorbed by emergent demands associated with system supervision, procedural management, and high-density airspace operations. Modern flight environments are characterised by increased aircraft complexity, constantly evolving procedures, congested airspace, and a growing volume of air traffic communications. These factors impose a continuous cognitive burden, requiring pilots to integrate a wide range of information sources and respond in real time to evolving conditions. Pilots must still operate in uncertain environments and interact with increasingly complex flight deck interfaces (Causse et al., 2013; Çakır et al., 2016). Modern flight decks, saturated with complex and often fragmented data streams, require operators to exert substantial cognitive effort in interpreting, integrating, and acting upon vast quantities of information. This intensification of mental load has been shown to elevate stress and compromise the efficiency of executive functions, including situational awareness, decision-making, and response selection (Vogl et al., 2022). Under conditions of elevated arousal or time-critical decision-making, cognitive overload can impair performance reliability and lead to operational failures. From a cognitive-psychological standpoint, this underscores the imperative of designing cockpit systems that align with human cognitive architecture and mitigate overload through adaptive interface design and workload regulation strategies.
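
As a purely illustrative, hypothetical sketch (not drawn from any cited system or the authors' work), the snippet below shows one way a "workload regulation" rule could be expressed in software: when an estimated workload index crosses a threshold, only safety-critical alerts are surfaced and the rest are deferred. The threshold value, the Alert structure, and the workload index itself are assumptions for illustration only.

```python
from dataclasses import dataclass

# Hypothetical threshold for illustration only; in practice a workload index
# would come from validated task-based or physiological measures.
HIGH_WORKLOAD = 0.7

@dataclass
class Alert:
    text: str
    safety_critical: bool

def regulate_alerts(alerts: list[Alert], workload_index: float) -> list[Alert]:
    """Pass all alerts under normal workload; under high estimated workload,
    surface only safety-critical alerts and defer the rest."""
    if workload_index < HIGH_WORKLOAD:
        return alerts
    return [a for a in alerts if a.safety_critical]

if __name__ == "__main__":
    pending = [
        Alert("TERRAIN AHEAD, PULL UP", safety_critical=True),
        Alert("Company message received", safety_critical=False),
    ]
    # With a high estimated workload, only the terrain alert is shown.
    for alert in regulate_alerts(pending, workload_index=0.85):
        print(alert.text)
```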

The conceptual framework articulated by Bainbridge in her seminal work "The Ironies of Automation" (1983) has already been critically examined in our prior article, "Advanced Automation Paradoxes and Commercial Aviation". In our analysis, we highlighted Bainbridge's incisive critique of the unintended consequences of automation within complex socio-technical systems. Rather than mitigating human-related vulnerabilities, she argues, automation often amplifies them, particularly by displacing the operator into a passive supervisory role that can impair situational awareness and responsiveness. Central to Bainbridge's thesis is the assertion that many operational issues stem not from human errors per se, but from flawed assumptions embedded in system design—what she refers to as "designer errors" (Bainbridge, 1983, p. 775). She further contends that system designers frequently conceptualise the human operator as a weak link—unreliable, slow, or cognitively limited—and thus advocate for minimising or excluding human involvement. This stance paradoxically undermines system resilience (Bainbridge, 1983).

We further contended that, in a future increasingly shaped by advanced and intelligent systems, human involvement and contribution will remain essential, especially when dealing with unexpected, complex, unprecedented or non-linear situations. Although advanced technologies are effective at recognising patterns and making predictions based on past data, they are far less reliable when facing events that fall outside prior experience (Belloni, 2024). In such cases, human operators provide the flexibility and judgment that machines currently lack. For this reason, the design of advanced cockpits must start from a careful understanding of how humans and machines interact, ensuring that systems support, rather than replace, human decision-making in complex environments.

The transformation of the airline pilot's role over the past several decades has been profound and complex. In the past, pilots were responsible for manually managing every aspect of aircraft operation, from takeoff to landing, requiring continuous attention, physical coordination, and real-time decision-making. This traditional hands-on role has gradually evolved with the progressive integration of automation into the flight deck. Today, the pilot is less a direct manipulator of flight controls and more an all-round systems manager, tasked with supervising automated processes and intervening when necessary (Belloni, 2020).

This progressive transition has brought significant operational benefits, including workload reduction, improved fuel efficiency, and enhanced precision in navigation. However, it has also introduced new challenges that, we believe, deserve critical attention. Bainbridge's assertion that human operators risk becoming "passive monitors" in highly automated systems is particularly pertinent. When automation functions reliably, pilots may become psychologically detached from the task environment, a phenomenon often called "out-of-the-loop" performance. This reduced engagement can impair situational awareness and hinder the pilot's ability to swiftly assume manual control in the event of an unexpected failure or a rapidly evolving emergency.

Indeed, the expectation that a pilot can instantly and flawlessly resume control after extended periods of passive monitoring underestimates the cognitive demands of such transitions. This misalignment between the designed role of the human operator and the realities of human cognitive performance in high-stakes, time-critical situations raises pressing concerns about the resilience of automated systems. As automation continues to evolve, and given our argument that human involvement and contribution will remain essential, it is necessary to re-examine not only how pilots interact with automated technologies, but also how the design of these systems supports—or undermines—their capacity to maintain situational awareness, exercise judgment, and contribute meaningfully to flight safety.

Recent statistics underscore a concerning rise in aviation accidents linked to human error, frequently associated with excessive reliance on automated systems or an insufficient understanding of their behaviour. This pattern is evidenced by findings from official accident investigations into several high-profile events, including the Turkish Airlines Boeing 737 crash on approach to Schiphol Airport in 2009, the loss of the Air France Airbus A330 over the Atlantic Ocean after departing Rio de Janeiro in 2009, and the Asiana Airlines Boeing 777 crash in San Francisco in 2013. These cases compel a critical reassessment of whether automation has effectively achieved its primary objectives—namely, the mitigation of human error and the enhancement of operational safety—or whether, in attempting to resolve earlier limitations, it has inadvertently introduced new and complex challenges (Sengupta et al., 2016).

Bainbridge's (1983) notion of the "paradox of automation" is exemplified in these accidents, where automated systems malfunctioned or were inadvertently disengaged without issuing clear warnings to the flight crew. In these events, pilots—accustomed to automation managing specific flight phases—often failed to detect the system's degraded state and, due to limited situational awareness or inadequate system understanding, could not intervene effectively, resulting in catastrophic outcomes (Woods & Hollnagel, 2006).

The crash of Air France Flight 447 is particularly illustrative. Although the crew received multiple cockpit alerts, none conveyed that the displayed data or automation behaviour could not be trusted. The absence of explicit cues regarding the automation's reliability may have contributed to confusion, cognitive overload, and a misdiagnosis of the situation, ultimately impeding the crew's capacity to apply appropriate corrective measures (BEA, 2012; Dekker, 2011).

These cases highlight a critical weakness in current human-automation interaction: the failure of systems to facilitate a smooth and intelligible handover when human intervention becomes necessary. Effective reversion from automated to manual control requires the timely provision of essential, prioritised information—those elements needed for flying, navigating, and communicating (Endsley, 2015). Without such structured support, pilots may be overwhelmed at the precise moment when clarity and decisiveness are most needed.
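
As a minimal, hypothetical sketch of this idea (not an implementation from Endsley or any avionics standard), the snippet below orders handover cues by the classic "aviate, navigate, communicate" hierarchy and surfaces only the top few, so that the pilot receiving control is not flooded at the moment of reversion. The class, function, and message contents are invented purely for illustration.

```python
from dataclasses import dataclass, field
from enum import IntEnum
import heapq

class Priority(IntEnum):
    # "Aviate, navigate, communicate" ordering: lower value = shown first.
    AVIATE = 1       # flight-critical state: attitude, airspeed, thrust
    NAVIGATE = 2     # position, track, terrain/traffic constraints
    COMMUNICATE = 3  # ATC coordination, crew/company messages

@dataclass(order=True)
class HandoverCue:
    priority: Priority
    message: str = field(compare=False)

def handover_briefing(cues: list[HandoverCue], limit: int = 3) -> list[str]:
    """Return at most `limit` cues, highest priority first, to avoid
    overwhelming the pilot when automation hands back control."""
    return [cue.message for cue in heapq.nsmallest(limit, cues)]

if __name__ == "__main__":
    cues = [
        HandoverCue(Priority.COMMUNICATE, "Advise ATC: unable RNAV arrival"),
        HandoverCue(Priority.AVIATE, "Autothrust off: pitch 5 deg, thrust 85% N1"),
        HandoverCue(Priority.NAVIGATE, "Track 270, terrain 8 nm ahead"),
    ]
    for line in handover_briefing(cues):
        print(line)
```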

A clear and accurate mental representation of automated systems is essential for pilots to maintain situational awareness, particularly during unexpected system behaviours or failures. When pilots' mental models do not align with the actual functioning of automation, the risk of error increases significantly. This misalignment has been identified as a contributing factor in several aviation incidents. For instance, in the case of Colgan Air Flight 3407 near Buffalo, New York, the pilots' insufficient understanding of the automation's behaviour in the lead-up to the stall contributed to the loss of control. Similar issues occurred in the 2018 and 2019 Boeing 737 MAX accidents, where pilots' misunderstanding of the Manoeuvring Characteristics Augmentation System (MCAS) may have impeded their ability to respond appropriately. Research has also shown that, in aircraft such as the Boeing 737 and Airbus A320, misconceptions about flight deck automation have led to operational errors, underscoring the importance of robust mental models for safe and effective automation management (Wickens et al., 2022).

Beyond system design, another critical aspect concerns the training of operators. These incidents highlight the importance of comprehensive training for pilots, especially in understanding and managing automated systems and their degradation.

Over the past decade, pilot training has made significant strides with the introduction of Competency-based Training and Assessment (CBTA). From a training approach focused primarily on aircraft handling, there has been a gradual shift toward preparing pilots to act as system and operations managers. To support this shift, specific competencies have been introduced, covering both the technical and non-technical aspects of flight management.

However, the events mentioned above may reveal a weakness in these competencies and in how they are trained. The crews involved did not fully understand how to manage the handover from automation to manual control, which raises important questions about whether the current competencies are sufficient or whether they need to evolve further.

Aviation accidents involving automated systems' unexpected failure or disengagement—often without adequate warnings—vividly illustrate the "ironies of automation". These events expose a critical paradox: as automation becomes more sophisticated, the human operator's role in routine operations diminishes, yet their intervention becomes increasingly vital in non-routine or crisis situations.

This contradiction stems from the fact that automation is typically least reliable in those rare, high-stakes moments where complexity peaks and uncertainty prevails. When systems behave unpredictably or exceed their design limits, pilots—often relegated to supervisory roles—are abruptly required to regain control. However, such interventions frequently occur under cognitively demanding conditions, when situational understanding may be impaired due to limited engagement with the system.

The resulting challenges include delayed or inappropriate responses, driven by the need to rapidly interpret system states and make time-critical decisions with incomplete information. In this context, the very complexity that automation is intended to manage can become a liability.

Addressing this paradox requires designing systems and training frameworks that explicitly support human operators both in routine oversight and in their capacity to intervene effectively when automation falters. Interfaces must prioritise the clear communication of critical status information, and training programmes should cultivate deep system understanding and adaptability.

Bainbridge's early warnings remain acutely relevant: the more capable automated systems become, the greater the need to ensure that humans are equipped to function as integral agents within both normal and abnormal operations. Rather than rendering the operator obsolete, advanced automation increases the importance of human contribution in maintaining safety and resilience—especially in those decisive moments when human judgment becomes the last line of defence.

 

References

Adriaensen, A., Patriarca, R., Smoker, A., Bergström, J. (2019). A socio-technical analysis of functional properties in a joint cognitive system: a case study in an aircraft cockpit. Ergonomics, 62(12).

Bainbridge, L. (1983). Ironies of automation. Automatica, 19(6), 775–779.

BEA (2012). Final report: Air France flight AF 447, 1st June 2009. Bureau d’Enquêtes et d’Analyses.

Belloni, G. (2020). Beyond Evidence-based Training. Analysis of the reasons that led to the development of an innovative training methodology suited to the aeronautical world's ever-increasing complexity. City of London University.

Belloni, G. (2024). The limits of data-driven technologies in the world of complexity. HUMAI Research.

Çakır, M. P., Vural, M., Koç, S. Ö., & Toktaş, A. (2016). Real‑time monitoring of cognitive workload of airline pilots in a flight simulator with fNIR optical brain imaging technology. In D. D. Schmorrow & C. M. Fidopiastis (Eds.), Foundations of augmented cognition: Neuroergonomics and operational neuroscience – Part I (Lecture Notes in Computer Science, Vol. 9743, pp. 147–158). Springer. DOI: 10.1007/978-3-319-39955-3_14

Causse, M., Dehais, F., Péran, P., Sabatini, U., and Pastor, J. (2013). The effects of emotion on pilot decision-making: a neuroergonomic approach to aviation safety. Transport. Res. Part C Emerg. Technol. 33, 272–281. DOI: 10.1016/j.trc.2012.04.005

Dalla Quercia, C. (2017). Complexity in the aviation sector. Bocconi Students for Digital Consulting (BSDC).

Dekker, S. (2011). Drift into failure: From hunting broken components to understanding complex systems. Ashgate.

Endsley, M. R. (2015). Designing for Situation Awareness: An Approach to User-Centered Design. CRC Press.

Lim, Y., Gardi, A., Sabatini, R., Ramasamy, S., Kistan, T., Ezer, N., Vince, J., & Bolia, R. (2018). Avionics human-machine interfaces and interactions for manned and unmanned aircraft. Progress in Aerospace Sciences, 102, 1–46.

Salas, E., Wilson, K. A., Burke, C. S., Wightman, D. C. (2010). Does Crew Resource Management Training Work? An Update, Extension, and Some Critical Needs. Human Factors, 48(2), 392-412.

Sengupta, S., Donekal, A. K., & Mathur, A. R. (2016). Automation in modern airplanes: A safety and human factors based study. APCOSEC 2016, 10th Asia Oceania Systems Engineering Conference, Bangalore, India, 9–11 November 2016.

Vogl, J., Delgado-Howard, C., Plummer, H., McAtee, A., Hayes, A., Aura, C., St. Onge, P. (2022). A Literature Review of Applied Cognitive Workload Assessment in the Aviation Domain. U.S. Army Medical Research and Development Command Military Operational Medicine Research Program.

Wickens, C. D., Helton, W. S., Hollands, J. G., & Banbury, S. (2022). Engineering psychology and human performance (5th ed.). Routledge. DOI: 10.4324/9781003177616

Woods, D. D., & Hollnagel, E. (2006). Joint cognitive systems: Patterns in cognitive systems engineering. CRC Press.

 

Cognitive Readiness in Automated Flight Decks: Bridging Training and Human-System Integration © 2025 by Giacomo Belloni & Fabrizio Interlandi is licensed under CC BY-NC 4.0

 

