Framing the Human Dimension in Cyber Security

J. Nixon and B. McGuinness
Abstract

The advent of technologies that can seamlessly operate in different environments with differing levels of security presents new challenges to the cybersecurity specialist seeking to ensure the safety of the data, processes or outputs of a complex system. This paper reviews the human dimension of cybersecurity. The Human Factors Integration (HFI) framework is employed as a structure with which to consider the many ways in which the human can affect the security of a system, both positively and negatively.

We conclude that when the human factors element is analysed when specifying and designing secure and safe systems, it is far more likely that the human can assist and increase the overall level of security. As in other high-technology sectors such as aviation or petrochemicals, if not considered, the human can often 'bulldoze' through the most carefully considered and designed security or safety barriers.

Keywords: cyber security, cyber safety, cyber warfare, human factors, human factors integration, HFI, human dimension
* Corresponding author. Email: [email protected]
1. Introduction

Barely a week goes by in which a cyber-threat somewhere in the world is not headline news. Moreover, such headlines appear to be quickening in pace. The reasons for this rising concern are clear:

(1) The critical infrastructures of society are increasingly dependent on cyberspace, i.e. complex information and communication technology (ICT) networks, public and private, military and civilian, including the Internet.
(2) With greater reliance comes greater vulnerability, and the growing risk of serious, widespread systems failure resulting from deliberate disruption or loss of critical networks.
(3) In contrast to physical attacks, attacks against critical networks or data are relatively cheap and easy to conduct remotely.

Personal communication devices such as mobile phones or smartphones exacerbate the threat, as these devices are so interwoven into everyday life that security is either overlooked or taken for granted. The trend of technology becoming increasingly embedded in everyday life and activities is not likely to abate. The combination of growing threat, growing vulnerability and more serious consequences increases the total risk to national security.

In addition to the technological aspect, the human element in cybersecurity is inherently complex and as such is often vulnerable. At the same time, the implementation of technical security measures can have unforeseen human consequences. Often, a side-effect of implementing a security measure is a reduction in effectiveness or efficiency. Security checkpoints, for example, reduce the flow of people in and out of buildings. Often the technological security of a system is strong, but the role and vulnerabilities of the human in that system are less well understood or considered.

In this paper, we present a broad overview of human factors issues related to cybersecurity. We partition issues in accordance with the Human Factors Integration (HFI) framework. HFI is an integral component of systems engineering for defence capability development in the UK. HFI is a systematic process for identifying, tracking and resolving the wide range of human-related issues in the development of capability. The HFI framework has been selected since BAE Systems is mandated to adhere to this standard when working with the UK Ministry of Defence (MoD). The HFI framework is broadly comparable to the Human Systems Integration (HSI) framework used in the United States (Pew and Mavor, 2007; Booher, 2003). The Human Factors themes represented in HFI are consistent with systems engineering and capability development and as such should be included when designing or engineering a system which demands strong cybersecurity, in defence applications and elsewhere. Human factors risks and challenges that are identified through application of the HFI process can then be addressed using the appropriate methods, ensuring that the impact of the human on cybersecurity is understood and addressed during the design of a system.
2. Analysis

The HFI framework is used here to structure the human factors issues in cybersecurity. The key mandate of the HFI process is to characterise and address the risks to a system generated by the human. Priority should necessarily be given to the area or areas that present the greatest risk in any given context.

HFI is divided into seven domains:

Social & Organisational Factors
Manpower
Personnel
Human Factors Engineering
System Safety
Training
Health Hazard Assessment

Some domains of the HFI framework are more readily applicable to cybersecurity than others. In this paper the domains of Manpower and Personnel are combined, as the issues raised are complementary. In addition, Human Factors Engineering and System Safety are combined, since the key driver of 'cybersafety' is effective use of the equipment by the human operator and a reduction in human error. Health Hazard Assessment is primarily related to environmental stressors that may cause illness or injury, for example noise, vibration or radioactivity. No specific issues related to cybersecurity were identified in this domain and as such it is excluded from this analysis.

2.1 Social & Organisational Factors

Organisations are a mixture of socio-technical systems, with each component, including each individual, presenting vulnerabilities that are open to accidental or malicious exploitation. Boyce et al. (2011) argue that failure to integrate social factors in cybersecurity development could substantially reduce the effectiveness of cybersecurity capabilities. For example, ineffective management or a poor cybersafety culture in an organisation could easily negate the expected benefits of user-centred systems, training, and other areas of HFI.

Malicious attacks by people within the organisation are now considered a bigger threat than external agents (Wilding, 2007; Shaw et al., 1998). However, gaps in the literature have made it difficult for organisations to develop a comprehensive understanding of the insider threat. In particular, there is a need to integrate social psychological and behavioural insights with technical security measures (Kowalski et al., 2008).

One possible approach could be to replicate the US Government-funded Insider Threat Study (ITS). Initiated in 2002, the ITS is a multi-year, multi-disciplinary and cross-sector exploration of employees who have used their organisation's computer systems or networks to perpetrate acts of harm against the organisation, such as theft of intellectual property, fraud or acts of sabotage. The ITS is a collaborative initiative of the Secret Service National Threat Assessment Center and Carnegie Mellon University's Computer Emergency Response Program.

The overall objective of the ITS is to help private industry, government, and law enforcement better understand, detect and prevent harmful insider activity. A particular focus of the study is to identify behavioural precursors and indicators through an annual survey and in-depth case studies looking for statistical patterns (Kowalski et al., 2008).

Many incidents that have involved the release of malware into a network have stemmed from a misjudgement or error by a user, such as using an unencrypted USB device or violating procedures regarding external email. The existence of appropriate policies, standards and systems does not necessarily mean that cybersecurity will always be correctly implemented. Business processes and security solutions are often at odds with each other. An example is credit card online authentication systems, which often bombard the user with requests for rarely used, complex passwords, making online transactions an inconvenience and often leading a user to engage in unsafe behaviour such as keeping a written record of a password.

The users of a system or service and those who specify it have the responsibility not only to specify the correct standards but also to enforce compliance and ensure correct usage. Moreover, cyber-attackers have regularly penetrated well-designed, secure computer systems by taking advantage of the carelessness of trusted individuals, or by deliberately deceiving them, for example by pretending to be the system administrator and asking for passwords.

Sasse et al. (2007) highlight a range of social and organisational factors that affect the cultural acceptance of new cybersecurity requirements within an organisation. The imposition by management of tighter restrictions and controls on staff behaviour certainly reduces security risks in the short term, but does not necessarily change security awareness and motivation, and may adversely affect the long-term relationship between management and staff. A key message of Sasse et al.'s white paper is that "security is everybody's business." To manage human vulnerabilities effectively, all stakeholders need to be involved in the design and operation of secure systems. It is also essential to communicate effectively about the risks and how to manage them.
At the same time, however, Sasse et al. (2007) acknowledge that there are numerous factors other than security to take into account when considering human behaviour. Trusting a new subcontractor, for example, may be risky in terms of security but highly desirable in terms of business. Similarly, a work environment that imposes draconian security measures can negatively affect employee morale. Hence, the "human factors of cybersecurity" are not just security-specific factors such as handling passwords but also non-security factors that are in turn affected by security. Each organisation must decide its own risk tolerance level, and that may vary from one situation to another.

Human vulnerabilities should ideally be identified and managed before they lead to an actual breach of security. However, according to Sasse et al. (2007), there is currently a tendency to ignore risky human characteristics and behaviours until an actual breach of security occurs.

One solution would be to provide a confidential and anonymous security vulnerability reporting system comparable to those used in high-risk industries to report safety incidents. For example, an individual might report having observed a colleague's passwords written on a piece of paper. The aim would be not only to react to such vulnerabilities but to share awareness and understanding of them amongst all staff, enabling long-term organisational learning.

At the same time, a confidential reporting system should encourage anonymous whistle-blowing, provided incentives are present for the detection and reporting of possible insider threats by co-workers (DHS, 2009).
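To make this idea concrete, a minimal sketch in Python of such a confidential reporting channel is given below. It is purely illustrative: the VulnerabilityReport record, its category set and the receipt mechanism are our assumptions, not features of any scheme cited above. The key design point is that no reporter identity is stored; the returned receipt allows follow-up without de-anonymising the reporter.

```python
import uuid
from dataclasses import dataclass, field
from datetime import date

# Illustrative categories a scheme might track; not from any cited standard.
CATEGORIES = {"password_handling", "removable_media", "email_procedures", "physical_access"}

@dataclass
class VulnerabilityReport:
    """A de-identified report: no reporter or subject identity is stored."""
    category: str
    description: str          # free text, screened for names before storage
    observed_on: date
    report_id: str = field(default_factory=lambda: uuid.uuid4().hex)

def submit(report: VulnerabilityReport, store: list) -> str:
    """Accept a report anonymously and return a receipt the reporter can keep."""
    if report.category not in CATEGORIES:
        raise ValueError(f"unknown category: {report.category}")
    store.append(report)
    return report.report_id   # receipt enables follow-up without identification

def lessons_learned(store: list) -> dict:
    """Aggregate counts per category for organisation-wide awareness updates."""
    counts: dict = {}
    for r in store:
        counts[r.category] = counts.get(r.category, 0) + 1
    return counts

store: list = []
submit(VulnerabilityReport("password_handling",
                           "Password written on paper near a shared desk.",
                           date(2015, 7, 1)), store)
print(lessons_learned(store))   # e.g. {'password_handling': 1}
```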
Within any organisation, between organisations at a national level, and even between nations at the global level, there is a need for systems to support shared awareness of the cyber domain and its emerging threats. For example, criminal groups and foreign intelligence groups are making increased use of social networking sites such as Facebook (McGannon & Hurley, 2009). Updates on such developments and their implications for ICT users need to be effectively communicated throughout an organisation.

An explanatory concept that can be used to design for shared awareness and collaboration is that of shared situation awareness and shared mental models (Cannon-Bowers et al., 1993). A mental model is an internalised cognitive representation of a system (Wilson and Rutherford, 1989). Such representations can be invoked by the user to predict future system status or outputs, or to comprehend current system status (McGuinness and Dawson, 2005). An effective mental model may be viewed as a prerequisite for effective situation awareness. In shared awareness, common or overlapping mental models of a system are required by a group of users. Such models may all be different but support a common goal through co-ordinated tasks. In highly proceduralised environments where users are co-located, such as the flight deck of a commercial airliner, this shared mental model may develop through effective communication and application of procedures.

In cybersecurity, effective shared mental models of emerging threats can facilitate safe behaviour and reduce risk. Such a shared mental model of risk presents challenges in this domain. Users may not be co-located, or may work for different businesses that are required to interact. For example, employees working for different companies may require a shared understanding of when information is commercially sensitive or classified, despite differences in their individual tasks or roles. Such geographical and cultural distribution leads to challenges in creating the common and overlapping mental models required for all individuals to understand their responsibility to ensure safe 'cyber-behaviour'. As a shared goal, cybersecurity is often secondary to the primary task. Many different tasks can require the same shared mental model to be invoked to ensure safe behaviour. Organisational pressure or cultural differences can change when and how such a model is both created and invoked, leading to increased risk of unsafe behaviour. Effective training and human factors analysis which considers these challenges can assist organisations to overcome such barriers and encourage the generation of appropriate shared mental models. Users can draw upon these shared mental models to reduce the risks presented in the cyber domain through effective decision making.
2.2 Manpower and Personnel

In the commercial world, one factor that is of special relevance to cybersecurity is the growing demand for outsourcing non-core functions such as data processing, administration and server hosting. A particular concern is outsourcing to overseas suppliers, often based in different continents with a variety of governance and oversight structures. Outsourcing presents security issues whether the outsourced activity is cybersecurity itself or other business functions (CPNI, 2009).

Drawing on the BT Group's (formerly British Telecom) overseas outsourcing experience, Colwill and Jones (2007) describe some of the key human factors that can impact significantly on the security of outsourcing. They argue that application of technology alone will not provide solutions. The main requirement for the customer organisation is to ensure or enforce trustworthiness of outsourced personnel through measures such as rigorous employee vetting requirements. This may only be effective in the long term rather than the short term – and this in itself presents a major challenge in the outsourcing world, which frequently experiences high turnover of personnel.

To ensure effective security in outsourced operations, clear ownership of security is required, as well as a means of instilling in the supplier organisation an understanding of the customer organisation's need for security. New approaches need to be considered for building and maintaining trust and secure relationships between organisations over time.

Although demand is surging, the supply of suitably qualified cybersecurity professionals is low. Currently in the UK, despite the formation of the Institute of Information Security Professionals in 2006 (BERR, 2008), people are entering the cybersecurity profession through a diversity of routes. One problem is the lack of teachers: qualified experts are needed to teach the next generation of qualified experts. In most cases, cybersecurity is taught as a single module within a Computer Science degree programme. While new legislation may require cybersecurity professionals to be properly certified, the dearth of qualified professionals has even seen agencies looking to students to help fill their positions (Chabrow, 2009).

The Manpower and Personnel domains, then, face several challenges to ensure continuing security:

The specification of required knowledge, skills and other attributes (both currently and in the future) in cybersecurity professionals.
The low availability of appropriate candidates for the required roles (both currently and in the future).
Implementation of effective employee screening to elicit risk factors.
Understanding and avoiding recruitment of potential or actual internal threats.
Recruitment or contracting of employees to evaluate vulnerability to external social engineering threats.
Detection of internal threats among staff.
Specification and recruitment of emergency cyber response teams.
Specification of cybersecurity competencies required of ordinary staff.

2.3 Training

Training is a key issue since most often employees are trained in the 'how' of cybersecurity but not the 'why'. As a result, the context of a security procedure or process is not fully understood. One consequence of this is that a user is not able to make reliable risk assessments if workarounds are performed. Employees must feel included in the whole security process and thereby assume a personal responsibility toward maintaining security in the organisation.

U.S. Government research (Noonan & Archuleta, 2008) indicates that many critical infrastructure managers lack an appropriate awareness of the threat that insiders pose. Education and awareness present the biggest potential remedy by motivating and focusing management efforts to address the insider threat. Training should have the goal of establishing a common baseline understanding of the emerging and dynamic insider threat to critical infrastructures (Noonan & Archuleta, 2008). Research is needed to determine the required content and delivery of such training.

The ISO standards are based in part on the BS 7799 British standards on information security. According to BERR (2008), the globalisation of these standards appears to be helping raise awareness in the UK, at least in certain sectors. Awareness is highest in the financial services, telecommunications, technology and retail sectors; it is weakest in the property, travel, leisure and entertainment sectors. Implementation of the international Information Security Management standards (ISO 27000) is also on the increase, mainly in larger companies. Implementation tends to raise the security baseline by ensuring that a minimum level of control is adopted in all areas of security management.

Over time, staff awareness of cyber threats can quickly fade, while the skills for preventing, detecting and responding to attacks can quickly become outdated. At the same time, because cyber-attacks are rarely encountered by individuals, the necessary skills are rarely if ever practised after initial training. Hence, there is a need for periodic staff assessment and refresher training, not just to update current threat awareness but also to minimise deterioration of security competencies (skill fade). This requirement will differ, of course, for different staff roles and responsibilities. Cybersecurity specialists could be assessed through simulation exercises at six-month intervals in the manner of safety-critical operators such as airline pilots. Other staff could be given monthly awareness updates or even weekly reminders in addition to annual training sessions.
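As an illustration of the skill-fade argument, the following minimal Python sketch models competence as decaying exponentially between refresher sessions, with each session restoring the trained level. The half-life and the intervals compared are invented values for illustration only; the paper proposes no quantitative model.

```python
import math

def competence(t_days: float, refresh_interval: float,
               half_life: float = 180.0, trained_level: float = 1.0) -> float:
    """Competence decays exponentially since the last refresher session.

    half_life: days for an unrefreshed skill to halve (assumed value).
    refresh_interval: days between refreshers; each is assumed to reset
    skill to the trained level (a deliberate simplification).
    """
    days_since_refresh = t_days % refresh_interval
    decay = math.exp(-math.log(2) * days_since_refresh / half_life)
    return trained_level * decay

# Compare a six-month cycle (specialists) with annual-only training.
for label, interval in [("6-month refreshers", 182.0), ("annual only", 365.0)]:
    worst = min(competence(t, interval) for t in range(0, 730))
    print(f"{label}: lowest competence over two years = {worst:.2f}")
```

Under these assumed numbers the annual-only cycle lets competence fall to roughly a quarter of the trained level before each refresher, which is the intuition behind differentiating refresher frequency by role.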
Training Needs Analysis methods provide a framework to evaluate training requirements, matching skills, knowledge and attitudes to tasks. Application of such formal methods is standard within the defence sector. A structured approach to cybersecurity training can clearly identify skills and knowledge gaps, targeting training efficiently to achieve improved cybersecurity.

Frequently, effective application of security processes and procedures requires co-operation between groups of individuals in different locations and different companies. The US Air Force has developed the concept of Mission Essential Competencies (MECs) (Symons et al., 2006; Gentner et al., 1999). MECs can be effectively applied to analyse the competencies required by teams to perform a task. Use of MECs in this domain may assist understanding of the competencies required for teams, leading to increased threat awareness among groups of individuals.

2.4 Human Factors Engineering and System Safety

Systems designed to help prevent or detect cyber-attacks are critical. However, technical research and development tends to emphasise technological capabilities over human capabilities, with insufficient regard for human limitations. The methods and tools of human factors engineering (HFE) can be used to redress this situation. To be most effective, however, cybersecurity HFE should be integrated into the system development life cycle from system inception. Early integration of human-centric security concerns provides maximum return on investment in cybersecurity.

Human-centric assessment
The degree to which an ICT asset is "secure" can be measured as the extent to which its entire vulnerability to attack is reduced or minimised. However, the cybersecurity field lacks adequate methods to evaluate security and/or vulnerability, especially at the human level. The U.S. Department of Homeland Security identifies enterprise-level cybersecurity metrics as one of the "hard problems" of developing cybersecurity: "Without realistic, precise evaluations, the field cannot gauge its progress toward handling security threats, and system procurement is seriously impeded" (DHS, 2009, p.22).

Effective measurement of human-system interaction is essential when evaluating security processes and procedures. Greater understanding of human perceptions, behaviours and attitudes as users interact with security systems and processes is needed. Such understanding is required to predict the likelihood of users' acceptance of proposed security measures, whether through models or through human-in-the-loop evaluations. As an example, biometric systems such as iris recognition systems in airports could certainly benefit from a human factors assessment before being deployed in the field.

An analysis of the work which the user is required to perform can greatly assist in understanding how users interact with security systems and processes, enabling human-centric assessment of new and existing technology. In the UK, hierarchical task analysis (HTA) is traditionally used as a starting point for the human factors practitioner to understand the tasks and goals that the user is required to perform (for example, Stammers and Shepherd, 2005; Kirwan and Ainsworth, 1992). HTA provides a structured picture of the sequences of tasks that a user performs to achieve a system goal. From this detailed analysis, barriers to performance and weaknesses in the system or procedures can be identified, and tasks re-ordered or changed as necessary. The granularity of an HTA can be as fine as is required, allowing the practitioner to examine the interactions of different tasks in great detail.

Task analysis that examines the cognitive work required of a user is especially important in complex, socio-technical systems. Security systems and processes often form part of such systems. Understanding the cognitive work which underlies user judgement and decision making is key to developing systems which reduce the risk of a user working around or failing to comply with a process or procedure underlying effective cybersecurity. Cognitive task analysis is a collective term for methods that can be used to understand and articulate user judgements and decision making (Stanton et al., 2005). Examples of specific methods include Cognitive Work Analysis (Vicente, 1999) and the Critical Decision Method (Klein and Armstrong, 2004).
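One way such a task analysis can be captured for inspection is sketched below: an HTA represented as a simple tree of tasks and subtasks, in Python. The example decomposition (sending a sensitive document) is invented for illustration and is not drawn from any published analysis.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    """One node in a hierarchical task analysis: a goal decomposed into subtasks."""
    name: str
    subtasks: List["Task"] = field(default_factory=list)

def walk(task: Task, depth: int = 0) -> None:
    """Print the hierarchy with indentation, as an HTA table or diagram would show it."""
    print("  " * depth + task.name)
    for sub in task.subtasks:
        walk(sub, depth + 1)

# Invented example decomposition of a security-relevant user goal.
hta = Task("0. Send commercially sensitive document by email", [
    Task("1. Classify the document", [
        Task("1.1 Check marking policy"),
        Task("1.2 Apply protective marking"),
    ]),
    Task("2. Encrypt the attachment", [
        Task("2.1 Select approved encryption tool"),
        Task("2.2 Share key via a separate channel"),
    ]),
    Task("3. Verify recipient address before sending"),
])
walk(hta)
```

A practitioner would inspect such a decomposition for steps where workarounds are tempting (for example, skipping step 2.2 under time pressure) and re-order or redesign tasks accordingly.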
Human error and reliability analysis
Unintentional behaviours that create or exacerbate an ICT security vulnerability can be addressed by adapting the methods of human error analysis and human reliability analysis. There are two approaches to human error analysis: at the level of the individual and at the level of the organisation. The individual level acknowledges the role of the person as an agent with legal responsibilities defined by their contract of employment. As such, an outcome of this level of analysis is often the assignment of blame to the individual concerned. However, it is now widely accepted that individual errors or unsafe acts do not arise spontaneously and in isolation but in a context of unsafe preconditions at the system level, such as poor supervision (Shappell and Wiegmann, 2000). Hence, individual error analysis is complemented by analysis at the level of the organisation or system, focusing on aspects such as safety culture and the effectiveness of communication processes. Resilience engineering is an emerging concept which can be applied to understand human error at the organisational level, as opposed to the individual level (Hollnagel et al., 2006). A core dimension of resilience engineering is ensuring that the organisation can respond flexibly and effectively to disruptions. Such a flexible response by individuals within an organisation is required when managing and identifying threats, which can be varied and unpredictable in the cybersecurity domain.

The individual level of analysis is useful for understanding what specifically happens "at the sharp end" when things go wrong. Human errors can be classified in various ways, such as skill-based, knowledge-based and rule-based. The study of cognitive errors in judgement and decision-making is a particularly rich field, revealing important limitations of everyday cognitive abilities. Whereas error analysis is retrospective, reliability analysis is prospective. One method for predicting human reliability is probabilistic risk assessment: in the same way that equipment can fail, so can a human user commit security errors. In the human case, task analysis can be used to articulate and quantify the probability of error.
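The sketch below shows the arithmetic behind such an estimate: if each step of a procedure carries an independent human error probability (HEP), the chance of completing the whole procedure error-free is the product of the per-step success probabilities. The step names and HEP values are purely illustrative assumptions, not drawn from any published HEP database.

```python
# Illustrative human error probabilities (HEPs) per task step; the values
# are invented for this example.
steps = {
    "classify document correctly": 0.01,
    "select encryption tool":      0.005,
    "address email to recipient":  0.02,
}

p_error_free = 1.0
for step, hep in steps.items():
    p_error_free *= (1.0 - hep)   # success probability of each step is 1 - HEP

print(f"P(procedure completed without error) = {p_error_free:.4f}")
# 0.99 * 0.995 * 0.98 = 0.9653 -> roughly a 3.5% chance of at least one error,
# assuming independent errors (a strong simplification in practice).
```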
Trust and deception
Trust in a system has the ability to modify user response to that technology (Lee and See, 2004). Trust is defined here as a social response to technology which can guide a user's response when addressing complex, uncertain or unanticipated situations. Nass et al. (1995) have shown that user reactions to computers are similar to those found when users collaborate with other humans. This is relevant in the cybersecurity domain since users may be deceived into performing unsafe actions that exploit their trust in the system (McQueen and Boyer, 2009). Activities that fall into this category include deceiving a user into performing an unsafe act, such as downloading and running an executable file, or falsifying identity, causing a user to believe they are interacting with someone known to them.

Insufficient and excessive levels of trust in systems have been shown to affect user behaviour. Parasuraman and Riley (1997) characterise these effects as misuse and disuse. Misuse is described as an inappropriate level of reliance on the system, indicating a higher than appropriate level of trust. Disuse signifies failures related to the rejection of effective system functionality, possibly causing a user, for example, to reject a valid e-mail as a phishing attempt. This is essentially a miscalibration of trust. For example, a computerised system can be perceived as infallible since it has been designed to replace or augment more limited human capabilities (Itoh, 2011). In cybersecurity it is possible for users to trust that their protective systems, such as firewalls and anti-virus software, do indeed prevent attacks, when such protection does in fact require a degree of user awareness to be effective. Some apparently 'failsafe' systems, such as biometric access control systems, are still far from reliable. The problem of over-trust is exacerbated by suppliers' claims and advertising, which imply that the products will ensure security under all circumstances.
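A toy sketch of this miscalibration, in Parasuraman and Riley's (1997) terms, is given below: a user's rate of reliance on a system is compared with the system's actual reliability and labelled misuse, disuse or roughly calibrated. The tolerance band and the example figures are arbitrary illustrative choices.

```python
def calibration(reliance: float, reliability: float, tolerance: float = 0.1) -> str:
    """Classify trust calibration by comparing reliance with reliability.

    reliance:    fraction of occasions the user accepts the system's output.
    reliability: fraction of occasions the system is actually correct.
    tolerance:   arbitrary band within which trust counts as 'calibrated'.
    """
    if reliance > reliability + tolerance:
        return "misuse (over-trust)"
    if reliance < reliability - tolerance:
        return "disuse (under-trust)"
    return "calibrated"

# A user who accepts 99% of verdicts from a scanner that is right only
# 80% of the time is over-trusting it:
print(calibration(reliance=0.99, reliability=0.80))  # misuse (over-trust)
# A user who ignores a mostly-correct phishing filter under-trusts it:
print(calibration(reliance=0.40, reliability=0.90))  # disuse (under-trust)
```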
Williams (2009), for example, identifies Trojans (malware in the guise of legitimate software) as "the number one threat to information security." Sophisticated and rapidly evolving malware development means that Trojans remain very difficult to detect. Yet Williams refers to a test in which a bank of 172 current anti-virus systems failed to detect twenty per cent of known Trojans.

In HFE, a number of ways to address this issue have been developed (Lee & See, 2004). For example, systems can be designed in ways which elicit an appropriate level of trust (Lowrey, 2011; Duez et al., 2006; Yeh and Wickens, 2001; Ockerman, 1999), although this research is still in its early stages and often challenging to apply outside the specific context investigated.

Perception management and deterrence
Underpinned by the constructs of trust and deception is the concept of perception management. One of the functions of security measures in general is to deter any would-be attackers. Thus, while some security measures are designed to be highly covert in order to trap unsuspecting attackers, others are highly visible and obvious. This points to the fact that security and vulnerability invariably have two aspects: actual and perceived (Oscarson, 2007).

While the actual security or vulnerability of an asset is an objective fact, the perceived level of security does not necessarily correlate with it and can be manipulated. This complicates our understanding of security (Camp, 2000; Kim and Prabhakar, 2000).

An asset is less likely to be attacked if it is perceived by would-be attackers to be highly secure, even if it is not so secure in actuality. In physical security, a fake security measure which offers no objective protection, such as a dummy CCTV camera, may be misperceived by adversaries as an actual security measure, and thereby serve as a deterrent. Creating the illusion of security or invulnerability can, if successful, have the same deterrent effects as investing in actual security measures.

Of course, the potential to misperceive security applies to asset-holders as well as would-be attackers. Asset-holders might believe their information or systems to be more secure than they are, especially given the rate of change of threats. This shows that security measures include not only logical protection but also perception management. It also implies that a measurement of security might have to include separate evaluations of both actual and perceived security.
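The suggestion to evaluate both dimensions separately can be made concrete with the small sketch below, which scores an asset on actual and perceived security and flags the dangerous quadrant where perception exceeds reality. The 0-1 scales and the gap threshold are our illustrative assumptions, not a method proposed by the sources cited above.

```python
from dataclasses import dataclass

@dataclass
class SecurityAssessment:
    asset: str
    actual: float     # 0..1, e.g. from technical audit or penetration testing
    perceived: float  # 0..1, e.g. from stakeholder surveys

    def verdict(self, gap: float = 0.2) -> str:
        """Flag large mismatches between perceived and actual security."""
        if self.perceived - self.actual > gap:
            return "false confidence: asset-holders over-estimate protection"
        if self.actual - self.perceived > gap:
            return "hidden strength: may deter attackers less than it could"
        return "perception roughly matches reality"

print(SecurityAssessment("customer database", actual=0.4, perceived=0.9).verdict())
```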
3. Conclusion

With threats such as cyberwarfare, cyberterrorism and cybercrime ever-present and rapidly evolving in complexity, the process of delivering cybersecurity must likewise evolve rapidly and continually. The UK is taking the cyber-threat to national security very seriously and has begun to commit funds to research and development to address emerging issues and requirements. The HFI framework is a way in which the risks, and indeed benefits, that humans add to the cybersecurity domain can be articulated and addressed.

What is striking about HFI is the breadth of human factors issues and considerations. This breadth can present a challenge given time and budget constraints. However, a key function of the HFI process is to examine risks in a structured way. The greatest risks originating from the human factor can be prioritised and acted upon appropriately in accordance with time and budget constraints.

Humans are potentially the greatest strength of cybersecurity (Hernandez, 2010), but only if the human factor is fully considered. For example, it is necessary to consider the trade-offs between the implementation of security measures and the subtle effects of those measures upon employees' perceptions, motivations, and behaviours. Without taking account of the human dimension in the implementation of security measures, there is an ironic risk of creating and exacerbating human vulnerabilities.

In order for cybersecurity to meet its aims and objectives, the design of systems must match the capabilities and limitations of humans to ensure the highest possible levels of security and safety in the future.

References

[1] BERR (2008) 2008 Information Security Breaches Survey. Technical Report. Department for Business, Enterprise and Regulatory Reform, April 2008.
[2] BOOHER, H. R. (2003) Handbook on Human System Integration. NJ: John Wiley & Sons.
[3] BOYCE, M. W., MUSE-DUMA, K., HETTINGER, L. J., MALONE, T. B., WILSON, D. P., LOCKETT-REYNOLDS, J. (2011) Human performance in cybersecurity: A research agenda. In Proceedings of the Human Factors and Ergonomics Society 55th Annual Meeting.
[4] CAMP, L. J. (2000) Trust and Risk in Internet Commerce. MIT Press, Cambridge, MA, USA.
[5] CANNON-BOWERS, J. A., SALAS, E., CONVERSE, S. (1993) Shared mental models in expert team decision making. In Castellan, N. J. (Ed.) Individual and Group Decision Making: Current Issues. LEA, Hillsdale, NJ.
[6] CHABROW, E. (2009) Cybersecurity year in review – top 10 happenings. Article in Bank Information Security.
[7] COLWILL, C. & JONES, A. (2007) The Importance of Human Factors when Assessing Outsourcing Security Risks. Proceedings of the 5th Australian Information Security Management Conference. Edith Cowan University, Perth, Western Australia, December 4th.
[8] CPNI (2009) Outsourcing: Security Governance Framework for IT Managed Service Provision. Good Practice Guide – 2nd Edition. Centre for the Protection of National Infrastructure.
[9] DHS (2009) A Roadmap for Cybersecurity Research. U.S. Department of Homeland Security, Science and Technology Directorate, November 2009.
[10] DUEZ, P. P., ZULIANI, M. J. & JAMIESON, G. A. (2006) Trust by design: Information requirements for appropriate trust in automation. Proceedings of the 2006 Conference of the Centre for Advanced Studies on Collaborative Research.
[11] GENTNER, F. C., TILLER, T. C., CUNNINGHAM, P. H., BENNETT, W. (1999) Using Mission Essential MOEs/MOPs for Evaluating Effectiveness of Distributed Mission Training. Proceedings of the Interservice/Industry Training, Simulation & Education Conference (I/ITSEC) 1999.
[12] HERNANDEZ, J. (2010) The human element complicates cybersecurity. Defense Systems website (defensesystems.com), 2 March 2010.
[13] HOLLNAGEL, E., WOODS, D. D., LEVESON, N. (2006) Resilience Engineering. Ashgate.
[14] ITOH, M. (2011) A model of trust in automation: Why humans over-trust? Proceedings of SICE Annual Conference, Tokyo, 13-18 Sept 2011, pp. 198-201.
[15] KIM, K. & PRABHAKAR, B. (2000) Initial Trust, Perceived Risk, and the Adoption of Internet Banking. In Proceedings of the Twenty-First International Conference on Information Systems (ICIS 2000), December 10–13, 2000, Brisbane, Australia.
[16] KIRWAN, B., AINSWORTH, L. K. (1992) A Guide to Task Analysis. Taylor and Francis, London, UK.
[17] KLEIN, G. & ARMSTRONG, A. A. (2004) Critical decision method. In Stanton, N., Hedge, A., Brookhuis, K., Salas, E. & Hendrick, H. (Eds.) Handbook of Human Factors and Ergonomics Methods. CRC Press, London, UK.
[18] KOWALSKI, E., CAPPELLI, D. & MOORE, A. (2008) Insider Threat Study: Illicit Cyber Activity in the Information Technology and Telecommunications Sector. Carnegie Mellon Software Engineering Institute.
[19] LEE, J. D. & SEE, K. A. (2004) Trust in Automation: Designing for Appropriate Reliance. Human Factors, 46, 50-80.
[20] LOWREY, A. (2011) First Impressions Count: The Impact of Visual Attractiveness on User Trust in Computer Decision Aids. Unpublished MSc Thesis, University of Derby, UK.
[21] MCGANNON, M. & HURLEY, D. (2009) The dark side of social networking. IO Sphere, Summer 2009. Joint Information Operations Warfare Command (JIOWC), San Antonio, Texas.
[22] MCGUINNESS, B., DAWSON, D. (2005) Assessing situational awareness in teams. In Bust, P. D. and McCabe, P. T. (Eds) Contemporary Ergonomics. Taylor and Francis, UK.
[23] MCQUEEN, M. A. & BOYER, W. F. (2009) Deception used for cyber defense of control systems. In Proceedings of Human System Interactions, 21-23 May 2009.
[24] NASS, C., MOON, Y., FOGG, B. J., REEVES, B. & DRYER, D. C. (1995) Can computer personalities be human personalities? International Journal of Human-Computer Studies, 43, pp. 223-239.
[25] NOONAN, T. & ARCHULETA, E. (2008) The National Infrastructure Advisory Council's Final Report and Recommendations on The Insider Threat to Critical Infrastructures. NIAC, U.S. Department of Homeland Security.
[26] OCKERMAN, J. J. (1999) Over-reliance issues with task-guidance systems. In Proceedings of the Human Factors and Ergonomics Society 43rd Annual Meeting (pp. 1192-1196). Santa Monica, CA: Human Factors and Ergonomics Society.
[27] OSCARSON, P. (2007) Actual and Perceived Information Systems Security. Unpublished PhD Thesis, Linköping University, Department of Management and Engineering, Sweden.
[28] PARASURAMAN, R. & RILEY, V. A. (1997) Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39(2), pp. 230–253.
[29] PEW, R. W., & MAVOR, A. S. (2007). Human-System [35] SYMONS, S., FRANCE, M., BELL, J., BENNETT, W. (2006)
Integration in the System Development Process: A New Linking Knowledge and Skills to Mission Essential
Look. Washington, DC: National Academies Press. Competency-Based Syllabus Development for Distributed
[30] SASSE, M. A., ASHENDEN, D., LAWRECE, D., COLES-KEMP, L., Mission Operations Air Force Research Laboratory, Report
FLÉCHAIS, I. & KEARNEY, P. (2007) Human vulnerabilities in AFRL-HE-AZ-TR-2006-0041.
Human Factors Working Group White Paper. DTI Cyber [36] VICENTE, K., J. (1999) Cognitive Work Analysis, LEA,
Security Knowledge Transfer Network London
[31] SHAPPEL, S A & WEIGMANN. D A, (2000) The Human Factors [37] WILDING, R, (2007) Insiders are the biggest enemy, Strategic
Analysis and Classification System – HFACS. US Risk, September 2007.
Department of Transportation, FAA, DOT/FAA/AM-00/7 [38] WILLIAMS, P. (2009) Top 5 Information Security Threats. API
[32] SHAW, E., RUBY, K. G. & POST, J. M. (1998) The insider 4th Annual IT Security Conference, Houston, Texas, 11-12
threat to information systems: The psychology of the Nov 2009.
dangerous insider. Security Awareness Bulletin, 2, 1-10. [39] WILSON, J. R & RUTHERFORD A., (1989). Wilson, J. R., &
[33] STAMMERS, R. B., SHEPHARD, A. (2005) Task Analysis In Rutherford, A. (1989) Mental models: Theory and application
Wilson, J. R. and Corlett, E. N. (Eds) Evaluation of Human in human factors. Human Factors, 31, 6, 617-634.
Work, Taylor and Francis. UK. [40] YEH, M., & WICKENS, C. D. (2001). Display signalling in
[34] STANTON, N., A., SALMON, P. M., WALKER, G. H., BABER, C., augmented reality: Effects of cue reliability and image
JENKINS, D. P. (2005) Human Factors Methods. Ashgate, realism on attention allocation and trust calibration. Human
London UK. Factors, 43, pp. 355 – 365.