Assessment for Inclusion in Higher Education: Promoting Equity and Social Justice in Assessment. Routledge, 2022.
Joanna Tai is Senior Research Fellow at the Centre for Research in Assessment
and Digital Learning, Deakin University, Australia. She researches student expe-
riences of learning and assessment from university to the workplace, including
feedback and assessment literacy, evaluative judgement, and peer learning.
David Boud is Alfred Deakin Professor and Foundation Director of the Centre
for Research in Assessment and Digital Learning at Deakin University, Australia.
He is also Emeritus Professor at the University of Technology Sydney and Pro-
fessor of Work and Learning at Middlesex University. His current work is in the
areas of assessment for learning in higher education, academic formation, and
workplace learning.
CONTENTS
Introduction 1
Rola Ajjawi
SECTION I
Macro contexts of assessment for inclusion:
Societal and cultural perspectives 7
SECTION II
Meso contexts of assessment for inclusion: Institutional
and community perspectives 85
SECTION III
Micro contexts of assessment for inclusion:
Educators, students, and interpersonal perspectives 165
Index 238
FIGURES AND TABLES
Figures
16.1 Bronfenbrenner’s ecological systems 180
18.1 A process for implementing choice of assessment methods 205
Tables
5.1 Culturally Inclusive Assessment model 56
9.1 Equity and inclusion references in assessment policies
of select Victorian universities 103
15.1 Equity factors 169
19.1 Workshop student participants 214
19.2 Analysis focus and related codes 215
19.3 Proposed changes to assessment 216
20.1 Overview of protocols A and B 224
CONTRIBUTORS
Penny Jane Burke is Professor and Global Innovation Chair of Equity and
Director, Centre of Excellence for Equity in Higher Education, University
of Newcastle, Australia. Widely published across the field, Professor Burke is
co-editor of the Bloomsbury Gender and Education book series and Global
Chair of Social Innovation, University of Bath. She has served as an expert
member of the Australian government’s Equity in Higher Education Panel and
Equity in Research & Innovation Panel.
Nicole Crawford is Senior Research Fellow at the National Centre for Student
Equity in Higher Education, Curtin University, Australia. Her research focuses
on equity and inclusion in higher education, enabling education, and student and
staff mental wellbeing.
Joanne Dargusch is Senior Lecturer at the Centre for Research in Equity and
Advancement of Teaching and Education, Central Queensland University,
Australia. Her current work includes research in the areas of assessment, equity
and participation in higher education and school contexts, and assessment in
initial teacher education.
Phillip Dawson is Professor and Associate Director of the Centre for Research
in Assessment and Digital Learning, Deakin University, Australia. He researches
higher education assessment, with a focus on feedback and cheating.
Johanna Funk completed her PhD in Open Educational Practices with Indigenous
workforce development. She is applying what she learnt to her work as a research-
active lecturer in cultural knowledges in the College of Indigenous Futures,
Education and Arts at CDU on Larrakia Country, Australia’s Northern Territory.
Lois Harris is Senior Lecturer in the School of Education and the Arts at Central
Queensland University, Australia and is affiliated with CQU’s Centre for
Research in Equity and Advancement of Teaching and Education. Her current
research includes work relating to assessment, educational equity, and student
engagement in both compulsory and higher education settings.
Neera R. Jain is Postdoctoral Research Fellow at the Centre for Health Education
Scholarship at the University of British Columbia Faculty of Medicine, Canada.
She researches ableism and disability inclusion, particularly throughout the
trajectory of medical training and practice.
Bret Stephenson is Senior Research Fellow within the Data and Analytics
(Advanced Analytics) team at La Trobe University, Australia. His work pursues
a programme of research and practice in the areas of student equity, success, and
retention, and the ethical application of data analytics throughout the university.
DOI: 10.4324/9781003293101-1
1. Joanna Tai, Rola Ajjawi, Trina Jorre de St Jorre, and David Boud – the
editors – outline the ways in which assessment can exclude students, offering
a conceptualisation of assessment for inclusion and arguing that assessment
needs to be reconsidered to ensure that students are judged on legitimate
criteria.
2. Jan McArthur draws parallels between her theorisation of social justice –
a broader orientation to assessment – and assessment for inclusion, argu-
ing that both share the same commitment to problematise, challenge and
rethink taken-for-granted assessment practices, and assumed guarantees of
quality and fairness.
3. Neera Jain takes up assessment for inclusion through the lens of critical
disability theory, arguing against minor change and seeking instead to disrupt
notions of what is regarded as normal. A crip theory lens calls on assessment
for inclusion to design from disability and considers how the lived experi-
ence of disability can productively inform assessment.
4. Jessamy Gleeson and Gabrielle Fletcher introduce Indigenous perspectives
to trouble assessment and Western ways of knowing, arguing for the continued
expansion and development of the cultural interface such that required
structural conditions of assessment can enable collaboration and diverse
standpoints for all.
5. Sarah Lambert, Johanna Funk, and Taskeen Adam argue that the decolo-
nisation of education is necessary for assessment for inclusion to flourish.
The authors propose three dimensions: justice-as-content, justice-as-process, and
justice-as-pedagogy that would prompt the design of assessment for inclusion.
6. Juuso Nieminen critiques the prevalent approaches to promoting inclusion in
assessment, namely individual accommodation and universal design, from the
point of view of their procedural focus. He argues that assessment
for inclusion needs to look beyond the institution towards authentically
engaging with society through a critical, political stance.
7. Ben Whitburn and Matthew Krehl Edward Thomas adopt an ontologi-
cal framework to assessment for inclusion, rather than the procedural, and
encourage paying attention to the implications of diversity in educational
design. They critique the notion that time manifests equally, reminding us
of the uneven temporal distributions of assessment.
8. Penny Jane Burke highlights how unequal power relations and taken-for-
granted values and practices shape assessment, which makes inclusivity an
ongoing challenge. She introduces the concept of “Communities of Praxis”
as a framework to engage with these challenges and work collaboratively
towards developing possibilities for inclusive assessment.
9. Matt Brett and Andrew Harvey, through an analysis of the higher educa-
tion assessment policy landscape, identify misalignments and advocate for
stronger institutional accountability, monitoring, and regulation as well as
education for all staff on the legislative requirements and moral imperatives
of inclusion.
10. Phillip Dawson argues that to be more inclusive in assessment we need to
re-think cheating, anti-cheating approaches, and inclusion in terms of how
they influence validity. If dominant assessment practices entrench exclusion,
they are as much of a threat to validity as cheating is.
11. Bret Stephenson and Andrew Harvey examine several examples of Artificial
Intelligence-enabled assessment and explore the ways in which each may
produce inequitable or exclusionary outcomes for students. They show that
technological solutions to equity and inclusion are often of limited value.
12. Christopher Johnstone, Leanne Ketterlin Geller, and Martha Thurlow
track the historical landscape of Universal Design in higher education in
the United States. They propose a dialectical approach that considers both
accommodations and universal design of assessment as separate approaches
that are complementary but could also influence one another.
13. Trina Jorre de St Jorre and David Boud critique the ways in which assessment
treats students as if they were homogenous and highlight that assessment for
inclusion should provide equal opportunities for students to succeed but it
also needs to be equally meaningful to them.
14. Thanh Pham critiques how current assessment practices which focus on
employability skills disadvantage international students due to taken-for-
granted assumptions about communication and behaviours. The chapter
calls for legitimising marginalised knowledge in assessment for inclusion.
15. Sarah O’Shea and Janine Delahunty critique limited notions of success
(as simply passing) through empirical research with first in family students.
Participants rarely focused on grades alone. They show that assessment for
inclusion should reflect varied and relevant notions of success – through
de-emphasising grades and engaging students as partners in design.
16. Nicole Crawford, Sherridan Emery, and Allen Baird draw on 51 interviews
conducted with regional and remote students to show how the multiple eco-
systems of the university serve to exclude students in assessment. Assessment
for inclusion should value and draw upon the numerous assets and expertise
of students.
17. Roseanna Bourke, through a case study of self-assessment from her own
practice, shows that alternative assessment designs are necessary for assess-
ment for inclusion. Whilst these may not be popular initially, they can lead
to a greater focus on learning.
18. Geraldine O’Neill showcases her program of research in assessment choice,
highlighting the challenge of introducing multiple assessment methods for
staff and students, and offering a 7-step process on how to design, imple-
ment, and evaluate this approach in practice.
19. Joanne Dargusch, Lois Harris and Margaret Bearman adopt a students-
as-partners approach to change assessment practices towards greater inclu-
sion for students with disabilities. To achieve more inclusive assessment,
Finally, in Chapter 21, Rola Ajjawi, David Boud, Joanna Tai, and Trina Jorre de
St Jorre – the editors – close the book with concluding remarks and ways for-
ward, identifying common refrains that persist throughout the chapters, across
their various perspectives and foci.
We hope this book prompts all those who are part of the higher education
sector to engage in more critical conversations about assessment, to reflect on
its purposes and designs towards inclusion and social justice, and to make it valid
for all students. We also hope that it speaks to academics and professional staff
responsible for the design and delivery of assessment to prompt ethical reflexiv-
ity, collaboration, and compassion. We hope that any students who are reading
this book can draw on ideas presented here to improve their own assessment
experiences, to get involved as partners and to advocate for others. And finally,
we hope that researchers expand their ways of knowing when researching assess-
ment – taking stronger theoretical understandings and applying critical lenses to
assessment. The editors would like to thank all the authors for their generosity
and careful scholarship in developing their chapters and engaging in open peer
review and revisions.
SECTION I
Macro contexts
of assessment for
inclusion: Societal and
cultural perspectives
1
PROMOTING EQUITY AND SOCIAL
JUSTICE THROUGH ASSESSMENT
FOR INCLUSION
Joanna Tai, Rola Ajjawi, David Boud,
and Trina Jorre de St Jorre
DOI: 10.4324/9781003293101-3
its way into national and international legislation and policy (e.g., Convention on
the Rights of Persons with Disabilities (CRPD) 2006; Disability Discrimination Act
1992; Equality Act 2010). Early work in higher education assessment focused on
the logistics and implementation (Waterfield and West 2006). However, prior
to this, the concept of inclusion was already used frequently within the school
sector, representing initially the consideration of special needs students, and had
also already shifted to considering any student who faced barriers to participation
in education (Hockings 2010).
The term “inclusive assessment” has been defined as “the design and use of fair
and effective assessment methods and practices that enable all students to demon-
strate to their full potential what they know, understand and can do” (Hockings
2010, 34), which speaks mainly to the certification aspect of assessment, rather
than considering how assessment interacts and is entangled with curriculum and
learning and how assessment may also contribute to future learner trajectories
and identities. While a good starting place for assessment design work, a more
expansive purpose is required.
McArthur (2016) more recently introduced the concept of assessment for social
justice, which seeks to achieve the broader purposes of “justice of assessment
within higher education, and to the role of assessment in nurturing the forms of
learning that will promote greater social justice within society as a whole” (968).
She argues that considering social justice in assessment is a necessary move, since
previous ideas of justice in assessment focused on fairness of assessment proce-
dure, rather than considering if the outcomes of assessment were just. This con-
strains possibilities for inclusion, since the greater potential for societal impacts,
which relate to just outcomes of assessment, is largely ignored. McArthur
continues this discussion in Chapter 2, identifying synergies and distinguishing
the differences between assessment for social justice and assessment for inclusion.
Similarly, in this chapter, we take assessment for social justice as a broader phi-
losophy and argue that “assessment for inclusion” might be positioned at the
nexus of the procedural and outcome aspects of assessment, through which social
justice might be achieved. This is to say, we are focusing on the specific and
overall design of assessments, albeit framing assessment design more broadly than
just the task, to also consider interactional processes, policy, people, spaces, and
materials (Bearman et al. 2017).
Within the broader philosophical notions of social justice, we already see
two conceptualisations of assessment for inclusion in the literature. Nieminen
(2022) calls for “radical inclusion” of marginalised groups of students. He posi-
tions assessment for inclusion as reflexively drawing on individual accommo-
dations and inclusive assessment design. Assessment for inclusion is positioned
as “a critical and resistive approach to assessment: it recognises the prevalent
socio-cultural, -historical and -political positioning of marginalised students
in assessment and, if needed, explicitly disrupts such positioning by promoting
student agency” (5–6). Nieminen’s conceptualisation comes from a program of
research underpinned by social justice and critical theories (see also Chapter 6).
Our own positioning for assessment for inclusion is more pedagogical in flavour,
seeking to mainstream assessment for inclusion for all students, by making inclu-
sion an everyday lens of assessment design. Student agency should certainly be
a key pillar of any assessment design, but we are perhaps more pragmatic. We
suggest “‘assessment for inclusion’ captures the spirit and intention that a diverse
range of students and their strengths and capabilities should be accounted for,
when designing assessment of and for learning, towards the aim of accounting for
and promoting diversity in society” (Tai et al. 2022a, 3). There is room for both
conceptualisations in overcoming the entrenched nature of structural inequality
and traditional practices in our assessment regimes.
We now turn to contemplate how inclusion should be considered. Within the
higher education literature, inclusion can refer to both disability inclusion and
social inclusion. Stentiford and Koutsouris (2021) remind us that “inclusion is an
elusive concept, intertwined with difficult to resolve tensions” (2245). Inclusion
can refer to many equity groups that are usually named in relation to disability
access (including physical disabilities, learning disabilities, and mental and phys-
ical health conditions) and widening participation initiatives (including students
from low socio-economic backgrounds, Indigenous peoples, and mature age
students). Thus, we adopt the word inclusion in all its meanings. While there
may be an ever-growing list of categorisations to consider when thinking about
assessment, students are not just the groups they belong to, and they may con-
sider themselves as belonging to several groups and sub-groups (Willems 2010).
Therefore, we should focus not so much on whether students are members of
any given equity group (which may be a heuristic that deflects attention from
specific structural issues), but on the underlying issues commonly represented
within these groups. That is, assessments as currently constructed do not lead to
equitable assessment processes, experiences, and outcomes.
Being “fair” in assessment might have once been about ensuring that all
students face equal – that is, the same – conditions. However, with an inclusion
and equity lens, what is considered “fair” in assessment is the subject of ongoing
discussion (O’Neill 2017; Riddell and Weedon 2006). Fairness can also depend
significantly on the perceptions of individuals. Even students themselves are
concerned that accommodations or adjustments give students with disabilities
or other conditions some kind of “unfair” advantage (Grimes et al. 2019a).
Addressing one disadvantage might be seen by a different student as inappro-
priately advantaging another. Though accommodations and adjustments are
deliberately made to construct as level a playing field as possible, they can only
respond to existing barriers or impediments which can be readily identified.
An equity and social justice focus calls on us to do more than identify barriers;
instead, we should design assessment proactively to enable all students to
demonstrate their learning in suitable ways, without needing to reveal personal
characteristics which may not be apparent in order to gain reactive accommodations.
“Fairness” may then not be enacted through equal treatment – rather, it can
take advantage of and draw strengths from diverse student backgrounds, goals,
and capabilities.
have been criticised as ableist due to features like eye tracking that expect to see
unobstructed neurotypical eye movements (Logan 2020).
What this brief tour through common assessment practices shows is that
educators and assessment designers need to be more critical of their assessment
practices and see them in a wider context. In turn, universities need to create
mechanisms to critically appraise common assessment practices, examine how
they act to exclude, and identify alternatives. In the next section, we identify
current practices that seek to promote inclusion in assessment.
task. The option to choose the assessment format has been perceived positively
by most students (Chapter 18; Tai, Ajjawi, and Umarova 2021). However, care-
ful consideration of how these options align with learning outcomes is neces-
sary, both within a unit/module of study, and across the entire program/course.
Consideration could also be extended to what types of capabilities students may
require beyond university and this may lead to an emphasis on, for example,
authentic assessments (Chapter 6) or assessments that encourage and celebrate
distinctiveness (Chapter 13).
A programmatic approach to assessment (Schuwirth and Van der Vleuten 2011)
is also likely to be helpful when explicitly used, to establish a shared understand-
ing of when and how learning outcomes will be assessed, across a collection of
assessments which have been subject to wider and deeper scrutiny. Programmatic
assessment design teams should involve those who know about the exclusionary
effects of various assessments, so that the needs voiced by all these perspectives are met.
When assessment is supported appropriately (i.e. scaffolded tasks with increasing
complexity/difficulty), this certainty may also allay anxiety, stress, and pressure
which many students report (Craddock and Mathias 2009). This may be espe-
cially important in light of the prevalence of mental health conditions amongst
students (Grimes et al. 2017).
However, to genuinely disrupt current notions of assessment, we need to look
to broader theoretical perspectives which interrogate the taken-for-grantedness
of much assessment discussion and the hegemony of ableist, positivist discourses.
Philosophical and sociological examinations of the purposes of assessment for
inclusion may help to open new ways of thinking, for example critical disability
perspectives such as Jain (Chapter 3), and Whitburn and Thomas’s ontological
perspective (Chapter 7), the decolonial approaches posed by Lambert, Funk,
and Adam (Chapter 5), Indigenous ways of knowing by Gleeson and Fletcher
(Chapter 4), or Burke’s invocation of timescapes (Chapter 8). In order to see
how assessment may have inappropriately exclusionary effects, it is useful to
have conceptual and metaphorical levers to draw sharp attention to the effects
of taken-for-granted assessment practices and ways in which alternatives might
be imagined.
Action on inclusion should not be left to individuals and their good will and
commitment. Understanding how policy at different levels shapes the way that
assessment does or does not serve inclusive purposes also sheds light on what
might be refined (Chapter 9). Meanwhile, limited regulatory and ethical frame-
works around artificial intelligence in assessment might be leading to exclusion
and bias (Chapter 11). We also need to privilege research and development with
students to understand their needs and mobilise their agency to effect change.
For example, we need to understand students’ needs and experiences in more
nuanced ways (Chapters 14–16) and as genuine partners in this endeavour of edu-
cation (Chapters 19 and 20). Finally, we need further exploration and evidence
generation in naturalistic settings to consider what works, and what does not
work, how and why, to promote inclusion (Chapters 17 and 18).
Conclusion
Inclusion looks different in different contexts, for different people in different
cultures. A constant reminder that there is no “one size fits all” approach is
necessary to continue work in this space. Shutting down possibilities too early,
or not exploring potential avenues for inclusion, is likely to lead to a similar
situation to the one in which we currently find ourselves: where we have settled on one
approach (accommodations and adjustments) which leaves assessment practices
unexamined and unchanged, without seeking alternative paths which may serve
more students – and indeed universities – better. Instead, what we are calling for
with the concept of assessment for inclusion is not just a pragmatic fix. By interro-
gating assessment, we begin to view the whole curriculum differently through
considering what may promote inclusion, equity, and participation. What we
hope to achieve is to open new challenges to ways in which we think about not
just assessment but higher education practices broadly, and the implications that
choices in adopting theory, designs, or practices of assessment have for diverse
learners, both now and into the future.
References
Ashworth, M., Bloxham, S., and Pearce, L. 2010. “Examining the Tension between
Academic Standards and Inclusion for Disabled Students: The Impact on Marking of
Individual Academics’ Frameworks for Assessment.” Studies in Higher Education 35 (2):
209–223. https://doi.org/10.1080/03075070903062864.
Bearman, M., Dawson, P., Bennett, S., Hall, M., Molloy, E., Boud, D., and Joughin, G.
2017. “How University Teachers Design Assessments: A Cross-Disciplinary Study.”
Higher Education 74 (1): 49–64. https://doi.org/10.1007/s10734-016-0027-7.
Boud, D. 1995. “Assessment and Learning: Contradictory or Complementary.” In Assessment
for Learning in Higher Education, edited by Peter Knight, 35–48. London: Kogan Page.
Burke, P. J., Crozier, G., and Misiaszek, L. I. 2016. Changing Pedagogical Spaces in Higher
Education. Abingdon: Routledge. https://doi.org/10.4324/9781315684000.
Convention on the Rights of Persons with Disabilities (CRPD). 2006. United Nations. https://
www.un.org/development/desa/disabilities/convention-on-the-rights-of-persons-
with-disabilities.html.
Craddock, D., and Mathias, H. 2009. “Assessment Options in Higher Education.”
Assessment and Evaluation in Higher Education 34 (2): 127–140. https://doi.org/10.1080/
02602930801956026.
Department of Education Skills and Employment. 2020a. “2019 Section 11 Equity Groups.”
https://www.dese.gov.au/higher-education-statistics/resources/2019-section-11-
equity-groups.
Department of Education Skills and Employment. 2020b. “Completion Rates of Higher
Education Students – Cohort Analysis, 2005–2019.” https://www.dese.gov.au/higher-
education-statistics/resources/completion-rates-higher-education-students-cohort-
analysis-2005-2019.
Disability Discrimination Act. 1992. Commonwealth of Australia. https://www.legislation.
gov.au/Details/C2018C00125.
Equality Act. 2010. United Kingdom. https://www.legislation.gov.uk/ukpga/2010/
15/contents.
Grimes, S., Scevak, J., Southgate, E., and Buchanan, R. 2017. “Non-Disclosing
Students with Disabilities or Learning Challenges: Characteristics and Size of a
Hidden Population.” Australian Educational Researcher 44 (4–5): 425–441. https://doi.
org/10.1007/s13384-017-0242-y.
Grimes, S., Southgate, E., Scevak, J., and Buchanan, R. 2019a. “University Student
Perspectives on Institutional Non-Disclosure of Disability and Learning Challenges:
Reasons for Staying Invisible.” International Journal of Inclusive Education 23 (6): 639–655.
https://doi.org/10.1080/13603116.2018.1442507.
Grimes, S., Southgate, E., Scevak, J., and Buchanan, R. 2019b. “Learning Impacts
Reported by Students Living with Learning Challenges/Disability.” Studies in Higher
Education 46 (6): 1146–1158. https://doi.org/10.1080/03075079.2019.1661986.
Hockings, C. 2010. “Inclusive Learning and Teaching in Higher Education: A Synthesis of
Research.” EvidenceNet, Higher Education Academy. www.heacademy.ac.uk/evidencenet.
Ketterlin Geller, L. R. 2005. “Knowing What All Students Know: Procedures for Developing
Universal Design for Assessment.” Journal of Technology, Learning, and Assessment
4 (2). https://ejournals.bc.edu/index.php/jtla/article/view/1649.
Ketterlin Geller, L. R., Johnstone, C. J., and Thurlow, M. L. 2015. “Universal Design of
Assessment.” In Universal Design in Higher Education: From Principles to Practice (Rev. ed.),
edited by Sheryl Burgstahler, 163–175. Cambridge, MA: Harvard Education Press.
Kurth, N., and Mellard, D. 2006. “Student Perceptions of the Accommodation Process
in Postsecondary Education.” Journal of Postsecondary Education and Disability 19 (1):
71–84. http://ahead.org/publications/jped/vol_19.
Li, I. W., and Carroll, D. R. 2019. “Factors Influencing Dropout and Academic
Performance: An Australian Higher Education Equity Perspective.” Journal of Higher
Education Policy and Management 42 (1): 14–30. https://doi.org/10.1080/1360080X.
2019.1649993.
Lipnevich, A., Panadero, E., Gjicali, K., and Fraile, J. 2021. “What’s on the Syllabus?
An Analysis of Assessment Criteria in First Year Courses across US and Spanish
Universities.” Educational Assessment, Evaluation and Accountability 33 (4): 675–699. https://
doi.org/10.1007/s11092-021-09357-9.
Logan, C. 2020. “Refusal, Partnership, and Countering Educational Technology’s Harms.”
Hybrid Pedagogy. https://hybridpedagogy.org/refusal-partnership-countering-harms/.
Marginson, S. 2016. “The Worldwide Trend to High Participation Higher Education:
Dynamics of Social Stratification in Inclusive Systems.” Higher Education 72 (4):
413–434. https://doi.org/10.1007/s10734-016-0016-x.
McArthur, J. 2016. “Assessment for Social Justice: The Role of Assessment in Achieving
Social Justice.” Assessment and Evaluation in Higher Education 41 (7): 967–981. https://
doi.org/10.1080/02602938.2015.1053429.
Nieminen, J. H. 2022. “Assessment for Inclusion: Rethinking Inclusive Assessment in
Higher Education.” Teaching in Higher Education. http://doi.org/10.1080/13562517.
2021.2021395.
O’Neill, G. 2017. “It’s Not Fair! Students and Staff Views on the Equity of the Procedures
and Outcomes of Students’ Choice of Assessment Methods.” Irish Educational Studies
36 (2): 221–236. https://doi.org/10.1080/03323315.2017.1324805.
O’Shea, S., Lysaght, P., Roberts, J., and Harwood, V. 2016. “Shifting the Blame in
Higher Education – Social Inclusion and Deficit Discourses.” Higher Education Research
and Development 35 (2): 322–336. https://doi.org/10.1080/07294360.2015.1087388.
Riddell, S., and Weedon, E. 2006. “What Counts as a Reasonable Adjustment? Dyslexic
Students and the Concept of Fair Assessment.” International Studies in Sociology of
Education 16 (1): 57–73. https://doi.org/10.1080/19620210600804301.
Introduction
Assessment for social justice (McArthur 2016, 2018) was conceived as a broad
concept to encapsulate the multi-faceted ways in which assessment attitudes,
values, and practices can nurture greater social justice within and through
higher education. Key to the early development of assessment for social justice
was a challenge to largely procedural views of justice as fairness in assessment.
These deeply ingrained, socially embedded views emphasise fair procedures as
the foundation of just assessment: get the right procedures in place and we can
be assured that our assessment practices are fair. Such thinking underpins many
of the taken-for-granted assessment practices that are still common today:
students must do exams in a time-limited way; students should be assessed
in the same way; assignments should be submitted at the same time; students
should undertake exams at the same time and place; and the same rules should
apply to everyone (albeit with largely charitable exceptions for exceptional
circumstances).
Assessment for social justice does not disregard the importance of fair proce-
dures, nor the importance of equitable treatment and academic integrity, but it
does shift the focus from the procedures to the outcomes of social justice. The
other significant change heralded by assessment for social justice, compared with
traditional notions of assessment fairness, is that the focus encompasses all those
involved in assessment, not simply the students. For assessment to be socially just,
it must not cause injustice or misrecognition to the staff who undertake different assessment tasks. Injustice to assessors is a growing problem in highly regulated higher education systems with mounting workload pressures (Shin and Jung 2014). Assessment represents an important moment in the life of
a teacher in two ways. Firstly, it can signify a moment of student achievement,
which should be a joyous event when we see some of the outcomes of our stu-
dents’ learning. Secondly, it can signify the necessity for care and commitment,
which should be joyful in its own way: this is the moment where we see what our
student does not understand, and therefore how we can continue to help them.
When staff are denied these moments of joy, it serves as a form of professional
misrecognition which is unjust. I’ll explain more about this concept of misrecog-
nition later in this chapter.
The third dimension of assessment for social justice is the relationship to
society. Higher education serves several socially important roles, including nur-
turing the professionals who will go on to work in employment/social roles, and
the citizens who will help shape the broader character of society. Assessment can,
and should, intersect with these social roles, and in assessment for social justice,
I argue that this involves enabling positive social change, not just reproducing
the status quo. The overall purpose of assessment needs to be understood by
staff, and students, through this social lens. Social justice should shape the nature
of assessments and how students are assessed. The goal is for graduates to have
knowledge, skills, and dispositions which are orientated towards contributing to
a more just society.
Assessment for social justice is therefore both a very broad concept and one
which, in my own conceptualisation (McArthur 2018), is framed in a fairly specific
way grounded in Frankfurt School critical theory. Assessment for inclusion is a
welcome initiative to focus on the development of specific aspects of assessment for
social justice. This chapter represents both a reflection on the concept of assessment
for social justice, which I first conceived over eight years ago, and a looking for-
ward to the possibilities of understanding and practicing just and inclusive assess-
ment, which are heralded by the exciting initiative of assessment for inclusion.
These are different concepts but they serve our joint endeavour of better assessment
and a just society.
Rather than vilifying students for being concerned with ‘what was on the
exam’, this interest became recognized as perfectly reasonable. The notion
that students should study for a term and then find an exam full of tricks
and surprises was unveiled as pedagogically questionable and ethically
unsound.
(McArthur 2018, 2)
Assessment for social justice is an idea under which different possible practices,
dispositions and beliefs can coalesce and find meaning. My original explora-
tion (McArthur 2016) provides a rationale for the concept. It was a statement of
intent: how could we think differently about assessment? I made five proposals.
Firstly, that assessment is not only about the procedure of assessing a certain
moment, but about the outcome of engagement with knowledge that lasts. Here
there are clear resonances with Boud’s (2000) sustainable assessment. Secondly,
I argued for a new way of dealing with difference, one that does not rest on charitable exceptions and the assumption of a single ideal set of student circumstances.
This is the thread that is most clear in assessment for inclusion (Tai et al. 2022).
Thirdly, I challenged the idea of a perfect mark and the deceptive nature of
highly differentiated grading systems. This was a theme picked up again in my
book, under the idea of assessment honesty (McArthur 2018). Fourthly, I estab-
lished that the purposes of assessment cannot involve a disarticulation of the
social and economic realms: preparing students for life beyond the institution
means more than preparing them for work alone. Finally, I asked, who should
make assessment decisions? This question goes beyond students as partners in assessment to deep reflection on whose interests assessment serves and how all those involved can have a voice.
This first exploration of the rationale of assessment for social justice drew on
both the capabilities approach of Sen (2010) and Nussbaum (2011) and critical
theory, including the older work of Adorno (2005) and the current work of
Fraser (2003, 2007). I ended by saying:
I took up my own invitation when I wrote the book Assessment for Social Justice,
and other colleagues are now taking it up in a different way with Assessment for
Inclusion. My book was distinct from the original article in many ways, but most
obviously I chose to narrow my theoretical lens in order to work through the
idea, focusing much more on third generation critical theorist, Axel Honneth.
Does this mean you have to buy into Honneth’s critical theory in order to buy into assessment for social justice? No, but this point does require some explanation.
In Assessment for Social Justice, I aimed to bring together what I consider the
radical pessimism of the early Frankfurt School with the contemporary work
of third generation critical theorist, Axel Honneth. I did so particularly for the
The key to beginning to realise assessment for social justice is being prepared
not only to think differently, but to talk openly in different ways: to bring new
words into faculty meetings, course team meetings or even corridor chats. Words
like joy, compassion, adventure, care and kindness: all of these belong in our
assessment discussions, and in using these to demonstrate our thinking differently,
we can foster change.
achievement and social wellbeing, will take some time and considerable cultural
change to achieve. In this study, we looked for instances where student discussions
of assessment displayed an orientation to self, discipline/profession or society. Out
of 427 interviews, we found only a handful of instances where students articu-
lated a connection between their assessment activities and broader society: and
most of these were “fleeting or tangential” (McArthur et al. 2021, 8). Of those
orientations to society that we did find, most were in South Africa, rather than
our other two locations of England and the USA, possibly reflecting the promi-
nence of social justice issues in South African everyday culture and discourse. On
the other hand, that observation makes our outcome even more disappointing:
why did more students in South Africa not see this social connection?
Two examples from this study exemplify the challenges facing assessment for
social justice. The first is demonstrated by the story of a student with the pseu-
donym Scarlet, who is going to be the focus of further research as we continue
this longitudinal project to its eighth and final year. Scarlet’s first year interview
transcript is a joy to read. It resounds with quotes about saving the world and
making South Africa a better place. But by second year these thoughts are hard to
find. And by third year they have disappeared altogether, and Scarlet’s only con-
nection between assessment and the wider world is ensuring she gets employed
by a company. Clearly it is not for us to criticise Scarlet’s focus and ambitions;
however, if we return to the interconnection of individual and social wellbeing,
we are potentially seeing a diminution of Scarlet’s individual wellbeing as her
focus on that of others appears to diminish.
A second lesson comes from a cluster of students at another South African
university who provided many of our examples of an orientation to society.
They were part of a cohort of students who did an assessment task exploring
solutions to water shortages (at the time some parts of South Africa were experi-
encing extreme water shortages). Water shortages are closely linked to issues of
social justice, racial justice and poverty in South Africa. But these issues did not
really feature in the “fleeting or tangential” connections these students made
between their assessment and society. Nor did other students undertaking the
same assessment task make any such connection at all. The same phenomenon
was also apparent in the earlier study of first year students, where an assessment
on environmentally sustainable transport did not give rise to any statements of
connection to society (McArthur 2020a). What we learn here is that an assessment topic with a social justice dimension may not, by itself, ensure that students make connections between their own assessment achievements and
social wellbeing. Indeed, in this study of Chemistry and Chemical Engineering
the importance of transformative curriculum and assessment design, in con-
junction with one another, becomes clear. In the very crowded curriculum
typical of these disciplines, and the assessment design which emphasises a fast
pace of moving from one assessment to another, there is little time or room for
the reflective space to consider one’s achievements that is needed for assessment
for social justice to get a foothold.
The other direction I am taking assessment for social justice involves greater
connection with work on epistemic (in)justice. Here I would very much like to
connect the idea of assessment for social justice with Ashwin’s (2020) recent work
on reclaiming the educational purposes of higher education: namely, transform-
ative engagement with knowledge. This would then extend to consider more
issues of epistemic injustice (Fricker 2007) in an assessment context.
Finally, the implication of assessment for social justice led to my rethinking of
authentic assessment and arguing for a reframing of what authenticity means in
terms of a student as a whole person (McArthur 2020b, 2022). I challenge the con-
flation of the concepts of “real world” and “world of work” which underpins a great
deal of assessment literature. Work is, of course, an important way in which many
people achieve esteem recognition, but this is not necessarily the case and there
are other avenues. Hence, shackling assessment purely to a narrow idea of work
significantly reduces the opportunities for genuine esteem recognition. In addition,
focusing on the task as the source of authenticity, rather than on the reason for doing it or on who the student doing it is, could lead to profoundly unjust outcomes.
The term authentic assessment is very popular at the moment, and there are
some excellent examples of authentic assessment practices (e.g., Sambell and Brown
2021), but this does not remove the need to reflect, challenge, and rethink.
Assessment for social justice requires a socially situated approach to assessment
that is prepared to challenge the taken-for-granted and habitual practices, even
those done in good faith. It is just such a challenge that assessment for inclusion
offers, with its focus on diversity and assessment design.
bring to the fore issues of student diversity and the importance of assessment
design that celebrates difference rather than disadvantages students who fall out-
side some fictional norm. The focus therefore is on students, but not in such a
way that is disarticulated from their relationships with assessors and with wider
society. Most importantly, assessment for inclusion shares the same commitment
to problematise, challenge, and rethink taken-for-granted assessment practices
and assumed guarantees of quality and fairness.
The significance of assessment for inclusion, from my perspective, is that it
demonstrates the value of an approach that focuses close-up on particular assess-
ment issues, and which nevertheless has very broad consequences for assessment
integrity, student wellbeing, and broader social justice. Assessment for social
justice was always meant to be an expansive umbrella, and none of us can do
everything all the time. This zooming in and out from broad philosophical per-
spectives to everyday practices is vital, and assessment for inclusion is an important
demonstration of how that can be done. At the same time, it demonstrates how
different lenses and normative values can be brought into a common endeavour.
Those writing on assessment for inclusion do not share a specific lens or indeed
world view: they certainly don’t adhere to the very specific way in which I used
critical theory to work through the possibilities of assessment for social justice.
This is a very good thing, and such diversity is essential.
The challenge we face, however, is to ensure diversity and a plurality of voices
rethinking assessment, without this drifting away from the core goal of thinking
through how assessment should be considered central to achieving the social jus-
tice purposes of higher education. What is important here, I believe, is not that
we all think the same, but that we understand when we are thinking differently.
When we bring our objectives and assumptions to the surface, we move the
conversation on productively and avoid the dangers of hidden forms of distortion
or domination.
Assessment for inclusion also heralds a holistic approach to both inclusion and
diversity; as such, it resonates with my own work to rethink inclusion in higher
education (McArthur 2021b). But “holistic” is another one of those buzzwords
that take off in higher education discourse. The challenge in my own work and
for assessment for inclusion is to retain the integrity of what we mean by holistic.
It is a complex word and practice that is too easily peppered through academic
literature without a real examination of what it means and the implications for
practice. To think of our students holistically involves, among other things, tem-
poral, spatial, interpersonal and cultural aspects. We have to not only understand
where our students have come from but also allow them to bring those identities
into university and to flourish not because they have adapted to the prevailing
stereotype, but because they have challenged it.
Thinking differently is at the core of social justice. From a critical theory
perspective, it provides a guard against passively accepting injustices that are not
easily seen, or even hidden in plain sight. For example, the broadly accepted
social norm of past decades where women were expected to remain in the home
and perform domestic duties was a case of injustice hidden in plain sight. Many
of the issues raised by assessment for inclusion are the same: injustice hidden in
plain sight. A clear example is the one already mentioned: using exceptional circumstances to adjust patently unjust traditional assessment systems to
make them seem inclusive.
From a critical theory perspective such as my own, the greatest harm comes
from leaving issues below the surface and unchallenged. The more open our
acknowledgement of issues and problems, and the more open our exchange of
different – even incompatible – views and solutions, the better. The strength
of assessment for inclusion is that, by focusing on a particular dimension of the
broader idea of assessment for social justice, academics can converge in one clear
place to continue this work of rethinking assessment. My hope is that others will
also take up the invitation, focusing on different dimensions that also comple-
ment, but vitally extend, the broader plane of assessment for social justice.
Conclusion
Assessment for social justice began as a challenge to ingrained assumptions about
assessment and as a commitment to realise the social justice potential of assess-
ment that was inherent in the work of early pioneers of assessment for learning.
It was a concept developed on the foundations of many other higher education
scholars, and yet it was also something that emerged in relative isolation for me
personally. The purpose was always for other scholars, researchers, and teachers
to take it up in their own ways. In assessment for inclusion, colleagues have
done just this with their focus on diversity and assessment design. The important
challenges inherent in the emerging work on assessment for inclusion more than
meet the call to action in assessment for social justice.
Note
1 See https://www.researchcghe.org/research/2015-2020/local-higher-education-engagement/project/knowledge-curriculum-and-student-agency/
References
Adorno, T. W. 2005. Critical Models. New York: Columbia University Press.
Ashwin, P. 2020. Transforming University Education: A Manifesto. London: Bloomsbury
Publishing.
Boud, D. 2000. “Sustainable Assessment: Rethinking Assessment for the Learning Society.”
Studies in Continuing Education 22 (2): 151–167. https://doi.org/10.1080/713695728.
Boud, D., and Falchikov, N. 2006. “Aligning Assessment with Long-term Learning.” Assessment and Evaluation in Higher Education 31 (4): 399–413. https://doi.org/10.1080/02602930600679050.
Boud, D., and Falchikov, N. 2007. “Introduction: Assessment for the Longer Term.” In
Rethinking Assessment in Higher Education: Learning for the Longer Term, edited by David
Boud, and Nancy Falchikov, 3–25. London: Routledge.
Introduction
Theory offers a strong starting place to develop assessment for inclusion. Theory
unveils current ways of thinking and doing, examines them, and identifies alter-
natives. Freire’s (2000) call to praxis for social change puts theory to work in
academic spaces. Praxis requires critical reflection on current conditions and
prompts transformative action, through theory. Theory that reveals taken for
granted power dynamics offers academic changemakers a starting place to inter-
rogate and revise practice to move towards inclusion.
In this chapter, I argue that critical disability theory is a necessary lens to
develop assessment for inclusion. Disability is frequently overlooked in liberatory
pedagogies and associated assessment theory (Kryger and Zimmerman 2020;
Waitoller and Thorius 2016). When disability is included, such as in Universal
Design for Learning research, it often fails to disrupt “the desirability of the
normate1 or normative curriculum itself ” (Baglieri 2020, 63). That is, traditional
efforts towards inclusive practice often seek to include disabled people into exist-
ing systems with minor changes. In contrast, critical disability praxis demands
fundamental transformation that disrupts notions of normalcy to create more
just worlds through and with disability. Any approach to assessment for inclu-
sion must seek to disrupt notions of normal and, therefore, requires engagement
with critical disability theory. To this end, I offer three interconnected theoret-
ical movements from critical disability studies that are necessary to problematise
and reframe assessment for inclusion: studies in ableism, crip theory, and critical
universal design. Pollinated with principles from disability justice (Sins Invalid
2019), these movements advance ways of thinking from disability that help to
develop assessment for inclusion and build its case.
Why crip assessment? 31
A critical disability studies lens begins from “the vantage point of the atypical”
(Linton 1998, 5) to identify how assessments exclude and how such exclusion could
be addressed. This way of looking assumes that disability can be desirable and
creates productive friction to imagine assessment anew (McRuer 2006). Critical
disability studies, however, does not stop with a disability-focused analysis; it goes
further by engaging intersectionality, identifying linkages across axes of margin-
alisation, and challenging normalcy (Goodley 2017; McRuer 2006). Critical dis-
ability studies theories, then, offer assessment for inclusion a lens that begins from
disabled people’s experiences to broadly question the assumptions built into assessments and their impacts. These tools demand reaching beyond mere inclusion to
cripping (McRuer 2006), a creative disability-led approach that dismantles exclu-
sionary arrangements. In the following sections, I introduce studies in ableism,
crip theory, and critical universal design. From each theoretical move, I identify
provocative questions to advance assessment for inclusion. These critical disability
lenses aid reconsideration of factors that construct assessment practices at multiple
levels: from university structures (e.g., semester timescape, rigid assessment word lengths by course level), to program-level expectations (e.g., uniform assessment
across all program courses), to individual course design. Thus, readers who occupy
different university roles (leadership, learning designers, course leaders) will find
examples that activate critical disability principles within their spheres of influence.
I invite readers to activate these provocative questions in their own work and bring them
to collegial discussions to spark collective contemplation.
Studies in ableism
Studies in ableism (Campbell 2009, 2017) conceptualise the foundational problem
of social exclusion as a system that continually (re)instantiates a false dis/ability
binary wherein those coded as “disabled” are excludable and those who approximate hegemonic norms of physical and mental ability are privileged. Campbell
(2017) explains that this hierarchical system is formulated and upheld through
dividing practices, which she outlines as differentiation, ranking, negation, noti-
fication, and prioritisation. Scholars and activists have demonstrated that ableism
is entwined with other marginalising systems, such as white supremacy, capitalism, and cis/hetero/patriarchy, which inform and reproduce norms of physical
and mental ability (Annamma, Connor, and Ferri 2013; Lewis 2022). Bailey
and Mobley (2019), for example, explain that “Notions of disability inform how
theories of race were formed, and theories of racial embodiment and inferiority
(racism) formed the ways in which we conceptualize disability” (27). To undo
this damaging system of ableism, the false binary of abled/disabled must be dis-
mantled. With notions of intersectionality and co-constitution in mind, ableism
must be dismantled in concert with other marginalising forces.
The university is deeply rooted in ableist practices. Dolmage (2017) explains
that academia, figuratively and literally, maintains “steep steps” to enter, succeed
in, and exit that persist despite claims of widening participation, access, and
equity. In fact, Mitchell (2016) argues that maintaining ableism appears funda-
mental to the business of the academy. Assessing ability and certifying mastery
are core functions of the university as we know it. Assessment can be understood
as a chief dividing practice of academic ableism. Differentiating and ranking
students by their ability to meet markers of academic success creates insiders and
outsiders. In this sense, the notion of “assessment for inclusion” creates a paradox:
because assessment is a central feature of an ableist system, it precludes inclusion.
If we want to undo damaging systems of exclusion, ought we not dispense with
assessment altogether? Are anti-ableist assessments even possible in the academy
as it currently operates? Further work to explore these questions is necessary, in
concert with a larger examination of academic ableism, to interrogate the pur-
pose and mechanisms of assessment.
Undoing academic ableism requires a reckoning with the academy’s purpose in
modern life. Studies in ableism demands, first, a critical examination of the pur-
pose of assessments and what is deemed necessary to assess. To begin, we might
consider the following questions:
Crip theory
Crip theory (McRuer 2006) offers a route to rethink the academy and assess-
ment, to dismantle ableism. Building from queer theory’s foundations, crip
theory declares that disability is a desirable force to disrupt taken-for-granted
notions of ability and normality demanded by neoliberal capitalism. This poten-
tial, McRuer (2006) argues, exists when we call out, fail, or refuse to meet
ableism’s demands for compulsory ablebodiedness and mindedness. Crip the-
ory centres disability, critiques dominant formulations of it, and asserts libera-
tory ways to be and do through and with disability. The theoretical orientation
towards desiring disability, rather than seeking to normalise or erase it, calls on
us to imagine radical futures with disability that reconceptualise seemingly fixed
presents (Kafer 2013). By insisting on radically inclusive futures, possibilities for
disabled people’s presents expand. Never ending with a static notion of disability,
a crip theory analysis leads to interconnected critiques of debilitating ideolo-
gies (e.g., capitalism, colonialism, hetero/cis/sexism, and white supremacy) and
invokes possible worlds that lie beyond (McRuer 2006). Crip theory suggests
that in assessment we must bring forth an understanding of ability and quality
that assumes and values all kinds of bodies and minds.
A crip theory lens calls on assessment for inclusion to design from disability,
to look for ways assessment can resist compulsory ablebodiedness and mind-
edness. To do so, we must search for existing knowledge that identifies prob-
lems and possible solutions, what Johnson and McRuer (2014) call cripistemologies,
lived knowledge from the critical, social, sensory, political, and personal position
of disability. Put more simply, Lau (2021) defines cripistemologies as “ways of
knowing that are shaped by the ways disabled people inhabit a world not made
for them” (3). Seeking cripistemologies of assessment might begin with consider-
ing ways disabled people fail to fit current assessment expectations and redesign
from these “failures” (Mitchell, Snyder, and Ware 2014). Crip time and interde-
pendence offer two illustrative examples.
Crip time concerns temporality. It is built through experiences such as pain,
differing forms of cognition, communicating with sign language (and through
interpreters, assistive technology, and so on), and navigating medical and social
systems (Kafer 2013; Price 2011; Samuels 2017; Zola 1993). Disabled students reg-
ularly face university expectations that temporally misalign with their embodied
experience, resulting in what one disabled medical student described as con-
stantly “battling time” ( Jain 2020, 127). Miller (2020) exposed the power of
neoliberal temporality to marginalise students who are LGBTQ+ and disabled,
including through assessment mechanisms such as attendance, participation, and
rigid deadlines that did not account for experiences of disability and regular
experiences of anti-LGBTQ+ bias. Such assessment regimes affected students
academically and tended to limit their ability to engage in activist work and
other community spaces (Miller 2020). Crip time suggests not just a need for
more time, but an exploded concept of time that is flexibly managed, negotiated,
and experienced (Kafer 2013; Price 2011; Samuels 2017; Wood 2017).
Engaging the notion of crip time requires that assessment assumes learners
will operate on varied temporalities. Therefore, we must seek to explode notions
of linear, normative time and tempo in assessment design. Beyond those with a
formal disability label, assessments built on crip time would produce allied ben-
efits, for example, for learners who are carers, who must work, and for whom
English is not a first language. Lau (2021), for example, describes alternative
strategies built through an understanding of crip and pandemic time that move
away from time-sensitive assessments towards alternative mechanisms such as
asynchronous discussion boards, cumulative and semester-long reflective journal
A crip theory lens on assessment for inclusion re-centres disabled students and
considers how their lived experience can productively inform assessment. To
begin rethinking assessment with crip theory, we might consider the following
questions:
Then, to shift away from ableist assessments that enforce compulsory ablebodiedness and mindedness, we must seek to understand disabled people’s work-arounds,
resistances, or failures to meet current expectations.
• How and why do learners struggle to perform (or fail) on current assessments?
• How do learners work around, or ask for exceptions to, current assessments?
How might this inform redesign?
notion of “universal” (Hamraie 2017). That is, rather than a diffused understanding
of universal, critical universal design demands attention to particularity, working
with those most marginalised in current systems to design anew. This approach to
universal design attends to the root causes of disabled people’s marginalisation in educational environments, taking ableism seriously, in contrast to more “pragmatic”,
partial approaches that seek to de-centre disability (e.g., Tobin and Behling 2018).
Taking a critical universal design approach to assessment for inclusion would
begin prior to developing assessments. The questions posed throughout this
chapter provide productive starting points to think about the intention of assess-
ments and their impacts. Stepping back to think about what must be assessed,
why, and the potential consequences in the context of a broad conception of
the universe of potential learners, forces deliberate contemplation towards
inclusive assessment practices. The conceptualisation of potential learners must
undergo critique to ensure a bold outlook that seeks to expand the learner pro-
file and engages intersectionality. For example, this must include a broad group
of students with disabilities, including those who are also Black, Indigenous,
queer, and people of colour. From this intentionally broad base, design would
incorporate, from the earliest stages, ongoing consultation with those learners
most marginalised by current arrangements to consider pitfalls and possibilities
in assessment and build more flexible and inclusive design. Such an approach
would also require deep, ongoing work with academic staff to develop a critical
universal design habitus, recognise the historical roots of educational exclusion
and their contemporary echoes, and cultivate a critical universal design stance
towards education, including in assessment. Ensuring that the process is open-ended would build in flexibility and ongoing review at multiple levels, from a single class to the program, school, and university levels.
Scholars from disability studies seek more inclusive assessments through prac-
tices that align with critical universal design. Their accounts focus on thoughtful
design that anticipates heterogeneous disabled students will inhabit the class-
room, infuses flexibility as a matter of course, and promotes co-construction
such that universal design is treated as a verb (Dolmage 2017). For example,
Polish (2017) engages multimodal discussions of assessments via a Google Doc,
in course blogs, or on paper, where students pose questions, note what they
would like to change, and indicate aspects they are excited about, offering a
route towards further assessment customisation. Others describe similar efforts
that engage with students to actively (re)formulate assessments that amplify their
strengths and interests (Castrodale 2018; Kryger and Zimmerman 2020; Lau
2021). These negotiations are conducted with all students and without the need
to substantiate or justify the desire for change. Another common strategy is to
build flexibility into set assessment modes. Castrodale (2018) designs assessment
rubrics flexible enough to account for multiple forms of engagement, allow-
ing students to choose the best mode to express their learning, from a written
essay to a podcast, video, student-instructor conference, or poster, among other
options. Bones and Evans (2021) build in dropped assignments and late passes
that may be used without negotiation, as well as a list of assessments students can
choose from. Others outline the myriad ways they assess participation beyond
speaking in class (McKinney 2016; Stanback 2015).
While our focus here is assessment, it is important to note that stories of larger-
scale implementation of critical universal design that move beyond a single course
to a program, school, or university remain thin in the literature. Though assess-
ment is a crucial site requiring change, without larger-scale attention, ableist forces
will remain central in academic environments and constrain inclusive innovation.
For example, Castrodale (2018) indicates the need to query departmental or pro-
gram grading expectations such as expected averages, curriculum prerequisites,
and reporting timelines that may impact what is possible within a classroom.
A critical universal design praxis for assessment reactivates disability politics
in design from the start. We might begin with fundamental questions about our
learning environments:
We seek to understand ways of being, doing, and knowing that are not currently assumed in educational design, in order to consider how current practices might shift. To do so, we might pursue the following lines of inquiry:
• What do learners (in particular, those with disabilities and others most mar-
ginalised by educational and social systems) tell us about how they could best
demonstrate their learning?
• How can assessments assume diverse bodies and minds from the outset?
• How will we know our assumptions are sufficiently broad?
Embracing intersectionality and crip theory keeps the practice alive and iterative.
We must consider:
Conclusion
While developed from a disability perspective, the theoretical tools introduced
here broadly question how learners and learning have been conceptualised and
are critical to furthering assessment for inclusion. Because assessment is rooted
in hierarchies of value among minds, critical evaluation of its purpose, form,
and function is needed. Examining notions of ability, how they are coded and produced in assessments and more broadly within educational environments, is central to this work.
38 N. R. Jain
Acknowledgements
The author would like to acknowledge the support of Professor Missy Morton,
Professor Christine Woods, and the Imagining the Anti-ableist University pro-
ject, as well as postdoctoral fellowship funding from Waipapa Taumata Rau the
University of Auckland and the Business School’s Equity Committee, which
contributed to the completion of this chapter.
Note
1 Garland-Thomson (1997, 8) explains that the normate is “the constructed identity of
those who, by way of the bodily configurations and cultural capital they assume can
step into the position of authority and wield the power it grants them”. Similar to, and
bound up in, whiteness, the normate is a figure often made invisible that nonetheless
dominates the workings of our social worlds. Adopting Price’s (2015) argument for
bodymind, I consider the normate to include mental configurations.
References
Annamma, S. A., Connor, D., and Ferri, B. 2013. “Dis/Ability Critical Race Studies
(DisCrit): Theorizing at the Intersections of Race and Dis/Ability.” Race Ethnicity and
Education 16 (1): 1–31. https://doi.org/10.1080/13613324.2012.730511.
Baglieri, S. 2020. “Toward Inclusive Education? Focusing a Critical Lens on Universal
Design for Learning.” Canadian Journal of Disability Studies 9 (5): 42–74. http://doi.
org/10.15353/cjds.v9i5.690.
Bailey, M., and Mobley, I. A. 2019. “Work in the Intersections: A Black Feminist
Disability Framework.” Gender and Society 33 (1): 19–40. http://doi.org/10.1177/
0891243218801523.
McRuer, R. 2006. Crip Theory: Cultural Signs of Queerness and Disability. New York:
NYU Press.
Miller, R. 2020. “Out of (Queer/Disabled) Time: Temporal Experiences of Disability
and LGBTQ+ Identities in U.S. Higher Education.” Critical Education 11 (16): 1–20.
http://doi.org/10.14288/ce.v11i16.186495.
Mingus, M. 2017. “Access Intimacy, Interdependence, and Disability Justice.” Leaving
Evidence (blog). April 12, 2017. https://leavingevidence.wordpress.com/2017/04/12/
access-intimacy-interdependence-and-disability-justice/
Mitchell, D. T. 2016. “Disability, Diversity, and Diversion.” In Disability, Avoidance, and
the Academy, edited by David Bolt, and Claire Penketh, 9–20. Oxon: Routledge.
Mitchell, D. T., Snyder, S. L., and Ware, L. 2014. ““[Every] Child Left Behind”:
Curricular Cripistemologies and the Crip/Queer Art of Failure.” Journal of Literary
and Cultural Disability Studies 8 (3): 295–313. http://doi.org/10.3828/jlcds.2014.24.
Polish, J. 2017. “Final Projects and Research Papers: On Anti-Ableist Assignment Design.”
Visible Pedagogy (blog). July 12, 2017. https://vp.commons.gc.cuny.edu/2017/07/12/
final-projects-and-research-papers-on-anti-ableist-assignment-design/
Price, M. 2011. Mad at School. Ann Arbor: University of Michigan Press.
Price, M. 2015. “The Bodymind Problem and the Possibilities of Pain.” Hypatia 30 (1):
268–284. http://doi.org/10.1111/hypa.12127.
Reeve, D. 2012. “Cyborgs, Cripples and iCrip: Reflections on the Contribution of
Haraway to Disability Studies.” In Disability and Social Theory: New Developments and
Directions, edited by Dan Goodley, Bill Hughes, and Lennard Davis, 91–111. London:
Palgrave Macmillan.
Samuels, E. 2017. “Six Ways of Looking at Crip Time.” Disability Studies Quarterly 37 (3).
http://doi.org/10.18061/dsq.v37i3.5824.
Sebok-Syer, S. S., Chahine, S., Watling, C. J., Goldszmidt, M., Cristancho, S., and
Lingard, L., 2018. “Considering the Interdependence of Clinical Performance:
Implications for Assessment and Entrustment.” Medical Education 52 (9): 970–980.
http://doi.org/10.1111/medu.13588.
Sins Invalid. 2019. Skin, Tooth, and Bone. Rev. ed. Berkeley: Sins Invalid.
Stanback, E. B. 2015. “The Borderlands of Articulation.” Pedagogy 15 (3): 421–440.
https://doi.org/10.1215/15314200-2917009.
Tobin, T. J., and Behling, K. 2018. Reach Everyone, Teach Everyone: Universal Design for
Learning in Higher Education. Morgantown: West Virginia University Press.
Waiari, D. A. K., Lim, W. T., Thomson-Baker, A. P., Freestone, M. K., Thompson,
S., Manuela, S., Mayeda, D., Purdy, S. P., and Le Grice, J. 2021. “Stoking the Fires
for Māori and Pacific Student Success in Psychology.” Higher Education Research and
Development 40 (1): 117–131. https://doi.org/10.1080/07294360.2020.1852186.
Waitoller, F. R., and Thorius, K. A. K. 2016. “Cross-Pollinating Culturally Sustaining
Pedagogy and Universal Design for Learning: Toward an Inclusive Pedagogy that
Accounts for Dis/Ability.” Harvard Educational Review 86 (3): 366–389. http://doi.
org/10.17763/1943-5045-86.3.366.
Wong, A. 2020. “I’m disabled and need a ventilator to live. Am I expendable during
this pandemic?” Vox, April 4, 2020. https://www.vox.com/first-person/2020/4/4/
21204261/coronavirus-covid-19-disabled-people-disabilities-triage
Wood, T. 2017. “Cripping Time in the College Composition Classroom.” College
Composition and Communication 69 (2): 260–286.
Zola, I. K. 1993. “Self, Identity and the Naming Question: Reflections on the Language
of Disability.” Social Science and Medicine 36 (2): 167–173. https://doi.org/10.1016/
0277-9536(93)90208-L.
4
INDIGENOUS PERSPECTIVES
ON INCLUSIVE ASSESSMENT
Knowledge, knowing and the relational
J. Gleeson and G. Fletcher
What does it mean to “assess” a person’s learning? The common answer might
appear to be quite clear: a student is “taught” some “thing” and is then required
to demonstrate that they “understand” what they have been taught. The way this
“demonstration of understanding” takes place may be scaffolded: at first, an expla-
nation of knowledge; followed by an application of knowledge, and so it goes.
But these concepts – “assess”, “knowledge”, “understand”, and so on – do not
fully capture Indigenous Ways of Valuing, Knowing, Being, and Doing (Arbon
2008; Martin and Mirraboopa 2003). There is not always one right way, and
the existence of many viewpoints, standpoints, and knowledges can sit uncom-
fortably within wider institutions. In short, a “major challenge for academics is
decision-making around what students need to know, and how to get them ‘to
know it’ and ‘accept it’” (Nakata 2017, 3).
This chapter evolved from us, as two First Nations academics, coming together to share and narrate our insights and experience across notions of inclusive education, and to offer a reflection on Indigenous perspectives in assessment contexts. The chapter moves between two sections: the first, provided
by co-author Gleeson, considers First Nations learning spaces in the context
of Indigenous and non-Indigenous students; and the second, from co-author
Fletcher, examines the tensions of assessment within a more specific First Nations
context. Drawing upon these apertures, we consider how inclusive assessment
may be enacted, and finally offer some thoughts on how these perspectives and
understandings may be further developed.
In providing these perspectives, we acknowledge our standpoints in doing so:
not only as First Nations women but also as academics who sit at, and at times,
within, the “cultural interface” (Nakata 2002, 2006, 2017). The challenge for us –
and our students, both Indigenous and non-Indigenous – exists in drawing on
Indigenous perspectives in negotiating this “cultural interface”. This interface is
one in which we may be free to assess students on what they have learned, but the
ways in which we do so are encompassed by wider structures. These structures
prescribe the methods and approaches of assessment to ensure a quantified, con-
sistent result: all students gain a comparable and acceptable level of knowledge
and skills upon completion of their degree. We acknowledge it is a space where the particularities of what and how we know are embedded in our subjectivities, our locations and, as we will show, the “self” as a site of knowledge contestation and nuanced tension that requires ongoing negotiation. These encounters can be constraining, and sites of collision in many ways, but as we argue, they offer powerful transformative opportunities: when the cultural interface is viewed in relational terms, they reveal inclusive assessment approaches that can deepen understanding and practice, and as such act as an enabler for students and teachers alike.
the wider systemic issues that First Nations scholars are often faced with when
designing units, and how universities can move to address these hurdles to ensure
a consistent embedding of First Nations knowledges.
Perspectives on assessment
How do we, as First Nations people, assess knowledge? As oral storytellers, the
question of how knowledge is passed down whilst being “accurate” is not a new
one. Rather, our knowledges have required these “assessments” for thousands
of years, in the form of various checks and balances that each system permits.
A “story” could have layers of learning embedded in it, and may be accompanied by dance or music, or told as part of a wider ceremony, depending on its purpose.
Sveiby and Skuthorpe (2006) provide the example of the crane and the crow: a
story of the Nhunggabarra people, in which the crane and the crow are at odds
with one another regarding a piece of fish. The subsequent discussion on layered
learning provides an illustrative example of how one story may hold many hid-
den and deeper meanings, and can therefore contain a community’s “archives,
law book, educational textbooks, country maps, and Bible – in short the whole
framework for generating and maintaining the knowledge base of the people”
(Sveiby and Skuthorpe 2006, 42).
In our units, we adopt an approach of layered learning: we return to the same
questions, topics, and prompts across units and apply a series of “layers” in doing
so. In some ways, this echoes mainstream approaches of scaffolded learning: stu-
dents are equipped with increasingly complex forms of knowledge, and in turn
apply these (Cho and Cho 2016). But the process of learning is also reiterative: in
discussing the impacts of colonisation, we turn in one unit to the loss of knowl-
edge, in another the effect on Country, and in a third to the ongoing consequences
on community health. Accordingly, the assessment tasks for each of these units and
topics must build on, and re-use, the knowledge gathered in previous units.
Each of the methods of assessment used exists within wider structures: a three-
year degree, a 1000-word essay, a 12-week semester, and so on. Consequently,
when we set about the process of assessment, we rapidly arrive at the cultural
interface between Western structures, and Indigenous Knowledges. For exam-
ple, despite thousands of years of oral storytelling practices, if we built a series
of units that relied only on oral presentations, we would quickly find ourselves
needing to justify to wider university committees how these assessment tasks
captured a student’s knowledge.
These structures can still serve a purpose for us, as First Nations staff – they
allow us to change the curricula, and change the teaching approaches, so that
we can “do our job more effectively” (Nakata 2013, 298). But these improved
outcomes are still dependent on the context and specificities of each university
and its associated “Whitefella” practices – those methods and structures of assess-
ment we need to work alongside, to assist the professionalisation and systemisation
of our practices of teaching (Nakata 2013). In short, the outcomes for teaching
Indigenous Knowledges are only as good as the system they are embedded within.
How we achieve these outcomes, and reconcile these Whitefella practices
with our own, is a continuing, collaborative process. Working alongside and
within these practices requires knowledge of the right conditions: who to talk to
for support, when to submit changes to assessment, and what words to use within
the submission. In much the same way that Country has indicators of seasonal
changes, the university curriculum environment has its own. The right person
needs to be in the right place at the right time of year. The right words need to be
used on the right form. The right committee members need to be told in advance
of the submission, and their support needs to be gained. And finally, the right
meeting needs to be attended, and approval granted. These practices – forms,
committees, and emails – are not a unique challenge for First Nations staff. But
how we reconcile and “style” our knowledges to sit within these Whitefella
practices is one of the difficulties faced by First Nations teaching staff. Broader
understandings within the university of culturally appropriate assessment are a
useful start for respecting (and ideally, embedding) Indigenous Knowledges; but
beyond this there are hurdles built within the system itself that cannot be over-
come without significant collaboration and partnership from others within the
same environment.
terrain for “deeper understanding” of that nexus and exchange. The first part of
this element required students to post reflections under four distinct headings in
an online discussion board:
• What facilitated your learning;
• What impeded your learning;
• How might your learning be enhanced; and
• Commenting upon other people’s experience.
This was an ongoing task, with one reflective post required weekly. The
second part of the assessment, examined here, was an extension of the Learner’s
Biography, where students narrated (Indigenous voice) their learning experi-
ence in a Knowledge Circle. Students were asked to draw from their weekly
postings, including concepts and literature they had been exposed to. They
were encouraged to bring “artefacts” that may have represented anchorage or
a sense of meaning to their learning experience, to extend their own personal
subjectivity and identity, and their particularity of experience. Each student
was allocated ten minutes for their “presentations” in the Knowledge Circle,
and the assessment was marked against a rubric that we had developed with key
assessment criteria being:
Performance of task
The assessment session began with students volunteering the order of their presentations in the Knowledge Circle. As each student spoke, it became increasingly evident that their reflective narratives, and the concept of the Learner’s Biography itself as a broader task relating to education, were transformed with
each recitation. There was a clear and ongoing departure from the “marking
criteria”, despite students’ previous briefing and circulation of all relevant infor-
mation. What emerged were narratives clearly embedded within personal his-
toriographies, with references to family, community, the Stolen Generations,
and policies and practices of ongoing colonisation. As a result, the space was
transformed to a shared arena of personal and cultural decompressing, and for
some students, an exposure of ongoing wounding.
Students focused on their experiences of “being” Indigenous, and their story,
rather than the experience of being “learners” as a compartmentalised aspect of
self; clearly, this demarcation of the particularities of “multiple identities” was indivisible from the experience of being constructed as “Indigenous” and the cultural aspects and responsibilities of their identities and Indigenous standpoints.
From an assessment perspective, the rubric became a problematic tool. We found that, in attempting to fit each historiography into the neatly delineated criteria we had devised, the criteria either did not apply or the departure was so significant that marking against them was impossible.
Students began to extend their storied responses and texture these around the
growing thematic articulations and collapsing of the strictures of the assessment.
And whilst we struggled with the measures of assessment we had carefully devel-
oped, we found ourselves equally immersed within a cultural collective focused
on the importance of the student’s vocality and sensitivity to the emotional dif-
ficulty of this “closed but public” discourse.
Over the 90 minutes allocated for the “assessment”, it was evident that the culture of the space, and its spatiality, had changed significantly. And yet it was
an organic, unconscious shift that seemed guided by needs beyond the students
and staff present and the learning context. This “Knowledge Circle” became a
cultural location and an explicitly Indigenous social context: a locality where all that the students brought represented not just themselves but their home
communities, their histories, and experiences, against an historic backdrop of
exclusion and marginalisation – an imagined and real community sharing both
similar and different experiences in an inclusive remaking of place that enabled
each member to speak, spill, and explore beyond the frame of the dominant
knowledge system and its measures.
And we have made clear our own reflexivity about our own inclusion in assessing levels of knowledge. But this space is not ours alone. As Nakata (2002, 285) has
suggested:
the intersection of the Western and Indigenous domains … the place where
we live and learn, the place that conditions our lives, the place that shapes
our futures and, more to the point, the place where we are active agents in
our own lives – where we make our decisions – our lifeworld.
Conclusion
This chapter has focused on the transformation of assessment to be more inclusive through creating sets of structural conditions that can enable collaboration and diverse standpoints for all. As First Nations teachers, we seek to teach and evaluate in ways that are socially explicit and culturally viable, within a theoretical and practical model that can assess according to the values of social justice, Indigenous meaning, relationality, and the whole person.
Reflexively and collaboratively with our non-Indigenous colleagues, we seek
to share our insights and perspectives to co-create, explore, and expand the cul-
tural interface as a space of transformation and the new. These examples narra-
tivise the ongoing tensions of the cultural interface – and find ways to liberate
an embedded otherness and the ongoing discursive terrain that needs to be con-
tinually theorised in finding equitable domains and the enabling points that can
resist and register according to the implicit need for emergence and liberation.
Inclusive assessment is an ongoing process, and one that must be lived to be
enacted upon and alongside. Our accounts in this chapter emphasise the need
to “read” the Country of curriculum design, and understand how and where
to intercede and change, and then reflexively, change again. We therefore argue
for the continued expansion and development of the cultural interface to facili-
tate opportunities for curriculum refinement and change. Finally, we also note
that our experiences outlined here are just two amongst many. We therefore
emphasise the need for, and invite, additional accounts and contributions of our
peers’ insights to provide further standpoints and perspectives in the ongoing and
reflexive process of inclusive assessment design.
References
Arbon, V. 2008. Arlathirnda Ngurkarnda Ityirnda: Being-Knowing-Doing, De-Colonising Indigenous
Tertiary Education. Teneriffe, QLD: Post Press.
Biermann, S., and Townsend-Cross, M. 2008. “Indigenous Pedagogy as a Force for
Change.” The Australian Journal of Indigenous Education 37 (S1): 146–154. http://doi.org/
10.1375/S132601110000048X.
Cho, M.-H., and Cho, Y. 2016. “Online Instructors’ Use of Scaffolding Strategies to
Promote Interactions: A Scale Development Study.” The International Review of Research
in Open and Distributed Learning 17 (6). https://doi.org/10.19173/irrodl.v17i6.2816.
Donovan, M. 2015. “Aboriginal Student Stories, the Missing Voice to Guide us towards
Change.” Australian Educational Researcher 42: 613–625. https://doi.org/10.1007/s13384-
015-0182-3.
Lowitja Institute. 2012. “Using Yarning as Research Methodology.” Accessed March 12,
2022. https://www.lowitja.org.au/using-yarning-research-methodology.
Martin, K., and Mirraboopa, B. 2003. “Ways of Knowing, Being and Doing: A Theoretical
Framework and Methods for Indigenous and Indigenist Research.” Journal of Australian
Studies 27 (76): 203–214. http://doi.org/10.1080/14443050309387838.
Nakata, M. 2002. “Indigenous Knowledge and the Cultural Interface: Underlying Issues
at the Intersection of Knowledge and Information Systems.” IFLA Journal 28 (5–6):
281–291. http://doi.org/10.1177/034003520202800513.
Nakata, M. 2006. “Australian Indigenous Studies: A Question of Discipline.” The Australian
Journal of Anthropology 17 (3): 265–275. https://doi.org/10.1111/j.1835-9310.2006.
tb00063.x.
Nakata, M. 2013. “The Rights and Blights of Politics in Indigenous Higher Education.”
Anthropological Forum 23 (3): 289–303. https://doi.org/10.1080/00664677.2013.803457.
Nakata, M. 2017. Disciplining the Savages — Savaging the Disciplines. Canberra: Aboriginal
Studies Press.
Nakata, M., Nakata, V., Keech, S., and Bolt, R. 2012. “Decolonial Goals and Pedagogies
for Indigenous Studies.” Decolonization: Indigeneity, Education and Society 1 (1): 120–140.
Santoro, N., Reid, J., Crawford, L., and Simpson, L. 2011. “Teaching Indigenous
Children: Listening to and Learning from Indigenous Teachers.” Australian Journal of
Teacher Education 36 (10). http://dx.doi.org/10.14221/ajte.2011v36n10.2.
Sharmil, H., Kelly, J., Bowden, M., Galletly, C., Cairney, I., Wilson, C., and Hahn,
L. et al. 2021. “Participatory Action Research-Dadirri-Ganma, Using Yarning:
Methodology Co-design with Aboriginal Community Members.” International
Journal of Equity Health 20: 160–171. https://doi.org/10.1186/s12939-021-01493-4.
Sveiby, K. E., and Skuthorpe, T. 2006. Treading Lightly: The Hidden Wisdom of the World’s
Oldest People. Sydney: Allen and Unwin.
5
WHAT CAN DECOLONISATION
OF CURRICULUM TELL US ABOUT
INCLUSIVE ASSESSMENT?
Sarah Lambert, Johanna Funk, and Taskeen Adam
Introduction
One of the strengths of an inclusive approach to education is that all students
benefit. It’s not just about accommodating and improving education for students
with diverse abilities and cultures. Inclusive education that models respectful and
productive relationships between students with diverse knowledges, cultures,
histories, and identities also shows majority or privileged students the strength
and contribution made by those with different backgrounds.
From the perspective of cultural inclusion, inclusive assessment as a sub-set of inclusive education can aim to: provide justice for Indigenous students, international students, and students from minority cultural backgrounds; and cultivate in all students an
understanding of the need for cultural justice and the value of multiple cultural
knowledge perspectives. Inclusive assessment – particularly if part of inclusive
curriculum – has the potential to provide all students with better graduate outcomes than assessment that draws only on the Western canon of ideas. The idea
is that all students should graduate with multiple kinds of knowledges and leave
better prepared to negotiate different worldviews and cultures in their lives.
However, this vision for inclusive education and assessment has not yet been widely realised in practice. Higher education tends to consider students who
are not from White, English-speaking middle-class backgrounds as “disadvan-
taged”, less-capable students who lack the “cultural capital” needed to navigate
university terminology and processes. Students from Indigenous, international,
or migrant backgrounds are often considered doubly disadvantaged for having
to study in a second or third language and for being first in family to go to
university.
Our work has been informed by theories of social justice and decolonisation
which reject these narratives of underperformance for the way they focus on
what a student lacks (e.g., “proper English”) instead of the abilities they possess
such as learning across multiple languages and cultures. Focussing on lacks rather
than embracing diverse motivations for study is known as “deficit discourse” and
higher education is awash with it (Burke 2012). The problem of deficit discourse
is that it leads us to want to mould students who are not like us to be more like
us. Our assessments and their grading criteria often ask students to think like us,
speak like us, and write like us (where the majority of “us” in Western higher
education are White) and be rewarded with good marks and university success.
Students may accept, reject, or mediate the need to assimilate to succeed. One
mediating response is the contemporary cultural practice of “code-switching”.
Code-switching is where students who speak different forms of English, such as Black English, Aboriginal English, or African-American Vernacular English, learn to switch between their local English and the English required of them at university and beyond. A similar process happens when it comes to writing in English too. Code-switching requires additional cognitive effort but it
does allow students to move between two similar but distinctly separate worlds.
Rather than making an effort to incorporate the actual English of millions of
students into Western education, the sub-text of our learning outcomes is clear:
we do not recognise your own English as legitimate; work harder to change.
In addition to our previous understanding of higher education as exclusionary
to working-class students’ values and language (O’Shea 2016), current approaches
to students from different cultural-linguistic backgrounds can be seen as contem-
porary expressions of racist White assimilationist or White Supremacist policies
(Baker-Bell 2020). But what are the alternatives? Social justice and decolonial
approaches are an alternative that we explore for assessment for inclusion in the
next section before introducing a Culturally Inclusive Assessment model developed
from a range of empirical and theoretical sources.
Justice-as-content opportunities
• Whose cultural knowledges are the focus of assessment questions; is there a
rationale for this? How might students use more diversified cultural exam-
ples or options?
• Whose knowledges and perspectives might be missing from reading lists and
assessment resources? To what extent are, for example, women and authors
of colour cited in practical examples and theoretical frameworks?
• How frequently do staff review essential and assessment related readings
and examples to weed out deficit language which might unintentionally
reinforce exclusionary stereotypes? Libraries and/or Teaching and Learning
Justice-as-process opportunities
• How can assessments be designed to allow students to situate their culture
at the centre of their learning, while still recognising and appreciating other
cultures?
• How can students be supported to develop skills in learning about all
cultures and their entanglements within particular fields of study?
• How might students’ high impact contributions to their socio-cultural
communities be recognised as knowledge in pre-admission assessments of
students’ capability?
• How can a recognition of prior learning approach be brought into classroom
conversations to recognise students’ existing cultural knowledge within
examples and assessment conversations?
• How can two-way learning and dialogue be modelled rather than one-way
“inputs” provided in feedback and assessment?
Justice-as-pedagogy opportunities
• How can assessments that foster “unlearning” be introduced in early classes
to explicitly address students’ pre-existing assumptions and language of
difference as a foundational learning activity for the discipline?
• In upper-level classes, how can students be engaged in a process of address-
ing under- and misrepresentation in curriculum materials by assessing the
research and development of newly decolonised learning materials?
• How and when can students be scaffolded to critically read new material,
including materials they source as part of assessment work, to avoid reinforc-
ing stereotypes or misrepresentations in the field?
• What kinds of assessment items could be modified to include reflections or
measures of students’ development of critical consciousness from the begin-
ning of their learning journey?
• When should questions be added to course feedback surveys asking students
how assessments could be more inclusive?
Decolonisation of curriculum and inclusive assessment 61
While the main aim of this chapter has been to focus on diversifying and
decolonising learning and assessment, it is important to recognise that the
broader educational landscape is founded on many colonial logics. Drawing on
Bali (2018, 305), “[a]ttempts at inclusion can only be authentic and meaningful
when we make the content, process, and outcome of education more egalitarian,
open, and inclusive”. Decolonising assessment practices will be more effective
when coupled with decolonising content and curriculum, embracing criti-
cal pedagogy and praxis, diversifying staff, encouraging interdisciplinary and
cross-disciplinary collaboration, questioning academic processes that determine
what counts as knowledge and what doesn’t, and questioning power structures
within our institutions. These diverse angles on knowledge practices can offer
justice as content, process, and pedagogy at the level of the institution, which can
better lead to more just and inclusive assessment.
References
Adam, T. 2020a. Addressing Injustices through MOOCs: A Study among Peri-Urban, Marginalised,
South African Youth. University of Cambridge. https://doi.org/10.17863/CAM.56608.
Adam, T. 2020b. “Between Social Justice and Decolonisation: Exploring South African
MOOC Designers’ Conceptualisations and Approaches to Addressing Injustices.”
Journal of Interactive Media in Education 2020 (1). https://doi.org/10.5334/jime.557.
Baker-Bell, A. 2020. Linguistic Justice: Black Language, Literacy, Identity, and Pedagogy.
New York: Taylor and Francis. https://doi.org/10.4324/9781315147383.
Bali, M. 2018. “The ‘Unbearable’ Exclusion of the Digital.” In Disrupting the Digital Humanities,
edited by Dorothy Kim and Jesse Stommel, 295–319. Brooklyn: Punctumbooks. https://
doi.org/10.21983/P3.0230.1.00.
Bates, P., Chiba, M., Kube, S., and Nakashima, D., eds. 2009. Learning and Knowing in
Indigenous Societies Today. Paris: UNESCO Publishing.
Bhabha, H. K. 2004. The Location of Culture. 2nd ed. London: Routledge. https://doi.
org/10.4324/9780203820551.
Burke, P. J. 2012. The Right to Higher Education: Beyond Widening Participation. London and
New York: Routledge.
Cross, B. E. 2003. “Learning or Unlearning Racism: Transferring Teacher Education
Curriculum to Classroom Practices.” Theory into Practice 42 (3): 203–209. https://doi.
org/10.1207/s15430421tip4203_6.
Devlin, M. 2013. “Bridging Socio-Cultural Incongruity: Conceptualising the Success
of Students from Low Socio-Economic Status Backgrounds in Australian Higher
Education.” Studies in Higher Education 38 (6): 939–949. https://doi.org/10.1080/
03075079.2011.613991.
Fraser, N., Dahl, H. M., Stoltz, P., and Willig, R. 2004. “Recognition, Redistribution
and Representation in Capitalist Global Society: An Interview with Nancy Fraser.”
Acta Sociologica 46 (4): 374–382. https://doi.org/10.1177/0001699304048671.
Freire, P. 1970. Pedagogy of the Oppressed. London and New York: Penguin Books.
Funk, J. 2021. “Caring in Practice, Caring for Knowledge.” Journal of Interactive Media in
Education 2021 (1): 1–14. https://doi.org/10.5334/jime.648.
Gonzalez, J. M. 2011. “Suspending the Desire for Recognition: Coloniality of Being, the
Dialectics of Death, and Chicana/o Literature.” UC Berkeley.
DOI: 10.4324/9781003293101-8
64 J. H. Nieminen
“reading comprehension skills”. By offering enough support (but not too much)
on the disability-specific hindrances (but not for anything else), it is possible to
provide fair access to assessment for disabled students (see e.g., Holmes and
Silvestri 2019; Lovett and Lewandowski 2020).
This approach to “inclusive assessment” reflects the medical model of disability, which understands disabilities mainly as personal deficits that need to be cured,
fixed, and accommodated. This model sees disabilities as something to be miti-
gated in assessment, rather than something that enriches it (Nieminen 2021). As
the support mechanisms of higher education rely on the medical model, disabled
students might be further marginalised and excluded in academic communities
(Nieminen and Pesonen 2022).
Within such a medical model, assessment accommodations are likened to medical treatment. Just as a certain illness is cured with a certain medicine, a certain disability type (e.g., dyscalculia) should be paired with an adequate type of assessment accommodation (e.g., permission to use a calculator). Assessment accommodations should, then, be based on objective psycho-cognitive knowledge. The accommodation literature is dominated by psychometrics, leaving approaches grounded in ethics, care, and social justice at its margins. For example, advocacy roles are commonly portrayed as risks in assessment:
only 16% are integrated into society through work, and only 2% have a full-time job. It is reported that many people with intellectual disabilities work for free because they are not told of their right to a salary. According to Väylä, the average pay for people with intellectual disabilities is 7 euros per day, from which a lunch fee (€4.90) is often deducted. In fact, Väylä was founded to ensure that “in the future, every person with intellectual disabilities receives an appropriate salary for their work” (Väylä 2022, my translation). These shameful statistics remind us of
the inability of Finland to include people with intellectual disabilities in society –
and definitely not in universities.
How could “inclusive assessment” challenge ableism on a broader societal
level? One possible answer can be found in Finnish legislation for universities.
Finnish universities have a three-fold mission of 1) independent academic research,
2) research-based education, and 3) the promotion of socially impactful research
(Universities Act 2009). However, academic funding models consistently prioritise the first two missions, while the third has remained unimplemented and unsupported (Heinonen and Raevaara 2012). Behold, the measured university! Inclusive assessment, understood through an anti-ableist stance, brings all
these three missions together.
Through such a stance, assessment is harnessed as a vehicle for creating more inclu-
sive futures in higher education.
How could such an idea be put into practice? One concrete example is the
study by Thompson (2009) who introduced community action projects as a form
of authentic assessment in statistics education. Students took part in authentic
projects in which they provided statistical analyses to support the independent-living needs of blind adolescents and adults, and to create multi-sensory education environments for disabled students. This study is an inspiring example because the
students worked in close collaboration with the communities and with the end
users of the statistical tools in particular. Other examples of authentic assessment
projects might be collaborations with disability organisations and other relevant
stakeholders whose voice is rarely heard when developing inclusive higher edu-
cation. Moreover, such projects might include activism and campaigns for more
inclusive teaching and assessment policies in higher education. In social sciences,
students might help to organise a system-wide professional development pro-
gram for staff on diversity and disabilities. These are of course just a few exam-
ples. Below, I outline some guidelines for authentic assessment for inclusion.
Orientation
Concerned with the design and implementation of assessment for inclusion, in
this conceptual chapter we discuss sustainable orientations towards equitable
ways of working by adopting a theory that embraces the ontological turn. What we mean by this is that we want to use theory to think with (Jackson and Mazzei 2011), which concentrates on the “nature of being and the basic categories of existence” (St. Pierre, Jackson, and Mazzei 2016, 99) as an ethical project. This contrasts with what assessment tends to emphasise: a constructivist approach to evidencing understanding and knowing against preconceived learning outcomes
(Sadler and Reimann 2017). Our reason for taking this conceptual pathway will
become clearer as this chapter unfolds; though to briefly introduce it here, a
push for evidence-informed practices in education tends to obfuscate context
and circumstance, ignoring complex structural and social impacts on student
achievement. As Spina observes, “arguments in favour of standardised testing
and evidence-informed decision making are frequently framed around the need
for evidence as a means of increasing achievement and equity” (Spina 2018,
335). However, as she and others (e.g., McArthur 2016) have forcefully argued,
approaches to equity in education that start with evidence-based “best practices”,
and that espouse equity in so doing, tend to be framed by a determination to
set a level playing field, whereby difference among student groups is minimised.
Consequently, social justice in education through these practices remains elusive.
In this chapter, then, we build a case for centring ontology in assessment for
inclusion and social justice by paying attention to the implications of diversity in
educational design. The discussion takes place in two interrelated movements.
In the first, we explore ways inclusive education has been differentially framed in
the tertiary sector across 40 years, and correspondingly, how educational design
DOI: 10.4324/9781003293101-9
Ontological assessment decisions 75
Inclusive times
We live in a fascinating period of educational and social history, in which matters
of equity underpin policies and practices in higher education. Indeed, widen-
ing participation in higher education has been a prominent policy strategy in
Australia since the late 1980s for students whose profile and/or living conditions
are not reflective of the mainstream (Bennett and Burke 2017). This has not been
straightforward, with divergent priorities taken up over this period. For instance, widening participation for student diversity in higher education was initially taken to mean ensuring that institutions are more representative of their populations; at present the concept has been expanded, so that broader inclusion in higher education is prioritised for its contribution to a more functional economy (Adam 2003). Rights-based arguments have also been prevalent internationally,
although these centre on a liberal humanist universal norm to which to aspire,
and in so doing, they have tended to favour inclusion for discrete categories
of identity, such as people with disabilities, cultural diversities, and sexualities
(Whitburn and Thomas 2021).
Indeed, whichever mast we nail our colours to, the underlying premise
behind contemporary inclusive education discourse across the sectors is that all
individuals can take part on the basis that they are equal stakeholders in the
marketplaces which dominate our lives (Simons and Masschelein 2015). Though
as scholars of inclusive education have pointed out (Dolmage 2017; Whitburn
and Thomas 2021), interventions targeting specific student identities do little to
address entrenched barriers to inclusion in education. Here we want to take the
notions of equity and social justice further, to consider how they shape teaching
practices in higher education, and more specifically how they influence what
Dawson et al. (2013) refer to as assessment decisions. That is, how conditions in
the higher education sector lead to making particular decisions about the role
and purpose of assessment in educational design. Indeed, these concerns pertain
both to the “what” and “how” of assessment – both in the ways assessments are
designed and implemented, and “the role of assessment in nurturing the forms
of learning that will promote greater social justice within society as a whole”
76 B. Whitburn and M. K. E. Thomas
(McArthur 2016, 968), which together form the root of the present discussion
across both movements of the chapter.
Supplying fair opportunities to local and global communities through
inclusive teaching and learning features highly on university strategic mission
statements internationally. However, rigid practice standards, academic integ-
rity, and the development of individual students’ core skills to increase employ-
ability are often given centre stage, leading McArthur (2016) to consider that
institutional concerns for procedural fairness overtake aspirations for increasing
and responding to student diversity. Regulatory compliance is at the fore when students are compelled to disclose disabilities to institutions so that they can then expect reasonable adjustments to be made to their programs of learning, rather than when the inclusiveness and accessibility of courses writ large are considered (Bunbury 2020). We suggest that educators would do well to consider what is reasonable, and inclusive, about adjustments, and further, how and why assessment decisions are made that foster learning conditions through which adjustments are necessitated for designated student groups. In noting that
extensions to time are a core means by which universities adjust programs of
assessment for particular students (Dolmage 2017), rather than engage with them
to demonstrate learning development (McArthur 2016), we acknowledge that
assessment in higher education is inescapably temporal.
Consider how time mediates learning design in higher education. Courses of
higher education are designed according to pre-conceived temporal milestones,
cast against national benchmarks of duration, be they 3-year undergraduate
courses or 2-year Masters programs. Years are typically divided into semesters,
splitting the year into teaching periods framed within pockets of time. Each
bi- or trifurcated measure is replete with regularly established pauses that stu-
dents must utilise to catch up should they fall foul of the predetermined pace of
learning progression; or perhaps if they can, to push forward in time, gaining the
elusive edge over their fellow students in a race against the clock to demonstrate
fledgling competency. Summative tasks in such programs are simultaneously
mediated by time, for “educational attainment targets and assessment apply the
invariable norm as measure” (Adam 2003, 63). Students are expected to turn in
assessment tasks on specific dates corresponding with their contractual agree-
ments as stipulated in course outlines (McArthur 2016), or else produce knowl-
edge on cue under timed examination conditions (Gilovich, Kerr, and Medvec
1993). Adjustments to such temporal expectations can be made, but only to those
who have verifiable reasons to make such interruptions, and only if those adap-
tations are considered reasonable (Bunbury 2020).
For committed students and engaged educators alike, these conditions to study
and receive judgement on submitted evidence of learning (Dawson et al. 2013)
may seem entirely feasible, and unassailable. Yet, these approaches to assessment
favour a normative, top-down approach to working with difference, in which
disruptions to the temporal order of teaching and learning are sanctioned on
the basis that they are documented as reasonable adjustments. Put differently,
students’ propensities for learning are associated with assumptions of being able to
comply with timed deadlines. Time pressures are, moreover, a principal reason that underrepresented students exclude themselves from higher education (Bennett and Burke 2017), yet institutions give little heed to the ways that normative frameworks of hegemonic time affect student engagement. There are two points of significance
worthy of consideration, related to matters of assessment procedures in higher
education. Firstly, as they are easier and quicker to control than the ways students
engage with relevant and professional knowledge, procedural concerns are given
primacy in assessment over ontological ones (Bennett and Burke 2017; McArthur
2016). As McArthur (2016) notes, a “focus on procedure in assessment thus leads
students away from the most important aspect of what they should be doing –
critical engagement with complex knowledge” (972). Secondly, these ways of
working with assessment and the design of education programs more broadly are
predicated on linear, neoliberal-driven notions of learning progression (Lingard
and Thompson 2017), which emphasise individualised skill development in sup-
port of economies. Theorists have surmised that we live in a period of sped up
and individualised psychology, and that higher education has consequently never
been as hyper-accelerated as in the present (Vostal 2014), wherein temporal com-
pressions, such as shortened teaching periods containing tight assessment dead-
lines have become de rigueur. As we have foreshadowed, while many can thrive
in fast-paced and self-driven environments of learning, left behind are those stu-
dents who are unable to conform to linear, normative progression, and institu-
tions of higher education risk marginalising these students further (Bennett and
Burke 2017; Whitburn and Thomas 2021). In the next movement of this chapter,
we turn to ontology, and consider its productive possibilities for assessment, and
making evidence of learning.
Turning to ontology
To recap, we have argued that higher education institutions invoke assessment in their course designs to privilege particular ways of being and engaging with knowledge: ways that evoke the universalist ideal that everybody can be equally included in a classroom. Balancing fairness in assessment by procedural means, in an attempt to achieve a level playing field, stifles students’ critical engagement with knowledge. This approach neglects to account for diversity,
and how time – “the way it is lived, experienced and (re)constructed through our
location, positionality and experience – is gendered, classed and racialised and tied
to unequal power relations and socio-cultural differences” (Bennett and Burke
2017, 2). To that end, the extent to which assessment can be meaningfully under-
stood as a hallmark of inclusive practice is contingent, in our view, on how it can
go beyond epistemological limitations – ways of knowing or not knowing – to
incorporate ontological awareness: the ways that knowledge affects co-existence.
As we briefly presaged at the start, the ontological turn in social science inquiry
is concerned with the nature of being (St. Pierre, Jackson, and Mazzei 2016).
It primarily shifts focus away from knowledge as fixed, infallible, and separate to
bodies, and thereby to be learned, held, and applied incontrovertibly, to an alter-
native point of departure that instead emphasises matter and meaning-making.
We draw here from the new materialism (St. Pierre, Jackson, and Mazzei 2016),
which is an orientation to social science inquiry that emphasises ontology to chal-
lenge categorical assumptions, including that which is material such as objects,
texts, and buildings and that which is non-material such as mood, time, and
intention. To consider inclusion through assessment in higher education gives us
scope to draw students’ attention towards the interconnections between things
that affect their experiences while engaging in the processes of meaning-making
about and for their chosen course of study. It supplies conditions for contexts of
learning in which students are made aware that educational programs and assess-
ment procedures are constructed (McArthur 2016), and that the knowledge that
is produced through learning is co-created, contingent on other variables, tem-
porary and forever changeable.
The co-creation of knowledge is of particular significance to an ontological
orientation to assessment in higher education. Similarly compelled to engage
ontology in approaches to assessment, Bourke (2017) observes that unnatural
divisions take shape through assessment practices in higher education: ones that
prevent teachers from forming legitimate partnerships with students, and that
also function to detach students from their learning. As she writes, “students
take less responsibility for their own assessment because they have learned to rely
on assessments that tell them that they had learned, and by how much” (Bourke
2017, 829), or perhaps, how little. Bourke (2017) advocates instead for self-
assessment approaches, which, in co-production with teachers and peers, allow
students to identify questions for investigation, and grow professionally through
their inquiries. Significantly, to ensure this approach led to strong outcomes for students, teaching staff were themselves required to justify the decisions they made about the types of assessment tasks set, and their purpose in supporting professional development. Assessment in use, then, is always changeable, being contingent on the profile of learners and teachers in context, and its utility lies in showing student learning aligned to these contexts.
Let us now discuss each of these tenets in turn, for how they set the groundwork
for an ontological orientation to assessment, drawing on an example applicable to
each to invite others to pursue a similar orientation in their assessment decisions.
outcomes are thereby formed not to assume static indicators of knowledge or skill acquisition, but to rest on the realities (evidence) created through relational interconnection. In an example of such an approach to learning design in inclusive education
for preservice teachers (Whitburn and Corcoran 2019), students are set summative
assessment tasks in which they are asked to articulate their conceptualisations of
heterogeneous learning environments, while decentring focus away from diagnos-
tic categories in favour of inclusive pedagogical approaches and accessibility consid-
erations. In so doing, they are assigned assessment partners and asked to reflect on
their interactions with one another in the development of their knowledge. What
is assessed, then, is how students come to recognise the ways that an ontological
orientation affects their understanding about diversity, and how they will use this
approach to knowledge making in their practices as school-based educators.
Conclusion
This chapter has sought to centre ontological awareness in assessment deci-
sions as the means to develop inclusiveness. Drawing on evidence using an EMI
framework, it engages with relational and temporal concepts to orientate towards
assessment for inclusion, providing examples of how these principles have been
used to develop assessment tasks in the scholarship of inclusive education. By designing assessment activities that attentively engage students in assessing their ongoing development, that encourage them to identify and work within the parameters of their strengths and those of their peers, and that apply these skills to the context of the profession in which they are studying, educators can move focus away from quantifying knowledge and towards assessment for inclusion. We optimistically predict wider acceptance of ontological orientations in the field, as they escape the clutches of constructivism and give educators the necessary theoretical resources to think with that promote affirmative ways of engaging difference.
References
Adam, B. 2003. “Reflexive Modernization Temporalized.” Theory, Culture and Society 20
(2): 59–78. https://doi.org/10.1177/0263276403020002004.
Bennett, A., and Burke, P. J. 2017. “Re/Conceptualising Time and Temporality: An
Exploration of Time in Higher Education.” Discourse: Studies in the Cultural Politics of
Education 39 (6): 913–925. https://doi.org/10.1080/01596306.2017.1312285.
Bourke, R. 2017. “Self-Assessment to Incite Learning in Higher Education: Developing
Ontological Awareness.” Assessment and Evaluation in Higher Education 43 (5): 827–839.
https://doi.org/10.1080/02602938.2017.1411881.
Bunbury, S. 2020. “Disability in Higher Education – Do Reasonable Adjustments
Contribute to an Inclusive Curriculum?” International Journal of Inclusive Education 24
(9): 964–979. https://doi.org/10.1080/13603116.2018.1503347.
St. Pierre, E. A., Jackson, A. Y., and Mazzei, L. A. 2016. “New Empiricisms and New
Materialisms.” Cultural Studies ↔ Critical Methodologies 16 (2): 99–110. https://doi.
org/10.1177/1532708616638694.
Vostal, F. 2014. “Academic Life in the Fast Lane: The Experience of Time and Speed in
British Academia.” Time and Society 24 (1): 71–95. https://doi.org/10.1177/0961463x13517537.
Whitburn, B., and Corcoran, T. 2019. “Ontologies of Inclusion and Teacher Education.”
In Global Perspectives on Inclusive Teacher Education, edited by Bethany M. Rice, 1–15.
Hershey, PA: IGI Global.
Whitburn, B., and Thomas, M. K. E. 2021. “A Right to Be Included: The Best and
Worst of Times for Learners with Disabilities.” Scandinavian Journal of Disability
Research 23(1): 104–113. https://doi.org/10.16993/sjdr.772.
SECTION II
Meso contexts of
assessment for inclusion:
Institutional and
community perspectives
8
INCLUSIVE ASSESSMENT
Recognising difference through
communities of praxis
DOI: 10.4324/9781003293101-11
88 P. J. Burke
that we draw from Paulo Freire’s concept of the circle of knowledge to do this
deep, reflexive, praxis-oriented work. This circle of knowledge brings together
knowledge emerging from across difference, including the knowledge of the
assessor but also knowledge within community contexts, to transform the lim-
itations of how knowledge is legitimated. Such approaches strive to open time
and space for the cyclical and reciprocal reformation of knowledge and know-
ing in the commitment to inclusion and diversity. This is a process of bringing
together disciplinary knowledge with the heterogeneous knowledge of those
groups, communities, and societies that have often been denied representation in
higher education curricula. This is demonstrated clearly in the example provided
above of the Fine Art team’s sustained commitment to bringing together in new
ways marginalised bodies of art with hegemonic bodies of art, in a project of
transformative equity.
participation, a student must have access to the resources and high-quality ped-
agogical opportunities that enable access to esoteric practices and institutionally
legitimised epistemologies. The student must also, though, be recognised and have access to representation as a fully valued member of the community (Burke, Crozier, and Misiaszek 2017), which means the inclusion of their experiences, knowledge,
and ways of knowing.
This requires valuing different ways of knowing and being and addressing
the historical inequalities that have shaped processes of institutional legitimacy. It
is also important to capture the affective, emotional, subjective, and lived experiences of misrecognition and misrepresentation that are felt in and through the
body as forms of symbolic violence and injury on the self (Burke 2012; McNay
2008, 150). This often leads to feelings of shame and fear (Ahmed 2004; Burke
2017) and is not a matter of lack of confidence but of sustained experiences of
symbolic violence over time.
Success in higher education depends on navigating assessment practices that
operate to recognise a student as “successful” or not. The student must decode
(often esoteric) forms of academic practice that are granted legitimacy through the
community of practice in which the assessment is located. Students from socially
privileged contexts often have access to a range of resources that enable them
to decode how to demonstrate academic capability through assessment practices.
The successful student must first understand how to write, speak, construct an argument or hypothesis, and read (and so forth) in ways that are recognised as institutionally legitimate forms of practice within a particular community of practice.
These academic practices are highly contextual, requiring students to develop
complex skills of decoding expectations and conventions across the different com-
munities of practice in which they are studying. Academic practices (e.g., con-
structing an argument, debating, formulating a problem, presenting with clarity
and coherence, being critical, etc.) tend to be misrepresented as neutral, decontextualised sets of technical skills and literacy that can be straightforwardly assessed
(Lillis 2001) and that students from disadvantaged social contexts simply lack. That
these academic practices are historically embedded in classed, racialised, and gen-
dered ways of doing is erased from view thus perpetuating exclusive forms of what
is named “inclusion”.
Final reflections
I have argued that, by resituating assessment through the lens of difference and by building communities of praxis, we might generate new timescapes in which to consider what assessment-for-inclusion means in the context of our fields and how we might effect social justice transformation. I suggest this requires orientations towards social justice praxis: the bringing together of theoretical insights on the workings of knowledge, power, and inequality with commitments to transformative forms of practice. I have also suggested that moving towards communities of praxis might open counter-hegemonic timescapes to grapple with
Inclusive assessment through communities of praxis 95
• How might we create the time and space to interrogate the values and
assumptions about the purpose(s) of HE and the right to higher education in
relation to assessment structures, practices, and inclusion?
• How might we more clearly articulate – and question – how we understand
potential, capability, and success? Who judges, how and with what implications?
• What might it mean to work with rather than against difference? How does
this translate to assessment practices?
• What are the opportunities to build on our communities of practice to
recreate these as communities of praxis? In what ways and contexts could these
be of value?
• How might we cultivate counter-hegemonic timescapes and collectively challenge
the hegemonic timescapes of contemporary higher education? What are the
possibilities? What are the challenges?
References
Adam, B. 1998. Timescapes of Modernity. London: Routledge.
Ahmed, S. 2004. The Cultural Politics of Emotion. New York, NY: Routledge.
Archer, L. 2003. Race, Masculinity and Schooling: Muslim Boys and Education. Maidenhead:
Open University Press.
Bennett, A., and Burke, P. J. 2018. “Re/conceptualising Time and Temporality: An
Exploration of Time in Higher Education.” Discourse: Studies in the Cultural Politics of
Education 39 (6): 913–925. https://doi.org/10.1080/01596306.2017.1312285.
Burke, P. J. 2002. Accessing Education: Effectively Widening Participation. Stoke-on-Trent:
Trentham Books.
Burke, P. J. 2012. The Right to Higher Education: Beyond Widening Participation. London:
Routledge.
Burke, P. J. 2015. “Re/imagining Higher Education Pedagogies: Gender, Emotion and
Difference.” Teaching in Higher Education 20 (4): 388–401. https://doi.org/10.1080/
13562517.2015.1020782.
Burke, P. J. 2017. “Difference in Higher Education Pedagogies. Gender, Emotion and
Shame.” Gender and Education 29 (4): 430–444. https://doi.org/10.1080/09540253.
2017.1308471.
Introduction
Equity is a concept interwoven with concepts of underrepresentation, fairness,
diversity, and inequality; it is defined and measured in terms of groups that are
more likely to participate (or are included) in higher education, and groups that
are less likely to participate (thereby facing exclusion). The ways in which inclusion
and exclusion are understood and defined in Australian society vary over time.
This book’s focus on assessment for inclusion can be considered a contemporary and
specific manifestation of concern for equity and fairness that spans the history of
Australian higher education from its genesis.
To understand the possibilities of assessment for inclusion in Australian higher
education it is important to understand the dominant paradigm for inclusion
and exclusion, and how this is sustained through policy and practice. The chap-
ter undertakes a policy analysis of “assessment for inclusion” drawing upon the
social and legal frameworks of the policy analysis toolkit (Althaus, Bridgman,
and Davis 2018) to understand how assessment for inclusion is framed, aligned,
implemented, and evaluated within higher education policy.
Australia introduced an equity policy framework in the 1990s that priori-
tised access to higher education for designated equity groups, a framework that
remains intact 30 years later (Harvey, Burnheim, and Brett 2016). A major fea-
ture of the equity framework is an equity performance indicator that captures
and reports data on designated equity group access, participation, success, and
retention (Martin 2016a).
The framework and complementary subsequent policies, such as the creation of
the demand-driven system, have been remarkably successful in expanding access
(Zacharias 2017). Notwithstanding successes in increasing participation, there
are nevertheless enduring challenges with the equity framework. Equitable
DOI: 10.4324/9781003293101-12
Inclusive assessment and policy 99
Table 9.1 Equity and inclusion references in assessment policies of select Victorian universities

Comprehensive orientation. Assessment policy title: Assessment and results policy. Indigenous reference in assessment policy: No. Disability reference in assessment policy: Yes. Inclusion-related statement: “Assessment must be fair, equitable, inclusive, objective, and auditable and accessible by, and meet the needs of a diverse student population.”

Disadvantaged learner orientation. Assessment policy title: Assessment policy. Indigenous reference in assessment policy: No. Disability reference in assessment policy: No. Inclusion-related statement: “Assessment is designed to be fair and equitable and is designed to ensure that students have an opportunity to demonstrate their achievement of learning outcomes… adjustments to assessment tasks are available to students experiencing significant disadvantage.”

International orientation. Assessment policy title: Assessment and academic integrity policy. Indigenous reference in assessment policy: No. Disability reference in assessment policy: Yes. Inclusion-related statement: “Assessment is designed to be fair and equitable and is designed to ensure that students have an opportunity to demonstrate their achievement of learning outcomes.”

Local orientation. Assessment policy title: Assessment for learning policy. Adjustment policy title: Assessment for learning – adjustments to assessment procedure. Indigenous reference in assessment policy: No. Disability reference in assessment policy: No. Inclusion-related statement: “Assessment is fair and equitable … Assessment variations/adjustments and processes for allowing and recording any variations/adjustments to the stated conditions of assessment, submission and grading rules are in line with the University Student Equity and Social Inclusion Policy.”

Source: Victorian university policy library websites as of 6 March 2022.
104 M. Brett and A. Harvey
Governments can exert influence over equity, inclusion, and assessment through
embedding Martin’s assessment performance indicator framework in reporting
requirements to State parliaments.
There are opportunities for higher education financing to exert a stronger influ-
ence on inclusive assessment. Equity grants made under HESA currently include
reporting requirements that dilute their impact. PhillipsKPA (2012) found the cost
of reporting for each $1,000 of funding was higher for equity grants compared to
core operating grants, as high as 49 times higher for disability grants. PhillipsKPA
conclude that reporting on equity activities should be consolidated with a focus on
accountability. We suggest there is an opportunity to purposefully link funding
for equity and inclusion with teaching and assessment policies that embed stronger
accountability requirements.
Finally, there are opportunities to build a strong evidence base and culture
of evaluation for reasonable adjustments. A recent review of the Disability
Standards for Education (DESE 2021) was concerned with optimising access to
reasonable adjustments. The review recommended a more proactive approach to
making reasonable adjustments available, reducing the need for complaints. The
review recognised issues of alignment across various State and Commonwealth
policy frameworks.
Conclusion
We have highlighted in this chapter the misalignment of policies relating to
equity, inclusion and assessment across Commonwealth, State, and institutional
policies. Across each level there are shortcomings in how policy requirements
are implemented and/or upheld. We suggest there is potential for greater under-
standing of practices influencing inclusion and exclusion, including better trans-
parency and monitoring of grading practices and assessment outcomes within
and across institutions. Moving from a reliance on individual complaints towards
stronger institutional accountability is also critical, as is better regulation and
enforcement of existing codes and legislation. The explicit inclusion of equity
accountabilities within institutional re-registration and related requirements
would also elevate the priority of inclusion. Finally, there remains a need for
better education of institutional staff, from academics, support staff, and web
designers through to senior managers, on the legislative requirements and moral
imperatives of inclusion. This educative approach is central to recent research
into disability in higher education (Pitman 2022). Much current research and
practice considers specific types of assessment, technology, and activities, and
their potential inclusiveness for students with a disability. However, attention is
also required to understand why inclusiveness, including in assessment, appears
to remain a relatively low priority for institutions and the sector at large, not-
withstanding existing legislative and policy requirements. While individual
assessment practices and innovations provide cause for some optimism, institu-
tions also need to focus on the elevation of inclusion as an urgent policy priority.
References
AdvanceHE. 2017. Equality in Higher Education: Statistical Report 2017. https://www.
advance-he.ac.uk/knowledge-hub/equality-higher-education-statistical-report-2017.
Akinbosede, D. 2019. “The BAME Attainment Gap Is Not the Fault of BAME Students.”
Times Higher Education, 5 December 2019. https://www.timeshighereducation.com/
opinion/bame-attainment-gap-not-fault-bame-students.
Alahmadi, T., and Drew, S. 2017. “An Evaluation of the Accessibility of Top-Ranking
University Websites: Accessibility Rates from 2005 to 2015.” Journal of Open, Flexible,
and Distance Learning 21 (1): 7–24.
Althaus, C., Bridgman, P., and Davis, G. 2018. The Australian Policy Handbook: A Practical
Guide to the Policy-Making Process. Rev. ed. Abingdon: Allen and Unwin.
Amendment No. 3 to the Other Grants Guidelines. 2006. Commonwealth of Australia.
https://www.legislation.gov.au/Details/F2007L01158.
Anderson, L., Singh, M., Stehbens, C., and Ryerson, L. 1998. Equity Issues: Every University’s
Concern, Whose Business?: An Exploration of Universities’ Inclusion of Indigenous Peoples’
Rights and Interests. Evaluations and Investigations Programme. Higher Education
Division. Department of Employment, Education, Training and Youth Affairs.
Australian Bureau of Statistics. 2021. TableBuilder, Disability, Ageing and Carers, 2018.
Australian Human Rights Commission. 2021. Conciliation – How It Works. https://
humanrights.gov.au/complaints/complaint-guides/conciliation-how-it-works.
Australian Legal Information Institute. 2021. Federal Court of Australia Decisions. http://
classic.austlii.edu.au/au/cases/cth/FCA/.
Boucher, S. 2021. “Inherent Requirements and Social Work Education: Issues of Access
and Equity.” In Research Anthology on Mental Health Stigma, Education, and Treatment,
edited by Information Resources Management Association, 681–697. Hershey, PA,
US: IGI Global.
Brett, M. 2018. Equity Performance and Accountability. National Centre for Student
Equity in Higher Education and La Trobe University. https://www.ncsehe.edu.au/
publications/equity-performance-accountability.
Cramer, L. 2021. “Equity, Diversity and Inclusion: Alternative Strategies for Closing
the Award Gap between White and Minority Ethnic Students.” eLife, 4 August 2021.
https://doi.org/10.7554/eLife.58971.
Department of Education Skills and Employment. 2021. Final Report of the 2020 Review
of the Disability Standards for Education 2005. https://www.dese.gov.au/disability-
standards-education-2005/resources/final-report-2020-review-disability-standards-
education-2005.
Department of Education Skills and Employment. 2022. Higher Education Statistics –
2020 Section 16 Equity Performance Data. https://www.dese.gov.au/higher-education-
statistics/resources/2020-section-16-equity-performance-data.
Disability Discrimination Act. 1992. Commonwealth of Australia. https://www.legislation.
gov.au/Details/C2022C00087.
Equal Opportunity Act. 2010. Victorian Legislation. https://www.legislation.vic.gov.au/
in-force/acts/equal-opportunity-act-2010/029.
Grant-Smith, D., Irmer, B., and Mayes, R. 2020. Equity in Postgraduate Education in Australia:
Widening Participation or Widening the Gap? Perth: National Centre for Student Equity
in Higher Education, Curtin University. https://www.ncsehe.edu.au/wp-content/
uploads/2020/09/GrantSmith_2020_FINAL_Web.pdf.
Harvey, A., Burnheim, C., and Brett, M. 2016. Student Equity in Australian Higher Education:
Twenty-Five Years of a Fair Chance for All. Singapore: Springer.
Harvey, A., Cakitaki, B., and Brett, M. 2018. Principles for Equity in Higher Education Performance
Funding. Report for the National Centre for Student Equity in Higher Education
Research. Melbourne: Centre for Higher Education Equity and Diversity Research,
La Trobe University. https://www.ncsehe.edu.au/wp-content/uploads/2020/04/La-
Trobe-University-Performance-Funding-REPORT-28-April-2020.pdf.
Harvey, A., and Russell-Mundine, G. 2019. “Decolonising the Curriculum: Using
Graduate Qualities to Embed Indigenous Knowledges at the Academic Cultural
Interface.” Teaching in Higher Education 24 (6): 789–808. http://doi.org/10.1080/
13562517.2018.1508131.
Harvey, A., Szalkowicz, G., and Luckman, M. 2020. Improving Employment and Education
Outcomes for Somali Australians. Melbourne: Centre for Higher Education Equity and
Diversity Research, La Trobe University.
Higher Education Standards Framework. 2011. TEQSA. https://www.teqsa.gov.au/higher-
education-standards-framework-2011.
Higher Education Support Act. 2003. Australian Government. https://www.legislation.gov.
au/Details/C2022C00005.
Indigenous Student Assistance Grants Guidelines. 2017. Commonwealth of Australia. https://
www.legislation.gov.au/Details/F2017L00036.
Kang, H., Cheng, M., and Gray, S. J. 2007. “Corporate Governance and Board Composition:
Diversity and Independence of Australian Boards.” Corporate Governance: An International
Review 15 (2): 194–207. http://doi.org/10.1111/j.1467-8683.2007.00554.x.
Martin, L. 2016a. “Framing the Framework: The Origins of a Fair Chance for All.”
In Student Equity in Australian Higher Education: Twenty-Five Years of a Fair Chance for
All, edited by Andrew Harvey, Catherine Burnheim, and Matthew Brett, 21–37.
Singapore: Springer.
Martin, L. M. 2016b. “Using Assessment of Student Learning Outcomes to Measure
University Performance: Towards a Viable Model.” Ph.D., University of Melbourne.
Orr, J. L. 2012. “Australian Corporate Universities and the Corporations Act.”
International Journal of Law and Education 17 (2): 123–148.
Other Grants Guidelines (Education). 2012. Commonwealth of Australia. https://www.
legislation.gov.au/Details/F2013C00350.
PhillipsKPA. 2012. Review of Reporting Requirements for Universities. Final Report. Melbourne: LH
Martin Institute. https://www.phillipskpa.com.au/dreamcms/app/webroot/files/files/
PhillipsKPA%20Review%20of%20University%20Reporting%20Requirements.pdf.
Pitman, T. 2022. Supporting Persons with Disabilities to Succeed in Higher Education: Final
Report. Perth: National Centre for Student Equity in Higher Education, Curtin
University.
Pitman, T., Edwards, D., Zhang, L. C., Koshy, P., and McMillan, J. 2020. “Constructing a
Ranking of Higher Education Institutions Based on Equity: Is It Possible or Desirable?”
Higher Education 80 (4): 605–624. http://doi.org/10.1007/s10734-019-00487-0.
Tertiary Education Quality and Standards Agency. 2017a. Guidance Note: Course Design
(including Learning Outcomes and Assessment) Version 1.3. https://www.teqsa.gov.au/
latest-news/publications/guidance-note-course-design-including-learning-outcomes-
and-assessment.
Tertiary Education Quality and Standards Agency. 2017b. Application Guide for Registered
Higher Education Providers – Renewal of Registration for Existing Providers. Version 2.7.
https://www.teqsa.gov.au/sites/default/files/application-guide-for-re-registration-v2.
7.pdf?v=1535500067.
Universities UK. 2019. “Black, Asian and Minority Ethnic Student Attainment at UK
Universities: Closing the Gap.” Last modified 9 September 2021. https://www.
universitiesuk.ac.uk/what-we-do/policy-and-research/publications/black-asian-and-
minority-ethnic-student.
University of Southern California. 2017. Campus Climate Literature Review and Report. https://
academicsenate.usc.edu/files/2017/12/CampusClimateReport_FINAL121117.pdf.
Weerasinghe, L. A. A. P. 2021. “Cultural Diversity and Indigenous Participation on
Australian Corporate Boards.” Ph.D., Queensland University of Technology.
Zacharias, N. 2017. The Australian Student Equity Programme and Institutional Change:
Paradigm Shift or Business as Usual. https://www.ncsehe.edu.au/wp-content/uploads/
2017/06/Nadine-ZachariasThe-Australian-Student-Equity-Program.pdf.
10
INCLUSION, CHEATING,
AND ACADEMIC INTEGRITY
Validity as a goal and a mediating concept
Phillip Dawson
DOI: 10.4324/9781003293101-13
surveillance of students while they complete online tasks. Debate in this space
can become polarised and dichotomous at its extremes, but in practice most
institutions deploy some combination of the two approaches. If you attempt to
educate students about referencing and ethical scholarship, while also checking
that they have not submitted the same assignment as a peer, you are adopting a
mix of academic integrity and assessment security approaches.
This chapter focuses specifically on assessment security approaches, as most
of the harms to inclusion that have occurred in addressing cheating have been
due to these attempts to police and surveil students. Academic integrity does not
punish you for needing to use the bathroom, having an atypical gaze pattern or
living in insecure housing, but assessment security might. To further focus the
chapter, it is largely interested in assessment of learning rather than assessment
for learning; as I have argued elsewhere, assessment security does not matter
as much (or sometimes, at all) in assessment for learning, and we should focus
on positive academic integrity instead in those assessments (Dawson 2021). The
chapter proposes that validity can act as a mediating concept that inserts inclusion
as a necessary part of any conversation about cheating.
In this view, remote proctoring is akin to “cop shit” (Moro 2020): “any pedagog-
ical technique or technology that presumes an adversarial relationship between
students and teachers”. Given this definition, assessment security and cop shit are
arguably synonymous terms, the former carrying a more sanitised or euphemistic
tone, and the latter a sense of activism and disgust. But whatever term you ascribe
to it, there are inclusion consequences for the rapidly developing set of assessment
security/cop shit technologies.
While there is a significant body of work critiquing assessment security as a
problem for inclusion, the inclusion problems of cheating itself have been dis-
cussed much less. Modern cheating has become commercialised, with multi-
billion dollar publicly traded companies offering cheating services to paying cus-
tomers (Lancaster and Cotarlan 2021). There are financial barriers to cheating,
with wealthier students able to purchase better quality bespoke assignments from
contract cheating services. Less well-off students might turn to assignment swap-
ping services (Rogerson 2014), which expose them to a much greater risk of
getting caught when someone else submits their assignment. There are language
barriers to cheating, with students who have more capability in the language of
instruction being better able to engage in sham paraphrasing. And there are tech-
nological barriers to cheating, with those students who have better technological
feedback seeking and peer learning, and the forbidden practices of cheating.
Heavy-handed anti-cheating messages to students risk hindering effective
independent learning strategies. Thirdly, for cheating’s wrongness to be based
in the harms it does to learning, other activities that hurt learning need to
be punished just as much as cheating. This would include punishing students
for not completing tasks, as non-completion poses at least the same threat to
learning that cheating does, or even going as far as punishing students for hav-
ing hobbies or jobs which could distract from their studies (Bouville 2009).
Punishing students for activities that harm their learning in general, such as
hobbies or jobs, is not just ridiculous; more importantly, it goes against student
academic freedom to learn how they want to learn (Macfarlane 2016). Based
on these three criteria, harms to learning do not appear to be solid ground on
which to stake the wrongness of cheating.
Sidestepping the ethical wrongness of cheating, Cizek and Wollack (2017)
argue that cheating should be viewed as a threat to validity. Here, cheating is
wrong not on ethical grounds but on pragmatic ones, namely the problems it
creates for assessment in terms of validity. Cizek and
Wollack (2017) come at validity and cheating from a psychometric perspective,
which conjures up images of testing and statistics. But validity is a core concern
for any assessment: assessment for learning, assessment of learning, high-stakes
assessment, low-stakes assessment, self-assessment, and peer assessment. At its
most basic level, validity is a concern that an act of assessment assesses what it is
supposed to assess – that claims made about a student from what they have done
reflect what they are capable of. Assessment validity is ultimately what allows
institutions to fulfil their contract with society to graduate students who are
capable of what is written on their testamurs. Assessment validity is how a maths
educator knows that students who pass mathematics 1 are ready to study math-
ematics 2, and that students who fail mathematics 1 are not. Assessment validity
is why I feel confident that a newly-qualified teacher can teach my children. I
acknowledge that there are problems with validity in higher education, including
the extent to which claims made about performance in one context are mean-
ingful in predicting performance in another (Tummons 2020). But regardless of
how imperfect assessment validity in higher education already is, cheating makes
assessor judgements less valid, and it is this threat to validity that I find resonates
most with a concern for the impacts of (anti-)cheating on inclusion.
problem. For example, if, in remote proctored exams, students with trait test
anxiety are affected more negatively than other students (Woldeab and Brothen
2019), we create a new validity problem trying to fix an old one. The problem
is, comparing the validity effects of assessment security and exclusion requires
some complex qualitative calculus. There are no easy metrics. And straightfor-
ward notions of validity, about the accuracy of a judgement with respect to some
standard, are only the beginning of this complexity.
While this chapter has largely focused on traditional notions of validity, sim-
ilar arguments can be made with respect to broader understandings of validity.
Consequential validity is an extension of validity to include effects beyond the
immediate act of assessment (Sambell, McDowell, and Brown 1997). For exam-
ple, when students sit a multiple-choice exam focused on lower-level knowledge
they may choose to cram right before the test rather than space their study out,
choosing an effective short-term strategy, with consequential validity effects in
the form of poorer longer-term learning. Assessment security has a variety of
consequential validity effects. These include exclusionary effects, which can be
viewed as both validity threats in the traditional sense as discussed previously,
as well as consequential validity threats. For example, when a remote proctored
exam is set without allowance for a bathroom break, it risks impairing tradi-
tional validity (examinees may perform worse due to no bathroom break, which
misrepresents their capabilities). But it also presents risks to consequential valid-
ity (e.g., discomfort, pain, exclusion, and ultimately barriers in place for cer-
tain categories of people into particular professions). Consequential validity has
been criticised as conflating too many ideas underneath the banner of validity
(Mehrens 1997). But regardless of whether other harms to inclusion done by
assessment security fall inside the concept of validity or outside it, they remain
important counterbalances to any validity gains made by assessment security.
Validity holds significant promise as a mediating concept that can insert inclu-
sion as a necessary component in any conversation about cheating, academic
integrity or assessment security. Cheating is wrong, and measures need to be
taken to both detect it and deter it; our need to only graduate students who can
do what we say they can do requires us to do so. But those anti-cheating measures need
to be weighed against their unintended consequences. The razor to be applied to
any changes in the name of assessment security is this: at an absolute minimum,
assessment security needs to help validity more than it hurts it. A similar standard
should be applied to existing, dominant practices as well; if they entrench exclu-
sion, they may be as much of a threat to validity as cheating is.
References
Akaaboune, O., Blix, L. H., Carrington, L., and Henderson, C. 2022. “Accountability
in Distance Learning: The Effect of Remote Proctoring on Performance in Online
Accounting Courses.” Journal of Emerging Technologies in Accounting 19 (1): 121–131.
https://doi.org/10.2308/JETA-2020-040.
Alessio, H. M., Malay, N., Maurer, K., Bailer, A. J., and Rubin, B. 2017. “Examining
the Effect of Proctoring on Online Test Scores.” Online Learning 21 (1): 146–161.
https://doi.org/10.24059/olj.v21i1.885.
American Educational Research Association, American Psychological Association, and
National Council on Measurement in Education. 2014. Standards for Educational and
Psychological Testing. Washington, DC: American Educational Research Association.
Bearman, M., Dawson, P., Bennett, S., Hall, M., Molloy, E., Boud, D., and Joughin, G.
2017. “How University Teachers Design Assessments: A Cross-Disciplinary Study.”
Higher Education 74 (1): 49–64. https://doi.org/10.1007/s10734-016-0027-7.
Bouville, M. 2009. “Why Is Cheating Wrong?” Studies in Philosophy and Education 29 (1):
67. https://doi.org/10.1007/s11217-009-9148-0.
Carless, D. 2009. “Trust, Distrust and Their Impact on Assessment Reform.” Assessment and
Evaluation in Higher Education 34 (1): 79–89. https://doi.org/10.1080/02602930801895786.
Carless, D. 2017. “Scaling Up Assessment for Learning: Progress and Prospects.” In
Scaling up Assessment for Learning in Higher Education, edited by David Carless, Susan
M. Bridges, Cecilia Ka Yuk Chan, and Rick Glofcheski, 3–17. Singapore: Springer
Singapore.
Cizek, G. J., and Wollack, J. A. 2017. “Exploring Cheating on Tests: The Context,
the Concern, and the Challenges.” In Handbook of Quantitative Methods for Detecting
Cheating on Tests, edited by Gregory J. Cizek, and James A. Wollack, 3–19. New York:
Routledge.
Davis, A. B., Rand, R., and Seay, R. 2016. “Remote Proctoring: The Effect of Proctoring
on Grades.” Advances in Accounting Education: Teaching and Curriculum Innovations
18: 23–50. Bingley: Emerald Group Publishing Limited. https://doi.org/10.1108/
S1085-462220160000018002.
Dawson, P. 2021. Defending Assessment Security in a Digital World: Preventing E-Cheating and
Supporting Academic Integrity in Higher Education. Abingdon, Oxon: Routledge.
Dendir, S., and Maxwell, R. S. 2020. “Cheating in Online Courses: Evidence from
Online Proctoring.” Computers in Human Behavior Reports 2: 100033. https://doi.org/
10.1016/j.chbr.2020.100033.
Fishman, T. 2014. The Fundamental Values of Academic Integrity. Rev. ed. Clemson, SC:
International Center for Academic Integrity.
Harper, R., Bretag, T., and Rundle, K. 2021. “Detecting Contract Cheating: Examining
the Role of Assessment Type.” Higher Education Research & Development 40 (2):
263–278. https://doi.org/10.1080/07294360.2020.1724899.
Iliescu, D., and Greiff, S. 2021. “On Consequential Validity.” European Journal of
Psychological Assessment 37 (3): 163–166. https://doi.org/10.1027/1015-5759/a000664.
Lancaster, T., and Cotarlan, C. 2021. “Contract Cheating by STEM Students through
a File Sharing Website: A Covid-19 Pandemic Perspective.” International Journal for
Educational Integrity 17 (1): 3. https://doi.org/10.1007/s40979-021-00070-0.
Logan, C. 2020. “Refusal, Partnership, and Countering Educational Technology’s
Harms.” Hybrid Pedagogy. Accessed 19 November 2021. https://hybridpedagogy.
org/refusal-partnership-countering-harms/.
Macfarlane, B. 2016. Freedom to Learn: The Threat to Student Academic Freedom and Why It
Needs to Be Reclaimed. London and New York: Routledge.
Madriaga, M., Hanson, K., Kay, H., and Walker, A. 2011. “Marking-Out Normalcy
and Disability in Higher Education.” British Journal of Sociology of Education 32 (6):
901–920. https://doi.org/10.1080/01425692.2011.596380.
McCabe, D. L., Treviño, L. K., and Butterfield, K. D. 1999. “Academic Integrity in
Honor Code and Non-Honor Code Environments.” The Journal of Higher Education 70
(2): 211–234. https://doi.org/10.1080/00221546.1999.11780762.
Introduction
Artificial intelligence (AI) in education is now prevalent, and as Cope, Kalantzis,
and Searsmith (2020) have argued, “assessment is perhaps the most significant
area of opportunity offered by artificial intelligence for transformative change in
education” (5). For both good and ill, this wave of change is occurring as count-
less AI-enabled assessment products, and eager commercial edtech vendors, make
their way into schools and universities globally (González-Calatayud, Prendes-
Espinosa, and Roig-Vila 2021; Williamson and Eynon 2020). The deployment
of AI-enabled assessments within the university landscape extends to assessments
of all kinds, including formative and summative (Gardner, O’Leary, and Yuan
2021), as well as high-stakes and low-stakes assessment. In addition, it incorpo-
rates AI “solutions” aimed at addressing perceived threats to academic integrity
(Coghlan, Miller, and Paterson 2020).
Critically, however, this rapid proliferation comes at a time when the computer
and data sciences are undergoing a significant reckoning with their own com-
plicity in perpetuating social discrimination and disadvantage through features
inherent to AI and machine learning (ML) techniques and practices (Barocas,
Hardt, and Narayanan 2020). AI-enabled assessment practices can contribute
to this disadvantage by introducing, and often concealing, inequitable and dis-
criminatory outcomes. After defining AI and questioning its often-triumphalist
narrative, in this chapter we examine several examples of AI-enabled assessment
and explore the ways in which each may produce inequitable or exclusionary
outcomes for students.
We further aim to problematise recent attempts to utilise AI and ML techniques
themselves to minimise or detect inequitable or unfair outcomes through the
largely technological and statistical focus of the growing fairness, accountability,
DOI: 10.4324/9781003293101-14
Student equity in AI-enabled assessment 121
and transparency movement in the data sciences. Our central argument is that
technological solutions to equity and inclusion are of limited value, particularly
when educational institutions fail to engage in genuine political negotiation
with a range of stakeholders and domain experts. Universities, we argue, should
not cede their ethical and legal responsibility for ensuring inclusive AI-enabled
assessment practices to third-party vendors, ill-equipped teaching staff, or to
technological “solutions” such as algorithmic tests for “fairness”. We conclude by
outlining how, in the rapidly evolving age of AI-enabled education, universities
can begin to engage in a politics of inclusion that rests upon robust democratic
and ethical decision-making architectures.
These two key stages of automation are central to the brief description of AI provided by Gardner, O’Leary, and Yuan (2021): “The essence of artificial intelligence (AI) in both summative and formative [assessment] contexts is the concept of machine ‘learning’ – where the computer is ‘taught’ how to interpret patterns in data and ‘trained’ to undertake predetermined actions according to those interpretations” (1207–1208). Even within this truncated sketch of the
typical AI/ML lifecycle, we find three important points where it is widely recog-
nised that biased outcomes may be unintentionally introduced into the process:
(1) with the underlying training data, which may lack diversity and thereby cause representational harms; (2) at the stage of algorithmic “training”, which can be
122 B. Stephenson and A. Harvey
a notoriously opaque and uninterpretable process; and (3) during the real-world
model deployment stage (Barocas, Hardt, and Narayanan 2020).
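The first of these points can be made concrete with a deliberately simple sketch. Everything here is invented for illustration: the scores, the two groups, and the one-parameter “model” (a pass/fail score threshold) stand in for far more complex real systems, but the mechanism by which unrepresentative training data produces unequal error rates is the same in kind.

```python
# Hypothetical illustration: a one-parameter "model" (a pass/fail score
# threshold) is "trained" on data dominated by Group A and then deployed
# on everyone. Competent members of under-represented Group B happen to
# score lower, so the learned threshold fails them disproportionately.

# (score, truly_competent) pairs -- all numbers are invented.
group_a = [(55, False), (60, False), (70, True), (75, True), (80, True), (85, True)]
group_b = [(45, False), (58, True), (62, True)]

# The training set under-represents Group B (one record out of seven).
training_data = group_a + group_b[:1]

def fit_threshold(data):
    """Pick the cut-off score that maximises accuracy on the training data."""
    def accuracy(t):
        return sum((score >= t) == label for score, label in data) / len(data)
    return max(sorted({score for score, _ in data}), key=accuracy)

def false_negative_rate(data, threshold):
    """Share of truly competent students the model marks as failing."""
    competent = [score for score, label in data if label]
    return sum(score < threshold for score in competent) / len(competent)

threshold = fit_threshold(training_data)          # 70: perfect on the training data
print(false_negative_rate(group_a, threshold))    # 0.0 -- no competent A student fails
print(false_negative_rate(group_b, threshold))    # 1.0 -- every competent B student fails
```

On these invented numbers the threshold fitted to the majority-dominated training set clears every competent Group A student while failing every competent Group B student, even though the model is “accurate” by its own training measure.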
A definition which focuses on these two fundamental stages of automation also helps to take some of the unrealistic “magic” out of the AI mythology currently inflating public perceptions of AI’s capabilities. We argue that
demystifying AI is an important task, particularly in educational environments
where there is a need to relieve ourselves of notions that AI can currently “think”,
“understand”, or deploy “knowledge” as humans do (Smith 2018). The often overly
idealistic tone of AI’s current wave of hype has been widely adopted by a university
sector that has long been searching for cost-cutting measures, while also working to
improve student success, retention, and completion outcomes. This drive for tech-
nologically powered “efficiencies” and “solutions” has been made much more acute
by the COVID-19 crisis and the global emergency shift to computer-mediated
“pandemic pedagogies” (Williamson, Eynon, and Potter 2020).
We should not lose sight, however, of the immense pressures of privatisation and profit that drive the rapid proliferation of educational technology (edtech)
companies and the “Silicon Valley narrative” that they bring to higher education
(Weller 2015). Matched with what Morozov (2013) describes as Silicon Valley’s
“technological solutionism”, universities are in the midst of a sweeping move-
ment of “edtech market-making” and a “private re-infrastructuring of public
education” (Williamson and Hogan 2020). Amid this milieu of “Digital Enchantment” (Yeung 2022), it is important that we adopt a critical mindset
in response to the triumphalist narrative of AI’s remaking of higher education
learning and teaching practice. The overtly techno-optimistic tone of public and
academic discourse concerning AI’s advancements – in the absence of critical
evaluation – too easily serves to obscure the potential for AI to perpetuate disad-
vantage and exclusion based on, for example: disability (Lillywhite and Wolbring
2020), race and ethnicity (Leavy, Siapera, and O’Sullivan 2021), sex and gender
(D’Ignazio and Klein 2020), economic status (O’Neil 2016), or linguistic back-
ground (Mayfield et al. 2019).
algorithmic test proctoring can disadvantage, or simply fail to work for, people who lack access to suitable testing spaces, safe home environments, or the necessary computer equipment. Critically, people who do not present as able-bodied or neurotypical to the OP algorithms are also likely to be flagged as cheating threats. For instance, many “robo-proctors” utilise eye-tracking software that may flag as suspicious test takers who are blind or express atypical eye movements due to a range of conditions. Facial recognition AI systems have also become notorious for their bias and inaccuracy in relation to particular demographics, namely, people of colour and women (Lohr 2018). (See Chapter 10 for
an account of inclusion, cheating, and academic integrity.)
Finally, we must also recognise that students and institutions who are already
well-resourced stand to benefit most from AI-enabled assessments and educational
technologies. These are also the institutions that are most likely to have the greatest
agency in making unconstrained decisions about which computer and human labours they wish to deploy. For instance, the utilisation of less advantageous AI
technologies may be forced upon some students or institutions as a means of cost
savings. As Selwyn (2019) has warned, “AI will impact on an Ivy League univer-
sity such as Harvard in very different ways to a community college in Hudson
County. In all these ways, then, we need to remain mindful of the politics of tech-
nology” (23). For this, and other reasons, we should be highly sceptical of edtech
marketing claims concerning “equitable” AI technologies that provide greater
“access” through scalability alone. As Selwyn et al. (2020) have argued, digital in/
exclusion is not simply a matter of creating access to digital learning technologies.
Uncritically accepting what we might call the “equality = access” narrative, they argue, problematically accepts educational technologies “as an inherently ‘good thing’ that merely offers educational opportunities” to those in need. A focus on access, they argue, remains “an ‘easy’ way for policy makers to signal that they are ‘dealing with’ inequality” (Selwyn et al. 2020, 2).
The past ten years have seen an explosion of research in new fields such as
Fairness, Accountability and Transparency in Machine Learning (FATML),
Explainable Artificial Intelligence (XAI), and what is broadly called Responsible
Artificial Intelligence (RAI) (Barocas, Hardt, and Narayanan 2020; Gilpin et al.
2018). There have been tremendous advancements made in these fields towards
producing technical tools and strategies aimed at, for example, calculating the
statistical “fairness” of AI systems or adding “explainability” outputs to oth-
erwise inscrutable algorithmic decisions. As we have argued elsewhere, these
technical advancements in AI fairness monitoring and transparency are neces-
sary, but ultimately not sufficient, in our effort to maintain equitable and inclu-
sive deployments of AI in educational settings (Stephenson, Harvey, and Huang
2022, 28–30).
For example, mathematical tests for “algorithmic fairness”, of which there are
now dozens to choose from, are frequently contradictory and fundamentally fail
to be instructive in the absence of applied domain expertise, or where ethical and
political negotiations of “the good” are not confronted (Green and Hu 2018). An AI cannot, for instance, tell us which fairness test, or which fairness definition, is
ethically superior in each use case. Nor can an AI process tell us which course of
action to take if an agreed principle of fairness is found to be violated. Equally,
the goal of bringing interpretability and transparency to extremely complex
algorithmic processes – that is, to open the “black box” of AI inscrutability –
has also come under considerable critique (Gilpin et al. 2018). While the creation of tools which seek to make algorithmic decisions more comprehensible to human users is a necessary pursuit, it is not sufficient to guarantee equitable outcomes. There are ethical and political questions that must first be negotiated. To whom should the algorithm, or the student’s AI-assessed feedback or grade, be interpretable? Who will be the human-in-the-loop who possesses
the necessary disciplinary content specialisation, coupled with AI understand-
ing? Ultimately, we argue that determining the shape of fairness and inclusivity
in educational contexts requires in-house ethical human judgements that should
be made through rigorous political negotiation, not algorithmic quantification.
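The conflict between fairness tests can be shown in a few lines over entirely invented grading decisions: the same set of automated pass/fail outcomes satisfies one widely used criterion (demographic parity) while violating another (equal opportunity), and no calculation can say which should govern.

```python
# Hypothetical automated grading decisions: (group, truly_passing, predicted_passing).
# All values are invented purely to illustrate how fairness metrics can disagree.
decisions = [
    ("A", True, True), ("A", True, True), ("A", False, False), ("A", False, False),
    ("B", True, True), ("B", True, False), ("B", False, True), ("B", False, False),
]

def positive_rate(group):
    """Demographic parity compares this: how often each group is passed overall."""
    preds = [pred for g, _, pred in decisions if g == group]
    return sum(preds) / len(preds)

def true_positive_rate(group):
    """Equal opportunity compares this: how often truly passing students are passed."""
    preds = [pred for g, truth, pred in decisions if g == group and truth]
    return sum(preds) / len(preds)

# Demographic parity holds: both groups are passed at the same overall rate.
print(positive_rate("A"), positive_rate("B"))            # 0.5 0.5
# Equal opportunity fails: competent B students are recognised half as often.
print(true_positive_rate("A"), true_positive_rate("B"))  # 1.0 0.5
```

On data where the groups’ underlying pass rates differ, the two criteria generally cannot both be satisfied by a non-trivial classifier; deciding which definition should govern, and for whom, is precisely the ethical and political negotiation argued for here.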
Chaudhry and Kazim (2021) have claimed that teachers can play the role of
human-in-the-loop, and that “the final decision makers are teachers” (1). But
will all teachers be able to understand the algorithmic decision making of the AI?
Teachers are, of course, content specialists, but few are specialists in AI and ML
or familiar with its potential shortcomings. In a systematic review of research
relating to data literacies and the training of university faculty, Raffaghelli and
Stewart (2020) found that where training was taking place it was largely con-
cerned with mastery of accepted, and unproblematised, technical practices. This
finding indicates that many university teachers are unlikely to be prepared to
be the critical-minded “human-in-the-loop”, particularly when commercial
and proprietary AI are deployed within a veil of opacity and secrecy.
Ultimately, we agree with Elish and Boyd (2018) who have argued that
“[a]cknowledging the limits of Big Data and AI should not result in their dis-
missal, but rather enable a more grounded and ultimately more useful set of
conversations about the appropriate and effective design of such technologies” (73).
But to protect equity and inclusivity goals, we must have transparency in uni-
versity technology procurement processes (Zeide 2020), we should normalise
the production of AI or algorithm impact assessments (Reisman et al. 2018),
and we need to engage in a discussion about the outsourcing or insourcing of
AI oversight. For instance, some have argued for the creation of a new industry
of external algorithmic auditing professionals who might ensure legal protection
for companies and institutions (Koshiyama et al. 2021).
We argue, however, that responsibility for AI oversight and monitoring should
be made a standard part of internal institutional policy and practice. One approach
could involve the creation of specialised institutional review boards that are of
a similar composition to human research ethics committees. Such institutional
governance bodies could draw on the evolving “responsible innovation” research
literature (Jarmai 2020). In a university context, something like a Responsible
Innovation Committee would require broad representation from equity cohorts,
computer science/analytics experts, teaching and learning experts, ethicists, legal
professionals, and student representatives. In this way universities may be able to
better guarantee that the adoption of AI/ML processes will undergo a full engage-
ment with the politics of technology and inclusion in a transparent and democratic
manner. More broadly, the rise of AI-enabled assessment highlights the need for
broad professional development that can then form a critical bulwark against many
forms of digital enchantment (Yeung 2022) that put inclusivity at risk.
References
Barocas, S., Hardt, M., and Narayanan, A. 2020. Fairness in Machine Learning: Limitations
and Opportunities. https://fairmlbook.org/.
Chaudhry, M. A., and Kazim, E. 2021. “Artificial Intelligence in Education (AIEd): A
High-Level Academic and Industry Note 2021.” AI and Ethics 2: 157–165. https://doi.
org/10.1007/s43681-021-00074-z.
Coghlan, S., Miller, T., and Paterson, J. 2020. “Good Proctor or ‘Big Brother’? AI Ethics and Online Exam Supervision Technologies.” ArXiv, abs/2011.07647. http://doi.org/10.48550/arXiv.2011.07647.
Cope, B., Kalantzis, M., and Searsmith, D. 2020. “Artificial Intelligence for Education:
Knowledge and Its Assessment in AI-Enabled Learning Ecologies.” Educational Philosophy
and Theory 53 (12): 1229–1245. https://doi.org/10.1080/00131857.2020.1728732.
D’Ignazio, C., and Klein, L. F. 2020. Data Feminism. Cambridge, MA: MIT Press.
Elish, M. C., and Boyd, D. 2018. “Situating Methods in the Magic of Big Data and AI.”
Communication Monographs 85 (1): 57–80.
Gardner, J., O’Leary, M., and Yuan, L. 2021. “Artificial Intelligence in Educational Assessment: ‘Breakthrough? Or Buncombe and Ballyhoo?’” Journal of Computer Assisted Learning 37 (5): 1207–1216. https://doi.org/10.1111/jcal.12577.
Gilpin, L. H., Bau, D., Yuan, B. Z., Bajwa, A., Specter, M., and Kagal, L. 2018.
“Explaining Explanations: An Overview of Interpretability of Machine Learning.”
In The 5th International Conference on Data Science and Advanced Analytics (DSAA 2018).
http://doi.org/10.48550/arXiv.1806.00069.
González-Calatayud, V., Prendes-Espinosa, P., and Roig-Vila, R. 2021. “Artificial
Intelligence for Student Assessment: A Systematic Review.” Applied Sciences 11 (12):
5467. https://doi.org/10.3390/app11125467.
Green, B., and Hu, L. 2018. “The Myth in the Methodology: Towards a Recontextualization
of Fairness in Machine Learning.” In The Debates workshop at the 35th International
Conference on Machine Learning. Stockholm, Sweden.
Jarmai, K., ed. 2020. Responsible Innovation: Business Opportunities and Strategies for Implementation.
Dordrecht, The Netherlands: Springer Open.
Ke, Z., and Ng, V. 2019. “Automated Essay Scoring: A Survey of the State of the Art.”
In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
(IJCAI-19).
Kelly, A. 2021. “A Tale of Two Algorithms: The Appeal and Repeal of Calculated Grades
Systems in England and Ireland in 2020.” British Educational Research Journal 47 (3):
725–741. https://doi.org/10.1002/berj.3705.
Kippin, S., and Cairney, P. 2021. “The COVID-19 Exams Fiasco across the UK: Four
Nations and Two Windows of Opportunity.” British Politics 17: 1–23. https://doi.
org/10.1057/s41293-021-00162-y.
Koshiyama, A., Kazim, E., Treleaven, P., Rai, P., Szpruch, L., Pavey, G., Ahamat, G.,
Leutner, F., Goebel, R., Knight, A., et al. 2021. “Towards Algorithm Auditing: A
Survey on Managing Legal, Ethical and Technological Risks of AI, ML and Associated
Algorithms.” SSRN. http://dx.doi.org/10.2139/ssrn.3778998.
Lazendic, G., Justus, J., and Rabinowitz, S. 2018. NAPLAN Online Automated
Scoring Research Program: Research Report. Australian Curriculum, Assessment and
Reporting Authority (ACARA), Tech. Rep. https://nap.edu.au/docs/default-source/
default-document-library/naplan-online-aes-research-report-final.pdf?sfvrsn=0.
Leavy, S., Siapera, E., and O’Sullivan, B. 2021. “Ethical Data Curation for AI: An Approach
Based on Feminist Epistemology and Critical Theories of Race.” In Proceedings of the
2021 AAAI/ACM Conference on AI, Ethics, and Society.
Lillywhite, A., and Wolbring, G. 2020. “Coverage of Artificial Intelligence and
Machine Learning within Academic Literature, Canadian Newspapers, and Twitter
Tweets: The Case of Disabled People.” Societies 10 (1): 23. https://www.mdpi.
com/2075-4698/10/1/23.
Lohr, S. 2018. “Facial Recognition Is Accurate, If You’re a White Guy.” In Ethics
of Data and Analytics, edited by Kirsten Martin, 143–147. New York: Auerbach
Publications.
Mayfield, E., Madaio, M., Prabhumoye, S., Gerritsen, D., McLaughlin, B., Dixon-
Román, E., and Black, A. W. 2019. “Equity beyond Bias in Language Technologies
for Education.” In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for
Building Educational Applications.
McCarthy, J., Minsky, M. L., Rochester, N., and Shannon, C. E. 2006 [1955]. “A Proposal
for the Dartmouth Summer Research Project on Artificial Intelligence, August 31,
1955.” AI Magazine 27 (4): 12. https://doi.org/10.1609/aimag.v27i4.1904.
Morozov, E. 2013. To Save Everything, Click Here: Technology, Solutionism, and the Urge to Fix Problems that Don’t Exist. London: Penguin Press.
Mubin, O., Cappuccio, M., Alnajjar, F., Ahmad, M. I., and Shahid, S. 2020. “Can a
Robot Invigilator Prevent Cheating?” AI & Society 35 (4). http://doi.org/10.1007/
s00146-020-00954-8.
O’Neil, C. 2016. Weapons of Math Destruction: How Big Data Increases Inequality and
Threatens Democracy. London: Penguin Press.
Perelman, L. 2018. “Robot Marking: Automated Essay Scoring and NAPLAN – A
Summary Report.” Journal of Professional Learning, Semester 1. https://cpl.asn.au/
journal/semester-1-2018/robot-marking-automated-essay-scoring-and-naplan-a-
summary-report.
Raffaghelli, J. E., and Stewart, B. 2020. “Centering Complexity in ‘Educators’ Data Literacy’ to Support Future Practices in Faculty Development: A Systematic Review of the Literature.” Teaching in Higher Education 25 (4): 435–455. https://doi.org/10.1080/13562517.2019.1696301.
Reisman, D., Schultz, J., Crawford, K., and Whittaker, M. 2018. Algorithmic Impact
Assessments: A Practical Framework for Public Agency Accountability. AI Now. https://
ainowinstitute.org/aiareport2018.pdf.
Selwyn, N. 2019. Should Robots Replace Teachers? Cambridge, UK: Polity Press.
Selwyn, N., Hillman, T., Eynon, R., Ferreira, G., Knox, J., Macgilchrist, F., and Sancho-
Gil, J. M. 2020. “What’s Next for Ed-Tech? Critical Hopes and Concerns for the
2020s.” Learning, Media and Technology 45 (1): 1–6. https://doi.org/10.1080/17439884.
2020.1694945.
Smith, G. 2018. The AI Delusion. Oxford: Oxford University Press.
Stephenson, B., Harvey, A., and Huang, Q. 2022. Towards an Inclusive Analytics for Australian
Higher Education. https://www.ncsehe.edu.au/publications/inclusive-analytics-australian-
higher-education/.
Swauger, S. 2020. “Our Bodies Encoded: Algorithmic Test Proctoring in Higher Education.”
In Critical Digital Pedagogy: A Collection, edited by Jesse Stommel, Chris Friend, and Sean
M. Morris. Washington, DC: Pressbooks.
Wachter, S., Mittelstadt, B., and Russell, C. 2021. “Bias Preservation in Machine
Learning: The Legality of Fairness Metrics under EU Non-Discrimination Law.”
West Virginia Law Review 123 (3): 735–790. https://dx.doi.org/10.2139/ssrn.3792772.
Wang, P. 2019. “On Defining Artificial Intelligence.” Journal of Artificial General Intelligence
10 (2): 1–37. http://doi.org/10.2478/jagi-2019-0002.
Weller, M. 2015. “MOOCs and the Silicon Valley Narrative.” Journal of Interactive Media
in Education 1 (5): 1–7. http://dx.doi.org/10.5334/jime.am.
Williamson, B., and Eynon, R. 2020. “Historical Threads, Missing Links, and Future
Directions in AI in Education.” Learning, Media and Technology 45 (3): 223–235.
https://doi.org/10.1080/17439884.2020.1798995.
Williamson, B., Eynon, R., and Potter, J. 2020. “Pandemic Politics, Pedagogies and
Practices: Digital Technologies and Distance Education during the Coronavirus
Emergency.” Learning, Media and Technology 45 (2): 107–114. https://doi.org/10.1080/
17439884.2020.1761641.
Williamson, B., and Hogan, A. 2020. Commercialisation and Privatisation in/of Education
in the Context of Covid-19. Brussels, Belgium: Education International. https://www.
ei-ie.org/file/130.
Yeung, K. 2022. “Dispelling the Digital Enchantment.” Futures Lecture, Edinburgh
Futures Institute, the University of Edinburgh. https://www.birmingham.ac.uk/news/
2022/cal-law-digital-enchantment.
Zeide, E. 2020. “Robot Teaching, Pedagogy, and Policy.” In The Oxford Handbook of Ethics
of AI (Online Edition), edited by Markus D. Dubber, Frank Pasquale, and Sunit Das, 1–17.
Oxford University Press. https://doi.org/10.1093/oxfordhb/9780190067397.013.51.
Zimitat, C. 2019. “Scalable English Language Screening in a Multi-Campus University.” In TEQSA 4th Annual Conference. Melbourne, Victoria. http://teqsa-2019.p.asnevents.com.au/days/2019-11-28/abstract/66733.
Zuboff, S. 2019. The Age of Surveillance Capitalism: The Fight for a Human Future at the New
Frontier of Power. New York: Public Affairs.
12
OPPORTUNITIES AND
LIMITATIONS OF
ACCOMMODATIONS AND
ACCESSIBILITY IN HIGHER
EDUCATION ASSESSMENT
Christopher Johnstone, Leanne R. Ketterlin Geller,
and Martha Thurlow
Introduction
Classroom assessment is a key mechanism for understanding whether students are learning in universities. Understandings of assessment practice, however, vary widely from classroom to classroom and instructor to instructor. This chapter considers assessment within the context of the United States, but lessons
may be applicable in other parts of the world. In the United States, entrance
examinations (those that are designed to inform admissions decisions in higher
education institutions) are highly standardised, but beyond this there is little
to no standardisation of assessment practice in higher education settings in the
United States. For example, two instructors in the same department may take
two entirely different approaches to assessment. Instructor A may define the goal
of their class as factual or procedural knowledge, and thus rely heavily on quizzes
and exams as mechanisms for students to demonstrate knowledge. Instructor B
may be more concerned about applications and use written papers or authentic
application activities (e.g., projects) to examine student course outputs.
The lack of consistency surrounding assessment in the United States can be
challenging for students to navigate. A typical semester course load requires
students to take 4–5 classes with instructors who each have their own vision of
assessment. Students in higher education settings, then, are required to navi-
gate due dates, instructors’ assessment expectations, and (for many undergrad-
uates) independent living skills during their university experience. The level
of support that students receive for navigating higher education expectations
also varies widely. In the next section, we will focus on support mechanisms
for students with disabilities as an example of how assessment practice is con-
structed for a particular population within a larger decentralised higher educa-
tion ecosystem.
DOI: 10.4324/9781003293101-15
132 C. Johnstone, L. R. Ketterlin Geller, and M. Thurlow
purposes” (US Department of Justice 2014, 1) yet had implications for higher
education in general. The following are among some of the many points made
in the guidance:
course assessments. We will revisit this limitation in our final section but will
now introduce a second mechanism for increasing accessibility of assessments -
Universal Design for Assessment (UDA).
that test formats did not introduce barriers for students or assess skills that were
outside of the intended construct. Both Ketterlin Geller et al. (2012) and Kettler
(2012) warned that sometimes assessments had access requirements (e.g., decoding the words of a math word problem) that inhibited students from demonstrating knowledge of a construct (e.g., mathematical reasoning and problem solving).
Early twenty-first century UDA research focused heavily on assessment the-
ory and understanding the interactions between assessment barriers and stu-
dent abilities, capabilities, and disabilities. Around the same time, the Center
for Applied Special Technologies (CAST) began conceptualising Universal
Design for Learning (UDL) (Rose and Meyer 2002) as learning opportunities
that included (1) multiple means of engagement, (2) multiple means of response/
action, and (3) multiple means of representation. UDL became an important
concept in the United States and beyond and was acknowledged in the 2008 US Higher Education Opportunity Act to promote accessibility in higher education classrooms.
Specifically, the guidance notes important features of UDL as: “(A) provides
flexibility in the ways information is presented, in the ways students respond or
demonstrate knowledge and skills, and in the ways students are engaged; and
(B) reduces barriers in instruction, provides appropriate accommodations, sup-
ports, and challenges, and maintains high achievement expectations for all stu-
dents, including students with disabilities and students who are limited English
proficient” (US Department of Education 2008, sec. 103). Scholars also posited
that UDL guidelines could improve the assessment experience for all students
and allow for “built-in” accommodations that all students could access (Dolan
et al. 2013). Sheryl Burgstahler and colleagues at the University of Washington
Disabilities, Opportunities, Internetworking, and Technologies (DO-IT) Center
have been at the forefront of promoting and conducting UDL research in higher
education settings (see https://www.washington.edu/doit/).
The framing of UDA as an extension of UDL is an intuitive linkage for assess-
ment in higher education. Although much of the early UDA research focused on
paper- and later technology-based tests, in higher education contexts UDA can be
applied to a variety of assessment approaches. The overall consideration for “mul-
tiple means” has provided instructors in higher education settings with a degree
of freedom to allow their students to engage with material in ways that are most
accessible. However, identifying ways to improve the accessibility of assessments
may be difficult for some instructors. These instructors may be unaware of the
barriers that students face, likely because they did not have similar experiences or
receive training on assessments. As such, specific assessment practices are often per-
petuated by instructors themselves or within disciplines with little understanding
of student needs. Focusing on providing “multiple means” can help faculty recog-
nise that students may express their knowledge in a variety of ways and varying
their approach to assessment may draw out different levels of understanding.
Although the CAST guidelines associated with providing multiple means of
action and expression tend to naturally apply to assessments, drawing on the
range of modalities can enhance accessibility of assessments in higher education.
Accommodations and accessibility in assessment 137
A proposal
Evidence from around the world indicates that accommodations provide a path-
way for individual changes to higher education assessments. These changes are
helpful to receiving students, and if they are administered via a disability services
part of “quality teaching” expectations, for example, would mandate greater use
in higher education classrooms.
In summary, there are two tools that are currently used to increase accessibil-
ity in assessments - whether those assessments are tests, authentic expressions of
knowledge, or applied activities. Accommodations and UDA each provide useful
pathways to accessibility, but also have limitations. One way to address these lim-
itations is to better understand and articulate how accommodations and UDA can
be used to inform one another. By drawing on the strengths of accommodations,
institutions may become more accountable to their students by making accessi-
bility of assessments an indicator of quality against which instructors’ efforts are
judged. At the same time, UDA can inform accommodations practice, unlocking
greater potential for instructors to utilise and experiment with accommodations,
rather than rely solely on minimum requirements outlined in accommodations
letters. In the latter case, transparency by instructors about the decisions they make will be critical, so that students can track what instructors expect of them, as well as what they can expect of their instructors, across multiple courses.
Further research in these areas is needed, both at the policy and pedagog-
ical level. Little is known about how flexible administration of accommoda-
tions impacts the learning and assessment experiences of students, but we suspect
such administration might improve students’ motivation, decrease anxieties, and
allow for greater focus on the core constructs of courses. At the same time, there
are few case studies about the impact of institutional reform efforts focused on
accessible learning and assessments. We suspect, however, that such changes may
improve students’ ability to demonstrate how they have understood and reflected
on the material in their courses. To this end, we conclude by broadly arguing
that enhanced accessibility of assessments aligns with the public good mission of
higher education. By removing the access barriers students face in learning and
assessment, students may have enhanced opportunity to enjoy all that higher
education may provide for them as individuals and enhance their potential for
making an impact on their communities and world.
Note
1 IEPs and Section 504 plans are educational planning documents required by law for
students in primary and secondary schools in the United States. IEPs are generated by
multidisciplinary teams and not used in postsecondary settings, but Section 504 Plans
(which identify student accommodation needs based on their disability) are applicable
in postsecondary settings.
References
Americans with Disabilities Act. 1990. 28 C.F.R. §§ 36.303(b), 36.309(b)(3).
Center for Universal Design. 1997. The Principles of Universal Design. Center for Universal
Design, North Carolina State University. https://projects.ncsu.edu/ncsu/design/cud/
pubs_p/docs/poster.pdf.
Dolan, R. P., Burling, K., Harms, M., Strain-Seymour, E., Way, W., and Rose, D. H. 2013.
A Universal Design for Learning-Based Framework for Designing Accessible Technology-Enhanced
Assessments. New Jersey: Pearson. http://doi.org/10.13140/RG.2.2.16823.85922.
Griful-Freixenet, J., Struyven, K., Verstichele, M., and Andries, C. 2017. “Higher Education
Students with Disabilities Speaking Out: Perceived Barriers and Opportunities of the
Universal Design for Learning Framework.” Disability and Society 32 (10): 1627–1649.
http://doi.org/10.1080/09687599.2017.1365695.
Hanafin, J., Shevlin, M., Kenny, M., and McNeela, E. 2007. “Including Young People with Disabilities: Assessment Challenges in Higher Education.” Higher Education 54 (3): 435–488. http://doi.org/10.1007/s10734-006-9005-9.
Ketterlin Geller, L. R., Jamgochian, E. M., Nelson-Walker, N. J., and Geller, J. P. 2012.
“Disentangling Mathematics Target and Access Skills: Implications for Accommodation
Assignment Practices.” Learning Disabilities Research and Practice 27 (4): 178–188. http://
doi.org/10.1111/j.1540-5826.2012.00365.x.
Kettler, R. J. 2012. “Testing Accommodations: Theory and Research to Inform Practice.” International Journal of Disability, Development, and Education 59 (1): 53–66. http://doi.org/10.1080/1034912X.2012.654952.
Kim, W. H., and Lee, J. 2016. “The Effect of Accommodations on Academic Performance
of College Students with Disabilities.” Rehabilitation Counseling Bulletin 60 (1): 40–50.
http://doi.org/10.1177/0034355215605259.
Lewandowski, L., Wood, W., and Lambert, T. 2015. “Private Room as a Test Accommodation.” Assessment and Evaluation in Higher Education 40 (2): 279–285. https://doi.org/10.1080/02602938.2014.911243.
Lombardi, A. R., Murray, C., and Gerdes, H. 2012. “Academic Performance of First
Generation College Students with Disabilities.” Journal of College Student Development
53 (6): 811–826. http://doi.org/10.1353/csd.2012.0082.
Luttenberger, S., Wimmer, S., and Paechter, M. 2018. “Spotlight on Math Anxiety.”
Psychology Research and Behavior Management 11: 311–322. http://doi.org/10.2147/
PRBM.S141421.
Nieminen, J. H. 2022. “Governing the ‘Disabled Assessee’: A Critical Reframing of
Assessment Accommodations as Sociocultural Practices.” Disability and Society 37(8):
1293–1320. https://doi.org/10.1080/09687599.2021.1874304.
Rose, D. H., and Meyer, A. 2002. Teaching Every Student in the Digital Age: Universal Design
for Learning. Alexandria, VA: Association for Supervision and Curriculum Development.
Scott, G. A. 2011. Higher Education and Disability: Improved Federal Enforcement Needed to
Better Protect Students’ Rights to Testing Accommodations. Report to Congressional Requesters.
GAO-12-40. Washington, DC: US Government Accountability Office.
Thompson, S. J., Johnstone, C. J., and Thurlow, M. L. 2002. Universal Design Applied to
Large Scale Assessments. Synthesis Report 44. Minneapolis, MN: National Center on
Educational Outcomes.
Thurlow, M., and Bolt, S. 2001. Empirical Support for Accommodations Most Often Allowed
in State Policy (Synthesis Report 41). Minneapolis, MN: University of Minnesota,
National Center on Educational Outcomes.
US Department of Education, National Center for Education Statistics. 2021. Digest of
Education Statistics, 2019 (NCES 2021-009). https://nces.ed.gov/programs/digest/
d19/tables/dt19_311.10.asp.
US Department of Education. 2008, sec. 103. The Higher Education Opportunity Act.
Universal Design for Learning. 20 USC § 1003(24).
US Department of Justice. 2014. ADA Requirements: Testing Accommodations. https://
www.ada.gov/regs2014/testing-accommodations.pdf.
Accommodations and accessibility in assessment 141
Weis, R., and Beauchemin, E. L. 2019. “Are Separate Room Accommodations Effective
for College Students with Disabilities? Assessment and Evaluation in Higher Education 45
(5): 794–809. https://doi.org/10.1080/02602938.2019.1702922.
Wong, A., ed. 2020. Disability Visibility: First-Person Stories from the Twenty-First Century.
New York: Vintage Books.
Wu, Y.-C., Liu, K. K., Lazarus, S. S., and Thurlow, M. L. 2021. 2018–2019 APR Snapshot
#26: Students in Special Education Assigned Assessment Accommodations. National Center on
Educational Outcomes. https://ici.umn.edu/products/ODcf UaAbRUm1nIr5nftRaQ.
13
MORE THAN ASSESSMENT
TASK DESIGN
Promoting equity for students from low
socio-economic status backgrounds
DOI: 10.4324/9781003293101-16
challenges for all graduates, because they need to look to experiences beyond
what is assessed to convey their capabilities to employers (Jorre de St Jorre, Boud,
and Johnson 2021). However, the shortcomings of assessment pose a greater
problem for students from low SES backgrounds because they tend to be less
aware of opportunities to improve their employability (Doyle 2011; Greenbank
and Hepworth 2008; Harvey et al. 2017), and this contributes to disadvantage
in the graduate labour market (Li and Carroll 2019; O’Shea 2016; QILT 2019;
Richardson, Bennett, and Roberts 2016; Tomaszewski et al. 2019). Equitable
employment opportunities are essential to improving social mobility and breaking
cycles of intergenerational disadvantage for students from low SES backgrounds,
so this aspect of assessment needs to be addressed urgently.
because they more often lack awareness of the skills and experiences employers
value, or networks that can provide careers advice or connect them with relevant
opportunities (Doyle 2011; Richardson, Bennett, and Roberts 2016). Thus, it is
especially important that assessment is designed to direct this vulnerable cohort
to learning that is important to careers. Unfortunately, research has also shown
that students rarely link assessment to employability (Ajjawi et al. 2020; Kinash,
McGillivray, and Crane 2017).
As more students graduate from large cohorts, assessment that fails to capture
unique achievements becomes increasingly questionable. In addition to failing to
account for differences in opportunity and expression, homogenised assessment
that involves identical tasks for all provides students with poor opportunities to
demonstrate achievements that distinguish them from peers or predecessors with
the same or similar qualifications (Jorre de St Jorre, Boud, and Johnson 2021;
Jorre de St Jorre and Oliver 2018). Instead of providing opportunities for distinctive
achievement, common assessment practices encourage “sameness” which,
beyond the necessary purpose of assuring threshold achievements, has little additional
value to students, employers, or society.
Graduates with the same or similar qualifications do not all need to have
the same strengths, because they will inevitably gain different roles in which
different subsets of skills and personal attributes are most valued. Unlike assess-
ment, employers judge graduates based on different characteristics and standards,
because their preferences and the requirements of different job roles and organ-
isations are highly variable. Thus, the ideal candidate for one employer will not
necessarily be the best candidate for another.
Given that assessment signals what is important, what does assessment that
values sameness say about the value of diversity in the workplace, our society
and our learning environments? In requiring that students perform the same
tasks and be judged against the same standards, homogenised assessment fails to
acknowledge the value of different perspectives, skills, personal attributes and
experience. This is in direct contrast with professional contexts in which indi-
vidual differences can be a valuable source of competitive advantage, and diverse
collaborations can be leveraged to solve complex problems, drive innovation and
build new knowledge (Adams et al. 2011; Brown, Hesketh, and Williams 2004).
To enable students to utilise assessment for distinctiveness, we also need to
rethink the ways in which we enable them to verify and portray their personal
achievements to different audiences, for different purposes (Jorre de St Jorre,
Boud, and Johnson 2021). For example, representation of achievement through
academic transcripts provides insufficient detail to enable identification of what a
graduate can do. Likewise, where university awards are solely grades-based (e.g.,
based on a Grade Point Average), they provide no context for what was achieved,
and only recognise a small number of students, rather than all of those who meet
a specific standard. Digital credentials can, however, be constructed to convey
the context of achievement, including the standards assessed, and rich artefacts
curated by students to evidence their achievements, such as portfolios or videos
148 T. Jorre de St Jorre and D. Boud
(Miller et al. 2017). Valuing distinctiveness may require students from non-
traditional backgrounds to be reassured that they do not need to always conform
to the norm.
that experiences in the workplace can change how students approach learning on
campus, because they help students to understand the relevance of their skills and
knowledge, and orientate them to careers (Johnson and Rice 2016). Other research
examining students’ experiences of extra-curricular strategies designed to recognise
and engage students in articulating and evidencing capabilities of importance
to employability (i.e. video pitches and digital credentials requiring students to
curate portfolios) has shown that students can gain confidence – in themselves,
their employability and in their ability to articulate themselves to employers –
and greater appreciation for learning throughout their degree (Jorre de St Jorre,
Johnson, and O’Dea 2017). While the majority of students enrol in higher
education for employment-related reasons, employment outcomes are particularly
important to students from low SES backgrounds (Raciti 2019).
Assessment that emphasises the relevance of learning outcomes to careers may
also contribute to students’ sense of belonging. Students have been shown to
perceive teachers who emphasise employability as caring (Jorre de St Jorre and
Oliver 2018). Positive correlations have been observed between students’ perceptions
of their employability and the employability skills, knowledge and attitudes
they acquired through completing their degree (de Oliveira Silva et al. 2019).
Thus, in addition to ensuring that students from low SES backgrounds
proactively engage in activities that are important to expanding their
understanding and development of employability, assessment which develops
students’ professional identity, such as through simulation or modelling activi-
ties, will likely also contribute to how they value and engage with their broader
learning experience and with the assessment itself.
Conclusion
Assessment needs to ensure that all students meet appropriately high standards.
However, it must do so in ways that neither privilege certain social groups
nor place unnecessary barriers in the way of students
meeting these standards. Inclusive assessment means not giving hidden advantage
to those who have already benefited. Consideration of assessment for inclusion
also provides an opportunity to rethink what is needed to motivate students and
engage them in activities which aid their employability.
References
Adams, R., Evangelou, D., English, L., Figueiredo, A., Mousoulides, N., Pawley,
A. L., and Schiefellite, C. et al. 2011. “Multiple Perspectives on Engaging Future
Engineers.” Journal of Engineering Education 100 (1): 48–88. https://doi.org/10.1002/
j.2168-9830.2011.tb00004.x.
Ajjawi, R., Tai, J., Huu Nghia, T. L., Boud, D., Johnson, L., and Patrick, C.-J. 2020.
“Aligning Assessment with the Needs of Work-Integrated Learning: The Challenges
of Authentic Assessment in a Complex Context.” Assessment and Evaluation in Higher
Education 45 (2): 304–316. http://doi.org/10.1080/02602938.2019.1639613.
Bearman, M., Dawson, P., Bennett, S., Hall, M., Molloy, E., Boud, D., and Joughin, G.
2017. “How University Teachers Design Assessments: A Cross-Disciplinary Study.”
Higher Education 74 (1): 49–64. http://doi.org/10.1007/s10734-016-0027-7.
Brown, P., Hesketh, A., and Williams, S. 2004. The Mismanagement of Talent: Employability
and Jobs in the Knowledge Economy. New York: Oxford University Press.
Bullen, J., and Flavell, H. 2022. “Decolonising the Indigenised Curricula: Preparing
Australian Graduates for a Workplace and World in Flux.” Higher Education Research
and Development 41(5): 1402–1416. http://doi.org/10.1080/07294360.2021.1927998.
Burke, P. J., Bennett, A., Burgess, C., Gray, K., and Southgate, E. 2016. Capability,
Belonging and Equity in Higher Education: Developing Inclusive Approaches. Accessed 23
September 2021. https://www.ncsehe.edu.au/publications/capability-belonging-and-
equity-in-higher-education-developing-inclusive-approaches/.
de Oliveira Silva, J. H., de Sousa Mendes, G. H., Ganga, G. M. D., Mergulhão, R. C.,
and Lizarelli, F. L. 2019. “Antecedents and Consequents of Student Satisfaction in
Higher Technical-Vocational Education: Evidence from Brazil.” International Journal
for Educational and Vocational Guidance. http://doi.org/10.1007/s10775-019-09407-1.
Devlin, M., Kift, S., Nelson, K., Smith, L., and McKay, J. 2012. Effective Teaching and
Support of Students from Low Socioeconomic Status Backgrounds: Practical Advice for Teaching
Staff. Accessed September 23, 2021. https://www.lowses.edu.au/assets/Practical%20
Advice%20for%20Teaching%20Staff.pdf.
Devlin, M., and McKay, J. 2017. Facilitating Success for Students from Low Socioeconomic Status
Backgrounds at Regional Universities. Ballarat: Federation University Australia. https://
www.ncsehe.edu.au/wp-content/uploads/2018/05/55_Federation_MarciaDevlin_
Accessible_PDF.pdf.
Doyle, E. 2011. “Career Development Needs of Low Socio-Economic Status University
Students.” Australian Journal of Career Development 20 (3): 56–65. http://doi.org/10.1177/
103841621102000309.
Eccles, J. 2009. “Who Am I and What Am I Going to Do with My Life? Personal and
Collective Identities as Motivators of Action.” Educational Psychologist 44 (2): 78–89.
http://doi.org/10.1080/00461520902832368.
Greenbank, P., and Hepworth, S. 2008. Working Class Students and the Career Decision-
Making Process: A Qualitative Study. Great Britain: Higher Education Careers Services
Unit (HECSU). http://hdl.voced.edu.au/10707/195946.
Hailikari, T., Postareff, L., Tuononen, T., Räisänen, M., and Lindblom-Ylänne, S. 2014.
“Students’ and Teachers’ Perceptions of Fairness of Assessment.” In Advances and
Innovations in University Assessment and Feedback: A Festschrift in Honour of Professor Dai
Hounsell, edited by Caroline Kreber, Charles Anderson, Noel Entwistle, and Jan
McArthur, 99–113. Edinburgh: Edinburgh University Press. http://doi.org/10.3366/
edinburgh/9780748694549.003.0006.
Harvey, A., Andrewartha, L., Edwards, D., Clarke, J., and Reyes, K. 2017. Student
Equity and Employability in Higher Education. Melbourne: Centre for Higher Education
Equity and Diversity Research. https://www.ncsehe.edu.au/wp-content/uploads/2018/
06/73_LaTrobe_AndrewHarvey_Accessible_PDF.pdf.
HEFCE. 2015. Causes of Differences in Student Outcomes. Report to HEFCE by Kings
College London, ARC Network and The University of Manchester. https://dera.ioe.
ac.uk/23653/1/HEFCE2015_diffout.pdf.
Johnson, E., and Rice, J. 2016. WIL in Science: Leadership for WIL Final Report. https://
www.chiefscientist.gov.au/2017/05/report-work-integrated-learning-in-science-
leadership-for-wil/.
Jorre de St Jorre, T., Boud, D., and Johnson, E. D. 2021. “Assessment for Distinctiveness:
Recognising Diversity of Accomplishments.” Studies in Higher Education 46 (7):
1371–1382. http://doi.org/10.1080/03075079.2019.1689385.
Jorre de St Jorre, T., Johnson, L., and O’Dea, G. 2017. “Me in a Minute: A Simple Strategy
for Developing and Showcasing Personal Employability.” In The Me, Us, IT! Proceedings
ASCILITE2017: 34th International Conference on Innovation, Practice and Research in the Use
of Educational Technologies in Tertiary Education. Toowoomba.
Jorre de St Jorre, T., and Oliver, B. 2018. “Want Students to Engage? Contextualise
Graduate Learning Outcomes and Assess for Employability.” Higher Education Research
and Development 37 (1): 44–57. https://doi.org/10.1080/07294360.2017.1339183.
Kahu, E. R. 2013. “Framing Student Engagement in Higher Education.” Studies in Higher
Education 38 (5): 758–773. https://doi.org/10.1080/03075079.2011.598505.
Kift, S. 2015. “A Decade of Transition Pedagogy: A Quantum Leap in Conceptualising
the First Year Experience.” HERDSA Review of Higher Education 2: 51–86. www.
herdsa.org.au/publications/journals/herdsa-review-higher-education-vol-2.
Kinash, S., McGillivray, L., and Crane, L. 2017. “Do University Students, Alumni,
Educators and Employers Link Assessment and Graduate Employability?” Higher
Education Research and Development 37 (2): 301–315. http://doi.org/10.1080/07294360.
2017.1370439.
Knight, P. T. 2002. “The Achilles Heel of Quality: The Assessment of Student Learning.”
Quality in Higher Education 8 (1): 107–116. https://doi.org/10.1080/13538320220127506.
Leathwood, C. 2005. “Assessment Policy and Practice in Higher Education: Purpose,
Standards and Equity.” Assessment and Evaluation in Higher Education 30 (3): 307–324.
https://doi.org/10.1080/02602930500063876.
Li, I. W., and Carroll, D. R. 2019. “Factors Influencing Dropout and Academic Performance:
An Australian Higher Education Equity Perspective.” Journal of Higher Education Policy
and Management 42 (1): 14–30. https://doi.org/10.1080/1360080X.2019.1649993.
Lucey, C. R., Hauer, K. E., Boatright, D., and Fernandez, A. 2020. “Medical Education’s
Wicked Problem: Achieving Equity in Assessment for Medical Learners.” Academic
Medicine 95 (12S): S98–S108. http://doi.org/10.1097/ACM.0000000000003717.
Miller, K. K., Jorre de St Jorre, T., West, J. M., and Johnson, E. D. 2017. “The Potential
of Digital Credentials to Engage Students with Capabilities of Importance to Scholars
and Citizens.” Active Learning in Higher Education 21 (1): 11–22. http://doi.org/10.1177/
1469787417742021.
Naylor, R., Baik, C., and Arkoudis, S. 2018. “Identifying Attrition Risk Based on the
First Year Experience.” Higher Education Research and Development 37 (2): 328–342.
http://doi.org/10.1080/07294360.2017.1370438.
Nguyen, H.-H. D., and Ryan, A. M. 2008. “Does Stereotype Threat Affect Test
Performance of Minorities and Women? A Meta-Analysis of Experimental Evidence.”
Journal of Applied Psychology 93 (6): 1314–1334. http://doi.org/10.1037/a0012702.
O’Shea, S. 2016. ‘Mind the Gap!’ Exploring the Postgraduation Outcomes and Employment
Mobility of Individuals Who Are First in Their Family to Complete a University Degree. Perth:
National Centre for Student Equity in Higher Education. https://www.ncsehe.edu.au/
wp-content/uploads/2020/01/OShea_ResearchFellowship_FINALREPORT_.pdf.
QILT. 2019. 2018 Graduate Outcomes Survey. National Report. https://www.qilt.edu.au/
resources?survey=GOS&type=Reports.
Raciti, M. 2019. Career Construction, Future Work and the Perceived Risks of Going to University
for Young People from Low SES Backgrounds. Research Fellowship Final Report. Perth:
National Centre for Student Equity in Higher Education (NCSEHE).
Richardson, S., Bennett, D., and Roberts, L. 2016. Investigating the Relationship between
Equity and Graduate Outcomes in Australia. Perth: National Centre for Student Equity
in Higher Education.
Sadler, D. R. 2009a. “Grade Integrity and the Representation of Academic Achievement.”
Studies in Higher Education 34 (7): 807–826. http://doi.org/10.1080/03075070802706553.
Sadler, D. R. 2009b. “Indeterminacy in the Use of Preset Criteria for Assessment and
Grading.” Assessment and Evaluation in Higher Education 34 (2): 159–179. http://doi.
org/10.1080/02602930801956059.
Shay, S. 2008. “Beyond Social Constructivist Perspectives on Assessment: The Centring
of Knowledge.” Teaching in Higher Education 13 (5): 595–605. http://doi.org/10.1080/
13562510802334970.
Stowell, M. 2004. “Equity, Justice and Standards: Assessment Decision Making in Higher
Education.” Assessment and Evaluation in Higher Education 29 (4): 495–510. http://doi.
org/10.1080/02602930310001689055.
Tomaszewski, W., Perales, F., Xiang, N., and Kubler, M. 2019. Beyond Graduation: Long-
Term Socioeconomic Outcomes Amongst Equity Students. https://www.ncsehe.edu.au/wp-
content/uploads/2019/08/Tomaszewski_UQ_Final_Accessible_9_8.pdf.
Woolf, H. 2004. “Assessment Criteria: Reflections on Current Practices.” Assessment and
Evaluation in Higher Education 29 (4): 479–493. http://doi.org/10.1080/026029303100
01689046.
Yorke, M. 2011. “Summative Assessment: Dealing with the ‘Measurement Fallacy.’” Studies
in Higher Education 36 (3): 251–273. https://doi.org/10.1080/03075070903545082.
14
ASSESSING EMPLOYABILITY SKILLS
How are current assessment practices
“fair” for international students?
Thanh Pham
Introduction
International education plays a significant role in Australia’s economy. The sec-
tor contributed A$40.3 billion to Australia’s economy in 2019 (UK Department
of Education 2019). However, Australia’s position in the international educa-
tion market has been threatened because both traditional (e.g., the UK) and
non-traditional (e.g., Asian countries) immigration countries have actively
launched policies to attract and retain highly skilled migrants (Czaika 2018).
Post-study career prospects are a key goal for many international students.
Therefore, to become more competitive in the international education mar-
ket, Australian higher education needs to better ensure international students’
employability outcomes.
In fact, the Australian Government has recently targeted graduate employ-
ability as its key priority by linking university performance-based funding
directly to employment outcomes (Wellings et al. 2019). Universities have adopted
a skills-based approach that emphasises the need for students to learn a range
of professional skills (e.g., communication, teamwork) as a “solution” to enhance
students’ education-to-work transition. Although this approach has been widely
applied, professional skills are still perceived as supplementary to the curriculum
or part of work-integrated learning units. Consequently, insufficient attention
has been paid to how students’ professional skills could be assessed properly.
Importantly, current practices designed to assess students’ professional skills
disadvantage international students in various ways.
This chapter aims to critically discuss how current practices for assessing
employability skills disadvantage international students. The chapter has three
main parts. It starts with a discussion about how employability skills have
been implemented and assessed in higher education. It then discusses common
DOI: 10.4324/9781003293101-17
skill depending on the context, background, expertise and position of the inter-
preter. In the workplace, the process of matching expert knowledge with occu-
pational recruitment and roles does not take place in a social vacuum, with all
skills and attributes heavily raced, classed, and gendered (Brown 1995; Morrison
2014; Tholen 2015). This means that the skills students learn in higher education
are often interpreted and used differently in the workplace, even though they
go by the same name.
In the case of international students, a range of studies exploring their
employability have reported numerous problems related to
employability skills facing this cohort. They have been reported to have limited
English proficiency, low-level communication skills and limitations in a range of
Western personal values such as being proactive, critical, innovative, and independent
(Blackmore, Gribble, and Rahimi 2017). They therefore need additional
assistance in order to excel in their studies and to gain the most from their overseas
study experiences (Briguglio and Smith 2012). When international graduates
enter the workforce, they have been described as having similar problems.
For instance, common comments about international, especially Asian, students
are that they are “not active”, “unconfident”, and “not critical”. Specifically,
Howells, Westerveld, and Garvis (2017) found that workplace supervisors com-
plained that international students, particularly those from Asia, were disengaged
because they did not ask questions.
Amongst employability skills, the skill that international students have received
the most complaints about is communication (Pham 2021a). Communication skills
are often interpreted as linguistic skills, so understood as cognitive dispositions
(Blomeke, Gustafsson, and Shavelson 2015). Communication skills are, therefore,
measured using standardised written and oral tests. In these tests, common problems
facing international students are their “heavy” accents and limited vocabularies.
They often cannot accurately pronounce sounds and phonemes that do not exist in their
own language. For instance, Asian students from certain regions often have
difficulty with, and inaccurate pronunciation of, the “r”, “th”, and “w” sounds. Some
students are noted to have an “awkward” accent which can be hard to understand
(Barton et al. 2017). This is because British and American English accents are the most
preferred. These accents are reported as “clear”, “intelligible”, and representative
of “world standards” (Ngoc 2016). Those who do not possess an accent familiar to
British and American English speakers often experience difficulties in interacting
with other people. The second problem often facing international students is their
limited written and technical vocabulary. Consequently, it is hard for them to
write and communicate in a natural way. In daily practice, their difficulties are
amplified because native English speakers often use slang terms such as “grab a
cuppa”, “calling the roll”, and “put your hands up” (Barton et al. 2017), which are
not taught in official teaching and learning programs.
Another line of research argues that communication competencies should
include a range of components, including discourse (the capacity to speak and write in a suitable
context), actional (the capacity to convey communicative intent), sociocultural
I do not know why, it was the same “me” who had the same level of English,
but sometimes I could talk and sometimes I could not. I always got tongue-
tied when meeting my colleagues. They never tried to understand me, but
kept saying “pardon”, which made me lose confidence in my English.
Therefore, Littlewood (2000) claims that Asian classrooms may indeed appear
relatively “static” in comparison to those of the Anglophone West. However,
just because the students operate in a receptive mode does not imply that they
are any less engaged. Conversely, just because students in Anglophone Western
classrooms are seen to be verbally participatory, this does not necessarily guaran-
tee that learning is actually taking place. For instance, in her study, Pham (2014)
reported that Asian students found it astonishing and culturally inappropriate
when Australian students interrupted someone who was talking to make a point
or ask a very simple question when they could just have kept quiet and found out
from their classmates at a later time. As such, it appears that each specific learn-
ing context has its own explicit and tacit rules to define what should be called
“active”, “critical”, and “confident”.
When competence-oriented examinations are applied, these assessment practices
disadvantage international students because they rarely embrace the deep cultural
values of international students, instead spotlighting these students’ limited command
of multiple aspects of what Bourdieu (1986) calls “cultural capital”. Specifically,
Bourdieu (1986) discusses two aspects of cultural capital. He claims that cultural
capital carries standardised values, which are legalised and institutionalised, and
embodied values, which refer to one’s preference or perception of the “correct”
way of doing things. While people may possess the same standardised values, it is
often the case that only the dominant groups’ embodied values are acknowledged
and validated. Regarding communication, Bourdieu (1992) highlighted two
components: linguistic skill, which refers to the use of standardised grammatical
structure, and legitimate language skills, which describes “the social capacity
to use the linguistic capacity adequately in a determinate situation” (as cited
in Cederberg 2015, 41). International students might be aware of the embodied
values and use legitimate language skills. However, they often cannot read
non-verbal language when working with people from different backgrounds,
which leads to further problems in their communication ability. Therefore, it is
very common that they struggle to find “proper” behaviours, shared interests,
and values when conducting conversations with local people. They often experi-
ence mishaps, described by Cultural Savvy (2003) as “hitting [an] iceberg”, when
venturing into different cultures without adequate preparation. This leads to
international students feeling left out and failing to engage their local friends in
small talk to build relationships. Such failures are not necessarily due to limited
English proficiencies but more about preferences and “ways of doing things”.
References
Barton, G., Hartwig, K., Joseph, D., and Podorova, A. 2017. “Practicum for International
Students in Teacher Education Programs: An Investigation of Three University
Sites through Multisocialisation, Interculturalisation and Reflection.” In Professional
Learning in the Work Place for International Students: Exploring Theory and Practice, edited
by Georgina Barton and Kay Hartwig, 129–146. Cham: Springer.
Blackmore, J., Gribble, C., and Rahimi, M. 2017. “International Education, the
Formation of Capital and Graduate Employment: Chinese Accounting Graduates’
Experiences of the Australian Labour Market.” Critical Studies in Education 58 (1):
69–88. http://doi.org/10.1080/17508487.2015.1117505.
Blomeke, S., Gustafsson, J.-E., and Shavelson, R. 2015. “Beyond Dichotomies:
Competence Viewed as a Continuum.” Zeitschrift fur Psychologie 223 (1): 3–13. http://
doi.org/10.1027/2151-2604/a000194.
Bourdieu, P. 1986. “The Forms of Capital.” In Handbook of Theory and Research for
the Sociology of Education, edited by John G. Richardson, 241–259. New York:
Greenwood Press.
Bourdieu, P. 1992. Language and Symbolic Power. Cambridge, UK: Polity.
Braun, E. 2021. “Performance-Based Assessment of Students’ Communication Skills.”
International Journal of Chinese Education 10 (1): 1–12. https://doi.org/10.1177/22125
868211006202.
Braun, E., and Mishra, S. 2016. “Methods for Assessing Competences in Higher
Education: A Comparative Review” In Theory and Method in Higher Education Research
Vol. 2, edited by Jeroen Huisman and Malcolm Tight, 47–68. Bingley: Emerald
Group. https://doi.org/10.1108/S2056-375220160000002003.
Briguglio, C., and Smith, R. 2012. “Perceptions of Chinese Students in an Australian
University: Are We Meeting Their Needs?” Asia Pacific Journal of Education 32 (1):
17–33. https://doi.org/10.1080/02188791.2012.655237.
Brown, P. 1995. “Cultural Capital and Social Exclusion: Some Observations on Recent
Trends in Education, Employment and the Labour Market.” Work, Employment and
Society 9 (1): 29–51. https://doi.org/10.1177/095001709591002.
Campbell, M., and Zegwaard, K. E. 2011. “Ethical Considerations and Workplace
Values in Cooperative and Work-Integrated Education.” In International Handbook for
Cooperative and Work-Integrated Education: International Perspectives of Theory, Research
and Practice, rev. ed., edited by Richard K. Coll, and Karsten E. Zegwaard, 363–369.
Lowell, MA: World Association for Cooperative Education.
Cederberg, M. 2015. “Embodied Cultural Capital and the Study of Ethnic Inequalities.”
In Migrant Capital: Networks, Identities and Strategies, edited by Louise Ryan, Umut Erel,
and Alessio D’Angelo, 33–47. Basingstoke, Hampshire; New York, NY: Palgrave
Macmillan.
Celce-Murcia, M., Dörnyei, Z., and Thurrell, S. 1995. “Communicative Competence:
A Pedagogically Motivated Model with Content Specifications.” Issues in Applied
Linguistics 6 (2): 5–35. https://doi.org/10.5070/L462005216.
Clanchy, J., and Ballard, B. 1995. “Generic Skills in the Context of Higher Education.”
Higher Education Research and Development 14 (2): 156–166. https://doi.org/10.1080/
0729436950140202.
Clements, M. D., and Cord, B. A. 2013. “Assessment Guiding Learning: Developing
Graduate Qualities in an Experiential Learning Program.” Assessment and Evaluation
in Higher Education 38 (1): 114–124. http://doi.org/10.1080/02602938.2011.609314.
Cortazzi, M., and Jin, L. 1996. “Cultures of Learning: Language Classrooms in China.”
In Society and the Language Classroom, edited by Hywel Coleman, 169–206. New York:
Cambridge University Press.
Cultural Savvy. 2003. “What is Culture? How Does It Impact Everything We Do?”
Accessed July 23, 2022. http://www.culturalsavvy.com/culture.htm.
Assessing employability skills 161
Ngoc, B. D. 2016. “To Employ or Not to Employ Expatriate Non-Native Speaker Teachers:
Views from within.” Asian Englishes 18 (1): 67–79. http://doi.org/10.1080/13488678.
2015.1132112.
Pham, T. 2014. Implementing Cross-Culture Pedagogies: Cooperative Learning at Confucian
Heritage Cultures. Singapore: Springer.
Pham, T. 2021a. “Communication Competencies and International Graduates’
Employability Outcomes: Strategies to Navigate the Host Labour Market.” Journal of
International Migration and Integration 23: 733–749. https://doi.org/10.1007/s12134-021-00869-3.
Pham, T. 2021b. “Conceptualising the Employability Agency of International Graduates.”
Working Paper 75, Centre for Global Higher Education. Oxford: University of Oxford.
https://www.researchcghe.org/perch/resources/publications/working-paper-
75-1.pdf.
Pham, T. 2021c. “Developing Effective Global Pedagogies in Western Classrooms:
A Need to Understand the Internationalization Process of Confucian Heritage
Cultures (CHC) Students.” In Transforming Pedagogies through Engagement with Learners,
Teachers and Communities, edited by Bao Dat and Thanh Pham, 37–52. Singapore:
Springer.
Pham, T., and Saito, E. 2019. “Teaching towards Graduate Attributes: Is This a Solution
for Employability of University Students in Australia?” In Innovate Higher Education to
Enhance Graduate Employability, edited by Hong T. M. Bui, Hoa T. M. Nguyen, and
Doug Cole, 109–121. New York: Routledge.
Röhner, J., and Schütz, A. 2015. Psychologie der Kommunikation. Wiesbaden: Springer VS.
http://doi.org/10.1007/978-3-531-18891-1.
Ryan, J., and Louie, K. 2006. “False Dichotomy? ‘Western’ and ‘Confucian’ Concepts
of Scholarship and Learning.” Educational Philosophy and Theory 39 (4): 404–417.
http://doi.org/10.1111/j.1469-5812.2007.00347.x.
Sancho, J. M. 2008. “Opening Students’ Minds.” In Researching International Pedagogies,
edited by Meeri Hellstén, and Anna Reid, 259–276. Dordrecht, The Netherlands:
Springer. https://doi.org/10.1007/978-1-4020-8858-2_16.
Shavelson, R. J., Zlatkin-Troitschanskaia, O., and Mariño, J. P. 2018. “International
Performance Assessment of Learning in Higher Education (iPAL): Research and
Development.” In Assessment of Learning Outcomes in Higher Education: Cross-National
Comparisons and Perspectives Assessment, edited by Olga Zlatkin-Troitschanskaia,
Miriam Toepper, Hans Anand Pant, Corinna Lautenbach, and Christiane Kuhn,
193–214. Cham: Springer International. http://doi.org/10.1007/978-3-319-74338-
7_10.
Singh, M. 2009. “Connecting Intellectual Projects in China and Australia.” Australian
Journal of Education 54 (1): 31–45. http://doi.org/10.1177/000494411005400104.
Tholen, G. 2015. “What Can Research into Graduate Employability Tell Us about
Agency and Structure?” British Journal of Sociology of Education 36 (5): 766–784. https://
doi.org/10.1080/01425692.2013.847782.
Tomlinson, M. 2017. “Forms of Graduate Capital and Their Relationship to Graduate
Employability.” Education + Training 59 (4): 338–352. http://doi.org/10.1108/ET-05-
2016-0090.
UK Department for Education. 2019. International Education Strategy: Global Potential, Global
Growth. https://www.gov.uk/government/publications/international-education-
strategy-global-potential-global-growth/international-education-strategy-global-
potential-global-growth.
Wellings, P., Black, R., Craven, G., Freshwater, D., and Harding, S. 2019. Performance-
Based Funding for the Commonwealth Grant Scheme. Accessed January 20, 2022.
https://www.dese.gov.au/higher-education-funding/performancebased-funding-
commonwealth-grant-scheme.
Yorke, M. 2004. Employability and Higher Education: What It Is – and What It Is Not. York,
England: Higher Education Academy. https://www.advance-he.ac.uk/knowledge-
hub/employability-higher-education-what-it-what-it-not.
Zipin, L. 2005. “Dark Funds of Knowledge, Deep Funds of Pedagogy: Exploring
Boundaries between Lifeworlds and Schools.” Discourse: Studies in the Cultural Politics
of Education 30 (3): 317–331. https://doi.org/10.1080/01596300903037044.
SECTION III
Micro contexts of
assessment for inclusion:
Educators, students, and
interpersonal perspectives
15
HOW DO WE ASSESS FOR
“SUCCESS”? CHALLENGING
ASSUMPTIONS OF SUCCESS
IN THE PURSUIT OF INCLUSIVE
ASSESSMENT
Sarah O’Shea and Janine Delahunty
Introduction
“Success” at university is largely defined by calculations of “high” marks or
grades derived from assessable academic activities. While there is a sense of
personal achievement in “passing” assessments, these measures of academic success
alone have become too narrow; yet they remain largely unquestioned within
the higher education environment. The relationship between “success” and
grades needs further interrogation, particularly for students who have returned
to university after a significant break in formal study. For older learners, “success”
is not exclusively academic, but is often defined through complex combinations
of life experience and alternative rationales for participating in university.
In a recent national study, we asked students how they personally defined
“success” at university. Their answers were illuminating, revealing that “success”,
as a taken-for-granted term, is highly diverse in its application, including how
it is perceived and valued. Surprisingly, in the educational literature there is limited
explicit focus on how the concept of success is individually understood, trans-
lated, and enacted. Drawing attention to this, the chapter provides a summary
overview of how success was constructed and defined through the reflections of
first in family students. Only by focussing on, and unpacking the value of, higher
education participation as defined by students themselves, can we begin to trou-
ble the ways in which assessment is traditionally constructed and designed. In
revealing tensions around understandings of “success”, this chapter is designed
to prompt thinking about how, as teaching and learning practitioners, we might
redefine assessment practices that consider success in more multi-faceted and
inclusive ways.
DOI: 10.4324/9781003293101-19
168 S. O’Shea and J. Delahunty
Success as a construction
Understandings of academic success are largely unquestioned within higher
education. Success has been problematically constructed as academic achievement
and progression through a degree, overlaid with expectations of a linear,
uninterrupted, barrier-free passage to completion, armed with knowledge of
the implicit “rules” of the game (Bathmaker, Ingram, and Waller 2013; Tinto
2021). However, given the diversity of our student populations and the sometimes
complex circumstances they exist within, unpacking and deconstructing
taken-for-granted notions of “success” can help identify and eliminate potential
barriers. Rather than perceiving “success” as a contractual arrangement that
requires judging the value or merit of a student’s performance, more nuanced,
individualised notions of success are needed.
Research and literature on success indicate highly subjective variations in
meaning. Conceptions of academic success can deviate between teaching staff and
students, such as the polarised understandings of barriers to achieving success highlighted
by Dean and Camp (1998). These authors identified how academic staff
considered success to be determined by students’ attitudes and motivations, while
for students, external factors were the biggest influences on success,
with success more akin to “general life satisfaction” (10). In a similar vein, Tinto
(2021) highlights the internal–external tension of students “wanting to persist” as
distinct from “being able to persist” (7), and the responsibility of institutional support
in removing barriers that thwart students’ actual capacity to achieve.
Whilst research indicates some of the complexities of what constitutes success,
we argue that this complexity is exacerbated for students from equity backgrounds
accessing various pathways into higher education. For example, pathways such
as open access colleges may emphasise non-normative measures of student suc-
cess or academic achievement. In recognition of this variety, there have been
calls for alternative understandings or measures of success, which “acknowledge
the unique complexities, challenges and material conditions” of specific student
groups (Sullivan 2008, 629). Undoubtedly perceptions of success are intertwined
with preconceived ideas of what constitutes a “good” grade or the ways in which
success is measured (Yazedjian et al. 2008). This chapter seeks to consider how
alternative conceptions of success should inform and influence the objectives
and design of assessment items. Building upon previous publications which have
unpacked notions of academic success from the perspectives of equity-intersected
learners, we argue that the term success cannot be assumed to have a common
meaning, nor to be embedded within normalised discourses of meritocracy
(Delahunty and O’Shea 2019; O’Shea and Delahunty 2018).
theme, general reference to “passing” (e.g., “scores that reflect your best effort”,
“marks I’m proud of ”) was made by 23 participants, rather than as a specific goal
(e.g., “success is achieving high grades/GPA/distinction average”). Thus, we turn
attention to the more contested nature of success. These first in family learners
repeatedly linked their own success to the satisfaction they gained, often artic-
ulated in emotional terms through the embodiment of persevering or achieving
personal goals, rather than through detached academic measures:
This is not to say that grades or marks were considered unimportant or irrelevant.
Interestingly, for some students grading provided a form of “external” validation
of their entitlement to be enrolled and many were performing as well as, if not
better, than they had anticipated. Similar to others, Danielle was unsure about
openly defining herself as successful, preferring instead to defer to external valida-
tions gained from lecturers, peers, and assignment feedback as “proof ” of her suc-
cess in achieving an acceptable academic standard, as the following insight shows,
Having lecturers say …“This piece of work was so good that you should
actually use it in real life, like submit that to a government committee”;
that’s the best feedback that I could ever get in my life and then that
makes me think that yeah, you know, I am actually really successful in
what I’m doing
(Danielle, 32, 3rd year, Online, LSES)
I don’t really like to toot my horn but looking at what I’ve done and
achieved and how much people have said to me, like, ‘You’re doing really,
really well’. Yes, I do [define myself as successful].
I do aim for HDs, but I think it’s important to realise that sometimes, not
achieving in line with your expectations is a lesson in humility
(Female Survey Respondent, 31–40, 3rd year)
Repeatedly, there was a delineation between how success was constructed by individual
learners and how it was framed in institutional or political discourses. For these participants,
success was contextualised and informed by wider social and economic
factors, rather than simply attributed to the meritocratic skill set of the learner. The
dichotomous nature of this term was most clearly articulated when participants reflected
on what success was not, or even defined the act of failing in terms of success.
I have only failed one class and then from failing that one class, I have got
distinctions or high distinctions in all my other classes and also that class
when I redid it plus I’m finishing uni which I think is quite an achievement
with two children and working full-time.
(Dyahn, 25, 4th Year, LSES)
Failing was intricately bound up with success, one seemingly could not exist
independent of the other. For some not failing was an indication of success:
“Yeah I guess [I am successful], I’ve never failed anything” (Lisa, 21, 4th year).
However, experience of failure was sometimes a “wake-up call” which acted as
a catalyst for change,
I was going to major in Economics but I actually didn’t do very well with
the prerequisite classes last year so I failed Management and Finance which
was all part of that wake-up call of thinking “Yeah, I’m going to be a lot
more happy if I just follow my passion and don’t worry about other people’s
perceptions of me so much.”
(Thomas, 20, final year)
Being successful was also defined by what it is not, with participants defying normative
assumptions of success by taking a particular stance against them. For one student success
was “not about getting a job … it’s about completing something that I never
thought possible” (Heather, 59, final year); for another: “I don’t think success
is 2.5 kids and a house” (Female Survey Respondent, 26–30, 5th year, LSES,
Rural). Other success-is-not definitions included downplaying grades as the
most important measure,
Not just going to university because you have to, but going because you
learn things that make you curious and inspired. It’s not necessarily about
getting great grades or succeeding all the time, but about learning from
your mistakes and becoming more resilient
(Female Survey Respondent, 26–30, 5th year)
a leap of faith, and there is no guarantee that after the leap, you won’t go
splat, no matter the amount of preparation, enthusiasm, and confidence
you bring to the task
(208)
However, such a shift is needed in order to explore how we might create the best
possible environments in which learning is emphasised, and where each student,
regardless of background, has “equitable opportunities to demonstrate their mas-
tery of course content and skills” (Chu 2020, 164).
Students, released from the anxiety associated with a grade judgement of their
performance, are likely to be more willing to exercise creativity, to be more
adventurous, and to self-identify weaknesses or areas they would like to improve.
Learners, not defined only by meritocracy, may also be more willing to seek
feedback and consequently better understand its value. They may
even “fail” or perform poorly sometimes, as many diverse learners must when
other life priorities demand attention. There are few
places in the higher education curriculum where learning and failure co-exist
as opportunities for success; however, “failure” can present some of our most
memorable and transformational learning experiences, particularly when failure
is not framed as a source of embarrassment or fear.
Assessment for inclusion, therefore, must take account of intersecting equity
factors that may impact on an assumed linear pathway through a program of
study to completion. For many diverse students, the assumption of such line-
arity in their learning journeys is an unrealistic one (see Crawford, Chapter 16;
Delahunty 2022; O’Shea 2014, 2020). Students leading complex lives may need
to miss classes or limit time on tasks due to competing priorities and this should
not be interpreted as lacking in academic abilities or motivation. As adults they
are best placed to make such judgements regarding their commitments or per-
sonal care (Schulz-Bergin 2020), and should not be penalised for the impact that
external pressures place on their time, well-being or capacity to achieve.
Bourke (Chapter 17) emphasises that in many assessment approaches students’
attention is directed “to ‘proving’ what they know and can apply, rather than on
‘improving’ the way they learn” (p. 190). We know that a focus on grades does not
incentivise learning or motivate students towards deep learning; is neither meaningful
nor indicative of the learning taking place (Gibbs 2020; Stommel 2020); does not
allow for failure (Chu 2020); leads to gaming the system or corner-cutting (Blum
2020); and does not encompass the various goals of learning (Gibbs 2020). This critical
perspective challenges educators to consider how current models of teaching and
assessment that are apparently designed to support students in fact fail to “meet the
needs of diverse students” and “fail to promote equity” (Blum 2020, 227).
Perhaps the biggest stumbling block to assessment for inclusion is the assumption
that assessment must be coupled with grading. Stommel (2020) is careful to distinguish
assessment and grading as distinctly different things, arguing that “spending less
time on grading does not mean spending less time on assessment” (36) and that
while assessment is inevitable, deeply considering the need to include grading
forces us to question “our assumptions about what assessment looks like, how
we do it, and who it is for” (36). Instead of preconceived grades or meritocratic
rankings being provided, one alternative might be to embed students’ own goals
for the assessment within marking criteria. Providing rich qualitative comments
to contextualise the feedback on execution of the task would be key to such an
approach but equally, a focus on the process of assessment rather than only the
end product is undoubtedly important.
Whatever the approach taken, it is clear that assessment needs to be embedded
within and informed by student perspectives. The next section considers the
necessity of student involvement in designing assessment to ensure inclusivity.
In adopting student-centred approaches, the intent is to address power relations
in the teaching-learning environment and ensure that assessment is embedded
within student perspectives and worldviews.
to teaching and learning but this is particularly the case in (re)designing assess-
ment. Adopting a more relational approach foregrounds student perspectives and
recognises that learners are the “best experts in their own learning” (Stommel
2020, 29). As a genuine partnership model, SaP enables educators and institutions
to move beyond opinion-based surveys that may have traditionally included the
“student voice” but retained limited scope for genuine student involvement in
curriculum or pedagogy change. Instead, SaP re-positions students as agentic,
where they can exert their influence (see Cook-Sather, Bovill, and Felten 2014;
Healey, Flint, and Harrington 2014; Matthews 2017). Such repositioning is key
for equity-related issues and can usefully inform an inclusive pedagogy across the
higher education sector (O’Shea, Delahunty, and Gigliotti 2021).
In considering a “marriage” of assessment and inclusion, it makes little sense
not to involve students, who have the most to gain (or lose). Partnerships between
faculty, students, and other stakeholders hold the promise of richer and more
meaningful assessment processes and outcomes. Even though participants may
not all contribute in the same ways, all can engage equally through the “collaborative,
reciprocal process” (Cook-Sather, Bovill, and Felten 2014, 6). Actively
seeking student engagement and collaboration in assessment (re)design not only
raises the potential for enduring change that is meaningful to those for whom it
matters most but also fosters much deeper engagement in learning in addition to
benefits to teaching practice (Healey, Flint, and Harrington 2014).
However, productive student-faculty partnerships are not always easily nego-
tiated in practice, as Dargusch, Harris, and Bearman (Chapter 19) describe.
Power relations need to be acknowledged and explicitly addressed when consid-
ering SaP projects (O’Shea, Delahunty, and Gigliotti 2021). Importantly, Bovill,
Matthews, and Hinchcliffe (2021) set out five key principles for co-creating assessment
change using SaP as the approach. These include: developing assessment and
feedback dialogue which is transparent and ongoing; sharing responsibility for
assessment and feedback, including acknowledgement that teacher–student power
dynamics and roles will be disrupted; fostering trust through dialogue; nurturing
inclusive assessment and feedback processes; and connecting partnership in
assessment and feedback with curriculum and pedagogy.
At a practical level, a SaP approach could usefully inform the development
of assessment, including working with students to develop meaningful
goals/outcomes, assessment formats, the assessment outline/brief and even the
assessment exemplars. Equally, creating assessment criteria that respond to the
motivations and goals of the specific student cohort would ensure these activities
are meaningful to those involved.
Concluding thoughts
Returning to the broad definitions of success articulated by our first in family
participants: these prompted us to question the relevance of traditional
assessment and its narrow focus on measurable indicators. Challenging the
References
Bathmaker, A.-M., Ingram, N., and Waller, R. 2013. “Higher Education, Social Class
and the Mobilisation of Capitals: Recognising and Playing the Game.” British Journal
of Sociology of Education 34 (5–6): 723–743. http://doi.org/10.1080/01425692.2013.
816041.
Blum, S. D. 2020. Ungrading: Why Rating Students Undermines Learning (and What to Do
Instead). Morgantown, WV: West Virginia University Press.
Bovill, C., Matthews, K., and Hinchcliffe, T. 2021. “Students as Partners in Assessment
(SPiA).” https://www.advance-he.ac.uk/knowledge-hub/student-partnerships-
assessment-spia.
Chu, G. 2020. “The Point-less Classroom: A Math Teacher’s Ironic Choice in not
Calculating Grades.” In Why Rating Students Undermines Learning (and What to Do
Instead), edited by Susan D. Blum, 161–170. Morgantown, WV: West Virginia
University Press.
Cook-Sather, A., Bovill, C., and Felten, P. 2014. Engaging Students as Partners in Teaching
and Learning: A Guide for Faculty. San Francisco, CA: Jossey-Bass.
Dean, A., and Camp, W. 1998. “Defining and Achieving Student Success: University
Faculty and Student Perspectives.” Paper presented at the American Vocational Association
Convention, New Orleans, LA. https://files.eric.ed.gov/fulltext/ED428180.pdf.
Delahunty, J. 2022. “You Going to Uni?”: Exploring How People from Regional, Rural and Remote
Areas Navigate into and through Higher Education. Perth: National Centre for Student Equity
in Higher Education. https://www.ncsehe.edu.au/publications/regional-rural-remote-
navigate-higher-education/.
Delahunty, J., and O’Shea, S. 2019. “‘I’m Happy, and I’m Passing. That’s all that Matters!’:
Exploring Discourses of University Academic Success through Linguistic Analysis.”
Language and Education 33 (4): 302–321. https://doi.org/10.1080/09500782.2018.1562468.
Gibbs, L. 2020. “Let’s Talk about Grading.” In Ungrading: Why Rating Students Undermines
Learning (and What to Do Instead), edited by Susan Blum, 91–104. Morgantown, WV:
West Virginia University Press.
Healey, M., Flint, A., and Harrington, K. 2014. Engagement through Partnership: Students as
Partners in Learning and Teaching in Higher Education. London, UK: Higher Education
Academy. https://www.advance-he.ac.uk/knowledge-hub/engagement-through-
partnership-students-partners-learning-and-teaching-higher.
Matthews, K. E. 2017. “Five Propositions for Genuine Students as Partners Practice.”
International Journal for Students as Partners 1 (2): 1–9. https://doi.org/10.15173/ijsap.
v1i2.3315.
O’Shea, S. 2014. “Transitions and Turning Points: How First in Family Female Students
Story Their Transition to University and Student Identity Formation.” International
Journal of Qualitative Studies in Education 27 (2): 135–158. https://doi.org/10.1080/
09518398.2013.771226.
O’Shea, S. 2020. “Crossing Boundaries: Rethinking the Ways that First in Family Students
Navigate “Barriers” to Higher Education.” British Journal of Sociology of Education 41 (1):
95–110. https://doi.org/10.1080/01425692.2019.1668746.
O’Shea, S., and Delahunty, J. 2018. “Getting through the Day and Still Having a
Smile on My Face! How Do Students Define Success in the University Learning
Environment?” Higher Education Research and Development 37 (5): 1062–1075. https://
doi.org/10.1080/07294360.2018.1463973.
O’Shea, S., Delahunty, J., and Gigliotti, A. 2021. “Creating Collaborative Spaces: Applying
a ‘Students as Partner’ Approach to University Peer Mentoring Programs.” In Student
Support Services, edited by Henk Huijser, Megan Yih Chyn A. Kek, and Fernando F.
Padró. Singapore: Springer. https://doi.org/10.1007/978-981-13-3364-4_7-1.
Schulz-Bergin, M. 2020. “Grade Anarchy in the Philosophy Classroom.” In Ungrading:
Why Rating Students Undermines Learning (and What to Do Instead), edited by Susan
Blum, 173–187. Morgantown, WV: West Virginia University Press.
Stommel, J. 2020. “How to Ungrade.” In Ungrading: Why Rating Students Undermines
Learning (and What to Do Instead), edited by Susan Blum, 25–41. Morgantown, WV:
West Virginia University Press.
Sullivan, P. 2008. “Opinion: Measuring ‘Success’ at Open Admissions Institutions:
Thinking Carefully about This Complex Question.” College English 70 (6): 618–630.
http://doi.org/10.2307/25472297.
Tinto, V. 2021. “Increasing Student Persistence: Wanting and Doing.” In Student Support
Services, edited by Henk Huijser, Megan Yih Chyn A. Kek, and Fernando F. Padró.
Singapore: Springer. https://doi.org/10.1007/978-981-16-5852-5_33.
Warner, J. 2020. “Wile E. Coyote: The Hero of Ungrading.” In Ungrading: Why Rating
Students Undermines Learning (and What to Do Instead), edited by Susan Blum, 25–41.
Morgantown, WV: West Virginia University Press.
Yazedjian, A., Toews, M., Sevin, T., and Purswell, K. 2008. “‘It’s a Whole New World’:
A Qualitative Exploration of College Students’ Definitions of and Strategies for
College Success.” Journal of College Student Development 49 (2): 141–154. http://doi.
org/10.1353/csd.2008.0009.
16
INCLUSIVE AND EXCLUSIVE
ASSESSMENT
Exploring the experiences of mature-aged
students in regional and remote Australia
Introduction
“Assessment” and “inclusion” are both recognised, albeit separately, in Australian
higher education policy, within the Higher Education Standards Framework
(Threshold Standards; HESF 2021). For instance, assessment is addressed in
Section 1.4, “Learning outcomes and assessment”, which sets up the foundations
for assessment, stating: “Methods of assessment are consistent with the learning
outcomes being assessed, are capable of confirming that all specified learning
outcomes are achieved” (HESF 2021, 5). Inclusion is specifically addressed in
Section 2.2.1 (HESF 2021, 7) as follows: “Institutional policies, practices and
approaches to teaching and learning are designed to accommodate student
diversity, including the under-representation and/or disadvantage experienced
by identified groups, and create equivalent opportunities for academic suc-
cess regardless of students’ backgrounds”. (See Chapter 9 for a policy analysis.)
Despite these clear standards in the HESF and the potential role of inclusive
assessment design to foster inclusion of students from diverse backgrounds and
address their challenges, there is a gap in the literature, particularly in regards to
the experiences of students in equity groups (Tai, Ajjawi, and Umarova 2021).
Assessment has been found to influence student well-being, a centrepiece
of a recent national study that investigated the perspectives of mature-aged
students in, and from, regional and remote areas in Australia about what impacts
their mental well-being (Crawford 2021). A major finding of this research is the
important role of teaching and support staff, and of teaching and learning environments,
in enhancing or hindering students’ mental well-being (Crawford 2021).
The everyday interactions that students have with teaching and support staff;
their peers; the unit/subject content and curriculum (including assessment tasks);
and the physical or online learning environments were each found to impact
DOI: 10.4324/9781003293101-20
students’ mental well-being. The research findings also suggest that entrenched
attitudes and expectations that favour and privilege some students over others
continue to prevail. For instance, challenges with course content or delivery,
and with university rules and regulations, which were found to be unconsciously
designed for so-called “ideal”, “implied”, and “traditional” students, exacer-
bated the already-challenging situations of students who did not fit this profile,
such as mature-aged students in, and from, regional and remote areas, many of
whom juggled parenting and work with their university studies (Crawford 2021;
Crawford and Emery 2021).
Assessment tasks were one of the impacts on students’ mental well-being
(Crawford 2021). In the study’s survey of approximately 1,800 mature-aged students
in, and from, regional and remote areas in Australia, 39.3% of respondents
reported that assessment tasks impacted extremely negatively or negatively
on their mental well-being; 31.2% reported an extremely positive or positive
impact, while 29.5% were neutral (Crawford 2021, 37). To provide a nuanced
picture behind these numbers, we explore the participants’ experiences of assess-
ment by analysing the qualitative data. We then employ Bronfenbrenner’s (1995)
ecological systems model to interrogate institutions’ systemic and cultural influ-
ences on students’ experiences of assessment. We conclude by proposing some
approaches to moving towards more inclusive assessment.
Research methods
The larger project (Crawford 2021) from which this chapter draws followed a con-
current transformative mixed-methods design (Creswell 2014) and received ethics
approval from the Tasmania Social Sciences Human Research Ethics Committee.
The target population for this research was mature-aged undergraduate university
students in, and from, regional and remote areas in Australia.1 All data collection
was completed in February 2020, just prior to COVID-19 arriving in Australia.
For this chapter, we returned to the 51 interview transcripts and the open-
ended survey questions, and considered the following question: “How do
mature-aged students in, and from, regional and remote Australia experience
assessment?” We undertook reflexive thematic analysis of the qualitative data
(Braun and Clarke 2022), interpreting and making meaning of the participants’
experiences of assessments. We then considered impacts on students’ varied expe-
riences of assessment by employing Bronfenbrenner’s (1995) ecological systems
model to identify the layers of the ecological system and the array of influences.
Bronfenbrenner’s ecological systems theory (illustrated in Figure 16.1) pro-
vides a way to view a student’s everyday lifeworlds of university, home, work,
and local community (that is, their micro-level systems), and the interactions
between them (the mesosystem). It also enables consideration of the systemic and
structural, and the social, cultural, political and historical factors that impact on
an individual (that is, the exo, macro, and chrono-level systems), as well as the
interactions and interplay between the various layers (Bronfenbrenner 1995).2
180 N. Crawford, S. Emery, and A. Baird
One thing that I find very difficult, and I know I’m not alone in this [is] the
wording of a lot of the assessment tasks has really managed to get a lot of
us confused. In fact, even just in the very last assessment task that I did, the
wording was sort of a bit vague, and, so, certain students took it to mean
one thing and other students took it to mean another. And, I found that all
the way along – the wording for the assessment tasks can actually some-
times be very unclear. And, of course, you’re not in a classroom situation
where you can stick your hand up and say, “Look, this isn’t making a lot of
sense”. So, then you’ve got to go onto the discussion boards and sort of say,
“Look, I really am not getting this”.
(Lara; female; 41–50; Inner Regional; online; Dementia care)
Lara noted that a disadvantage of being online was that she could not simply seek
clarification during or at the end of a lecture or tutorial; she had to wait for an answer
One of the biggest things that holds you up on assignments is that you’ve
got a question and you post the question to the forum, and you have a look,
and it hasn’t been answered, or you don’t really understand it still, and
sometimes it can take a while to get a response from one of the teachers.
(Alice; female; 26–30; Outer Regional; online; Nursing)
I took a break this term because I had a subject in the previous term …
how to put it? The way the materials were written was quite a mess. You
were expected to have certain things in assignments that hadn’t even been
taught yet, because they came in the later lessons. So, obviously, there’d been
changes made, but things hadn’t been matched up properly. And then the
assignment requirements are one thing, and then they’d be [another] thing in
the marking rubric, which weren’t in the assignment requirement, and then
you got marked down because you didn’t get it from the rubric or something.
So, instead of having the full assignment requirements in the brief, that was
spread around a bit. Then you had two or three tutors giving responses, and
they weren’t agreeing on things, and so it got very, very confusing.
(Beverly; female; 61–70; Outer Regional; online; Design)
We found that experiences of not understanding assessment tasks and not receiv-
ing timely clarification were more commonly expressed by students who studied
online. The inference here is that it is easier for on-campus students, by com-
parison, to seek clarification for an unclear assessment task as they have more
incidental opportunities to ask questions of their lecturers, tutors, and peers face-
to-face during a class, at break time, afterwards in the corridor, or during their
teachers’ student consultation hours.
But during that time it was, just a few issues with the pregnancy before,
and then obviously with the recovery after, and I still had to look after our
daughter who was not quite two. And, I was, you know, in the hospital,
running to and from, trying to maintain some sort of order in the house
while visiting my wife and my new daughter in the hospital. Yeah, there
was a uni assignment due in and around that time, obviously. Yeah, and
there were times when I didn’t get an opportunity to actually sit down and
do any study until, you know, 11, 12 at night. And I would work until the
early hours of the morning as much as I could until I needed sleep. But
yeah, and I think I failed that subject because, yeah, I just couldn’t. I just
couldn’t. [Laughs] I thought I’d be right because we had some help from
family. But, yeah, it was just, the burden was just too much and it was just
too late to pull out. And, I didn’t fail by much, but I did fail, and it was just
[a] really, really tough time.
( John; male; 31–40; Inner Regional; online; Education)
Assessments always tend to be due around the same time across units. I
think with core units, at least, for each year of a degree, they could be
coordinated together better because all students have to do them. With
general electives, I understand this is probably difficult. Mature age stu-
dents are likely working and/or parenting, as I am, and structure uni time
around kids and work. The workload is never even throughout a semester.
When there are multiple assessments due around the same time, the weekly
workload increases significantly, and I find it hard to manage this around
work and kids, even though I set aside time each week for uni.
(Student Survey)
I also had 3 large assessments all due on the same day, which not only
affected my mental state but made me feel very alone.
(Student Survey)
Angela shared her humiliation around needing to disclose her divorce to seek an
extension:
Angela’s experience is an example of a traumatic life event that does not fit the
typical list of reasons why a student might be granted an extension.
Students’ experiences of receiving extensions for natural disasters were mixed.
For instance, during the devastating 2019/2020 bushfires in Australia, some stu-
dents reported having supportive teaching staff, and they received extensions
without question, while others did not. Some students reported inconsistent
experiences within their own university: one lecturer, for example, granted an
extension in one unit, while the same request was denied in another (Crawford
2021).
I struggled with it [assessment task], and both the lecturer and the tutor
were brilliant. And, I spent, I think, an hour on the phone with both of
them, at different times, to help with an assessment task. So, they were
really good and happy to have that kind of conversation over the phone.
Whereas some others just seemed to prefer either email contact or a drop-in
session. It’s like, “these are my hours”. It’s like, well, “that’s great, but I’m
not even in the same state as you”.
(Sabrina; female; 41–50; Outer Regional; online; Health and community care)
Numerous interviewees identified specific staff who spent time assisting them
with assessment tasks; Simone shared one such example of the invaluable role
played by a tutor:
[I] would say she [the tutor] has been the most impactful on just building my
confidence in myself and, like I said, giving me resources and showing me
where to go for certain things, and when I came home and had to do assess-
ments as part of that unit, I had this incredible amount of information that I
could draw upon, and I did not feel like I was kind of stabbing in the dark.
Yeah, I felt, actually, really confident with my knowledge on the subject
(Simone; female; 31–40; Outer Regional; online; Education)
Simone also acknowledged that she received support for her assessments from a
Facebook group of peers. Olivia commented positively on the role of teaching
staff in contextualising assessments and understanding students’ circumstances:
The assessment, so, it means that information that we’re given is contextual-
ised for our area. And, it also means that the person that’s teaching us, teach-
ing me, marks my assessment … It means that they understand, they have
a deeper understanding of what you’re trying to get at. It’s really special.
(Olivia; female; 31–40; Remote area; online; Education)
Notes
1 “Regional and remote” students is one of the six government-identified equity
groups in Australia. Refer to Crawford (2021, 18–19) for definitions of “mature-
aged” and “regional and remote” students.
2 The ecological systems are described in Table 9 in Crawford (2021, 70–71).
References
Biggs, J. 2014. “Constructive Alignment in University Teaching.” HERDSA Review of
Higher Education 1: 5–22. https://www.herdsa.org.au/herdsa-review-higher-education-
vol-1/5-22.
Braun, V., and Clarke, V. 2022. “Conceptual and Design Thinking for Thematic Analysis.”
Qualitative Psychology 9 (1): 9–26. https://doi.org/10.1037/qup0000196.
Bronfenbrenner, U. 1995. “The Bioecological Model from a Life Course Perspective:
Reflections of a Participant Observer.” In Examining Lives in Context: Perspectives on
the Ecology of Human Development, edited by Phyllis Moen, Glenn H. Elder, and Kurt
Lüscher, 599–618. Washington, DC: American Psychological Association.
Burgstahler, S. 2015. “Universal Design in Higher Education.” In Universal Design in
Higher Education: From Principles to Practice, edited by Sheryl E. Burgstahler, 3–28.
Cambridge, MA: Harvard Education Press.
Burke, P. J., Bennett, A., Burgess, C., Gray, K., and Southgate, E. 2016. Capability,
Belonging and Equity in Higher Education: Developing Inclusive Approaches. Callaghan,
NSW, Australia: The University of Newcastle. https://www.newcastle.edu.au/__
data/assets/pdf_file/0011/243992/CAPABILITY-ONLINE.pdf.
Crawford, N. 2021. “On the Radar”: Supporting the Mental Wellbeing of Mature-Aged Students
in Regional and Remote Australia. Accessed 18 January 2022. https://www.ncsehe.edu.
au/wp-content/uploads/2021/04/Crawford-Equity-Fellowship-Report_FINAL.pdf.
Crawford, N., and Emery, S. 2021. “‘Shining a Light’ on Mature-Aged Students in,
and from, Regional and Remote Australia.” Student Success 12 (2): 18–27. https://doi.
org/10.5204/ssj.1919.
Crawford, N., Kift, S., and Jarvis, L. 2019. “Supporting Student Mental Wellbeing in
Enabling Education: Practices, Pedagogies and a Philosophy of Care”. In Transitioning
Students in Higher Education: Philosophy, Pedagogy and Practice, edited by Angela Jones,
Anita Olds, and Joanne G. Lisciandro, 161–170. London: Routledge.
Creswell, J. W. 2014. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches,
rev. ed. Thousand Oaks, CA: SAGE.
Dodo-Balu, A. 2017. “Students Flourish and Tutors Wither.” Australian Universities Review
59 (1): 4–13. https://files.eric.ed.gov/fulltext/EJ1130301.pdf.
Gidley, J., Hampson, G., Wheeler, L., and Bereded-Samuel, E. 2010. “From Access to
Success: An Integrated Approach to Quality Higher Education Informed by Social
Inclusion Theory and Practice.” Higher Education Policy 23 (1): 123–147. http://doi.
org/10.1057/hep.2009.24.
Higher Education Standards Framework (Threshold Standards). 2021. Australian
Government Federal Register of Legislation. Accessed 18 January 2022. https://
www.legislation.gov.au/Details/F2021L00488/Download.
Houghton, A.-M. 2019. “Academic and Departmental Support.” In Student Mental Health
and Wellbeing in Higher Education: A Practical Guide, edited by Nicola Barden, and Ruth
Caleb, 125–144. London: SAGE.
Ketterlin Geller, L., Johnstone, C., and Thurlow, M. 2015. “Universal Design of
Assessment.” In Universal Design in Higher Education: From Principles to Practice, edited
by Sheryl E. Burgstahler, 163–175. Cambridge, MA: Harvard Education Press.
May, R., Peetz, D., and Strachan, G. 2013. “The Casual Academic Workforce and Labour
Market Segmentation in Australia.” Labour and Industry: A Journal of the Social and
Economic Relations of Work 23 (3): 258–275. https://doi.org/10.1080/10301763.2013.
839085.
Tai, J., Ajjawi, R., and Umarova, A. 2021. “How Do Students Experience Inclusive
Assessment? A Critical Review of Contemporary Literature.” International Journal of
Inclusive Education. https://doi.org/10.1080/13603116.2021.2011441.
Ulriksen, L. 2009. “The Implied Student.” Studies in Higher Education 34 (5): 517–532.
https://doi.org/10.1080/03075070802597135.
17
NORMALISING ALTERNATIVE
ASSESSMENT APPROACHES
FOR INCLUSION
Roseanna Bourke
Introduction
If assessment tasks are effective, they will serve as powerful mediating learning
tools that enable students to demonstrate their knowledge, understanding, and
application of their knowledge to real-world contexts. Assessments that capture
the interest of students also support them to imagine their own possibilities and
future applications after the course has finished. This chapter explores how alter-
native and atypical assessment practices, that are innovative or novel for students,
can enhance their sense of engagement in a course, and can provide a more equi-
table means for students to demonstrate their learning.
Assessment tools in higher education that provide more equitable options for
students will at first involve novel approaches that, over time, become normalised.
Importantly, atypical approaches to assessment (such as self-assessment, ipsative
assessment, and technology-based assessments) need to move beyond being con-
sidered “alternative” or “innovative and novel” forms of assessment in higher
education. Clearly, there are challenges when introducing alternative forms of
assessment in higher education (HE), especially in highly competitive university
courses. Often students are keen to complete assessments that are tradition-
ally known to them (e.g., essays, written assignments, examinations) because
they have learned to optimise their grade through these traditional means.
Another challenge when introducing alternative forms of assessment is that stu-
dents “generally place a higher value on traditional assessment tools especially
in terms of their validity and reliability” than more novel types of assessment
(Phongsirikul 2018, 61). However, these alternative forms of assessment are fast
becoming key approaches required by students in their preparation for a post-
COVID zeitgeist premised on social justice, inclusiveness, and cultural and
Indigenous understandings.
DOI: 10.4324/9781003293101-21
Background
Assessment methods traditionally used in university settings (essays, writ-
ten assignments, tests, and examinations) tend to determine whether, and by
how much, a student has learned against the learning outcomes of a course.
Ironically, these assessment approaches direct the students’ attention to “prov-
ing” what they know and can apply, rather than “improving” the way they
learn or even to understand themselves better in relation to their learning and
the world around them. Higher education policies, student wellbeing, and the
pedagogical and assessment practices within any given course all shape how
inclusively the course is orientated and experienced by the
student. For example, researchers have shown the critical role that Indigenous
pedagogies and practices play in higher education for all students to feel
included and to succeed (e.g., Mayeda et al. 2014; Robertson, Smith, and
Larkin 2021).
This means that an assessment tool (e.g., essay, critique, exam, self-assessment)
cannot simply be pulled out of a suite of possible assessment methods, without
a closer understanding of why, when, and how it is used, or a clear rationale for
the purpose of the assessment. An inclusive assessment approach is one in which
all students can develop the skills to sustain their learning and strengthen
their motivation towards their own goals. This increases the likelihood that these
assessment tasks will also be sustainable; sustainable assessment is where students
incorporate the skills, knowledge, and attitude to continue using life-long assess-
ment practices (Boud and Soler 2016).
In higher education, the assessment of students is often controlled through
policies and regulations that can either prevent responsive changes to assessment
tasks or promulgate a vision for change. When a shift in rhetoric is pronounced,
it becomes the starting point of a change-process to enable more inclusive, equi-
table assessment approaches to be used. In the context of the introduction of
the YouTube clip assignment, the Assessment Handbook at Massey University
(2019, 3) now includes the explicit intention:
This is the key policy document that enabled the introduction of an assignment
where students developed and trialled their own YouTube clips. A growing body
of evidence shows that supporting students as partners in decisions that affect them
will increase student motivation, learning and likely success (Bovill, Jarvis, and
Smith 2020; Cook-Sather, Bovill, and Felten 2014). Therefore, alternative and
skills in this new context. She noted wryly, unlike her typical animal patients
“he didn’t try to bite”!
YouTube clip assignment that they enjoyed, especially with regards to applying
key concepts:
The practical aspect of this assignment was really helpful and enabled me to
understand what I was learning. I was able to see the effects of my YouTube
clip on teaching and learning, reflect on this, and apply the knowledge
gained from the course readings and teaching. I think it helped me to
cement key concepts.
(Education undergraduate student, 2021)
Sixty-five percent of the students believed the YouTube form of assessment was
more equitable than other forms of assessment (37.33% Yes, and 28% in some
ways), mainly because it enabled students to actively engage in their learning
and assessment in an authentic way, showing more of themselves through the
assessment. For example, one student commented: “It provided [an] opportu-
nity to show personality and humour which isn’t necessarily accommodated in
APA [American Psychological Association] writing”. Students identified further
equity benefits, such as supporting those with learning difficulties (e.g., dyslexia),
and that the assessment task opened “thinking to being more creative and thinking
on the spot, rather than it becoming a ‘tick the box’ exercise and in ensuring
all references have been covered”. Importantly, students completed component
parts by actively including others (either through technical support, or through
watching and learning from the clip). One student reported: “The YouTube clip
assignment was far more interactive, so I was more willing and able to share my
learning with others in both formal and informal settings”.
experience, the question remains: do the gains outweigh the possible anxiety
created through this novel approach?
Students were also asked whether they had learned something they did not
expect through the YouTube assignment, and the responses showed that there
were gains beyond the knowledge they learned. Students reported improving
their personal learning and teaching skills, learning patience and perseverance,
gaining a thorough understanding of course content, gaining the ability
to express learning in a new way, honing their problem-solving skills, elevating
their knowledge from theoretical to practical, understanding YouTube as a form
of learning, and improving their time-management skills. As one noted:
However, other students might be more challenged by the task itself, rather than
the learning:
Some students reported they did not realise the extent of their learning, until
after the course was completed:
independent, they needed to be given the space to develop their own strategies
for completing assessments. While some students find this independence gives
greater control and ownership of their work, less confident students experience
it as stressful”.
The results from the student survey indicated that students preferred traditional
assessments, which highlights the complexity of giving students the “freedom
to choose” assessments. Students will base their choices on their own historical,
cultural, and social experiences of assessment and learning, and can be reluctant
to move into new territory. Ironically, as Rogoff (1990, 202) identified, learning
involves “functioning at the edge of one’s competence on the border of incom-
petence”; learning in this sense encourages students to explore the unknown and
take risks in the belief they can, and will, achieve.
The survey also showed that the benefits gained from learning through alternative
assessment methods were not fully realised and used by students until
after the course was completed. Sustainable assessment practices such as YouTube
clips and self-assessment, while uncomfortable at the time, can have more impact
on students’ learning than traditional assessments. It also shows that assessment
requires an element of trust between teachers and students; therefore, for
staff–student partnership to work, power must be shared. Staff may have
concerns about handing over power to students when they wish to cover sub-
stantial content and they are unconvinced that students know enough about the
subject to be co-creating classes (Bovill, Jarvis, and Smith 2020, 37).
An interesting unintended consequence of the YouTube assessment activ-
ity was that it required students to think about their learning, rather than prove
their learning. This meant there was an absence of plagiarism. Plagiarism can be
examined through a policy, pedagogical, or moral lens (Eaton 2021), and insti-
tutions determine specific ways to “define, detect, prevent, and punish” students
who are found to have plagiarised (Marsh 2007). In my experience, students
tend to plagiarise when they remove themselves from the assessment, when they
look to sources to cite, or have others complete aspects of their work. In contrast,
authentic assessment approaches that engage learners in-depth, and over time,
such as the YouTube clip development, self-assessment, and ePortfolios, can tell
educators “much about how students view themselves as learners or emerging
professionals; how they are perceiving, connecting, and interpreting their in-
and out-of-class learning experiences; and why they may be struggling with
particular content or concepts” (Kahn 2019, 138). As one student who completed
a YouTube clip assignment noted:
Note
1 https://www.nowtolove.co.nz/news/latest-news/jason-gunn-treated-by-vet-on-
plane-33701
References
Boud, D., and Soler, R. 2016. “Sustainable Assessment Revisited.” Assessment & Evaluation
in Higher Education 41 (3): 400–413. http://doi.org/10.1080/02602938.2015.1018133.
Bourke, R. 2018. “Self-Assessment to Incite Learning in Higher Education: Developing
Ontological Awareness.” Assessment and Evaluation in Higher Education 43 (5): 827–839.
http://doi.org/10.1080/02602938.2017.1411881.
Bourke, R., Rainier, C., and de Vries, V. 2018. “Assessment and Learning Together in
Higher Education.” International Journal of Student Voice 25: 1–11. https://repository.
brynmawr.edu/cgi/viewcontent.cgi?article=1194&context=tlthe.
Bovill, C., Jarvis, J., and Smith, K. 2020. Co-Creating Learning and Teaching: Towards
Relational Pedagogy in Higher Education. Critical Publishing. https://www.
criticalpublishing.com/asset/394452/1/Co_creating_Learning_and_Teaching_sample.pdf.
Cook-Sather, A., Bovill, C., and Felten, P. 2014. Engaging Students as Partners in Learning
and Teaching: A Guide for Faculty. San Francisco: Jossey Bass.
Douglass, J. A., Thomson, G., and Zhao, Ch.-M. 2012. “The Learning Outcomes Race:
The Value of Self-reported Gains in Large Research Universities.” Higher Education
64: 317–335. http://doi.org/10.1007/s10734-011-9496-x.
Dunning, D., Heath, Ch., and Suls, J. M. 2004. “Flawed Self-Assessment. Implications
for Health, Education, and the Workplace.” Psychological Science in the Public Interest 5
(3): 69–106. https://doi.org/10.1111/j.1529-1006.2004.00018.x.
Eaton, S. E. 2021. Plagiarism in Higher Education: Tackling Tough Topics in Academic Integrity.
Santa Barbara, CA: ABC-CLIO.
Jones, E., Priestley, M., Brewster, L., Wilbraham, S. J., Hughes, G., and Spanner, L.
2021. “Student Wellbeing and Assessment in Higher Education: The Balancing Act.”
Assessment & Evaluation in Higher Education 46 (3): 438–450. http://doi.org/10.1080/
02602938.2020.1782344.
Kahn, S. 2019. “Transforming Assessment, Assessing Transformation: ePortfolio Assessment
Trends.” In Trends in Assessment: Ideas, Opportunities, and Issues for Higher Education, edited
by Stephen. P. Hundley and Susan Kahn, 137–156. Bloomfield: Stylus Publishing.
Marsh, B. 2007. Plagiarism: Alchemy and Remedy in Higher Education. New York: SUNY Press.
Massey University. 2019. Massey University Assessment Handbook. Principles, Guidelines, and
Resources.
Mayeda, D. T., Keil, M., Dutton, H. D., and Ofamo’oni, I. F-H. 2014. “You’ve Gotta
Set a Precedent. Māori and Pacific Voices on Student Success in Higher Education.”
AlterNative 10 (2): 165–179. https://doi.org/10.1177/117718011401000206.
Panadero, E., and Alonso-Tapia, J. 2013. “Self-Assessment: Theoretical and Practical
Connotations. When It Happens, How Is It Acquired and What to Do to Develop It
in Our Students.” Electronic Journal of Research in Educational Psychology 11 (2): 551–576.
http://doi.org/10.14204/ejrep.30.12200.
Phongsirikul, M. 2018. “Traditional and Alternative Assessments in ELT: Students’ and
Teachers’ Perceptions.” rEFLections 25 (1): 61–84.
Robertson, K., Smith, J. A., and Larkin, S. 2021. “Improving Higher Education Success
for Australian Indigenous Peoples. Examples of Promising Practice.” In Marginalised
Communities in Higher Education, edited by Neil Harrison and Graeme Atherton, 179–201.
London: Routledge.
Rogoff, B. 1990. Apprenticeship in Thinking. Cognitive Development in Social Context. Oxford:
Oxford University Press.
Slavin, S. J., Schindler, D. L., and Chibnall, J. T. 2014. “Medical Student Mental Health
3.0: Improving Student Wellness through Curricular Changes.” Academic Medicine 89 (4):
573–577. http://doi.org/10.1097/ACM.0000000000000166.
Slepcevic-Zach, P., and Stock, M. 2018. “ePortfolio as a Tool for Reflection and
Self-Reflection.” Reflective Practice 19 (3): 291–307. http://doi.org/10.1080/14623943.
2018.1437399.
Tan, K. 2007. “Conceptions of Self-Assessment: What Is Needed for Long-Term Learning?”
In Rethinking Assessment in Higher Education: Learning for the Long Term, edited by David
Boud and Nancy Falchikov, 114–127. London: Routledge.
Tan, K. 2009. “Meanings and Practices of Power in Academics’ Conceptions of Student
Self-Assessment.” Teaching in Higher Education 14 (4): 361–373. http://doi.org/10.1080/
13562510903050111.
18
STUDENT CHOICE OF
ASSESSMENT METHODS
How can this approach become
more mainstream and equitable?
Geraldine O’Neill
Introduction
In higher education, some assessment approaches dominate the landscape. The end-
of-semester unseen examination, for example, is widely used internationally (Brown
2015; National Forum 2016). However, this and other common approaches have
been criticised for not allowing all students to play to their strengths. Diversifying
assessment methods in higher education is a logical step in supporting a more
inclusive approach to assessment for diverse cohorts (O’Neill and Padden 2021).
Diversifying assessment is also in keeping with the growing emphasis on universal
design for learning (CAST 2018). Hundley and Kahn (2019, 207) identified that a
meta-trend in higher education assessment internationally is “assessment strategies
and approaches that are becoming more inclusive, equity-oriented and reflective of
the diverse students our institutions serve”.
While diversifying assessment to move away from dominant forms of assess-
ment seems a positive step, if not approached appropriately it can put some stu-
dents under pressure and may result in unintended outcomes such as poorer
performance (Armstrong 2014; Bevitt 2015; Kirkland and Sutch 2009; Medland
2016). For example, in the case of international students “coping with novel
assessment represents just one part of a much larger and slower process of adapta-
tion” (Bevitt 2015, 116). One approach worth considering therefore is giving all
students in a module1 (course) a choice of assessment methods, thus increasing the
chance of playing to the strengths of all students and minimising any potential
disadvantage. Giving students a choice between two or more assessment meth-
ods within a module, appears to go some way towards supporting the concept of
equity, often described as “fairness” (Easterbrook, Parker, and Waterfield 2005;
Garside et al. 2009; O’Neill 2011, 2017; Waterfield and West 2006). In the case
where there is a genuine opportunity for students to achieve better outcomes,
it supports the idea of assessment for social justice (McArthur 2016).
DOI: 10.4324/9781003293101-22
However, using a choice of assessment approach is not without its challenges.
Over a 10-year period, as an academic staff member and an educational devel-
oper, I have engaged in a programme of research to design, implement, and eval-
uate student choice in assessment. My earlier work at institutional level resulted
in the development of: a) disciplinary case studies; b) a design template to ensure
equity between the choices given; c) a seven-step implementation process; and
d) an evaluation tool that measures equity between the choices (O’Neill 2011,
During my work at national level, I advocated the concept of students-as-
partners, including the use of choice of assessment (National Forum 2016). In our
recent research, although choice of assessment was shown to empower students
in their learning, there were concerns around equity between the choices, and it
was still relatively under-used (O’Neill and Padden 2021). This programme of
research and the wider literature on choice of assessment therefore highlight key
questions that are addressed in this chapter:
• How can staff ensure that there is equity between the choices given to the
students?
• How can it become more mainstream in institutional policies and practice?
• How do staff implement this approach in their practices?
grade; its traits (visual, type of writing); the learning outcome to be assessed; the
criteria used; equity in approaches to marking/teaching/workload/feedback; and
links to some examples of the assessment choice (O’Neill 2011, 2017).
The students’ views on their experience of choice of assessment, in this institu-
tional project in 2010, were gathered by a questionnaire (respondents, n = 144/370
students). An interesting finding was that students in later years and those pursuing
postgraduate studies were more open to the use of choice of assessment (O’Neill
2011), a result consistent with Francis (2008). This may be explained by their
greater experience of different assessment methods, which helps them make an
informed choice. Students also noted that a choice between two assessments
offered a sufficient level of diversity (O’Neill 2011). This
speaks to an issue that was discussed in our more recent paper on the use of this
approach – there can be such a thing as too much diversity and too much choice
(O’Neill and Padden 2021).
Some of these findings are linked with “procedural equity”, but staff on the
project wanted to be sure that the different choices they were presenting to
students would also allow all students an equal chance of succeeding in the
outcomes of the assessment, that is, the grades. McArthur (2016) also emphasised
that procedural fairness is not enough and that fair procedures do not always
lead to “just outcomes”, an aspect sometimes described as “assessment for social justice”.
Irish students have indicated in a recent national project that “achieving high
academic attainment” is a key measure of student success (National Forum 2019,
5). Grades matter to them. Stowell (2004) describes equity in performance not
so much as “equity” but as “justice”, which is more concerned with fairness in
outcomes. In examining the pattern of grades between the choices of assessment
in this institutional project, it was found that:
Wanner, Palmer, and Palmer (2021) reported in their study on flexible assessment
that when students were given some choices, they said it helped
them get better marks. Wanner, Palmer, and Palmer’s (2021) study, however,
also included choice of submission dates and weighting of assessment. Although
improving student grades should be a positive outcome, a resulting tension that
can arise with this approach is that staff become concerned with how to deal
with what they describe as “grade inflation”. In our recent institutional survey
(n = 160 module co-ordinators) on diversifying assessment and use of choice of
assessment, grade inflation was more of a concern for staff when considering this
approach than fear of student failure (O’Neill and Padden 2021). Grade inflation
is often described as a perceived or actual rise in students’ average grades.
This highlights challenges in institutional grading systems and the use of
norm-referenced assessment in many institutions internationally. Tannock
(2017) emphasised that grading to a normal curve can create “social division
among students depending on where they stand in the grading hierarchy, and
particularly destructive impacts on the learning, esteem and identity of students
at the lower end of this hierarchy” (1350). If it is our intention for more students
to succeed, it would make sense that more would do well, in particular if we
are not disadvantaging other students in the process. This fear of grade inflation
needs to be interrogated at institutional and national levels. We also need to
explore the complexity of other influences on institutional grading approaches,
for example, the impact of high student fees; comparability of international
grading scales (Witte 2011); staff confidence in grading; staff accountability
and the role of the “public” university (Tannock 2017). Staff fear of grade
inflation appears to be running counter to student success. Efforts should be
taken to ensure that it does not become a barrier to the introduction of choice
of assessment.
Institutional policies on assessment, including aspects such as grade distribu-
tion, can be associated with the wider concept of social justice in assessment.
McArthur (2016) explores the concept of social justice as it relates to fairness in
assessment, maintaining that “a preoccupation with fairness as sameness is
one of the major factors constraining assessment playing a greater social justice
role” (973). She describes the concept of “assessment for social justice” as
referring “both to the justice of assessment within higher education, and to the
role of assessment in nurturing the forms of learning that will promote greater
social justice within society as a whole” (McArthur 2016, 968). Giving students
opportunities to play to their strengths and to improve their grades through
choice of assessment method goes some way towards this understanding of social
justice. (See Chapter 2 for reflections on
assessment for social justice.)
Another positive outcome was that the use of choice of assessment
had empowered students in their learning (O’Neill and Padden 2021). This gave
them some level of responsibility, trusting them to make a choice. Responsibility
and trust are two aspects of “assessment for social justice”, referred to in
McArthur’s recent description of this term (McArthur 2021). Taking respon-
sibility was noted by Wanner, Palmer, and Palmer (2021) as an important skill
for graduates in the workplace. Where choice of assessment methods falls short
in relation to the wider concept of social justice is that the methods on their
own do not promote social justice within society as a whole. McArthur (2021) also
highlights that assessment for social justice should be aspirational and transform-
ative. She explores how assessment systems can be inherently unfair, such that not
all students have an opportunity to succeed or indeed contribute to society more
broadly.
Student choice of assessment methods 203
Design stage
Consider which module (step 1): This step recommends that the module coordinator
considers which modules might be best suited to empowering students with a
choice. For example, it may suit modules that have students with
a variety of learning needs; with different prior learning; or in modules with
high numbers of special accommodations (O’Neill 2011). There may be modules
where allowing a choice is not suitable, for example, where the ability
to communicate through the written word (such as through an essay) is a
competency highlighted in the module’s learning outcomes. A programmatic
approach (Gibbs and Dunbar-Goddet 2009; National Forum 2017) to its use
is one way forward, where some modules in the programme are identified as
suitable and others where it may be less appropriate.
Consider diverse choices (step 2): This step advises the module coordinator to
consider assessment methods that are dissimilar to each other, as this would max-
imise the choice for students with different strengths, approaches to learning,
learning needs and from different contexts. Two options can often be sufficient
choice (O’Neill 2011).
Develop equity (step 3): In addressing the issue of concern around fairness
(equity) between the choices, the module coordinator needs to design for this in
the assessment, as far as is reasonably practical. One tool that can support this is
the “Student Information and Equity Template” (O’Neill 2017; UCD Teaching
and Learning 2022a). This was designed to consider the equity between the
choices in relation to, for example, student workload, teaching, and learning
approaches, standards, feedback, etc. In addition, this can then be made available
to the students at the beginning of the module to assist them in making
an informed choice.
Make standards explicit (step 4): Where students are unfamiliar with one or
more of the choices, they need to see examples of these assessment methods.
Therefore, module coordinators should make some examples available to the
students at the beginning of the module. In addition, it is good practice that
the assessment criteria/rubrics
for both assessment methods are also available for the students (Bennett 2016;
O’Neill 2018).
Implementation stage
Implement (step 5): At the start of the module, the rationale for this choice of assess-
ment methods should be made clear to the students, that is, to empower them
in their learning and to allow them to play to their strengths. It needs to be clear to
students how and when they need to communicate to the staff the decision on
their assessment choice (O’Neill 2011). To streamline this, it may be useful to
decide that one assessment method is the “default” if students have not
informed staff of their choice. This could be the more familiar of the two assess-
ments. Retaining one assessment that has some familiarity could reduce some of
the challenges students experience with new and innovative assessment approaches
(Armstrong 2014; Bevitt 2015; Kirkland and Sutch 2009; Medland 2016).
Support the process (step 6): At the early stage of the module, it may be useful
to allow some in-class discussion on the choices, including opportunities for
the students to discuss these with staff and/or with other students. Throughout
the module’s implementation, the teaching activities, support for feedback, and
advice on the assessment must be relatively equitable (O’Neill 2011).
Evaluation stage
Evaluate and adjust (step 7): Finally, to ensure that there is some feedback on
the approach, module coordinators should gather students’ and, where relevant,
staff views on its implementation. In my original study, an evaluation tool was
developed for the approach, the “Students’ views on Choice of Assessment Methods”
(O’Neill 2011, 76–77). In one section of this tool, five key themes were devel-
oped into a 20-item scale, that is, equity, anxiety, support, empowerment, and
diversity. Four statements were created in each of these five themes (O’Neill
2011). This tool is available for use at UCD Teaching and Learning (2022c). I
developed a sub-section of this tool (O’Neill 2017) using a factor analysis, now
titled the Equity Between Choice of Assessment Evaluation Tool (available for use at
UCD Teaching and Learning 2022b). This eight-item tool is more focused
on the concept of equity between the choices given. It has good internal reliabil-
ity (Cronbach’s alpha = 0.792) and face validity. The key questions validated for
use in this evaluation tool were:
• I was satisfied with the level of feedback I had compared to the feedback in
the other assessment method.
• Over the course of the semester, the workload for my choice appeared
similar to the other assessment method(s).
• I was satisfied with the examples available of my assessment method com-
pared to the examples of the other assessment method.
(O’Neill 2017, 228; UCD Teaching and Learning 2022b)
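The chapter reports that this eight-item tool has good internal reliability (Cronbach’s alpha = 0.792). As an illustration of what that statistic measures, the sketch below computes Cronbach’s alpha from the standard formula; the function and the small Likert-scale dataset are invented for illustration and are not drawn from the tool’s actual data.

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a multi-item scale.

    `items` is a list of k lists, one per scale item, each holding the
    scores the n respondents gave that item (same respondent order).
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(items)
    n = len(items[0])
    item_vars = sum(variance(scores) for scores in items)
    # Each respondent's total score across all items.
    totals = [sum(items[i][r] for i in range(k)) for r in range(n)]
    return k / (k - 1) * (1 - item_vars / variance(totals))

# Hypothetical data: 5 respondents rating 3 Likert items (1-5).
scores = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 2, 4, 3],
]
alpha = cronbach_alpha(scores)
print(round(alpha, 3))  # prints 0.864
```

Values closer to 1 indicate that the items vary together, i.e. that the scale is internally consistent; a value such as 0.792 is conventionally read as acceptable-to-good reliability.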
Conclusion
Moving away from a reliance on a narrow range of traditional methods of assess-
ment can support the increasing diversity of student cohorts in higher education
internationally. This movement is part of a wider trend towards inclusive assess-
ment and supports the growing interest in universal design for learning (CAST
2018; Hundley and Kahn 2019). One approach to diversifying, which gives stu-
dents some increased level of responsibility, is to allow students a choice of assess-
ment methods within a module. This can also support their unique assessment
preferences and may indeed support the success to which they aspire. However, we
need to ensure that the choices we give are procedurally equitable, and this chapter
explores how this can be achieved in practice. Choice of assessment can support the
outcome of an increase in student grades, a key indicator of student success as noted
by students themselves (National Forum 2019). However, more inter-stakeholder dialogue needs
to take place to explore some solutions to the tension between this aspect of student
success and what can be perceived, by some, as “unwanted” grade inflation. Failure
to resolve this issue can cause more “social division among students depending on
where they stand in the grading hierarchy” (Tannock 2017, 1350).
One challenge to mainstreaming the approach is the reservation that some staff,
and indeed some students, have towards the adoption of student-centred
approaches; this should not be underestimated. Trusting students and giv-
ing them more responsibility is one aspect of the emerging concept of assessment
for social justice (McArthur 2016, 2021). To support more widespread use of
the approach, institutional policies need to resource and support staff in rolling
it out. Examples of how it has been implemented in practice need
to be showcased and disseminated. This chapter, therefore, concludes with a
seven-step process, which describes how I supported the design, implementation,
and evaluation of the approach in my institution (O’Neill 2011, 2017).
The chapter highlights the research and practice of student choice of assessment
methods. I hope it will assist both in ensuring that the choices given to students
are equitable and in encouraging more widespread use of the approach in practice.
Note
1 The term “module” is used in this chapter to refer to a stand-alone unit that is part of
a bigger programme of study. Sometimes modules are described as “courses”. Modules
have a defined set of learning outcomes, a set student credit load and aligned teaching,
learning and assessment approaches. “Module coordinators” is the term used for staff
responsible for a module’s design and delivery.
References
AHEAD. 2021. “Universal Design for Learning.” Accessed 29 September 2021. https://
www.ahead.ie/udl.
Armstrong, L. 2014. “Barriers to Innovation and Change in Higher Education.”
TIAA-CREF Institute. Accessed 29 September 2021. https://www.tiaainstitute.org/
publication/barriers-innovation-and-change.
Bennett, C. 2016. “Assessment Rubrics: Thinking Inside the Boxes.” The International
Journal of Higher Education in the Social Sciences 9 (1): 50–72. http://doi.org/10.3167/
latiss.2016.090104.
Bevitt, S. 2015. “Assessment Innovation and Student Experience: A New Assessment
Challenge and Call for a Multi-Perspective Approach to Assessment Research.”
Assessment and Evaluation in Higher Education 40 (1): 103–119. https://doi.org/10.1080/
02602938.2014.890170.
Brown, S. 2015. “A Review of Contemporary Trends in Higher Education Assessment.”
@tic. revista d’innovació educativa 14: 43–39. http://doi.org/10.7203/attic.14.4166.
CAST. 2018. “Universal Design for Learning Guidelines Version 2.2.” Accessed 29
September 2021. http://udlguidelines.cast.org/.
Cook-Sather, A., Bovill, C., and Felten, P. 2014. Engaging Students as Partners in Learning
and Teaching: A Guide for Faculty. San Francisco, CA: John Wiley and Sons.
Craddock, D., and Mathias, H. 2009. “Assessment Options in Higher Education.”
Assessment and Evaluation in Higher Education 34 (2): 127–140. https://doi.org/10.1080/
02602930801956026.
Easterbrook, D., Parker, M., and Waterfield, J. 2005. “Engineering Subject Centre
Mini Project: Assessment Choice Case Project.” HEA-Engineering Subject Centre.
Accessed 29 September 2021. https://www.advance-he.ac.uk/knowledge-hub/
engineering-subject-centre-mini-project-assessment-choice-case-study.
EUA. 2020. “Student Assessment: Thematic Peer Group Report: Learning and Teaching
Paper # 10.” European University Association. Accessed 29 September 2021. https://
eua.eu/resources/publications/921:student-assessment-thematic-peer-group-report.
html.
Francis, R. A. 2008. “An Investigation into the Receptivity of Undergraduate Students
to Assessment Empowerment.” Assessment and Evaluation in Higher Education 33 (5):
547–557. https://doi.org/10.1080/02602930701698991.
Garside, J., Nhemachena, J. Z. Z., Williams, J., and Topping, A. 2009. “Repositioning
Assessment: Giving Students the ‘Choice’ of Assessment Methods.” Nurse Education in
Practice 9: 141–148. http://doi.org/10.1016/j.nepr.2008.09.003.
Gibbs, G., and Dunbar-Goddet, H. 2009. “Characterising Programme-Level Assessment
Environments that Support Learning.” Assessment and Evaluation in Higher Education 34
(4): 481–489. https://doi.org/10.1080/02602930802071114.
Hundley, S. P., and Kahn, S. 2019. Trends in Assessment: Ideas, Opportunities, and Issues for
Higher Education. Sterling, VA: Stylus.
Jordan, L., Bovill, C., Watters, N., Saleh, A. M., Shabila, N. P., and Othman, S. M.
2014. “Is Student-Centred Learning a Western Concept? Lessons from an Academic
Development Programme to Support Student-Centred Learning in Iraq.” Teaching in
Higher Education 19 (1): 13–25. https://doi.org/10.1080/13562517.2013.827649.
Kapsalis, G., Ferrari, A., Punie, Y., Conrads, J., Collado, A., Hotulainen, R., Rämä,
I., Nyman, L., Oinas, S., and Ilsley, P. 2019. “Evidence of Innovative Assessment:
Literature Review and Case Studies.” European Commission. EU Science Hub. https://
data.europa.eu/doi/10.2760/552774.
Kirkland, K., and Sutch, D. 2009. “Overcoming the Barriers to Educational Innovation:
A Literature Review.” Futurelab. https://www.nfer.ac.uk/publications/FUTL61/
FUTL61.pdf.
Leathwood, C. 2005. “Assessment Policy and Practice in Higher Education: Purpose,
Standards and Equity.” Assessment and Evaluation in Higher Education 30 (3): 307–324.
https://doi.org/10.1080/02602930500063876.
Mavrou, K., and Symeonidou, S. 2014. “Employing the Principles of Universal Design for
Learning to Deconstruct the Greek-Cypriot New National Curriculum.” International
Journal of Inclusive Education 18 (9): 918–933. https://doi.org/10.1080/13603116.2013.
859308.
McArthur, J. 2016. “Assessment for Social Justice: The Role of Assessment in Achieving
Social Justice.” Assessment and Evaluation in Higher Education 41 (7): 967–981. https://
doi.org/10.1080/02602938.2015.1053429.
McArthur, J. 2021. Creating Synergies between Assessment for Social Justice and Assessment
for Inclusion. Keynote presented at Centre for Research in Assessment and Digital
Learning (CRADLE) International Symposium, 21 October 2021. https://www.
youtube.com/watch?v=fFE7qxqCVFY.
Medland, E. 2016. “Assessment in Higher Education: Drivers, Barriers and Directions
for Change in the UK.” Assessment and Evaluation in Higher Education 41 (1): 81–96.
https://doi.org/10.1080/02602938.2014.982072.
National Forum. 2016. “Profile of Assessment Practices in Irish Higher Education.”
National Forum for the Enhancement of Teaching and Learning in Higher Education.
Accessed 29 September 2021. https://www.teachingandlearning.ie/publication/profile-
of-assessment-practices-in-irish-higher-education/.
National Forum. 2017. “Enhancing Programme Approaches to Assessment and Feedback in
Irish Higher Education: Case Studies, Commentaries and Tools.” National Forum for the
Enhancement of Teaching and Learning in Higher Education. Accessed 16 January 2022.
https://hub.teachingandlearning.ie/resource/enhancing-programme-approaches-to-
assessment-and-feedback-in-irish-higher-education-case-studies-commentaries-and-tools/.
National Forum. 2019. “Understanding and Enabling Student Success in Irish Higher
Education.” National Forum for the Enhancement of Teaching and Learning in
Higher Education. Accessed 29 September 2021. https://www.teachingandlearning.ie/
publication/understanding-and-enabling-student-success-in-irish-higher-education/.
National Forum and USI. 2016. “Assessment OF, FOR and AS Learning: Students as
Partners.” National Forum for the Enhancement of Teaching and Learning in Higher
Education. https://hub.teachingandlearning.ie/resource/students-as-partners/.
O’Neill, G. (Ed.). 2011. “A Practitioner’s Guide to Choice of Assessment Methods within
a Module.” UCD Teaching and Learning. Accessed 29 September 2021. https://
www.ucd.ie/teaching/t4media/choice_of_assessment.pdf.
O’Neill, G. 2017. “It’s not Fair! Students and Educators Views on the Equity of the
Procedures and Outcomes of Students’ Choice of Assessment Methods.” Irish
Educational Studies 36 (2): 221–236. https://doi.org/10.1080/03323315.2017.1324805.
O’Neill, G. 2018. “Designing Grading and Feedback Rubrics.” Dublin: UCD Teaching
and Learning. https://www.ucd.ie/teaching/t4media/designing_feedback_rubrics.pdf.
O’Neill, G., and Padden, L. 2021. “Diversifying Assessment Methods: Barriers, Benefits and
Enablers.” Innovations in Education and Teaching International. https://doi.org/10.1080/
14703297.2021.1880462.
Pham, T. H. T. 2010. “Implementing a Student-Centered Learning Approach at Vietnamese
Higher Education Institutions: Barriers under Layers of Casual Layered Analysis
(CLA).” Journal of Futures Studies 15 (1): 21–38. https://jfsdigital.org/wp-content/
uploads/2014/01/151-A02.pdf.
Stowell, M. 2004. “Equity, Justice and Standards: Assessment Decision Making in Higher
Education.” Assessment and Evaluation in Higher Education 29 (4): 495–510. https://doi.
org/10.1080/02602930310001689055.
Tai, J., Ajjawi, R., and Umarova, A. 2021. “How Do Students Experience Inclusive
Assessment? A Critical Review of Contemporary Literature.” International Journal of
Inclusive Education. https://doi.org/10.1080/13603116.2021.2011441.
Tannock, S. 2017. “No Grades in Higher Education Now! Revisiting the Place of Graded
Assessment in the Reimagination of the Public University.” Studies in Higher Education
42 (8): 1345–1357. https://doi.org/10.1080/03075079.2015.1092131.
UCD Teaching and Learning. 2022a. Student Information and Equity Template. Dublin:
UCD Teaching and Learning. https://www.ucd.ie/teaching/t4media/student_
information_and_equity_template.docx.
UCD Teaching and Learning. 2022b. Equity Between Choice of Assessment Evaluation
Tool. Dublin: UCD Teaching and Learning. https://www.ucd.ie/teaching/t4media/
equity_between_choice_of_assessment_evaluation_tool.docx.
UCD Teaching and Learning. 2022c. Students Views on Choice of Assessment Methods
(Evaluation Tool). Dublin: UCD Teaching and Learning. https://www.ucd.ie/teaching/
t4media/students_views_on_choice_of_assessment_methods.docx.
Wanner, T., Palmer, E., and Palmer, D. 2021. “Flexible Assessment and Student
Empowerment: Advantages and Disadvantages – Research from an Australian University.” Teaching
in Higher Education. https://doi.org/10.1080/13562517.2021.1989578.
Waterfield, J., and West, B. 2006. “Inclusive Assessment in Higher Education: A Resource
for Change University of Plymouth.” University of Plymouth. Accessed 29 September
2021. https://www.plymouth.ac.uk/uploads/production/document/path/3/3026/Space_
toolkit.pdf.
Witte, A. E. 2011. “Understanding International Grading Scales: More Translation than
Conversion.” International Journal of Management Education 9 (3): 49–59.
19
“HOW TO LOOK AT IT
DIFFERENTLY”
Negotiating more inclusive assessment
design with student partners
DOI: 10.4324/9781003293101-23
212 J. Dargusch, L. Harris, and M. Bearman
Students as partners
Students as partners (SaP) presents a promising way forward in creating a dia-
logue, where student needs can be better understood and therefore incorporated
into assessment design. Described as process-oriented, SaP is “focused on what
students and staff do together to further common educational goals” (Mercer-
Mapstone et al. 2017, 2). The call for SaP has been growing in strength, with
attention turning to how the inclusion of student voice and partnership prac-
tices can influence traditional ways of working in higher education, including
assessment practices (Dwyer 2018; Healey, Flint, and Harrington 2016; Mercer-
Mapstone, Islam, and Reid 2021).
Underpinning successful SaP projects in higher education is what Cook-
Sather and Felten (2017, 5) refer to as an “ethic of reciprocity”, foregrounding
mutual voices and contributions between students and staff with equal impor-
tance attributed to all (Mercer-Mapstone et al. 2017). Such a process has the
potential to subvert traditional power arrangements and allow participant roles to
be renegotiated through dialogue that includes differing perspectives (Matthews
et al. 2018). These are worthy and valuable aims, and the outcomes of exist-
ing studies have largely been reported positively (Mercer-Mapstone et al. 2017).
However, students with disabilities (SWDs) appear to be seldom included in the
small-scale, institutional-level SaP partnerships and projects (Bovill et al. 2016;
Mercer-Mapstone et al. 2017) reported in Australian higher education.
Assessment may present particular challenges for a SaP approach, with strong
contextual influences on design processes, such as departmental norms (Bearman
et al. 2017). However, engaging students as partners in dialogue may help lecturers
better understand how assessment design impacts students and their learning,
potentially bringing new ideas and insights into the design process. Likewise, students
may feel more invested in assessment processes, understanding that their per-
spectives are heard and valued. There are, however, tensions between the various
stakeholders’ assessment expectations, including external accreditation require-
ments, university rules and processes, and students’ understanding of what is fair
and reasonable (Tai et al. 2022).
Power inequality is a key challenge for all students. The SaP literature
acknowledges the challenge of power imbalances, with some researchers describing
the “reinforcement of power asymmetries between students and staff” in
SaP projects (Mercer-Mapstone, Islam, and Reid 2021, 229), framing these as
an obstacle that needs to be overcome (Matthews et al. 2018). SWDs may also
be unsure how to articulate their problems/challenges in public forums, in ways
Workshop design
Online workshops were designed to elicit suggestions and recommendations, gen-
erating ideas for change. They provided opportunities for participants to speak
openly, valuing the mutual voices and contributions that underpin successful SaP
projects (Mercer-Mapstone et al. 2017). Participants were sent written materi-
als (e.g., short student narratives), and asked to anonymously reflect/respond in a
Microsoft Teams worksheet, allowing alternative forms of interaction and record-
ing thoughts generated outside of each workshop. Workshops 1 and 2 were designed
to build relationships and share exam experiences. In workshops 3 and 4, partic-
ipants considered specific units’ exams/timed assessments and discussed potential
changes in format, conditions, and mode. Workshop 5 focused on reviewing a
draft framework for generating more inclusive exams and future directions.
After workshop 5, students were invited to reflect on their workshop expe-
riences. Set questions were posed about the workshop process and structure,
students’ level of comfort, workshop resources, and suggestions for other ways to
involve students in the work of improving assessment. Additional information
about the project’s methodology and outcomes can be found in the NCSEHE
report (Tai et al. 2022).
Data analysis
The aim of this current analysis was to understand how successfully the SaP
approach had promoted practical dialogue; all students and staff were given
pseudonyms.
We wished for insight into how participating SWDs, Dalton (Psychology)
and Veronica (Psychology) from University 1 (U1), and Pete (Business) and
Francine (Allied Health) from University 2 (U2), engaged in workshops
designed around SaP principles and how their interactions contributed to the
group (see Table 19.1).
Analysis of workshop transcripts was focused on the interactions between
participants and the roles of students. We took student utterances, understood
here to mean every spoken contribution in the conversation, as our unit of anal-
ysis and sought to examine what prompted students to speak, what they said,
and how staff reacted to what they said. A general thematic analysis was con-
ducted on the reflection transcripts to gain insights into participating students’
perceptions of the process. Table 19.2 lists the codes applied for each different
analytical focus.
Table 19.1 Workshop student participants
1 Dalton Psychology 5 Y
1 Veronica Psychology 5 Y
2 Pete Business 3 N
2 Francine Allied Health 2 Y
We employed thematic analysis, with some supplementary counts of prevalence. Data
sources were: transcripts of workshop 3 (U1, n = 8 participants; U2, n = 11) and
workshop 4 (U1, n = 8; U2, n = 9); and written (n = 2) and spoken (n = 1) student
reflections.
Table 19.2 Analysis focus and related codes
How did students join the conversation? Codes: response to a direct question;
unprompted.
What contributions did they make and how did staff respond? Codes: contribution
(affirmation/agreement; personal story; comment; suggestion); staff responses
(problematise; consider; accept; ignore; revoice).
How did they feel after the workshops? Codes: affordances (e.g., being heard);
challenges (e.g., lack of student voice).
Dalton: The idea of going into a room and sitting there for two or three
hours or even doing it … It is painful. Also, because you’ve got this huge
stress that what if something goes wrong, and I get a headache, or I get a
nosebleed? Whatever the scenario goes through one’s head, you end up, I
think, losing so much productive time and effort that you could have been
studying effectively, just worrying about concerns that could be addressed
in another way, I think, that would eliminate those concerns.
Students took on the role of expert in the workshop, with weight given to the
value of their lived experiences in understanding the challenges SWDs negoti-
ate within assessment. Given this framing, comments like the following were a
common contribution:
FRANCINE: I feel like all my assessment tasks have been pretty relevant to what
I’ve had to go out and do.
Students and staff also provided a range of concrete suggestions for changes to
timed and other types of assessments, with categories of suggestions shared in
Table 19.3. There were some common suggestions from the two groups, with
most suggestions related to task structure, types/modes, and conditions.
DALTON: … for one of the level two psych units, … there were 10 or 12 small
assessment pieces. I think that in a way works better, because then each piece
feeds into the next, and because each piece is fairly small, you get the feed-
back really quickly. … could you break some of the assessments into smaller
pieces, smaller chunks, where the person knows that this is the content for
the two weeks they’ve got to do, and they’ll do an assessment on it?
FRANCINE: Sorry, I don’t know if this is right, but I know when I was doing my
practical exam something that I really wish I could’ve done was read out that
form out loud … I couldn’t speak it, it wasn’t going into my head.
UNIT CHAIR/COORDINATOR 1: In the past students have gone into a room at
the very start and have been able to set themselves up in there. That could
possibly be an option.
RESEARCHER 2: I’m wondering, I don’t know how the practical exams take
place, but the examiner could simply just ask the student if they wanted to
read it out loud as well too, the prompt.
UNIT CHAIR/COORDINATOR 1: They’re all in, for optometry, they’re all in
a hallway quite close to each other. If they did read things out, the person
next to them will hear it. We can’t let that before they go in but definitely
when they enter the station, it’s an option. They might not be aware that
they can do that.
RESEARCHER 1: Yes. I wonder how things will go if there is still a need for
more online versions of these things versus face-to-face things because
obviously, like what Francine said about reading it out, if you’re at home
by yourself then there’s no barrier to being able to talk through stuff,
which there obviously is if you’re in a crowded space with other students
around.
Staff suggestions
Staff proposed substantive changes to assessment designs during the workshops
(see Table 19.3). In three of four units discussed, assessment changes were planned
for the next term in direct response to workshop suggestions. In the fourth unit,
the UC’s concerns about academic integrity meant the exam remained the same,
with the approach to exam preparation being the focus of change. In most cases,
and particularly at U1, planned changes did not need formal permissions through
academic committees, but could be changed by unit coordinators/chairs as part
of routine updates.
More students in meetings. I understand others were invited but did not
attend but it seemed trying to fix issues without those who suffer the issues
in the room is kind of counter-intuitive, although I also understand the
research team does have this information from surveys.
Whilst students indicated that they personally felt comfortable and
unintimidated when engaging in the workshops, they hypothesised that to get
greater participation from a range of SWDs, “other” ways for students to
interact would be needed to ensure that workshops were a “safe space” (Veronica).
Negotiating inclusive assessment with student partners 219
of students’ work and study commitments, and included other affordances (e.g.,
physical safety during the pandemic, participants’ choice to have their camera
on or off ). Despite these advantages, it is possible that the online environment
may have impacted group cohesion. Consideration should therefore be given to
how to involve SWDs in ways that allow them to engage comfortably in various
modes and spaces/places. This might include, as these students suggested, differ-
ent ways of interacting (e.g., writing into the chat instead of speaking); it might
also mean more flexibility around attendance. Consistent with our SaP method-
ology, we believe future projects would benefit from student involvement in the
project design to ensure that eventual mechanisms for student engagement with
staff allow full participation for all within the group.
There are many reasons to continue research into, and use of, SaP processes.
The types of discrete, small-scale studies reported in the literature (Mercer-
Mapstone et al. 2017) are limited in scope and generalisability. Studies such as
this one provide insights into the ways in which SWDs can be invited to help
staff overcome assumptions about how assessment design impacts on students,
and the ways in which issues of power can influence such exchanges. If, as Dalton
remarked, SWDs such as himself are “voiceless” in HE, then partnership prac-
tices are a key first step to providing a more inclusive university experience, but
all partners must be committed to encouraging students’ ideas and actively
listening to them. To reach this aim, universities must overcome a tendency to generalise
about student needs and provide many more opportunities to include diverse
student voices in co-generative, dialogic approaches to assessment design.
References
Australian Government. Department of Education, Skills and Employment. 2005.
“Disability Standards for Education.” Last modified 15 April 2021. https://
www.dese.gov.au/disability-standards-education-2005.
Bearman, M., Dawson, P., Bennett, S., Hall, M., Molloy, E., Boud, D., and Joughin, G.
2017. “How University Teachers Design Assessments: A Cross-Disciplinary Study.”
Higher Education 74 (1): 49–64. https://doi.org/10.1007/s10734-016-0027-7.
Bessant, J. 2012. “‘Measuring Up’? Assessment and Students with Disabilities in the
Modern University.” International Journal of Inclusive Education 16 (3): 265–281. https://
doi.org/10.1080/13603116.2010.489119.
Bovill, C., Cook-Sather, A., Felten, P., Millard, L., and Moore-Cherry, N. 2016. “Addressing
Potential Challenges in Co-Creating Learning and Teaching: Overcoming Resistance,
Navigating Institutional Norms and Ensuring Inclusivity in Student–Staff Partnerships.”
Higher Education 71 (2): 195–208. https://doi.org/10.1007/s10734-015-9896-4.
CAST. 2018. “Universal Design for Learning Guidelines Version 2.2.” http://udlguidelines.
cast.org.
Cook-Sather, A., and Felten, P. 2017. “Where Student Engagement Meets Faculty
Development: How Student-Faculty Pedagogical Partnership Fosters a Sense of
Belonging.” Student Engagement in Higher Education Journal 1 (2): 3–11.
Dwyer, A. 2018. “Toward the Formation of Genuine Partnership Spaces.” International
Journal for Students as Partners 2 (1): 11–15. https://doi.org/10.15173/ijSaP.v2i1.3503.
Grimes, S., Southgate, E., Scevak, J., and Buchanan, R. 2019. “University Student
Perspectives on Institutional Non-Disclosure of Disability and Learning Challenges:
Reasons for Staying Invisible.” International Journal of Inclusive Education 23 (6):
639–655. https://doi.org/10.1080/13603116.2018.1442507.
Harris, L. R., and Dargusch, J. 2020. “Catering for Diversity in the Digital Age:
Reconsidering Equity in Assessment Practices.” In Re-Imagining University Assessment in
a Digital World, edited by Margaret Bearman, Phillip Dawson, Rola Ajjawi, Joanna Tai,
and David Boud, 95–110. Berlin: Springer.
Healey, M., Flint, A., and Harrington, K. 2016. “Students as Partners: Reflections on a
Conceptual Model.” Teaching and Learning Inquiry 4 (2): 8–20. https://doi.org/10.20343/
teachlearninqu.4.2.3.
Hockings, C. 2010. Inclusive Learning and Teaching in Higher Education: A Synthesis of
Research. EvidenceNet, Higher Education Academy. https://www.advance-he.ac.uk/
knowledge-hub/inclusive-learning-and-teaching-higher-education-synthesis-research.
Lawrie, G., Marquis, E., Fuller, E., Newman, T., Qiu, M., Nomikoudis, M., Roelofs,
F., and Van Dam, L. 2017. “Moving towards Inclusive Learning and Teaching:
A Synthesis of Recent Literature.” Teaching and Learning Inquiry 5 (1): 9–21. https://
doi.org/10.20343/teachlearninqu.5.1.3.
Matthews, K. E., Dwyer, A., Hine, L., and Turner, J. 2018. “Conceptions of Students as
Partners.” Higher Education 76 (6): 957–971. https://doi.org/10.1007/s10734-018-0257-y.
Mercer-Mapstone, L., Dvorakova, S. L., Matthews, K. E., Abbot, S., Cheng, B., Felten,
P., Knorr, K., Marquis, E., Shammas, R., and Swaim, K. 2017. “A Systematic
Literature Review of Students as Partners in Higher Education.” International Journal
for Students as Partners 1 (1): 1–23. https://doi.org/10.15173/ijSaP.v1i1.3119.
Mercer-Mapstone, L., Islam, M., and Reid, T. 2021. “Are We Just Engaging ‘the Usual
Suspects’? Challenges in and Practical Strategies for Supporting Equity and Diversity
in Student–Staff Partnership Initiatives.” Teaching in Higher Education 26 (2): 227–245.
https://doi.org/10.1080/13562517.2019.1655396.
Morris, C., Milton, E., and Goldstone, R. 2019. “Case Study: Suggesting Choice:
Inclusive Assessment Processes.” Higher Education Pedagogies 4 (1): 435–447.
https://doi.org/10.1080/23752696.2019.1669479.
Tai, J., Ajjawi, R., Bearman, M., Dargusch, J., Dracup, M., and Harris, L. 2022.
Re-Imagining Exams: How Do Assessment Adjustments Impact on Inclusion? Final Report
for the National Centre for Student Equity in Higher Education. https://www.ncsehe.
edu.au/publications/exams-assessment-adjustments-inclusion/.
Tierney, R. D. 2013. “Fairness in Classroom Assessment.” In SAGE Handbook of Research
on Classroom Assessment, edited by James H. McMillan, 125–144. Thousand Oaks, CA:
SAGE Publications.
20
ADDRESSING INEQUITY
Students’ recommendations on how
to make assessment more inclusive
Introduction
In July 2021, the student co-authors of this project (Shannon and Daniella) saw a
job opportunity to become paid student partners on a research project exploring
how assessment could be more inclusive and equitable to diverse students. While
we all had our own motivations for applying, what struck us was the uniqueness of
the job. Staff were asking us – students – to help them understand how to design
assessment. And staff were naming us – students – as their partners. Was this real?
As we learned through the project, the topic of inclusion in assessment is
increasingly discussed by scholars and educators (Hanesworth, Bracken, and
Elkington 2019; McArthur 2016; Nieminen 2022). However, missing from
the discussion is students’ expertise on how assessment could be improved. Too
often, students are seen only as a data source, for example, as attendees in a focus
group or participants in a survey. These opportunities do not allow for students
to freely, over time, share their ideas and recommendations, and through train-
ing and support, become co-researchers in this important topic.
Some readers may ask, why is it important for students to be co-researchers?
Our answer is that, as recipients of the education provided to us, we are
truly the ones who can evaluate its quality. For example, McArthur (2016) uses
an analogy by Nussbaum (2006) to illustrate the importance of moving beyond
procedural evaluation:
DOI: 10.4324/9781003293101-24
Addressing inequity with student partners 223
This analogy highlights how students see learning design. It’s not that stu-
dents don’t trust educators to do their best research and use the best practice to
design our learning experiences. But just because they have taken their time
and tried their hardest does not mean, necessarily, that it is the best experience
for students. Only we – students – can taste the pasta and judge for ourselves.
And further, it is important to note that we will not all give the same evalua-
tion. It depends on our subjectivities, for example, our preferred study strategies
or environments, the topics that interest us, and the varied supports that we
need. But by including us as co-researchers, scholars and educators can mini-
mise the gap between what they think is best, and what we need to succeed in
our learning.
In this chapter, we will reflect on what we have learned in this students as
partners (SaP) research project exploring inclusion and equity in assessment. We
will provide an overview of the project and then discuss our three key recom-
mendations. Our recommendations are informed both by the co-design work-
shops that we facilitated with our peers (n = 52), and our own reflections and
experiences as students. Finally, we will conclude by advocating for others to
embed a SaP approach in their future research.
Please see Table 20.1 for an overview of CoLab activities. Note: CoLabs
were hosted online, and the team used a combination of Zoom, Mentimeter, and
Padlet software to support activities. Student partners were the facilitators of the
workshops, while staff attended to take notes.
In each of the above activities, the student participants were allocated Zoom
breakout rooms with one student partner to facilitate and one staff member to
take additional notes, as some of the feedback may occur outside of the software
(e.g., a spoken comment not recorded on the online whiteboard). To analyse
the data the team worked in pairs (one student and one staff ) and thematically
grouped data into overarching ideas, common experiences, or as seen below,
recommendations for improving inclusion.
balancing their health, which means study may not always be their main
goal. Rather than treat these other priorities as excuses, students want to be
treated with respect as mature adults. An example of this from our data was
when students felt uncomfortable having to provide explicit reasons for an
extension request for their assessments.
We also recommend, based on the data and our reflections, that staff be
more open about sharing their own stressors or challenges as a way to
create a friendlier learning environment. One student summarised this
sentiment from the workshops with,
In fact, the student co-authors felt that when staff share aspects of their
personal or work life, students can also get a glimpse of their lives and, as a
consequence, understand staff’s busy schedules.
Another example of the importance of empathy emerged when students
shared experiences of when they were put down by teachers when asking for
additional clarification or help. One student voiced,
[My teacher] met me in the practicals and belittled me for not doing some-
thing right. Teachers need more empathy – there is a reason student are
struggling. The teachers – I feel like they don’t care.
Encourage students to look after one another, mental health situations are
not well explored. Knowing that there are resources out there, but also less
focus on failure and the stigma around failing an exam [it doesn’t mean
you aren’t] as good.
226 S. Krattli, D. Prezioso, and M. Dollinger
Every course is a bit different, and the structure is not very explicit, teach-
ers say things on the fly, which is fine, but they don’t say what they actually
want. For example, the marking criteria are often vague, this is confusing.
Sometimes I have done research and then found out it isn’t needed. The
assessment criteria needs to be more explicit.
Another challenge is when you get instructions for the assignment, a
rubric, FAQs, additional material, notes from a lecture and notes from
a discussion board. You end up trying to collate 7 sets of instructions
which don’t always align. So, lots of time spent/wasted working out what
they want.
Linking to this, students also felt it was important that unit chairs con-
sider which resources to recommend and condense these resources for stu-
dents, to avoid students feeling overwhelmed. For example, creating one
document that links to several key articles that students can use to start
their research as well as an FAQ with common questions that students may
have. Discussion boards were also seen as critical to support student success
because they can support informal dialogue, as well as opportunities to get
further clarification.
Conclusion
Our findings build on previous literature which has highlighted the importance
of staff training and awareness to support inclusive assessment design (Nieminen
2022; Tai, Ajjawi, and Umarova 2021). Yet as we have argued and showcased here,
students should play a pivotal role in future work to understand the practical
ways in which teachers can improve inclusion in assessment. Therefore, the value of this
project was that it was designed to give diverse students a voice to share their ideas
on how to make assessments at university more inclusive and equitable. Engaging
students as partners and listening to the students in the workshops provided the
opportunity for students to raise concerns and speak to the inequity they have
faced. The process also gave students a sense of relief knowing that these impor-
tant issues are being heard and taken into consideration for further improvement.
CoLabs proved to be a useful model for eliciting students’ feedback and
generating practical ideas on how assessment design can be more inclusive to diverse
students. The three recommendations provided in this chapter – creating
empathetic relationships, ensuring consistent and clear instructions and
rubrics, and having all assessment information in one easily accessible
location – are uncomplicated steps any teacher can take to improve inclusion.
Even though the participants in this project differed in academic background
and neurodivergence, we have had similar experiences and are advocating
for the same thing – equity. As we showcased in this chapter, it was this
common goal, and the leverage of our unique experiences and insights, that
helped us uncover how we could make assessment more inclusive. We call for
further projects that involve cross-cohort collaborations between students
and staff, to make the most of everyone’s academic journey.
Acknowledgements
The authors would like to acknowledge the other research team members who
were integral to collecting data and analysing results, including Rola Ajjawi, Sam
Bohra, Marina Booth, Harmeet Kaur, Danni McCarthy, Merrin McCracken,
Trina Jorre St Jorre, and Joanna Tai.
References
Becker, S., and Palladino, J. 2016. “Assessing Faculty Perspectives about Teaching
and Working with Students with Disabilities.” Journal of Postsecondary Education and
Disability 29 (1): 65–82.
Carless, D., and Chan, K. K. H. 2017. “Managing Dialogic Use of Exemplars.” Assessment
and Evaluation in Higher Education 42 (6): 930–941. https://doi.org/10.1080/02602938.
2016.1211246.
Dollinger, M., Eaton, R., and Vanderlelie, J. 2021. “Reinventing the Focus Group:
Introducing Design Thinking CoLabs.” https://unistars.org/papers/STARS2021/
03B.pdf.
Dollinger, M., and Lodge, J. 2020. “Understanding Value in the Student Experience
through Student–Staff Partnerships.” Higher Education Research and Development 39 (5):
940–952. https://doi.org/10.1080/07294360.2019.1695751.
Dollinger, M., and Vanderlelie, J. 2021. “Closing the Loop: Co-designing with Students
for Greater Market Orientation.” Journal of Marketing for Higher Education 31 (1): 41–57.
https://doi.org/10.1080/08841241.2020.1757557.
Hanesworth, P., Bracken, S., and Elkington, S. 2019. “A Typology for a Social Justice
Approach to Assessment: Learning from Universal Design and Culturally Sustaining
Pedagogy.” Teaching in Higher Education 24 (1): 98–114. https://doi.org/10.1080/
13562517.2018.1465405.
McArthur, J. 2016. “Assessment for Social Justice: The Role of Assessment in Achieving
Social Justice.” Assessment and Evaluation in Higher Education 41 (7): 967–981. https://
doi.org/10.1080/02602938.2015.1053429.
Mercer-Mapstone, L., Dvorakova, S. L., Matthews, K. E., Abbot, S., Cheng, B., Felten,
P., Knorr, K., Marquis, E., Shammas, R., and Swaim, K. 2017. “A Systematic
Literature Review of Students as Partners in Higher Education.” International Journal
for Students as Partners 1 (1): 1–23. https://doi.org/10.15173/ijsap.v1i1.3119.
Meyers, S., Rowell, K., Wells, M., and Smith, B. C. 2019. “Teacher Empathy: A Model
of Empathy for Teaching for Student Success.” College Teaching 67 (3): 160–168. https://
doi.org/10.1080/87567555.2019.1579699.
Nieminen, J. 2022. “Assessment for Inclusion: Rethinking Inclusive Assessment in
Higher Education.” Teaching in Higher Education. https://doi.org/10.1080/13562517.
2021.2021395.
Nussbaum, M. C. 2006. Frontiers of Justice. Cambridge, MA: Belknap Press.
Tai, J., Ajjawi, R., and Umarova, A. 2021. “How Do Students Experience Inclusive
Assessment? A Critical Review of Contemporary Literature.” International Journal of
Inclusive Education. https://doi.org/10.1080/13603116.2021.2011441.
21
MOVING FORWARD
Mainstreaming assessment
for inclusion in curricula
The book has focused on assessment because assessment shapes and directs student
learning; it is the assessment system that formally defines what is worth learning.
The chapter authors have brought together a diversity of perspectives to explore,
conceptualise, and problematise assessment for inclusion as well as showcasing
good practice. In this final chapter, we make some concluding remarks and draw
themes from across the book to reflect ways forward for assessment for inclusion.
Assessment for inclusion has both pragmatic and conceptual features. Focussing
on immediate practical solutions alone is unlikely to be sufficient, given the phil-
osophical roots of inclusion in the promise that education will contribute to a
better world for both the individual and society more widely. Concomitantly,
only working in abstract or theoretical spaces will not help to change practice.
There is a great need to collaborate across disciplinary and organisational bound-
aries to build upon ideas, rather than operating in silos, if we are to mainstream
assessment for inclusion. Given the diversity we seek to acknowledge and support
within higher education, there are likely to be many people who can contribute
to re-casting assessment for inclusion from a range of perspectives:
academics, researchers, practitioners, academic developers, industry,
professional bodies, and students themselves. The backgrounds, philosophies,
theories, and practices these people bring will also be diverse – beyond
those which we have outlined within research fields. At this early stage of
considering assessment within a broader goal
of inclusion, we should be open to what each can bring, and work on finding
resonances and commonalities to make substantial advances in assessment.
It is also important to note that this work cannot exist solely within academic
research journals, or handbooks for assessment design, or even student advocacy
agendas: it must permeate these spaces to achieve change in what happens
on the ground. It is of no use to talk about wonderful new types of assessment
designs which might improve inclusion, if they are never implemented or proven
DOI: 10.4324/9781003293101-25
232 R. Ajjawi et al.
to be effective. It is not just wide, sweeping changes due to the pandemic that
have already made an impact for diverse students (Tai et al. 2022a). We should also look
to our own “backyards” and see what can be done incrementally, since these small
things may make the difference between students choosing a different course
(or worse – discontinuing study) or persisting with their chosen course/degree.
In the end, it is not educators who determine what is inclusive; it is the students
and their future trajectories, or their absences from them. We need to be observant
about not only who is present in our courses, but as importantly, who is absent or
under-represented.
Assessment design is often simply an accretion from tradition (Dawson et al.
2013), and yet, academics often justify specific designs by referring to the “real
world”. Assessment’s fabricated constraints, and thus currently allowed adjust-
ments, do not withstand scrutiny when we consider this juxtaposition: after all,
the rules are themselves social constructions and can therefore be subject to alter-
ation (McArthur 2016). Therefore, in this book and beyond, we call for engage-
ment and involvement at every level to improve assessment for inclusion.
While the chapters in this book have focused primarily on assessment, we also
need to reflect on inclusion in other aspects of the curriculum. We cannot look at
assessment independently of what else is happening in the course. The backwash
effect of assessment is on learning and all aspects of the curriculum: the intended
learning outcomes and learning and teaching activities (Biggs and Tang 2011).
So, while we might start our focus on assessment we need to look backwards
to learning and teaching activities, the context in which they occur, and the
learning outcomes desired. Intended learning outcomes should not be
formulated so narrowly that they prevent students from working on different
things while still meeting them. Nor need they be so dependent on specific
subject content that they exclude equivalent demonstrations of achievement,
as is currently assumed. Some current learning outcomes may be
inappropriately exclusionary and need to be rethought. It is also
worth noting that while we have adopted the language of inclusion in this book,
inclusion can be tokenistic if a student is merely counted but does not feel like
they belong or are active participants with a voice. Inclusion is not just a technical
requirement; it encompasses students being part of what is being assessed.
As editors, in reflecting on the various chapters, there are common refrains
that we can draw out: 1) that students should take an active and agentic role in
assessment; 2) that inclusion needs to become a mainstay of regulatory frame-
works that govern assessment from design through to evaluation; 3) that teachers
need to adopt ethical reflexivity; and 4) that more diverse discourses need to be
embedded to disrupt positivist and ableist discourses of assessment.
1. Students as agentic
Several chapters in this book showed how students needed to be positioned as
active agents in the assessment process in order to be included. This can be as
partners involved in the design of assessment (Chapters 19 and 20), as actively
Mainstreaming assessment for inclusion 233
and diversity play out within the student experience”. Beyond this, we must
look to how the increasing presence of artificial intelligence and
educational technology (e.g., proctoring) in assessment is unwittingly
embedding bias and exclusion through taking highly selected groups as representing
the whole. New forms of accountability and regulation might be needed to
prompt ethical decision-making around these new technologies (Chapter 11).
These are not simply administrative tasks; scrutiny of assessment practices
should be educational rather than bureaucratic.
Any form of scrutiny can be misused and can perpetuate conservative
practices. Whitburn and Thomas (Chapter 7, 76) remind us how “regulatory
compliance is at the fore when compelling students to disclose disabilities to
institutions, as a way to ensure that they can then expect reasonable adjust-
ments to be made to their programs of learning, rather than to consider the
inclusiveness and accessibility of courses”. Following the rules is not good
enough: ethical reflexivity and flexibility are required alongside regulation.
3. Ethical reflexivity, relationality, and flexibility to influence assessment practices
A broad survey of the higher education landscape suggests that student
diversity has increased (Marginson 2016). Assessment philosophy has also
changed, moving beyond testing what was taught to include assessment for
learning and sustainable notions of assessment (Boud and Soler 2016). This
implies that we need a different relationship between students and teach-
ers. Gleeson and Fletcher (Chapter 4) remind us that education is funda-
mentally relational – it occurs through people working together. Strong
student-teacher relationships foster inclusion (Tai et al. 2022b). The big
challenge is to get educators to think differently about assessment, and
to think carefully about who their students are and who is and isn’t being
accommodated by current assessment regimes.
Part of the inertia that surrounds the design of assessment is that assessment
regimes are set within rigid systems of quality assurance. Decisions about
assessment must be made well in advance of knowing which students are
enrolled. These early decisions, made without direct knowledge of who will
be affected by them, cannot be unmade or revisited, and so the main recourse
for inclusion is individual accommodations that are peripheral to task design
(e.g., extra time, breaks or rooms). We need more flexibility in the system
and allowance for professional and ethical decision-making by academic and
course teams.
Many authors have argued that assessment should orient towards social
justice, including the key proponent of assessment for social justice, Jan
McArthur (Chapter 2). Working out what social justice might involve
requires considerable prompting to encourage conversations about what this
might look like in particular disciplines and how this can be embedded in
courses. The implication that follows is that this would lead to greater satis-
faction for staff as well. Fostering communities of praxis and ethical reflex-
ivity may be needed to reimagine inclusivity not through the lens of deficit
In conclusion, we hope that this book opens new conversations and investi-
gations about assessment for inclusion. We ask educators to take courage in
changing assessment and to work with students to take on this challenge. We
urge the sector to fund and support continued research and development in
assessment for inclusion. Finally, we look forward to the flourishing of new
collaborations and conversations about assessment for inclusion.
References
Bartolic, S. K., Matzat, U., Tai, J., Burgess, J.-L., Boud, D., Craig, H., and Archibald,
A., et al. 2022. “Student Vulnerabilities and Confidence in Learning in the Context
of the COVID-19 Pandemic.” Studies in Higher Education. http://doi.org/10.1080/
03075079.2022.2081679.
Biggs, J., and Tang, C. 2011. Teaching for Quality Learning at University: What the Student
Does, rev. ed. Buckingham: Society for Research into Higher Education and the
Open University Press.
Boud, D., and Soler, R. 2016. “Sustainable Assessment Revisited.” Assessment and
Evaluation in Higher Education 41 (3): 400–413. http://doi.org/10.1080/02602938.2015.
1018133.
Dawson, P., Bearman, M., Boud, D. J., Hall, M., Molloy, E. K., and Bennett, S. 2013.
“Assessment Might Dictate the Curriculum, But What Dictates Assessment?” Teaching &
Learning Inquiry: The ISSOTL Journal 1 (1): 107–111. http://doi.org/10.20343/teachlearninqu.1.1.107.
Marginson, S. 2016. “The Worldwide Trend to High Participation Higher Education:
Dynamics of Social Stratification in Inclusive Systems.” Higher Education 72 (4):
413–434. http://doi.org/10.1007/s10734-016-0016-x.
McArthur, J. 2016. “Assessment for Social Justice: The Role of Assessment in Achieving
Social Justice.” Assessment and Evaluation in Higher Education 41 (7): 967–981. http://
doi.org/10.1080/02602938.2015.1053429.
Nieminen, J. H. 2022. “Assessment for Inclusion: Rethinking Inclusive Assessment in
Higher Education.” Teaching in Higher Education. http://doi.org/10.1080/13562517.2021.
2021395.
Peters, M. A., Rizvi, F., McCulloch, G., Gibbs, P., Gorur, R., Hong, M., and Hwang,
Y., et al. 2020. “Reimagining the New Pedagogical Possibilities for Universities Post-
Covid-19.” Educational Philosophy and Theory 54 (6): 717–760. http://doi.org/10.1080/
00131857.2020.1777655.
Tai, J., Ajjawi, R., Bearman, M., Dargusch, J., Dracup, M., Harris, L., and Mahoney, P.
2022a. Re-imagining Exams: How Do Assessment Adjustments Impact on Inclusion? Perth,
Australia: National Centre for Student Equity in Higher Education. https://www.
ncsehe.edu.au/wp-content/uploads/2022/02/Tai_Deakin_Final.pdf.
Tai, J., Ajjawi, R., and Umarova, A. 2021. “How Do Students Experience Inclusive
Assessment? A Critical Review of Contemporary Literature.” International Journal of
Inclusive Education. http://doi.org/10.1080/13603116.2021.2011441.
Tai, J., Mahoney, P., Ajjawi, R., Bearman, M., Dargusch, J., Dracup, M., and Harris, L.
2022b. “How Are Examinations Inclusive for Students with Disabilities in Higher
Education? A Sociomaterial Analysis.” Assessment and Evaluation in Higher Education.
http://doi.org/10.1080/02602938.2022.2077910.
Tierney, R. D. 2013. “Fairness in Classroom Assessment.” In SAGE Handbook of Research
on Classroom Assessment, edited by James H. McMillan, 125–144. Thousand Oaks,
CA: SAGE Publications.
Valentine, N., Durning, S. J., Shanahan, E. M., van der Vleuten, C., and Schuwirth,
L. 2022. “The Pursuit of Fairness in Assessment: Looking beyond the Objective.”
Medical Teacher 44 (4): 353–359. http://doi.org/10.1080/0142159X.2022.2031943.
Valentine, N., Shanahan, E. M., Durning, S. J., and Schuwirth, L. 2021. “Making It Fair:
Learners’ and Assessors’ Perspectives of the Attributes of Fair Judgement.” Medical
Education 55 (9): 1056–1066. https://doi.org/10.1111/medu.14574.
INDEX