Improving student learning from
assessment and feedback: a
programme-level view
Dr Tansy Jessop TESTA Project Leader
Senior Fellow L&T, University of Winchester
T@T (Talking @ Teaching) at the University of Leeds
24 September 2013
The big questions
• Why student learning?
• Why assessment and feedback?
• Why a programme-level view?
• Introducing TESTA
About TESTA
Transforming the Experience of Students through Assessment
• HEA-funded research project (2009-12)
• Seven programmes in four partner universities
• Maps programme-wide assessment
• Engages with Quality Assurance processes
• Diagnosis – intervention – cure
TESTA ‘Cathedrals Group’ Universities
[Map of participating universities, including Edinburgh, Edinburgh Napier, Greenwich, Canterbury Christ Church and Glasgow]
TESTA
“…is a way of thinking about assessment and feedback”
Graham Gibbs
Based on assessment principles
• Captures and distributes sufficient student time and effort – time on task
• Challenging learning with clear goals and standards, encouraging deep learning
• Sufficient, high quality feedback, received on time, with a focus on learning
• Students pay attention to the feedback and it guides future studies – feeding-forward
• Students are able to judge their own performance accurately – self-regulating
TESTA Research Methods
(Drawing on Gibbs and Dunbar-Goddet, 2008, 2009)
• Assessment Experience Questionnaire
• Focus groups
• Programme audit
• Programme team meeting
Audit in a nutshell
• Number of assessment tasks
• Summative/formative
• Variety
• Proportion of exams
• Oral feedback
• Written feedback
• Speed of return of feedback
• Specificity of criteria, aims and learning outcomes
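To make the audit categories concrete, here is a minimal sketch in Python of how one programme's audit might be tallied. Everything in it – the Task structure, the field names, the example figures – is a hypothetical illustration, not TESTA's actual instrument or data.

    from dataclasses import dataclass

    @dataclass
    class Task:
        kind: str            # e.g. "essay", "exam", "presentation" (hypothetical labels)
        summative: bool      # does it carry a mark that counts?
        feedback_words: int  # written feedback returned to the student
        days_to_return: int  # turnaround time for feedback

    def audit_summary(tasks: list[Task]) -> dict:
        """Tally the audit categories for one programme (illustrative only)."""
        summative = [t for t in tasks if t.summative]
        formative = [t for t in tasks if not t.summative]
        exams = [t for t in summative if t.kind == "exam"]
        return {
            "summative tasks": len(summative),
            "formative tasks": len(formative),
            "variety (distinct types)": len({t.kind for t in tasks}),
            # crude proxy: share of summative tasks that are exams, by count
            "% assessment by exams": 100 * len(exams) / max(len(summative), 1),
            "written feedback (words)": sum(t.feedback_words for t in tasks),
            "mean days to return": sum(t.days_to_return for t in tasks) / max(len(tasks), 1),
        }

    # A hypothetical programme: two essays, one exam, one unmarked draft.
    print(audit_summary([
        Task("essay", True, 350, 21),
        Task("essay", True, 420, 28),
        Task("exam", True, 0, 35),
        Task("draft", False, 200, 7),
    ]))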
Assessment Experience Questionnaire
• Quantity of effort
• Coverage of content and knowledge
• Clear goals and standards
• Quantity and quality of feedback
• Use of feedback
• Appropriate assessment
• Learning from exams
• Deep and surface learning
Focus Groups
• Student voice and narrative
• Explanation
• Corroboration & contradiction
• Compelling evidence with the stats
Case Study…
• tells a good story
• raises a thought-provoking issue
• has elements of conflict
• promotes empathy with the central characters
• lacks an obvious, clear-cut answer
• takes a position, demands a decision
• is relatively concise (Gross Davis, 1993)
Case Study X: what’s going on?
• Committed and innovative lecturers
• Lots of coursework, of very varied forms
• No exams
• Masses of written feedback on assignments (15,000 words)
• Learning outcomes and criteria clearly specified
…looks like a ‘model’ assessment environment
But students:
• Don’t put in a lot of effort, and distribute that effort across few topics
• Don’t think there is a lot of feedback or that it is very useful, and don’t make use of it
• Don’t think it is at all clear what the goals and standards are
• …are unhappy
Case Study Y: what’s going on?
• 35 summative assessments
• No formative assessment specified in documents
• Learning outcomes and criteria wordy and woolly
• Marking by global, tacit, professional judgements
• Teaching staff mainly part-time and hourly paid
…looks like a problematic assessment environment
But students:
• Put in a lot of effort and distribute their effort across topics
• Have a very clear idea of goals and standards
• Are self-regulating and have a good idea of how to close the gap
Two paradigms

Transmission model         Social constructivist model
Expert to novice           Participatory, democratic
Planned & ‘delivered’      Messy and process-oriented
Feedback by experts        Peer review
Feedback to novices        Self-evaluation
Privatised                 Social process
Monologue                  Dialogue
Emphasis on measuring      Emphasis on learning
Competition                Collaboration
Metaphor – machine         Metaphor – the journey
Variations on 23 UG programmes in 8 universities
• Between 12 and 68 summative tasks
• Between 0 and 55 formative tasks
• From 7 to 17 different types of assessment
• Feedback returned within 10 to 35 days
• From 936 to 15,412 written words of feedback
• From 37 minutes to 30 hours of oral feedback
• From 0% to 79% of assessment by exams
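Ranges like these come from comparing per-programme audit summaries. Continuing the hypothetical audit sketch above, a helper along these lines could compute the spread; it is an illustrative sketch, not the analysis code TESTA used.

    def variation_ranges(summaries: list[dict]) -> dict:
        """Min-max spread of each audit measure across programmes (illustrative)."""
        return {
            key: (min(s[key] for s in summaries), max(s[key] for s in summaries))
            for key in summaries[0]
        }

    # Given one audit_summary(...) dict per programme:
    # ranges = variation_ranges([audit_summary(p) for p in all_programmes])
    # ranges["summative tasks"] might then read (12, 68), as reported above.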
Theme 1: Lack of formative assessment
“Formative assessment is concerned with how judgements about the quality of student responses can be used to shape and improve students’ competence by short-circuiting the randomness and inefficiency of trial-and-error learning” (Sadler, 1989, p.120).
TESTA: unmarked, required, eliciting feedback
The potential
• It was really useful. We were assessed on it but we weren’t officially given a grade, but they did give us feedback on how we did.
• It didn’t actually count so that helped quite a lot because it was just a practice and didn’t really matter what we did and we could learn from mistakes so that was quite useful.
• Getting feedback from other students in my class helps. I can relate to what they’re saying and take it on board. I’d just shut down if I was getting constant feedback from my lecturer.
• I find more helpful the feedback you get in informal ways week by week, but there are some people who just hammer on about what will get them a better mark.
The barriers…
• If there weren’t loads of other assessments, I’d do it.
• If there are no actual consequences of not doing it, most students are going to sit in the bar.
• It’s good to know you’re being graded because you take it more seriously.
• I would probably work for tasks, but for a lot of people, if it’s not going to count towards your degree, why bother?
• The lecturers do formative assessment but we don’t get any feedback on it.
Both/and – either/or questions
• How many summative tasks are necessary to measure student achievement?
• How much formative assessment takes place on programmes at your university?
• How do we reduce summative assessment without compromising student effort?
• How seriously do students take formative tasks?
• How do we get students (and staff) to take formative assessment seriously?
Theme 2: Time-on-task
We could do with more assessments over the course of the year to make sure that people are actually doing stuff.
We get too much of this end or half way through the term essay type things. Continual assessments would be so much better.
So you could have a great time doing nothing until like a month before Christmas and you’d suddenly panic. I prefer steady deadlines, there’s a gradual move forward, rather than bam!
Theme 3: Feedback – common problems
• It was about nine weeks… I’d forgotten what I’d written.
• I read it and think “Well that’s fine, but I’ve already handed it in now and got the mark. It’s too late”.
• Once the deadline comes up, you just look on the Internet and say ‘Right, that’s my mark. I don’t need to know too much about why I got it’.
• You know that twenty other people have got the same sort of comment.
Programme-related feedback issues
• The feedback is generally focused on the module.
• It’s difficult because your assignments are so detached from the next one you do for that subject. They don’t relate to each other.
• Because it’s at the end of the module, it doesn’t feed into our future work.
• You’ll get really detailed, really commenting feedback from one tutor and the next tutor will just say ‘Well done’.
Feedback really matters…
• 1,220 AEQ returns, 23 programmes, 8 universities
• Strong statistical relationship between the quantity and quality of feedback and students’ understanding of goals and standards (r = 0.696, p < 0.01)
• Strong statistical relationship between overall satisfaction and clear goals and standards (r = 0.662, p < 0.01)
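For readers who want to run this kind of correlation analysis on their own survey data, a minimal sketch using scipy's Pearson correlation is below. The scores are invented stand-ins for AEQ subscale means; the TESTA dataset itself is not reproduced here.

    import numpy as np
    from scipy.stats import pearsonr

    # Invented stand-ins for per-programme AEQ subscale means (NOT TESTA data):
    # perceived quantity/quality of feedback vs. clarity of goals and standards.
    feedback_scores = np.array([3.1, 3.8, 2.9, 4.0, 3.5, 2.7, 3.9, 3.3])
    clear_goals = np.array([3.0, 3.9, 2.8, 4.2, 3.4, 2.9, 3.7, 3.1])

    r, p = pearsonr(feedback_scores, clear_goals)
    print(f"r = {r:.3f}, p = {p:.4f}")
    # TESTA reported r = 0.696, p < 0.01 on the real dataset.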
Theme 4: Clear goals and standards
• It is a formal document so the language is quite complex and I’ve had to read it a good few times to kind of understand what they are saying.
• Assessment criteria can make you take a really narrow approach.
• It’s such a guessing game… You don’t know what they expect from you.
• I don’t have any idea of why it got that mark.
• They read the essay and then they get a general impression, then they pluck a mark from the air.
• It’s a shot in the dark.
Changes to assessment patterns
1. Less summative, more formative
2. Feedback issues: dialogue, peer-to-peer; giving feedback before marks; cycles of feedback
3. Longer modules, linking and sequencing across modules
4. Attention to timing of tasks, bunching and spreading
5. Quicker return times
6. Streamlining variety of assessment
7. Challenging students to do more, at a higher level
8. Structural changes; synoptic assessments
TESTA is about the team
• Programme evidence brings the team together
• Addresses variations in standards
• The module vs the greater good of the programme
• Helps to confront protectionism and silos
• Develops collegiality and conversations about pedagogy
TESTA is about coherence
www.testa.ac.uk
References
Gibbs, G. & Simpson, C. (2004) Conditions under which assessment supports students’ learning. Learning and Teaching in Higher Education, 1(1), 3-31.
Gibbs, G. & Dunbar-Goddet, H. (2009) Characterising programme-level assessment environments that support learning. Assessment & Evaluation in Higher Education, 34(4), 481-489.
Hattie, J. & Timperley, H. (2007) The power of feedback. Review of Educational Research, 77(1), 81-112.
Jessop, T., El Hakim, Y. & Gibbs, G. (2013) The whole is greater than the sum of its parts: a large-scale study of students’ learning in response to different assessment patterns. Assessment & Evaluation in Higher Education, iFirst.
Jessop, T., McNab, N. & Gubby, L. (2012) Mind the gap: an analysis of how quality assurance processes influence programme assessment patterns. Active Learning in Higher Education, 13(3), 143-154.
Jessop, T., El Hakim, Y. & Gibbs, G. (2011) Research inspiring change. Educational Developments, 12(4).
Nicol, D. (2010) From monologue to dialogue: improving written feedback processes in mass higher education. Assessment & Evaluation in Higher Education, 35(5), 501-517.
Sadler, D.R. (1989) Formative assessment and the design of instructional systems. Instructional Science, 18, 119-144.


Editor's Notes

  • #3 Emphasis on measurement; two roles for teachers – judges of performance and achievement, and collaborators with students in learning. It is obvious that we are interested in student learning, but our assessment systems are often perceived more as ways of measuring students than of developing their capabilities. Assessment and feedback are key drivers of learning – Paul Ramsden: assessment always drives the curriculum; it is where students pay attention – and feedback is the single most important factor in student learning (John Hattie; Black and Wiliam). NSS scores for assessment and feedback remain the lowest: 85% of all students in the UK are satisfied with their degree courses, but only 70% are satisfied with assessment and feedback. The programme-level view is the most interesting part: the rise of modularity and semesterisation, measuring students to death – whither coherence? There is a rising tide in the UK of thinking about how to constrain choice and win coherence back, rather than thinking and planning in silos. Then: introduce TESTA.
  • #7 What started as a research methodology has become a way of thinking. David Nicol – changing the discourse, the way we think about assessment and feedback; not only technical research and mapping, but also shaping our thinking. Why is that?
  • #9 Based on robust research methods about whole programmes - 40 audits; 2000 AEQ returns; 50 focus groups
  • #10 Hard data from chat and documents
  • #12 More than 50
  • #13 "How do we create texts that are vital? That are attended to? That make a difference?
  • #14 Large programme; modular approaches; marker variation, late feedback; dependency on tutors
  • #21 Every programme has much more summative assessment than formative. It is clear which leads to more learning.
  • #22 Student workloads often concentrated around two summative points per module. Sequencing, timing, bunching issues, and ticking off modules so students don’t pay attention to feedback at the end point.
  • #23 Quote 1: late feedback is when it’s too late to be of use. It needs to get back to them when it still matters. Quote 2 and 3: modular silos impede the transfer and use of feedback, and students are looking for more relationship between tasks within and across modules; Quote 4: Marker variation is rife, and creates wariness/distrust about using feedback; Quote 5: students exposed to lots of carefully scaffolded peer feedback find it invaluable.
  • #25 Golden thread from feedback to clear goals and standards to overall satisfaction
  • #26 Limitations of explicit criteria; marker variation is huge, particularly in humanities, arts and professional courses (non-science ones). Students haven’t internalised standards, which are often tacit. Marking workshops, exemplars, peer review.