Using Rubrics in Student Self-Assessment: Student Perceptions in the English as a Foreign Language Writing Context
Weiqiang Wang
To cite this article: Weiqiang Wang (2017) Using rubrics in student self-assessment: student
perceptions in the English as a foreign language writing context, Assessment & Evaluation in
Higher Education, 42:8, 1280-1292, DOI: 10.1080/02602938.2016.1261993
ABSTRACT
The instructional value of rubrics for promoting student learning and aiding teacher feedback to student performance has been extensively researched

KEYWORDS
Rubric; self-assessment; student perceptions; EFL
Introduction
Defined as ‘a coherent set of criteria for students’ work that includes descriptions of levels of performance
quality on the criteria’ (Brookhart 2013, 4), rubrics have three essential features: evaluative criteria, quality
definitions of those criteria and a scoring strategy (Popham 1997; Reddy and Andrade 2010). Rubrics
are particularly important to performance assessment such as speaking and writing (Lane and Tierney
2008; Sadler 2009), where there is no single correct or best answer, as opposed to multiple-choice
tests (Messick 1996). Rubrics can be applied as both summative (also evaluative) and formative (also
instructional) instruments (Andrade 2005; Jonsson and Svingby 2007). As an evaluative tool, rubrics
can be used to improve the efficiency of teachers’ grading of student work, helping them justify the
scores assigned to student performance (Andrade 2000). As an instructional tool, rubrics are powerful
tools for facilitating student self- and peer assessment, especially for aiding them to generate self and
peer feedback (Jonsson and Svingby 2007). The present study explores students’ perceptions of the
rubric’s role in self-assessment when it is used as an instructional tool.
Self-assessment refers to ‘the qualitative assessment of the learning process, and of its final product,
realised on the basis of pre-established criteria’ (Panadero 2011, 78). Self-assessment is a key compo-
nent of self-regulated learning (Panadero and Alonso-Tapia 2013), which is defined as ‘self-generated
thoughts, feelings, and actions that are planned and cyclically adapted to the attainment of personal
goals’ (Zimmerman 2000, 14). The regular practices of self-assessment may enhance students’ ability
to assess their own work and thereby improve their self-regulated learning skills (Panadero, Jonsson,
and Strijbos 2016).
Effective implementation of self-assessment requires that assessment criteria, which may take the form of rubrics or scripts, be shared with students before the learning process, so that students have a clear understanding of the learning goals and can plan their work accordingly (Panadero and Alonso-Tapia 2013; Panadero, Jonsson, and Strijbos 2016). Panadero and Alonso-Tapia (2013) explicated
how self-assessment impacts on the three phases of Zimmerman and Moylan’s (2009) cyclic model of
self-regulated learning: forethought, performance and self-reflection. In the forethought phase, students
can analyse the task, use assessment criteria to set realistic goals for task performance and identify
the strategies for task completion. During the performance phase, students can use the criteria to
monitor their works-in-progress. In the self-reflection phase, students can check their learning product
against the criteria. Given the importance of assessment criteria to self-assessment, and the relationship
between self-assessment and self-regulated learning, it is worth exploring what self-regulated learning
2016; Becker 2016). Babaii, Taghaddomi, and Pashmforoosh (2016), for instance, found that sharing assessment criteria with students narrowed the gap between students' and teachers' understandings of EFL speaking and improved students' self-grading accuracy. In another study, Becker (2016) showed
that involving English as a Second Language students in creating and/or applying a rubric significantly
improved their summary writing performance. There is, however, little research exploring students’
perceptions of rubric use in self-assessment in second/foreign language contexts.
Students’ learning in second/foreign language contexts differs from learning in most subject content
courses, in that the former involves more non-linear learning progressions than the latter (Turner and
Purpura 2015). It remains to be explored whether the same findings and principles about rubric use
derived from subject content courses also apply in second/foreign language contexts. Additionally,
there are few studies exploring the factors mediating the rubric’s effectiveness for promoting student
learning (Panadero and Jonsson 2013). Last but not least, more research is also needed to probe the relationship between student rubric use and self-regulation from students' own perspectives, which constitutes an important source of evidence about the validity of rubrics as instructional tools (Brown, McInerney, and Liem 2009; Brookhart and Chen 2015).
The present study explores students' perceptions of the rubric's role in their self-assessment in a Chinese
EFL writing class. Specifically, it addresses two questions:
(1) How did students perceive the rubric’s role in self-assessment, especially in relation to their
self-regulated learning of writing?
(2) What factors, if any, were perceived by the students as affecting the rubric’s effectiveness in
self-assessment in the writing class?
Methodology
Context of the study
The present study was conducted in a 32-week EFL writing course at a Chinese university. The teacher of the writing course is also the researcher of the present study, which offers a vantage point for gaining an insider's understanding of students' perceived rubric use. The writing course covers the teaching
of descriptive, narrative and expository writing. Self, peer and teacher assessment are integral parts
of the course. As expository writing is the main part of the curriculum, the present study investigates
students’ perceptions of rubric use in their self-assessment of expository writing only.
Participants
Eighty students (24 male and 56 female) from 3 intact classes at a Chinese university participated in the
study. Six of them (2 male and 4 female) were purposively chosen as the case study informants based
on their English proficiency, ability to verbalise their thinking and willingness and availability to partic-
ipate. Purposive sampling was used because it would ‘enable detailed exploration and understanding
of the central themes and puzzles which the researcher wishes to study' (Ritchie and Lewis 2003, 78). Table 1 presents the six students' profiles, with pseudonyms used for anonymity purposes.
manual to illustrate its application, which includes six exemplars drawn from previous students’ work
representing differing levels of writing proficiency. Annotations and comments were provided to illus-
trate the practice of feedback giving.
Interview questions
Individual retrospective interviews were conducted with the six case study informants after each
self-assessment practice. Unlike the self-reflective journals, which were aimed at gleaning an overall
understanding of the students’ perceptions of rubric use in self-assessment, the interviews were used
to achieve a more nuanced understanding of individual students’ experiences and opinions about
rubric use in self-assessment. In other words, interviews and reflective journals supplement each other
to shed light on the research questions.
Data collection
All the data were collected throughout the EFL writing course, which helped ensure the ecological validity of the present study. Before the formal study, the teacher/researcher organised three 45-min sessions to train students in the self-assessment of their writing. The training manual, which contained annotated student sample work and feedback, was used in the training sessions.
In the formal study, the teacher/researcher first assigned students an essay topic, then organised a
brainstorming session to help them generate ideas and asked them to write an essay on the topic within
45 min in class. Afterwards, the teacher/researcher photocopied the first drafts and gave the original
ones back to the students within the same day. The students were instructed to do rubric-referenced
self-assessment after class. In the next class, the students paired up for peer assessment sessions, during which they assessed their peers' photocopied drafts. Based on the feedback generated
from self and peer assessment, they wrote their second drafts and handed them in to the teacher/
researcher in the next class. The teacher/researcher conducted retrospective interviews with the six case
study informants within three days after they handed in their second drafts. Each interview lasted for
30–45 min. The same procedure was followed with students' self-assessment practices for six essays, and
altogether 36 interviews were conducted. To facilitate the case study students’ expression of opinions,
the interviews were all done in Chinese. Near the end of the study, all the student participants wrote a
reflective journal on their experiences and perceptions of rubric use in self-assessment. Eighty student
reflective journals were collected in total.
Data analysis
An inductive grounded theory approach (Strauss and Corbin 1998) was adopted to analyse the students' reflective journals and interview data. The data were recursively read with frequent reference to the research questions and Zimmerman and Moylan's (2009) model of self-regulated learning. The researcher and a research assistant, who holds an MA degree in Applied Linguistics, developed a coding scheme based on the recursive reading. The same coding scheme was applied to both sets of data. They then coded all the data collected using the same scheme, with inter-coder reliability calculated at 0.83. Coding disagreements were resolved through negotiation. The researcher also independently coded all the data one month after the data analysis, with intra-coder reliability calculated at 0.92. The teacher/researcher then translated the results from Chinese, the language in which the data
were collected, into English. Participant verification was also sought by sending a summary of the coded results to all the student participants, who confirmed their agreement with the results.
Table 2 presents the themes and sub-themes of the coding scheme, as well as the frequency of
student references to them.
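The article reports inter- and intra-coder reliability figures (0.83 and 0.92) without naming the statistic used. As an illustration only, the sketch below computes Cohen's kappa, one common chance-corrected agreement measure for two coders assigning categorical theme codes; the function name and example labels are hypothetical, not drawn from the study's data.

```python
# Hedged sketch: the article does not specify its reliability statistic;
# Cohen's kappa is one common choice for two coders and nominal codes.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: agreement between two coders, corrected for chance."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed proportion of items on which the two coders agree.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement if the coders assigned codes independently,
    # each according to their own marginal code frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical example: two coders labelling ten journal excerpts.
a = ["goal", "goal", "monitor", "reflect", "goal",
     "monitor", "reflect", "goal", "monitor", "reflect"]
b = ["goal", "goal", "monitor", "reflect", "goal",
     "reflect", "reflect", "goal", "monitor", "reflect"]
print(round(cohens_kappa(a, b), 2))  # one disagreement out of ten -> 0.85
```

Percent agreement alone (here 9/10 = 0.9) overstates reliability when some codes dominate; kappa discounts the agreement the coders would reach by chance.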
Results
RQ 1. How did students perceive the rubric’s role in self-assessment, especially in relation to
their self-regulated learning of writing?
Analysis of the students’ reflective journals and interviews found that using the rubric in self-assessment
guided them throughout all the three stages of self-regulated learning delineated by Zimmerman and
Moylan (2009): forethought, performance and reflection.
In other words, the rubric was regarded by the students as a roadmap clarifying the highest levels
expected in their writing performance and orientating their efforts to those levels of performance.
Additionally, score assignment in self-assessment was also valued by the students, as it was generally perceived as 'an indispensable means of quantifying writing performance and making it measurable, which largely guarantees the objectivity of self-assessment' (Student 26, reflective journal).
It can be summarised that, in the self-reflection phase, the rubric was valued by students as a means
of identifying developmental stages of EFL writing, aiding their feedback generation and quantifying
their EFL writing performance.
RQ 2. What factors, if any, were perceived by the students as affecting the rubric’s
effectiveness in self-assessment in the EFL writing class?
Analysis of the student reflective journals and interview data also identified five factors affecting the
rubric’s effectiveness in student self-assessment, which can be broadly classified into two clusters:
within-rubric (factors 1–3) and rubric-user factors (factors 4–5). The former refers to the rubric’s innate
qualities or features; the latter indicates the rubric-users’ characteristics.
One student noted in particular that the narrow score range hindered her from making fine-grained judgments about her writing performance:
Although I felt that I made progress in my writing, I found that it is still not good enough to reach ‘4’, the highest
level of performance. But it is more about a level between ‘3’ and ‘4’, so I have an impulse to give it a ‘3.5’. However,
it is a pity that there is not such an option in the rubric. As a result, I am wondering whether it is feasible to enlarge
the scale to ‘7’ or ‘9’ point? (Mary, interview data)
In summary, the rubric's narrow score range, as viewed by the students, was another factor affecting their diagnosis and placement of their EFL writing performance.
Firstly, the students were found to hold a set of idiosyncratic but implicit criteria derived from their
previous writing experience. When their own criteria went beyond the categories covered by the rubric,
they were more likely to refer to their own criteria in self-assessment. This is particularly the case with
the assessment of content, the relatively ‘subjective’ dimension of writing:
When I reviewed the content of this essay, I also referred to my high school experience of writing in Chinese, which
suggested that my viewpoints should not only be distinctive but also be abstract. But since the rubric does not
include such a category, I added my own standard when doing self-assessment. (Kate, interview data)
Students’ English proficiency and knowledge about assigned essay topics were also found to interfere
with their rubric use in self-assessment of EFL writing:
Though the rubric explicitly tells me that there should be a thorough development of my thesis statement, I just
could not do it because I do not have enough knowledge about the topic. This is also true when it comes to vocab-
ulary and syntactic diversity. (Student 72, reflective journal)
Another student noted that she would circumvent or skip some of the rubric’s criteria if she was not
confident about her abilities in those regards:
Since I did not have a good command of the use of tense, I would use very simple tenses, like the present, the past,
etc. when writing. Similarly, when doing self-assessment, I would also skip the categories about tense in the rubric.
That is because without a good understanding of the fine distinctions between tense uses, I just do not know how
to improve them. (Student 48, reflective journal)
Furthermore, a student noted that instead of using the rubric, he would rather have referred to a
specific template of writing:
Given my low English proficiency, I would rather have a template which demonstrates all the formulaic sentence
patterns and often-used words. This is because I can directly use those patterns and words in my own writing. The
rubric is like a set of abstract guidelines which I could hardly follow. (Student 47, reflective journal)
The examples show that the students’ original criteria for writing, English proficiency and knowledge
about assigned essay topics, all of which constitute their domain knowledge about writing, played a
salient role in mediating the rubric’s effectiveness in their self-assessment.
However, when the students used the rubric for an overly long period, they raised at least two concerns about it: (1) the extent to which the rubric is applicable to the assessment of other genres of EFL writing; and (2) the risk that overreliance on the rubric may foster an instrumental attitude towards it. The first is illustrated by this case:
At later stages of rubric use, I began to doubt its scope of application. It seemed to be more like one for formulaic
patterns of writing, which may not be closely relevant to what we are supposed to write in our work or life. So, I
am wondering to what extent the rubric is suitable for writing of other purposes. (Student 78, reflective journal)
Regarding the concern of being overly reliant on the rubric, a student said that:
An overreliance on the rubric may diminish the possibilities of writing in other ways and result in our conforming to
pre-set rules for instrumental purposes, like the achievement of high scores, rather than the expressive description
of our experiences, feelings, and opinions in an engaging manner. (Student 66, reflective journal)
The findings revealed the length of intervention as an important aspect of student rubric use. On the one hand, students should be given enough opportunities to use the rubric to grasp its criteria. On the other hand, if the same rubric was used for too long, the students were likely to doubt its scope of application, or even develop an instrumental attitude towards it.
writing derived from their previous writing experience and might therefore assess their writing on that
basis. Additionally, while the students realised their present levels of performance (Question 2) using the
rubric, their limited English proficiency and insufficient knowledge about assigned essay topics might
inhibit them from improving their writing performance (Question 3). In this sense, the rubric can be
regarded as doing well in answering Questions 1 and 2 while lacking instructional power regarding Question 3. Such a result demonstrates the secondary position of assessment (rubrics included) to learning. The
learning potential of assessment seems to be only tapped after students have acquired sufficient domain
knowledge. Without a sufficient knowledge base, assessment has but limited instructional power.
Length of intervention
Lastly, echoing previous research on rubric use (e.g. Andrade, Du, and Mycek 2010), the study revealed
that sufficient length of intervention was a necessary condition for the rubric’s effectiveness in student
self-assessment. Students needed to practice using the rubric two to three times to get familiar with its
requirements. However, negative effects might also ensue from using the same task-general rubric for too long. In that case, after students have used a task-general rubric two or three times, it is advisable for teachers to give them task-specific rubrics tailored to specific assessment tasks, which are oriented to more specific areas of their learning and improvement. In other words, a balanced use of task-general and task-specific rubrics is preferable for engaging students in self-assessment.
Conclusion
This study contributes to previous research on rubric use by conducting a contextual analysis of students’
perceptions of rubric use in a Confucian-heritage EFL learning context, an under-researched setting
in formative assessment research. It also provides empirical support for the relationship between stu-
dent rubric use and self-regulation, as well as Panadero and Jonsson’s (2013) theoretical model on the
factors moderating rubrics’ effectiveness. The model delineates the ways (aiding the feedback process,
improving student self-efficacy, etc.) and factors (educational level and length of intervention, gender,
etc.) that moderate rubrics’ learning effects. The present study substantiates the model by presenting
the self-regulated learning processes activated by rubric use and the factors affecting rubrics’ efficacy
for improving students’ writing performance, with pedagogical implications drawn for rubric use in
student self-assessment.
As noted by Brookhart and Chen (2015), it may not be appropriate to make direct claims about rubrics per se; rather, a more contextual and balanced view of rubrics should be adopted. More studies, either
quasi-experimental or naturalistic ones, are needed on how and to what extent students’ perspectives on
rubric use may be bound up with diverse educational contexts, the Confucian-heritage cultural context
included. Rubric-user characteristics, as demonstrated by the present study, have also been identified
as salient factors affecting rubrics’ effectiveness. Moreover, instructionally valuable as teacher-tailored
rubrics are, an overreliance on pre-set rubrics may lead students to adopt an instrumental approach to
learning, which risks diminishing the diversity of students’ responses in performance assessment and
inhibiting their development of learning and autonomy.
The present study, however, was based on data collected from only 80 students, which limits its generalisability to other contexts. Moreover, the teacher/researcher's identity may have inhibited the students from fully expressing their opinions about rubric use. The present study also has a confined focus on students' rubric use in self-assessment only, though the rubric was used in peer
assessment as well. It would be interesting to investigate students’ rubric use in peer assessment and
how it may relate to student self-assessment and self-regulated learning. Innovative ways of rubric
use are also needed to further tap rubrics’ instructional potential, sustain student engagement with
self-assessment and foster their development of self-regulated learning.
Acknowledgement
The author wants to thank Professor Liying Cheng in the Faculty of Education at Queen’s University, Canada, for her
support in writing this paper. He is also grateful to the two anonymous reviewers for their insightful comments on an
earlier draft.
Disclosure statement
No potential conflict of interest was reported by the author.
Funding
This work was supported by the Key Research Projects of Philosophy and Social Science of Ministry of Education of
China [grant number 15JZD048]; the Innovative School Project in Higher Education of Guangdong, China [grant number
GWTP-BS-2015-03]; the Guangdong Planning Office of Philosophy and Social Science, China [grant number GD14XWW21];
and Department of Education of Guangdong, China [grant number 103-GK131017].
Notes on contributor
Weiqiang Wang is an associate professor in the School of English for International Business at Guangdong University of
Foreign Studies. His research interests focus on assessment task design, student self-assessment, peer assessment and
self-regulated learning. In 2014, Weiqiang received the Solidarity Award at the 17th World Congress of Applied Linguistics
in Brisbane.
References
Andrade, H. G. 2000. “Using Rubrics to Promote Thinking and Learning.” Educational Leadership 57 (5): 13–19.
Andrade, H. G. 2005. “Teaching with Rubrics: The Good, the Bad, and the Ugly.” College Teaching 53 (1): 27–31.
Andrade, H. G., and B. A. Boulay. 2003. “Role of Rubric-referenced Self-Assessment in Learning to Write.” The Journal of
Educational Research 97 (1): 21–30. doi:10.1080/00220670309596625.
Andrade, H., and Y. Du. 2005. “Student Perspectives on Rubric-referenced Assessment.” Practical Assessment Research &
Evaluation 10 (3): 159–181.
Andrade, H. L., Y. Du, and K. Mycek. 2010. “Rubric-referenced Self-assessment and Middle School Students’ Writing.”
Assessment in Education: Principles, Policy & Practice 17 (2): 199–214. doi:10.1080/09695941003696172.
Andrade, H. L., Y. Du, and X. Wang. 2008. “Putting Rubrics to the Test: The Effect of a Model, Criteria Generation, and Rubric-
referenced Self-assessment on Elementary Schools Students’ Writing.” Educational Measurement: Issues and Practice 27
(2): 3–13. doi:10.1111/j.1745-3992.2008.00118.x.
Andrade, H. L., X. Wang, Y. Du, and R. L. Akawi. 2009. “Rubric-referenced Self-assessment and Self-efficacy for Writing.” The
Journal of Educational Research 102 (4): 287–302. doi:10.3200/JOER.102.4.287-302.
Babaii, E., S. Taghaddomi, and R. Pashmforoosh. 2016. “Speaking Self-assessment: Mismatches between Learners’ and
Teachers’ Criteria.” Language Testing 33 (3): 411–437. doi:10.1177/0265532215590847.
Becker, A. 2016. “Student-generated Scoring Rubrics: Examining their Formative Value for Improving ESL Students’ Writing
Performance.” Assessing Writing 29: 15–24. doi:10.1016/j.asw.2016.05.002.
Brookhart, S. M. 2013. How to Create and Use Rubrics for Formative Assessment and Grading. Alexandria, VA: ASCD.
Brookhart, S. M., and F. Chen. 2015. “The Quality and Effectiveness of Descriptive Rubrics.” Educational Review 67 (3): 343–368.
doi:10.1080/00131911.2014.929565.
Brown, G. T. L., D. M. McInerney, and G. A. D. Liem. 2009. “Student Perspectives of Assessment: Considering What Assessment
Means to Learners.” In Student Perspectives on Assessment: What Students Can Tell Us about Assessment for Learning, edited
by D. M. McInerney, G. T. L. Brown, and G. A. D. Liem, 1–21. Charlotte, NC: Information Age Publishing.
Carless, D. 2011. From Testing to Productive Student Learning: Implementing Formative Assessment in Confucian-heritage
Settings. London: Routledge.
Coe, M., M. Hanita, V. Nishioka, R. Smiley, and O. Park. 2011. An Investigation of the Impact of the 6 + 1 Trait Writing Model on Grade 5 Student Writing Achievement (NCEE 2012-4010). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education.
Dawson, P. 2015. “Assessment Rubrics: Towards Clearer and More Replicable Design, Research and Practice.” Assessment &
Evaluation in Higher Education: 1–14. doi:10.1080/02602938.2015.1111294.
Hattie, J., and H. Timperley. 2007. “The Power of Feedback.” Review of Educational Research 77 (1): 81–112.
Jacobs, H., S. Zinkgraf, D. Wormuth, V. Hartfiel, and J. Hughey. 1981. Testing ESL Composition: A Practical Approach. Rowley,
MA: Newbury House.
Janssen, G., V. Meier, and J. Trace. 2015. “Building a Better Rubric: Mixed Methods Rubric Revision.” Assessing Writing 26:
51–66. doi:10.1016/j.asw.2015.07.002.
Jonsson, A., and G. Svingby. 2007. “The Use of Scoring Rubrics: Reliability, Validity and Educational Consequences.”
Educational Research Review 2 (2): 130–144. doi:10.1016/j.edurev.2007.05.002.
Lane, S., and S. T. Tierney. 2008. "Performance Assessment." In 21st Century Education: A Reference Handbook, edited by T. L. Good, Vol. 1, 461–470. Los Angeles, CA: Sage.
Li, J., and P. Lindsey. 2015. “Understanding Variations between Student and Teacher Application of Rubrics.” Assessing Writing
26: 67–79. doi:10.1016/j.asw.2015.07.003.
Messick, S. 1996. "Validity of Performance Assessments." In Technical Issues in Large-scale Performance Assessment, edited by G. Phillips, 1–18. Washington, DC: National Center for Education Statistics.
Panadero, E. 2011. “Instructional Help for Self-assessment and Self-regulation: Evaluation of the Efficacy of Self-assessment
Scripts vs. Rubrics.” Unpublished doctoral dissertation, Universidad Autónoma de Madrid, Madrid, Spain.
Panadero, E., and J. Alonso-Tapia. 2013. "Self-assessment: Theoretical and Practical Connotations. When It Happens, How Is It Acquired and What to Do to Develop It in Our Students." Electronic Journal of Research in Educational Psychology 11 (2): 551–576. doi:10.14204/ejrep.30.12200.
Panadero, E., and A. Jonsson. 2013. “The Use of Scoring Rubrics for Formative Assessment Purposes Revisited: A Review.”
Educational Research Review 9: 129–144. doi:10.1016/j.edurev.2013.01.002.
Panadero, E., A. Jonsson, and J. Strijbos. 2016. “Scaffolding Self-regulated Learning through Self-assessment and
Peer Assessment: Guidelines for Classroom Implementation.” In Assessment for Learning: Meeting the Challenge of
Implementation, edited by D. Laveault and L. Allal, 311–326. Boston, MA: Springer.
Panadero, E., and M. Romero. 2014. “To Rubric or Not to Rubric? The Effects of Self-assessment on Self-regulation,
Performance and Self-efficacy.” Assessment in Education: Principles, Policy & Practice 21 (2): 133–148. doi:10.1080/096
9594X.2013.877872.
Panadero, E., J. A. Tapia, and J. A. Huertas. 2012. “Rubrics and Self-assessment Scripts Effects on Self-regulation, Learning and
Self-efficacy in Secondary Education.” Learning and Individual Differences 22 (6): 806–813. doi:10.1016/j.lindif.2012.04.007.
Popham, W. J. 1997. “What’s Wrong-and What’s Right-with Rubrics.” Educational Leadership 55: 72–75.
Reddy, Y. M., and H. Andrade. 2010. “A Review of Rubric Use in Higher Education.” Assessment & Evaluation in Higher Education
35 (4): 435–448. doi:10.1080/02602930902862859.
Reynolds-Keefer, L. 2010. “Rubric-Referenced Assessment in Teacher Preparation: An Opportunity to Learn by Using.” Practical
Assessment, Research & Evaluation 15 (8): 1–9.
Ritchie, J., and J. Lewis. 2003. Qualitative Research Practice: A Guide for Social Science Students and Researchers. London: Sage.
Sadler, D. R. 2009. “Indeterminacy in the Use of Preset Criteria for Assessment and Grading.” Assessment & Evaluation in
Higher Education 34 (2): 159–179. doi:10.1080/02602930801956059.
Sadler, D. R. 2010. “Beyond Feedback: Developing Student Capability in Complex Appraisal.” Assessment & Evaluation in
Higher Education 35 (5): 535–550. doi:10.1080/02602930903541015.
Strauss, A., and J. Corbin. 1998. Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory.
Thousand Oaks, CA: Sage.
Sundeen, T. H. 2014. “Instructional Rubrics: Effects of Presentation Options on Writing Quality.” Assessing Writing 21: 74–88.
doi:10.1016/j.asw.2014.03.003.
Torrance, H. 2007. “Assessment as Learning? How the Use of Explicit Learning Objectives, Assessment Criteria and Feedback
in Post-secondary Education and Training Can Come to Dominate Learning.” Assessment in Education: Principles, Policy
& Practice 14 (3): 281–294. doi:10.1080/09695940701591867.
Turner, C. E., and J. E. Purpura. 2015. “Learning-oriented Assessment in Second and Foreign Language Classrooms.” In
Handbook of Second Language Assessment, edited by D. Tsagari and J. Banerjee, 255–272. Boston, MA: De Gruyter
Mouton.
Wollenschläger, M., J. Hattie, N. Machts, J. Möller, and U. Harms. 2016. “What Makes Rubrics Effective in Teacher-feedback?
Transparency of Learning Goals is Not Enough.” Contemporary Educational Psychology 44–45: 1–11. doi:10.1016/j.
cedpsych.2015.11.003.
Zimmerman, B. J. 2000. “Attaining Self-regulation: A Social Cognitive Perspective.” In Handbook of Self-regulation, edited
by M. Boekaerts, P. R. Pintrich, and M. Zeidner, 13–39. San Diego, CA: Academic Press.
Zimmerman, B. J., and A. R. Moylan. 2009. “Self-regulation: When Metacognition and Motivation Intersect.” In
Handbook of Metacognition in Education, edited by D. J. Hacker, J. Dunlosky, and A. C. Graesser, 299–315.
New York, NY: Routledge.