Assessment & Evaluation in Higher Education

ISSN: 0260-2938 (Print) 1469-297X (Online) Journal homepage: http://www.tandfonline.com/loi/caeh20

Using rubrics in student self-assessment: student perceptions in the English as a foreign language writing context

Weiqiang Wang

To cite this article: Weiqiang Wang (2017) Using rubrics in student self-assessment: student
perceptions in the English as a foreign language writing context, Assessment & Evaluation in
Higher Education, 42:8, 1280-1292, DOI: 10.1080/02602938.2016.1261993

To link to this article: http://dx.doi.org/10.1080/02602938.2016.1261993

Published online: 28 Nov 2016.




Using rubrics in student self-assessment: student perceptions in the English as a foreign language writing context

Weiqiang Wang

School of English for International Business, Guangdong University of Foreign Studies, Guangzhou, China

ABSTRACT
The instructional value of rubrics for promoting student learning and aiding teacher feedback on student performance has been extensively researched in the educational literature. There is, nonetheless, a dearth of studies on students’ rubric use in second/foreign language contexts, and fewer studies have investigated the factors affecting rubrics’ effectiveness for promoting student learning. The paper reports a classroom-based inquiry into students’ perceptions of rubric use in self-assessment in an English as a Foreign Language (EFL) context and the factors moderating its effectiveness. Eighty students at a Chinese university participated in the study. The data collected included their reflective journals and six case study informants’ retrospective interviews. Results showed that the rubric was perceived as useful for fostering the students’ self-regulation by guiding them through the stages of goal-setting, planning, self-monitoring and self-reflection. Both within-rubric and rubric-user factors were identified as affecting the rubric’s effectiveness in student self-assessment. The findings are discussed with reference to the design features of rubrics. Implications are drawn for formative rubric use in student self-assessment.

KEYWORDS: Rubric; self-assessment; student perceptions; EFL writing

Introduction
Defined as ‘a coherent set of criteria for students’ work that includes descriptions of levels of performance
quality on the criteria’ (Brookhart 2013, 4), rubrics have three essential features: evaluative criteria, quality
definitions of those criteria and a scoring strategy (Popham 1997; Reddy and Andrade 2010). Rubrics
are particularly important to performance assessment such as speaking and writing (Lane and Tierney
2008; Sadler 2009), where there is no single correct or best answer, as opposed to multiple-choice
tests (Messick 1996). Rubrics can be applied as both summative (also evaluative) and formative (also
instructional) instruments (Andrade 2005; Jonsson and Svingby 2007). As an evaluative tool, rubrics
can be used to improve the efficiency of teachers’ grading of student work, helping them justify the
scores assigned to student performance (Andrade 2000). As an instructional tool, rubrics are powerful
tools for facilitating student self- and peer assessment, especially for aiding them to generate self and
peer feedback (Jonsson and Svingby 2007). The present study explores students’ perceptions of the
rubric’s role in self-assessment when it is used as an instructional tool.
Self-assessment refers to ‘the qualitative assessment of the learning process, and of its final product, realised on the basis of pre-established criteria’ (Panadero 2011, 78). Self-assessment is a key component of self-regulated learning (Panadero and Alonso-Tapia 2013), which is defined as ‘self-generated thoughts, feelings, and actions that are planned and cyclically adapted to the attainment of personal goals’ (Zimmerman 2000, 14). The regular practice of self-assessment may enhance students’ ability to assess their own work and thereby improve their self-regulated learning skills (Panadero, Jonsson, and Strijbos 2016).

CONTACT: Weiqiang Wang, [email protected], [email protected]
© 2016 Informa UK Limited, trading as Taylor & Francis Group
Effective implementation of self-assessment requires that assessment criteria, which may take the
form of rubrics or scripts, be shared with students before the learning processes, so that students have
clear understandings of the learning goals and can plan their work correspondingly (Panadero and
Alonso-Tapia 2013; Panadero, Jonsson, and Strijbos 2016). Panadero and Alonso-Tapia (2013) explicated
how self-assessment impacts on the three phases of Zimmerman and Moylan’s (2009) cyclic model of
self-regulated learning: forethought, performance and self-reflection. In the forethought phase, students
can analyse the task, use assessment criteria to set realistic goals for task performance and identify
the strategies for task completion. During the performance phase, students can use the criteria to
monitor their works-in-progress. In the self-reflection phase, students can check their learning product
against the criteria. Given the importance of assessment criteria to self-assessment, and the relationship
between self-assessment and self-regulated learning, it is worth exploring what self-regulated learning
processes may be activated by students’ rubric use in self-assessment.

Studies on formative rubric use


The past two decades witnessed a burgeoning interest in formative rubric use in general education. At
least four papers have been published synthesising research on rubric use in education, which cover a
wide scope of disciplines and educational levels (see Jonsson and Svingby 2007; Reddy and Andrade
2010; Panadero and Jonsson 2013; Brookhart and Chen 2015). The present study focuses only on the
studies on formative rubric use, which can be broadly classified into three strands: (1) rubrics’ effects
on students’ learning or performance; (2) rubrics’ effects on student self-regulated learning, motivation
and self-efficacy; (3) student experiences and perceptions of rubric use.
The first strand of studies mainly employed quasi-experimental designs to investigate the effects
of rubric use on student learning or performance (e.g. Andrade and Boulay 2003; Andrade, Du, and
Wang 2008; Andrade, Du, and Mycek 2010; Coe et al. 2011). Andrade, Du, and Mycek (2010), for instance,
found that using model-generated criteria in self-assessment improved third and fourth grade students’
writing scores. Sundeen (2014) showed that both explicit instructions in teaching students about a
rubric’s elements and simply giving them the rubric had the same effects on their writing performance.
The second strand of studies expanded the research foci to the effects of rubric use on student
self-regulation, motivation, self-efficacy and self-grading accuracy (e.g. Andrade et al. 2009; Panadero,
Tapia, and Huertas 2012; Panadero and Romero 2014; Wollenschläger et al. 2016). Panadero and Romero
(2014), for example, found that, although rubric use promoted pre-service teachers’ use of learning
strategies, performance and self-grading accuracy, it also led to more task stress and performance avoid-
ance self-regulation. Wollenschläger et al. (2016) compared the effects of three types of teacher rubric
feedback on students’ work and found that, compared with transparency and individual performance
information, individual performance improvement information in teacher rubric feedback resulted in
more improvements in student performance, motivation, self-regulation and self-grading accuracy.
The third strand of research explored students’ perceptions of rubric use in assessment (e.g. Andrade
and Du 2005; Reynolds-Keefer 2010; Sundeen 2014; Li and Lindsey 2015). For instance, Reynolds-Keefer
(2010) reported that the pre-service students in two educational psychology courses regarded the rubric
as useful for guiding their task completion and reflection. Li and Lindsey (2015) compared teachers’
and students’ perceptions of a holistic rubric used in a first-year university writing course and revealed
that, although both parties found the rubric useful for end-of-course assessment, they held different
understandings of the language used in the rubric.
These studies were mostly about rubric use in first language writing or subject content courses. It
was not until recently that the instructional value of rubrics caught the attention of second/foreign
language researchers (e.g. Sundeen 2014; Li and Lindsey 2015; Babaii, Taghaddomi, and Pashmforoosh
2016; Becker 2016). Babaii, Taghaddomi, and Pashmforoosh (2016), for instance, found that sharing
assessment criteria with students narrowed the gap between students and teachers’ understandings
of EFL speaking and improved student self-grading accuracy. In another study, Becker (2016) showed
that involving English as a Second Language students in creating and/or applying a rubric significantly
improved their summary writing performance. There is, however, little research exploring students’
perceptions of rubric use in self-assessment in second/foreign language contexts.
Students’ learning in second/foreign language contexts differs from learning in most subject content
courses, in that the former involves more non-linear learning progressions than the latter (Turner and
Purpura 2015). It remains to be explored whether the same findings and principles about rubric use
derived from subject content courses also apply in second/foreign language contexts. Additionally,
there are few studies exploring the factors mediating the rubric’s effectiveness for promoting student
learning (Panadero and Jonsson 2013). Last but not least, more research is also needed to probe
the relationship between student rubric use and self-regulation from their own perspectives, which
constitutes an important source of evidence about the validity of rubrics as instructional tools (Brown,
McInerney, and Liem 2009; Brookhart and Chen 2015).

The present study explores students’ perceptions of the rubric’s role in their self-assessment in a Chinese
EFL writing class. Specifically, it addresses two questions:

(1)  How did students perceive the rubric’s role in self-assessment, especially in relation to their
self-regulated learning of writing?
(2)  What factors, if any, were perceived by the students as affecting the rubric’s effectiveness in
self-assessment in the writing class?

Methodology
Context of the study
The present study was conducted in a 32-week EFL writing course at a Chinese university. The teacher of the writing course was also the researcher of the present study, which offered a vantage point for gaining an insider’s understanding of students’ perceived rubric use. The writing course covers the teaching
of descriptive, narrative and expository writing. Self, peer and teacher assessment are integral parts
of the course. As expository writing is the main part of the curriculum, the present study investigates
students’ perceptions of rubric use in their self-assessment of expository writing only.

Participants
Eighty students (24 male and 56 female) from 3 intact classes at a Chinese university participated in the
study. Six of them (2 male and 4 female) were purposively chosen as the case study informants based
on their English proficiency, ability to verbalise their thinking and willingness and availability to partic-
ipate. Purposive sampling was used because it would ‘enable detailed exploration and understanding
of the central themes and puzzles which the researcher wishes to study’ (Ritchie and Lewis 2003, 78).
Table 1 presents the six students’ profile, with pseudonyms used for anonymity purposes.

Table 1. Case study students’ profile.


Student name Gender Age English proficiency
Kelvin Male 20 Intermediate
Cathy Female 21 Intermediate
Eason Male 20 Intermediate
Mary Female 20 High
Jane Female 20 Intermediate
Kate Female 20 Intermediate

Materials and instruments


Please contact the author for copies of the instruments used.

A teacher-tailored rubric and its training manual


All the teachers of the same course collaboratively adapted the rubric based on the ESL Composition
Profile developed by Jacobs et al. (1981). The rubric, which was used in self, peer and teacher assessment,
was shared with the students in the first class of the EFL writing course.
As the most widely used rubric for English as a second/foreign language writing (Janssen, Meier, and Trace 2015), the profile has five categories of evaluative criteria that closely match the writing course’s curriculum goals. The teachers revised the rubric in two aspects to better align it with the curriculum
goals. Firstly, the Content and Organisation categories were elucidated to denote the importance of
using concrete examples to support topic sentences and develop the thesis statement. Secondly, to
help students realise the equally important status of all five aspects of writing, the same ‘4-point’ score
range was assigned to all five categories of writing.
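The adapted scoring scheme described above (five equally weighted categories, each on the same ‘4-point’ range) can be sketched as a simple data structure. This is an illustrative sketch only, not the study’s actual instrument: the category names are taken from Jacobs et al.’s (1981) ESL Composition Profile, and the 1–4 scale per category is an assumption based on the description above.

```python
# Illustrative sketch of the adapted rubric: five equally weighted
# categories (names from Jacobs et al.'s ESL Composition Profile),
# each self-assessed on an assumed uniform 1-4 range.

CATEGORIES = ("Content", "Organization", "Vocabulary", "Language Use", "Mechanics")
SCORE_RANGE = range(1, 5)  # the uniform '4-point' range per category

def total_score(ratings: dict) -> int:
    """Sum self-assessment ratings, validating category names and score range."""
    if set(ratings) != set(CATEGORIES):
        raise ValueError("ratings must cover exactly the five rubric categories")
    for category, score in ratings.items():
        if score not in SCORE_RANGE:
            raise ValueError(f"{category}: score {score} outside the 1-4 range")
    return sum(ratings.values())

# A hypothetical student's self-assessment of one essay
self_assessment = {"Content": 3, "Organization": 4, "Vocabulary": 3,
                   "Language Use": 2, "Mechanics": 4}
print(total_score(self_assessment))  # 16 out of a maximum of 20
```

Under this scheme the maximum total of 20 gives every category the same weight, in contrast to the original profile’s unequal weighting of the five aspects.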
To help students grasp the rubric, the teachers also collaboratively designed a Chinese training
manual to illustrate its application, which includes six exemplars drawn from previous students’ work
representing differing levels of writing proficiency. Annotations and comments were provided to illus-
trate the practice of feedback giving.

Student reflective journals


All students wrote an electronic journal on their perceived rubric use in self-assessment at the end of the
course. They were advised to write the journals in Chinese, their mother tongue, to better verbalise their
thinking. The journals were not collected until all the students had received their final marks for the course, in order to allay their worries about voicing their opinions about rubric use over the course.

Interview questions
Individual retrospective interviews were conducted with the six case study informants after each
self-assessment practice. Unlike the self-reflective journals, which were aimed at gleaning an overall
understanding of the students’ perceptions of rubric use in self-assessment, the interviews were used
to achieve a more nuanced understanding of individual students’ experiences and opinions about
rubric use in self-assessment. In other words, interviews and reflective journals supplement each other
to shed light on the research questions.

Data collection
All the data were collected throughout the EFL writing course, which ensures the ecological validity
of the present study. Before the formal study, the teacher/researcher organised three 45-min sessions
to train students in the self-assessment of their writing. The training manual, which contained annotated
student sample work and feedback, was used in the training sessions.
In the formal study, the teacher/researcher firstly assigned students an essay topic, then organised a
brainstorming session to help them generate ideas and asked them to write an essay on the topic within
45 min in class. Afterwards, the teacher/researcher photocopied the first drafts and gave the original
ones back to the students within the same day. The students were instructed to do rubric-referenced
self-assessment after class. In the next class, the students paired up for peer assessment sessions, during which they assessed their peers’ photocopied drafts. Based on the feedback generated
from self and peer assessment, they wrote their second drafts and handed them in to the teacher/
researcher in the next class. The teacher/researcher conducted retrospective interviews with the six case
study informants within three days after they handed in their second drafts. Each interview lasted for
30–45 min. The same procedure was followed with students’ self-assessment practices for 6 essays, and
altogether 36 interviews were conducted. To facilitate the case study students’ expression of opinions,
the interviews were all done in Chinese. Near the end of the study, all the student participants wrote a
reflective journal on their experiences and perceptions of rubric use in self-assessment. Eighty student
reflective journals were collected in total.

Data analysis
An inductive approach of grounded theory (Strauss and Corbin 1998) was adopted to analyse the
students’ reflective journals and interview data. The data were recursively read with frequent reference
to the research questions and Zimmerman and Moylan’s (2009) model of self-regulated learning. The
researcher and a research assistant, who has an MA degree in Applied Linguistics, developed a coding
scheme based on the recursive reading. The same coding scheme was applied to both sets of data.
They then coded all the data collected using the same scheme, with inter-coder reliability calculated at
0.83. Coding disagreements were resolved through negotiation. The researcher also independently
coded all the data collected one month after the data analysis, with intra-coder reliability calculated at
0.92. The teacher/researcher then translated the results from Chinese, the language in which the data
were collected, into English. Participant verification was also sought by sending a summary of the coded
results to all the student participants, who confirmed their agreement with the results.
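The reported inter-coder reliability of 0.83 (and intra-coder reliability of 0.92) is not accompanied by the name of the statistic used. As an illustration only, with hypothetical code lists and coder names, simple percent agreement and Cohen’s kappa between two coders could be computed along these lines:

```python
from collections import Counter

def percent_agreement(coder_a, coder_b):
    """Proportion of coded units on which the two coders agree."""
    assert len(coder_a) == len(coder_b), "coders must code the same units"
    return sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders (Cohen's kappa)."""
    n = len(coder_a)
    p_o = percent_agreement(coder_a, coder_b)
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    # Expected agreement if both coders assigned codes independently
    p_e = sum(freq_a[code] * freq_b[code] for code in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes assigned by two coders to six data excerpts
researcher = ["goal-setting", "goal-setting", "self-monitoring",
              "self-feedback", "self-feedback", "self-grading"]
assistant  = ["goal-setting", "planning", "self-monitoring",
              "self-feedback", "self-feedback", "self-grading"]
print(round(percent_agreement(researcher, assistant), 2))  # 0.83
```

Percent agreement overstates reliability when some codes dominate, which is why a chance-corrected index such as kappa is often reported alongside it.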
Table 2 presents the themes and sub-themes of the coding scheme, as well as the frequency of
student references to them.

Results
RQ 1. How did students perceive the rubric’s role in self-assessment, especially in relation to
their self-regulated learning of writing?
Analysis of the students’ reflective journals and interviews found that using the rubric in self-assessment
guided them throughout all the three stages of self-regulated learning delineated by Zimmerman and
Moylan (2009): forethought, performance and reflection.

Table 2. Themes and codes of student reflective and interview data.


Themes and codes Frequenciesa
Theme 1: Rubric’s role at the forethought stage 23
 Code 1: Goal-setting 16
 Code 2: Planning an approach to the writing tasks 10
Theme 2: Rubric’s role at the performance stage 15
 Code 3: Cultivating self-monitoring habits/abilities 15
Theme 3: Rubric’s role at the self-reflection stage 54
 Code 4: Aiding self-feedback generation 46
 Code 5: Improving the objectivity of self-grading 14
Theme 4: Within-rubric factors affecting its effectiveness 18
 Code 6: Coverage of categories 8
 Code 7: Structure 15
Theme 5: Descriptors of performance quality 6
 Code 8: Wording of descriptors 6
Theme 6: Score range 9
 Code 9: Narrow score scope 9
Theme 7: Domain knowledge about writing 21
 Code 10: Students’ original criteria 16
 Code 11: Students’ English proficiency 7
 Code 12: Students’ knowledge about essay topics 5
Theme 8: Length of intervention 10
 Code 13: Sufficient practices of rubric use 6
 Code 14: Scope of applicability 2
 Code 15: Instrumental attitudes 3
a Frequencies include the count of references from student reflective journals and case study informants’ retrospective interview data. Theme frequencies may not equal the sum of code frequencies because a reference to a theme usually contains several sub-references to the codes under it.

Forethought stage: guiding students to set goals


The students perceived the rubric as an explicit and comprehensive guide at the forethought stage, which made them well aware of what they were expected to achieve regarding both specific EFL writing tasks and long-term EFL writing development, so that they could set individual goals for EFL writing and plan their approach to the writing tasks. For example, a student noted that:
The highest level of performance described in the rubric is the goal that I am supposed to reach. In that sense, the
rubric and the exemplars used to illustrate it serve as a clear guide about what a good piece of writing looks like.
After having used it once or twice in self-assessment, I would bear the criteria and their descriptions in mind before
writing and try to realize those goals when I write. (Student 17, reflective journal)
The rubric’s facilitative role in guiding students’ goal-setting was not only manifest in the forethought stage but also demonstrated by the students’ efforts in other learning activities. For
example, another student noted that:
When I read other English materials, I would try to work out the ways the rubric’s criteria are realized in those mate-
rials and how I can draw on them to improve my own writing. This is particularly true for content development,
vocabulary use and sentence structures. (Student 26, reflective journal)

In other words, the rubric was regarded by the students as a roadmap clarifying the highest levels
expected in their writing performance and orientating their efforts to those levels of performance.

Performance stage: cultivating students’ self-monitoring habits


At the performance stage, the rubric was reported to help foster students’ development of self-monitoring habits and/or abilities, as illustrated by students’ use of the rubric to track the process of writing.
For instance:
Before I knew the rubric, my writing always drifted to wherever my stream of thought flowed. I did not have the
habit of checking what I wrote during writing. However, after having become familiar with the rubric, I would
consciously guard my thought and practices against its requirements when I write. More specifically, I would see
whether I have written an explicit thesis statement in the introductory paragraph, whether there is a topic sentence
at the beginning of each body paragraph, and whether the topic sentences have echoed the thesis statement,
etc. (Mary, interview data)
Likewise, students also mentioned how the use of the rubric helped them avoid mistakes of language use at the performance stage:
The rubric functions like an alarm when I write. Because of it, I would be especially careful of any mistake of language
use and mechanics, such as tense, number and spelling. As a result, there are fewer mistakes in those aspects in
my drafts. (Student 39, reflective journal)
It can be seen that the rubric was perceived as helpful for cultivating students’ self-monitoring habits and improving their alertness to problems with their EFL writing at the performance stage.

Self-reflection stage: aiding students’ self-feedback and self-grading


At the self-reflection stage, the rubric was perceived by the students as guiding them to diagnose the
strengths and weaknesses of writing as well as aiding their generation of self-feedback. For instance,
a student held that:
The rubric enables me to review my writing in a comprehensive manner. Without it, I would have only focused on
the mistakes with the minor aspects of my writing, like vocabulary use, grammar and mechanics, paying no heed
to such global aspects as content development, organization, etc. (Student 68, reflective journal)
Moreover, a student said that ‘the gradations of quality performance in the rubric enable me to
fine-tune my feedback on a specific dimension of writing, like organisation or vocabulary use’. (Mary,
interview data)
It is also interesting to note that the students widely regarded the rubric’s scoring section as valuable for providing diagnostic information and enhancing the objectivity of their self-ratings. For instance:
The scores assigned by myself to my writing are an important source of information about the placement of my
writing, since they are important reference points for me to compare the quality of my essays written at different
time points. (Student 19, reflective journal)

Additionally, score assignment in self-assessment is also valued by the students in that it is generally
perceived as ‘an indispensable means of quantifying writing performance and making it measurable,
which largely guarantees the objectivity of self-assessment’ (Student 26, reflective journal).
It can be summarised that, in the self-reflection phase, the rubric was valued by students as a means
of identifying developmental stages of EFL writing, aiding their feedback generation and quantifying
their EFL writing performance.

RQ 2. What factors, if any, were perceived by the students as affecting the rubric’s
effectiveness in self-assessment in the EFL writing class?
Analysis of the student reflective journals and interview data also identified five factors affecting the
rubric’s effectiveness in student self-assessment, which can be broadly classified into two clusters:
within-rubric (factors 1–3) and rubric-user factors (factors 4–5). The former refers to the rubric’s innate
qualities or features; the latter indicates the rubric-users’ characteristics.

Factor 1: coverage and structure of the rubric


The rubric was critiqued for its narrow coverage of evaluative criteria. For instance, a student
noted that ‘the criteria cover only five aspects of EFL writing, but the rubric has not covered students’
styles and voice of writing, which can be as important as the other criteria’ (Student 53, reflective journal).
The students also challenged the quality definitions of the rubric’s evaluative criteria, especially
those criteria involving more ‘subjective’ judgements. For instance:
Content should be the soul of a piece of writing, but I held differing conceptions of the rubric’s explanation of its
content category. In my opinion, what matters most in content may be the uniqueness or creativity of its viewpoints
and the in-depth development of those viewpoints, rather than the relevance of its content. (Eason, interview data)
It is also noteworthy that some students questioned the ‘analytic’ structure of the rubric. For instance:
A piece of writing should be a coherent and organic whole, rather than the simplistic addition of discrete compo-
nents. For example, how can we evaluate the content of writing without considering its language use simultane-
ously? If that holds true, why should we dissect an organic whole into discrete categories? (Cathy, interview data)
It can be summarised that, when using the rubric to self-assess their writing, the students had concerns about the rubric’s coverage of categories, the definitions of those categories, and the structure of an ‘analytic’ rubric.

Factor 2: descriptors of performance quality


Students also held that the performance quality descriptors in the rubric, such as ‘excellent’, ‘good’, ‘fairly good’ and ‘poor’, were confusing and misleading, and that they tended to dismiss such wordings when doing self-assessment. For instance:
I just ignored the descriptors of performance quality in the rubric, simply because they rely on our personal and
subjective judgments. Some may say an essay’s content is excellent because of its creative ideas, while others may
say that it is due to the author’s thorough knowledge about the topic. (Jane, interview data)
Another student noted that ‘This wording of “excellent” or “good” is confusing in the best sense and void of essence in the worst’ (Student 80, reflective journal). In other words, the students held that the rubric’s inclusion of adjectives with either appreciative or derogatory meanings was unnecessary because those words were likely to evoke more subjective responses from them.

Factor 3: score range


The students also complained about the relative narrowness of the ‘4-point’ scoring range as a factor
diminishing their self-rating accuracy. For instance, a student said that:
The 4-point rating scale provides limited room for pinpointing the level of my performance. Take Mechanics use as an example: I clearly understand that, as long as my writing is free of errors of punctuation and spelling, I can safely assign it a full score. However, for how many mistakes of mechanics use shall I assign it a ‘1-point’, and then for how many a ‘2-point’? There is apparently a lack of explicitness in those regards. (Student 78, reflective journal)

One student noted in particular that the narrow score range hindered her from making fine-grained judgments about her writing performance:
Although I felt that I made progress in my writing, I found that it is still not good enough to reach ‘4’, the highest
level of performance. But it is more about a level between ‘3’ and ‘4’, so I have an impulse to give it a ‘3.5’. However,
it is a pity that there is not such an option in the rubric. As a result, I am wondering whether it is feasible to enlarge
the scale to ‘7’ or ‘9’ point? (Mary, interview data)
In summary, the rubric’s narrow score range, as viewed by the students, was another factor affecting
their diagnosis and placement of their EFL writing.

Factor 4: domain knowledge about writing


The students’ domain knowledge about writing, in either Chinese (their mother tongue) or English,
was identified as another factor moderating the rubric’s effectiveness in self-assessment, which
was manifest in three aspects: (1) students’ original criteria developed from their previous writing
experience; (2) students’ English proficiency; (3) students’ knowledge about the assigned essay
topics.

Firstly, the students were found to hold a set of idiosyncratic but implicit criteria derived from their
previous writing experience. When their own criteria went beyond the categories covered by the rubric,
they were more likely to refer to their own criteria in self-assessment. This is particularly the case with
the assessment of content, the relatively ‘subjective’ dimension of writing:
When I reviewed the content of this essay, I also referred to my high school experience of writing in Chinese, which
suggested that my viewpoints should not only be distinctive but also be abstract. But since the rubric does not
include such a category, I added my own standard when doing self-assessment. (Kate, interview data)
Students’ English proficiency and knowledge about assigned essay topics were also found to interfere
with their rubric use in self-assessment of EFL writing:
Though the rubric explicitly tells me that there should be a thorough development of my thesis statement, I just
could not do it because I do not have enough knowledge about the topic. This is also true when it comes to vocab-
ulary and syntactic diversity. (Student 72, reflective journal)
Another student noted that she would skip some of the rubric’s criteria when she was not confident
about her abilities in those areas:
Since I did not have a good command of the use of tense, I would use very simple tenses, like the present, the past,
etc. when writing. Similarly, when doing self-assessment, I would also skip the categories about tense in the rubric.
That is because without a good understanding of the fine distinctions between tense uses, I just do not know how
to improve them. (Student 48, reflective journal)
Furthermore, one student noted that instead of using the rubric, he would have preferred to refer to a
specific writing template:
Given my low English proficiency, I would rather have a template which demonstrates all the formulaic sentence
patterns and often-used words. This is because I can directly use those patterns and words in my own writing. The
rubric is like a set of abstract guidelines which I could hardly follow. (Student 47, reflective journal)
The examples show that the students’ original criteria for writing, English proficiency and knowledge
about assigned essay topics, all of which constitute their domain knowledge about writing, played a
salient role in mediating the rubric’s effectiveness in their self-assessment.

Factor 5: length of intervention


Last but not least, the length of the rubric intervention was identified by the students as a factor affecting
the rubric’s effectiveness. They reported that, for rubric use to be effective, the intervention should be
moderately long. Firstly, students need enough practice with the rubric to grasp its evaluative criteria
and appreciate its instructional value:
When I first got in touch with the rubric, I was resistant to it, wondering why I should use it to assess my writing. It
is only after the repeated practices of rubric use in self-assessment that I came to realize its usefulness and became
more willing to use it. (Student 62, reflective journal)
1288    W. WANG

However, when the students had used the rubric for an extended period, they raised at least two concerns:
(1) the extent to which the rubric is applicable to the assessment of other genres of EFL writing;
and (2) the risk that overreliance on the rubric may foster an instrumental attitude towards it. The first
is illustrated by this case:
At later stages of rubric use, I began to doubt its scope of application. It seemed to be more like one for formulaic
patterns of writing, which may not be closely relevant to what we are supposed to write in our work or life. So, I
am wondering to what extent the rubric is suitable for writing of other purposes. (Student 78, reflective journal)
Regarding the concern about overreliance on the rubric, one student said:
An overreliance on the rubric may diminish the possibilities of writing in other ways and result in our conforming to
pre-set rules for instrumental purposes, like the achievement of high scores, rather than the expressive description
of our experiences, feelings, and opinions in an engaging manner. (Student 66, reflective journal)
The findings thus reveal the length of intervention as an important aspect of student rubric use. On the
one hand, students should be given enough opportunities to use the rubric to grasp its criteria; on the
other, if the same rubric is used for too long, students are likely to doubt its scope of application, or
even to develop an instrumental attitude towards it.

Discussion and implications


The study reported on students’ perceptions of the rubric’s role in their self-assessment of EFL writing
and identified the factors affecting its effectiveness. Besides adding empirical support to previous
research on the effects of rubric use on student self-regulated learning (e.g. Panadero, Tapia, and Huertas
2012; Panadero and Romero 2014), the study demonstrates that students in the Chinese EFL context
embraced the rubric as an instructional tool guiding them throughout the forethought, performance
and reflection stages of self-regulated learning. It is noteworthy that the Chinese EFL context is a
Confucian-heritage culture setting, which has dominant examination-driven values and orientations
emphasising grading and competition, and the introduction of formative assessment (self-assessment
included) in similar cultural settings bears the hope of counterbalancing the ingrained influence of
summative testing (Carless 2011).
The study also identified five factors affecting the rubric’s effectiveness in student self-assessment
and demonstrated the complexity of learning and student-involved assessment. At least four implica-
tions about the design, adaptation and use of rubrics can be drawn from this study with reference to
Dawson’s (2015) design features of rubrics.

Coverage of categories and structure


Firstly, the coverage and structure of a rubric should be a primary concern when teachers select one
to guide students’ self-assessment. Students can be given more voice in the design and modification
of its criteria, including both the categories to be covered and the elucidation of specific categories.
To address students’ possible adoption of an instrumental approach to learning as a result of rubric
use (e.g. Torrance 2007), open-ended rubrics that permit students to add their own categories or
criteria may be used to further engage them in self-assessment and learning.
Secondly, although the use of analytic rubrics for formative purposes is well supported in the
assessment literature (e.g. Brookhart 2013), the present study shows that not all students welcomed
analytically structured rubrics in self-assessment. Analytic rubrics risk ‘downplaying’ or even
‘neglecting’ the interconnections among dimensions and the ‘organic’ or ‘holistic’ nature of writing
(Sadler 2010). Flexibility is thus proposed in using rubrics of different structures in student
self-assessment. Given the advantage of analytic rubrics in systematically explicating the dimensions
of writing, they can be particularly useful at the early stages of self-assessment. At later stages,
holistic rubrics may be preferred, as they allow students more freedom for personal interpretation
and individualised feedback on their performance.
ASSESSMENT & EVALUATION IN HIGHER EDUCATION   1289

Quality level descriptors and score range


It is also necessary for teachers to consider both the wording and the score range in rubric design.
Words such as ‘excellent’, ‘good’ or ‘poor’ are likely to evoke subjective responses from students and
interfere with their self-judgements of performance; they may be replaced by more neutral equivalents
such as ‘emerging’, ‘developing’ and ‘proficient’. It is also worth considering broadening the score
range (to a 10-point scale, for instance) to give students more room to fine-tune their judgements of
their writing performance. Further research is needed to explore whether an enlarged score range
would result in more accurate self-ratings and improved self-feedback.

Rubric users’ domain knowledge


Regarding the role of students’ domain knowledge in rubric use, it is worth examining the rubric’s
function with reference to Hattie and Timperley’s (2007) three feedback questions: (1) Where am I
going? (2) How am I going? (3) Where to next? As the students noted, although the rubric fulfilled well
its function of clarifying the goals they should reach (Question 1), they still held idiosyncratic conceptions of
writing derived from their previous writing experience and might therefore assess their writing on that
basis. Additionally, while the students recognised their present level of performance (Question 2) using
the rubric, their limited English proficiency and insufficient knowledge about the assigned essay topics
might inhibit them from improving their writing performance (Question 3). In this sense, the rubric did
well in answering Questions 1 and 2 but lacked instructional power regarding Question 3. This result
demonstrates that assessment (rubrics included) is secondary to learning: the learning potential of
assessment seems to be tapped only after students have acquired sufficient domain knowledge.
Without a sufficient knowledge base, assessment has only limited instructional power.

Length of intervention
Lastly, echoing previous research on rubric use (e.g. Andrade, Du, and Mycek 2010), the study revealed
that a sufficient length of intervention is a necessary condition for a rubric’s effectiveness in student
self-assessment. Students needed to practise using the rubric two or three times to become familiar
with its requirements. However, negative effects might also ensue from using the same task-general
rubric for too long. After students have used a task-general rubric two or three times, it is therefore
advisable for teachers to give them task-specific rubrics tailored to particular assessment tasks and
oriented to more specific areas of learning and improvement. In other words, a balanced use of
task-general and task-specific rubrics is preferable for engaging students in self-assessment.

Conclusion
This study contributes to previous research on rubric use by conducting a contextual analysis of students’
perceptions of rubric use in a Confucian-heritage EFL learning context, an under-researched setting
in formative assessment research. It also provides empirical support for the relationship between stu-
dent rubric use and self-regulation, as well as Panadero and Jonsson’s (2013) theoretical model on the
factors moderating rubrics’ effectiveness. The model delineates the ways (aiding the feedback process,
improving student self-efficacy, etc.) and factors (educational level and length of intervention, gender,
etc.) that moderate rubrics’ learning effects. The present study substantiates the model by presenting
the self-regulated learning processes activated by rubric use and the factors affecting rubrics’ efficacy
for improving students’ writing performance, with pedagogical implications drawn for rubric use in
student self-assessment.
As noted by Brookhart and Chen (2015), it may not be appropriate to make direct claims about rubrics
per se; rather, a more contextual and balanced view of rubrics should be adopted. More studies, whether
quasi-experimental or naturalistic, are needed on how and to what extent students’ perspectives on
rubric use may be bound up with diverse educational contexts, the Confucian-heritage cultural context
included. Rubric-user characteristics, as the present study demonstrates, are also salient factors affecting
rubrics’ effectiveness. Moreover, instructionally valuable as teacher-tailored rubrics are, overreliance on
pre-set rubrics may lead students to adopt an instrumental approach to learning, which risks diminishing
the diversity of students’ responses in performance assessment and inhibiting their development of
learning and autonomy.
The present study, however, was based on data collected from only 80 students, which limits its
generalisability to other contexts. Moreover, the author’s dual identity as teacher and researcher may
have hindered students from being fully candid about their opinions of rubric use. The study also has
a confined focus on students’ rubric use in self-assessment alone, although the rubric was used in peer
assessment as well. It would be interesting to investigate students’ rubric use in peer assessment and
how it may relate to self-assessment and self-regulated learning. Innovative ways of using rubrics are
also needed to further tap rubrics’ instructional potential, sustain student engagement with
self-assessment and foster their development of self-regulated learning.

Acknowledgement
The author wants to thank Professor Liying Cheng in the Faculty of Education at Queen’s University, Canada, for her
support in writing this paper. He is also grateful to the two anonymous reviewers for their insightful comments on an
earlier draft.

Disclosure statement
No potential conflict of interest was reported by the author.

Funding
This work was supported by the Key Research Projects of Philosophy and Social Science of Ministry of Education of
China [grant number 15JZD048]; the Innovative School Project in Higher Education of Guangdong, China [grant number
GWTP-BS-2015-03]; the Guangdong Planning Office of Philosophy and Social Science, China [grant number GD14XWW21];
and Department of Education of Guangdong, China [grant number 103-GK131017].

Notes on contributor
Weiqiang Wang is an associate professor in the School of English for International Business at Guangdong University of
Foreign Studies. His research interests focus on assessment task design, student self-assessment, peer assessment and
self-regulated learning. In 2014, Weiqiang received the Solidarity Award at the 17th World Congress of Applied Linguistics
in Brisbane.

References
Andrade, H. G. 2000. “Using Rubrics to Promote Thinking and Learning.” Educational Leadership 57 (5): 13–19.
Andrade, H. G. 2005. “Teaching with Rubrics: The Good, the Bad, and the Ugly.” College Teaching 53 (1): 27–31.
Andrade, H. G., and B. A. Boulay. 2003. “Role of Rubric-referenced Self-Assessment in Learning to Write.” The Journal of
Educational Research 97 (1): 21–30. doi:10.1080/00220670309596625.
Andrade, H., and Y. Du. 2005. “Student Perspectives on Rubric-referenced Assessment.” Practical Assessment Research &
Evaluation 10 (3): 159–181.
Andrade, H. L., Y. Du, and K. Mycek. 2010. “Rubric-referenced Self-assessment and Middle School Students’ Writing.”
Assessment in Education: Principles, Policy & Practice 17 (2): 199–214. doi:10.1080/09695941003696172.
Andrade, H. L., Y. Du, and X. Wang. 2008. “Putting Rubrics to the Test: The Effect of a Model, Criteria Generation, and Rubric-
referenced Self-assessment on Elementary Schools Students’ Writing.” Educational Measurement: Issues and Practice 27
(2): 3–13. doi:10.1111/j.1745-3992.2008.00118.x.
Andrade, H. L., X. Wang, Y. Du, and R. L. Akawi. 2009. “Rubric-referenced Self-assessment and Self-efficacy for Writing.” The
Journal of Educational Research 102 (4): 287–302. doi:10.3200/JOER.102.4.287-302.
Babaii, E., S. Taghaddomi, and R. Pashmforoosh. 2016. “Speaking Self-assessment: Mismatches between Learners’ and
Teachers’ Criteria.” Language Testing 33 (3): 411–437. doi:10.1177/0265532215590847.
Becker, A. 2016. “Student-generated Scoring Rubrics: Examining their Formative Value for Improving ESL Students’ Writing
Performance.” Assessing Writing 29: 15–24. doi:10.1016/j.asw.2016.05.002.
Brookhart, S. M. 2013. How to Create and Use Rubrics for Formative Assessment and Grading. Alexandria, VA: ASCD.
Brookhart, S. M., and F. Chen. 2015. “The Quality and Effectiveness of Descriptive Rubrics.” Educational Review 67 (3): 343–368.
doi:10.1080/00131911.2014.929565.
Brown, G. T. L., D. M. McInerney, and G. A. D. Liem. 2009. “Student Perspectives of Assessment: Considering What Assessment
Means to Learners.” In Student Perspectives on Assessment: What Students Can Tell Us about Assessment for Learning, edited
by D. M. McInerney, G. T. L. Brown, and G. A. D. Liem, 1–21. Charlotte, NC: Information Age Publishing.
Carless, D. 2011. From Testing to Productive Student Learning: Implementing Formative Assessment in Confucian-heritage
Settings. London: Routledge.
Coe, M., M. Hanita, V. Nishioka, R. Smiley, and O. Park. 2011. An Investigation of the Impact of the 6 + 1 Trait Writing Model on
Grade 5 Student Writing Achievement (NCEE 2012-4010). Washington, DC: National Center for Education Evaluation and
Regional Assistance, Institute of Education Sciences, U.S. Department of Education.
Dawson, P. 2015. “Assessment Rubrics: Towards Clearer and More Replicable Design, Research and Practice.” Assessment &
Evaluation in Higher Education: 1–14. doi:10.1080/02602938.2015.1111294.
Hattie, J., and H. Timperley. 2007. “The Power of Feedback.” Review of Educational Research 77 (1): 81–112.
Jacobs, H., S. Zinkgraf, D. Wormuth, V. Hartfiel, and J. Hughey. 1981. Testing ESL Composition: A Practical Approach. Rowley,
MA: Newbury House.
Janssen, G., V. Meier, and J. Trace. 2015. “Building a Better Rubric: Mixed Methods Rubric Revision.” Assessing Writing 26:
51–66. doi:10.1016/j.asw.2015.07.002.
Jonsson, A., and G. Svingby. 2007. “The Use of Scoring Rubrics: Reliability, Validity and Educational Consequences.”
Educational Research Review 2 (2): 130–144. doi:10.1016/j.edurev.2007.05.002.
Lane, S., and S. T. Tierney. 2008. “Performance Assessment.” In 21st Century Education: A Reference Handbook, edited by T. L.
Good, Vol. 1, 461–470. Los Angeles, CA: Sage.
Li, J., and P. Lindsey. 2015. “Understanding Variations between Student and Teacher Application of Rubrics.” Assessing Writing
26: 67–79. doi:10.1016/j.asw.2015.07.003.
Messick, S. 1996. “Validity of Performance Assessments.” In Technical Issues in Large-scale Performance Assessment, edited
by G. Phillips, 1–18. Washington, DC: National Center for Education Statistics.
Panadero, E. 2011. “Instructional Help for Self-assessment and Self-regulation: Evaluation of the Efficacy of Self-assessment
Scripts vs. Rubrics.” Unpublished doctoral dissertation, Universidad Autónoma de Madrid, Madrid, Spain.
Panadero, E., and J. Alonso-Tapia. 2013. “Self-assessment: Theoretical and Practical Connotations. When It Happens, How
Is It Acquired and What to Do to Develop It in Our Students.” Electronic Journal of Research in Educational Psychology 11
(2): 551–576. doi:10.14204/ejrep.30.12200.
Panadero, E., and A. Jonsson. 2013. “The Use of Scoring Rubrics for Formative Assessment Purposes Revisited: A Review.”
Educational Research Review 9: 129–144. doi:10.1016/j.edurev.2013.01.002.
Panadero, E., A. Jonsson, and J. Strijbos. 2016. “Scaffolding Self-regulated Learning through Self-assessment and
Peer Assessment: Guidelines for Classroom Implementation.” In Assessment for Learning: Meeting the Challenge of
Implementation, edited by D. Laveault and L. Allal, 311–326. Boston, MA: Springer.
Panadero, E., and M. Romero. 2014. “To Rubric or Not to Rubric? The Effects of Self-assessment on Self-regulation,
Performance and Self-efficacy.” Assessment in Education: Principles, Policy & Practice 21 (2): 133–148. doi:10.1080/096
9594X.2013.877872.
Panadero, E., J. A. Tapia, and J. A. Huertas. 2012. “Rubrics and Self-assessment Scripts Effects on Self-regulation, Learning and
Self-efficacy in Secondary Education.” Learning and Individual Differences 22 (6): 806–813. doi:10.1016/j.lindif.2012.04.007.
Popham, W. J. 1997. “What’s Wrong-and What’s Right-with Rubrics.” Educational Leadership 55: 72–75.
Reddy, Y. M., and H. Andrade. 2010. “A Review of Rubric Use in Higher Education.” Assessment & Evaluation in Higher Education
35 (4): 435–448. doi:10.1080/02602930902862859.
Reynolds-Keefer, L. 2010. “Rubric-Referenced Assessment in Teacher Preparation: An Opportunity to Learn by Using.” Practical
Assessment, Research & Evaluation 15 (8): 1–9.
Ritchie, J., and J. Lewis. 2003. Qualitative Research Practice: A Guide for Social Science Students and Researchers. London: Sage.
Sadler, D. R. 2009. “Indeterminacy in the Use of Preset Criteria for Assessment and Grading.” Assessment & Evaluation in
Higher Education 34 (2): 159–179. doi:10.1080/02602930801956059.
Sadler, D. R. 2010. “Beyond Feedback: Developing Student Capability in Complex Appraisal.” Assessment & Evaluation in
Higher Education 35 (5): 535–550. doi:10.1080/02602930903541015.
Strauss, A., and J. Corbin. 1998. Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory.
Thousand Oaks, CA: Sage.
Sundeen, T. H. 2014. “Instructional Rubrics: Effects of Presentation Options on Writing Quality.” Assessing Writing 21: 74–88.
doi:10.1016/j.asw.2014.03.003.
Torrance, H. 2007. “Assessment as Learning? How the Use of Explicit Learning Objectives, Assessment Criteria and Feedback
in Post-secondary Education and Training Can Come to Dominate Learning.” Assessment in Education: Principles, Policy
& Practice 14 (3): 281–294. doi:10.1080/09695940701591867.
Turner, C. E., and J. E. Purpura. 2015. “Learning-oriented Assessment in Second and Foreign Language Classrooms.” In
Handbook of Second Language Assessment, edited by D. Tsagari and J. Baneerjee, 255–272. Boston, MA: De Gruyter
Mouton.
Wollenschläger, M., J. Hattie, N. Machts, J. Möller, and U. Harms. 2016. “What Makes Rubrics Effective in Teacher-feedback?
Transparency of Learning Goals is Not Enough.” Contemporary Educational Psychology 44–45: 1–11. doi:10.1016/j.
cedpsych.2015.11.003.
Zimmerman, B. J. 2000. “Attaining Self-regulation: A Social Cognitive Perspective.” In Handbook of Self-regulation, edited
by M. Boekaerts, P. R. Pintrich, and M. Zeidner, 13–39. San Diego, CA: Academic Press.
Zimmerman, B. J., and A. R. Moylan. 2009. “Self-regulation: When Metacognition and Motivation Intersect.” In
Handbook of Metacognition in Education, edited by D. J. Hacker, J. Dunlosky, and A. C. Graesser, 299–315.
New York, NY: Routledge.