Assessment allows both instructor and student to monitor progress towards achieving
learning objectives, and can be approached in a variety of ways. Formative assessment
refers to tools that identify misconceptions, struggles, and learning gaps along the way and
help determine how to close those gaps. It offers effective tools for helping to shape learning,
and can even bolster students’ abilities to take ownership of their learning when they
understand that the goal is to improve learning, not apply final marks (Trumbull and Lash,
2013). It can include students assessing themselves, peers, or even the instructor, through
writing, quizzes, conversation, and more. In short, formative assessment occurs
throughout a class or course, and seeks to improve student achievement of learning
objectives through approaches that can support specific student needs (Theall and
Franklin, 2010, p. 151).
Design Clear, Effective Questions – If designing essay questions, instructors can ensure
that questions meet criteria while allowing students freedom to express their knowledge
creatively and in ways that honor how they digested, constructed, or mastered meaning.
Instructors can read about ways to design effective multiple choice questions.
Make Parameters Clear – When approaching a final assessment, instructors can ensure
that parameters are well defined (length of assessment, depth of response, time and date,
grading standards); knowledge assessed relates clearly to content covered in course; and
students with disabilities are provided required space and support.
Consider Blind Grading – Instructors may wish to know whose work they grade, in order to
provide feedback that speaks to a student’s term-long trajectory. If instructors wish to
provide truly unbiased summative assessment, they can also consider a variety of blind
grading techniques.
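One common blind-grading technique is to replace student names with random codes before grading and re-attach them afterward. A minimal sketch in Python (the function name and workflow are illustrative, not a reference to any specific grading tool):

```python
import secrets

def anonymize_submissions(names):
    """Map each student name to a random code so work can be graded blind.

    Returns (key, codes): `key` maps code -> name and stays sealed until
    grading is finished; the grader sees only the codes.
    """
    key = {}
    for name in names:
        code = secrets.token_hex(4)
        while code in key:  # re-draw on the (unlikely) collision
            code = secrets.token_hex(4)
        key[code] = name
    return key, list(key.keys())

key, codes = anonymize_submissions(["Ada", "Grace", "Alan"])
```

After grading, the sealed key is used once to map each coded score back to a student.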
Exams
Exams typically consist of a set of questions aimed at eliciting specific responses and
can include multiple choice, fill-in-the-blank, diagram labeling, and short-answer
questions. When designing exams, applying equity-minded principles helps ensure they are
accessible and effective for all students.
• Relevant:
• Ensure test questions align closely with course learning objectives.
• Include applications of course concepts to real-world scenarios that
resonate with students’ interests and skills.
• Authentic:
• Require skills that students may use in their academic, professional, or
personal lives, such as critical thinking, problem-solving, and
collaboration.
• Test a range of cognitive skills, from lower-order (recall and
understanding) to higher-order (evaluation and application).
• Example: Include case studies that ask students to apply concepts to
solve real-world problems.
• Rigorous:
• Focus on tasks requiring application of skills or creation of new
knowledge in novel or complex situations, rather than simple recall.
• Include multi-step problem-solving or questions requiring students to
justify their answers through reasoning.
• Transparent:
• Clearly communicate what knowledge and skills are being tested.
• Provide practice questions that illustrate the types of questions students
will encounter on the exam.
• Explain scoring criteria, such as whether answers are marked for both
accuracy and process, or if penalties apply for incorrect answers.
• Transparency is particularly important for international and first-
generation students who may be less familiar with the exam format.
• Inclusive:
• Use language, scenarios, and examples that reflect the diverse lived
experiences of your students without assuming specific cultural
knowledge.
• Write clear, concise, and unambiguous questions to minimize confusion,
especially in online or large-class settings where clarification may not be
possible.
Formative Assessment
Formative assessments are yardsticks for learning as it is happening in the classroom.
For more examples of classroom assessment techniques (CATs) review “Ways to Assess
Student Learning During Class,” a resource from the University of Oregon.
Also, think about how you will respond to students’ feedback. In the words of Karron Lewis
(2001), “Perhaps the most important part of conducting a midsemester feedback session is
your response to the students. In your response, you need to let them know what you
learned from their information and what differences it will make” (39).
RESOURCES FOR FORMATIVE ASSESSMENT
Pre-Course Surveys
It is a good idea to ask students to complete a short pre-course survey that prompts them to
reflect on their reasons for signing up for your course, and that lets you learn more about
what they know, what they expect from the course, and how they think they learn best. As
with any assessment, think about what your goals are and whether this should be
anonymous, or not. For example, if you want to use this pre-course survey to assign
groups, then you would want your surveys to not be anonymous. However, an anonymous
survey would better serve the goal of gathering personal information about your students
that they may not want to share publicly.
o What is something you are good at doing? How did you get to be good at it?
o Why are you interested in this course? (or, what are your expectations for this course?)
o Review the syllabus — what is most intriguing to you at this point?
o Is there anything that you feel may hinder your success in this class or anything you
want to share with me before the class starts, for example, other commitments this term,
a particular learning style preference, your own circadian rhythms, a specific goal you
want to accomplish?
In the sciences, knowledge surveys (or ungraded quizzes) may be given on the first day of
class to assess students' knowledge of the subject. This is critical when a course has
prerequisites like calculus and you need to know if students are prepared.
Midterm Evaluations
The middle of the term is another opportune time to solicit feedback from students about
their learning. It is important that this feedback is acknowledged and any changes made in
the course based on the feedback or decisions not to incorporate student feedback are
shared with the students.
o Sometimes, it may be helpful to ask questions similar to those found on the end-of-term
course evaluations during the middle of the term.
o Another simple way to get feedback is to ask students 1) what the instructor should keep
doing, stop doing, and start doing to aid student learning, and 2) what they themselves
intend to keep doing, stop doing, and start doing to aid their learning.
o For more question ideas, see this formative evaluation resource from the University of
California, Berkeley, which samples midterm evaluations from various disciplines.
Canvas
Did you know that formative assessments can be distributed using Canvas?
View instructions on creating surveys in Canvas.
REFERENCES
o Angelo, T. & Cross, K.P. (1993). Classroom assessment techniques: A handbook for
college teachers (2nd ed.). San Francisco: Jossey-Bass. (Available in DCAL’s library)
o Boston, C. (2002). The concept of formative assessment. Practical Assessment,
Research & Evaluation, 8(9).
o Vanderbilt University. (n.d.). Gathering feedback from students. Center for Teaching.
o Lewis, K. (2001). Using mid-semester student feedback and responding to it. New
Directions for Teaching and Learning, 87, 33-44.
o Organization for Economic Co-operation and Development (OECD). (2005). Formative
assessment: Improving learning in secondary classrooms. OECD Observer Policy Brief.
Are lots of your students freshmen? Is this an “Introduction to…” course? If so, many of your
learning objectives may target the lower-order Bloom’s skills, because your students are
building foundational knowledge. However, even in this situation, strive to move a few of
your objectives into the applying and analyzing levels; pushing too far up the taxonomy
could create frustration and unachievable goals.
Are most of your students juniors and seniors? Graduate students? Do your students have
a solid foundation in much of the terminology and processes you will be working on in your
course? If so, then you should not have many remembering and understanding level
objectives. You may need a few, for any radically new concepts specific to your course.
However, these advanced students should be able to master higher-order learning
objectives. Too many lower-level objectives might cause boredom or apathy.
How Bloom’s works with learning objectives
Fortunately, there are “verb tables” to help identify which action verbs align with each level
in Bloom’s Taxonomy.
You may notice that some of these verbs on the table are associated with multiple Bloom’s
Taxonomy levels. These “multilevel verbs” are actions that could apply to different
activities. For example, you could have an objective stating “At the end of this lesson,
students will be able to explain the difference between H2O and OH-.” This would be an
understanding-level objective. However, an objective asking students to “…explain the
shift in the chemical structure of water throughout its various phases” would sit at the
analyzing level.
Adding to this confusion, you can locate Bloom’s verb charts that list verbs at levels
different from what we list below. Just keep in mind that it is the skill, action, or activity you
will teach using that verb that determines the Bloom’s Taxonomy level.
How Bloom’s works with course level and lesson level objectives:
Course-level objectives are broad. You may only have 3-5 course-level objectives. They
would be difficult to measure directly because they overarch the topics of your entire
course.
Lesson-level objectives are what we use to demonstrate that a student has mastery of the
course-level objectives. We do this by building lesson-level objectives that build toward the
course-level objective. For example, a student might need to demonstrate mastery of 8
lesson-level objectives in order to demonstrate mastery of one course-level objective.
Because the lesson-level objectives directly support the course-level objectives, they need
to build up the Bloom’s Taxonomy to help your students reach mastery of the course-level
objectives. Use Bloom’s Taxonomy to make sure that the verbs you choose for your lesson-
level objectives build up to the level of the verb that is in the course-level objective. The
lesson level verbs can be below or equal to the course level verb, but they CANNOT be
higher in level. For example, your course level verb might be an Applying level verb,
“illustrate.” Your lesson-level verbs can be from any Bloom’s level that is equal or below this
level (applying, understanding, or remembering).
Steps towards writing effective learning objectives:
Make sure there is one measurable verb in each objective.
Each objective needs one verb. Either a student can master the objective, or they fail to
master it. If an objective has two verbs (say, define and apply), what happens if a student
can define, but not apply? Are they demonstrating mastery?
Ensure that the verb in the course-level objective is at least as high on Bloom’s Taxonomy
as the highest lesson-level objective that supports it. (We can’t verify that students can
evaluate if our lessons only taught, and assessed, their ability to define.)
Strive to keep all your learning objectives measurable, clear, and concise.
When you are ready to write, it can be helpful to list the level of Bloom’s next to the verb you
choose in parentheses. For example:
Course level objective 1. (apply) Demonstrate how transportation is a critical link in the
supply chain.
1.1. (understand) Discuss the changing global landscape for businesses and other
organizations that are driving change in the global environment.
1.2. (apply) Demonstrate the special nature of transportation demand and the influence
of transportation on companies and their supply chains operating in a global
economy.
This trick will help you quickly see what level verbs you have. It will also let you check that
the course level objective is at least as high of a Bloom’s level as any of the lesson level
objectives underneath.
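The rule that lesson-level verbs must sit at or below the course-level verb is mechanical enough to check automatically. A minimal sketch, assuming the six levels of the revised Bloom’s Taxonomy and using the level tags from the example above (the function name is illustrative):

```python
# The six levels of the revised Bloom's Taxonomy, lowest to highest.
BLOOM_LEVELS = ["remember", "understand", "apply", "analyze", "evaluate", "create"]
RANK = {level: i for i, level in enumerate(BLOOM_LEVELS)}

def lesson_objectives_valid(course_level, lesson_levels):
    """True if every lesson-level objective is at or below the
    course-level objective on Bloom's Taxonomy."""
    return all(RANK[l] <= RANK[course_level] for l in lesson_levels)

# Course objective 1 above is tagged (apply); lessons 1.1 (understand)
# and 1.2 (apply) sit at or below it, so the check passes.
ok = lesson_objectives_valid("apply", ["understand", "apply"])
# An "evaluate" lesson objective under an "apply" course objective fails.
bad = lesson_objectives_valid("apply", ["understand", "evaluate"])
```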
For educators, knowing where a student is in terms of skills and abilities is critical. It is
challenging to move forward in a lesson or unit if the teacher doesn’t have a clear
understanding of what a student can and cannot do. This is why diagnostic assessment
tools such as pretesting, formative assessments, and other diagnostic measures are
important. These tools allow educators to curate individualized lessons that meet students
where they are, rather than teaching without a clear focus on student ability.
Pre-Assessment:
Purpose: Conducted before a new unit or course to gauge students’ prior knowledge and
skills.
Examples: Pre-tests, concept mapping, and K-W-L (Know, Want to know, Learned) charts.
Reading Diagnostic Assessment:
Purpose: Identify reading difficulties, literacy levels, and specific areas of weakness in
reading.
Examples: Phonics assessments, fluency assessments, reading comprehension
assessments, and decoding assessments.
Language Proficiency Diagnostic Assessment:
Purpose: Evaluate language proficiency and identify areas of need for English language
learners.
Examples: Language proficiency assessments, oral interviews, and language development
portfolios.
Special Education Diagnostic Assessment:
Purpose: Assess students with special needs to identify learning disabilities or challenges.
Examples: Psycho-educational assessments, individualized educational plan (IEP)
evaluations, and adaptive behavior assessments.
The results of diagnostic assessment tools are valuable for teachers to tailor their
instruction to meet individual student needs. These assessments provide insights into
instructional adjustments, the development of targeted interventions, and the creation of
differentiated learning experiences. They may also help to promote a more personalized
and responsive approach to teaching, fostering student growth and academic success.
Formative and summative assessments differ not only in their purposes and timing but also
in their testing formats. The formats of these assessments are designed to align with their
respective goals and the stage of the learning process.
Here are key differences in the assessment formats of formative and summative
assessments:
Schools enroll new students all the time. In some cases, these students may speak a
language other than English, and it is vitally important that they be placed correctly within
an English Language Development (ELD) classroom. To do this, many school districts or
states use a diagnostic screener test designed to portray a student’s English skills
accurately and, based on the results, place the student into ELD programming or not.
Science
A science teacher is teaching a new lesson on the human body, specifically the
cardiovascular system. Rather than jumping right in and assuming all students are new to
the material, the teacher decides to provide students with a scenario in which a person
begins to exercise. The teacher asks the students to describe what they think will happen to
the person’s body as they exercise and why they think these changes occur.
This small assessment can help the teacher gauge if students know that heart rate and
breathing rate will increase, and if they know that this is done to increase oxygen levels in
the muscles. Once the teacher has this information, they can design lessons to meet the
needs of their unique learners.
Physical Education
In the physical education class, a teacher may have students with a wide range of abilities.
For example, one student may be a star baseball player who can throw a ball with ease,
while others may not know how to throw at all. As a diagnostic assessment, a teacher may
ask students to pick up a ball and throw it at a target across the room. Through
observational notes, or maybe even a video recording, the teacher can break down each
student’s throw and designate the next steps.
The main goal in developing diagnostic assessments is to ensure that students get what
they need in terms of learning when they need it and that educators can identify and fill any
gaps in learning that the student may have. Diagnostic assessment tools may be used
differently depending on the class: some classes may use observation and student
interviewing, while others may use pretesting and journaling. It depends on the class
and how much information the educator needs.
With traditional testing methods, designing and collecting diagnostic data was a difficult
task. However, by implementing online testing platform technology, such as TAO, educators
can design, implement, and analyze diagnostic and other assessment formats quickly and
efficiently. In doing so, educators can deliver targeted lessons, enhance student
engagement, and, ultimately, increase student growth and achievement.
Assessment is a critical aspect of the teaching-learning process. Two major categories of test
items include selection-type items (e.g., multiple-choice questions or MCQs) and supply-type
items (e.g., short answers, essays). Each format has specific advantages and limitations, making
them suitable for different instructional objectives.
Strengths of Selection-Type Items (MCQs):
1. Objective Scoring:
o MCQs offer highly objective evaluation with minimal chance for scorer bias.
2. Wide Content Coverage:
o They allow assessment of a broad range of content in a relatively short time.
3. Efficient Testing:
o MCQs are easy to administer and grade, especially with automated systems.
4. Reliability:
o Due to their standardized nature, MCQs tend to have high reliability.
5. Diagnostic Use:
o Ideal for identifying specific misconceptions or gaps in knowledge.
Weaknesses of Selection-Type Items (MCQs):
1. Limited Depth:
o Often test recall or recognition rather than deep understanding or higher-
order thinking.
2. Guessing Factor:
o Test-takers may guess the correct answer, which can affect validity.
3. Time-Consuming to Construct:
o Creating high-quality MCQs that test application and analysis is challenging.
4. Surface-Level Learning:
o May encourage rote memorization rather than critical thinking.
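The guessing factor above can be estimated rather than ignored. One common approach is the classic correction-for-guessing (formula scoring) rule, which subtracts a penalty proportional to the number of wrong answers; a minimal sketch:

```python
def guessing_corrected_score(right, wrong, options):
    """Classic correction-for-guessing: R - W/(k-1), where R is the number
    right, W the number wrong (omitted items count as neither), and k is
    the number of options per item."""
    return right - wrong / (options - 1)

# 40 right and 12 wrong on four-option items: 40 - 12/3 = 36.0
score = guessing_corrected_score(40, 12, 4)
```

The expected score of a pure guesser under this rule is zero, which is exactly why the rule is used; the numbers in the example are made up for illustration.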
Weaknesses of Supply-Type Items (Essays):
1. Subjective Scoring:
o Scoring can be influenced by bias and inconsistency unless rubrics are
rigorously applied.
2. Limited Content Sampling:
o Typically assess fewer topics, which may not reflect the entire syllabus.
3. Time-Intensive:
o Both writing and grading are more time-consuming compared to MCQs.
4. Writing Skills May Skew Results:
o Performance can be influenced by a student’s language proficiency rather
than content knowledge.
Conclusion
Both selection-type and supply-type items have essential roles in educational assessment. MCQs
provide efficiency, objectivity, and broad coverage, making them ideal for large-scale or
preliminary assessments. Essays, on the other hand, are indispensable for evaluating depth,
creativity, and analytical ability. Educators should align the test format with the learning
objectives, ensuring a balanced approach that measures not just what students know, but how
they think.
In educational assessment, reliability refers to the consistency of measurement. Two key types
of reliability in classroom assessments are internal consistency and inter-rater reliability. Each
type addresses different aspects of consistency and is vital in different contexts of teaching and
evaluation.
I. Internal Consistency
Definition:
Internal consistency refers to the degree to which all items on a test measure the same construct
or concept. It ensures that the items within a test are correlated and function cohesively to assess
a specific skill or knowledge domain.
Importance:
• Confirms that all items reflect the same skill or learning objective.
• Enhances the validity of the test results by ensuring coherent measurement.
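Internal consistency is most often quantified with Cronbach’s alpha, which rises toward 1 as the items co-vary. A minimal sketch in plain Python (the scores are made up for illustration):

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of
    total scores). `item_scores` holds one list of student scores per item,
    with students in the same order in every list."""
    k = len(item_scores)
    n = len(item_scores[0])

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    item_var = sum(variance(item) for item in item_scores)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Two items that move together perfectly give alpha = 1.0.
alpha = cronbach_alpha([[1, 2, 3], [1, 2, 3]])
```

A frequently quoted rule of thumb treats alpha of roughly 0.7 or above as acceptable for classroom tests, though the threshold depends on the stakes of the decision.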
II. Inter-Rater Reliability
Definition:
Inter-rater reliability refers to the level of agreement or consistency between two or more
evaluators (raters) when assessing open-ended or subjective responses.
Measurement:
• Techniques include:
o Percentage agreement
o Cohen’s Kappa
o Intraclass Correlation Coefficient (ICC)
Example:
Imagine two teachers grading students’ essays on a history assignment. If both teachers
consistently give similar scores based on the same rubric, the inter-rater reliability is high.
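The two most common of the statistics listed above can be computed in a few lines. A minimal sketch, using made-up ratings, of percentage agreement and Cohen’s kappa for two raters scoring the same set of essays:

```python
def percent_agreement(a, b):
    """Fraction of items on which the two raters gave the same score."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance from each rater's marginal score distribution."""
    n = len(a)
    labels = set(a) | set(b)
    p_obs = percent_agreement(a, b)
    p_exp = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (p_obs - p_exp) / (1 - p_exp)

rater1 = ["A", "A", "B", "B"]
rater2 = ["A", "A", "B", "A"]
# The raters agree on 3 of 4 essays; kappa discounts chance agreement.
```

Kappa of 1 means perfect agreement and 0 means agreement no better than chance, which is why it is preferred over raw percentage agreement when score categories are unevenly used.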
When It Is Crucial: