
Q.1. How can formative and summative assessments be balanced to ensure continuous learning improvement while maintaining accountability? Provide practical strategies for teachers.

Formative and Summative Assessments

Assessment allows both instructor and student to monitor progress towards achieving
learning objectives, and can be approached in a variety of ways. Formative assessment
refers to tools that identify misconceptions, struggles, and learning gaps along the way and
assess how to close those gaps. It includes effective tools for helping to shape learning,
and can even bolster students’ abilities to take ownership of their learning when they
understand that the goal is to improve learning, not apply final marks (Trumbull and Lash,
2013). It can include students assessing themselves, peers, or even the instructor, through
writing, quizzes, conversation, and more. In short, formative assessment occurs
throughout a class or course, and seeks to improve student achievement of learning
objectives through approaches that can support specific student needs (Theal and
Franklin, 2010, p. 151).

In contrast, summative assessments evaluate student learning, knowledge, proficiency, or success at the conclusion of an instructional period, like a unit, course, or program.
Summative assessments are almost always formally graded and often heavily weighted
(though they do not need to be). Summative assessment can be used to great effect in
conjunction and alignment with formative assessment, and instructors can consider a
variety of ways to combine these approaches.

Practical Strategies for Teachers

Design Clear, Effective Questions – If designing essay questions, instructors can ensure
that questions meet criteria while allowing students freedom to express their knowledge
creatively and in ways that honor how they digested, constructed, or mastered meaning.
Instructors can read about ways to design effective multiple choice questions.

Assess Comprehensiveness – Effective summative assessments provide an opportunity for students to consider the totality of a course’s content, making broad connections, demonstrating synthesized skills, and exploring deeper concepts that drive or underpin a course’s ideas and content.

Make Parameters Clear – When approaching a final assessment, instructors can ensure
that parameters are well defined (length of assessment, depth of response, time and date,
grading standards); knowledge assessed relates clearly to content covered in course; and
students with disabilities are provided required space and support.

Consider Blind Grading – Instructors may wish to know whose work they grade, in order to
provide feedback that speaks to a student’s term-long trajectory. If instructors wish to
provide truly unbiased summative assessment, they can also consider a variety of blind
grading techniques.

Summative Assessments: Types


Summative assessments evaluate learning at the end of a unit, course, or instructional
period. While typically used to gauge final outcomes, these assessments can also serve
formative purposes, tracking progress throughout a course. Below are common types
of summative assessments, with equity-minded design tips and links to additional
resources.

Exams
Exams typically consist of a set of questions aimed at eliciting specific responses and
can include multiple choice, fill-in-the-blank, diagram labeling, and short-answer
questions. When designing exams, equity-minded principles ensure they are accessible
and effective for all students.

• Relevant:
• Ensure test questions align closely with course learning objectives.
• Include applications of course concepts to real-world scenarios that
resonate with students’ interests and skills.
• Authentic:
• Require skills that students may use in their academic, professional, or
personal lives, such as critical thinking, problem-solving, and
collaboration.
• Test a range of cognitive skills, from lower-order (recall and
understanding) to higher-order (evaluation and application).
• Example: Include case studies that ask students to apply concepts to
solve real-world problems.
• Rigorous:
• Focus on tasks requiring application of skills or creation of new
knowledge in novel or complex situations, rather than simple recall.
• Include multi-step problem-solving or questions requiring students to
justify their answers through reasoning.
• Transparent:
• Clearly communicate what knowledge and skills are being tested.
• Provide practice questions that illustrate the types of questions students
will encounter on the exam.
• Explain scoring criteria, such as whether answers are marked for both
accuracy and process, or if penalties apply for incorrect answers.
• Transparency is particularly important for international and first-generation students who may be less familiar with the exam format.
• Inclusive:
• Use language, scenarios, and examples that reflect the diverse lived
experiences of your students without assuming specific cultural
knowledge.
• Write clear, concise, and unambiguous questions to minimize confusion,
especially in online or large-class settings where clarification may not be
possible.
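The scoring-criteria guidance above (whether answers are marked for accuracy alone, and whether penalties apply for incorrect answers) can be made concrete with a small grading rule. This is an illustrative sketch only; the specific weights are hypothetical, not a recommended policy:

```python
def score_exam(responses, correct_mark=1.0, wrong_penalty=0.25):
    """Score responses marked 'correct', 'wrong', or 'blank'.

    Subtracting a fraction of a mark for wrong answers ("formula scoring")
    penalizes guessing; setting wrong_penalty=0 scores by accuracy alone.
    The specific weights here are illustrative, not a recommended policy.
    """
    score = 0.0
    for response in responses:
        if response == "correct":
            score += correct_mark
        elif response == "wrong":
            score -= wrong_penalty
        # blank answers neither gain nor lose marks
    return score

# Ten-item exam: 7 correct, 2 wrong, 1 left blank
print(score_exam(["correct"] * 7 + ["wrong"] * 2 + ["blank"]))  # 6.5
```

Whichever rule is used, transparency demands it be announced to students before the exam.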

Best Practices for Exams

• Open-Book or Group Exams: Encourage critical thinking, collaboration, and analytical skills. These formats also reduce stress and support equity (Johanns et al., 2016; Martin et al., 2014).
• Exam Wrappers: Include follow-up reflections to help students assess their preparation and engage in metacognition (Lovett, 2013).

Formative Assessment
Formative assessments are yardsticks for learning as it is happening in the classroom.

ABOUT FORMATIVE ASSESSMENTS


Unlike summative (“end-of-term” or milestone) assessments, which evaluate learning and
teaching at the end of a course or unit, formative assessments happen earlier in the term
(even before the class starts if you use a pre-course survey) and throughout the term to
provide students and teachers an opportunity to make adjustments that will improve
learning. According to Angelo and Cross (1993), there are several unique characteristics of
formative classroom assessment: it is learner-centered, teacher-directed, mutually
beneficial, context-specific, ongoing, and rooted in good teaching practice. Typically, a
classroom assessment is also ungraded and anonymous.
BENEFITS OF EARLY ASSESSMENT
There are many benefits associated with practicing formative assessments in college
classrooms. Formative assessments may lead to “improved student success” (Boston,
2002), more equitable student learning outcomes, and enhanced learning strategies among
students (OECD, 2005).
“Teachers need a continuous flow of accurate information on student learning. For example, if a teacher’s goal is to help students learn points A through Z during the course, then that teacher needs to know whether all students are really starting at point A and, as the course proceeds, whether they have reached intermediate points B, G, L, R, W, and so on.” (Angelo and Cross, 1993: 4).
STRATEGIES FOR EARLY ASSESSMENT
There are many classroom assessment techniques, and they should be implemented based
on the instructor’s goal. For instance, The Minute Paper asks for students’ perceptions of
the important take-away for a given class and assesses whether that aligns with the
instructor’s goal. Another type of assessment might ask specific questions — in Likert-scale
or multiple-choice formats — about students’ perceptions of the clarity, usefulness, and
engagement with the material in a particular class or unit.
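Responses to such Likert-scale items can be tallied quickly before deciding how to respond to the class. A minimal sketch in Python; the question labels, the 1–5 scale, and the sample data are hypothetical:

```python
from statistics import mean

def summarize_likert(responses):
    """Summarize 1-5 Likert responses per question as mean and response count."""
    return {question: {"mean": round(mean(values), 2), "n": len(values)}
            for question, values in responses.items()}

# Hypothetical midterm-feedback responses on a 1-5 agreement scale
feedback = {
    "clarity":    [4, 5, 3, 4, 4],
    "usefulness": [5, 5, 4, 4, 3],
}
print(summarize_likert(feedback))
```

A per-question mean is enough to spot which aspect of the class students rate lowest, which is where a mid-course adjustment is most likely to pay off.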

For more examples of classroom assessment techniques (CATs) review “Ways to Assess
Student Learning During Class,” a resource from the University of Oregon.

Also, think about how you will respond to students’ feedback. In the words of Karron Lewis
(2001), “Perhaps the most important part of conducting a midsemester feedback session is
your response to the students. In your response, you need to let them know what you
learned from their information and what differences it will make” (39).
RESOURCES FOR FORMATIVE ASSESSMENT

Pre-Course Surveys

It is a good idea to ask students to complete a short pre-course survey that prompts them to reflect on their reasons for signing up for your course, so that you can learn more about what they know, what they expect from the course, and how they think they learn best. As with any assessment, think about what your goals are and whether the survey should be anonymous or not. For example, if you want to use the pre-course survey to assign groups, then the surveys should not be anonymous. However, an anonymous survey better serves the goal of gathering personal information about your students that they may not want to share publicly.

Sample pre-course survey questions:

o What is something you are good at doing? How did you get to be good at it?
o Why are you interested in this course? (or, what are your expectations for this course?)
o Review the syllabus — what is most intriguing to you at this point?
o Is there anything that you feel may hinder your success in this class or anything you
want to share with me before the class starts, for example, other commitments this term,
a particular learning style preference, your own circadian rhythms, a specific goal you
want to accomplish?
In the sciences, knowledge-surveys (or ungraded quizzes) may be given the first day of
class to assess students' knowledge of the subject. This is critical when a course has
prerequisites like calculus and you need to know if students are prepared.

Midterm Evaluations

The middle of the term is another opportune time to solicit feedback from students about
their learning. It is important that this feedback is acknowledged and any changes made in
the course based on the feedback or decisions not to incorporate student feedback are
shared with the students.

o Sometimes, it may be helpful to ask questions similar to those found on the end-of-term
course evaluations during the middle of the term.
o Another simple way to get feedback is to ask students 1) what should you keep doing,
stop doing, and start doing to aid student learning, and 2) what do students intend to
keep doing, stop doing, and start doing to aid their learning?
o For more question ideas, see this formative evaluation resource from the University of
California, Berkeley, which samples midterm evaluations from various disciplines.
Canvas

Did you know that formative assessments can be distributed using Canvas?
View instructions on creating surveys in Canvas.
REFERENCES

o Angelo, T. & Cross, K.P. (1993). Classroom assessment techniques: A handbook for
college teachers (2nd ed.). San Francisco: Jossey-Bass. (Available in DCAL’s library)
o Boston, C. (2002). The concept of formative assessment. Practical Assessment,
Research & Evaluation, 8(9).
o Vanderbilt University. (n.d.). Gathering feedback from students. Center for Teaching.
o Lewis, K. (2001). Using mid-semester student feedback and responding to it. New
Directions for Teaching and Learning, 87, 33-44.
o Organization for Economic Cooperation and Development (OECD). (2005). Formative assessment: Improving learning in secondary classrooms. OECD Observer Policy Brief.

Q.2. “Clear learning objectives are the foundation of effective assessment.” Analyze how Bloom’s Taxonomy can guide the alignment of objectives with the assessment method.
Using Bloom’s Taxonomy to Write Effective
Learning Objectives:
What is Bloom’s Taxonomy?
Bloom’s Taxonomy is a classification of the different objectives and skills that educators set for their students, otherwise known as learning objectives. The taxonomy was proposed in 1956 by Benjamin Bloom, an educational psychologist at the University of Chicago. The terminology has since been updated to the following six levels of learning, which can be used to structure the learning objectives, lessons, and assessments of your course:

Remembering: Retrieving, recognizing, and recalling relevant knowledge from long‐term memory.
Understanding: Constructing meaning from oral, written, and graphic messages through
interpreting, exemplifying, classifying, summarizing, inferring, comparing, and explaining.
Applying: Carrying out or using a procedure for executing or implementing.
Analyzing: Breaking material into constituent parts and determining how the parts relate to
one another and to an overall structure or purpose through differentiating, organizing, and
attributing.
Evaluating: Making judgments based on criteria and standards through checking and
critiquing.
Creating: Putting elements together to form a coherent or functional whole; reorganizing
elements into a new pattern or structure through generating, planning, or producing.
Like other taxonomies, Bloom’s is hierarchical, meaning that learning at the higher levels is
dependent on having attained prerequisite knowledge and skills at lower levels. You will
see Bloom’s Taxonomy often displayed as a pyramid graphic to help demonstrate this
hierarchy. We have updated this pyramid into a “cake-style” hierarchy to emphasize that
each level is built on a foundation of the previous levels.

[Figure: Bloom’s Taxonomy “cake-style” hierarchy graphic, credit Jessica Shabatura]

How Bloom’s can aid in course design


Bloom’s taxonomy is a powerful tool to help develop learning objectives because it
explains the process of learning:
Before you can understand a concept, you must remember it.
To apply a concept you must first understand it.
In order to evaluate a process, you must have analyzed it.
To create an accurate conclusion, you must have completed a thorough evaluation.
However, you don’t always start with lower-order skills and step through the entire taxonomy for each concept you present in your course. That approach would become tedious for both you and your students! Instead, start by considering the level of learners in your course:

Are lots of your students freshmen? Is this an “Introduction to…” course? If so, many of your
learning objectives may target the lower-order Bloom’s skills, because your students are
building foundational knowledge. However, even in this situation, you should strive to move a few of your objectives into the applying and analyzing levels, but getting too far up the taxonomy could create frustration and unachievable goals.
Are most of your students juniors and seniors? Graduate students? Do your students have
a solid foundation in much of the terminology and processes you will be working on in your
course? If so, then you should not have many remembering and understanding level
objectives. You may need a few, for any radically new concepts specific to your course.
However, these advanced students should be able to master higher-order learning
objectives. Too many lower-level objectives might cause boredom or apathy.
How Bloom’s works with learning objectives
Fortunately, there are “verb tables” to help identify which action verbs align with each level
in Bloom’s Taxonomy.

You may notice that some of these verbs on the table are associated with multiple Bloom’s
Taxonomy levels. These “multilevel verbs” are actions that could apply to different
activities. For example, you could have an objective stating “At the end of this lesson,
students will be able to explain the difference between H2O and OH-.” This would be an
understanding-level objective. However, if you wanted the students to be able to “…explain the shift in the chemical structure of water throughout its various phases,” that would be an analyzing-level objective.

Adding to this confusion, you can locate Bloom’s verb charts that list verbs at levels
different from what we list below. Just keep in mind that it is the skill, action, or activity you
will teach using that verb that determines the Bloom’s Taxonomy level.

Bloom’s Level – Key Verbs (keywords) – Example Learning Objective

Create – design, formulate, build, invent, create, compose, generate, derive, modify, develop. Example: By the end of this lesson, the student will be able to design an original homework problem dealing with the principle of conservation of energy.

Evaluate – choose, support, relate, determine, defend, judge, grade, compare, contrast, argue, justify, convince, select, evaluate. Example: By the end of this lesson, the student will be able to determine whether using conservation of energy or conservation of momentum would be more appropriate for solving a dynamics problem.

Analyze – classify, break down, categorize, analyze, diagram, illustrate, criticize, simplify, associate. Example: By the end of this lesson, the student will be able to differentiate between potential and kinetic energy.

Apply – calculate, predict, apply, solve, illustrate, use, demonstrate, determine, model, perform, present. Example: By the end of this lesson, the student will be able to calculate the kinetic energy of a projectile.

Understand – describe, explain, paraphrase, restate, give original examples of, summarize, contrast, interpret, discuss. Example: By the end of this lesson, the student will be able to describe Newton’s three laws of motion in his or her own words.

Remember – list, recite, outline, define, name, match, quote, recall, identify, label, recognize. Example: By the end of this lesson, the student will be able to recite Newton’s three laws of motion.
Learning objective examples adapted from Nelson Baker at Georgia Tech:
[Link]@[Link]
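The verb table above can be turned into a simple lookup for drafting objectives. The verb sets below are abbreviated from the table, and the earlier caveat about multilevel verbs still applies: the activity you teach, not the verb alone, ultimately determines the Bloom’s level.

```python
# Abbreviated verb lists per Bloom's level (illustrative, not exhaustive)
BLOOM_VERBS = {
    "remember":   {"list", "recite", "define", "name", "recall", "label"},
    "understand": {"describe", "explain", "paraphrase", "summarize", "discuss"},
    "apply":      {"calculate", "predict", "solve", "demonstrate", "use"},
    "analyze":    {"classify", "categorize", "diagram", "differentiate"},
    "evaluate":   {"judge", "justify", "defend", "compare", "evaluate"},
    "create":     {"design", "formulate", "invent", "compose", "develop"},
}

def bloom_levels(verb):
    """Return every level the verb appears under (multilevel verbs return several)."""
    return [level for level, verbs in BLOOM_VERBS.items() if verb.lower() in verbs]

print(bloom_levels("design"))     # ['create']
print(bloom_levels("calculate"))  # ['apply']
```

A lookup like this is only a drafting aid; when a verb maps to more than one level, classify the objective by the skill being assessed.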

How Bloom’s works with Quality Matters


For a course to meet the Quality Matters standards, it must have measurable learning
objectives. Using a verb table like the one above will help you avoid verbs that cannot be
quantified, like: understand, learn, appreciate, or enjoy. Quality Matters also requires that
your course assessments (activities, projects, and exams) align with your learning
objectives. For example, if your learning objective has an application-level verb, such as
“present,” then you cannot demonstrate that your students have mastered that learning
objective by simply having a multiple-choice quiz.

Course-level and lesson-level objectives


The biggest difference between course-level and lesson-level objectives is that we don’t directly assess course-level objectives. Course-level objectives are simply too broad. Instead, we use several lesson-level outcomes to demonstrate mastery of one course-level outcome. To create good course-level objectives, ask yourself: “What do I want the students to have mastered by the end of the course?” Then, after finalizing the course-level outcomes, make sure that mastery of all the lesson-level outcomes underneath confirms mastery of the course-level outcome. In other words, if students can prove through assessment that they can do every one of the lesson-level outcomes in a section, then you as the instructor can agree they have mastered the course-level outcome.

How Bloom’s works with course level and lesson level objectives:
Course-level objectives are broad. You may only have 3-5 course-level objectives. They
would be difficult to measure directly because they overarch the topics of your entire
course.
Lesson-level objectives are what we use to demonstrate that a student has mastery of the
course-level objectives. We do this by building lesson-level objectives that build toward the
course-level objective. For example, a student might need to demonstrate mastery of 8
lesson-level objectives in order to demonstrate mastery of one course-level objective.
Because the lesson-level objectives directly support the course-level objectives, they need to build up through Bloom’s Taxonomy to help your students reach mastery of the course-level objectives. Use Bloom’s Taxonomy to make sure that the verbs you choose for your lesson-level objectives build up to the level of the verb in the course-level objective. The lesson-level verbs can be below or equal to the course-level verb, but they cannot be higher. For example, your course-level verb might be an applying-level verb, “illustrate.” Your lesson-level verbs can then come from any Bloom’s level at or below that level (applying, understanding, or remembering).
Steps towards writing effective learning objectives:
o Make sure there is one measurable verb in each objective. Each objective needs one verb: either a student can master the objective, or they fail to master it. If an objective has two verbs (say, define and apply), what happens if a student can define but not apply? Are they demonstrating mastery?
o Ensure that the verb in the course-level objective is at least as high on Bloom’s Taxonomy as the highest lesson-level objective that supports it. (We can’t verify that students can evaluate if our lessons only taught, and assessed, their ability to define.)
o Strive to keep all your learning objectives measurable, clear, and concise.

When you are ready to write, it can be helpful to list the Bloom’s level next to the verb you choose, in parentheses. For example:

Course level objective 1. (apply) Demonstrate how transportation is a critical link in the
supply chain.

1.1. (understand) Discuss the changing global landscape for businesses and other
organizations that are driving change in the global environment.
1.2. (apply) Demonstrate the special nature of transportation demand and the influence
of transportation on companies and their supply chains operating in a global
economy.

This trick will help you quickly see what level of verbs you have. It will also let you check that the course-level objective is at least as high a Bloom’s level as any of the lesson-level objectives underneath it.
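The rule that lesson-level verbs must sit at or below the course-level verb’s Bloom’s level can also be checked mechanically. A sketch using numeric ranks (1 = remembering … 6 = creating); the example data mirrors objective 1 and its sub-objectives above:

```python
# Numeric ranks for the six levels (1 = remembering ... 6 = creating)
RANK = {"remember": 1, "understand": 2, "apply": 3,
        "analyze": 4, "evaluate": 5, "create": 6}

def misaligned(course_level, lesson_levels):
    """Return lesson-level objectives ranked above the course-level objective."""
    limit = RANK[course_level]
    return [level for level in lesson_levels if RANK[level] > limit]

# Course objective 1 is at the applying level; 1.1 and 1.2 sit at or below it
print(misaligned("apply", ["understand", "apply"]))  # [] -> aligned
print(misaligned("apply", ["evaluate"]))             # ['evaluate'] -> too high
```

An empty result means every lesson-level objective is at or below the course-level verb, which is exactly what the alignment rule requires.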

Before you begin constructing your objectives:


Please read our Learning Objectives: Before and After Examples page.

Additional External Resources:


For a longer list of Bloom’s verbs, see the linked resource. Tip: you can use your browser’s “find” function (press Ctrl-F, or Command-F on a Mac) to locate specific verbs in the list.

Q.3. How can diagnostic assessments identify learning gaps, and what follow-up strategies should teachers adopt? Illustrate with examples.

Diagnostic Assessment in Education: 3 Examples from a Teacher:

For educators, knowing where a student is in terms of skills and abilities is critical. It is challenging to move forward in a lesson or unit if the teacher doesn’t have a clear understanding of what a student can and cannot do. This is why diagnostic assessment tools such as pretesting and formative assessments are important. These tools allow educators to curate individualized lessons that meet students where they are, rather than teaching without a clear focus on student ability.

Creating diagnostic tools and assessments can be time-consuming; however, using technology and online testing platforms can make the process more efficient. This allows educators to quickly determine a student’s level and begin planning impactful lessons right away. Within diagnostic testing, there are many different assessment formats that educators may utilize to improve student learning and growth.
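Scoring a pretest per topic and flagging weak areas is straightforward to automate. A minimal sketch; the topics, the 60% threshold, and the sample data are hypothetical and not tied to any particular platform:

```python
def find_gaps(item_results, threshold=0.6):
    """Map each topic's item results (True = correct) to a proportion correct,
    and return only the topics falling below the threshold."""
    gaps = {}
    for topic, results in item_results.items():
        proportion = sum(results) / len(results)
        if proportion < threshold:
            gaps[topic] = round(proportion, 2)
    return gaps

# Hypothetical pretest tagged by topic
pretest = {
    "fractions": [True, False, False, False],   # 1 of 4 correct
    "decimals":  [True, True, True, False],     # 3 of 4 correct
}
print(find_gaps(pretest))  # {'fractions': 0.25}
```

Output like this tells the teacher which topics need reteaching before the unit proceeds, which is the follow-up step diagnostic data exists to inform.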

What are Diagnostic Assessments?


In education, diagnostic assessments are evaluations used to identify students’ strengths,
weaknesses, knowledge gaps, and learning needs. These assessments are designed to
inform educators about students’ current levels of understanding and to guide
instructional planning. The primary goal is to diagnose specific areas where students may
need additional support or intervention.

Some common types of diagnostic assessments in education include:

Pre-Assessment:

Purpose: Conducted before a new unit or course to gauge students’ prior knowledge and
skills.
Examples: Pre-tests, concept mapping, and K-W-L (Know, Want to know, Learned) charts.
Reading Diagnostic Assessment:

Purpose: Identify reading difficulties, literacy levels, and specific areas of weakness in
reading.
Examples: Phonics assessments, fluency assessments, reading comprehension
assessments, and decoding assessments.
Math Diagnostic Assessment:

Purpose: Assess mathematical understanding, identify gaps in knowledge, and pinpoint specific areas of difficulty.
Examples: Math fluency assessments, problem-solving assessments, and concept-specific assessments.
Writing Diagnostic Assessment:

Purpose: Evaluate students’ writing skills, including mechanics, organization, and expression.
Examples: Writing samples, essays, and rubrics assessing writing conventions.
Science and Social Studies Diagnostic Assessment:

Purpose: Assess understanding of key concepts in science and social studies.
Examples: Concept maps, content-specific quizzes, and project-based assessments.
English Language Learners (ELL) Diagnostic Assessment:

Purpose: Evaluate language proficiency and identify areas of need for English language
learners.
Examples: Language proficiency assessments, oral interviews, and language development
portfolios.
Special Education Diagnostic Assessment:

Purpose: Assess students with special needs to identify learning disabilities or challenges.
Examples: Psycho-educational assessments, individualized educational plan (IEP)
evaluations, and adaptive behavior assessments.
The results of diagnostic assessment tools are valuable for teachers to tailor their
instruction to meet individual student needs. These assessments provide insights into
instructional adjustments, the development of targeted interventions, and the creation of
differentiated learning experiences. They may also help to promote a more personalized
and responsive approach to teaching, fostering student growth and academic success.

Differences Between Formative and Summative Testing


Formative and summative assessments are two distinct types of assessments used in
education, each serving different purposes in the learning and evaluation process. The
summative assessment formats tend to be more traditional and may include end-of-unit or end-of-year tests, essays, or projects, whereas formative assessments are ongoing, smaller in scope, and may provide immediate feedback. Generally speaking, formative assessment
size, and may provide immediate feedback. Generally speaking, formative assessment
formats allow teachers to see what students know and change instruction accordingly.
Summative assessments are designed to tell a teacher what a student has learned after all
instruction has occurred.

Formative and summative assessments differ not only in their purposes and timing but also
in their testing formats. The formats of these assessments are designed to align with their
respective goals and the stage of the learning process.

Here are key differences in the assessment formats of formative and summative
assessments:

Formative Assessment:
• Informal and Varied: Can include classroom discussions, quizzes, polls, exit tickets, short written reflections, observations, and peer reviews.
• Low-Stakes: The emphasis is on providing constructive feedback to guide learning rather than assigning grades; the goal is to identify and address learning gaps promptly.
• Adaptable and Flexible: Formats can be modified based on the needs of individual students or the class as a whole.
• Qualitative Feedback: Teachers may provide comments, suggestions, or corrective guidance to help students improve their understanding.
• Frequent and Ongoing: Often involves quick checks for understanding integrated into daily instruction.
• Peer and Self-Assessment: May include opportunities for students to provide feedback to their peers or to reflect on their own understanding and progress.

Summative Assessment:
• Standardized and Structured: Aims to provide a comprehensive and consistent evaluation of students’ overall performance at the end of a unit, course, or academic period.
• High-Stakes: Used for making important decisions such as assigning grades, determining promotion, or evaluating program effectiveness.
• Objective and Closed-Ended: May involve formats such as multiple-choice exams, standardized tests, essays, and long-answer questions.
• Score-Based Feedback: While some written feedback may be provided, the primary focus is on summarizing and quantifying students’ performance.
• Comprehensive Coverage: Covers a broad range of topics and learning objectives to assess overall achievement and mastery of the content covered during the entire instructional period.
• External Evaluation: Often involves external evaluation, especially in the case of standardized tests, which are designed to provide an impartial and standardized measure of student achievement.
Examples of Diagnostic Assessment Tools
Diagnostic assessments may be used to assess students before learning is completed or
before it has started altogether. Within the classroom, a teacher may implement and use
these assessments in a variety of ways. Here are 3 real-world examples of diagnostic
assessments and how they can be used to inform instruction:

English Language Development (ELD)

Schools enroll new students all the time. In some cases, these students may speak a language other than English, and it is vitally important that they be placed correctly within an English Language Development classroom. To do this, many school districts or states use a diagnostic screener test designed to accurately portray a student’s English skills; students are placed into ELD programming, or not, based on the results.

Science

A science teacher is teaching a new lesson on the human body, specifically the cardiovascular system. Rather than jumping right in and assuming the students are all new to the material, the teacher provides them with a scenario in which a person begins to exercise. The teacher asks the students to describe what they think will happen to the person’s body as they exercise and why they think these changes occur.

This small assessment can help the teacher gauge whether students know that heart rate and
breathing rate will increase, and whether they know that this happens to increase oxygen
levels in the muscles. Once the teacher has this information, they can design lessons to meet
the needs of their unique learners.

Physical Education

In the physical education class, a teacher may have students with a wide range of abilities.
For example, one student may be a star baseball player who can throw a ball with ease,
while others may not know how to throw at all. As a diagnostic assessment, a teacher may
ask students to pick up a ball and throw it at a target across the room. Through
observational notes, or maybe even a video recording, the teacher can break down each
student’s throw and designate the next steps.

The main goal in developing diagnostic assessments is to ensure that students get what
they need in terms of learning when they need it, and that educators can identify and fill any
gaps in the student’s learning. Diagnostic assessment tools may be used differently
depending on the class: some classes may use observation and student interviewing, while
others may use pretesting and journaling. It depends on the class and how much
information the educator needs.

With traditional testing methods, designing and collecting diagnostic data was a difficult
task. By implementing online testing platform technology, such as TAO, however, educators
can design, implement, and analyze diagnostic and other assessment formats quickly and
efficiently. In doing so, educators can deliver targeted lessons, enhance student
engagement, and, ultimately, increase student growth and achievement.

Q. 4. Analyze the strengths and weaknesses of selection-type (e.g., MCQs) and
supply-type (e.g., essays) test items. When should each be prioritized?

Assessment is a critical aspect of the teaching-learning process. Two major categories of test
items include selection-type items (e.g., multiple-choice questions or MCQs) and supply-type
items (e.g., short answers, essays). Each format has specific advantages and limitations, making
them suitable for different instructional objectives.

I. Selection-Type Test Items (e.g., MCQs)


Strengths:

1. Objective Scoring:
o MCQs offer highly objective evaluation with minimal chance for scorer bias.
2. Wide Content Coverage:
o They allow assessment of a broad range of content in a relatively short time.
3. Efficient Testing:
o MCQs are easy to administer and grade, especially with automated systems.
4. Reliability:
o Due to their standardized nature, MCQs tend to have high reliability.
5. Diagnostic Use:
o Ideal for identifying specific misconceptions or gaps in knowledge.

Weaknesses:

1. Limited Depth:
o Often test recall or recognition rather than deep understanding or higher-
order thinking.
2. Guessing Factor:
o Test-takers may guess the correct answer, which can affect validity.
3. Time-Consuming to Construct:
o Creating high-quality MCQs that test application and analysis is challenging.
4. Surface-Level Learning:
o May encourage rote memorization rather than critical thinking.
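The guessing factor noted above is commonly addressed with the standard correction-for-guessing formula, corrected score = R − W/(k − 1), where R is the number of right answers, W the number of wrong answers, and k the number of options per item. The sketch below is an illustration added here (the student data are invented), not part of the original assignment:

```python
# Correction-for-guessing: subtract a penalty for each wrong answer so that
# blind guessing on a k-option MCQ has an expected payoff of zero.
# (Illustrative sketch; the numbers below are invented.)

def corrected_score(right, wrong, options_per_item):
    """Return right - wrong / (options_per_item - 1)."""
    return right - wrong / (options_per_item - 1)

# A student answers 30 of 40 four-option items correctly and 10 incorrectly
# (no blanks): the corrected score is 30 - 10/3, about 26.67.
print(corrected_score(30, 10, 4))
```

Under this scoring rule, a student who guesses blindly on every item gains nothing on average, which partially mitigates the validity concern raised above.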

II. Supply-Type Test Items (e.g., Essays, Short Answers)


Strengths:

1. Assessment of Higher-Order Thinking:
o Essays are excellent for evaluating analysis, synthesis, creativity, and
argumentation.
2. Authentic Expression:
o Allows students to organize and express thoughts in their own words.
3. Insight into Student Thinking:
o Teachers gain deeper understanding of students’ thought processes and
problem-solving skills.
4. Encourages Deep Learning:
o Prepares students to synthesize and integrate knowledge.

Weaknesses:

1. Subjective Scoring:
o Scoring can be influenced by bias and inconsistency unless rubrics are
rigorously applied.
2. Limited Content Sampling:
o Typically assess fewer topics, which may not reflect the entire syllabus.
3. Time-Intensive:
o Both writing and grading are more time-consuming compared to MCQs.
4. Writing Skills May Skew Results:
o Performance can be influenced by a student’s language proficiency rather
than content knowledge.

III. When Should Each Be Prioritized?


For each assessment purpose, the recommended item type and rationale are:

• Testing factual knowledge → Selection-Type (MCQs): efficient and objective for
assessing basic recall.
• Broad content coverage → Selection-Type (MCQs): enables evaluation of more material
in limited time.
• Measuring critical thinking → Supply-Type (Essays): essays allow assessment of
reasoning and depth of understanding.
• Diagnostic assessment → Selection-Type (MCQs): helps pinpoint specific
misunderstandings quickly.
• Evaluating communication skills → Supply-Type (Essays): enables students to articulate
ideas, arguments, and narratives.
• Large-scale standardized testing → Selection-Type (MCQs): easier to score, more
practical for large populations.
• Classroom assessment and feedback → Mix of Both: a combination gives a more holistic
picture of learning.

Conclusion

Both selection-type and supply-type items have essential roles in educational assessment. MCQs
provide efficiency, objectivity, and broad coverage, making them ideal for large-scale or
preliminary assessments. Essays, on the other hand, are indispensable for evaluating depth,
creativity, and analytical ability. Educators should align the test format with the learning
objectives, ensuring a balanced approach that measures not just what students know, but how
they think.


Q. 5. Compare internal consistency and inter-rater reliability, providing
examples of when each is crucial in classroom assessments.

In educational assessment, reliability refers to the consistency of measurement. Two key types
of reliability in classroom assessments are internal consistency and inter-rater reliability. Each
type addresses different aspects of consistency and is vital in different contexts of teaching and
evaluation.

I. Internal Consistency
Definition:

Internal consistency refers to the degree to which all items on a test measure the same construct
or concept. It ensures that the items within a test are correlated and function cohesively to assess
a specific skill or knowledge domain.

Measurement:

• Common statistical tools include:


o Cronbach’s Alpha
o Kuder-Richardson Formula (KR-20 or KR-21)

Example in Classroom Assessment:

Suppose a teacher creates a 20-item multiple-choice test to assess students’ understanding of
algebraic equations. If the items are internally consistent, students who perform well on one
question related to solving equations are likely to perform well on the others.
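To make this concrete, Cronbach’s Alpha can be computed directly from an item-score matrix using α = k/(k − 1) × (1 − Σ item variances / variance of total scores). The sketch below applies the formula to a tiny invented 4-item test; it is an illustration added here, not part of the original text:

```python
# Computing Cronbach's Alpha by hand for a small score matrix
# (rows = students, columns = items, 1 = correct, 0 = incorrect).
# The scores below are invented for demonstration only.

def cronbach_alpha(scores):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(scores[0])  # number of items

    def variance(values):
        mean = sum(values) / len(values)
        return sum((v - mean) ** 2 for v in values) / len(values)

    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

scores = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
]
print(round(cronbach_alpha(scores), 3))  # prints 0.667
```

By convention, an alpha of about 0.7 or higher is usually taken to indicate acceptable internal consistency for a classroom test.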
When It Is Crucial:

• During the construction of standardized or teacher-made tests.
• When measuring a single construct or concept, such as grammar proficiency or
reading comprehension.
• To ensure test reliability before high-stakes assessments, like midterm or final
exams.

Benefits of High Internal Consistency:

• Confirms that all items reflect the same skill or learning objective.
• Enhances the validity of the test results by ensuring coherent measurement.

II. Inter-Rater Reliability


Definition:

Inter-rater reliability refers to the level of agreement or consistency between two or more
evaluators (raters) when assessing open-ended or subjective responses.

Measurement:

• Techniques include:
o Percentage agreement
o Cohen’s Kappa
o Intraclass Correlation Coefficient (ICC)

Example in Classroom Assessment:

Imagine two teachers grading students’ essays on a history assignment. If both teachers
consistently give similar scores based on the same rubric, the inter-rater reliability is high.
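Agreement of this kind can be quantified with Cohen’s Kappa, κ = (p_o − p_e)/(1 − p_e), where p_o is the observed agreement and p_e the agreement expected by chance from each rater’s marginal frequencies. The sketch below computes it for two hypothetical teachers; the rubric levels are invented for illustration and are not part of the original text:

```python
# Cohen's Kappa for two raters assigning categorical rubric levels.
# (Illustrative sketch; the ratings below are invented.)
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """kappa = (p_o - p_e) / (1 - p_e) for two raters' categorical scores."""
    n = len(rater_a)
    # Observed agreement: fraction of cases where the two raters match.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal category frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical rubric levels two teachers assigned to the same 10 essays.
teacher_1 = ["A", "B", "B", "C", "A", "B", "C", "C", "A", "B"]
teacher_2 = ["A", "B", "C", "C", "A", "B", "C", "B", "A", "B"]
print(round(cohens_kappa(teacher_1, teacher_2), 3))  # prints 0.697
```

A kappa near 1 indicates strong agreement beyond chance, while values near 0 suggest the raters agree no more often than chance alone would predict.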

When It Is Crucial:

• In assessments involving subjective judgment, such as:
o Essay writing
