Assessment of Learning 1

Criterion-referenced test

Uploaded by

Jessie Desabille

ASSESSMENT OF LEARNING

Mr. Carmilo F. Flores


The names of the successful takers of the Licensure Examination for Teachers (L.E.T.) given last March 2017 were released in May by the PRC.

10.39%, or 5,600 out of the 53,915 applicants, passed the L.E.T. for elementary education;
25.46%, or 18,482 out of the 72,584 examinees, passed the L.E.T. for secondary education.
 1. What is the difference between reliability and validity?
A. Reliability is concerned with the consistency of measures, whereas validity is concerned with whether a measure of a concept measures the concept.
B. Validity is concerned with the consistency of measures, whereas reliability is concerned with causality.
C. Reliability is concerned with predictability, whereas validity is concerned with causality.
D. Reliability and validity may be used interchangeably.
2. Teacher Fenny wanted to teach the pupils the skill to do cross stitching. Her check-up quiz was a written test on the steps of cross stitching. Which characteristic of a good test does it lack?

A. Scorability
B. Objectivity
C. Reliability
D. Validity
Test VALIDITY
• The validity of a test concerns what the test measures and how well it does so.
—Anne Anastasi

o Validity is the agreement between a test score or measure and the quality it is believed to measure.

o It answers the question: “Is the test actually measuring what it is designed to measure?”
Content Validity

Content validity is a logical process of establishing connections between the test items and a set of pre-defined objectives.

- It answers the question: “How adequately does the test content sample the larger universe of situations it represents?”

Content validity is most often employed with achievement tests, so the performance domain is often defined by a list of instructional objectives.
Criterion-Related Validity

It is the degree to which the test scores can be related to a criterion. It indicates the effectiveness of a test in predicting an individual's performance in specified activities.

Predictive Validity is established by correlating the sets of scores obtained from two measures given at a longer time interval, in order to describe the future performance of an individual.

Concurrent Validity is determined by correlating the scores obtained from two measures given concurrently, in order to describe the present status of an individual.
Construct Validity

The construct validity of a test is the extent to which the test may be said to measure a theoretical construct or trait.

- It is established by statistically comparing psychological factors that affect the scores in a test.
Type | Question to be answered

Content Validity | How adequately does the test content sample the larger universe of situations it represents?

Criterion-related Validities | How well does test performance predict future performance (predictive validity) or estimate present standing (concurrent validity) on some other valued measure called a criterion?

Construct Validity | How well can test performance be explained in terms of psychological attributes?
3. Mr. Gabriel correlated the scores of his pupils in the social studies test with their grades in the same subject for the second grading period. What test validity did he try to establish?

A. Concurrent validity
B. Construct validity
C. Criterion-related validity
D. Content validity
4. What type of validity is needed if a test must cover the course objectives and scope?

A. Concurrent validity
B. Construct validity
C. Criterion-related validity
D. Content validity
5. Which questioning practice promotes more class interaction?

A. Asking the question before calling on a student.
B. Focusing on divergent questions.
C. Focusing on convergent questions.
D. Asking rhetorical questions.
Content Question
A content question is a question with which the speaker asks the hearer to supply specific information about participants or settings.

Content questions are questions that contain an interrogative phrase, like who, what, where, when.
CLOSED QUESTIONS (Convergent)

A closed question can be answered with either a single word or a short phrase.

Closed questions have the following characteristics:
 They give you facts.
 They are easy to answer.
 They are quick to answer.
 They keep control of the conversation with the questioner.
OPEN QUESTIONS (Divergent)

An open question is likely to receive a long answer.

Open questions have the following characteristics:
 They ask the respondent to think and reflect.
 They will give you opinions and feelings.
 They hand control of the conversation to the respondent.
FUNNEL QUESTIONS

This technique involves starting with general questions, then homing in on a point in each answer, and asking for more and more detail at each level.
LEADING QUESTIONS

Leading questions try to lead the respondent to your way of thinking.
RHETORICAL QUESTION
A rhetorical question is a figure of speech in the form of a question that is asked in order to make a point, rather than to elicit an answer.

Though classically stated as a proper question, such a rhetorical device may be posed declaratively by implying a question, and therefore may not always require a question mark when written.

 A rhetorical question is asked when the questioner himself already knows the answer or an answer is not actually demanded, so an answer is not expected from the audience. Such a question is used to emphasize a point or draw the audience's attention.
6. This technique involves starting with general questions, then homing in on a point in each answer, and asking for more and more detail at each level.

A. Convergent C. Rhetorical
B. Leading D. Funnel
7. This questioning technique may not elicit an answer; its purpose is to emphasize a point or draw the audience's attention.

A. Informational C. Rhetorical
B. Leading D. Divergent
8. For maximum interaction, a teacher ought to avoid ________ questions.

A. Informational
B. Rhetorical
C. Leading
D. Divergent
9. If a teacher has to ask more higher-order questions, he has to ask more ______ questions.

A. Closed
B. Fact
C. Concept
D. Divergent
10. Which is a convergent question?

A. Did the LA SOLIDARIDAD accomplish its purpose? Why or why not?
B. Who was the first editor of the LA SOLIDARIDAD?
C. Why did the LA SOLIDARIDAD come about?
D. If you were editor of the LA SOLIDARIDAD, what would you do?
11. Which assessment task is most fit for
logic-smart learners?
A. Solving a puzzle
B. Showing the steps through a diagram
C. Describing the solution
D. Composing a song
12. With synthesizing skills in mind, which one
has the highest diagnostic value?

A. Essay test
B. Performance Test
C. Completion Test
D. Multiple Choice test
13. How does measurement differ from evaluation?
A. Measurement is a pre-requisite of assessment
while evaluation is the pre-requisite testing
B. Measurement is assigning a numerical value to a
given trait while evaluation is giving meaning to the
numerical value of the trait
C. Measurement is gathering data while assessment
is quantifying the data gathered
D. Measurement is the process of quantifying data
while evaluation is the process of organizing data.
14. Mr. Alimagno gave his students a test to
find out how well they could demonstrate the
newly-taught skills in basketball. What
possible type of assessment could have been
used by Mr. Alimagno?

A. Paper-and-pencil test
B. Performance-based test
C. Journals
D. Portfolios
15.Which term refers to that process of
analyzing, interpreting, and giving judgment
on the value of organized data?

A. Test
B. Measurement
C. Evaluation
D. Assessment
MEASUREMENT
--An educational process that checks the
specificity of an individual which is expressed
quantitatively.
--the quantification of what students learned
through the use of tests, questionnaires,
rating scales, checklists, and other devices.
--It answers the question, how much does a
student learn or know?
EVALUATION
 An educational process that checks the personality of an individual, which is expressed qualitatively.
 A process of making judgements, assigning value, or deciding on the worth of a student's performance.
 EVALUATION answers the question: how good, adequate, or desirable is it?
ASSESSMENT
 The full range of information gathered and synthesized by teachers about their students and their classrooms, gathered through observation, verbal exchange, written reports, or outputs.
 ASSESSMENT looks into how much change has occurred in the student's acquisition of a skill, knowledge, or value before and after a given learning experience.
PURPOSES OF M-E-A

 Appraisal of the school, curriculum, instructional materials, physical plant, equipment
 Appraisal of the teacher
 Appraisal of the school child
FUNCTIONS OF M-E-A
 Improvement of student learning
 Identification of students’ strengths and
weaknesses
 Assessment of the effectiveness of a particular
teaching strategy
 Appraisal of the effectiveness of the curriculum
 Assessment and improvement of teaching
effectiveness
 Communication with and involvement of
parents in their children’s learning
16. This type of test is designed to measure student performance against a fixed set of predetermined criteria or learning standards.
A. Intelligence test
B. Aptitude test
C. Criterion-referenced test
D. Norm-referenced test
17. Mrs. Pansoy uses a method of grading that does not compare a learner's performance with that of the others. What is this method called?

A. criterion-referenced grading
B. average grading
C. norm-referenced grading
D. relative grading
APPROACHES TO EVALUATION

CRITERION-REFERENCED MEASURE (CRM)
- A student's performance is compared against a predetermined or agreed-upon standard.
- Designed to measure students' performance with respect to some particular criterion or standard. It is used to evaluate performance against performance objectives.

NORM-REFERENCED MEASURE (NRM)
- A student's performance is compared with the performance of other students.
- Designed to measure the ability of one student compared to the abilities of other students in the same class.
METHODS OF COLLECTING ASSESSMENT DATA

1. Paper-and-pen
 Supply Type – requires the student to produce or construct an answer to the question
 Selection Type – requires the student to choose the correct answer from a list of options

2. Observation
 Involves watching the students as they perform certain learning tasks like reading, speaking....
TYPES OF EVALUATION

1. DIAGNOSTIC EVALUATION
- Undertaken before instruction, in order to assess students' prior knowledge of a particular topic or lesson. Done to determine strengths and weaknesses of students as bases for remedial instruction.
2. FORMATIVE EVALUATION
-- Administered during the instructional process to provide feedback to students and teachers on how well the students are learning the lesson being taught. Frequently done to determine who have reached mastery of the lesson.
3. SUMMATIVE EVALUATION
Undertaken to determine student achievement
for grading purposes. Usually done at the end
of a unit, which summarizes the student’s
accomplishments.
18. Teacher Alma does norm-referenced interpretation of scores. Which of the following does she do?
A. She uses a specified content as its frame of reference.
B. She describes group performance in relation to a level of mastery of the test.
C. She compares each individual student's score with others' scores.
D. She describes what their performance should be.
19. What is claimed to be the overall advantage of criterion-referenced interpretation?
A. An individual's score is compared with the set mastery level.
B. An individual's score is compared with that of his peers.
C. An individual's score is compared with the average scores.
D. An individual's score does not need to be compared with any measure.
20.Which is an element of norm-referenced
grading?
A. The student’s past performance.
B. An absolute standard
C. The performance of the group
D. What constitutes a perfect score
21. Which characteristic of good test will pupils
be assured of when a teacher constructs a
table of specifications for test construction
purposes?
A. Authenticity
B. Reliability
C. Construct Validity
D. Content Validity
CRITERIA to CONSIDER in
CONSTRUCTING A GOOD TEST
 VALIDITY – the degree to which the test measures what it is intended to measure.
 RELIABILITY – refers to the consistency of scores obtained by the same person when retested using the same instrument or one that is parallel to it.
 ADMINISTRABILITY – the test should be administered with ease, clarity, and uniformity so that the scores obtained are comparable; this can be attained by setting a time limit and giving oral instructions.
CRITERIA to CONSIDER in
CONSTRUCTING A GOOD TEST
 SCORABILITY – the test should be easy to score, such that the directions for scoring are clear, the scoring key is simple, and provisions for answer sheets are made.
 ECONOMY – the test should be given in the cheapest way, which means that answer sheets must be provided so that the test can be given from time to time.
 ADEQUACY – the test should contain a wide sampling of items to determine the educational outcomes or abilities, so that the resulting scores are representative of the total performance in the areas measured.
 AUTHENTICITY – the test should simulate real-life situations.
22. Which method in measuring reliability involves administering the same test twice and correlating the scores in both tests?

A. Parallel forms
B. Test-retest
C. Analytic
D. Holistic
What to use to measure RELIABILITY
 TEST-RETEST = involves administering the same test twice, with an interval of one or two weeks, and correlating the scores obtained from both tests.
 PARALLEL FORMS = done by correlating the scores of equivalent forms of the test given with a close time interval between forms.
 SPLIT-HALF = involves dividing a test into two subsets (even-numbered and odd-numbered items) and correlating the scores in these subsets.
23. What is the correct sequence of these given
steps in test construction?
1.Prepare a table of specifications
2.Determine the purpose of the test
3.Specify the instructional objective
4.Determine the item format, number of test
items, and difficulty level of the test
A. 2,3,1,4 C. 1,2,3,4
B. 1,4,2,3 D. 3,1,4,2
STEPS IN TEST CONSTRUCTION
1. Determining the purpose of the test
2. Specifying the instructional objectives
3. Preparing the table of specifications
4. Determining the item format, number of test
items and difficulty level of the test
5. Writing the test items that match the objectives
6. Editing, revising and finalizing test items
7. Administering the test
8. Scoring
9. Tabulating and analyzing the results
10. Assigning grades
PRINCIPLES OF TEST CONSTRUCTION

PRINCIPLE | APPLICATION

1. COMPREHENSIVENESS: The test should include items that measure the content areas and processes covered in the lesson. | Prepare a table of specifications and use it as a guide for writing test items.

2. COMPATIBILITY: There should be a close association between the intended learning outcomes and the test items. | Match the test items with the instructional objectives.

3. COMPREHENSIBILITY: The test items as well as the directions should be clearly understood by the test-takers. | Keep the reading difficulty and vocabulary level of the test items as simple as possible. Ensure that the test directions are direct and clear.

4. ACCURACY: Each test item should have only one correct answer. It should be unanimously acceptable to the experts concerned. | State each test item so that only one answer is correct.
24. Which is the first step in planning an
achievement test?
A. Select the type of items to be used
B. Decide on the length of the test.
C. Define the instructional objective.
D. Build a table of specification.
25. Which guideline in test construction is NOT
observed in this item?
Edgar Allan Poe wrote ______________.
A. The length of the blank suggests the
answer.
B. The central problem is not packed in the
stem.
C. It is open for more than one correct answer
D. The blank is at the end of the question
26. When constructing a teacher-made test, it is most important for the teacher to:
A. Develop one-fourth of the question at the
level of challenge appropriate for the test
takers.
B. Ask question based on both factual and
conceptual learning
C. Ask students to express their point of view
D. Stress the objectives used during the lesson
27. Which of the following is not considered in
preparing items for objective tests?
A. Make each test item comprehensive
B. Group items belonging to the same type
together
C. Provide specific directions on how the test is
to be taken
D. Review very difficult test items
28. The type of test that evaluates a particular student's performance by comparing it to the performance of some other well-defined group of students is called ___________________.
A. Criterion-referenced test
B. Summative test
C. Norm-referenced test
D. Diagnostic test
TYPES OF TESTS AND THEIR USES
1. Mode of Response

◦ Oral
◦ Written
◦ Performance
Advantages of written tests

 More evidence could be obtained of the achievements of each pupil or teacher.
 A written record of those achievements would be produced.
 Each pupil would be asked the same questions; thus all would be treated alike.
 There would be less possibility of favoritism for or bias against particular pupils or teachers.
TYPES OF TESTS AND THEIR USES
2. Ease of Quantification of Response
 Objective – with definite/ exact answer
 Subjective – divergent answers
TYPES OF TESTS AND THEIR USES
3. Mode of Administration
 Individual – one student at a time
 Group – simultaneous
TYPES OF TESTS AND THEIR USES
4. Test Constructor
 Standardized – prepared by an expert or specialist; follows uniform procedures
 Teacher-Made – prepared by the classroom teacher, with no established norm for scoring and interpretation
TYPES OF TESTS AND THEIR USES
5. Mode of Interpreting Results
 Norm-Referenced – comparing performances of students
 Criterion-Referenced – comparing an individual performance with a specific goal
TYPES OF TESTS AND THEIR USES
6. Nature of Answer
 Personality – emotion, social adjustment, dominance and submission, value orientation, disposition, emotional stability, frustration level, degree of introversion or extroversion
 Intelligence – mental ability (I.Q.)
 Aptitude – predicting the likelihood of success in a learning area
 Achievement – to determine what a student has learned from formal instruction
 Accomplishment – to determine what a student has learned from a broader area
TYPES OF TESTS AND THEIR USES
 Socio-metric (Preference) – discovering learner’s likes
and dislikes; social acceptance; social relationships
 Trade – to measure an individual’s skill or
competence in an occupation or vocation
 Speed – to determine ability and accuracy bounded
with time
 Diagnostic – to identify specific strengths and
weaknesses in past and present learning
 Formative – to improve teaching and learning while it
is going on
 Summative – given at the end of instruction to
determine student’s learning and assign grades
 Standardized assessments are defined as assessments constructed by experts and published for use in many different schools and classrooms. These assessments are used in various contexts and serve multiple purposes.
 Standardized tests may comprise different types of items, including multiple-choice, true-false, matching, essay, and spoken items. These assessments may also take the form of traditional paper-and-pencil tests or be administered via computer. In some instances, adaptive testing occurs when a computer is used. Adaptive testing is when the students' performance on items at the beginning of the test determines the next items to be presented.
ADVANTAGES
1. Standardized tests are practical: they are easy to administer and consume less time to administer than other assessments.

2. Standardized testing results are quantifiable. By quantifying students' achievements, educators can identify proficiency levels and more easily identify students in need of remediation or advancement.
ADVANTAGES
3. Standardized tests are scored via computer, which frees up time for the educator.

4. Since scoring is completed by computer, it is objective and not subject to educator bias or emotions.
ADVANTAGES
5. Standardized testing allows educators to compare scores of students within the same school and across schools. This information provides data not only on the individual student's abilities but also on the school as a whole. Areas of school-wide weaknesses and strengths are more easily identifiable.
ADVANTAGES
6. These tests give accurate and reliable comparisons between sub-groups. Such sub-groups involve data on socio-economic status, ethnicity, special needs, and more.

7. Standardized testing provides a longitudinal report of student progress. Over time, educators are able to see a trend of growth or decline and rapidly respond to the student's educational needs.
DISADVANTAGES
Standardized test items are not parallel with typical classroom skills and behaviors. Because the questions have to be generalizable to the entire population, most items assess general knowledge and understanding.
DISADVANTAGES
Since general knowledge is assessed,
educators cannot use standardized test
results to inform their individual instruction
methods. If recommendations are made,
educators may begin to 'teach to the test' as
opposed to teaching what is currently in the
curriculum or based on the needs of their
individual classroom.
DISADVANTAGES
Standardized test items do not assess
higher-level thinking skills.
29. Quiz is to formative test as periodical exam is to ________.

A. Criterion-referenced test
B. Summative test
C. Norm-referenced test
D. Diagnostic test
30. Which does NOT belong to the group?

A. Completion
B. Matching
C. Multiple Choice
D. Alternate response
31. Which are direct measures of competence?

A. Personality Test
B. Performance Test
C. Paper-and-pencil test
D. Standardized tests
32. What do diagnostic tests identify?
A. the specific nature of the remedial program
B. The general weakness in class performance
C. The causes underlying academic difficulties
D. The specific nature of pupil's difficulties
TYPES OF TEACHER-MADE TEST

1. Objective
◦ Multiple Choice
◦ Matching Type
◦ Alternative Response
◦ Completion/Augmentation
◦ Analogy
◦ Rearrangement
◦ Identification
◦ Labeling
TRUE or FALSE TEST
 The true-false item is simply a declarative statement that the student must judge as true or false.
 This item type is characterised by the fact that only two responses are possible.
 Since only two choices are possible, the uninformed student has a fifty-fifty chance of guessing the answer.
 Whenever there are only two possible responses, the true-false item or some adaptation of it is likely to provide the most effective measure.
 When assembling the test, it is necessary to place true and false statements in a random fashion.
Rules for constructing true-false items

 Include only one central, significant point in each statement.
 Word the statement so precisely that it can unequivocally be judged true or false.
 Keep the statements short and use simple language structure.
 Statements of opinion should be attributed to some source.
 Avoid extraneous clues to the answers.
MULTIPLE CHOICE

 Stem – the question or problem in each item; can be presented in 2 ways:
◦ Incomplete statement – all the options end with a period, or only the last option ends with a period.
◦ Direct question – the options do not end with a period, but the stem ends with a question mark.
 Options – the alternatives from which the student selects the correct answer
- there is only one correct/best answer among the options; the less appropriate ones are foils or distracters (the maximum number of options is 5 and the minimum is 4)
ADVANTAGES

 great versatility in measuring objectives, from the level of memorization to the most complex level
 the teacher can cover a substantial amount of course material in a relatively short time
 scoring is objective
 teachers can construct options that require students to discriminate among them, varying in the degree of correctness
 effects of guessing are largely reduced since there are more options
 items are more amenable to item analysis
DISADVANTAGES

 more time-consuming in terms of looking for options that are plausible
 there may be more than one defensible correct answer
RULES

 the essence of the problem should be in the stem; all options should measure the same objective
 there should be coherence in stems and options
 there should be consistency in the length/presentation of choices
 avoid repetition of words in the options
RULES

 the choices should be arranged in ascending/descending order
 the choices should be arranged in vertical/columnar order
 stems and options should be stated positively whenever possible
 avoid negative statements in the stem
 options should be plausible and homogeneous
RULES

 items should have a defensible correct or best option
 vary the placement of correct options (to avoid a pattern)
 avoid overlapping options
 make sure there is only one correct/best answer to an item
RULES

 the stem and options should be on a single page
 avoid using “none of the above”
 avoid using “all of the above”
 it is a poor distracter since it has very little discriminating power to identify knowledgeable from non-knowledgeable students
RULES

 do not have a combination of “all of the above” and “none of the above” in the options
 use four or five options
 there should be uniformity in the number of choices for all the items
 there should be no articles a/an at the end of the stem
 the stem should be clear and grammatically correct and should contain elements common to each option (multiple-choice items obey Standard English rules of punctuation and grammar; a question requires a question mark)
TYPES OF TEACHER-MADE TEST

2. Subjective (Essay)
◦Extended
◦Restricted
TYPES OF ESSAY

 1. EXTENDED RESPONSE QUESTIONS
◦ Leave students free to determine the content and to organize the format of their answer
◦ Opinionated or open-ended answers are solicited from students
 2. RESTRICTED RESPONSE QUESTIONS
◦ Limit both the content and the format of the students' answers
◦ Certain parameters are used in the questions/problems
ADVANTAGES OF ESSAY

 No guessing; assesses factual information
 Allows divergent thinkers to demonstrate higher order thinking skills (HOTS)
 Reduces the lead time required to produce
 Less work to administer for a smaller number of students
 Can be rich in diagnostic information
DISADVANTAGES OF ESSAY

 Subjectivity in scoring
 Even different times of day make a difference
 The first paper to be read/checked often sets the standard
 Time-consuming to check
 Can result in student rambling, confusion, or inability to find a focus
HOW TO WRITE ESSAY QUESTIONS

 Define the task clearly to the student
 When testing for content, make each item relatively short and increase the number of items
 Do NOT provide a choice of questions
 Devise the answer key as you write the question
 Give students the criteria for evaluating the answers
 Present material to elicit higher thinking skills
To be effective, essay questions need to….

 Be related to classroom and/or homework learning
 Be clearly articulated
 Be unambiguous
 Cover larger segments of material, rather than have a very limited scope
 Provide sufficient time for the quality of answers expected
 Require incorporation of factual knowledge
 Require students to provide reasoning for their answers
 Include clear directions as to length and structure
INCREASING OBJECTIVITY OF ESSAY SCORING

 Score blind
 Read one question at a time
 Guard against halo effects
 Have a policy on irrelevant answers and errors
33. It is a chart prepared to determine the goals, the content, and the number of items to be included in the test.

A. Test chart
B. Test book
C. Table of specifications
D. Skewed chart
TABLE OF SPECIFICATIONS
A TOS, sometimes called a test blueprint, is a table that helps teachers align objectives, instruction, and assessment (e.g., Notar, Zuelke, Wilson, & Yunker, 2004).
When constructing a test, teachers need to be concerned that the test measures an adequate sampling of the class content at the cognitive level at which the material was taught. The TOS can help teachers map the amount of class time spent on each objective with the cognitive level at which each objective was taught, thereby helping teachers to identify the types of items they need to include on their tests.
SAMPLE OF ONE-WAY T.O.S. in LINEAR FUNCTION

Content | No. of class sessions | No. of items | Test Item Placement
1. Definition of linear function | 2 | 4 | 1-4
2. Slope of a line | 2 | 4 | 5-8
3. Graph of linear function | 2 | 4 | 9-12
4. Equation of linear function | 2 | 4 | 13-16
5. Standard forms of a line | 3 | 6 | 17-22
6. Parallel and perpendicular lines | 4 | 8 | 23-30
7. Applications of linear functions | 5 | 10 | 31-40
TOTAL | 20 | 40 | 40
Determining the No. of Items

No. of items = (No. of Class Sessions ÷ Total No. of Class Sessions) × Desired Total No. of Items

Example: No. of items for the topic “Definition of linear function”

No. of class sessions = 2
Desired total no. of items = 40
Total no. of class sessions = 20

No. of items = (2 ÷ 20) × 40 = 4
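The allocation rule above, items per topic proportional to class sessions spent on the topic, is easy to check in code. A minimal sketch, using the topic names and session counts from the one-way TOS:

```python
# Items per topic are proportional to class sessions:
# items = sessions / total_sessions * desired_total_items
class_sessions = {
    "Definition of linear function": 2,
    "Slope of a line": 2,
    "Graph of linear function": 2,
    "Equation of linear function": 2,
    "Standard forms of a line": 3,
    "Parallel and perpendicular lines": 4,
    "Applications of linear functions": 5,
}
desired_total_items = 40
total_sessions = sum(class_sessions.values())  # 20

# Integer division is exact here because every product is divisible by 20.
items_per_topic = {
    topic: sessions * desired_total_items // total_sessions
    for topic, sessions in class_sessions.items()
}
print(items_per_topic["Definition of linear function"])  # 4, matching the worked example
```

With other session counts the quotients may not be whole numbers, in which case the teacher rounds and adjusts so the items still sum to the desired total.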
SAMPLE OF TWO-WAY T.O.S. in LINEAR FUNCTION

CONTENT | Class Sessions | Items per cognitive level (Knowledge, Comprehension, Application, Analysis, Synthesis, Evaluation) | Total
1. Definition of linear function | 2 | 1 1 1 1 | 4
2. Slope of a line | 2 | 1 1 1 1 | 4
3. Graph of linear function | 2 | 1 1 1 1 | 4
4. Equation of linear function | 2 | 1 1 1 1 | 4
5. Standard forms of a line | 3 | 1 1 1 1 1 1 | 6
6. Parallel and perpendicular lines | 4 | 1 1 2 1 1 2 | 8
7. Applications of linear functions | 5 | 1 1 3 1 1 3 | 10
TOTAL | 20 | 5 6 8 6 5 10 | 40
34. The test item “ Group the following items
according to shape” is a thought test item on
____________.

A. Creating
B. Generalizing
C. Classifying
D. Predicting
35. Which test item is in the highest level of Bloom's taxonomy of objectives?

A. Explain how trees receive nutrients
B. Explain how trees function in relation to the ecosystem
C. Rate three different methods of controlling tree growth
D. List the parts of a tree
BLOOM’S TAXONOMY of Educational Objectives
1. COGNITIVE DOMAIN – calls for outcomes of mental activity such as memorizing, reading, problem solving, analyzing, synthesizing, and drawing conclusions.
2. AFFECTIVE DOMAIN – refers to a person's awareness and internalization of objects and stimulation; it focuses on emotions.
3. PSYCHOMOTOR DOMAIN – focuses on the physical and kinesthetic skills of the learners, characterized by progressive levels of behaviors from observing to mastery of physical skills.
Bloom’s Cognitive Taxonomy
1. Knowledge – recognizes students' ability to use memorization and recall facts.
2. Comprehension – involves students' ability to read subject matter, extrapolate and interpret important information, and put ideas into their own words.
3. Application – students take a new concept and apply it to another situation.
4. Analysis – students have the ability to take new information, break it down into parts, and differentiate them.
5. Synthesis – students are able to take various types of information to form a whole, creating a pattern where one did not previously exist.
6. Evaluation – involves students' ability to look at someone else's ideas or principles and see the worth of the work and the value of the conclusion.
Bloom’s Revised Taxonomy
(Anderson and Krathwohl’s revision of the cognitive taxonomy)

THREE prominent changes from Bloom’s taxonomy:

1. Changing the names of the six categories from noun to verb forms
2. Rearranging them
3. Creating a processes and levels of knowledge matrix
ORIGINAL DOMAIN       NEW DOMAIN
Evaluation            Creating
Synthesis             Evaluating
Analysis              Analyzing
Application           Applying
Comprehension         Understanding
Knowledge             Remembering
Table of Revised Cognitive Domain
CATEGORY | KEY WORDS, TECHNOLOGIES FOR LEARNING

REMEMBERING: recall or retrieve previously learned information.
KEY WORDS: defines, describes, identifies, knows, labels, lists, names, outlines, recalls, recognizes, reproduces, selects, states

UNDERSTANDING: comprehending the meaning, translation, interpolation, and interpretation of instructions and problems. State a problem in one’s own words.
KEY WORDS: comprehends, converts, defends, distinguishes, estimates, explains, extends, generalizes, gives an example, infers, interprets, paraphrases, predicts, rewrites, summarizes, translates

APPLYING: use a concept in a new situation or unprompted use of an abstraction. Applies what was learned in the classroom to novel situations in the workplace.
KEY WORDS: applies, changes, computes, constructs, demonstrates, discovers, manipulates, modifies, operates, predicts, prepares, produces, relates, shows, solves, uses

ANALYZING: separates material or concepts into component parts so that its organizational structure may be understood. Distinguishes between facts and inferences.
KEY WORDS: analyzes, breaks down, compares, contrasts, diagrams, deconstructs, differentiates, discriminates, distinguishes, identifies, illustrates, infers, outlines, relates, selects, separates

EVALUATING: make judgments about the value of ideas or materials.
KEY WORDS: appraises, compares, concludes, contrasts, criticizes, critiques, defends, describes, discriminates, evaluates, explains, interprets, justifies, relates, summarizes, supports

CREATING: builds a structure or pattern from diverse elements. Puts parts together to form a whole, with emphasis on creating a new meaning or structure.
KEY WORDS: categorizes, combines, compiles, composes, creates, devises, designs, modifies, organizes, plans, rearranges, reconstructs
36. Sample questions such as “Who invented the…?” or “Where is the…?” fall under which level of questions in Bloom’s taxonomy of objectives?

a. Comprehension
b. Application
c. Knowledge
d. Analysis
37.Mrs. De Leon wants her students to
compare and contrast two Native American
folktales and the cultures each represents. At
which level of thought is Mrs. De Leon asking
her students to work according to Bloom’s
Taxonomy?

a. Remembering
b. Understanding
c. Analyzing
d. Creating
38.Mrs. Andrews assigned the following tasks
as part of a reading lesson. Place them in
order of their location on the revised Bloom’s
Taxonomy from lowest to highest.

1. Place the events of the story in chronological order.
2. Write a new ending for the story.
3. Choose one of the story’s characters as a “best friend” and justify your choice.
4. On what date did this story begin?

a. 1, 2, 3, 4
b. 4, 1, 3, 2
c. 4, 1, 2, 3
d. 1, 4, 3, 2
39. Which is included in item analysis?

A. Determining the percentage equivalent of the cut off score
B. Identifying the highest score
C. Determining the cut off score
D. Determining the effectiveness of distracters
ITEM ANALYSIS
- refers to the process of examining the students’ responses to each item in the test.

There are two characteristics of an item:

1. An item that has desirable characteristics can be retained for subsequent use.
2. An item that has undesirable characteristics is either revised or rejected.
THREE CRITERIA in determining the desirability or undesirability of an item:
1. Difficulty of an item
2. Discriminating power of an item
3. Measures of attractiveness
40. A negative discrimination index means that

A. Less from the lower group got the test item correctly
B. More from the lower group answered the test item correctly
C. The test item could not discriminate between the lower and upper group
D. More from the upper group answered the test item correctly
 Item-difficulty index (p) - determined by calculating the proportion of examinees that answered the item correctly.
 Item-discrimination index (d) - the difference between the proportion of high-performing students who got the item right and the proportion of low-performing students who got the item right.

The high- and low-performing groups are usually defined as the upper 27% and the lower 27% of the students based on the total examination score.

Positive Discrimination - the proportion of students who got an item right in the upper-performing group is GREATER than the proportion in the lower-performing group.

Negative Discrimination - the proportion of students who got an item right in the lower-performing group is GREATER than the proportion in the upper-performing group.

Zero Discrimination - the proportion of students who got an item right in the upper-performing group is EQUAL to the proportion in the lower-performing group.
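The two indices and the discrimination categories above can be computed as a short sketch in Python. This is illustrative only: the function names are hypothetical and the response data are made up for the example (1 = correct, 0 = wrong, for a single item).

```python
def difficulty_index(responses):
    """p: proportion of all examinees who answered the item correctly."""
    return sum(responses) / len(responses)

def discrimination_index(upper, lower):
    """d: proportion correct in the upper group minus that in the lower group."""
    return sum(upper) / len(upper) - sum(lower) / len(lower)

# Made-up scores on ONE item for the upper-27% and lower-27% groups.
upper = [1, 1, 1, 0, 1]   # 4 of 5 upper-group students answered correctly
lower = [0, 1, 0, 0, 0]   # 1 of 5 lower-group students answered correctly

p = difficulty_index(upper + lower)
d = discrimination_index(upper, lower)
print(p)            # 0.5
print(round(d, 2))  # 0.6 -> positive discrimination (upper > lower)
```

A d greater than 0 is positive discrimination, less than 0 is negative, and exactly 0 is zero discrimination, matching the three cases defined above.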
