1. Grading and reporting systems serve instructional, parental, and administrative purposes. They provide feedback to improve student learning and inform parents and schools of student progress. 2. Teachers assign letter grades based on factors like tests, assignments, and behavior. Grades can be assigned relatively, by comparing students, or absolutely, by measuring mastery of objectives. 3. Grading software helps teachers efficiently record and calculate grades from multiple assessments. Feedback also guides students and builds rapport between teachers and students.

Grading and Reporting Systems

Topics

1- Functions of Grading and Reporting Systems
2- Assigning letter grades
3- Relative Versus Absolute Grading
4- Record Keeping and Grading Software
5- Use of Feedback

Topic-1

FUNCTIONS OF GRADING AND REPORTING SYSTEMS

1- Functions of Grading and Reporting Systems

Grading and reporting systems serve the following functions:
1- Instructional uses
2- Report to parents/guardians
3- Administrative and guidance uses

Instructional Uses

The main focus of grading and reporting systems is the improvement of students’
learning. This occurs when the report
A- Clarifies the instructional objectives
B- Indicates the student’s strengths and weaknesses
C- Provides information regarding personal-social development
D- Contributes to the student’s motivation
• Day-to-day assessment of learning and the feedback it provides can improve students’ learning.
• Periodic progress reports influence students’ motivation.
• A well-designed progress report can be helpful in evaluating instructional procedures.

Report to Parents/Guardians

• The primary function of grading and reporting systems is to inform parents about the
progress of their children.
• What the objectives were and how well they were achieved can be learnt from these reports.
By having this information
• Parents are better able to cooperate with the school.
• They can give emotional support and encouragement to children.
• They can help them in making educational and vocational plans.
To serve these purposes, reports should contain as much information as parents can understand.

Administrative and Guidance Uses

• Grading and reporting systems can also be used for
I- Promotion
II- Awarding honors
III- Reporting to other schools & prospective employers
• For most administrative purposes a single letter grade is used.
• Counselors use these reports to help students make realistic educational plans and to
deal with adjustment problems.
For serving these purposes the reports must be comprehensive and detailed.

Topic-2

ASSIGNING LETTER GRADES

2- Assigning Letter Grades

• Schools use the A, B, C, D, F grading system. While assigning grades,
teachers come across the following questions:
1- What should be included in a letter grade?
2- How should achievement data be combined in assigning letter grades?
3- What frame of reference should be used in grading?

Determining what to include in a grade

• When letter grades represent only achievement, they are considered most
meaningful and useful.
• If we include effort, amount of work completed, personal conduct, etc., their
interpretation becomes confused and they lose their meaningfulness.
• For example, a letter grade of “B” may represent average achievement with
outstanding effort and excellent conduct, or high achievement with little effort.
• Only by using letter grades for achievement, and reporting other aspects separately,
can we improve our description of student learning and development.

Combining data and assigning grades

When we include aspects of achievement such as tests, written reports, or
performance ratings in a letter grade and decide how much emphasis to give each
aspect, the next step is to combine the various elements so that each
element receives its intended weight.
For example,
if we decide that the final examination should count 40%, the midterm 30%, and
laboratory performance 20%, we will want our course grade to reflect this
emphasis.
A typical procedure is to combine the elements into a composite score by
assigning appropriate weights to each element and then use these composite
scores as the basis for grading.
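
To make the weighting procedure concrete, here is a minimal sketch (in Python, since the text prescribes no particular tool) of how a composite score might be computed; the component names and scores are illustrative, and the weights follow the example above.

# Minimal sketch: combine assessment components into a weighted composite score.
# Component names and weights are illustrative (the example above gives the final
# exam 40%, the midterm 30%, and laboratory performance 20%).
def composite_score(scores, weights):
    """Weighted average of component scores (each on a 0-100 scale)."""
    total_weight = sum(weights[name] for name in scores)
    weighted_sum = sum(scores[name] * weights[name] for name in scores)
    return weighted_sum / total_weight

weights = {"final_exam": 0.40, "midterm": 0.30, "laboratory": 0.20}
scores = {"final_exam": 82, "midterm": 74, "laboratory": 90}
print(round(composite_score(scores, weights), 1))  # 81.1 for these made-up scores

The composite scores produced this way can then be converted to letter grades using either of the frames of reference discussed below.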

Selecting the proper frame of reference for grading

• One of the following frames of reference is used for assigning letter grades:

1- Performance in relation to other group members (relative grading)
2- Performance in relation to specified standards (absolute grading)

Cont...
1- Assigning grades on a relative basis involves comparing a student’s performance
with that of a reference group, typically classmates. In this system a student’s grade is
determined by his or her relative ranking in the group rather than by some absolute
standard of achievement.
• Both the student’s performance and the performance of the group influence the grade,
since the grading is based on relative performance.
2- Assigning grades on an absolute basis involves comparing a student’s
performance to specified standards set by the teacher/school.
• These standards may be concerned with the degree of mastery to be achieved
by the students and may specify (a) a task to be performed (e.g., type 40
words per minute without error) or (b) the percentage of correct answers to
be obtained on a test designed to measure a clearly defined set of learning
tasks. In this type of grading, if all students demonstrate a high level of
mastery, all will receive high grades.
Topic-3

RELATIVE VERSUS ABSOLUTE GRADING

3- Relative Versus Absolute Grading

Relative Grading
• Assigning letter grades on the basis of a student’s rank in the group.
• Before assigning grades, the proportion of As, Bs, Cs, Ds, and Fs is determined.
• Grading on the basis of the normal curve results in equal percentages of As and
Fs, and of Bs and Ds.

Suggested Distribution of Grading

A = 10% to 20% of the students
B = 20% to 30% of the students
C = 30% to 50% of the students
D = 10% to 20% of the students
F = 0% to 10% of the students
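
As an illustration only, the sketch below (Python) assigns letters by rank using one possible choice within the suggested ranges above (15% A, 25% B, 40% C, 15% D, 5% F); the proportions, student names, and scores are hypothetical.

# Minimal sketch: relative (norm-referenced) grading by rank.
# The proportions are one choice within the suggested ranges above and are illustrative.
def relative_grades(scores,
                    proportions=(("A", 0.15), ("B", 0.25), ("C", 0.40), ("D", 0.15), ("F", 0.05))):
    ranked = sorted(scores, key=scores.get, reverse=True)  # highest score first
    grades, start = {}, 0
    for letter, share in proportions:
        count = round(share * len(ranked))
        for student in ranked[start:start + count]:
            grades[student] = letter
        start += count
    for student in ranked[start:]:  # anyone left over by rounding gets the lowest grade
        grades[student] = proportions[-1][0]
    return grades

class_scores = {"Ali": 92, "Sara": 85, "Omar": 78, "Hina": 70, "Bilal": 55}
print(relative_grades(class_scores))  # Ali: A, Sara: B, Omar and Hina: C, Bilal: D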

Absolute Grading

• Letter grades in an absolute system may be defined as the degree to which the
objectives have been achieved.
• A = Outstanding. Student has mastered all major and minor instructional goals.
• B = Very good. Student has mastered all the major instructional goals and most of
the minor ones.
• C = Satisfactory. Student has mastered all major goals but only a few of the minor ones.
• D = Very weak. Student has mastered only a few of the major and minor
instructional goals; remedial work would be desirable.
• F = Unsatisfactory. Student has not mastered any of the major instructional goals and
lacks the essentials needed for the next highest level of instruction; remedial
work is needed.

Scores in terms of percentage of correct answers

A = 95% to 100% correct
B = 85% to 94% correct
C = 75% to 84% correct
D = 65% to 74% correct
F = below 65% correct
• The distribution of grades is not predetermined in an absolute grading system. All
students can receive high grades if they demonstrate a high level of mastery.
• A comprehensive report includes a checklist of objectives to inform students
and parents which objectives have been mastered and which have not.
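
A minimal sketch (Python) of how the percentage-correct cutoffs above translate into letter grades; the function name and example score are hypothetical.

# Minimal sketch: absolute (criterion-referenced) grading using the cutoffs above.
def absolute_grade(percent_correct):
    if percent_correct >= 95:
        return "A"
    elif percent_correct >= 85:
        return "B"
    elif percent_correct >= 75:
        return "C"
    elif percent_correct >= 65:
        return "D"
    return "F"

print(absolute_grade(88))  # "B": falls in the 85%-94% band

Because the cutoffs are fixed in advance, the distribution of grades depends only on how many students reach each level of mastery.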

Topic-4

RECORD KEEPING AND GRADING SOFTWARE

4- Record Keeping and Grading Software

• Specialized software is available to facilitate the common tasks of recording and
combining grades.
• Most computer gradebook software is based on an underlying spreadsheet design.
• The software may have templates to aid in data entry and simple procedures for
specifying rules for combining grades from several sources, such as homework, tests,
and projects.
• The software provides various options for printing, reporting, and summarizing
results.
• Sometimes this type of software is linked to software designed to perform other
functions such as test construction, test administration, or keeping attendance.
• Existing software is constantly being updated, so a search of the Internet is a good
way to keep such a list up to date.
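
As an illustration of the spreadsheet-like design described above, here is a minimal sketch (Python) of a gradebook that records scores by source and combines them under a simple weighting rule; the category names, weights, and scores are hypothetical and not a description of any particular product.

# Minimal sketch of a spreadsheet-style gradebook: rows are students, columns are
# scores grouped by source, plus one rule for combining sources. Values are illustrative.
weights = {"homework": 0.2, "tests": 0.5, "projects": 0.3}

gradebook = {
    "Ayesha": {"homework": [90, 85, 100], "tests": [78, 88], "projects": [92]},
    "Usman":  {"homework": [70, 80, 75],  "tests": [65, 72], "projects": [80]},
}

def average(values):
    return sum(values) / len(values)

def course_score(record):
    """Average each category, then combine the category averages with the weighting rule."""
    return sum(average(record[category]) * weight for category, weight in weights.items())

for student, record in gradebook.items():
    print(f"{student}: {course_score(record):.1f}")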

Topic-5

USE OF FEEDBACK

5- Use of Feedback

• Feedback can serve a number of purposes and take a number of forms. It can be
provided as a single entity or a combination of multiple entities.

Informal Feedback

• Informal feedback can occur at any time, as it is something that emerges
spontaneously in the moment or during action. Therefore, informal feedback requires
building rapport with students to effectively encourage, coach, or guide them in daily
management and decision-making for learning. This might occur in the classroom,
over the phone, in an online forum, or in a virtual classroom.

Formal Feedback

• Formal feedback is planned and systematically scheduled into the process. Usually
associated with assessment tasks, formal feedback includes the likes of marking
criteria, competencies or achievement of standards, and is recorded for both the
student and the organization as evidence.

Formative Feedback

• The goal of formative assessment is to monitor student learning and to provide
feedback to improve it. It can also be used by teachers to improve their teaching. It
helps students to improve and prevents them from making the same mistakes again.

Summative Feedback

• The goal of summative assessment is to evaluate student learning at the end of an
instructional unit by comparing it against some standard or benchmark. Therefore,
summative feedback consists of detailed comments related to specific aspects of the
student's work, a clear explanation of how the mark was derived from the criteria
provided, and additional constructive comments on how the work could be improved.

Student Peer Feedback

• With basic instruction and ongoing support, students can learn to give quality
feedback that is highly valued by peers. Providing students with opportunities to give
and receive peer feedback enriches their learning experiences and develops their
professional skills.

Student Self Feedback

• While providing feedback, teachers not only give direction to students but also teach
them, through modeling and instruction, the skills of self-assessment and goal setting,
leading them to become independent (Sackstein, 2017). To help students reach
autonomy, teachers can identify, share, and clarify learning goals.

Constructive Feedback

• This type of feedback is specific, issue-focused, and based on observation.
It has four types:
1- Negative feedback: corrective comments about past behavior. It focuses
on behavior that was not successful and shouldn't be repeated.
2- Positive feedback: affirming comments about past behavior. It focuses
on behavior that was successful and should be continued.
3- Negative feed-forward: corrective comments about future performance.
It focuses on behavior that should be avoided in the future.
4- Positive feed-forward: affirming comments about future behavior.
It focuses on behavior that will improve performance in the future.

Administration Of Classroom Tests And Assessments
Guiding principle

“All students must be given a fair chance to demonstrate their achievement of
the learning outcome being measured.”
Things that create test anxiety
• Warning students to do their best because the test is important
• Telling students to work fast in order to finish on time
• Threatening dire consequences if they fail
Suggestions for administration

1- Avoid unnecessary talk at the beginning of a test
• Students are mentally set for the test, and unnecessary talk
• May influence their recall of information, or
• May increase test anxiety, or
• May create hostility toward the teacher
2- Avoid interruptions
• You may hang a “Do not disturb – Testing” sign outside the door
3- Do not give hints to students
• If an item is ambiguous, clarify it for the whole class.
• Helping favourite students decreases the validity of results and lowers class morale.
4- Discourage cheating
• Careful supervision and special seating arrangements can discourage cheating.
• Students’ performance must be based on their own efforts.
Steps to prevent cheating

1- Take special precautions to keep the test secure during preparation, storage, and administration.
2- Have students clear the tops of their desks.
3- If scratch paper is used, have it turned in with the test.
4- Walk around the room and observe how the students are doing (careful supervision).
5- Use special seating arrangements, if possible.
6- Use two forms of the test and give a different form to each row of students
(use the same test but simply rearrange the order of the items for the second form).
7- Prepare tests that students will view as relevant, fair, and useful.
8- Create and maintain a positive attitude concerning the value of the test for improving learning.
Chapter No. 04
Assessment Techniques in the Affective and Psychomotor Domains: Observation, Self-Report,
Questionnaire, Interview, Rating Scale, Anecdotal Record, Checklist, Peer Appraisal.
Observation
• Observation is the process of closely observing or monitoring something or someone.
• Observation is the act of noticing something or a judgment.
• Observation is watching what people do.
There are different types of observation
1. Controlled Observations
2. Natural Observations
3. Participant Observations
Controlled Observation
Controlled observation (also called structured observation) is observation in which the observer
decides where the observation will take place, at what time, with which participants, and in
what circumstances, and uses a standardized procedure. The examiner systematically
classifies the behavior they observe into different categories. Coding might involve
numbers or letters to describe a characteristic, or the use of a scale to measure behavior
intensity/amount.
Naturalistic Observation
Naturalistic observation (i.e. unstructured observation) involves observing the
spontaneous/natural behavior of participants in natural surroundings. The observer simply
records what he sees. Compared with controlled/structured methods it is like the difference
between studying wild animals in a zoo and studying them in their natural environment.
Participant Observation
Participant observation is a variant of the above (naturalistic observation), but here the
observer joins in and becomes part of the group that is being observed to get a deeper
insight into their lives. If it were research on animals, we would now not only be studying
them in their natural habitat but be living alongside them as well!
Recording of Data of observation
With all observations an important decision the observer has to make is how to classify and
record the data. Usually this will involve a method of sampling. The three main sampling
methods are:
1. Event sampling. The observer decides in advance what types of behavior (events)
he is interested in and records all occurrences. All other types of behavior are
ignored.
2. Time sampling. The observer decides in advance that observation will take place
only during specified time periods (e.g. 10 minutes every hour, 1 hour per day) and
records the occurrence of the specified behavior during that period only.
3. Instantaneous (target time) sampling. The observer decides in advance the pre-
selected moments when observation will take place and records what is happening
at that instant. Everything happening before or after is ignored.
Self-Report
A self-report is a type of assessment in which respondents read the questions and select
responses by themselves, reporting on things such as personality traits, moods, thoughts,
attitudes, preferences, and behaviors without the researcher interfering. A self-report is a
technique which involves asking the participants about their feelings, attitudes, beliefs and so
on. Self-reports are often used as a way of gaining participants' responses for different
assessment procedures.
Self-report is a comparatively simple way to collect data from many people quickly and at low
cost. Self-report data can be collected in various ways, such as through questionnaires, the
Internet, or interviews in person or over the telephone. Through self-report, researchers can
collect data regarding behaviors that cannot be observed directly.
Questionnaire: A questionnaire is a set of questions, usually in a highly structured written
form. Questionnaires can contain both open questions and closed questions, and
participants record their answers on their own.
Closed questions are questions which provide a limited choice, especially if the answer must
be taken from a predetermined list. Such questions provide quantitative data, which is easy to
analyze. However, these questions do not allow the participant to give in-depth insights.
An open-ended question asks the respondent to formulate his or her own answer. Open
questions invite the respondent to provide answers in their own words and
provide qualitative data. Although these types of questions are more difficult to analyze, they
can produce more in-depth responses and tell the researcher what the participant actually
thinks, rather than being restricted by categories.
There are following types of questionnaires:
Computer questionnaire. Respondents are asked to answer a questionnaire that is sent to them
electronically (for example, by e-mail). The advantages of computer questionnaires include their
low cost, the time saved, and the fact that respondents do not feel pressured and can answer when
they have time, giving more accurate answers. However, the main shortcoming of such
questionnaires is that sometimes respondents do not bother answering them and can simply
ignore the questionnaire.
Telephone questionnaire. The researcher may choose to call potential respondents with the aim
of getting them to answer the questionnaire. The advantage of the telephone questionnaire is
that it can be completed in a short amount of time. The main disadvantage of the phone
questionnaire is that it is expensive most of the time. Moreover, most people do not feel
comfortable answering many questions over the phone, and it is difficult to get a sample
group to answer a questionnaire over the phone.
In-house survey. This type of questionnaire involves the researcher visiting respondents
in their houses or workplaces. The advantage of the in-house survey is that more focus on the
questions can be gained from respondents. However, in-house surveys also have a range of
disadvantages, which include being time consuming and more expensive, and respondents may
not wish to have the researcher in their houses or workplaces for various reasons.
Mail questionnaire. This sort of questionnaire involves the researcher sending the
questionnaire to respondents through the post, often attaching a pre-paid envelope. Mail
questionnaires have the advantage of providing more accurate answers, because respondents can
answer the questionnaire in their spare time. The disadvantages associated with mail
questionnaires include being expensive and time consuming, and sometimes the questionnaires
end up in the bin, discarded by respondents.
Multiple choice question– respondents are offered a set of answers they have to choose from.
Dichotomous Questions. This type of question gives the respondent two options to choose
from – yes or no – and is the easiest form of questionnaire for the respondent to answer.
Scaling Questions. Also referred to as ranking questions, they present an option for
respondents to rank the available answers to the questions on a scale of a given range of values
(for example, from 1 to 10).
Rating Scale
One of the most common rating scales is the Likert scale. A statement is used and the
participant decides how strongly they agree or disagree with the statements. For example the
participant decides "strongly agree", "agree", "undecided", "disagree", and "strongly
disagree". One strength of Likert scales is that they can give an idea about how strongly a
participant feels about something. This therefore gives more detail than a simple yes no
answer. Another strength is that the data are quantitative, which are easy to analyse
statistically. However, there is a tendency with Likert scales for people to respond towards the
middle of the scale, perhaps to make them look less extreme. As with any questionnaire,
participants may provide the answers that they feel they should. Moreover, because the data is
quantitative, it does not provide in-depth replies.
Interview: An interview is a technique for collecting information orally from others in a
face-to-face situation. It is a two-way technique for exchanging ideas and information. In an
interview, the interviewer puts questions to the interviewee and gets information verbally.
The word "interview" refers to a one-on-one conversation with one person acting in the role
of the interviewer and the other in the role of the interviewee.
There are several types of interview. The more you know about the style of the interview, the
better you can prepare.
The Telephone Interview
Often companies request an initial telephone interview before inviting you in for a face to face
meeting in order to get a better understanding of the type of candidate you are. The one benefit of
this is that you can have your notes out in front of you. You should do just as much preparation as
you would for a face to face interview, and remember that your first impression is vital. Some
people are better meeting in person than on the phone, so make sure that you speak confidently,
with good pace and try to answer all the questions that are asked.
The Face-to-Face Interview
This can be a meeting between you and one member of staff or even two members.
The Panel Interview
These interviews involve a number of people sitting as a panel with one as chairperson. This
type of interview is popular within the public sector.
The Group Interview
Several candidates are present at this type of interview. You will be asked to interact with each
other, usually in a group discussion. You might even be given a task to do as a team, so make
sure you speak up and give your opinion.
The Sequential Interview
These are several interviews in turn with a different interviewer each time. Usually, each
interviewer asks questions to test different sets of competencies. However, if you are asked
the same questions, just make sure you answer each one as fully as the previous time.
The Lunch / Dinner Interview
This type of interview gives the employer a chance to assess your communication and
interpersonal skills as well as your table manners! So make sure you order wisely (no spaghetti
Bolognese) and make sure you don’t spill your drink (non-alcoholic of course!).
All these types of interviews can take on different question formats, so once you’ve checked
with your potential employer which type of interview you’ll be attending, get preparing!
Competency Based Interviews
These are structured to reflect the competencies the employer is seeking for the particular job.
These will usually be detailed in the job description.
Formal / Informal Interviews
Some interviews may be very formal, others may be very informal and seem like just a chat
about your interests. However, it is important to remember that you are still being assessed,
and topics should be friendly and clean!
Portfolio Based Interviews
In the design / digital or communications industry it is likely that you will be asked to take your
portfolio along or show it online. Make sure all your work is up to date, with neither too little
nor too much included. Make sure that your images, if in print, are big enough for the interviewer
to see properly, and always test your online portfolio in all Internet browsers before turning up.
The Second Interview
You’ve passed the first interview and you’ve had the call to arrange the second. Congratulations!
But what else is there to prepare for? You did as much as you could for the first interview! Now
is the time to look back and review. You may be asked the same questions you were asked before,
so review them and brush up your answers. Review your research about the company; take a look
at the ‘About Us’ section on their website, get to know their client base, search the latest news on
the company and find out what the company is talking about.
General Interview Preparation
Here’s a list of questions that you should consider your answers for when preparing…
• Why do you want this job?
• Why are you the best person for the job?
• What relevant experience do you have?
• Why are you interested in working for this company?
• What can you contribute to this company?
• What do you know about this company?
• What challenges are you looking for in a position?
• Why do you want to work for this company?
• Why should we hire you?
• What are your salary requirements?
Other interview formats include:
• Structured Interview
• Unstructured Interview
• The Working Interview
What is an anecdotal record?
An anecdotal record is like a short story that educators use to record a significant incident that
they have observed. Anecdotal records are usually relatively short and may contain
descriptions of behaviors and direct quotes.
Why use anecdotal records?
Anecdotal records are easy to use and quick to write, so they are the most popular form of
record that educators use. Anecdotal records allow educators to record a child’s specific
behavior or the conversation between two children. These details can help educators plan
activities, experiences and interventions. Because they can be written after the fact, an
educator can write them during a break or while engaged in other activities.
How do I write an anecdotal record?
Anecdotal records are written after the fact, so use the past tense when writing them. Being
positive and objective, and using descriptive language are also important things to keep in
mind when writing your anecdotal records. Remember that anecdotal records are like short
stories; so be sure to have a beginning, a middle and an end for each anecdote.

Checklist
A checklist is a list of items you need to verify, check or inspect. Checklists are used in every
field, from building inspections to complex medical surgeries to education. Using a checklist
allows you to ensure you don’t forget any important steps that you have to check.
A checklist is a list of all the things that you need to do, information that you want to find out,
or things that you need to take somewhere, which you make in order to ensure that you do not
forget anything. In short, it is a list of items, facts, names, etc., to be checked.
Why we use checklists:
It is easy for us to forget things, and recovery is usually more complex than getting it right the
first time. A simple tool that helps to prevent these mistakes is the checklist. A checklist is
simply a list of the required things. There are seven benefits to using a checklist:
1. Organization: Checklists can help us stay more organized by assuring we don't skip any
steps in a process. They are easy to use and effective.
2. Motivation: Checklists motivate us to take action and complete tasks. Since checklists can
make us more successful, it becomes a virtuous circle where we are motivated to
accomplish more due to the positive results.
3. Productivity: By having a checklist you can complete dull tasks more quickly and
efficiently, and with fewer mistakes. You become more productive and accomplish more each
day.
4. Creativity: Checklists allow you to master the boring tasks and utilize more brain power
for creative activities. Since the checklist means fewer mistakes and less stress, you not only
have more time to be creative, you have the ability to think more clearly.
5. Delegation: By breaking work down into specific tasks, checklists give us more confidence
when delegating or assigning activities. When we are more comfortable that tasks will be done
correctly, we delegate more and become significantly more productive.
6. Saving lives: Checklists can literally save lives. When the U.S. Army Air Corps introduced the
B-17 bomber during WWII an experienced aviator crashed the plane during its second
demonstration flight. After this tragedy the Army required that pilots use a checklist before taking
off. This is the same type of checklist we see pilots use today that helps to avoid crashes. Checklists
also reduce deaths in hospitals. When checklists have been implemented for use by surgical teams,
deaths dropped 40 percent. Similar results have been seen when checklists are required for doctors
introducing central lines into their sick patients.
7. Excellence: Checklists allow us to be more effective at taking care of customers. By helping
to assure that you provide superior customer service we can achieve excellence in the eyes of
the customer. Excellence is a differentiator that improves brand equity.
Using checklists ensures that you won't forget anything. So, if you do something again and
again, and want to do it right every time, use a checklist. So Checklists free up mental RAM.

Peer Appraisal
Peer appraisals are employee assessments conducted by colleagues in the direct working
environment, i.e. the people the employee interacts with regularly. Peer appraisals are a form of
performance appraisal which is designed to monitor and improve job performance. The peer
appraisal process draws on insight and knowledge – workers have their ‘ear to the ground’ and
are often in the best position to appraise a colleague’s performance.
Peer appraisal is a type of feedback system in the performance appraisal process. The system
is designed to monitor and improve the job performance. It is usually done by colleagues who
are a part of the same team. This type of appraisal system excludes supervisors or managers.
Description: As a part of the appraisal process, an employee is assessed based on the
feedback given by his/her colleagues or people within his/her close working environment.
This feedback is secret. A typical peer appraisal does not take feedback from superiors. It
is meant to monitor and improve job performance.
Why should one do peer appraisal?
Employees can assess the skills of their co-workers much more clearly than management
because they work together. It helps in team-building. People understand that opinions of their
colleagues are important and one must build relationships. Since people trust their co-
workers, they consider the feedback to be constructive. It makes the process of skill
improvement public and accountable.
Peer Appraisal is a variation of 360 degree feedback in the performance appraisal
process. Peers and teammates provide a unique perspective on performance. Peers
provide insight into an individual’s interpersonal interactions and skills. Peer appraisal is
commonly used as part of the performance appraisal process.
Chapter No: 05 Test Appraisals

Qualities of a Good Test:

A good test should have the following qualities:
1- Validity:
It means that the test measures what it is supposed to measure; it tests what it ought to test. A
test is said to be valid if it measures what it intends to measure.
• There are different types of validity:
1. Operational validity
2. Predictive validity
3. Content validity
4. Construct validity
Operational Validity
– A test will have operational validity if the tasks required by the test are satisfactory to
evaluate the definite activities or qualities.
Predictive Validity
– A test has predictive validity if scores on it predict future performance
Content Validity
– If the items in the test constitute a representative sample of the total course content to be
tested, the test can be said to have content validity.
Construct Validity
– Construct validity involves explaining the test scores psychologically. A test is
interpreted in terms of numerous research findings.
2- Reliability:
If the test is taken again by the same students under the same conditions, the scores will be
almost the same, provided that the time between the test and the retest is of reasonable length.
If it is given twice to the same students under the same circumstances, it will produce almost
the same results. In this case the test is said to provide consistency in measuring the items
being evaluated. This is called the reliability of a test.
• Reliability of a test refers to the consistency with which it measures what it is intended to measure.
• A test with high validity has to be reliable also.
• A valid test is also a reliable test, but a reliable test may or may not be a valid one.
Different methods for determining reliability
Test-retest method
A test is administered to the same group after a short interval. The scores are tabulated
and the correlation is calculated. The higher the correlation, the greater the reliability.
Split-half method
The scores on the odd and even items are taken and the correlation between the two sets
of scores is determined.
Parallel form method
Reliability is determined using two equivalent forms of the same test content.
– These prepared tests are administered to the same group one after the other.
– The test forms should be identical with respect to the number of items, content,
difficulty level, etc.
– The correlation between the two sets of scores obtained by the group on the two
tests is determined.
– The higher the correlation, the greater the reliability.
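
To make the split-half method concrete, the sketch below (Python) computes the correlation between odd-item and even-item scores and then applies the Spearman-Brown correction to estimate full-length reliability; the scores are made up, and the Spearman-Brown step is a standard addition not spelled out on this slide.

# Minimal sketch: split-half reliability with the Spearman-Brown correction.
# odd_scores / even_scores are each student's totals on the odd- and even-numbered
# items of a single administration; the numbers are illustrative.
from math import sqrt

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

odd_scores  = [12, 15, 9, 18, 14, 11, 16]
even_scores = [11, 14, 10, 17, 13, 12, 15]

r_half = pearson(odd_scores, even_scores)
r_full = (2 * r_half) / (1 + r_half)  # Spearman-Brown estimate for the full-length test
print(f"half-test r = {r_half:.2f}, estimated full-test reliability = {r_full:.2f}")

The same pearson helper could be reused for the test-retest and parallel-form methods, where the two score lists would come from two administrations or two equivalent forms of the test.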
Discriminating Power
• Discriminating power of the test is its power to discriminate between the upper and
lower groups who took the test.
• The test should contain different difficulty level of questions.
3- Practical:
It is easy to be conducted, easy to score without wasting too much time or effort and easy
to interpret.
4- Comprehensive:
It covers all the items that have been taught or studied. It includes items from different areas
of the material assigned for the test so as to check accurately the amount of students’
knowledge.
5- Relevant:
It measures reasonably well the achievement of the desired objectives.
6- Balanced:
It tests linguistic as well as communicative competence and it reflects the real command of the
language. It tests also appropriateness and accuracy.
7- Appropriate in difficulty:
It is neither too hard nor too easy. Questions should be progressive in difficulty to reduce stress
and tension
8- Clear:
Questions and instructions should be clear. Pupils should know what to do exactly.
9- Authentic:
The language of the test should reflect everyday dialogue.
10- Appropriate for time:
A good test should be appropriate in length for the allotted time.
11- Objective:
If it is marked by different teachers, the score will be the same. Marking process should not be
affected by the teacher’s personality. Questions and answers are so clear and definite that the
marker would give the students the score he/she deserves.
12- Economical:
It makes the best use of the teacher’s limited time for preparing and grading and it makes the
best use of the pupil’s assigned time for answering all items.
13-Useful
A good test should be useful. What defines or constitutes a useful test? Well, this would be a
balancing of a number of factors including:
▪ Length – a shorter test is generally preferred
▪ Time – a test that takes less time is generally preferred
▪ Low cost – speaks for itself
▪ Easy to administer
▪ Easy to score
▪ Differentiates between candidates – a test is of little value if all the applicants obtain
the same score
▪ Adequate test manual – provides a test manual offering adequate information
and documentation
▪ Professionalism – is produced by test developers possessing high levels of expertise
14-Objectivity
• A test is said to be objective if it is free from personal biases in interpreting its scope as
well as in scoring the responses.
• Objectivity of a test can be increased by using more objective-type test items and
scoring the answers according to the model answers provided.
15-Comprehensiveness
• The test should cover the whole syllabus.
• Due importance should be given to all the relevant learning materials.
• The test should cover all the expected objectives.
16-Simplicity:
Simplicity means that the test should be written in clear, correct and simple language. It is
important to keep the method of testing as simple as possible while still testing the skill you
intend to test (avoid ambiguous questions and ambiguous instructions).

Test Items Bank

Preparing test items is a very useful task in the testing process. A test item bank is a
collection of many test items. Every type of test item is included in a test item bank, and
items of different difficulty levels are included. A test item bank helps us select or choose
test items according to the needs and demands of our test. With a test item bank we can
save time, and there is no need to construct items urgently, because we already have
many test items in the bank. Test items are added to the bank after proper item
analysis and after ensuring their validity and reliability.
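
A minimal sketch (Python) of how an item bank might be stored and queried by difficulty; the field names, values, and selection rule are hypothetical, included only to illustrate drawing items according to the needs of a particular test.

# Minimal sketch: a tiny item bank stored as a list of records, each carrying
# item-analysis results (difficulty, discrimination), plus a selection helper.
item_bank = [
    {"id": "Q1", "type": "MCQ",          "difficulty": 0.85, "discrimination": 0.35},
    {"id": "Q2", "type": "MCQ",          "difficulty": 0.55, "discrimination": 0.50},
    {"id": "Q3", "type": "true/false",   "difficulty": 0.40, "discrimination": 0.45},
    {"id": "Q4", "type": "short answer", "difficulty": 0.30, "discrimination": 0.25},
]

def select_items(bank, min_difficulty, max_difficulty, min_discrimination=0.30):
    """Pick items whose difficulty falls in the requested band and that discriminate well."""
    return [item for item in bank
            if min_difficulty <= item["difficulty"] <= max_difficulty
            and item["discrimination"] >= min_discrimination]

# e.g. draw moderately difficult items for a classroom test
print([item["id"] for item in select_items(item_bank, 0.35, 0.70)])  # ['Q2', 'Q3']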
Item analysis:

Item analysis is a procedure which determines the effectiveness of each item in a
test. It is a mathematical approach to assessing an item's utility. It provides
statistics of performance for each and every type of item in a test. It also provides
information concerning how well each item in the test functioned. Item analysis
is the backbone of test development. Test construction is fruitful only through
the well-thought-out, careful, and sophisticated process of item analysis.

Item analysis raises and answers at least three questions about each item in a test:

a. How difficult is the item?
b. Does the item discriminate between good and poor students?
c. How effective is each distracter in the item?

According to Termazi (1984):

"Item analysis procedure provides the difficulty level of the
item, the discrimination power of the item and the
effectiveness of each distracter. Item analysis information
tells us if an item was too easy or too hard, how well it
discriminated between high and low scorers on the test, and
whether all the distracters functioned as intended."

Procedure of Item Analysis:

The procedure of item analysis consists of two stages:
1. The first stage consists of four steps.
2. The second stage uses the item analysis working sheet.
First Stage:
R.L. Ebel and D.A. Frisbie (1991) suggested the following steps for
the procedure of item analysis.
Step No. 1: After scoring the test, arrange the test papers in order from the highest
score to the lowest score.
Step No. 2: Select the 27% (or 25%) of the total papers with the highest scores and
call it the high-scoring group.
Step No. 3: Select the same proportion (27% or 25%) of the total papers with the lowest
scores and call it the low-scoring group.
Note: The middle group of papers is not needed in item analysis. For example,
if we had 40 students' papers, we would select 10 papers in the high-scoring
group and 10 papers in the low-scoring group.
Step No. 4: Tabulate the responses of students in both the high- and low-scoring
groups on each test item, as shown in Table 3.1 (item analysis tabulation sheet).
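
The sketch below (Python) follows the steps above for a single item: it orders the scored papers, takes the top and bottom 25% (matching the 40-paper example, here scaled to made-up data), and summarizes the tabulation as a difficulty index and a discrimination index. The two index formulas (proportion correct in the combined extreme groups, and the difference in proportion correct between the groups) are standard item-analysis definitions rather than something stated on this slide.

# Minimal sketch of first-stage item analysis for one item.
# `papers` pairs each student's total test score with whether the student answered
# this particular item correctly; the data are illustrative.
papers = [
    (38, True), (35, True), (33, True), (31, True), (30, False), (28, True),
    (27, True), (25, False), (24, True), (22, False), (20, True), (18, False),
    (15, False), (14, False), (12, False), (10, False),
]

# Step 1: arrange papers from the highest total score to the lowest.
papers.sort(key=lambda paper: paper[0], reverse=True)

# Steps 2-3: take the top and bottom 25% as the high- and low-scoring groups.
group_size = round(0.25 * len(papers))
high_group = papers[:group_size]
low_group = papers[-group_size:]

# Step 4 (tabulation), summarized as two standard indices.
correct_high = sum(1 for _, correct in high_group if correct)
correct_low = sum(1 for _, correct in low_group if correct)

difficulty = (correct_high + correct_low) / (2 * group_size)    # proportion answering correctly
discrimination = (correct_high - correct_low) / group_size      # high group minus low group
print(f"difficulty = {difficulty:.2f}, discrimination = {discrimination:.2f}")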
