IDEAL Notebook
Institute for the Development of Excellence in Assessment Leadership
January, 2017
Facilitators:
Daina Briedis
Gloria Rogers
James Warnock
ABET Adjunct Directors, Professional Development
COPYRIGHT
This Institute contains copyrighted material and
may only be reproduced in whole or in part for
personal or educational purposes for your
institution/program provided that copies are not
altered, and that the copyright owner, ABET, Inc.
or the original source is credited when the material
is used.
Assessment
Fundamentals
SETTING THE STAGE
ISSUE BIN
Use the sticky notes to record:
• Something that is still unclear to you
• A question related to program assessment that has come to mind but has not yet been addressed
Place your sticky notes in the Issue Bin. The facilitators will monitor the issue bins and respond after the breaks or at the end of the workshop.
ABET’S VISION
ABET ACCREDITED PROGRAMS
• EAC: 2,550 accredited programs at 528 institutions
• ETAC: 629 accredited programs at 220 institutions
• CAC: 461 accredited programs at 354 institutions
• ASAC: 87 accredited programs at 67 institutions
IDEAL OVERVIEW
• Setting the context for program assessment –
the big picture
• Understanding the terminology
• Program Educational Objectives
• Identifying the similarities/differences between
classroom and program assessment
• Review and critique student outcomes
• Review and critique scoring rubrics
• Establishing inter-rater reliability
• Mapping the curriculum
IDEAL OVERVIEW
• Identifying assessment methods
• Writing/revising surveys
• Developing efficient and effective assessment
processes
• Reporting results
• Review case study
• Assessment lessons learned
• Leadership
• Context
• Process
• Tools
• Progress
WHAT CAN WE DO AFTER THE
INSTITUTE?
ORGANIZING PREMISES
• Outcomes assessment is becoming/has become
an international standard of quality
• In an era of accountability and transparency, it is
not going away
• It is important for us to define anticipated student
learning before someone else does it for us
• Because we are going to be mandated to
provide the evidence, it is critical for us to
develop assessment processes that are
consistent with our institutional values and honor
faculty priorities
BEST PRACTICES SHOULD BE
CONSISTENT WITH PRINCIPLES OF
LEARNING
• Learning occurs best when we build on what
students already know
• Learning is an active process (hence the importance of students' active involvement in their own learning)
• Learners perform better when expectations for their learning are clear
PRINCIPLES OF PROGRAM/
INSTITUTIONAL ASSESSMENT
• Student learning is cumulative over time
• What students learn in one course, they use, practice, develop, and get feedback on in other courses.
• The focus of evidence for program/institutional assessment is on the cumulative effect of student learning, which influences:
• When to collect data
• From whom to collect data
• Interpretation of the results
HIERARCHY OF ASSESSMENT
[Figure: pyramid of cognitive levels of learning, from Remember at the base to Create at the top, paired with a learner's description of the higher levels: "I can take what I have learned and put it in context. I begin to question what I hear, challenge assumptions, and make independent decisions about effective practices for my program."]
THE ASSESSMENT PROCESS
[Figure: the assessment landscape, showing the level of assessment (Who? individual vs. group) against purpose (learning/teaching, formative vs. accountability, summative) and the dimensions assessed (knowledge & skills, attitudes & values, behavior). Individual-level examples include competency-based instruction, assessment-based curricula, and "gatekeeping" uses such as individual performance tests, admissions tests, rising junior exams, placement, comprehensive exams, advanced placement tests, certification exams, vocational preference tests, and other diagnostic tests. Group-level examples include program enhancement, campus and program evaluation, program reviews, retention studies, alumni studies, and "value-added" studies. Individual assessment results may be aggregated to serve program evaluation needs.]
DEFINITIONS
TERMS AND DEFINITIONS
Program Educational Objectives: Program educational objectives are broad statements that describe what graduates are expected to attain within a few years of graduation. Program educational objectives are based on the needs of the program's constituencies.
Student Outcomes: Student outcomes describe what students are expected to know and be able to do by the time of graduation. These relate to the skills, knowledge, and behaviors that students acquire as they progress through the program.
Assessment: Assessment is one or more processes that identify, collect, and prepare data to evaluate the attainment of student outcomes. Effective assessment uses relevant direct, indirect, quantitative and qualitative measures as appropriate to the outcome being measured. Appropriate sampling methods may be used as part of an assessment process.
Evaluation: Evaluation is one or more processes for interpreting the data and evidence accumulated through assessment processes. Evaluation determines the extent to which student outcomes are being attained. Evaluation results in decisions and actions regarding program improvement.
ABET TERMS AND OTHER POSSIBLE TERMS FOR THE SAME CONCEPT
Program Educational Objectives: Goals, Outcomes, Purpose, Mission, etc.
Student Outcomes: Goals, Objectives, Competencies, Standards, etc.
Performance Indicators: Performance Criteria, Competencies, Outcomes, Standards, Rubrics, Specifications, Metrics, etc.
Assessment: Evaluation
Evaluation: Assessment
PEO’s & SO’s
PROGRAM EDUCATIONAL
OBJECTIVES
PROGRAM EDUCATIONAL
OBJECTIVES
• Program educational objectives are broad
statements that describe what graduates are
expected to attain within a few years of
graduation. Program educational objectives
are based on the needs of the program’s
constituencies.
PEO’s answer the question: What knowledge and
skills will our graduates need to be successful in their
careers?
CRITERION 2: PROGRAM
EDUCATIONAL OBJECTIVES
Mission Statement
• Provide the institutional mission statement.
Program Educational Objectives
• List the program educational objectives and state where these can be
found by the general public.
Consistency of the Program Educational Objectives with the
Mission of the Institution
• Describe how the program educational objectives are consistent with
the mission of the institution.
CRITERION 2: PROGRAM
EDUCATIONAL OBJECTIVES
Program Constituencies
• List the program constituencies. Describe how the program
educational objectives meet the needs of these constituencies.
Process for Revision of the Program Educational Objectives
• Describe the process that periodically reviews and revises, as
necessary, the program educational objectives including how the
program’s various constituencies are involved in this process. Include
the results of this process and provide a description of any changes
that were made to the program educational objectives and the
timeline associated with those changes since the last general review.
Application: Critique
PROGRAM EDUCATIONAL
OBJECTIVES
APPLICATION
PROGRAM EDUCATIONAL OBJECTIVES
Part A: 15 minutes
1. Working independently, review the PEO’s and make a list of
strengths and weaknesses (5 minutes). Document your findings
using the table below the PEOs.
Think about:
a) Do they meet the ABET definition?
• Broad statements
• Based on the needs of the constituents
• Describe what graduates are expected to attain within a few years of
graduation
b) Are they well constructed?
c) Clearly defined?
• Serve as thresholds for early career development
• Relevant to the profession
• Achievable and realistic
• Align with constituent needs and institutional mission
2. Share your findings with the others on your team and develop one list of strengths and weaknesses (10 minutes)
APPLICATION
PROGRAM EDUCATIONAL OBJECTIVES
Part B: 10 minutes
• Where you find weaknesses, suggest changes to the
PEOs. You can also add new PEOs if necessary. Based
on your discussion, provide a bullet list of modified
PEOs.
Part C: 10 minutes
• Report out using Nominal Group Process.
PROGRAM EDUCATIONAL
OBJECTIVES
• Where do they come from?
• Who decides what they are?
• What is their purpose?
• How do you know if they are still
relevant?
• How do you keep them current?
ABET NO LONGER REQUIRES
ASSESSMENT OF ATTAINMENT OF
PROGRAM EDUCATIONAL OBJECTIVES
OLD AND NEW DEFINITIONS
Program Educational Objectives: Program educational objectives are broad statements that describe what graduates are expected to attain within a few years after graduation. Program educational objectives are based on the needs of the program's constituencies.
Evaluation (old): Evaluation is one or more processes for interpreting the data and evidence accumulated through assessment processes. Evaluation determines the extent to which student outcomes and program educational objectives are being attained...
Evaluation (new): Evaluation is one or more processes for interpreting the data and evidence accumulated through assessment processes. Evaluation determines the extent to which student outcomes are being attained...
[Figure: PEO review cycle. Faculty review the results, including alignment with student outcomes; recommended changes yield recommended program educational objectives, which are reviewed by the constituents; if additional changes are recommended the cycle repeats, and when no further changes are recommended the program educational objectives are adopted.]
CRITERION 2 – COMMON ISSUES
PROGRAM EDUCATIONAL OBJECTIVES
• Program educational objectives are not published or
readily accessible to the public.
• Program educational objectives are not related to
institutional mission or are inconsistent with the mission.
• Program educational objectives are not consistent with
the needs of the program’s various constituencies.
• Program educational objectives do not describe what
graduates are expected to attain within a few years after
graduation.
• There is no indication of who the program's constituents are.
COURSE ASSESSMENT
AND PROGRAM
ASSESSMENT
SIMILARITIES AND DIFFERENCES
[Figure: concept map for the subject Strength of Materials. Topics (Material Properties, Beams, Torsion, Columns, Fatigue) break down into terminology such as tensile strength, ductility, shear force, bending moment, angle of twist, power transmission, Euler buckling, crack growth, and S-N curves.]
[Figure: Bloom's taxonomy pyramid, revised levels with the corresponding original levels in parentheses: Remember (Knowledge), Understand (Comprehension), Apply (Application), Analyze (Analysis), Evaluate (Synthesis), Create (Evaluation).]
COURSE ASSESSMENT
CONCEPTS
Course context: subject matter, faculty member, pedagogy, students, facilities.
[Figure: concept map. SUBJECT: Strength of Materials; TOPICS: Material Properties, Beams, Torsion, Columns, Fatigue; CONCEPTS: stress, strain, tensile strength, ductility, shear force, bending moment, angle of twist, power transmission, Euler buckling, crack growth, S-N curves.]
Assessment focus:
• Evaluate individual student performance (grades)
• Evaluate teaching effectiveness
CHANGES TO BLOOM'S TAXONOMY
COURSE ASSESSMENT
[Figure: the course concept map (e.g., crack growth, S-N curves) with each concept tagged at its intended performance (cognitive) level.]
COURSE ASSESSMENT
[Figure: grid of the individual courses in a mechanical engineering curriculum (MA111, PH113, EM104, ES202, ME323, ME430, humanities and elective courses, etc.); course assessment looks at each course one at a time.]
PROGRAM ASSESSMENT
[Figure: the course concept map extended to the program level; program educational objectives play the role of the subject, student outcomes the role of the topics, and performance indicators the role of the concepts.]
PROGRAM ASSESSMENT
[Figure: example hierarchy.
Program educational objective: Graduates will continue to learn and perform in a professional and ethical manner.
Student outcomes: Students will demonstrate an appreciation for, and ability to pursue, life-long learning, and an understanding of professional ethical responsibilities.
Performance indicators: 1) Demonstrate knowledge of a professional code of ethics. 2) Evaluate the ethical dimensions of a problem in the discipline.]
PROGRAM ASSESSMENT
[Figure: curriculum grid with courses grouped under the student outcomes they address (e.g., technical outcomes, ethics).]
CONTEXT FOR PROGRAM LEVEL ASSESSMENT
[Figure: within the institutional context, student outcomes are shaped by pre-college traits of students, coursework and curricular patterns, classroom experiences, and out-of-class experiences.]
GRADES ≠ ASSESSMENT
• Grades have limited use for program
assessment as they do not have diagnostic
value.
• Grades can be a ‘flag,’ but do not point to
specific strengths and weaknesses of what
students know or can do.
• A student's grade in a course or on a project or exam represents the student's performance on an aggregated set of knowledge/skills.
DIFFERENCES BETWEEN
CLASSROOM AND PROGRAM
ASSESSMENT
Degree of complexity
Time span
Accountability for the assessment process
Cost
Level of faculty buy-in
Level of precision of the measure
[Figure: degree of interest/commitment (low, medium, high) plotted against assessment focus. G. Rogers, ABET, Inc.]
STUDENT OUTCOMES
MEASURING STUDENT ABILITIES
DEFINITION: STUDENT
OUTCOMES
• From ABET Criteria: Student outcomes describe what students are expected to know and be able to do by the time of graduation. These relate to the skills, knowledge, and behaviors that students acquire as they progress through the program.
PROGRAM ASSESSMENT
[Figure: example hierarchy.
Program educational objective: Graduates will solve complex problems and participate in a team-based environment.
Student outcomes: Students will demonstrate an ability to identify, formulate, and solve complex problems, and the ability to function effectively on a team.
(G. Rogers, ABET, Inc.)]
[Figure: example hierarchy, continued.
Program educational objective: Graduates will solve complex problems and participate in a team-based environment.
Student outcome: Ability to function effectively on a team.
Performance indicators: researches and gathers information; fulfills duties of team roles; shares in the work of the team; listens to other teammates; makes contributions; takes responsibility; values other viewpoints.
(G. Rogers, ABET, Inc.)]
REMEMBER: Arrange, Define, Describe, Duplicate, Identify, Label, List, Match, Name, Order, Outline, Recite, Recognize, Relate, Repeat, Reproduce, Select, State, Tabulate, Tell
UNDERSTAND: Classify, Compare, Compute, Convert, Contrast, Defend, Describe, Differentiate, Distinguish, Estimate, Explain, Extrapolate, Generalize, Interpolate, Locate, Paraphrase, Predict, Recognize, Review, Summarize, Translate
APPLY: Apply, Change, Choose, Calculate, Classify, Demonstrate, Determine, Employ, Examine, Illustrate, Interpret, Modify, Operate, Practice, Predict, Prepare, Produce, Restructure, Schedule, Sketch, Solve, Use
ANALYZE: Analyze, Appraise, Break down, Calculate, Categorize, Compare, Contrast, Criticize, Debate, Diagram, Differentiate, Discriminate, Distinguish, Examine, Experiment, Identify, Infer, Inventory, Relate, Separate, Subdivide, Test
EVALUATE: Appraise, Argue, Assess, Choose, Compare, Contrast, Criticize, Defend, Discriminate, Estimate, Evaluate, Explain, Interpret, Judge, Measure, Predict, Rank, Rate, Recommend, Select, Support, Validate
CREATE: Arrange, Assemble, Construct, Collect, Compose, Create, Design, Develop, Formulate, Integrate, Manage, Organize, Plan, Prepare, Prescribe, Produce, Propose, Specify, Synthesize, Write
STUDENT OUTCOMES SHOULD
FOCUS ON:
• What do you want students to know/do by
the time they complete the program?
• Transfer of learning
• What is the cumulative knowledge or set of skills you want students to demonstrate by the time they complete the program?
Adapted from quote by William Bruce Cameron
PERFORMANCE INDICATORS
COMPARABLE TO LEADING
INDICATORS
• Concept used in economics
• Identify specific characteristics of the economy
that are significant indicators of the current state
and predict future trends
• Not every characteristic is used
• Only those that have been found to be the most critical in predicting how well the economy is doing
• Several characteristics taken together
DEVELOPING PERFORMANCE
INDICATORS
• Two essential parts
• Subject content
• Content that is the focus of instruction (e.g.,
steps of the design process, chemical
reaction, scientific method)
• Action verb
• Direct students to a specific performance
(e.g., “list,” “analyze,” “apply”)
WHAT TO AVOID IN WRITING
PERFORMANCE INDICATORS:
• Verbs that describe feelings, emotions, thoughts or
similar features that are not observable or
measurable
• E.g., appreciate, believe, know, learn, realize, think,
understand.
• Descriptions of what the student will do
• E.g., “write a paper on social issues,” “demonstrate
how to use a laser guide . . .”
• REMEMBER: write performance indicators from
the perspective of what the student should be able
to demonstrate by means of the assessment
PERFORMANCE INDICATORS
• Students should be able to:
• <<action verb>>
• <<something>>
• Learner-centered
• Specific action-oriented
• Measurable
• Cognitively appropriate for intended level
Application:
PERFORMANCE INDICATORS
CHOOSE AN OUTCOME
1. An ability to communicate effectively (speaking)
(EAC, CAC)
2. Knowledge of contemporary issues (EAC)
3. Recognition of the need for, and an ability to engage
in continued professional development (EAC, CAC,
ETAC)
4. Ability to identify, formulate and solve technical
problems (EAC, CAC, ETAC)
5. Ability to use current techniques, skills, and tools
necessary for practice (EAC, CAC)
6. Knowledge of the impact of … solutions in a societal
and global context (EAC, CAC)
SILENT BRAINSTORMING
AFFINITY PROCESS
Step 3 (20 minutes): Once each person has developed performance
indicators (one performance indicator per post-it note), place all the
post-it notes on the flip chart.
As you have all been working on the same student outcome there will
be some similarities between the performance indicators. Group the
performance indicators by content NOT action verb.
After all the Post-It notes have been grouped, the team should
determine an appropriate action verb for each performance indicator
grouping. Be sure to discuss any “outliers.” It is not unusual that these
are important and should not be overlooked.
DOCUMENT PERFORMANCE
INDICATORS
Step 4 (5 minutes): The final step is to draft your finalized
performance indicators.
IMPORTANCE OF WELL-STATED
PERFORMANCE INDICATORS
• Provides faculty with clear
understanding for implementation in
the classroom
• Makes expectations explicit to
students (great pedagogy)
• Focuses data collection
CRITERION 3 – COMMON ISSUES
STUDENT OUTCOMES
• Student outcomes are stated such that
attainment is not measurable. (Note: Having
student outcomes whose attainment is not measurable is
not by itself a violation of any criterion, but if attainment
of an outcome is not measurable then the extent to
which it is attained may not be appropriately evaluated,
as required in Criterion 4.)
• There is missing or incomplete justification as to
how the student outcomes prepare graduates to
attain the program educational objectives.
Rubrics
RUBRICS
Scoring the level of student performance
WHAT IS A RUBRIC?
"Rubrics" are a way of explicitly stating the expectations
for student performance. They may lead to a grade or
be part of the grading process but they are more
specific, detailed, and disaggregated than a grade.
LEVELS OF PERFORMANCE: COMMUNICATION SKILLS
[Figure: anatomy of a rubric. The rows (dimensions) are performance indicators #1 through #4; the columns are the levels of performance (Unsatisfactory = 1, Developing = 2, Satisfactory = 3, Exemplary = 4); each cell holds a descriptor of what performance at that level looks like for that indicator.]
WHAT IS A RUBRIC?
Tool to score student performance in an assessment
environment (e.g., oral presentation, local exam,
performance observation, etc.)
Can be used for both formative and summative purposes
Defines expectations and is especially useful when dealing with processes or abstract concepts
Provides a common "language" to help faculty and
students talk about expected learning
Increases reliability of the assessment when using
multiple raters
PURPOSE OF RUBRIC
• How you are going to use the results drives decisions
about rubrics
• What kind of feedback do you want?
• Individual student/program
• General/specific
• How will data be collected?
• Formative/summative
• Developmental over time/single point in time
• For whom?
• Student
• Faculty member
• Program
HOW ARE YOU GOING TO USE
RESULTS?
• Do you want general information
about student performance?
• Do you want specific information
about student competence?
WORK EFFECTIVELY IN TEAMS
UNSATISFACTORY: Does not collect any information that relates to the topic. Does not perform any duties of the assigned team role. Always relies on others to do the work. Is always talking--never allows anyone else to speak.
DEVELOPING: Collects some information related to the topic, but it is incomplete. Inconsistently performs duties that are assigned. Rarely does the assigned work--often needs reminding. Usually does most of the talking--rarely allows others to speak.
SATISFACTORY: Collects basic information related to the topic. Performs duties that are assigned. Usually does the assigned work--rarely needs reminding. Listens most of the time.
EXEMPLARY: Collects a great deal of information which goes beyond the basics. Performs all duties assigned and actively assists others. Always does the assigned work without having to be reminded. Consistently listens and responds to others appropriately.
EXAMPLE OF RESULTS -
FORMATIVE
[Figure: holistic scoring results; about 50% of students performed at or above the satisfactory level; n = 60 (population).]
TYPE OF RUBRIC - ANALYTIC
• Analytic performance levels focus on
specific dimensions of student
performance related to performance
indicators.
• Dimensions are presented in separate
categories and rated individually.
• Each performance indicator is rated
separately.
RESEARCH & GATHER INFORMATION. Unsatisfactory: Does not collect any information that relates to the topic. Developing: Collects very little information--some relates to the topic. Satisfactory: Collects some basic information--most relates to the topic. Exemplary: Collects a great deal of information--all relates to the topic.
FULFILL TEAM ROLE'S DUTIES. Unsatisfactory: Does not perform any duties of assigned team role. Developing: Performs very few duties. Satisfactory: Performs nearly all duties. Exemplary: Performs all duties of assigned team role.
LISTEN TO OTHER TEAMMATES. Unsatisfactory: Is always talking--never allows anyone else to speak. Developing: Usually doing most of the talking--rarely allows others to speak. Satisfactory: Listens, but sometimes talks too much. Exemplary: Listens and speaks a fair amount.
TEAMING SKILLS - FORMATIVE
[Figure: analytic scoring results, percent of students with satisfactory or exemplary performance, n = 60 (population): Research Information 81%, Fulfill Roles 55%, Share in Work 38%, Listening 25%.]
[Figure: the same data shown as stacked bars, breaking out the percent of students at each performance level for researches/gathers info, fulfills roles and duties, shares in work, and listens.]
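The charts above report analytic rubric scores as the percent of students at or above the satisfactory level for each performance indicator. As an illustration only (the indicator names, scores, and threshold below are assumptions, not data from the workshop), a short Python sketch of that tally might look like this:

```python
from collections import defaultdict

# Hypothetical analytic-rubric scores: one record per student per indicator,
# on the 1-4 scale (1 = Unsatisfactory ... 4 = Exemplary).
scores = [
    {"student": "S01", "indicator": "Research Information", "score": 4},
    {"student": "S01", "indicator": "Fulfill Roles", "score": 2},
    {"student": "S02", "indicator": "Research Information", "score": 3},
    {"student": "S02", "indicator": "Fulfill Roles", "score": 3},
    # ... one row per student per performance indicator
]

SATISFACTORY = 3  # assumed threshold: satisfactory or exemplary

def percent_at_or_above(records, threshold=SATISFACTORY):
    """Return {indicator: percent of students scoring at or above threshold}."""
    totals = defaultdict(int)
    meeting = defaultdict(int)
    for rec in records:
        totals[rec["indicator"]] += 1
        if rec["score"] >= threshold:
            meeting[rec["indicator"]] += 1
    return {ind: 100.0 * meeting[ind] / totals[ind] for ind in totals}

if __name__ == "__main__":
    for indicator, pct in percent_at_or_above(scores).items():
        print(f"{indicator}: {pct:.0f}% at or above satisfactory")
```

Reporting by indicator rather than as a single holistic number is what gives the analytic results their diagnostic value, as the next slide notes.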
STRENGTH OF ANALYTIC
RUBRIC
• Provides information about relative strengths
and weaknesses of student performance related
to an outcome.
• Provides detailed feedback which can be used
to promote curricular enhancements
• Useful for assessment of abstract concepts or
processes
• Provides students an opportunity to self-assess
their understanding or performance
GENERIC OR TASK-SPECIFIC
RUBRIC
• Generic
• Rubric that can be used across similar
performances (used across all
communication tasks or problem-solving
tasks)
• Task-specific
• Rubric which is designed for a single task
• Cannot be generalized across a wide variety
of student work
HOW MANY LEVELS OF
PERFORMANCE?
• Consider both the nature of the performance and
purpose of scoring
• Recommend 3 to 5 levels to describe student
achievement at a single point in time
• If focused on developmental curriculum (growth
over time) more performance levels are needed
(i.e., 6-???)
• The more performance levels there are, the more difficult it is to achieve inter-rater reliability
DEVELOPING RUBRICS
• Be clear about how the rubric is to be used
• Program assessment
• Individual student assessment
• Analytic/Holistic
• For process improvement, analytic rubric
provides information that can be used to focus
instruction in areas of weakness
• Can use student work as a guide in developing
rubric
• Start with extremes and work toward middle
• Pilot test
• Rubric development is a process
PLEASE RATE EACH MEMBER OF THE TEAM ON THE FOLLOWING SCALE:
Unsatisfactory = 1, Developing = 2, Satisfactory = 3, Exemplary = 4
[Figure: peer-rating form listing each team member's name against attributes such as "Produces research information for team" and "Demonstrates understanding of team roles when assigned," each scored 1-4.]
[Figure: the corresponding analytic rubric (dimensions: Research & Gather Information, Share in Work of Team, Listen to Other Teammates, each described across the four performance levels) and a score sheet for recording each student's rating on Research & Gather Information, Fulfill Team Role's Duties, Share in Work of Team, and Listen to Other Teammates.]
DEVELOPING RUBRICS
Identify characteristics you want to be
demonstrated by students (Performance
Indicators)
Determine how rubric will be used:
Analytic or holistic, generic or task-
specific
Write narrative description for each
performance level (satisfactory, excellent,
etc.)
Application:
DEVELOPMENT OF RUBRICS
RUBRIC TEMPLATE
Student Outcome: _______________________________________
[Template: one row per performance indicator, one column per performance level; each column heading names the performance level and gives its descriptor, and each cell describes performance at that level for that indicator.]
EXERCISE: RUBRIC DEVELOPMENT
(30 MINUTES)
Step 1: Using the outcome and performance indicators you developed,
create an analytic rubric (at least four rows).
Step 2: Determine how many performance levels you will use.
Step 3: Write a description of each performance level. Descriptions should be as value-free as you can make them (avoid words like "many," "most," "few," "little").
Step 4: Remember: How will the findings be used? Will findings
enable you to make decisions about program improvement?
Step 5: Use the template provided or develop your own using a blank
piece of paper. Please use a dark pen or fine-tipped marker so that
your rubric can be seen using the document camera.
Application:
CALIBRATING RUBRICS
EXERCISE: RUBRIC
CALIBRATION PROCESS
Constraints:
• We are not grading the papers.
• Do not change the rubrics.
• Don’t overthink your assessment.
• Think globally about the student work and about the
learning skill.
• Start with the high rubric level and work backward. Ask
what is missing here that would bring the score down?
• N/A may exist (meaning that the work is not
intended to meet a particular performance indicator).
CALIBRATION PROCESS
An assignment was given asking students to write a one-page executive summary of a paper on the ethical considerations of the Bhopal disaster. The assignment was designed to demonstrate the student outcomes: professionalism and ethics, and writing.
* AAC&U, 2013
EXAMPLE OF SCORE SHEET (ETHICS)
[Figure: score sheet on which each rater enters their initials under Needs Improvement, Meets Expectations, or Exceeds Expectations for each element: knowledge of codes; ethical/professional behavior; recognizing ethical dilemmas; articulation of ideas; use of graphs/tables/etc.]
Throughout history there have been many disasters involving chemicals that have affected the
chemical industry. Out of all these disasters there may not be a disaster with more impact on the
industry than the Bhopal Disaster of 1984. The Bhopal Disaster took place on the night of December
3rd, 1984 at a Union Carbide plant in Bhopal, India. This plant was used to produce a pesticide
called Sevin. The main ingredient in Sevin is methyl isocyanate (MIC). MIC reacts violently with
water, and inhalation of its vapors can cause blindness and severe lung damage. Large quantities of
MIC were being held in three large storage tanks. With normal safety measures in place at the plant,
most common accidents would not be a problem. Unfortunately the Bhopal plant did not have all
the necessary safety nets in place.
In the late hours of December 3, 1984, approximately one ton of water leaked into a tank containing
MIC. As the water reacted with the MIC, the tank the pressure built until it blew the top off the tank.
The iron in the tank reacted with the MIC and caused a large secondary explosion, sending large
amounts of MIC into the air. Because MIC is heavier than air, the MIC vapors settled on a high
density of people living around the plant. The secondary explosion resulted in the death of around
3,800 people in the hours following the explosion. As the night wore on more MIC leaked into the
air and more and more people were exposed to MIC. After the first few days of the accident it was
estimated that around 10,000 people had died and 500,000 people had been exposed to MIC.
When a disaster of this magnitude takes place there are always multiple factors that have an effect.
This disaster was no different and ethical considerations are a major part. The basics of the disaster
is the problem of safety violations by plant management, which is the first concern of the Code of
Conduct of engineers. The reports on the disaster show that there were many safety measures that
could have helped stop this disaster. A major failure in safety was that preventative measures in
place in the plant for MIC leaking did not work or had been shut down to save on costs. With all of
these safety measures out of commission there was almost no way to stop the MIC once it got out of
the storage tank. Another contributing factor to the disaster was that the operators of the plant
were not well trained. The operator working that night did not understand the chemical process
and was not trained to deal with what was going on. He called one of the scientists who had
invented the process and was on site. By the time the scientist got to the operation control room it
was too late to stop the disaster. The last factor is political in that at the high density of people lived
as “squatters” in shanty towns around the plant. Once the MIC was in the air it was able to affect a
large amount of people in a short amount of time.
While this horrible disaster is a gross example of the consequences of poor manufacturing
practices, it serves as a lesson in safety and ethics to the chemical industry around the world.
Clearly safety equipment should not be shut down as the harm to the public was significant. Second,
the MIC should have been stored in smaller quantities, a cost consideration that should have been
managed. Training of employees should also be mandatory since the Engineering Code of Conduct
requires that engineers only work in their area of expertise. Citizens living near chemical plants
also have the right to know about the dangers of the chemical produced at the plant. However,
governments should also enforce zoning and possibly make sure a safe border exists around plants
that produce harmful chemicals. The silver lining is that this disaster led to many changes in the
chemical industry and led to a closer look at better safety and ethics in plant management for the
future.
Outcome H: An understanding of professional and ethical responsibility
Rating scale:
Knowledge of professional codes of ethics. Needs Improvement: Is aware of ethical standards such as the Code of Professional Engineers, the AIChE Code of Ethics, and the MSU Students' Rights and Responsibilities Document. Meets Expectations: Applies relevant aspects of codes of ethics when considering possible alternative decisions or solutions. Exceeds Expectations: Evaluates and judges a situation and possible future actions in terms of the appropriate professional code of ethics.
Demonstration of professional and ethical behavior in the classroom [attendance, punctuality, professional work submitted]. Needs Improvement: Student work is unprofessional; has been caught plagiarizing. Meets Expectations: Student work is acceptable, but not exemplary; usually punctual with fairly regular class attendance. Exceeds Expectations: Is punctual, professional, and collegial; attends classes regularly; work is always neat and professional.
Recognition of ethical dilemmas and use of appropriate tools and strategies in making ethical decisions [recognizes when an issue is an ethical decision versus a purely technical decision, applies decision-making models, applies code(s) of ethics]. Needs Improvement: Identifies a situation in which ethical issues are concerned for the individual or other stakeholder, but does not use ethical decision-making models or uses personal opinion to evaluate. Meets Expectations: Applies ethical decision-making tools when considering an ethical issue in engineering or in the campus classroom; simple approach with little or no additional analysis. Exceeds Expectations: Identifies an ethical dilemma; evaluates and judges a situation using appropriate analysis tools; evaluates the credibility of information to make sound judgments.
OUTCOME G: An ability to communicate effectively (WRITING)
CALIBRATION PROCESS
Step 3 -- Group Assessment (10 minutes): As a group, re-read the
student summary and rate it using the rubrics. Each score should be
reached by consensus. Record any instances where a consensus
cannot be reached or any additional comments you consider pertinent.
Step 4 -- Rubric Critique (15 minutes): Discuss how useful the
rubrics were. Things to consider are:
• Number of performance levels;
• Description of performance levels;
• Language specificity (was the language vague or subjective);
• Usefulness: does the scoring provide useful information about
areas of strength and the need for improvement?
Step 5 -- Recommendations (5 minutes): What recommendations
would you make to improve the rubrics?
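Calibration sessions like this one are usually judged by how well raters agree before and after discussion. As a hedged illustration (the rater names and scores below are invented, not drawn from the exercise), a simple exact-agreement check in Python could look like this:

```python
# Hypothetical ratings from three raters on the same set of student papers,
# one integer score per paper per rater (e.g., 1 = Needs Improvement,
# 2 = Meets Expectations, 3 = Exceeds Expectations).
ratings = {
    "rater_A": [2, 3, 1, 2, 3],
    "rater_B": [2, 3, 2, 2, 3],
    "rater_C": [2, 2, 1, 2, 3],
}

def percent_exact_agreement(rater_scores):
    """Percent of papers on which every rater gave the identical score."""
    papers = list(zip(*rater_scores.values()))  # one tuple of scores per paper
    agree = sum(1 for paper in papers if len(set(paper)) == 1)
    return 100.0 * agree / len(papers)

print(f"Exact agreement: {percent_exact_agreement(ratings):.0f}%")
```

Percent agreement is a crude index because it ignores chance agreement; a statistic such as Cohen's or Fleiss' kappa is the usual next step if the program wants a more defensible measure of inter-rater reliability.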
COURSE ASSESSMENT (WRITING)
[Figure: concept map. SUBJECT: Writing; TOPICS: Content, Organization, Style; CONCEPTS: focus, supporting details, coherence, transitions, voice, word choice, sentence fluency, conventions.]
PROGRAM ASSESSMENT (WRITING), ABBREVIATED
[Figure: program educational objective, student outcome, and performance indicators; the abbreviated set of writing performance indicators here is supporting details, organization, and style.]
[Figure: the writing concept map (subject: Writing; topics: Content, Organization, Style; concepts: focus, supporting details, coherence, transitions, voice, word choice, sentence fluency, conventions) mapped to an abbreviated set of program-level performance indicators: supporting details, audience, conventions, graphics.]
Writing Skills Rubric
http://www.kent.k12.wa.us/KSD/KR/CP/WritingSkillsRubric.doc
(Levels: Exceeds standard, Meets standard, Progressing to standard, Below standard)
Focus. Exceeds standard: Maintains exceptional focus on the topic. Meets standard: Maintains consistent focus on the topic. Progressing to standard: Provides inconsistent focus on the topic. Below standard: Demonstrates little or no focus.
Supporting Details. Exceeds: Provides ample supporting details. Meets: Provides adequate supporting details. Progressing: Includes some details, but may include extraneous or loosely related material. Below: Includes inconsistent or few details which may interfere with the meaning of the text.
Coherence. Exceeds: Organizational pattern is logical; conveys completeness & wholeness. Meets: Organizational pattern is logical; conveys completeness & wholeness with few lapses. Progressing: Achieves little completeness & wholeness though organization attempted. Below: Little evidence of organization or any sense of wholeness & completeness.
Transitions. Exceeds: Provides transitions that eloquently serve to connect ideas. Meets: Provides transitions which serve to connect ideas. Progressing: Provides transitions which are weak or inconsistent. Below: Uses poor transitions or fails to provide transitions.
Voice. Exceeds: Allows the reader to sense the person behind the words. Meets: Some sense of the person behind the words is evident. Progressing: Some sense of the person behind the words is attempted. Below: Little or no sense of the person behind the words is evident.
Word Choice. Exceeds: Uses effective language; makes engaging, appropriate word choices for audience & purpose. Meets: Uses effective language & appropriate word choices for intended audience & purpose. Progressing: Limited & predictable vocabulary, perhaps not appropriate for intended audience & purpose. Below: Has a limited or inappropriate vocabulary for the intended audience & purpose.
Sentence Fluency. Exceeds: Sentences/phrases appropriately varied in length & structure. Meets: Sentences/phrases somewhat varied in length & structure. Progressing: Shows limited variety in sentence length & structure. Below: Has little or no variety in sentence length & structure.
Conventions. Exceeds: Consistently follows the rules of Standard English for conventions. Meets: Generally follows the rules for Standard English for conventions. Progressing: Generally does not follow the rules of Standard English for conventions. Below: Does not follow the rules of Standard English for conventions.
Ability to write effectively
(Levels: Exceeds standard, Meets standard, Progressing to standard, Below standard)
Provides supporting details which enhance the quality of the report. Exceeds: Provides clarity of detail that enhances the overall quality of the report. Meets: Provides details that support the premise of the report. Progressing: Includes some details, but also includes extraneous or loosely related material. Below: Includes inconsistent or few details which interfere with the meaning of the text.
Uses a logical organizational pattern which enhances understanding. Exceeds: Organizational pattern is logical and conveys completeness & wholeness. Meets: Organizational pattern is logical with only minor lapses in coherence. Progressing: Evidence of organization, but completeness & wholeness are lacking. Below: Little evidence of organization or any sense of wholeness & completeness.
Uses language which is appropriate to the audience. Exceeds: Uses effective language; makes engaging, appropriate word choices for audience & purpose. Meets: Uses effective language & appropriate word choices for intended audience & purpose. Progressing: Limited & predictable vocabulary, perhaps not appropriate for intended audience & purpose. Below: Has a limited or inappropriate vocabulary for the intended audience & purpose.
Applies the rules of standard English. Exceeds: Consistently follows the rules of Standard English for conventions. Meets: Basically follows the rules for Standard English for conventions, with only minor lapses. Progressing: Generally does not follow the rules of Standard English for conventions. Below: Does not follow the rules of Standard English for conventions.
Uses graphics which enhance audience understanding. Exceeds: Figures and charts are appropriate, clear, and communicate well to the audience. Meets: Figures and charts are clear and, with a few exceptions, communicate clearly to the audience. Progressing: Figures and charts are used to communicate but lack consistency in format and style, detracting from audience understanding. Below: Figures and charts are missing or have deficiencies in formatting and style which detract from understanding.
Adapted from http://www.kent.k12.wa.us/KSD/KR/CP/WritingSkillsRubric.doc
Student total possible points = 100
COMMUNICATION SKILLS
(60 STUDENTS/2 SECTIONS)
[Figure: two bar charts showing, for each performance indicator (supporting details, coherence, audience, conventions, graphics), the percent of students scoring below, progressing to, meeting, and exceeding the standard.]
USE OF EXCEL TO
SCORE PERFORMANCE
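The original slides for this section showed spreadsheet screenshots that are not reproduced here. As one possible sketch (the file name, sheet name, and column layout below are assumptions, not the workshop's actual workbook), rubric scores kept in an Excel file can be summarized with pandas:

```python
import pandas as pd

# Assumed layout: one row per student, one column per performance indicator,
# cells holding rubric scores on the 1-4 scale.
scores = pd.read_excel("rubric_scores.xlsx", sheet_name="Scores", index_col="Student")

# Mean score and percent at or above satisfactory (assumed threshold of 3) per indicator.
summary = pd.DataFrame({
    "mean score": scores.mean(),
    "% satisfactory or above": (scores >= 3).mean() * 100,
})
print(summary.round(1))
```

The same tallies can, of course, be done with spreadsheet formulas; the point is simply that once scores are recorded per student and per indicator, the program-level summaries shown earlier fall out mechanically.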
Curriculum Map
CURRICULUM MAPPING
Linking results to practice
LINKING RESULTS TO
PRACTICE
“I think you should be more
explicit here in Step Two.”
• Development of Curriculum
Map
• Linking curriculum content/
pedagogy to knowledge,
practice and demonstration
of performance indicators.
PROGRAM ASSESSMENT
[Figure: curriculum grid of courses from the first year through the senior year (CL100, MA111, EM104, PH112, ME123, ... , ME electives, HSS electives), showing where each student outcome is addressed: technical, ethics, global, teams, cultural, communication skills, contemporary issues.]
PURPOSE OF CURRICULUM
MAP
• Demonstrates the alignment of the
curriculum to student
outcomes/performance indicators
• Enhances decisions about where to collect
data for summative assessment
• Guides the evaluation process and
decision-making about curriculum
improvements
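A curriculum map is easy to keep in machine-readable form, which helps when deciding where to collect data. Below is a minimal sketch, with made-up course numbers and indicator names, using the I/R/E coverage codes that appear in the example maps later in this section:

```python
# Hypothetical curriculum map: performance indicator -> {course: coverage code}
# I = Introduce, R = Reinforce, E = Emphasize (codes as in the example maps below).
curriculum_map = {
    "communication: supporting details": {"ENG 200": "I", "CS 214": "R", "ME 430": "E"},
    "teamwork: fulfills team role":      {"CS 201": "I", "CS 310": "R", "CS 424": "E"},
}

def courses_covering(indicator, level=None):
    """List courses addressing an indicator, optionally filtered by coverage code."""
    coverage = curriculum_map.get(indicator, {})
    return [course for course, code in coverage.items() if level is None or code == level]

# Candidate sites for summative data collection: courses that Emphasize the indicator.
print(courses_covering("communication: supporting details", level="E"))
```

This is only an illustration of one way to represent the map; any format that records indicator, course, and level of coverage will support the same queries.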
[Example of a course-level curriculum map survey. For each performance indicator, the instructor reports three things:]
Performance indicator Explicit: the indicator is explicitly stated as a performance for this course.
Demonstrate Competence: students are asked to demonstrate their competence on this performance indicator through homework, projects, tests, etc.
Formal Feedback: students are given formal feedback on their performance on this indicator.
Not covered: this performance indicator is not addressed in this course.
Note: Clicking on the link 'view rubric' will show the scoring rubric for that particular performance indicator related to the outcome.
[Sample responses (Explicit / Demonstrate / Feedback):]
2. Evaluate the ethical dimensions of professional engineering, mathematical, and scientific practices. View rubric or make a comment (optional): YES YES YES
2. Fulfill Team Role's Duties. View rubric or make a comment (optional): YES YES YES
3. Share in work of team. View rubric or make a comment (optional): YES YES YES
4. Listen to Other Teammates. View rubric or make a comment (optional): YES YES YES
2. Provide content that is factually correct, supported with evidence, explained with sufficient detail, and properly documented. View rubric or make a comment (optional): YES YES YES
3. Test readers/audience response to determine how well ideas have been relayed. View rubric or make a comment (optional): YES YES YES
4. Submit work with a minimum of errors in spelling, punctuation, grammar, and usage. View rubric or make a comment (optional): YES YES YES
COMPILE THE MAP:
Curriculum map for communication skills
[Table: first-year through senior courses in which communication skills are addressed, e.g., Intro to Eng, Gen Ed, Statics, Seminar, Materials, Design I.]
BUSINESS ADMINISTRATION MAP
[Table: curriculum map marking where each competency is Introduced (I), Reinforced (R), or Emphasized (E) across the business core (ECON 207, ECON 208, CS 214, ENG 200, MATH 1165, and BUSI 201, 203, 211, 231, 241, 251, 252, 281, 371: economics, computer applications, business writing, pre-calculus and statistics for business, introduction to business, management, marketing, accounting I and II, business law, finance, and international business courses).]
Writing competencies: identify a subject and formulate a thesis statement; organize ideas to support a position; write in a unified and coherent manner appropriate to the subject matter; use appropriate sentence structure and vocabulary; document references and citations according to an accepted style manual.
Critical thinking competencies: identify business problems and apply creative solutions; identify and apply leadership techniques; translate concepts into current business environments; analyze complex problems by identifying and evaluating the components of the problem.
Quantitative reasoning competencies: apply quantitative methods to solving real-world problems; perform necessary arithmetic computations to solve quantitative problems; evaluate information presented in tabular, numerical, and graphical form.
Legend: I = Introduce (knowledge/comprehension); R = Reinforce (application/analysis); E = Emphasize (evaluation/synthesis).
[Table: a parallel curriculum map for a computing program (MA 207, MA 208, CS 214, ENG 200, MATH 365, and CS 201, 203, 211, 231, 241, 310, 312, 325, 412, 424), with each cell coded K = Knowledge/Comprehension, A = Application/Analysis, or E = Evaluate/Create, and with (F) and (S) flags on selected courses.]
Written communication: identify a subject and formulate a thesis statement; organize ideas to support a position; write in a unified and coherent manner appropriate to the subject matter; use appropriate sentence structure and vocabulary; document references and citations according to an accepted style manual.
Problem solving: identify computing problems and apply creative solutions; identify and apply leadership techniques; translate concepts into current computing environments; analyze complex problems by identifying and evaluating the components of the problem.
Quantitative reasoning: apply quantitative methods to solving real-world problems; perform necessary computations to solve quantitative problems; evaluate information presented in tabular, numerical, and graphical form.
Assessment
Methods
ASSESSMENT METHODS
TYPES OF ASSESSMENT
FORMATIVE VS. SUMMATIVE
Formative – undertaken as students progress through the course/curriculum; the purpose is to identify areas of learning that need to be improved before the end of the course/program.
Summative – obtained at the end of a course or program; the purpose is to document student learning; designed to capture students' achievement at the end of their program of study.
DIRECT VS. INDIRECT
Direct – provides for the direct examination or observation of student knowledge or skills against measurable student outcomes.
Indirect – ascertains the opinion or self-report of the extent or value of learning.
OBJECTIVE VS. SUBJECTIVE
Objective – needs no professional judgment to score correctly; examples: multiple-choice, true-false, exams where there is a finite number of "right" answers.
Subjective – yields many possible answers of varying quality and requires professional judgment to score.
EMBEDDED VS. ADD-ON
Embedded – program assessments that are taken as a part of the course work.
Add-on – assessments that are in addition to course requirements.
QUANTITATIVE VS. QUALITATIVE
Quantitative – predetermined response options that can be summarized into meaningful numbers and analyzed statistically.
Qualitative – uses flexible, naturalistic methods, usually analyzed by looking for recurring patterns and themes.
ASSESSMENT METHODS
CONTEXT FOR DATA COLLECTION
DIRECT MEASURES
Provide for the direct examination or observation of student knowledge or skills against measurable student outcomes: the student demonstrates learning.
INDIRECT MEASURES
Ascertain the opinion or self-report of the extent or value of learning: the student describes learning.
DIRECT
• Exit and other interviews
• Standardized exams
• Locally developed exams
• Portfolios
• Simulations
• Performance Appraisal
• External examiner
• Oral exams
INDIRECT
• Written surveys and questionnaires
• Exit and other interviews
• Archival records
• Focus groups
Application:
APPLICATION: ASSESSMENT
METHOD RESOURCE
Step 1. Review of methods (25 minutes)
• Meet with the representatives from the other tables who have
been assigned the same methods as yours.
• Spend 20 minutes total discussing together the highlights of
advantages and disadvantages for each assigned method
clarifying any questions that you might have.
• Discuss plans to “teach back” the methods to those at your
team table. You have three minutes per method for the teach
back.
• Remember, in the teach back process it does not make any
difference if you like the method or not. It is only your
responsibility to learn about the method so that you can teach
others about it. You will get an opportunity to lobby
for/against it during Step 3 below.
APPLICATION: ASSESSMENT
METHOD RESOURCE
Step 2: Teach back at your team table (33 minutes): Appoint someone to be a timekeeper. Start with method one: whoever studied method #1 will teach the method to the others at your table. Continue until all methods are covered. Spend no more than 3 minutes per method.
Step 3: Assignment (10 minutes): After sharing the assessment
methods, choose THREE methods that can be used to assess the
student outcome for which you developed/critiqued
performance indicators. At least one method chosen must be a
direct method. Record your findings so that you can share your
recommendations. Include an example of how the method
could be used to assess the outcome.
Assessment Methods*
1. Written surveys and questionnaires ‐ Asking individuals to share their perceptions about
a particular area of interest—e.g., their own or others’ skills/attitudes/behavior, or
program/course qualities and attributes.
2. Exit and other interviews ‐ Asking individuals to share their perceptions about a
particular area of interest—e.g., their own skills/attitudes, skills and attitudes of others, or
program qualities—in a face‐to‐face dialog with an interviewer.
5. Focus groups ‐ Guided discussion of a group of people who share certain characteristics
related to the research or evaluation question, conducted by trained moderator.
6. Portfolios (collections of work samples, usually compiled over time and rated using scoring
rubrics).
9. External Examiner ‐ Using an expert in the field from outside your program – usually from
a similar program at another institution – to conduct, evaluate, or supplement the
assessment of students.
10. Archival Records ‐ Biographical, academic, or other file data available from college or other
agencies and institutions.
*Except where noted, materials relating to the advantages and disadvantages of assessment methods have been modified
by Gloria Rogers and used with permission. Prus, J. and Johnson, R., "Assessment & Testing Myths and Realities." New
Directions for Community Colleges, No. 88, Winter 94. These materials cannot be duplicated without the expressed
written consent of the authors.
GLOSSARY*
Backload (‐‐ed, ‐‐ing): amount of effort required after the data collection.
Competency: level at which performance is acceptable.
Confounded: confused.
Convergent validity: general agreement among ratings, gathered independently of one another,
where measures should be theoretically related.
Criterion‐referenced: criterion‐referenced tests determine what test takers can do and what they
know, not how they compare to others. Criterion‐referenced tests report how well students
are doing relative to a pre‐determined performance level on a specified set of educational
goals or outcomes included in the curriculum.
Externality: Externality refers to the extent to which the results of the assessment can be
generalized to a similar context.
External validity: External validity refers to the extent to which the results of a study are
generalizable or transferable to other settings. Generalizability is the extent to which
assessment findings and conclusions from a study conducted on a sample population can be
applied to the population at large. Transferability is the ability to apply the findings in one
context to another similar context.
Forced‐choice: the respondent only has a choice among given responses (e.g., very poor, poor, fair,
good, very good).
Formative assessment: intended to assess ongoing program/project activity and provide
information to improve the project. Assessment feedback is short term in duration.
Frontload (‐‐ed, ‐‐ing): amount of effort required in the early stage of assessment method
development or data collection.
Generalization (generalizability): The extent to which assessment findings and conclusions from a
study conducted on a sample population can be applied to the population at large.
Goal‐free evaluation: Goal‐free evaluation focuses on actual outcomes rather than intended
program outcomes. Evaluation is done without prior knowledge of the goals of the
program.
Inter‐rater reliability: the degree to which different raters/observers give consistent estimates of
the same phenomenon.
Internal validity: Internal validity refers to (1) the rigor with which the study was conducted (e.g.,
the study's design, the care taken to conduct measurements, and decisions concerning what
was and wasn't measured) and (2) the extent to which the designers of a study have taken
into account alternative explanations for any causal relationships they explore.
Longitudinal studies: Data collected from the same population at different points in time.
Norm (‐‐ative): a set standard of development or achievement usually derived from the average or
median achievement of a large group.
Norm‐reference: A norm‐referenced test is designed to highlight achievement differences between
and among students to produce a dependable rank order of students across a continuum of
achievement from high achievers to low achievers.
Observer effect: the degree to which the assessment results are affected by the presence of an
observer.
Open‐ended: assessment questions that are designed to permit spontaneous and unguided
responses.
Operational (‐‐ize): defining a term or object so that it can be measured. Generally states the
operations or procedures used that distinguish it from others.
Reliability: Reliability is the extent to which an experiment, test, or any measuring procedure
yields the same result on repeated trials
Rubrics: A rubric is a set of categories that define and describe the important components of the
work being completed, critiqued, or assessed. Each category contains a gradation of levels
of completion or competence with a score assigned to each level and a clear description of
what criteria need to be met to attain the score at each level.
Salience: a striking point or feature.
Stakeholder: Anyone who has a vested interest in the outcome of the program/project.
Summative assessment: assessment that is done at the conclusion of a course or some larger
instructional period (e.g., at the end of the program). The purpose is to determine success
or to what extent the program/project/course met its goals.
Third party: person(s) other than those directly involved in the educational process (e.g.,
employers, parents, consultants)
Triangulate (triangulation): The use of a combination of assessment methods in a study. An
example of triangulation would be an assessment that incorporated surveys, interviews,
and observations.
Topology: Mapping of the relationships among subjects.
Utility: usefulness of assessment results.
Variable (variability): Observable characteristics that vary among individuals' responses.
Validity: Validity refers to the degree to which a study accurately reflects or assesses the specific
concept that the researcher is attempting to measure. Validity has three components:
relevance ‐ the option measures your educational objective as directly as possible
accuracy ‐ the option measures your educational objective as precisely as possible
utility ‐ the option provides formative and summative results with clear implications for
educational program evaluation and improvement
Written Surveys/Questionnaires 1
Definition: Asking individuals to share their perceptions about the curricular/co‐curricular areas of interest—e.g., their own or others' skills/attitudes/behavior, or program/course qualities and attributes.
Advantages:
- Typically yield the perspective that students, alumni, the public, etc., have of the program that
may lead to changes especially beneficial to improving the program.
- Can cover a broad range of areas of interest within a brief period of time.
- Results tend to be more easily understood by lay persons.
- Can cover areas of interest, which might be difficult or costly to assess more directly.
- Can provide accessibility to individuals who otherwise would be difficult to include in
assessment efforts (e.g., alumni, parents, employers).
When ‘third‐parties’ are completing the survey/questionnaire there are additional advantages,
as follows:
- Can provide unique stakeholder input, valuable in its own right (especially employers and
alumni). How is the program serving their purposes?
- Offer different perspectives, presumably less biased than either student or faculty.
- Can increase both internal validity (through “convergent validity”/”triangulation” with
other data) and external validity.
- Convey a sense of importance regarding the opinions of stakeholder groups.
Disadvantages:
- Results tend to be highly dependent on wording of items, salience of survey or questionnaire,
and organization of instrument. Thus, good surveys and questionnaires are more difficult to
construct than they appear.
- Frequently rely on volunteer samples, which can be biased.
- Mail surveys tend to yield low response rates.
- Require careful organization in order to facilitate data analysis via computer for large samples.
- Commercially prepared surveys tend not to be entirely relevant to an individual institution and
its students.
- Forced response choices (forced‐choice) may not provide opportunities for respondents to
express their true opinions.
- Results reflect the perceptions that individuals are willing to report and thus tend to consist of
indirect data.
- Locally developed instrument may not provide for externality of results.
85
Ways to Reduce Disadvantages:
- Use only carefully constructed instruments that have been reviewed by survey experts.
- Include open‐ended, respondent worded items along with forced‐choice.
- If random sampling or surveying of the entire target population is not possible, obtain the
maximum sample size possible and follow‐up with non‐respondents (preferably in person or by
phone).
- If commercially prepared surveys are used, add locally developed items of relevance to the
program.
- If locally developed surveys are used, attempt to include at least some externally‐referenced
items (e.g., from surveys for which national data are available).
- Word reports cautiously to reflect the fact that results represent perceptions and opinions
respondents are willing to share publicly.
- Use pilot or “try out” samples in local development of instruments and request formative
feedback from respondents on content clarity, sensitivity, and format.
- Cross‐validate results through other sources of data through triangulation.
Bottom Lines:
A relatively inexpensive way to collect data on important evaluative topics from a large
number of respondents. Must always be treated cautiously, however, since results only
reflect what subjects are willing to report about their perception of their attitudes and/or
behaviors.
86
Exit and Other Interviews 2
Definition: Asking individuals to share their perceptions of their own attitudes and/or behaviors
or those of others. Evaluating student reports of their attitudes and/or behaviors in a face‐to‐face
dialogue.
Advantages:
Student interviews tend to have most of the attributes of surveys and questionnaires with the
exception of requiring direct contact, which may limit accessibility to certain populations. Exit
interviews provide the following advantages:
- Allow for more individualized questions and follow‐up probes/questions based on the
responses of interviewees.
- Provide immediate feedback to interviewer.
- Include same observational and formative advantages as oral examinations.
- Frequently yield benefits beyond data collection that come from opportunities to interact with
students and other groups.
- Can include a greater variety of items than is possible on surveys and questionnaires, including
those that provide more direct measures of learning and development.
When ‘third‐parties’ are making the reports there are additional advantages, as follows:
- Can provide unique stakeholder/constituent input, valuable in its own right (especially
employers and alumni). How is the program/course serving the purposes of the
stakeholder group?
- Offer different perspectives, presumably less biased than those of either students or faculty.
- Can increase both internal validity (through “convergent validity”/”triangulation” with
other data) and external validity (by adding more “natural” perspective).
Disadvantages:
- Requires direct contact, which may be difficult to arrange.
- May be intimidating to interviewees, thus biasing results in the positive direction.
- Results tend to be highly dependent on wording of items and the manner in which interviews
are conducted.
- Time consuming, especially if large numbers of persons are to be interviewed.
87
Ways to Reduce Disadvantages:
- Conduct pilot testing of interview questions and process and request feedback from
interviewee to improve the interview process.
- Utilize focus groups when individual interviewing is not possible or is too costly.
Bottom Lines:
Interviews provide opportunities to cover a broad range of content and to interact with
respondents. Opportunities to follow‐up responses can be very valuable. Direct contact
may be difficult to arrange, costly, and potentially threatening to respondents unless
carefully planned.
88
Commercial, Norm-Referenced, Standardized Exams 3
Definition: Group administered mostly or entirely multiple‐choice, “objective” tests in one or more
curricular areas. Scores are based on comparison with a reference or norm group. Typically must
be purchased from a private vendor.
Target of Method: Used primarily on students in individual programs, courses or for a particular
student cohort.
Advantages:
- Can be adopted and implemented quickly.
- Reduce/eliminate faculty time demands in instrument development and grading (i.e., relatively
low “frontloading” and “backloading” effort).
- Objective scoring.
- Provide for externality of measurement (i.e., external validity is the degree to which the
conclusions in your study would hold for other persons in other places and at other times—the
ability to generalize the results beyond the original test group)
- Provide norm group(s) comparison often required by mandates outside the program/
institution (e.g., accreditation agency, state or federal regulations).
- May be beneficial or required in instances where state or national standards exist for the
discipline or profession.
- Very valuable for benchmarking and cross‐institutional comparison studies.
Disadvantages:
- May limit what is measured.
- Eliminates the process of learning and clarification of goals and objectives typically associated
with local development of measurement instruments.
- Unlikely to completely measure or assess the specific objectives and outcomes of a program,
department, or institution.
- “Relative standing” (i.e., how student performance compares with others) results tend to be less
meaningful than criterion‐referenced (i.e., what students know or can do without comparison
to others) results for program/student evaluation purposes.
- Norm‐referenced data are dependent on the institutions in the comparison group(s) and the methods
of selecting students to be tested. (Caution: unlike many norm‐referenced tests such as those
measuring intelligence, present norm‐referenced tests in higher education do not utilize, for
the most part, randomly selected or well stratified national samples.)
- Group administered multiple‐choice tests always include a potentially high degree of error,
largely uncorrectable by “guessing correction” formulae (which lowers validity).
- Results unlikely to have direct implications for program improvement or individual student
progress.
- Results highly susceptible to misinterpretation/misuse both within and outside the institution.
- Someone must pay for obtaining these examinations; either the student or program.
- If used repeatedly, there is a concern that faculty may teach to the exam as is done with certain
AP high school courses.
89
Ways to Reduce Disadvantages:
- Choose the test carefully, and only after faculty have reviewed available instruments and
determined a satisfactory degree of match between the test and the learning outcomes of the
curriculum.
- Request and review technical data, especially reliability and validity data and information on
normative sample from test publishers.
- Utilize on‐campus measurement experts to review reports of test results and create more
customized summary reports for the institution/program, faculty, etc.
- Whenever possible, choose tests that also provide criterion‐referenced results
- Assure that such tests are only one aspect of a multi‐method approach in which no firm
conclusions based on norm‐referenced data are reached without validation from other
sources (triangulation).
Bottom Lines:
Relatively quick and easy, but useful mostly where group‐level performance and external
comparisons of results are required. Not as useful for individual student or program
evaluation. May be not only ideal but often the only alternative for benchmarking
studies.
90
Locally Developed Exams 4
Definition: Objective and/or subjective assessments designed by faculty in the program or course
sequence being evaluated.
Advantages:
- Content and style can be geared to specific outcomes, objectives, and student characteristics of
the program, curriculum, etc.
- Specific indicators for performance can be established in relationship to curriculum.
- Process of development can lead to clarification/crystallization of what is important in the
process/content of student learning.
- Local scoring by program faculty can provide relatively rapid feedback.
- Greater faculty/institutional control over interpretation and use of results.
- More direct implication of results for program improvements.
Disadvantages:
- Require considerable leadership/coordination, especially during the various phases of
development.
- Cannot be used for benchmarking, or cross‐institutional comparisons.
- Costly in terms of time and effort (more “frontloaded” effort for objective assessments; more
“backloaded” effort for subjective assessments).
- May not provide for externality.
Bottom Lines:
Most useful for individual coursework or program evaluation, with careful adherence to
assessment principles. Must be supplemented for external validity.
91
FOCUS GROUPS** 5
Definition:
Typically conducted with 7‐12 individuals who share certain characteristics relevant to a particular
research or evaluation question. Group discussions are conducted by a
trained moderator with participants (several times, if possible) to identify trends/patterns in
perceptions. Moderator’s purpose is to provide direction and set the tone for the group discussion,
encourage active participation from all group members, and manage time. Moderator must not
allow own biases to enter, verbally or nonverbally. Careful and systematic coding and analysis of
the discussions provides information that can be used to evaluate and/or improve the desired
outcome.
Advantages:
- Useful to gather ideas, details, new insights and to improve question design.
- Helpful in the design of surveys.
- Can be used to get more in‐depth information on issues identified by a survey.
- Can inform the interpretation of results from mail or telephone surveys.
- Can be used in conjunction with quantitative studies to confirm/broaden one’s understanding
of an issue.
- Interaction among focus group participants often leads to new insights.
- Allows the moderator to probe and explore unanticipated issues.
Disadvantages:
- Not suited for generalizations about population being studied.
- Not a substitute for systematic evaluation procedures.
- Moderators require training.
- Differences in the responses between/among groups can be troublesome.
- Groups can be difficult to assemble.
- Moderator has less control than in individual interviews.
- Data are complex to code and analyze.
Example of Applications:
- Focus groups can be used as a follow‐up to survey data. In cases where the results of a survey
do not meet the expected standard on a particular outcome, a focus group of participants who
are representative of the population surveyed (e.g., students, alumni, females) could be held to
further investigate the results.
- Focus groups can be used to get input from alumni or business partners on the strengths and
weaknesses in the knowledge and/or skills of graduates. Focus groups are a particularly
helpful tool to use to “triangulate” or validate the results from other assessment methods.
92
Bottom Lines:
Focus groups are a quick and, if done locally, inexpensive method of gathering information.
They should be conducted by someone who has training and experience in conducting
focus groups and in analyzing focus group data. They are very useful for triangulation to
support other assessment methods, but they are not a substitute for systematic evaluation
procedures. Focus groups should meet the same rigor as other assessment methods and
should be developed and analyzed according to sound qualitative practices.
93
Portfolios 6
Definition: Collections of multiple student work samples usually compiled over time and scored
using rubrics. The design of a portfolio is dependent upon how the scoring results are going to be
used.
Advantages:
- Can be used to view learning and development longitudinally (e.g. samples of student writing
over time can be collected), which is a useful perspective.
- Multiple components of a curriculum can be assessed (e.g., writing, critical thinking, research
skills) at the same time.
- The process of reviewing and scoring portfolios provides an excellent opportunity for faculty
exchange and development, discussion of curriculum objectives and outcomes, review of
scoring criteria, and program feedback.
- Greater faculty control over interpretation and use of results.
- Results are more likely to be meaningful at all levels (i.e., the individual student, program, or
institution) and can be used for diagnostic/prescriptive purposes as well.
- Avoids or minimizes "test anxiety" and other problems associated with "one‐shot" assessments.
- Increases the "power" of maximum performance measures over more artificial or restrictive
"speed" measures such as tests or in‐class samples.
- Increases student participation (e.g., selection, revision, evaluation) in the assessment process.
Disadvantages:
- Can be costly in terms of evaluator time and effort.
- Management of the collection and scoring process, including the establishment of reliable
and valid scoring rubrics, is likely to be challenging.
- May not provide for externality.
- If samples to be included have been previously submitted for course grades, faculty may be
concerned that a hidden agenda of the process is to validate their grading.
- Security concerns may arise as to whether submitted samples are the students’ own work, or
adhere to other measurement criteria.
Ways to Reduce Disadvantages:
- Consider having portfolios submitted as part of a course requirement, especially a “capstone
course” at the end of a program.
- Investigate the use of electronic portfolios as a means to increase process efficiency.
- Utilize portfolios from representative samples of students rather than having all students
participate (this approach may save considerable time, effort, and expense but be problematic
in other ways).
- Have more than one rater for each portfolio; establish inter‐rater reliability through piloting
designed to fine‐tune rating criteria (see the agreement sketch following this list).
- Educate the raters about the process.
- Recognize that portfolios in which samples are selected by the students are likely to represent
their best work.
- Cross‐validate portfolio products with more controlled student work samples (e.g., in‐class
tests and reports) for increased validity and security.
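Where piloting is used to establish inter-rater reliability, one common way to summarize agreement between two raters is percent agreement plus Cohen's kappa. The sketch below is an illustration only — the rubric levels and rater scores are hypothetical, and the notebook does not prescribe a particular statistic:

from collections import Counter

def percent_agreement(rater1, rater2):
    """Fraction of portfolios on which the two raters assigned the same rubric level."""
    return sum(a == b for a, b in zip(rater1, rater2)) / len(rater1)

def cohens_kappa(rater1, rater2):
    """Agreement between two raters, corrected for agreement expected by chance."""
    n = len(rater1)
    observed = percent_agreement(rater1, rater2)
    counts1, counts2 = Counter(rater1), Counter(rater2)
    # Chance agreement: probability that both raters independently pick the same level.
    expected = sum((counts1[lvl] / n) * (counts2[lvl] / n) for lvl in set(rater1) | set(rater2))
    return (observed - expected) / (1 - expected)

# Hypothetical rubric levels (1=Unsatisfactory ... 4=Exemplary) for ten pilot portfolios.
rater_a = [3, 4, 2, 3, 3, 1, 4, 2, 3, 4]
rater_b = [3, 4, 2, 2, 3, 1, 4, 3, 3, 4]
print(f"Percent agreement: {percent_agreement(rater_a, rater_b):.2f}")   # 0.80
print(f"Cohen's kappa:     {cohens_kappa(rater_a, rater_b):.2f}")        # 0.71

Low agreement in a pilot usually signals that the rubric descriptors need sharpening before full-scale scoring.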
Bottom Lines:
Portfolios are a potentially valuable option adding important longitudinal and “qualitative” data, in
a more natural way. Particular care must be taken to maintain validity. Especially good for multiple‐
learning outcomes assessment.
94
Simulations 7
Definition: A competency based measure where a person’s abilities are measured in a situation
that approximates a “real world” setting. Simulation is primarily used when it is impractical to
observe a person performing a task in a real world situation (e.g., on the job).
Advantages:
- Better means of evaluating depth and breadth of student skill development than tests or other
performance‐based measures (internal validity).
- More flexible; some degree of simulation can be arranged for most student target skills.
- For some skills, can be group administered, thus providing an excellent combination of quality
and economy.
Disadvantages:
- For difficult skills, the higher the quality of simulation the greater the likelihood that it will
suffer from same problems as “Performance Appraisals.”
o Ratings of student performance are typically more subjective than standardized tests.
o Sample of behavior observed or performance appraised may not be typical, especially
because of the presence of others.
o Usually requires considerable “frontloading” effort; i.e., planning and preparation.
- More expensive than traditional testing options in the short run.
Bottom Lines:
An excellent means of increasing the external and internal validity of skills assessment at
minimal long‐term costs.
95
Performance Appraisals 8
Definition: A competency‐based method whereby abilities are measured in the most direct, real‐world
approach available: the systematic measurement of overt demonstration of acquired skills.
Advantages:
- Provide a more direct measure of what has been learned (presumably in the program).
- Go beyond paper‐and‐pencil tests and most other assessment methods in assessing skills.
- Preferable to most other methods in measuring the application and generalization of learning
to specific settings, situations, etc.
- Particularly relevant to the objectives and outcomes of professional training programs and
disciplines with well defined skill development.
Disadvantages:
- Rating of student performance is typically more subjective than standardized tests.
- Requires considerable time and effort (especially front‐loading), thus being costly.
- Sample of behavior observed or performance appraised may not be typical, especially because
of the presence of observers.
Bottom Lines:
Generally the most highly valued but costly form of student outcomes assessment. However,
it is usually the most valid way to measure skill development.
96
External Examiner 9
Definition: Using an expert in the field from outside your program—such as someone from a similar
program at another institution or a capstone project client—to evaluate, or to supplement the
assessment of, your students. Information can be obtained from external evaluators using many
methods, including feedback forms (including scoring rubrics), surveys, interviews, etc.
Advantages:
- Increases impartiality, third party objectivity (external validity)
- Feedback useful for both student and program evaluation. With a knowledgeable examiner it
provides an opportunity for a valuable program consultation.
- May serve to stimulate other collaborative efforts between business partners or other
programs.
- Incorporate the use of external stakeholders.
- Students may disclose to an outsider what they might not otherwise share.
- Outsiders can “see” attributes to which insiders have grown accustomed.
- Evaluators may have skills, knowledge, or resources not otherwise available.
- Useful in conducting goal‐free evaluation (without prior expectations).
Disadvantages:
- Always some risk of a misfit between examiner’s expertise and/or expectations and program
outcomes.
- For individualized evaluations and/or large programs, can be very costly and time consuming.
- Volunteers may become “donor weary” (tired from being asked multiple times to participate).
Bottom Lines:
Best used as a supplement to your own assessment methods to enhance external validity,
but not as the primary assessment option. Other benefits can be accrued from the cross‐
fertilization that often results from using external examiners.
97
10
Archival Records
Definition: Biographical, academic, or other file data available from the college or other agencies
and institutions.
Advantages:
- Tend to be accessible, thus requiring minimal effort.
- Build upon data collection efforts that have already occurred.
- Can be cost efficient if the required data are readily retrievable in the desired format.
- Constitute non‐intrusive measurement, not requiring additional time or effort from students or
other groups.
- Very useful for longitudinal studies.
- Good way to establish a baseline for before and after comparisons.
Disadvantages:
- Especially in large institutions, may require considerable effort and coordination to determine
exactly what data are available campus‐wide and to then get that information in desired format.
- To be most helpful, datasets need to be combined. This requires an ability to download and
combine specific information from multiple sources; it may require designing a separate
database for the downloaded information (see the join sketch following this list).
- Typically the archived data are not exactly what is required, so that the evaluator must make
compromises. In some cases, it may be a stretch to use such data as surrogates for the desired
measures.
- If individual records are included, protection of rights and confidentiality must be assured;
where applicable, Institutional Review Board approval should be obtained if there is doubt.
- Availability of data may discourage the development of other, more appropriate measures or
data sources.
- May encourage attempts to “find ways to use data” rather than assessment related to specific
outcomes and objectives.
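As a minimal sketch of the dataset-combining step noted above (the column names and values are hypothetical; nothing here is drawn from a specific campus system), the snippet joins registrar records with alumni survey responses on a shared student identifier:

import pandas as pd

# Hypothetical extracts from two campus systems, keyed by a shared student ID.
registrar = pd.DataFrame({
    "student_id": [101, 102, 103, 104],
    "major": ["ME", "ME", "CS", "ME"],
    "gpa": [3.2, 2.8, 3.6, 3.1],
})
alumni_survey = pd.DataFrame({
    "student_id": [101, 103, 104],
    "employed_in_field": [True, True, False],
})

# A left join keeps every registrar record; unmatched survey fields become NaN,
# which makes non-response visible rather than silently dropping students.
combined = registrar.merge(alumni_survey, on="student_id", how="left")
print(combined)

Any individually identifiable records combined this way still need the confidentiality protections (and, where applicable, IRB review) noted above.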
Bottom Lines:
Can be quick, easy, and cost‐effective method, if data are available and accessible. Usually
limited data quality but integral to valuable longitudinal comparisons. Should be a standard
component of all assessment programs.
98
Oral Examination 11
(This method may be inconsistent with campus policies that prohibit the use of oral examinations.)
Definition: An assessment of student knowledge levels through a face‐to‐face dialogue between the
student and examiner—usually faculty.
Advantages:
- Content and style can be geared to specific objectives and outcomes, and student characteristics of the
institution, program, curriculum, etc.
- Specific indicators for performance can be established in relationship to course/curriculum.
- Process of development can lead to clarification/crystallization of what is important in the
process/content of student learning.
- Local scoring by faculty can provide immediate feedback related to material considered meaningful.
- Greater faculty/institutional control over interpretation and use of results.
- More direct implication of results for program improvements.
- Allows measurement of student knowledge in considerably greater depth and breadth through follow‐up
questions, probes, encouragement of detailed clarifications, etc. (increased internal validity and
formative evaluation of student abilities)
- Non‐verbal (paralinguistic and visual) cues aid interpretation of student responses.
- Dialogue format decreases miscommunications and misunderstandings, in both questions and answers.
- Rapport‐gaining techniques can reduce “test anxiety,” helps focus and maintain maximum student
attention and effort.
- Dramatically increases “formative evaluation” of student learning; i.e., clues as to how and why they
reached their answers.
- Provides process evaluation of student thinking and speaking skills, along with knowledge content.
Disadvantages:
- Requires considerable leadership/coordination, especially during the various phases of development.
- Can be difficult to document by note‐taking and providing student feedback with a grade.
- Costly in terms of time and effort (more “frontload” effort for objective; more “backload” effort for
subjective).
- May not provide for externality (degree of objectivity associated with review, comparisons, etc. external
to the program or institution).
- Requires considerably more faculty time, since oral exams must be conducted one‐to‐one, or, at most,
with very small groups of students.
- Can be inhibiting on student responsiveness due to intimidation, face‐to‐face pressures, oral (versus
written) mode, etc. (May have similar effects on some faculty!)
- Inconsistencies of administration and probing across students reduce standardization and
generalizability of results (potentially lower external validity).
Bottom Lines:
Oral exams can provide excellent results, but usually only with significant – perhaps prohibitive –
additional cost. Definitely worth utilizing in programs with small numbers of students, and for the
highest‐priority objectives in any program, provided local testing policies do not prohibit this
method.
99
SURVEY DEVELOPMENT
131
ENEMIES OF EFFECTIVE
SURVEYS
• Lack of audience knowledge
• Other
132
100
SURVEY CREATION PROCESS
State Objectives → Question (Item) Creation → Optimize User Experience → Pilot the Survey →
Administer the Survey → Analyze the Data → Report Findings
133
BEST PRACTICE #1
Start with objectives, then develop questions
(process step: State Objectives)
134
101
BEST PRACTICE #1
START WITH OBJECTIVES, THEN DEVELOP QUESTIONS
Ask Yourself
• What do we need to know?
• Make a list of specific informational needs
• Can we develop questions that will provide the
information needed?
• Who will provide the data (target audience)?
• How will the data be used?
• Is approval from the Institutional Review Board
necessary?
135
BEST PRACTICE #2
Use concise, common language for questions
and options
(process steps: State Objectives → Question (Item) Creation)
136
102
BEST PRACTICE #2
QUESTIONS TO ASK
• What questions do you need to ask
related to your objectives?
• How will the survey be administered?
• When will the survey be administered?
• What is your budget and timeline?
137
Example:
138
103
Alumni Survey - Objectives
Program Objectives:
1. Graduates will engage in engineering design and apply engineering theory.
2. Graduates will work on project-based teams and communicate effectively with various audiences in written,
oral and graphical forms.
3. Graduates will lead effectively and perform with high ethical standards
4. Graduates will continue to learn.
Etc.
104
BEST PRACTICE #2
Be Clear, Targeted, and Concise
Question (Item) Creation: items should be understandable, targeted, relevant, and painless.
141
BEST PRACTICE #2
Be Clear, Targeted, and Concise
Question (Item) Creation: focus, clarity, brevity.
142
105
DEVELOPING SURVEY ITEMS
• Avoid double negatives:
• “If you have not already been turned
down for positions you’ve applied for,
please skip to item 18.”
• Try: “If you already have a job, skip to
item 18.”
• Avoid jargon and acronyms that might not
be understood by everyone.
143
SURVEY ITEMS
• Avoid asking leading questions:
• Why do you think the laboratories need
to be improved?
• Questions must ask for information that
the respondent can answer.
• First-year students cannot comment on
graduation check-out procedures
144
106
COURSE SURVEY
For each question, indicate your opinion by choosing one of the
following:
(1) Strongly Agree
(2) Agree
(3) Undecided
(4) Disagree
(5) Strongly Disagree
COURSE EVALUATION
Please answer the following on a scale of
1=Least/Worst to 5=Most/Best
1 2 3 4 5
1. Students' level of preparedness O O O O O
2. Adequacy of classroom O O O O O
3. etc. O O O O O
--------------------------------------------------------------------------------------
Please indicate whether or not the following abilities formed an
important element of your course, from 1=Not at all to 5=Very
important
1 2 3 4 5
1. Apply mathematics, science and engineering principles O O O O O
2. Design and conduct experiments and interpret data O O O O O
3. etc. O O O O O
146
107
ALUMNI SURVEY
Rank how well your education prepared you
with speaking and writing skills for your current
position.
_____ Excellent
_____ Good
_____ Fair
_____ Poor
_____ Very Poor
_____ Not Applicable
147
[Chart: two example response scales, Strongly Disagree – Disagree – Neutral – Agree – Strongly Agree]
148
108
EMPLOYER SURVEY
149
109
BEST PRACTICE #3
Survey Structure: Optimize the user’s experience
TYPES OF QUESTIONS
• Open-Ended: allows respondents to answer in their own words
• Matrix and Rating: surveys frequency
151
BEST PRACTICE #4
Pilot the Survey: Test, test, test
Usability Testing: time, readability, spelling, accessibility
Functional Testing: logic, actions, behavior
Review test data against purpose
152
110
PILOT TEST THE SURVEY
• Find 5-10 people from your target group
• If you can’t get people from your exact target group
then find people who are as close as possible.
• Try to get a range of different people who are
representative of your target group.
• Ask them to complete the survey the same way
that it will be completed by the target population
(e.g., online, phone interview, paper)
• Ask them to respond to the evaluation questions
• Revise the survey
153
BEST PRACTICE #5
Administering the Survey
154
111
BEST PRACTICE #5
Maximize Response Rates: Introduce your survey
155
Dear :
Thank you for agreeing to help us prepare for our accreditation visit
by ABET in the Fall of 2016, an intensive on-campus review and
assessment which occurs every six years. It is critically important that
we receive a positive review of our program. A positive review will
ensure the success of our engineering program, help attract the best
and brightest students to our program, and provide you, the
stakeholders, with the best possible engineering graduates.
John J Smith,
Professor and Chair
156
112
BEST PRACTICE #6
Analyze the data
157
BEST PRACTICE #7
Reporting Your Findings: Know your audience
158
113
Application:
159
160
114
ASSESSMENT METHODS
WRAP-UP
161
VALIDITY
1. Relevance - the assessment option measures
the student outcome as directly as possible
2. Accuracy - the assessment option measures
the student outcome as precisely as possible
3. Utility - the assessment option provides formative and
summative results with clear implications for program
evaluation and improvement
162
115
“BOTTOM LINES”
All assessment options have advantages
and disadvantages
“Ideal” method means those that are best
fit between program needs, satisfactory
validity, and affordability (time, effort, and
money)
Crucial to use multi-method/multi-source
approach to maximize validity and reduce
bias of any one approach
163
TRIANGULATION
[Diagram: multiple assessment methods (e.g., portfolios) converging on the "truth"]
164
116
TRIANGULATION*
[Diagram: multiple assessment methods (e.g., portfolios) converging on the "truth"]
165
ASSESSMENT METHOD
TRUISMS
There will always be more than one way to
measure any student outcome
No single method is good for measuring a wide
variety of different student abilities
There is generally an inverse relationship
between the quality of measurement methods
and their expediency
It is important to pilot test to see if a method is
appropriate for your program
166
117
USE OF TECHNOLOGY
• Harness technology to enhance the
efficiency and effectiveness of the
assessment process.
• What do you need to think about when making
decisions about the use of technology?
• How would we use technology to increase
the effectiveness of what we are now doing?
• What are the tradeoffs?
• Cost/Benefit, Training, Maintenance,
Quality of data/information
167
118
Efficient
Processes
DEVELOPING EFFICIENT
PROCESSES
168
• Why?
• Understand the focus of program
assessment
169
119
Taxonomy of Approaches to Assessment
Two dimensions organize approaches to assessment: the level of assessment (Who? — individual vs.
group) and the purpose (Learning/Teaching, i.e., formative, vs. Accountability, i.e., summative).
Across the matrix, what is assessed includes students' knowledge, skills, attitudes, behaviors, and values.
• Individual / Formative: Competency-Based Instruction; Assessment-Based Curriculum; Individual
Performance Tests
• Individual / Summative ("Gatekeeping"): Admissions Tests; Rising Junior Exams; Placement;
Comprehensive Exams; Advanced Placement Tests; Certification Exams; Vocational Preference Tests;
Other Diagnostic Tests
• Group / Formative (Program Enhancement): individual assessment results may be aggregated to
serve program evaluation needs
• Group / Summative (Campus and Program Evaluation): Program Reviews; Retention Studies;
Alumni Studies; "Value-added" Studies
171
120
SAMPLING
• For program assessment, sampling is
acceptable and even desirable for
programs of sufficient size.
• Sample is representative of all students
http://www.surveysystem.com/sscalc.htm
172
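As a worked illustration of the kind of calculation behind a sample-size calculator such as the one linked above (a sketch only; the cohort size, confidence level, and margin of error below are hypothetical), the snippet computes a recommended sample size with the standard finite-population correction:

import math

def sample_size(population, margin_of_error=0.05, z=1.96, p=0.5):
    """Sample size for estimating a proportion, with finite-population correction."""
    # Infinite-population sample size for the chosen confidence level (z) and margin of error.
    n0 = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    # Correct for the finite size of the program's cohort.
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# Hypothetical cohort of 250 students, 95% confidence, +/-5% margin of error.
print(sample_size(250))  # -> 152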
121
Assessment process flow: Define Outcomes and Map Curriculum → Collect Data → Evaluate Results
and Design Improvements → Implement Improvements and Collect Data
174
Example cycle plan for the outcome "An ability to communicate effectively in oral, written, graphical,
and visual forms": A = Assess; E = Evaluate; C = Change (if necessary)
175
122
[Planning table: STUDENT OUTCOMES scheduled across academic years 15-16 through 20-21]
ESTABLISH AN ANNUAL CYCLE
[Seasonal cycle diagram — Summer: assessment committee prepares report for department chair and
program faculty; Fall/Winter/Spring: program acts on recommendations of the faculty]
177
123
Application:
178
[Curriculum mapping exercise: courses such as MA111, PH113, CM201, HS202, RH330, ME123,
ME430, ME461, and HSxxx mapped to the outcome "...engineering tools necessary for
engineering practice"]
179
124
[Curriculum map: program courses (MA111, MA112, MA221, MA223, PH112, PH113, CM201, CM202,
EM103, EM104, EM120, EM203, ES202, ES205, ECE207, HS202, HSxxx, RH131, RH330, ME123,
ME303, ME311, ME317, ME323, ME406, ME430, ME461, ME470, MExxx, MEelec) mapped to the
outcomes listed below]
Outcomes:
(a) apply knowledge of mathematics, science, and engineering
(c) an ability to design a system, component, or process
(d) function on multidisciplinary teams
(g) ability to communicate effectively
(i) recognition of the need for, and an ability to engage in, life-long learning
(j) knowledge of contemporary issues
180
125
126
For each test/exam item and homework problem, faculty map to outcomes and enter data for
each student on each item/assignment. Acceptable performance level =75%
OUTCOME
COURSE A B C D E F G H I J K
100 77 --- 81 --- --- 90 78 --- 76 82 91
201 75 78 --- 82 81 --- 75 --- --- 75 75
222 79 79 79 --- 79 79 --- 79 --- --- 79
252 --- 82 82 82 82 82 --- 80 --- 82 ---
299 91 --- 87 --- 91 83 --- 76 76 --- 72
301 77 --- 81 --- --- 90 78 --- 74 82 ---
312 81 76 --- 88 83 --- 90 76 --- --- 78
316 --- 73 76 --- 84 82 --- 87 73 77 75
318 76 70 --- 75 81 --- 75 --- 76 76 ---
322 74 77 74 --- 81 88 --- 77 74 --- 89
399 --- --- 77 --- --- --- --- --- 74 --- ---
415 84 82 77 --- 82 77 86 77 --- --- 91
499 --- 80 --- 92 81 76 92 --- 75 92 ---
Average 79.3 77.4 79.3 83.8 82.5 83.0 82.0 78.86 74.8 80.9 81.3
Three different levels of achievement:
• Exceeds Expectations (EE): more than 80% of the students have achieved an average score of 75% or more;
• Meets Expectations (ME): between 70% and 80% of the students have achieved an average score of 75% or more;
• Does Not Meet Expectations (DNME): less than 70% of the students have achieved an average score of 75% or more.
181
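To make the arithmetic behind the three achievement levels concrete (a sketch only; the per-student averages and outcome label are hypothetical, and the cut points simply restate the 75% acceptable-performance level and the EE/ME/DNME definitions above), one might compute:

def classify_outcome(student_averages, score_threshold=75.0):
    """Classify attainment from the share of students averaging at or above the threshold."""
    share = sum(avg >= score_threshold for avg in student_averages) / len(student_averages)
    if share > 0.80:
        level = "Exceeds Expectations (EE)"
    elif share >= 0.70:
        level = "Meets Expectations (ME)"
    else:
        level = "Does Not Meet Expectations (DNME)"
    return share, level

# Hypothetical per-student averages on the items mapped to one outcome in one course.
outcome_a_scores = [82, 91, 74, 77, 88, 69, 79, 84, 76, 73]
share, level = classify_outcome(outcome_a_scores)
print(f"{share:.0%} of students at or above 75% -> {level}")  # 70% -> Meets Expectations (ME)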
EVALUATION
182
EVALUATION
• Assessment is not a controlled experiment
• This is a data-informed, not data-driven process
• Take advantage of faculty wisdom and insight
• NOT just anecdotal, but includes the human element
as well.
• Data tell you WHAT
• Wisdom tells you WHY
• Why is this student group different?
• Improvements should be linked to principles of student
learning
• Focus mainly on student learning
• Evaluation = data + wisdom
• Data are necessary but not sufficient
183
127
IMPROVEMENT DECISIONS
EXAMPLE
[Flow diagram: Direct assessment (e.g., rubrics applied to design problem) → Information → Analysis:
weigh actions, consider resources (time, $) → Improvement Action; feedback loops for "hold until
next cycle; results not conclusive" and "too expensive"]
184
Application:
Evaluation
185
128
EVALUATION
20 minutes
1. The following scenario describes the
results of an assessment process.
2. You and your team have been asked to
consult with the program about their
assessment process.
3. Review the scenario and answer the
evaluation questions posed. Be prepared
to report out.
186
129
Application: Evaluation
A technical program at ABC University has 53 students in the cohort that has just
finished participating in internships as a part of their graduation requirements. The
program’s assessment coordinator has designed a simple internship survey which asks
supervisors to evaluate student performance on a Likert scale (5=excellent; 4=very
good; 3=good; 2=needs improvement; 1=poor) in the areas of problem solving,
teamwork, continuous improvement, and the ability to communicate. This is the only
summative assessment data that the program collects related to these skills. The
program has set “3” as a target for performance.
The data from the survey indicate unsatisfactory performance in communication skills
(2.8), and continuous improvement (2.2). Your team has been asked to provide
consultation services to the program to recommend what the next steps should be.
The program has a curriculum map that identifies multiple courses where students are
expected to demonstrate all of the outcomes that are the focus of the survey (see
below). In order to understand why students are not performing at the desired levels
and how to approach improvement strategies, what should be the program’s next steps
to improve student learning?
Curriculum map:
110 201 210 215 220 240 260 301 310 313 330 350 401 490 492
Writing X X X X X X
Teamwork X X X X
Problem solving X X X X X X X X X X
Continuous Improvement X X
130
CRITERION 4
CONTINUOUS IMPROVEMENT
187
SELF-STUDY GUIDELINES
CRITERION 4. CONTINUOUS IMPROVEMENT
Student Outcomes: It is recommended that this section include (a table
may be used to present this information):
188
131
REPORTING THE RESULTS
189
190
132
Student Outcome: Students will demonstrate the ability to work effectively in teams.
For each performance indicator below — Educational strategies (where taught): 1011, 2001, 2060, 3001,
4092. Method(s) of assessment and where summative data are collected: Peer Evaluations (4092),
Faculty Evaluations (4092), Senior Surveys (on-line survey). Where formative data are collected: 2001
(yr 2 of cycle), 3001 (yr 3 of cycle). Assessment cycle: 3 yrs. Time of data collection: 2012, 2015.
Threshold for performance: 80%.
Performance Indicators: 1. Produces research information for the team; 2. Demonstrates understanding
of team roles when assigned; 3. Shares in the work of the team; 4. Demonstrates good listening skills.
Results Summary (direct measures) 2012: A sample of 56 students (52% of 2009 cohort) were assessed for the summative assessment. This represents 2 of
4 sections of 4092 (which is the second semester of a two-semester team experience.) The percent of the sample that demonstrated each indicator at
satisfactory or exemplary were as follows: Indicator 1 - 72%; Indicator 2 - 65%; Indicator 3 - 62%; Indicator 4 - 89%
Actions 2013: The faculty who integrated teaming into their courses met in the fall of 2010 and 2011 to review the formative data and make recommendations
for changes during those academic years. Based on the analysis of the summative results, the department asked faculty to provide the teaming scoring rubrics
to students with the course assignments where the students were provided opportunities to demonstrate their teaming skills as defined by the outcomes. A sub-
committee of the department Curriculum Committee met to review the outcomes. It was decided not to make any changes at this time. Faculty decided that
they would review their assignments to be sure that students were given adequate opportunities to demonstrate the performance identified for teaming. Faculty
also agreed to make students' performance on the performance indicators a part of their grade for the activity. The Teaching/Learning Center will also provide a
seminar for faculty on how to integrate effective teaming into the classroom.
Second-Cycle Results Summary 2015: A sample of 59 students (51% of cohort) were assessed for the summative assessment. This represents 2 of 4
sections of 4092 (which is the second semester of a two –semester team experience.) Based on changes made, the following improvements were seen:
Indicator 1 – +12% (84%); Indicator 2 - +7% (72%); Indicator 3 - +13% (75%); Indicator 4 - +2% (91%).
Actions 2016: The faculty who integrated teaming into their courses met in the fall of 2013 and 2014 to review the formative data and make recommendations
for changes during those academic years. Although progress was made on all indicators, the Curriculum Committee recommended that the department take
another look at all the indicators related to teaming. The Teaching/Learning Center was asked to provide the department faculty some feedback on the
indicators and also provide other examples of teaming indicators. This will be one of the issues that will be discussed at the Department retreat for possible
revisions for the 2017 academic year.
133
134
Student Outcome: Students can work effectively in teams
135
PEER EVALUATIONS
CAPSTONE, 2015
[Stacked bar chart: 347 responses rated Exemplary / Satisfactory / Developing / Unsatisfactory on four
indicators — produces research info; understanding team roles when assigned; shares in the work of
the team; demonstrates good listening skills]
194
[Bar chart: percent of responses by indicator — produces research info; understanding team roles
when assigned; shares in the work of the team; demonstrates good listening skills]
195
136
SENIOR SURVEY ITEM
EXPERIENCE IN COMPUTER
SCIENCE
2015, N = 108; Threshold = 80%
Item: "My experience in the Computer Science program gives me confidence that I will be able to work
with others effectively on project teams."
Results: Strongly Agree/Agree 89%; Don't Know 5%; Disagree/Strongly Disagree 5%
196
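A minimal sketch of how such a survey item can be rolled up against the threshold (the response counts and numeric coding below are hypothetical, not the program's actual data):

from collections import Counter

# Hypothetical responses to one senior-survey item:
# 5=Strongly Agree, 4=Agree, 2=Disagree, 1=Strongly Disagree, 0=Don't Know.
responses = [5] * 35 + [4] * 55 + [0] * 5 + [2] * 3 + [1] * 2
counts = Counter(responses)
n = len(responses)

agree = (counts[5] + counts[4]) / n
dont_know = counts[0] / n
disagree = (counts[2] + counts[1]) / n

threshold = 0.80
status = "met" if agree >= threshold else "not met"
print(f"Strongly Agree/Agree: {agree:.0%} (threshold {threshold:.0%} {status})")
print(f"Don't Know: {dont_know:.0%}; Disagree/Strongly Disagree: {disagree:.0%}")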
FORMATIVE ASSESSMENT
U=Unsatisfactory D=Developing S= Satisfactory E= Exemplary
Two formative data sets (percent of students at each rating level, U / D / S / E):
Understanding team roles when assigned: 13 / 10 / 72 / 5 and 5 / 16 / 74 / 5
Shares in the work of the team: 25 / 8 / 57 / 10 and 8 / 18 / 64 / 10
Demonstrates good listening skills: 18 / 5 / 60 / 17 and 9 / 7 / 67 / 17
197
137
FORMATIVE DATA
PEER ASSESSMENTS
Satisfactory/Exemplary Rating
Series: 2013 - Course 2001, n=378; 2014 - Course 3001, n=389. Threshold = 80%
[Bar chart across the four indicators (produces research info; understanding team roles when assigned;
shares in the work of the team; demonstrates good listening skills); values shown include 85%, 84%,
79%, 77%, 74%, and 67%]
198
SUMMATIVE DATA
CAPSTONE PEER
ASSESSMENTS
Satisfactory/Exemplary Rating; Threshold = 80%
Series: Course 2001, n=378; Course 3001, n=389; Peer Evaluation, n=347
[Bar chart across the four indicators (produces research info; understanding team roles when assigned;
shares in the work of the team; demonstrates good listening skills); values shown include 91%, 85%,
84%, 82%, 79%, 77%, 74%, and 67%]
199
138
CUMULATIVE DATA
ASSESSMENTS
Satisfactory/Exemplary Rating; Threshold = 80%
Series: Course 2001, n=378; Course 3001, n=389; Peer Evaluation, n=347; Faculty Evaluations, n=59
[Bar chart across the four indicators (produces research info; understanding team roles when assigned;
shares in the work of the team; demonstrates good listening skills); values shown include 91%, 85%,
84%, 82%, 79%, 77%, 75%, 74%, 72%, and 67%]
200
[Bar chart fragment: produces research information; understanding of team roles when assigned;
shares in the work of the team; demonstrates good listening skills]
201
139
TREND DATA
These data can be used for reporting purposes in three
areas:
• Program review: Did the changes/recommendations make any
difference? The answer to this question feeds back to improve
the program.
• Institution: Is the program being effective in documenting
student learning and improving learning over time?
• Accrediting agency: What is the evidence of student learning?
Is there a process in place that enables the program to
determine the level of student learning and the ability to
continuously improve their educational processes?
202
SETTING “THRESHOLDS”
When setting a threshold for a performance indicator, here is what you
should consider:
• Cognitive level: (e.g., is expectation at the knowledge, comprehension,
application, analysis, synthesis, evaluation level): One would anticipate that the
higher the cognitive level, the higher degree of difficulty
• Complexity of application: The more complex the application of the skill, the
more difficulty (e.g., designing a mousetrap car in a 100-level course v. a senior
design project)
• Curriculum support: The more courses that support student performance for
each indicator, the more likely it is that students will achieve the anticipated
performance.
• Student learning is cumulative over time. As students progress through the
curriculum, they are likely to apply their skills to more complex
problems.
• This means that there might be different “thresholds" for each of the performance
indicators that make up any one outcome.
203
140
Student Outcome: An understanding of professional and ethical responsibility
Performance Indicator 1: Knows code of ethics for the discipline — Educational strategies: 2001, 2060,
3001. Method(s) of assessment and where summative data are collected: Locally developed exam
(3001), Senior Surveys (on-line survey). Where formative data are collected: 2001 (yr 1 of cycle), 2060
(yr 2 of cycle). Length of assessment cycle: 3 years. Yr/Sem of summative data collection: 2012, 2015.
Threshold for performance: 80%.
Performance Indicator 2: Ability to evaluate the ethical dimensions of a problem in the discipline —
Educational strategies: 3001, 4092. Method(s) of assessment and where summative data are collected:
Case study review/rubric (4092), Senior Surveys (on-line survey). Where formative data are collected:
3001. Length of assessment cycle: 3 years. Yr/Sem of summative data collection: 2012, 2015. Threshold
for performance: 70%.
Assessment Results Summary (direct measures) 2012: Summative data were collected in 3001. For the summative assessment (end of program),
the decision was made to focus on the direct assessment of faculty developed examination as the primary assessment data for both indicators. The
assessment of indicator #1 was done in course 3001 after a review of material covered earlier in the program. Because the indicator is at the
"knowledge" level, a multiple choice/true-false exam was given to see how well the students had learned the material. For indicator #2, a case study
was chosen from http://ethics.tamu.edu/ethicscasestudies.htm and was used in the 4092 class. The scoring rubrics were completed by the faculty.
The percent of the students that demonstrated each criterion were as follows: Indicator #1 - 66%; Indicator #2 - 58%.
Evaluation and Actions 2013: The faculty who integrated ethics into their courses met in the fall of 2007 and 2008 to review the formative
data and make recommendations for changes during those academic years. The assessment results were evaluated by the faculty at a
retreat held in August of 2013. Indicator #1: Based on the analysis of the results, the faculty who were introducing and/or reinforcing the
code of ethics in their courses were asked to reinforce the importance of knowing the code of ethics for the discipline. They were also
encouraged to review the scores to see if there were common items missed and to reiterate the areas where students’ performance was
weak. Indicator #2: Faculty were asked to provide the scoring rubrics to students with the case study so they could see how they would be
evaluated. A sub-committee of the department Curriculum Committee was assigned to meet and review the performance indicators to be
sure that they were appropriate. The Advisory Committee was also asked to provide feedback. It was recommended not to make any
changes at this time. Faculty integrating ethics agreed to review their assignments to be sure that students were given adequate
opportunities to learn the codes in the context of the discipline and to make students' performance on the exam an adequate portion of
the overall grade for the unit.
Second-Cycle Results Summary (direct measures) 2015: The second cycle summative data were again taken in the 3001 for indicator #1 and 4092
for indicator #2. Based on actions taken as a result of the 2013 evaluation process, the following improvements were seen in 2015: Indicator #1 –
+10% (74%); Indicator #2 - +12% (70%).
Evaluation and Actions 2016: The faculty who integrated ethics into their courses met in the fall of 2013 and 2014 to review the formative
data and make recommendations for changes during those academic years. During the August 2016 department retreat, the faculty agreed
that adequate progress had been made on both of the indicators and no further action would be taken at this time. However, at the end of
the 2018 assessment cycle for ethics, if the trend continues upward, the committee will review whether or not the thresholds should be
raised in an effort to continually improve student performance.
141
142
An ability to design and conduct experiments, as well as to analyze and interpret data
[Table header: Performance Indicators | Educational Strategies | Method(s) of Assessment | Where
summative data are collected | Where formative data are collected | Length of assessment cycle (yrs) |
Yr/Sem of summative data collection | Threshold for Performance]
[Bar chart of results for the indicators: 1. Observes good lab practice and operates instrumentation with
ease; 2. Determines data that are appropriate to collect and selects appropriate equipment, protocols,
etc. for measuring the appropriate variables to get required data; 3. Uses appropriate tools to analyze
data and verifies and validates experimental results, including the use of statistics to account for
possible experimental error]
Display materials available at visit:
• Indicator #1, 2, 3 laboratory assignment sheets with rubrics and samples of lab reports for summative assessment
• Sample of Laboratory reports and results from 2010 formative assessments
• Copies of revised rubrics as a result of 2013 actions
• Senior Survey questions and results with faculty evaluation
• Minutes of department Laboratory sub-committee meetings where recommendations were made 2013
• Minutes of faculty retreat where actions were taken in 2010, 2013
TABLE
Outcome Performance Indicators 2009 2012 2015
Teaming Research and Gather Information 61% 72% 84%
Fulfill team roles 50% 65% 72%
Share work 58% 62% 75%
Listens 70% 89% 91%
207
143
COMMON MISTAKES
• Too many data, not enough information
• Reporting numbers or percentages without
putting them into context
• How many students in cohort
• How many students provided data
• Not describing how the data are evaluated
• Using very complex charts describing your
assessment processes
208
COMMON MISTAKES
• Discussing all outcomes/objectives at
once instead of one at a time.
• Using the terms “objectives” and
“outcomes” interchangeably.
• Referencing the outcomes/objectives by
numbers or letters that refer back to a
chart. Don’t require the reader to go back
in the self-study for the reference.
209
144
COMMON MISTAKES
MAPPING IN
SELF STUDY REPORT
Example
Program Educational
Supporting Student Outcomes
Objectives
1. a, b, c, e, k, j
2. d, g, l
3. e, f, i, j, l
4. h, i, j
210
BEST PRACTICE
MAPPING IN
SELF STUDY REPORT
Example
145
SUMMARY
212
213
146
CRITERION 4 - COMMON ISSUES
CONTINUOUS IMPROVEMENT
Assessment
• Indicators of student performance have not been
defined and/or no a priori level of student
performance has been established. (Although the criteria
do not require performance indicators or a priori levels of
performance, without these or something equivalent it may be
difficult to appropriately evaluate the extent to which student
outcomes are attained, and additional information may be needed to
determine the appropriateness of the evaluation process for
outcomes attainment.)
• The program uses only anecdotal results (versus
measured results).
214
147
CRITERION 4 - COMMON ISSUES
CONTINUOUS IMPROVEMENT
Assessment
• There is an over-reliance on student self-
assessment (e.g., surveys) as opposed to
assessment methods based on actual student
performance. As a rule, student self-assessment
of outcomes attainment is considered much less
reliable than attainment data from actual student
performance relative to each outcome.
• Assessment data are being collected for only
some outcomes.
216
148
CRITERION 4 - COMMON ISSUES
CONTINUOUS IMPROVEMENT
Results
• Program improvement plans are developed but not
implemented.
• There is no documentation of how the results of
assessment and evaluation processes are used to
determine needed program improvements.
• Results of the evaluation of student outcomes are
not used to make needed improvements to the
student outcomes.
• There is no evidence improvement efforts are being
assessed and evaluated.
218
LESSONS LEARNED?
• Cannot measure everything
• “You don’t have to be bad to get better”
• Fear factor impedes risk-taking
• Each outcome should be defined by a few well-
stated performance indicators
• Identify specific attributes required to demonstrate
achievement of the outcome
• Answer the question, “What do we look for as
evidence of outcome achievement?”
• Reflect the uniqueness of individual programs
219
149
CASE STUDY CRITIQUE
At your table: Ask someone to scribe
220
221
150
Leadership
LEADERSHIP
[Topic map: Faculty Culture; Group Dynamics; Communication; Sustainability; Change Dynamics;
Understanding Differences; Action Agendas; People; Facilitation; Resistance; Assessment Tools;
Evolving Perspective; Culture]
222
FACULTY CULTURE
223
151
CHALLENGE OF FACULTY
INVOLVEMENT
• Continuous quality improvement is a
human process
• Faculty are critical to success
• Own the student outcomes and indicators
• Evaluate results of assessment
• Identify and design areas for improvement
• Implement changes
• Assess impact
224
225
152
THERE IS NO SUCH THING AS A
“FACULTY TYPE”
[Cartoon illustration]
*Dowe, Ronald, Mary Mahony, and Marjorie Olive, “Teaching Reconnections: Assessment-Driven 226
Institutional Renewal,” NCA 2000 Annual Meeting, Chicago, Illinois, Session 174, April 1-4, 2000
227
153
Application:
228
229
154
IDENTIFYING MOTIVATORS AND
BARRIERS TO FACULTY INVOLVEMENT
• Once all of the ideas are on the flip chart, use an affinity process to
organize ideas and identify the major factors on each side. Prioritize
the top three on each side that the team feels are the most
significant. If you have a large number of items (over eight), use a
modified nominal group technique to reduce a large number of
items to a smaller list of high priority items. (10 minutes)
Modified nominal group technique:
1.Count the number of items and divide by 3. This is the number of votes each
person has (Round fractions off to the lower number). Give each team member
as many colored dots as s/he has votes.
2. Have each person use his or her colored dots to vote for the items s/he
wants to keep. Note: the group should decide if they want to allow multiple
voting (i.e., allowing one person to vote for a single item more than once).
3. List your strategies in their new prioritized order.
4. Critically discuss the top alternatives to reach consensus.
230
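A small illustration of the vote arithmetic in the modified nominal group technique above (the item names and ballots are hypothetical):

from collections import Counter

items = ["reduce data-collection workload", "shared scoring rubrics", "assessment software",
         "course-embedded assessment", "annual retreat", "peer mentoring",
         "release time", "external facilitator", "survey redesign"]

votes_per_person = len(items) // 3  # round fractions down, as described above
print(f"Each team member gets {votes_per_person} dot votes.")

# Hypothetical dot votes cast by three team members (multiple voting allowed in this example).
ballots = ["shared scoring rubrics", "shared scoring rubrics", "release time",
           "reduce data-collection workload", "course-embedded assessment", "shared scoring rubrics",
           "release time", "reduce data-collection workload", "annual retreat"]

for rank, (item, count) in enumerate(Counter(ballots).most_common(), start=1):
    print(f"{rank}. {item} ({count} votes)")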
155
IDENTIFYING MOTIVATORS AND
BARRIERS TO FACULTY INVOLVEMENT
Step 4: Team Consensus (10 minutes): Each small
group report to the full team to get their input on suggested
strategies. The full team should reach consensus and
finalize the strategies.
DECISION MATRIX
[Matrix: effort (rows) crossed with engagement (columns) to assign priority order —
Little or no effort (= 3): #7 #3 #1
Moderate effort (= 2): #8 #4 #2
Difficult (= 1): #9 #6 #5]
233
156
CHANGE DYNAMICS
234
235
157
STAGES OF CHANGE
Denial: inability to picture or understand impending change
Resistance: wide variety of behaviors designed to avoid dealing with change
Experimentation: employees begin to make new concepts, processes, or practices "their own"
Commitment: program experiences general, wide acceptance of the new way of doing things
(usually short-lived before the next change cycle)
[Cycle diagram: I. Denial → II. Resistance → III. Experimentation → IV. Commitment]
236
158
WAYS TO SUCCESSFULLY ROLL
OUT CHANGE
• Communicate, communicate, communicate
Make it clear what that “future state” looks like
Explicitly define what the change is, who will
be involved, how it will be done and what’s in it
for me?
What, who, how, WIIFM
• Use storytelling/narrative of other successful
programs to help demonstrate that it can be done
(vision of success)
238
239
159
RESISTANCE
240
FACULTY RESISTANCE
• Negative view of
assessment
• Always starting
over
• Lack understanding
• Lack motivation
241
160
TO DEVELOP SUCCESSFUL PROGRAM
ASSESSMENT/CQI PROCESSES:
• Address faculty concerns
Reduce workload of massive data collection
processes
Increase confidence in the process (produce
credible evidence)
• Develop a shared understanding of best practice for
program assessment
Tension between desire to be autonomous and yet wanting
someone to tell us what to do
• Move from focus on individual courses to cumulative
effect of learning at the program level
242
RESISTANCE IS HEALTHY
243
161
RESISTANCE
244
PARADOXICAL THEORY OF
CHANGE
People get “stuck” in
the current state.
They will not be
“unstuck” until they
can fully appreciate
the current state.
245
162
LEVELS OF RESISTANCE
1. Need for additional information
“I just need to clarify some things so I have a better
picture of what’s happening.”
2. Emotional attachment to current situation
“I have too much at stake to buy in.”
3. Values-based attachment to current situation
“This is in total conflict with my beliefs about how an
organization should be run.”
246
GROUP DYNAMICS
247
163
WORKING WITH GROUPS
248
Group-Building Behaviors: consensus seeking, summarizing, encouraging, harmonizing,
compromising. Examples?
Self-Oriented Behaviors: bulldozing, blocking, recognition seeking, deserting. Examples?
249
164
UNDERSTANDING
DIFFERENCES
250
“TAKING IT IN”
DIFFERENCES IN LEARNING STYLES
Active Reflective
Sensing Intuitive
Visual Verbal
Sequential Global
251
165
1
Excerpted from: LEARNING STYLES AND STRATEGIES
Richard M. Felder
Hoechst Celanese Professor of Chemical Engineering
North Carolina State University
Barbara A. Soloman
Coordinator of Advising, First Year College
North Carolina State University
2
ACTIVE AND REFLECTIVE LEARNERS
• Active learners tend to retain and understand information best by doing something active with it—
discussing or applying it or explaining it to others. Reflective learners prefer to think about it quietly
first.
• “Let’s try it out and see how it works” is an active learner’s phrase; “Let’s think it through first” is the
reflective learner’s response.
• Active learners tend to like group work more than reflective learners, who prefer working alone.
• Sitting through lectures without getting to do anything physical but take notes is hard for both learning
types, but particularly hard for active learners.
Everybody is active sometimes and reflective sometimes. Your preference for one category or the other
may be strong, moderate, or mild. A balance of the two is desirable. If you always act before reflecting
you can jump into things prematurely and get into trouble, while if you spend too much time reflecting
you may never get anything done.
3
SENSING AND INTUITIVE LEARNERS
• Sensing learners tend to like learning facts, intuitive learners often prefer discovering possibilities and
relationships.
• Sensors often like solving problems by well-established methods and dislike complications and surprises;
intuitors like innovation and dislike repetition. Sensors are more likely than intuitors to resent being
tested on material that has not been explicitly covered in class.
• Sensors tend to be patient with details and good at memorizing facts and doing hands-on (laboratory)
work; intuitors may be better at grasping new concepts and are often more comfortable than sensors
with abstractions and mathematical formulations.
• Sensors tend to be more practical and careful than intuitors; intuitors tend to work faster and to be more
innovative than sensors.
• Sensors don’t like courses that have no apparent connection to the real world; intuitors don’t like “plug-
and-chug” courses that involve a lot of memorization and routine calculations.
Everybody is sensing sometimes and intuitive sometimes. Your preference for one or the other may be
strong, moderate, or mild. To be effective as a learner and problem solver, you need to be able to function
both ways. If you overemphasize intuition, you may miss important details or make careless mistakes in
calculations or hands-on work; if you overemphasize sensing, you may rely too much on memorization
and familiar methods and not concentrate enough on understanding and innovative thinking.
166
VISUAL AND VERBAL LEARNERS
Visual learners remember best what they see—pictures, diagrams, flow charts, time lines, films,
and demonstrations. Verbal learners get more out of words—written and spoken explanations. Everyone
learns more when information is presented both visually and verbally.
In most college classes very little visual information is presented: students mainly listen to lectures and
read material written on chalkboards and in textbooks and handouts.
Unfortunately, most people are visual learners, which means that most students do not get nearly as much
as they would if more visual presentation were used in class. Good learners are capable of processing
information presented either visually or verbally.
4
SEQUENTIAL AND GLOBAL LEARNERS
• Sequential learners tend to gain understanding in linear steps, with each step following logically from the
previous one. Global learners tend to learn in large jumps, absorbing material almost randomly without
seeing connections, and then suddenly “getting it.”
• Sequential learners tend to follow logical stepwise paths in finding solutions; global learners may be able
to solve complex problems quickly or put things together in novel ways once they have grasped the big
picture, but they may have difficulty explaining how they did it.
Many people who read this description may conclude incorrectly that they are global, since everyone has
experienced bewilderment followed by a sudden flash of understanding. What makes you global or not is
what happens before the light bulb goes on. Sequential learners may not fully understand the material but
they can nevertheless do something with it (like solve the homework problems or pass the test) since the
pieces they have absorbed are logically connected. Strongly global learners who lack good sequential
thinking abilities, on the other hand, may have serious difficulties until they have the big picture. Even
after they have it, they may be fuzzy about the details of the subject, while sequential learners may know
a lot about specific aspects of a subject but may have trouble relating them to different aspects of the
same subject or to different subjects.
167
“Well, what I was thinking is that first,
you’d look for direct assessment
examples, such as grades. The data show
these are easiest to find. Next, you’d find
indirect assessments, such as improved
perception of the importance of
mathematics. You said these are more
difficult to find…I’ll have to think about
ways to do that.”
252
253
168
SO WHAT?
• As a leader, how does thinking about “style” help
you in engaging others in the process?
• What does someone “need?”
– What strategies would you use to appeal to someone
who was primarily:
Table 1 & 6: Active / Reflective
Table 2: Sensing / Intuitive
Table 3: Visual / Verbal
Table 4 & 5: Sequential / Global
254
EVOLVING AN
ASSESSMENT CULTURE
255
169
PROCESS OF DEVELOPING
A “CULTURE”
LEVEL ONE (Beginning): Tolerated, Isolated, Episodic. There is minimal evidence that the assessment program is stable and will be sustainable.
LEVEL TWO (Progressing): Anticipated, Connected, Periodic. Assessment findings are beginning to be incorporated into program reviews and the self-study of institutional effectiveness.
LEVEL THREE (Maturing): Celebrated, Integrated, Characteristic. Student learning has become central to the institution, and student learning, performance, and achievement are celebrated.
170
PROCESS OF DEVELOPING
A “CULTURE”
As a program moves from BEGINNING through PROGRESSING to MATURING, a climate becomes a culture:
CLIMATE (Beginning)           CULTURE (Maturing)
Isolated                      Pervasive
Temporary                     Ongoing
Personality Driven            Structurally Driven
Surface                       Embedded
External                      Internal
Program Accreditation         Improvement & Validation
Used with permission, Susan Hatfield, Winona State
University
171
COMMUNICATION
260
EFFECTIVE LEADERSHIP
261
172
LISTENING TECHNIQUES
• Critical Listening
Separate fact from opinion.
• Empathetic Listening
Don't talk; listen.
Don't give advice; listen.
Don't judge; listen.
• Creative Listening
Exercise an open mind.
Supplement your ideas with another
person’s ideas and vice versa.
262
LISTENING SKILLS
• Stop talking.
• Engage in one conversation at a time.
• Empathize with the person speaking.
• Ask questions.
• Don’t interrupt.
• Show interest.
263
173
LISTENING SKILLS
264
Application:
265
174
TRIAD PRACTICE
266
DEBRIEF
• What was the hardest part of the exercise?
• How did you feel as the faculty member?
• How did you feel as the leader?
• Did you feel capable of providing feedback?
– Empathetic?
– “Yes” responses?
– Good listener?
267
175
ACTION AGENDAS
268
EFFECTIVE MEETINGS
Elements of a well-developed agenda (meeting
planner)
– Topics in logical order (including a
sentence or two that defines each item
and its relevance)
– Process used for coming to a decision
(e.g., brainstorming, multi-voting, etc.), rather
than simply stating “discuss...”
– Team roles assigned
269
176
PRIMARY TEAM ROLES
• Leader: Develops the agenda; leads team through
problem solving process; provides structure and guidance
to allow maximum participation; influences team decisions
equally with other members.
• Recorder: Summarizes discussion and material generated
during the working meetings.
• Timekeeper: Makes sure the team stays on its time budget
for the various tasks.
• Issue Bin/Action Items: Records items placed in the issue
bin and all items which need action by team members
270
PRODUCTIVE MEETINGS
• Plan ahead!
• Time guideline (amount of time allotted for
each agenda topic)
• Item type—whether the item requires
discussion or decision, or is just an
announcement
271
177
Agenda Planner
(for each item, know the following things)
Group: (e.g., Assessment Committee, Assessment Sub‐committee)
Meeting purpose: (e.g., Review and approve writing program rubric)
Time Block (minutes) Details
From: To: (for each topic)
Topic
Purpose
Indicate responsibilities:
Leader:
Recorder:
Time Keeper:
Issue Bin/Action Items:
272
Agenda Planner
Group: Program Assessment Committee
Meeting title: Assessment Committee Planning Workshop
Topics: (1) Committee Assignments; (2) 2016 action items; (3) Timelines, benchmarks, priorities
Purpose: (1) Finalize committee responsibilities; (2) Review and finalize 2016 action items; (3) Determine timelines, benchmarks, and priorities for committee work
Time allowed: (1) 15 minutes; (2) 20 minutes; (3) 20 minutes
Pre-meeting reading (see attached or location of material) or preparation, 2 days before meeting: (1) Review committee responsibilities and send top three preferences for committee assignments (attach committee document to agenda); (2) Review and come with comments/suggestions about action items for 2016 (send url for action item document)
Tools or activity: Decision matrix
Visual/audio/other aids/equipment: Poster paper with post-its with committee member names; poster paper with a section for each committee; poster paper with tasks listed and columns for timelines, benchmarks, priority; post-its
Roles: Leader: Jeff Wilson; Recorder: Huan Chow; Time Keeper: Juanita Jones; Issue Bin/Action Items: Mary Simpson
178
PRODUCTIVE MEETINGS
274
PRODUCTIVE MEETINGS
• Tactfully prevent anyone from dominating
or being overlooked
• Bring discussion to a close (e.g.,
summarize)
• Take minutes
– Be sure someone has responsibility to record
key subjects and main points raised,
decisions made including who has agreed to
do what and by when, and items deferred to a
later time - ROTATE THIS DUTY
275
179
AGENDA PREPARATION
Agenda should be sent out at least 5 days before
the meeting
• Time to complete “homework”
• Provide any requested responses
• As the leader, you need to set aside specific time to
plan the meeting
• It may take as much as an hour to prepare the agenda
• Be sure minutes of previous meeting have been sent
• Follow up with any unfinished business
• Review action items
• Prepare materials that need to be sent out with agenda
276
FACILITATION TOOLS
277
180
FACILITATION TOOLS
Use techniques that keep the momentum
• In this Institute, we have used:
– Silent brainstorming/affinity
– Nominal group process
– Force field analysis
– Modified nominal group process
– Decision matrix
– Issue bin
• Tools are designed to maximize the involvement of
participants, structure the conversations, move the
process forward.
277
181
ISSUE BIN
The Tool (sometimes referred to as “Parking Lot”)
Often groups or individuals will get off track ‐ a new topic will come up or an idea
will begin to be discussed that isn't the main focus of the meeting, or might be better
discussed later on. At that point whoever is facilitating the meeting would suggest
that this topic or issue be placed in the Issue Bin. They would then go to a chart on
the wall labeled Issue Bin and write a brief description of the issue so that the idea
won't be lost. In other words, the goal of this tool is to keep a group on track with
their agenda.
Beyond that though, the Issue Bin is a way to help a group "hold that thought" so
that the idea isn't lost ‐ and can be discussed later when the time is right.
The Misuse
The most common misuse of this valuable tool is that facilitators put items into the
Issue Bin or Parking Lot with no real plan to revisit them ‐ they are using the Bin as
a place to put stuff they don't really want to talk about at all. Or, facilitators do have
good intentions, but when the meeting runs long (how many meetings have you
been to that didn't go long?), and time is short, the Issue Bin item(s) get lost in the
rush to finish the meeting.
The Best Use
It isn't hard to use an Issue Bin effectively. It just requires a process and a bit of
discipline.
Make sure that everyone knows the function of the Issue Bin.
Capture items to the Issue Bin as appropriate.
Schedule time in the agenda (typically 2‐3 minutes is all that is required) to
review the Issues near the end of the meeting. This review should answer
three questions: Is this still an issue (or has it been resolved since it was
placed in the Bin?) Is there an action item that can be created from this issue?
If so, what is it? Is this a topic that needs to be on a future meeting agenda?
Don't leave the Issue Bin until something is done with each issue. If nothing
can be done with it at this time, consider saving the issue and having it reside
on the Issue Bin at the start of your next meeting.
The bottom line? Do something with every one of them! Taking this simple approach
to using this tool will make your meetings run more productively and make sure
that all of the best ideas and issues are both raised and considered.
Just like any tool, it is wonderfully valuable when used correctly. And just like any
other tool, it can be damaging and counterproductive when it isn't.
Adapted from: Meeting Tools: Using The Issue Bin By: Kevin Eikenberry
http://www.sideroad.com/Meetings/meeting‐tool‐issue‐bin.html
182
SILENT BRAINSTORMING AND
AFFINITY PROCESS
Silent Brainstorming:
The purpose of silent brainstorming is to generate a number of ideas in a non‐
analytic manner that permits one group member’s ideas to stimulate the ideas of
others. This is also a way for every group member to get involved in the process.
Everyone’s ideas are recorded and valued.
Process:
Each person generates as many responses to the topic as possible.
This should be done in seven words or less and use a verb and a noun.
Only one idea per post‐it.
After everyone is done writing, have all members post their ideas on the flip
chart (or other available surface).
As other members of the group review all the post‐its, new ideas will emerge.
New ideas should be placed on Post‐It notes and put with the rest of the
ideas.
The group should discuss the Post‐It notes to check if there are any questions
about what any of the post‐its say or mean. (Check for understanding)
Affinity Process:
The purpose of the affinity process is to organize a large set of items into smaller
sets of related items.
Process: After there is an understanding of each of the post‐its, team members now
do the following:
SILENTLY move the post‐its around, grouping those which have an affinity.
If disagreement exists when grouping (noted because they keep moving
them from one group to another) make a copy of the item and place it in
more than one group.
After all items have been grouped, discuss each grouping to determine what
it is that relates all the post‐its.
Write a HEADER card that captures the theme and feeling of the group of
items.
If there are single-idea Post-It notes that do not fit well with other ideas, the group needs to decide whether to keep them (“yes” is an okay answer).
183
NOMINAL GROUP TECHNIQUE
This article is online at http://joe.org/joe/1984march/iw2.html.
184
FORCE FIELD ANALYSIS
2. Identify expectations of what the group should accomplish (e.g., a list of realistic
strategies that we can implement to engage faculty)
4. When you try to decide what action to take, look at the forces and choose a strategy.
How do you enhance driving forces or how do you reduce restraining forces?
What actions can be taken to limit restraining forces?
(If you try to enhance the driving forces, the restraining forces may push even
harder.)
5. Quality check (What gets measured gets valued, and what gets valued gets done.)
How did we do? (Evaluate the process)
185
MODIFIED NOMINAL GROUP TECHNIQUE
Purpose
Steps
1. Count the number of items on the list and divide by three. This is the number of votes each person has. (Round fractions down.) If there are more than 60 items, do not go over a vote total of 20; vote totals of more than 20 are hard to manage. Give each team member as many colored dots as s/he has votes. (See the example after step 2.)
2. Have each person use her/his votes (colored dots) to select the items s/he wants to keep. While each person can vote for any item, it is good to limit the number of votes any one item can receive from a single person to three. Note: the group can decide if they want to allow more or less multiple voting.
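For example, a list of 25 brainstormed items gives each person 8 votes (25 divided by 3, rounded down); a list of 70 items would be capped at 20 votes per person.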
186
DECISION MATRIX
Effort Required (rows) vs. Impact Created (columns); the number in each cell is the priority (#1 = highest).

                          Little Impact = 1   Some Impact = 2   Considerable Impact = 3
Little or no effort = 3        #7                  #3                  #1
Moderate effort = 2            #8                  #4                  #2
Difficult = 1                  #9                  #6                  #5
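For example, a strategy judged to require little or no effort (3) and to create considerable impact (3) falls in cell #1 and is addressed first; a difficult strategy (1) with little impact (1) falls in cell #9 and is addressed last.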
Considerations
Effort
Resource requirements
Complexity of investigation
Time required
Ability to measure outcomes
Number of decision making levels required
Impact
Effect on quality
Time savings
Morale
Number of people who benefit
187
SUSTAINABILITY
278
WHY SUSTAIN?
279
188
WHAT NOT TO SUSTAIN?
280
281
189
1. MEANINGFUL STRUCTURES
Build assessment into:
• Policies and procedures
• Program governance:
– standing committee
– regular place on agenda
• Strategic planning
– Decision-making tools:
• Departmental plans
• Program Review
282
1. MEANINGFUL STRUCTURES
• Build visibility:
– Resource support
• Be creative
• Connect assessment work to relevant
committees, like Curriculum and Tenure review
• Consider incorporating the scholarship of
teaching and learning as elements in tenure and
promotion decisions
• Consider including collecting, analyzing, and
using assessment data as part of the job
description
283
190
2. MEANINGFUL USES OF
ASSESSMENT DATA
284
3. REASONABLE WORKLOAD
• Proceed from what faculty and staff
already do—but help shape it into good
assessment
• Spread the burden; Share the joy
rotation plans on committees
distributed functions
“champions” that educate, inform, and
coach
• Use pilot approaches to see what
works—and doesn’t—instead of “one
size fits all”
285
191
4. COMMITTED CULTURE
• Build a “community of scholars”
environment where interesting questions
arise from assessment data—a culture of
inquiry and evidence
• Have conversations about assessment
– Intentional
– Regular
– Inclusive
286
4. COMMITTED CULTURE
• Build
– Leadership
– cadre of peer advisors
• Weave assessment into curriculum design
and approval processes
• Map curriculum to determine how
outcomes are developed over time
• Consider integrative assessments
287
192
4. COMMITTED CULTURE
• User-friendly processes and data
systems
• Built-in peer support
• Acknowledgement
• Celebrations
288
289
193
PEOPLE
290
194
PEOPLE – WHO DO WE INVOLVE?
• Students
• Avoid a “stealth” assessment process.
• Students should be knowledgeable about the
STUDENT OUTCOMES.
• Students should know the level of performance that
is expected of them. Students should be given
timely feedback on their performance related to the
student outcomes.
• Research on learning is definitive:
• Students learn best when expectations for their performance are clear AND they get timely feedback on their performance.
292
MANAGING UPWARDS
293
195
MANAGING UPWARDS
294
MANAGING UPWARDS
• You need support, cooperation and commitment from
your “supervisor”
• You are in charge of managing her/his expectations
and keeping communication channels open
• This process is not his/her priority
• Plan your communication
Clearly define the issues (What do you need?
Why?)
Present possible solutions that benefit not only your needs but also his/hers (WIIFM: What's in it for me).
LISTEN…be prepared to collaborate
Provide “talking points”
295
196
PERSPECTIVE
296
PERSPECTIVE
• Your most basic beliefs and attitudes
can influence how others view the
process:
– The language you use,
– The passion (or lack thereof) you
demonstrate,
– The seriousness of commitment you
show.
297
197
LANGUAGE THAT IS USED
• A College-level assessment committee was called the
“Overlords.”
• A College desiring to develop a “culture of assessment”
produced two documents to guide the program assessment
process:
– Student Learning Assessment Methods
– Student Learning Assessment Program
• Referred to as “SLAM” and “SLAP”
• A College had an “Assessment Day” where programs made
presentations on their assessment processes. A trophy
was given to the program that best demonstrated “closing
the loop.”
– It was called the “Loopie” trophy.
298
299
198
WILLINGNESS TO ADMIT YOU
DON’T KNOW SOMETHING
• Don't let the need for “perceived” perfection stand in
the way of progress.
– Strive for excellence, not perfection.
• This is a process and will change over time—need
to put “continuous” back into “continuous
improvement.”
• Listen to your colleagues.
• Critical error to believe that a previous “clean”
accreditation visit means that you don’t have to
change anything—EVER!
300
PASSION THAT IS
DEMONSTRATED
• Enthusiasm IS contagious.
• What you say and HOW you say it is important.
• Do you begin by apologizing when you ask your
colleagues to do something for program
assessment?
• Do you make promises that you can’t keep?
• This won’t take any time!
• Do you blame ABET for the workload that has been
created?
• Do you express doubts about the value of the
process?
301
199
SUMMARY
• Common mistakes in the assessment
process are not just related to data
collection, evaluation, and improvements.
• Assessment is a human process and wise
involvement of human capital is critical to its
success.
• Attitudes and conversations are critical in
establishing an environment for collegial
engagement.
– “The quality of an institution is known by the
quality of its conversations.”
302
200
Self-Assessment: Continuous Improvement of
Program-Level Assessment of Student Learning ¹
© Copyright 2016
0-not in place; 1-beginning stage of development; 2-beginning stage of implementation; 3-in place and implemented;
4-implemented and evaluated for effectiveness; 5-implemented, evaluated and at least one cycle of improvement
____________________
¹ This tool is intended for self-assessment only to assist in understanding areas for improvement in the assessment process development. Assessment Planning
Flowchart © 2004 Revised July 2014
201
202
Appendices
Journal
Participant Journal
Date: _______________
1.
2.
3.
4.
5.
1.
2.
3.
4.
5.
203
204
Participant Journal
Date: _______________
1.
2.
3.
4.
5.
1.
2.
3.
4.
5.
205
206
Participant Journal
Date: _______________
1.
2.
3.
4.
5.
1.
2.
3.
4.
5.
207
208
Participant Journal
Date: _______________
1.
2.
3.
4.
5.
1.
2.
3.
4.
5.
209
210
Participant Journal
Date: _______________
1.
2.
3.
4.
5.
1.
2.
3.
4.
5.
211
212
Capstone
Capstone
DAY ONE
Where do you want your continuous improvement process to be at the end of the Academic Year?
List the “issue(s)” that will need to be addressed (e.g., prioritize activities, faculty involvement, identify resources for
faculty development, etc.)
213
214
Capstone
DAY ONE
What things are you going to continue to do in the same way you are doing them now?
DAY TWO
People:
215
216
Capstone
DAY TWO
DAY THREE
Develop a proposed work plan to achieve your goals:
WHEN?
Work Plan Matrix
Goal Task(s) to achieve goal Who When Person Responsible
217
218
Capstone
EXAMPLE
Goal: Create measurable student outcomes
  Task: Find examples of measurable outcomes (Who: Sarah Pfledderer; When: January 2016; Person responsible: Sarah Pfledderer)
  Task: Assemble team of faculty (Who: Me in consultation with Assoc. Dean; When: February 2016; Person responsible: Graceanna Cramer and Sarah Pfledderer)
  Task: Provide faculty development on how to write measurable outcomes (Who: Center for Teaching and Learning (CTL); When: March 2016; Person responsible: Tim Mitchell (CTL))
  Task: Work with faculty to develop performance indicators for student outcomes (Who: Faculty committee on assessment of student outcomes (CASO); When: March/April 2016; Person responsible: Sarah Pfledderer)
  Task: Validate performance indicators by program faculty (Who: CASO with Department Head and program faculty; When: April 2016; Person responsible: Sarah Pfledderer)
Goal: Redesign Curriculum Map
  Task: Identify models for curriculum mapping (Who: Sarah Pfledderer; When: January 2016; Person responsible: Sarah Pfledderer)
  Task: Review current state of curriculum mapping (Who: CASO; When: February 2016; Person responsible: Sarah Pfledderer)
  Task: Identify needed improvements (Who: CASO; When: February 2016; Person responsible: Sarah Pfledderer)
There are many commercial products available for the collection and analysis of assessment
data. ABET does not require the use of technology in program assessment, nor does it
endorse any commercial software products.
This module has been designed to show you how Microsoft Excel can be used to build a
relatively foolproof data entry system that can be used for assessment and/or grading. It is
also possible to use this template for instant analysis and visualization of the collected data.
Donald Sanderson, a senior IDEAL scholar from East Tennessee State University, developed
this module.
219
Exercise: Using Excel for Data Collection
4. Place a border around the merged cells. Click on the border control button on the
home tab and select Thick Box Border from the dropdown list.
220
5. Repeat for row 2, typing the student outcome in cell A2
6. Merge cells A2‐I2 together
7. Place a Thick Box Border around the merged cell
8. Merge cells A3 and A4 together and put a Thick Box Border around the merged cell
9. Merge cells B3 and C4 together and put a Thick Box Border around the merged cell
10. Merge cells D3 and E4 together and put a Thick Box Border around the merged cell
11. Merge cells F3 and G4 together and put a Thick Box Border around the merged cell
12. Merge cells H3 and I4 together and put a Thick Box Border around the merged cell
The new result should look like this:
221
13. In the new B3‐C4 box write “1 = Not Acceptable”
14. In the new D3‐E4 box write “2 = Below Expectations”
15. In the new F3‐G4 box write “3 = Meets Expectations”
16. In the new H3‐I4 box write “4 = Exceeds Expectation”
222
17. Highlight cells A5 through I5, select the fill tool (paint bucket) and fill the cells with
any color desired.
18. To make things easier, set the width for each column. Go to the top row of the sheet
and right click on the column letter and select column width from the drop down
menu. Set the column width as shown in the table.
Column Width
A 3”
B 3”
C 0.5”
D 3”
E 0.5”
F 3”
G 0.5”
H 3”
I 0.5”
19. In cell A6 write the first performance indicator for the rubric
223
20. In cells B6, D6, F6 and H6 write the descriptions for work that is not acceptable,
below expectations, meets expectations and exceeds expectations.
21. Repeat this for rows 7, 8 and 9. Feel free to add formatting embellishments as
appropriate. The final result should look like this:
Rubric to assess written communication
Ability to communicate effectively (written)

Articulation of Ideas - Written
  1 = Not Acceptable: Student does not articulate ideas at all
  2 = Below Expectations: Text rambles, points made are only understood with repeated reading, and key points are not organized
  3 = Meets Expectations: Articulates ideas, but writing is somewhat disjointed and difficult to follow
  4 = Exceeds Expectations: Articulates ideas clearly and concisely

Organization - Written
  1 = Not Acceptable: Little or no structure or organization is used
  2 = Below Expectations: Some structure and organization is used
  3 = Meets Expectations: Generally organized well, but paragraphs combine multiple thoughts or sections are not identified clearly
  4 = Exceeds Expectations: Organized written materials in a logical sequence to enhance the reader's comprehension

Quality of Work - Written
  1 = Not Acceptable: Work is not presented neatly; spelling/grammar errors present throughout more than 1/3rd of the paper
  2 = Below Expectations: Work is not neatly presented throughout; one or two spelling/grammar errors per page
  3 = Meets Expectations: Written work is usually presented neatly and professionally; grammar and spelling are usually correct
  4 = Exceeds Expectations: Written work is presented neatly and professionally; grammar and spelling are correct

Use of Graphs/Tables/etc. - Written
  1 = Not Acceptable: No Figures, Tables, or graphics are used at all
  2 = Below Expectations: Figures, Tables, and Graphics are present but are flawed (axes are mislabeled, no data points, etc.)
  3 = Meets Expectations: Use of Figures, Tables, and Graphics that are usually in the proper format
  4 = Exceeds Expectations: Use of Figures, Tables, and Graphics that are all in proper format
224
22. Some text may overflow its boundaries so select cells A6 through H9, then click the
wrap text button from the home tab.
225
23. As a last, optional step, check boxes can be added to the criteria so this sheet can be
used as a paper form for data collection. Select the cell where the box will be placed
(say C6) and then select Insert from the menu bar. Click on symbol and then select
the check box from the symbol dialog box.
Once the first check box has been inserted, copy and paste this symbol into the rest of the
cells in columns C, E, G, and I.
24. The master sheet is done! Save the workbook as MyStep1 on your computer.
226
Step 2: Prepare Data Entry Sheets
1. Save the workbook again, this time using the Save As… option and call the file
MyStep2.
2. We are now going to rename our worksheet. Double click on the tab for the sheet
and rename it “Criteria”. If you are using an older version of Excel you might have
two other sheets. If this is the case, right click on the tab for those sheets and choose
Delete from the menu.
3. Right click on the tab for the Criteria sheet and choose the option Move or Copy… A
dialog box will appear. Check the box marked Create Copy and then click OK.
4. The new sheet will be automatically named Criteria (2). Double click the sheet tab
and rename the sheet summary.
5. Select all the cells that have been filled in (A1 through I9), then right click on any of
them and select Clear Contents from the dropdown list. This leaves the formatting
and widths, but no text.
6. Select cell A1 and enter the following formula in the formula bar:
=Criteria!A1
This formula states that this cell will display the current contents of cell A1 on the
Criteria worksheet. Now, if any changes are made to the student outcome or
performance indicators they will automatically be updated in the Summary sheet.
227
7. Select cell A2 and enter the following formula in the formula bar
=Criteria!A2
8. Select cell B3 and enter the following formula in the formula bar
=Criteria!B3
9. Select cell D3 and enter the following formula in the formula bar
=Criteria!D3
10. Select cell F3 and enter the following formula in the formula bar
=Criteria!F3
11. Select cell H3 and enter the following formula in the formula bar
=Criteria!H3
12. Select cell A6 and enter the following formula in the formula bar
=Criteria!A6
13. Select cell A7 and enter the following formula in the formula bar
=Criteria!A7
14. Select cell A8 and enter the following formula in the formula bar
=Criteria!A8
15. Select cell A9 and enter the following formula in the formula bar
=Criteria!A9
16. Move to the top of the sheet, highlight column C, right click and select delete from
the dropdown menu. Repeat this for columns E, G and I.
228
The result should look similar to the criteria worksheet, with two important differences: first, the check boxes are missing and, second, any changes made to the criteria worksheet will be reflected on this worksheet. (The sheet shows the four performance indicators: Articulation of Ideas - Written; Organization - Written; Quality of Work - Written; Use of Graphs/Tables/etc. - Written.)
17. Make a copy of the summary sheet and name the copy “1” (just like steps 3 and 4
above).
18. Worksheets can be reordered by clicking on the page tab and dragging the sheet to
the left or right. Order the sheets as shown below and move to worksheet 1.
229
21. Copy this formula into cells F7 through F9. Select cells F6 through F9 and set the
text color to red or some other attention grabber.
22. Move to cell F10 and enter the following formula:
=COUNTIF(F6:F9,"ERROR")
The COUNTIF formula will return the number of cells in the range of F6 to F9 that
contain the value ERROR.
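The error flag itself is created in an earlier step on worksheet 1. Purely as an illustration (this is not necessarily the exact formula the module uses), a flag of this kind can be produced with something such as:
=IF(COUNTA(B6:E6)>1,"ERROR","")
which displays ERROR in column F whenever more than one rating has been entered for the indicator in row 6.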
23. Move to cell G10 and enter the formula:
=COUNTA($B$6:$E$9)
This counts the number of responses on this sheet.
24. Enter some responses and check these cells; they are keeping track of errors and
entries on the worksheet.
25. The first data collection worksheet is now complete. Make 6 copies of the worksheet
calling them “2”, “3”, “4”, “5”, “6” and “Last”. Save the workbook.
The reason for using Last for the last worksheet instead of 7 is so we can use multi‐sheet
formulas to sum up the data. If the range is given as 1 to Last then it is easy to insert
additional numbered sheets without having to modify our formulas each time.
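As an illustration of how the 1:Last range works (this formula is not one of the module's steps), entering
=SUM('1:Last'!G10)
on the Summary sheet would total the entry counts kept in cell G10 of every data sheet, and any numbered sheet later inserted between sheet 1 and the sheet named Last would be included automatically.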
230
Step 3: Create a Summary Sheet
1. Save the workbook again, this time using the Save As… option and call the file
MyStep3.
2. Put some data in the worksheets
a. Sheet 1 mark all rows Not Acceptable
b. Sheet 2 mark all rows Below Expectations
c. Sheet 3 mark all rows Meets Expectations
d. Sheet 4 mark all rows Exceeds Expectations
e. Sheet 5 mark first row Not Acceptable, second row, Below Expectations, third
row Meets Expectations and fourth row Exceeds Expectations.
f. Sheet 6 mark first two rows only Exceeds Expectations; leave last two rows
empty
g. Sheet Last mark first two rows only Meets Expectations; leave last two rows
empty
3. Return to Summary sheet. Merge cells F3‐F4 and write “Responses”.
4. Move to cell F6 and enter the following formula:
=COUNTA('1:Last'!B6:E6)
This counts the number of non‐empty cells in the range B6 to E6 on worksheet 1 to
Last. This gives the number of worksheets that contain a response for this
performance indicator.
5. Copy the formula into cells F7 to F9. The F column should appear as follows:
Responses
7
7
5
5
6. Select cell B6 and enter the formula:
=COUNTA('1:Last'!B6)
The value of 2 should appear. This shows that for the first performance indicator,
two responses have been Not Acceptable.
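The rest of the grid can be filled the same way; for example, copying this formula one column to the right gives
=COUNTA('1:Last'!C6)
which counts the Below Expectations responses for the first indicator, and copying down through row 9 covers the other performance indicators, since the relative reference adjusts automatically. (This is offered only as a shortcut consistent with the layout described above, not as a substitute for the module's own steps.)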
231
Optional step: If you want the number of responses at a given level for the
performance indicator to be displayed as a percentage, use the following formula:
=(COUNTA('1:Last'!B6)/($F6+0.000000000001))
This formula is counting the number of non-empty B6 cells on sheets 1 through Last and dividing by the value in F6, which is the total number of responses. Adding 0.000000000001 to the value in F6 prevents a division-by-zero error if no one has responded, without making a significant difference to the data. Select cell B6 and then, from the home tab, select the % button. This will format the cell as a percentage.
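If you would rather not add the tiny constant, an equivalent guard (shown here only as an alternative, not as part of the original module) is to test for an empty column explicitly:
=IF($F6=0,0,COUNTA('1:Last'!B6)/$F6)
This returns 0 when there are no responses and the exact proportion otherwise; format the cell as a percentage in the same way.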
232
9. Select cell F14 and enter the formula:
=SUM('1:Last'!E11)
This will calculate the number of rows across the data sheets that have errors.
10. Now add cells to collect information about the administration of the rubric.
Go to cell A15 and write “Date Assessed”
11. Go to cell A18 and write “Item Assessed”
12. Go to cell A21 and write “Instructor”
13. Using the border control button, format cells A16, A19 and A22 to have a Thick Box
Border
14. Go to cell A16 and right click the cell. Select Format Cells from the drop down list.
15. Select the number tab on the dialog box and choose Date from the category list on
the left. Select your preferred type.
233
18. Enter your name in cell A22, giving results like those shown below.
Date Assessed
August 7, 2013
Item Assessed
IDEAL scholars
Instructor
J. Warnock
234
Step 4: Format for Easy Entry
1. Save the workbook using the Save As… option and call the file MyStep4.
2. Go to worksheet 1.
Note: These next steps should have been completed before the copies of worksheet
1 were made but that would not have made sense. When done with this section you
can duplicate the new worksheet 1 and replace sheets 2 through Last.
When the data entry sheets were created the ERROR flag was built in. However, this
may not be enough to call attention to an error in data entry. We will use conditional
formatting to aid with data input.
3. Clear the data from sheet 1.
4. Highlight cells B6 through E6
5. Choose conditional formatting from the home tab and click on New Rule.
6. In the New Formatting Rule dialog box, under style select “Classic”. In the next drop
down menu select “Use a formula to determine which cells to format”. Then enter
the following formula:
235
=(COUNTA($B$6:$E$6))=0
Next, select custom format from the “Format with” dropdown menu.
Select the Fill tab and choose a light color (I use light blue). Remember the color you
choose, as it will be used to format all “data needed” cells. Click OK. You have now
formatted the sheet so the cells to be completed by the user are filled with your
chosen color. We will use a similar technique to highlight errors in data entry.
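Note that the rule formula uses absolute references ($B$6:$E$6), so every cell in the highlighted range is evaluated against the same test: the whole row is colored whenever no response has yet been entered anywhere in B6 through E6.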
7. Highlight cells B6 to E6 again and select Conditional Formatting. This time select
Manage Rules. Click on the plus sign to add a new rule.
8. As before, select “Classic” from the Style menu and “Use a formula to determine
which cells to format”. Enter the following formula:
=$F$6="ERROR"
236
Again, select Format with and Fill and choose a highly visible “error” color such as
red. Click OK. Two rules will now display. Click OK again.
9. Highlight cells B6‐E6 and copy them to cells B7‐E7, B8‐E8 and B9‐E9. Each of these
rows will now be displaying the “data needed” color. Type in some correct data (i.e.
one value per row). The cells should turn to white. You can also enter data into two
cells in the same row and the cells should turn red.
10. The summary sheet uses the values in cells F10 and G10 but these do not need to be
seen. The columns could be hidden, but that would hide the error flags as well. To
hide them in plain sight, simply set the fill and text colors to white.
237
11. As a last aid to usability, lock this sheet so that changes cannot be made EXCEPT
those needed for data entry. We must specify the cells that are to remain unlocked. Select
cells B6 through E9, right click and choose Format Cells from the dropdown list. In
the dialog box, select the Protection tab and uncheck both boxes (locked and
Hidden), and click OK.
238
12. Select the Review tab from the ribbon and click Protect Sheet.
13. A dialog box will open. We can use the default settings so just select OK. You could
set a password to prevent anyone else from unlocking these cells.
You can now only alter values in the rubric that the end user will be filling in. You might
also consider protecting the criteria and summary sheets.
Now save this workbook.
239
240
Curriculum Map
Please fill out this form rating the level to which Department Learning Objectives are being met in your
courses. Use the Definition Of Levels For Meeting Learning Goals In Classes document that was sent to
you (if you do not have this there is a copy in ET 204) to rate your class on a 0-5 scale, as per the defini-
tions that are provided. Rate every class you are teaching this year and any additional classes that you
taught last year that you expect to teach again. If your course falls in between two levels pick the lower
score. This is not a contest, and there are no prizes for having the highest score, so please rate your
courses as honestly as possible.
Please return this document to ............., either via e-mail or as a hard copy in my mailbox in ...... by
the end of the day on ........ It is very important that all surveys are completed and returned by this date,
as the Assessment Committee would like to have draft reports on every program ready by the middle of
exam week.
(Curriculum map form: rows list the Student Learning Objectives (Analytical Skills, Business Skills, Teamwork Skills, Technology Skills, Programming Skills); columns list your courses.)
Thank you.
241
242
Definition Of Levels For Meeting Learning Goals In Classes
Analytical Skills – Ability to: logically analyze and solve problems from different points of view; trans-
late scientific and mathematical theory into practical applications using appropriate techniques and tech-
nology.
5 – Course contains significant development of analytical skills, and reinforcement and practice of ana-
lytical skills developed in earlier classes.
4 – Course contains significant development of analytical skills, with little reinforcement or practice of
analytical skills developed in earlier classes.
3 – Course contains some development of new analytical skills, and some utilization of previously devel-
oped skills.
2 – Course contains significant utilization of previously developed skills, but little or no development of
new skills.
1 – Course utilizes some previously learned analytical skills, but develops no new ones.
Visual Communication Skills – Ability to: utilize appropriate technology to create drawings, illustra-
tions, models, computer animations, or tables to clearly convey information; interpret and utilize similar
information created by others.
3 – Course contains multiple assignments in visual communication with some instruction on the devel-
opment of new skills and feedback.
2 – Course utilizes students' existing skills in visual communication for multiple smaller or one large as-
signment, and provides examples and limited feedback, but little to no instruction on topic.
OR
Course contains significant instruction in the understanding and utilization of visual information produced by someone or something else, such as computer analysis packages.
1 – Course requires one or two small assignments utilizing visual communication, but provides limited
feedback and no instruction on the subject.
OR
Course contains some instruction in the understanding and utilization of visual information produced by someone or something else, such as computer analysis packages.
0 – Course contains no visual communication by students of any kind (faculty are still visible).
243
Oral Communication Skills – Ability to: verbally present ideas in a clear, concise manner; plan and de-
liver presentations; speak and listen effectively in discussions based upon prior work or knowledge.
5 – Course contains at least two formal presentations that every student participates in, including presen-
tation preparation instruction, supervised practice or review of content before the presentation, and
written feedback. Formal presentations are defined as being given to an external audience (external
to the course) and having a rigid time limit. In addition, the course content includes consistent in-
class discussion with participation of all students, and part of the course grade is based on this discus-
sion. For this purpose, in-class discussion is the exploration of concepts or ideas based upon out of
class assignments such as reading, not “how do I solve problem 2?”
4 – Course contains at least two formal presentations with instruction and feedback as described above,
and course has a significant, but ungraded in-class discussion component.
3 – Course has at least one formal presentation, but with limited preparation instruction and feedback,
and no external audience. Course should also have some in-class discussion.
2 – Course has one or two informal presentations that may not include every student speaking. Course
should also have some in-class discussion.
1 – Course has some in-class discussion, but no student presentations of any kind.
Written Communication Skills – Ability to: present ideas in clear, concise, well-structured prose;
choose appropriate style, form, and content to suit audience; utilize data and other information to support
an argument.
5 – Formal, written reports/papers are the majority of the grade for the course. Multiple rough drafts are
required during the quarter and feedback is given. The reports/papers are critiqued for technical
content, grammar, and spelling. This may also include large portions of exams that are essay format.
4 – Written reports/papers comprise a large portion of the course grade. This includes both formal (pre-
scribed format) and informal (unprescribed format) reports/papers that are graded on technical con-
tent, grammar, and spelling. This may also include large portions of exams that are essay format.
One assignment with drafts returned with feedback.
3 – Written reports/papers comprise a portion of the course grade. This includes both formal (prescribed
format) and informal (unprescribed format) reports/papers that are graded on technical content,
grammar, and spelling. This may also include portions of exams that are essay format.
2 – Written reports/papers comprise a small portion of the course grade. Report/paper grade is based on
content. Students receive feedback on grammar or spelling. This may also include portions of exams
that are essay format.
1 – Essay questions on exams. Written reports/papers are not part of the course grade.
244
Project Management Skills – Ability to: Set goals; create action plans and timetables; prioritize tasks;
meet project milestones; complete assigned work; seek clarification of task requirements and take correc-
tive action based upon feedback from others.
5 – Projects require scheduling resources (such as tools, supplies, machines, or assistance from others),
setting goals, writing procedures, and verifying progress towards meeting deadlines established by the
student. Instruction in project management techniques is provided and assessed as part of course.
Projects require the use of formal or informal teams, and establishing human resource roles. Teams
are required to use modern project management tools and make a presentation that includes project
management information to an external audience.
4 – Projects require scheduling resources (such as tools, supplies, machines, or assistance from others),
setting goals, writing procedures, and verifying progress towards meeting deadlines established by the
student. Instruction in project management techniques is provided and assessed as part of course.
Projects require the use of formal or informal teams, and establishing human resource roles.
3 – Projects require scheduling resources (such as tools, supplies, machines, or assistance from others),
setting goals, writing procedures, and verifying progress towards meeting deadlines established by the
student.
2 – Some student projects, weekly or longer, require a procedure, process plan, or timeline before any
other work can be performed. In addition, intermediate deadlines and an analysis of success in meet-
ing the plan are part of a final report and grade.
1 – Some student projects, weekly or longer, require a procedure, process plan, or timeline before any
other work can be performed.
Business Skills – Ability to: accurately estimate production costs; calculate the cost effects of alternative
designs; predict the effects of quality control, marketing, and finance on product or process cost.
5 – A major class team project requires establishing a mock business to design and produce a product,
and the grade is based at least partly upon the financial analysis and/or success of the endeavor.
4 – A major portion of the course discusses production costs, cost effects of alternative designs, and the
interaction of functions in a business that relate to these costs, such as quality control, marketing, and
finance. Case studies, outside reading assignments, and guest speakers are used to reinforce concepts.
3 – A major portion of the course discusses production costs, cost effects of alternative designs, and the
interaction of functions in a business that relate to these costs, such as quality control, marketing, and
finance.
2 – Cost implications and their effect on other functions of a business are discussed. Appropriate manufacturers' catalogs are used to verify cost.
245
Teamwork Skills – Ability to: work together to set and meet team goals; encourage participation among
all team members; listen and cooperate; share information and help reconcile differences of opinion when
they occur.
5 – Students work in a structured team during the entire quarter. Roles and responsibilities of each team
member are detailed. Students are graded and given feedback on the “output” of the team (written or
oral report or completed project). Students are also graded by observations made by the instructor on
the team work skills of each student. The majority of the grade is based on this team project. In-
cludes significant instruction on teamwork.
4 – Students work in a structured team during the entire quarter. Roles and responsibilities of each team
member are detailed. Students are graded and given feedback on the “output” of the team (written or
oral report or completed project). Students are also graded by observations made by the instructor on
the team work skills of each student. The majority of the grade is based on this team project. Course
contains some instruction on teamwork and how to define roles.
3 – Students work in teams on a majority of the course assignments. Most of the course grade is based on
assignments worked on in teams (>50%).
2 – Students are in teams for laboratory work, lab reports/papers, and HW assignments. Assignments
worked on in teams are not the majority of the course grade (<50%).
0 – Students may study for exams together, but all graded assignments are individual efforts.
Creative Problem Solving – Ability to: apply a design process to solve open-ended problems; generate
new ideas and develop multiple potential solutions; challenge traditional approaches and solutions.
5 – Course revolves around design. Course should contain one significant design problem or several
smaller design problems so that design is part of the course during the entire quarter. This course
should include the application of a design process and the consideration of multiple solutions to any
given problem. The course should also include consistent guidance and feedback.
4 – Course contains many open-ended problems and a large design project, or course contains many
open-ended problems and several smaller design projects. Design component does not last for entire
quarter, and instruction on design and design process, and guidance and feedback are limited.
3 – Course contains many open-ended problems and a small multi-week design project.
OR
Course contains one significant design project with multiple solutions considered and some open-
ended problems.
OR
Course revolves around open-ended problem solving.
2 – Course contains many open-ended problems or course contains a small, multi-week design project.
246
System Thinking Skills – Ability to: understand how events interrelate; synthesize new information with
knowledge from previous courses and experiences to solve problems.
5 – Course relies on previous student experiences. Students are given assignments where they analyze all
of the possible effects and interactions of process or product variables that affect the outcome, such as
realistic problem solving or troubleshooting. Students are given assignments that rely heavily on ma-
terial presented in prerequisite courses. System thinking and development of such skills are the major
focus of the course. Course contains significant instruction on the development of system thinking
skills.
4 – Course relies on previous student experiences. Students are given assignments where they analyze all
of the possible effects and interactions of process or product variables that affect the outcome. Real-
istic problem solving and troubleshooting are possible assignments. Course contains significant in-
struction on the development of system thinking.
3 – Course relies on previous student experiences. Assignments rely on the students’ existing system
thinking ability. Students are given instruction on system thinking, but it is not a major focus of the
course.
2 – Course relies on minimal previous student experiences. Students are given instruction on system
thinking, but focus on the development of skills is limited.
1 – Course relies on no previous student experiences. System thinking is mentioned in the course, but
course does not focus on the development of skills.
0 – Course relies on no previous student experiences. Course covers topics without discussing relation-
ships with other fields or areas.
Technology Skills – Ability to: properly use industrial-quality technology appropriate to field; adapt to
new technology; integrate existing technology to create new possibilities.
5 – Students make significant use of modern, industrial-quality technology of the kind that students
would be expected to use if they were hired today.
4 – Students make some use of modern, industrial-quality technology, or make significant use of older
industrial technology of the type that once was common in industry, but is now out of date.
3 – Students make some use of older industrial-quality technology, or make significant use of instruc-
tional technology that mimics the type of technology used in industry, but is not of the quality or ca-
pability to be considered for an industrial environment.
2 – Students make use of some instructional technology, or course exposes students to industrial-quality
technology through trips to companies, but students are not allowed to operate equipment.
1 – Students are exposed to some instructional technology through demonstrations or course exposes stu-
dents to some industrial-quality technology through videos or pictures, but they do not visit the actual
technology.
247
Self-learning Skills – Ability to: learn independently; continuously seek to acquire new knowledge; ac-
quire relevant knowledge from outside sources to solve problems.
5 – Entire grade is based on independent work. Student performs the complete investigations, and also
has selected the problem and written the goals and objectives.
4 – Entire grade is based on independent work. Student performs the complete investigations, but the
problem and potentially the goals and objectives come from another source.
3 – A major part of the grade is based on oral or written reports, or student designed lab projects that re-
quire individual investigation.
2 – A small part of the course grade is based on oral or written reports, or student designed lab projects
that require individual investigation.
1 – Homework and lab assignments require some research by the student in areas that are not covered in
the lecture or text.
0 – All course material is covered in a lecture. All homework assignments and lab projects follow an as-
signed procedure.
Ethics and Professionalism – Ability to: understand and demonstrate professional and ethical behavior;
understand social and ethical implications and interrelations of work, and respond in a responsible and
professional manner.
5 – Subject of the class centers on ethics/human values/professionalism/and technology. Most class dis-
cussions and lectures are directly targeted at ethics/technology/engineering. [Technology and Human
Values, for example]. Students write/discuss/research/present on these issues multiple times. The
class is for this purpose. Students consider the issues each class and have time to reflect upon these
difficult issues. Students often present their views to an audience.
2 – The instructor includes ethical/value issues multiple times during the quarter as part of a lec-
ture/discussion, but the focus of the lectures still remain on other aspects of technology.
1 – The topics addressing ethics/human values/technology are brought up occasionally in lecture or dis-
cussion. No particular focus or structure, but the instructor makes sure the issue is addressed occa-
sionally when appropriate.
248
Programming Skills – Ability to: use higher level, structured programming languages to write effective
and efficient code to complete a task such as modeling or calculation, or control equipment; under-
stand and adapt existing structured programs.
Note: A higher level programming language is one that includes structured concepts such as decision
operators (if, etc.), looping operators (while, for, etc.), and subroutines or functions.
5 – Learning higher level programming language and writing programs is the main purpose of the class.
Students write multiple programs with significant instruction and feedback. Course contains at least
one assignment where program is improved over a period of time.
4 – Students are required to learn a higher level programming language to complete a significant project
or projects in the class. Programming instruction is limited, but completing the programming is an
essential component of the class.
3 – Students are required to utilize a higher level programming language in the course on some assign-
ment, but do not necessarily have to learn any new aspects of it or require it to complete major pro-
jects.
OR
Students are required to learn a new, lower level programming language and utilize it for a significant
amount of the course assignments, or on a large project or projects.
2 – Students are required to utilize lower level program language to complete some assignments. Instruc-
tion on the topic is limited.
1 – Students are required to utilize programs where they can write small macros to complete assignments.
No instruction on the programming aspect is provided in the course.
249
250
Surveys
Guidelines: Protocol for pilot testing
Time required: Approximately one hour
Subjects: 5‐10 undergraduate students. The students should have the characteristics of your
target group (e.g., seniors, first‐year, program, etc.)
The students will pilot test a survey that will be administered to their cohort. The purpose
is to provide feedback on their understanding and perception of the survey items. The
responses of the students participating in the pilot test are not going to be recorded or
reported to anyone except those who are designing the survey.
Process:
1. Fold the questionnaire on the dotted line so that the respondents ONLY see the survey
itself.
2. After explaining the value of their participation, hand out the questionnaire so they can
only see the questions.
3. Indicate to the participants that we would like them to take the survey seriously and
respond to the items thoughtfully. They MAY NOT ask questions as they go through the
items but need to take the survey as they would under normal circumstances.
4. Take note of how long it takes students to respond to all items (it may vary from one
student to another, but should not vary by much).
5. After the students have completed the survey, have them open the survey and indicate
that we would like them to respond to each survey item in four ways. Review with them
the meaning of each of the headers.
A. Understanding: Was the item "understandable"? That is, did you have to read the item more than once to understand what it was asking? Was the meaning of the question clear and straightforward?
B. Scaling: Was the scale (very little…..very much) adequate? That is, do you feel the
scale provided you with an appropriate way to respond?
C. Only one response: Was the item written in such a way that you could have
answered it more than one way? That is, could you have said BOTH “very little” and
“very much”?
D. Loading: In your opinion, was the item written in such a way that it led you to ONLY one OBVIOUS answer? In other words, was the item worded so that students, regardless of year in college, would be highly likely to respond the same way?
6. Have the students respond to each item by circling “yes/no” for each item. This will
allow us to document their responses for cross‐campus comparisons.
7. After all the students have independently responded to the four items for each of the 12 survey questions, ask them to discuss with you any of the items that have a "no" response. Start with "understanding" and discuss any item that has a "no" rating in that column. Have them discuss why they responded that way. TAKE NOTES!!! We will evaluate their responses to make modifications to the items. Do this for each of the four areas of focus.
8. Ask students to explain what they believe we meant by each item, especially 5, 6, 9, 11, and 12. As I recall, these are the items whose wording gave us the most difficulty. If you have already focused on some of these questions during step #5 and you feel you have a good grasp of any disconnect between what they think we are asking and what we think we are asking, then you can skip those items.
9. Ask students if they found any of the questions to be “emotionally laden.” For example,
did they find any of the items offensive or insulting?
10. Prepare a summary of all concerns about survey items to guide developers in improving
the quality of the survey.
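One possible way to capture the pilot testers' circled responses for the step-10 summary is sketched below. The data structure and item numbers are hypothetical assumptions, not a prescribed format; the point is simply to flag items that drew any "no" ratings in the four review categories.

# A minimal sketch (hypothetical data, not a prescribed format) for tallying
# the pilot testers' yes/no reviews so the summary can flag items that drew
# "no" responses in any of the four review categories.

CATEGORIES = ["understanding", "scaling", "only_one_response", "loading"]

# One dict per student: item number -> {category: "yes"/"no"}
pilot_reviews = [
    {1: {"understanding": "yes", "scaling": "yes", "only_one_response": "yes", "loading": "no"},
     5: {"understanding": "no",  "scaling": "yes", "only_one_response": "yes", "loading": "yes"}},
    {1: {"understanding": "yes", "scaling": "no",  "only_one_response": "yes", "loading": "yes"},
     5: {"understanding": "no",  "scaling": "yes", "only_one_response": "yes", "loading": "yes"}},
]

def summarize(reviews):
    """Count 'no' responses per item and category to guide item revision."""
    counts = {}
    for student in reviews:
        for item, ratings in student.items():
            per_item = counts.setdefault(item, {c: 0 for c in CATEGORIES})
            for category, answer in ratings.items():
                if answer == "no":
                    per_item[category] += 1
    return counts

for item, no_counts in sorted(summarize(pilot_reviews).items()):
    flagged = {c: n for c, n in no_counts.items() if n > 0}
    if flagged:
        print(f"Item {item}: revisit -> {flagged}")

Running the sketch on the sample data above would flag item 1 for scaling and loading concerns and item 5 for understanding concerns, which is the kind of summary step 10 asks for.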
SURVEY QUESTIONS
For items 1-12, please use the scale below to indicate the response that most closely represents your experience.
1 = Very Little    2 = Some    3 = Quite a bit    4 = Very Much
To what extent has your college education contributed to your learning in the following areas?
(For the pilot test, each item also carries four Yes/No review columns: Understandable? Scale adequate? Only one response possible? Loading? plus a Comments column.)
1. Using mathematical methods and procedures to solve the types of technical problems I will face in my career.
2. Using scientific research methods and procedures to solve the types of problems I will face in my career.
3. Using engineering methods and procedures to solve the types of problems I will face in my career.
4. Conducting scientific investigation, including problem identification and setting up and interpreting an experiment or investigation.
5. Designing a system, component, or process to meet a need.
6. Using team process skills necessary to be an effective member of a team.
8. Understanding the code of ethics for my chosen profession.
Mistakes To Avoid With Your Survey
Adapted from:
(http://www.kdv.com/management/nonprofit/12mistakes.pdf.)
Surveys are possibly the best way for a program to assess its constituents' attitudes and needs. If enough constituents respond to give you a representative sample, you can use the results to make decisions about services and programs. Often, however, surveys are not successful. When surveying, be careful to avoid these common mistakes, which invalidate results and decrease response rates, wasting your already limited time and resources.
Mistake No. 1: Not surveying. Not surveying at all is the very first mistake. Knowing your audience will
enable you to keep your program continuously updated. Surveying (and, as we will later discuss,
resurveying) is the key to opening the lines of communication with your constituents.
Mistake No. 2: Not pilot testing the survey. Before administering the survey to your constituent group, conduct a small pilot test. Use this opportunity to troubleshoot the survey before spending time and resources on the larger survey. You may discover, for example, that your survey is not eliciting useful data or that it is too complicated.
Mistake No. 3: Overloading the survey. Too many questions, complicated ranking systems, and requests to rate dozens of items all demand too much time and effort from survey participants. Make completing the survey as painless as possible. A good response rate to a survey covering a few important areas is better than a dismal response rate to a comprehensive survey.
Mistake No. 4: Bad timing. When constituents are either vacationing or in the middle of a busy time, they will have neither the time nor the inclination to fill out a survey. By sending a survey during one of these periods, you not only risk a lower response rate but also the appearance of being indifferent to the time constraints of your constituents.
Mistake No. 5: Including the survey with something else. By including the survey link along with other information, such as a newsletter, you risk a lower response rate: only those who read that particular mailing will respond to the survey. This creates a bias in the responses and skews the results.
Mistake No. 6: Failing to focus. Prioritize the areas you want to know about. It is impossible for your survey to cover everything, and if you try to make it do so, you will lower your response rate. Instead, focus only on the most important issues.
Mistake No. 7: Not following up on your survey. Survey experts are unanimous in the opinion that the
most effective technique for increasing the response rate of a survey is follow‐up requests. Send a
carefully worded request to remind recipients of the survey and how valuable their input is. This is a
sure way to get more responses. The acceptable response rate for your survey is up to you, but most
experts agree that to draw valid conclusions, you need at least a 50% response rate and, ideally,
somewhere in the neighborhood of 80%. How low or high your response rate is will dictate how many
follow‐ups you need to do in order to reach the acceptable level of response on which you have
decided.
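As a quick arithmetic check on the benchmarks above, the short sketch below (with hypothetical numbers of surveys sent and returned) shows how a program might track its response rate against the 50% and 80% targets.

# Hypothetical numbers: 200 surveys sent, 87 returned so far.
import math

sent, returned = 200, 87

rate = returned / sent                     # 87 / 200 = 0.435
print(f"Response rate so far: {rate:.0%}")

for label, target in [("minimum for valid conclusions", 0.50),
                      ("ideal", 0.80)]:
    needed = max(0, math.ceil(target * sent) - returned)
    print(f"{label} ({target:.0%}): {needed} more responses needed")

With these hypothetical figures, the program is at roughly 44% and would need 13 more responses to reach the 50% floor and 73 more to reach the 80% ideal, which suggests how many rounds of follow-up may be required.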
Mistake No. 8: Not acting on the results. Use the information from the survey as a guide to implement changes. If your survey is conducted correctly, you can treat the results as valid, and they should be acted on.
Mistake No. 9: Not communicating results. Be sure to thank your constituents and let them know what you learned and what you did with the information. This will encourage them to respond to future surveys and also keep them connected to the program.
Establishing Timelines
Establishing Timelines and Responsibilities
‐An Example‐
In program assessment planning, it is important that common sense prevail. Processes must be established
that capitalize on what is already being done and complement the work of the faculty. Decisions will need to
be made. Just as faculty cannot teach the universe of all concepts and skills related to a single course,
programs cannot assess everything that they believe students should know or be able to do by the time of
graduation. As decisions are made and as assessment and evaluation processes are developed, planning should be systematic and for the long term.
The timeline illustrated in Table 1 demonstrates a three-year cycle in which each outcome is assessed every three years. Because there are only six outcomes, data collection takes place on only two outcomes per year. The timeline provides for two cycles of data collection every six years.
Table 1. Three-year data collection timeline for six learning outcomes (each with measurable performance indicators, e.g., a recognition of ethical and professional responsibilities), with two outcomes assessed per year across academic years '12-13 through '17-18.
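To illustrate the rotation that Table 1 describes, the minimal sketch below generates a two-outcomes-per-year schedule for six outcomes over six years. The outcome names are taken from Table 3; the pairing of outcomes with particular years is an illustrative assumption, not the program's actual plan.

# A minimal sketch of the rotation behind Table 1: six outcomes, two assessed
# per year, so each outcome comes up once every three years and twice over the
# six-year window. The pairing of outcomes to years is illustrative only.

outcomes = ["Contemporary Issues", "Communication", "Cultural",
            "Teams", "Global", "Ethics"]
years = ["'12-13", "'13-14", "'14-15", "'15-16", "'16-17", "'17-18"]

schedule = {}
for i, year in enumerate(years):
    cycle_year = i % 3                      # position within the 3-year cycle
    schedule[year] = outcomes[2 * cycle_year: 2 * cycle_year + 2]

for year, assessed in schedule.items():
    print(f"{year}: collect data on {assessed[0]} and {assessed[1]}")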
Table 1 can be misleading in that, during a year when data collection is taking place on some of the outcomes, activities related to other outcomes are also under way. Table 2 below represents an assessment and evaluation timeline for multiple processes for a single outcome.
To get a general view of what one cycle of an assessment program might look like, Table 3 represents three
academic years of activity for six learning outcomes by assessment and evaluation activities.
Table 3. Three-year cycle of assessment and evaluation activity. Columns: the six outcomes (Contemporary Issues, Communication, Cultural, Teams, Global, Ethics), each shown across the three academic years. Rows (activities): review of performance indicators defining the outcome; map educational strategies related to performance indicators; review mapping and identify where data will be collected; develop or review assessment methods related to the outcome; collect and analyze data; evaluate assessment data, including processes; report findings; take action where necessary.
Although this appears to require considerable effort, not all assessment activities need to be done by the
same person or group. Table 4 suggests that there are multiple parties involved in the assessment and
evaluation cycle. It is important to plan strategically and systematically so that the workload is reasonable
and appropriately distributed.
These tables are for illustrative purposes only. In order to close the loop on the assessment and
evaluation process, it is important to plan with the end in mind. Creating a multi‐year timeline will
help to shape thinking about the activities involved in program assessment. It will also help to avoid
taking on too much in the beginning and encourage systematic planning over time.
Tables such as these should be seen only as tools to assist in administering and communicating the process. Whenever it is found that the processes need to be altered, the information in the tables should change accordingly. For example, after multiple rounds of data collection and analysis it may be found that one or more outcomes are consistently achieved at a high level, whereas for other outcomes the program cannot demonstrate adequate achievement. This could lead to more frequent data collection and evaluation for some outcomes and less frequent for others. The overall process needs to be designed to answer questions that are of interest to the program. "Systematic" does not mean "etched in stone." If you need to change your processes and/or cycles of activity, then it should be done.
Adapted from Assessment Planning Flow Chart©2004, Gloria M. Rogers, Ph.D., ABET, Inc. ([email protected]) Copyright 2008
Learning Styles
LEARNING STYLES AND STRATEGIES
Richard M. Felder
Hoechst Celanese Professor of Chemical Engineering
North Carolina State University
Barbara A. Soloman
Coordinator of Advising, First Year College
North Carolina State University
Everybody is active sometimes and reflective sometimes. Your preference for one
category or the other may be strong, moderate, or mild. A balance of the two is
desirable. If you always act before reflecting you can jump into things prematurely
and get into trouble, while if you spend too much time reflecting you may never get
anything done.
If you are an active learner in a class that allows little or no class time for discussion
or problem-solving activities, you should try to compensate for these lacks when you
study. Study in a group in which the members take turns explaining different topics to
each other. Work with others to guess what you will be asked on the next test and
figure out how you will answer. You will always retain information better if you find
ways to do something with it.
If you are a reflective learner in a class that allows little or no class time for thinking
about new information, you should try to compensate for this lack when you study.
Don't simply read or memorize the material; stop periodically to review what you
have read and to think of possible questions or applications. You might find it helpful
to write short summaries of readings or class notes in your own words. Doing so may
take extra time but will enable you to retain the material more effectively.
Sensing learners tend to like learning facts; intuitive learners often prefer discovering possibilities and relationships.
Sensors often like solving problems by well-established methods and dislike
complications and surprises; intuitors like innovation and dislike repetition.
Sensors are more likely than intuitors to resent being tested on material that has
not been explicitly covered in class.
Sensors tend to be patient with details and good at memorizing facts and doing
hands-on (laboratory) work; intuitors may be better at grasping new concepts
and are often more comfortable than sensors with abstractions and
mathematical formulations.
Sensors tend to be more practical and careful than intuitors; intuitors tend to
work faster and to be more innovative than sensors.
Sensors don't like courses that have no apparent connection to the real world;
intuitors don't like "plug-and-chug" courses that involve a lot of memorization
and routine calculations.
Everybody is sensing sometimes and intuitive sometimes. Your preference for one or
the other may be strong, moderate, or mild. To be effective as a learner and problem
solver, you need to be able to function both ways. If you overemphasize intuition, you
may miss important details or make careless mistakes in calculations or hands-on
work; if you overemphasize sensing, you may rely too much on memorization and
familiar methods and not concentrate enough on understanding and innovative
thinking.
Sensors remember and understand information best if they can see how it connects to
the real world. If you are in a class where most of the material is abstract and
theoretical, you may have difficulty. Ask your instructor for specific examples of
concepts and procedures, and find out how the concepts apply in practice. If the
teacher does not provide enough specifics, try to find some in your course text or
other references or by brainstorming with friends or classmates.
Many college lecture classes are aimed at intuitors. However, if you are an intuitor
and you happen to be in a class that deals primarily with memorization and rote
substitution in formulas, you may have trouble with boredom. Ask your instructor for
interpretations or theories that link the facts, or try to find the connections yourself.
You may also be prone to careless mistakes on tests because you are impatient with details and don't like repetition (as in checking your completed solutions). Take time to read the entire question before you start answering, and be sure to check your results.
Visual learners remember best what they see--pictures, diagrams, flow charts, time
lines, films, and demonstrations. Verbal learners get more out of words--written and
spoken explanations. Everyone learns more when information is presented both
visually and verbally.
In most college classes very little visual information is presented: students mainly
listen to lectures and read material written on chalkboards and in textbooks and
handouts. Unfortunately, most people are visual learners, which means that most students do not get nearly as much out of class as they would if more visual presentation were used. Good learners are capable of processing information presented either
visually or verbally.
If you are a visual learner, try to find diagrams, sketches, schematics, photographs,
flow charts, or any other visual representation of course material that is predominantly
verbal. Ask your instructor, consult reference books, and see if any videotapes or CD-
ROM displays of the course material are available. Prepare a concept map by listing
key points, enclosing them in boxes or circles, and drawing lines with arrows between
concepts to show connections. Color-code your notes with a highlighter so that
everything relating to one topic is the same color.
Write summaries or outlines of course material in your own words. Working in groups
can be particularly effective: you gain understanding of material by hearing
classmates' explanations and you learn even more when you do the explaining.
Sequential learners tend to gain understanding in linear steps, with each step
following logically from the previous one. Global learners tend to learn in large
jumps, absorbing material almost randomly without seeing connections, and
then suddenly "getting it."
Sequential learners tend to follow logical stepwise paths in finding solutions;
global learners may be able to solve complex problems quickly or put things
together in novel ways once they have grasped the big picture, but they may
have difficulty explaining how they did it.
Many people who read this description may conclude incorrectly that they are global,
since everyone has experienced bewilderment followed by a sudden flash of
understanding. What makes you global or not is what happens before the light bulb
goes on. Sequential learners may not fully understand the material but they can
nevertheless do something with it (like solve the homework problems or pass the test)
since the pieces they have absorbed are logically connected. Strongly global learners
who lack good sequential thinking abilities, on the other hand, may have serious
difficulties until they have the big picture. Even after they have it, they may be fuzzy
about the details of the subject, while sequential learners may know a lot about
specific aspects of a subject but may have trouble relating them to different aspects of
the same subject or to different subjects.
Most college courses are taught in a sequential manner. However, if you are a
sequential learner and you have an instructor who jumps around from topic to topic or
skips steps, you may have difficulty following and remembering. Ask the instructor to
fill in the skipped steps, or fill them in yourself by consulting references. When you
are studying, take the time to outline the lecture material for yourself in logical order.
In the long run doing so will save you time. You might also try to strengthen your
global thinking skills by relating each new topic you study to things you already
know. The more you can do so, the deeper your understanding of the topic is likely to
be.
If you are a global learner, it can be helpful for you to realize that you need the big
picture of a subject before you can master details. If your instructor plunges directly
into new topics without bothering to explain how they relate to what you already
know, it can cause problems for you. Fortunately, there are steps you can take that
may help you get the big picture more rapidly. Before you begin to study the first
section of a chapter in a text, skim through the entire chapter to get an overview.
Doing so may be time-consuming initially but it may save you from going over and
over individual parts later. Instead of spending a short time on every subject every
night, you might find it more productive to immerse yourself in individual subjects for
large blocks. Try to relate the subject to things you already know, either by asking the
instructor to help you see connections or by consulting references. Above all, don't
lose faith in yourself; you will eventually understand the new material, and once you
do your understanding of how it connects to other topics and disciplines may enable
you to apply it in ways that most sequential thinkers would never dream of.
http://www4.ncsu.edu/unity/lockers/users/f/felder/public/ILSdir/styles.htm