
Research Methods in Psychology

Syllabus
 Module 1: Theoretical Background (meaning,
objectives, types, ethics)
 Module 2: The Research Process (steps,
sampling, variables)
 Module 3: Research Designs (experimental,
non-experimental, n=1 designs, other
approaches)
 Module 4: Test construction, measurement,
reliability, validity
 Module 5: Data analysis and presenting your
research
2
What is Research?
 Stresses a certain way of thinking about the
world
 Emphasizes causality, criticality and
objectivity
 Establishes Psychology as a science, not a
pseudoscience

3
What is Research?
 Begins with a problem, is reflected in a
hypothesis
 Unfolds through a research design
 Ends with data analysis, conclusions,
implications

4
What Distinguishes Research in
Psychology?
Relatively...
 Approach – natural science vs human science
 Purpose – pure vs applied
 Objects of study – substances vs humans,
animals
 Understanding of causality – linear vs complex
 Results and interpretation – varying degrees
of conclusiveness

5
Why Study Research in Psychology?
 To gather and interpret information, add to
knowledge
 To conduct research ourselves
 To understand and evaluate research
 To make informed, evidence-based decisions

6
Five Classic Experiments
 Asch’s Conformity Experiment
 Milgram’s Peer Shock Administration
 Stanford Prison Experiment
 Bandura’s Bobo Doll Experiment
 Not so classic, yet...Joshua Bell

7
Definition, Objectives and Types
of Research

8
Process of Research
 Answering certain kinds of questions
 Within a theoretical framework
 Using standard procedures and methods
 In a manner free of bias and subjectivity

9
Definitions of Research

Burns (1994)
“a systematic investigation to find answers to
a problem”

Kerlinger (1986)
“scientific research is a systematic, controlled,
empirical and critical investigation of
propositions about the presumed relationships
between various phenomena.”

10
Definitions of Research

Slesinger & Stephenson (1930) define research as
“the manipulation of things, concepts or
symbols for the purpose of generalizing to
extend, correct or verify knowledge, whether
that knowledge aids in construction of theory
or in the practice of an art”

11
A Word About Knowledge
 Knowledge as the overarching goal of “re-
search”
 Knowledge as ever changing
 Different ways to arrive at knowledge

12
A Word About Knowledge
Methods of Knowing
 Method of Tenacity
 Method of Intuition
 Method of Authority/Faith
 The Rational Method
 The Empirical Method
 The Scientific Method

14
Features of Research
 Controlled
 Rigorous
 Systematic
 Valid
 Replicable
 Empirical
 Critical

15
Objectives of Research
 To gain familiarity or new insights into a
phenomenon
 To accurately describe features of an
individual/event
 To note the frequency of criterion behaviours
and allied factors
 To assess the validity of cause-effect
relationships

16
Types of Research
Research can be divided on the basis of:
 Applications
pure/fundamental vs applied

 Objectives
descriptive vs exploratory
correlational vs explanatory/experimental

 Mode of enquiry
quantitative vs qualitative

17
Types of Research - Applications

Pure/Fundamental
 Theory construction
 Improvements in methodology
 Intellectually challenging
 Highly specialized
 No direct practical value

Applied
 Practical implications
 Study a situation, parts of it
 Real-world implications
 Policy-making, administration

18
Types of Research - Objectives

Descriptive
 Results in a systematic description
 “as is”

Exploratory
 Little information, or to investigate possibilities
 Feasibility or pilot study

19
Types of Research - Objectives

Correlational
 Establish a relationship, association, interdependence
 Two or more aspects of a situation

Explanatory
 Clarifies the nature of the relationship
 Answers the questions “why” and “how”

20
Types of Research – Mode of Enquiry

Quantitative
 Structured approach
 Predetermined parameters

Qualitative
 Flexible, unstructured approach
 Explores the nature of a phenomenon

21
Steps in Research – An Eight Step
Model
Operational steps, with the theoretical background and intermediary
skills/knowledge each draws on:
I. Formulating a research problem (variables, hypothesis, steps, definitions; literature review*)
II. Conceptualizing a research design (types and functions of designs)
III. Tool construction (methods, reliability, validity; pilot study for the tool)
IV. Sample selection (sampling theory, types)
V. Writing a research proposal (contents)
VI. Data collection (editing, coding, entering data)
VII. Data analysis (different statistical methods, using ...)

22
Ethics in Research
Definitions
 “The study of proper action” – Ray (2000)
 “..in accordance with principles of conduct
that are considered correct, especially those
of a given profession or group”
Context
 Differs from profession to profession
 Ever-changing dynamics of each field
 Emphasis on accountability and clarity

23
Ethics in Research
Nature
 responsibility of researchers
 emphasis on honest and respectful treatment
 proper ethical conduct guided by the APA
guidelines
 apply to all aspects of research, mainly two
sections:
I. welfare and dignity of human/non-human
participants
II. accurate and honest public reporting of
research
24
A Timeline

25
APA Code of Ethics
I. No harm
II. Privacy and Confidentiality
III. Institutional Approval
IV. Competence
V. Record Keeping
VI. Informed Consent to Research
VII. Dispensing with Informed Consent
VIII. Offering Inducements for Research
Participation
IX. Deception in Research
X. Debriefing
26
Research Ethics -
Researcher-related/Planning
I. Avoiding bias
II. Provision/deprivation of a treatment
III. Using inappropriate research methodology
IV. Incorrect reporting
V. Possibility of harm

27
Research Ethics –
Participant-related/Conduction
I. Seeking consent
II. Collecting information
III. Providing incentives
IV. Seeking sensitive information
V. Maintaining confidentiality

28
Research Ethics – Presenting/Reporting
Findings
I. Reporting findings “as is” is essential
II. Restrictions brought on by sponsoring
bodies
III. Plagiarism

30
Plagiarism
 “representation of someone else’s ideas as
one’s own”
 Using content without crediting source or “as
is”
 Unintentional plagiarism & self-plagiarism
 In-text citations
 References – APA, MLA, etc.
 Block Quotes
 Checkers

31
Review, Problem, Hypothesis

32
Literature Review
 Brings in clarity about the research problem
 Improves methodology
 Broadens your knowledge base specifically in
the problem area
 Provides a context to your study

33
Steps in a Literature Review
 Search for studies: books, journals
 Review selected studies: sorting, assessing
whether they are sufficient
 Develop a theoretical framework: workable
model
 Develop a conceptual framework: final
research frame

34
Research Problem
 Any question, any assumption
 Researchability?
 Establishes a sense of direction, a first step
 Idea of the area, not the findings
 Confusion

35
Research Problem - Sources

Aspect            About       Study of
Study population  People      Individuals, organizations, groups –
                              those from whom you collect information
Subject area      Problems    Issues, needs – information that you
                              need to answer your research problem
                  Programs    Contents, structure, attributes
                  Phenomena   Causality, the working of the
                              phenomenon itself
36
Formulating a Research Problem -
Considerations
 Interest
 Magnitude
 Measurement of concepts
 Level of expertise
 Relevance
 Availability of data
 Ethical issues

37
Formulating a Research Problem -
Steps
 Identify the broad field or subject area of
interest
 Dissect the broad area into sub areas
 Select what interests you most among the sub
areas
 Raise research questions
 Formulate and assess objectives
 Double check

38
Research Objectives
 Types: broad/main and specific/sub-objectives
 Objectives should be:

I. Clear
II. Complete
III. Specific
IV. Identify and combine main variables
V. Provide a sense of direction to the proposed
relationship

39
Operational Definitions
 Breaking it down to specifics
 Establishing clear indicators of what concepts
mean
 Differ from actual dictionary meanings or day-
to-day understanding of a word
 Sometimes developed by the researcher

40
Hypothesis
 Testable statement of the proposed
relationship between two or more variables
 Null vs Alternative

Features of a good hypothesis:
I. Logical
II. Testable
III. Refutable
IV. Positive

41
Null Hypothesis Testing - Errors
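The error table that accompanied this slide did not survive extraction. As a rough illustration, here is a minimal Python simulation sketch (hypothetical data and effect size, not from the slides): a Type I error is rejecting a true null hypothesis, a Type II error is retaining a false one.

# Simulate both error rates for an independent-samples t test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha = 0.05
n_sims, n = 2000, 30

# Type I: both groups come from the SAME population (null is true),
# so every "significant" result is a false positive.
type1 = sum(
    stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0, 1, n)).pvalue < alpha
    for _ in range(n_sims)
) / n_sims

# Type II: the groups really differ by d = 0.5 (null is false),
# so every non-significant result is a miss.
type2 = sum(
    stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0.5, 1, n)).pvalue >= alpha
    for _ in range(n_sims)
) / n_sims

print(f"Type I rate ~ {type1:.3f} (hovers near alpha = {alpha})")
print(f"Type II rate ~ {type2:.3f}; power ~ {1 - type2:.3f}")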

42
What About This?

Married couples who regularly attend religious
services have more stable relationships than
couples who do not.

43
Sampling – Basics & Strategies

44
Sampling in a Nutshell
 The process of selecting individuals for a study
 Estimating population parameters from sample statistics
 Population parameter = sample statistic + sampling error
 Estimation accuracy depends on:
I. sample size
II. representativeness
III. variability within the data

45
Sampling Basics
Population/Universe
Entire set of individuals of interest to the
researcher

Sample
Set of individuals selected from a population for
the purpose of representing the population in
a research study

46
Sampling Basics
Representativeness
The extent to which the characteristics of the sample
accurately reflect the characteristics of the population

Representative sample
A sample with the same characteristics as the population

Sampling/Selection bias
When participants are selected in a manner that increases
the likelihood of obtaining a biased sample, i.e., one with
characteristics different from those of the population

47
Sample Size
The Basic Principle
The larger the sample size, the more likely it is that the values
obtained from the sample will be similar to those of the population

Law of Large Numbers
“Discrepancy between a sample and its population
decreases in relation to the square root of the sample size”

Leads to n=30 being a standard when planning research

Adequacy of data in qualitative studies is judged by saturation
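A minimal Python sketch of the square-root law, using a hypothetical normally distributed population: across repeated samples, the average sampling error roughly halves each time n quadruples.

import numpy as np

rng = np.random.default_rng(0)
population = rng.normal(loc=100, scale=15, size=100_000)  # hypothetical trait scores
mu = population.mean()

for n in (10, 30, 100, 1000):
    # average absolute sampling error over many repeated samples of size n
    errors = [abs(rng.choice(population, n).mean() - mu) for _ in range(500)]
    print(f"n={n:5d}  mean |sampling error| = {np.mean(errors):.3f}")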

48
Randomization
The Basic Principle
I. Any one out of a set of possible outcomes occurs
II. Basis of occurrence is purely random, unpredictable
III. Equality and independence
IV. Three standard techniques
V. More an ideal to strive for than something that
can be completely ensured

49
Sampling Strategies

Probability
 Exact size and members of population known – finite
 Equal and known chance of inclusion
 Selection through a random process
 Rarely used in behavioural research
 An ideal, sets standards

Non-Probability
 No knowledge of population size
 Hence, no knowledge of probability of inclusion
 No strict emphasis on an unbiased process
 Common sense or ease as criteria

50
Probability Sampling: Simple Random
Sampling
 The most basic approach
 Equal and independent chance of inclusion in
the sample
I. Sampling with replacement
II. Sampling without replacement
 An ideal, a key assumption in statistical
analysis
 Define the population, list its members, select
a random process
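A minimal Python sketch of simple random sampling from a hypothetical frame of student IDs: random.sample draws without replacement, random.choices with replacement.

import random

random.seed(1)
population = [f"S{i:03d}" for i in range(1, 501)]  # define and list the population

without_replacement = random.sample(population, k=30)
with_replacement = random.choices(population, k=30)  # the same member may repeat

print(without_replacement[:5])
print(with_replacement[:5])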

51
Probability Sampling: Simple Random
Sampling

52
Probability Sampling: Systematic
Sampling
 Similar to random sampling
 Sample selected from a list of the population
 Based on an interval denoted by ‘k’, where
k = population size / sample size
 From a random starting point, every k-th
member is picked
 Undermines independence but increases
representativeness
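A minimal Python sketch with the same hypothetical frame: compute the interval k, pick a random starting point within the first interval, then take every k-th member.

import random

random.seed(2)
population = [f"S{i:03d}" for i in range(1, 501)]
n = 25
k = len(population) // n            # sampling interval, k = population / sample
start = random.randrange(k)         # random starting point within the first interval
sample = population[start::k][:n]   # every k-th member thereafter
print(k, sample[:5])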

53
Probability Sampling: Systematic
Sampling

54
Probability Sampling: Stratified
Random Sampling
 Attempts to curtail the variability of the sample
 Through a process of careful stratification into
subgroups
 Ensures that each stratum is homogeneous
vis-à-vis the criteria (age, gender, SES)
 These variables are related to main/study
variables
 Ensures non-overlap between various
characteristics
 Types based on size of stratum

I. Proportionate stratified sampling


II. Disproportionate stratified sampling
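A minimal Python sketch of proportionate stratified random sampling, assuming a hypothetical frame with one stratifying variable: each stratum receives a quota proportional to its share of the population and is then sampled randomly.

import random
from collections import defaultdict

random.seed(3)
frame = [("S%03d" % i, "F" if i % 3 else "M") for i in range(1, 301)]
n_total = 30

strata = defaultdict(list)
for sid, gender in frame:
    strata[gender].append(sid)       # stratify on the blocking variable

sample = []
for gender, members in strata.items():
    share = round(n_total * len(members) / len(frame))  # proportionate quota
    sample += random.sample(members, share)             # SRS within each stratum
print(len(sample), sample[:5])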

55
Probability Sampling: Stratified
Random Sampling

56
Probability Sampling: Cluster Sampling
 Suitable for large populations with well-
defined clusters
 Divided into clusters based on geography,
known relationship to the variable etc
 Clusters can be made at different levels
 Sample from the different clusters is drawn
using simple random sampling
 Independence may be compromised; difficult
to establish water-tight clusters
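A minimal Python sketch of one-stage cluster sampling with hypothetical classrooms as clusters: clusters are drawn by simple random sampling and every member of a chosen cluster enters the sample.

import random

random.seed(4)
clusters = {f"class_{c}": [f"{c}-{i}" for i in range(30)] for c in "ABCDEFGH"}

chosen = random.sample(sorted(clusters), k=3)   # SRS at the cluster level
sample = [member for c in chosen for member in clusters[c]]
print(chosen, len(sample))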

57
Probability Sampling: Cluster Sampling

58
Non-Probability Sampling: Quota
Sampling
 Based on ease of access + certain visible
features
 Quotas are created to represent certain
population features proportionately in the
sample
 Continues until predetermined quotas are
filled
 Ensures a broadly representative sample
 Least demanding technique but high risk of
biased sample

59
Non-Probability Sampling: Quota
Sampling

60
Non-Probability Sampling: Accidental
Sampling
 AKA convenience/haphazard/availability
sampling
 No attempt to ensure representativeness,
unlike QS
 Ease, availability and willingness
 Not a strong technique but widely used
 Ways to reduce bias:
I. Select a broad cross-section of the population
II. Describe clearly the method adopted to get
from N to n
III. Determine proportionate quotas

61
Non-Probability Sampling: Accidental
Sampling

62
Non-Probability Sampling: Purposive
Sampling
 AKA judgmental sampling
 Researcher identifies the best sources of data
 Ideal for lesser-known phenomena,
constructing personal or historical realities

63
Non-Probability Sampling: Snowball
Sampling
 Selecting a sample using social networks
 Few individuals selected, assessed and requested
to identify others – link-tracing methodologies
 Process repeated till sample size met or data
saturation
 Suited for groups about which little is known, or
among whom you want to disseminate
information
 Also sheds light on other facets of the group
 Highly dependent on choice of people and success
at stage 1, limited to small sample size

64
Non-Probability Sampling: Snowball
Sampling

65
And After All That...

66
Variables: Definition & Types

67
Variables

68
Variables - Definitions
 Any image, perception or concept capable of
being measured or taking on different values
is a variable.

 Kerlinger (1986) defined a variable as “a
property that takes on different values”

 Black and Champion (1976) define variables
as “rational units of analysis that can assume
any one of a number of designated sets of
values.”

69
Variables - Types
Based on causation
 Independent, intervening, extraneous,
dependent

Based on study design
 Active and attribute variables

Based on units of measurement
 Quantitative and qualitative variables
 Continuous and categorical/discrete variables

70
Variables – Based on Causation
 Factors that cause the change
 Effects of the change show in outcomes
 Variables that affect the causal link
 Variables that are intermediate in the causal
chain

71
Variables – Based on Study Design

Active
 Amenable to change
 Typically take the form of various aspects of
an intervention
 Change and control are key features

Attribute
 Cannot be changed or manipulated
 Reflect the characteristics of the study population
 Very often, fixed properties

72
Variables – Based on Units of
Measurement

73
Variables – Based on Units of
Measurement
 Categorical/qualitative variables: nominal +
ordinal scales
I. Constant
II. Dichotomous
III. Polytomous
 Continuous/quantitative variables: interval +
ratio scales

74
Research Design – Basics

75
Research Design

 A procedural plan that is adopted by the
researcher to answer questions validly,
objectively, accurately and economically.
 Based on key determinants like:
I. Group vs individual
II. Same individuals vs different individuals
III. Number of variables under study

76
Research Design

 “A traditional research design is a blueprint or
detailed plan for how a research study is to be
completed—operationalizing variables so they
can be measured, selecting a sample of
interest to study, collecting data to be used as
a basis for testing hypotheses, and analysing
the results.”

Thyer, 1993

77
Research Design

 “A plan, structure and strategy of
investigation so conceived as to obtain
answers to research questions or problems.
The plan is the complete scheme or program
of the research. It includes an outline of what
the investigator will do from writing the
hypotheses and their operational implications
to the final analysis of data.”

Kerlinger, 1986

78
Different Research Strategies
I. Descriptive
II. Correlational
III. Experimental
IV. Quasi-Experimental
V. Non-experimental

79
Internal and External Validity

Internal
 Questions about results
 Requires a single clear explanation
 Any alternative explanation is a threat

External
 Measure of generalizability
 Place, time, settings, measures, features
 Sample to population
 One study to another

80
Threats to Internal Validity

All studies: Environmental Variables
Studies comparing groups: Assignment Bias
Studies comparing groups over time: History, Instrumentation,
Maturation, Testing Effects, Statistical Regression

81
Threats to External Validity

Participants: Selection Bias, College Students, Volunteer Bias,
Participant Characteristics, Cross-species Generalization
Features: Novelty Effect, Multiple Treatment Interference,
Experimenter Characteristics
Measures: Sensitization, Generality, Time of Measurement

82
Three Basic Principles (Ostle &
Mensing, 1975)
 Replication
provides an estimate of experimental error
 Randomization
ensures that the estimate is statistically valid
 Local Control
reduces experimental error by increasing efficiency

83
Variance
 Experimental Variance: Maximize
the experimental effect in “A leading to B”
 Error Variance: Minimize
uncontrollable variables, random errors
 Extraneous/Control Variance: Control
relevant variables, controlled in 5 ways

84
Experimental Design - Basics
 Establish causality through
I. Manipulation
II. Measurement
III. Comparison
IV. Control

85
Experimental Design - Elements
 Experiment
 Independent variable
 Treatment condition
 Levels
 Dependent variable
 Extraneous variables

86
Experimental Design - Elements
 True experiment?
 Third-variable problem
 Directionality problem

88
Experimental Design: Between &
Within
Between
 Different groups of scores from different groups
 Compares groups of individuals
 One score/participant*
 Individual differences, sample size
 Equivalent groups needed

Within
 Different groups of scores from the same group
 Compares two or more levels in a single group
 Repeated measures design
 Establishes equivalence, requires fewer members
 Time-related factors
 Attrition
89
Experimental Design
Between Group
I. Randomized Groups (two or more than two)
II. Matched Groups
III. Factorial Design
Within Group
I. Complete
II. Incomplete

93
Experimental Design: Between Groups

Two-Randomized Groups*
 Captive and/or sequential assignment
(complete and block)

Multi-group Randomized
 Three or more levels of IV
 Establishes the influence of the IV better
 Degrees of freedom

Matched Groups
 Matched to form equivalent groups or blocks
 Matching variable & methods
 Twin disadvantages
94
Blocking

95
Counterbalancing
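The figure for this slide did not survive extraction. As one concrete reading of the idea, here is a minimal Python sketch of complete counterbalancing for a three-condition within-group design: every possible order of conditions is used, so order effects are spread evenly across conditions.

from itertools import permutations

conditions = ["A", "B", "C"]
orders = list(permutations(conditions))   # 3! = 6 possible orders
for i, order in enumerate(orders):
    print(f"participant group {i + 1}: {' -> '.join(order)}")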

96
Balancing

97
Experimental Design – Factorial Design
 Complex interrelationships in realistic
situations
 Factors, levels and interactions
 Simple or complex
 Analysis occurs through an ANOVA

98
Experimental Design – Factorial Design
 Complex interrelationships in realistic
situations*
 Factors and levels
 Main effects and interactions
 Analysis

99
Experimental Design – Factorial Design

            No Light   Dim Light   Adequate Light
No Music    M=25       M=50        M=55
Music       M=35       M=60        M=75
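A minimal Python sketch reading the effects off the 2 x 3 cell means above: the marginal means give each factor's main effect, and unequal music-minus-no-music differences across lighting levels signal an interaction.

import numpy as np

means = np.array([[25, 50, 55],    # No Music: no / dim / adequate light
                  [35, 60, 75]])   # Music

print("main effect of music:", means.mean(axis=1))       # row marginal means
print("main effect of lighting:", means.mean(axis=0))    # column marginal means
print("music advantage per lighting level:", means[1] - means[0])
# Unequal differences (10, 10, 20) suggest a music x lighting interaction.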

100
Experimental Design – Single Subject
Design
 Single-subject or single case design
 N=1 as a research paradigm or philosophy
 Individual vs group
 Pioneered by Skinner (1953), Sidman (1960)
 Clinical, counselling, applied and educational
settings
 ABA as a basic template: baseline, treatment,
withdrawal/reversal
 Attribution of experimental effect and ethical
concerns

102
Experimental Design – Single Subject
Design
 Individual as a unit of analysis
 Participant and setting description
 IV & DV
 Baseline
 Experimental control = replication; occurs in three ways:
I. Introduce and withdraw treatment (reversal/withdrawal)
II. Staggered introduction of IV (multiple baseline)
III. Iterative manipulation of IV at different points
(alternating treatments)
 Visual & statistical analysis
103
Experimental Design – Quasi-Experimental
Design
 All features of an experiment except for control
 Not a true experiment but a valid alternative
 Varied types

104
Experimental Design – Quasi-Experimental
Design
One-Group Posttest-Only Design
 AKA one-shot case study
 No comparison group at all
Participants --- IV/Treatment --- DV/Posttest

One-Group Pretest-Posttest Design
 Index of change from a pretest to a posttest
Participants --- Pretest/DV --- IV/Treatment --- DV/Posttest

105
Test Construction

106
Definition of a Test
 Anastasi & Urbina (1997)
“essentially an objective and standardized
measure of a sample of behaviour.”
 Kaplan & Saccuzzo (2001)
“a set of items designed to measure
characteristics of human beings that pertain
to behaviour.”

107
Features of a Test
I. Organized succession of stimuli
II. Both quantitative and qualitative stimuli
III. Based on a limited sample of behaviour
IV. Result in scores, interpreted against norms
V. Testing vs assessment

108
Classification of Tests

109
Classification of Tests

110
Criteria of a Good Test
I. Objectivity
II. Reliability
III. Validity
IV. Norms
V. Feasibility
VI. Ethical considerations

111
General Steps in Test Construction
 Planning
 Item writing
 Preliminary administration/Experimental try-out
 Reliability
 Validity
 Preparation of norms
 Preparation of manual

112
Item Writing
 Item is “a single question or task that is not
often broken down into any smaller units” –
Bean, 1953.
 Knowledge of the following factors is
imperative:
 Subject matter, knowing your construct
 Target group
 Different testing strategies
 Fluency and vocabulary
 Arrive at an item pool
 Reviewed by experts

113
Item Writing: Desirable Features

NO!
 Ambiguity
 Double negatives
 Convoluted phrasing
 Only positive/negative items
 Too long
 Tough reading ability

Ideally...
 Clearly defined
 One idea-one question
 Appropriate item format
 Concerned with actual features of the construct
 Not understood in reference to another item
 Moderate difficulty
 Ability to discriminate

114
Preliminary Administration
A pilot/preliminary administration/experimental
try-out to:
 Identify weaknesses, redundancies and
ambiguities
 Determine difficulty values
 Estimate a time limit
 Decide the final length
 Minimize overlap

 Conrad (1952): Pre try-out, Try-out proper,
Final Trial Administration
115
Item Analysis
 A set of procedures to determine the validity
of items
 It provides an idea of:
 Difficulty level
 Discriminatory potential
 Distractor analysis
 Weak items, modifiable items

116
Item Analysis
Difficulty level
 Empirical or judgment methods
 Desirable range varies and depends on type
 Some say .69 to .86; 100% correct vs chance

Index of Discrimination/Item Validity Index


 “..the extent to which success and failure on that
item indicates the possession of the trait or
achievement being measured.” – Marshall &
Hales, 1972.
 Statistical estimates like extreme group method
or point biserial correlation
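A minimal Python sketch of these two indices on a hypothetical matrix of 0/1 item scores: the difficulty value p is the proportion answering correctly, and the extreme-group method contrasts the top and bottom scorers (the point-biserial correlation is the common alternative).

import numpy as np

rng = np.random.default_rng(5)
scores = (rng.random((100, 10)) < 0.6).astype(int)   # fake responses: people x items
totals = scores.sum(axis=1)

p = scores.mean(axis=0)                              # difficulty: proportion correct

hi = scores[totals >= np.percentile(totals, 73)]     # top ~27% of scorers
lo = scores[totals <= np.percentile(totals, 27)]     # bottom ~27% of scorers
D = hi.mean(axis=0) - lo.mean(axis=0)                # discrimination index

print("difficulty:", np.round(p, 2))
print("discrimination:", np.round(D, 2))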
117
Item Analysis

118
Item Analysis
Item Characteristics Curve
 Graphical representation of item performance in
relation to overall ability
 Plot total test score on the X axis and the
proportion of test takers answering the item
correctly on the Y axis
 Gives an idea of the “items that work”

119
Item Analysis
Item Response Theory
 Newer approach, branches away from
classical test theory
 Based on the chance of getting an item right
or wrong
 Extensive use of item analysis
 Computers identify the specific items that
reflect a particular skill level; each item has its
own item characteristic curve
 Basis of computer-adaptive testing
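A minimal Python sketch of a two-parameter logistic (2PL) item with illustrative parameter values: the probability of a correct answer rises with the test-taker's ability theta relative to the item's difficulty b, scaled by its discrimination a.

import math

def p_correct(theta, a=1.2, b=0.5):
    """Probability of answering the item correctly under the 2PL model."""
    return 1 / (1 + math.exp(-a * (theta - b)))

for theta in (-2, -1, 0, 1, 2):
    print(f"theta={theta:+d}  P(correct)={p_correct(theta):.2f}")
# Plotting P(correct) against theta traces the item characteristic curve.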

120
Item Analysis
Item Response Theory
 Touted to be the biggest development in test
construction
 Provides information on various facets of
items
 Item difficulty is defined by the test-taker’s
ability, not the other way around
 Basis of computer adaptive testing

121
Reliability, Validity, Norms, & Manual
 Reliability: self-correlation of the test
 Validity: correlation of the test with an outside
independent criterion
 Norms: average performance/score of a large
representative sample of a specified
population or performance of defined groups
on a particular test
 Manual: procedures, psychometric properties

122
Reliability & Validity

123
Reliability
 Measure of stability or consistency of a
measure
 Measurement and sources of error
 Reliability in different contexts
 Increases with items and measures
 Measured indirectly

124
Reliability
 Based on different sources of error, we have:
 Time sampling: Test-Retest Method
 Item Sampling: Parallel Forms Reliability
 Internal Consistency: Cronbach’s Alpha, KR-20
 Observation bias: Inter-rater reliability,
Cohen’s Kappa

125
Reliability
Time sampling: Test-Retest Method
 Measures stable characteristics
 Administer the test on two well-specified
occasions
 Time interval between the two administrations
vital
 Carryover problems
 Explanations of poor correlations vary
 Can use alternate forms*
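A minimal Python sketch with hypothetical scores from two administrations: the test-retest reliability coefficient is simply the Pearson correlation between the two sets of scores.

from scipy import stats

time1 = [12, 15, 9, 20, 14, 18, 11, 16]
time2 = [13, 14, 10, 19, 15, 17, 10, 17]

r, _ = stats.pearsonr(time1, time2)   # the reliability coefficient
print(f"test-retest r = {r:.2f}")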

126
Reliability
Item Sampling: Parallel Forms Reliability
 Examines if test scores themselves are biased
in representation
 Assesses error variance attributable to
selection of a certain set of test items
 Compares two equivalent forms of the test
that assess the same attribute
 Odd-even method

127
Reliability
Internal consistency
Split-half reliability
 Scores of two halves
 Uses the Spearman-Brown split-half formula

Cronbach’s Alpha
 Each item with every other item
 All possible split-half intercorrelations averaged
 Important item characteristics

Inter-rater reliability
 Cohen’s Kappa
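A minimal Python sketch computing these internal-consistency estimates from a hypothetical person-by-item matrix: Cronbach's alpha from item and total variances, plus an odd-even split-half coefficient corrected with the Spearman-Brown formula.

import numpy as np

scores = np.array([[3, 4, 3, 5], [2, 2, 3, 2], [4, 5, 4, 4],
                   [1, 2, 1, 2], [5, 4, 5, 5], [3, 3, 2, 3]])
k = scores.shape[1]

item_vars = scores.var(axis=0, ddof=1).sum()
total_var = scores.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_vars / total_var)   # Cronbach's alpha

odd, even = scores[:, 0::2].sum(axis=1), scores[:, 1::2].sum(axis=1)
r_half = np.corrcoef(odd, even)[0, 1]
split_half = 2 * r_half / (1 + r_half)                # Spearman-Brown correction

print(f"alpha = {alpha:.2f}, split-half = {split_half:.2f}")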

128
Validity
 The extent to which our test tends to be
consistent with other recognized measures of
that construct indicates construct validity.

129
Validity
 Content validity refers to the extent to which
the test reflects the content represented in
curriculum statements (and the skills implied
by that content).

 When test results are compared with an
agreed external criterion, such as a direct
measure of actual performance of tasks in the
‘real’ world, this type of validity is called
criterion-related validity.

130
Validity
Under criterion-related validity:
 If there is little time delay between the test
and the actual performance, the criterion-
related validity may be referred to as
concurrent validity.
 If there is a longer time delay between the
test and subsequent actual performance, the
criterion-related validity may be referred to as
predictive validity.

131
Scaling
 Involves sorting people according to certain
defined attributes
 According to Babbie (2004), a scale is “a type
of composite measure composed of several
items that have a logical or empirical
structure among them.”
 Could be of two major types: psychophysical
or psychological

133
Scaling

Psychophysical
 Method of Limits
 Method of Average Error
 Method of Constant Stimuli
 Category Scaling
 Magnitude Estimation

Psychological
 Methods of Rank Order, Successive
Categories and Pair Comparison
 Attitude Scaling: Equal-Appearing Intervals,
Summated Ratings, Cumulative Scale

134
Attitude Scaling
 Thurstone Scales, developed in 1929
 Measures core attitude regarding an issue which
is complex and arouses multiple opinions and
concerns
 Begins by developing a wide range of statements
from all standpoints – about 100
 Assessing inter-item agreement by having a
panel of judges categorize them into 11
categories with bipolar anchors
 Mean or median ratings of agreement are used
to arrive at the final scale of 20-30 items from
across the scale
135
Attitude Scaling
 Likert Scales, developed in 1932
 Statements typically rated on a five-point,
graded scale
 Choices are weighted, with even distances
assumed between scale points
 Example of ordinal data transformed to
interval data
 SA—A—50/50—D—SD
 Advisable to use wider categories to ensure
greater limits of reliability
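A minimal Python sketch of scoring a summated (Likert) scale with hypothetical items: a negatively worded item is reverse-keyed (6 minus the rating on a five-point scale) before the ratings are summed.

responses = {"item1": 5, "item2": 4, "item3_neg": 2, "item4": 5}  # 1..5 ratings
reverse_keyed = {"item3_neg"}

score = sum((6 - v) if item in reverse_keyed else v
            for item, v in responses.items())
print("scale score:", score)   # 5 + 4 + (6 - 2) + 5 = 18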

136
Attitude Scaling
 Guttman Scales/Scalogram, developed in
1940s
 Series of questions about one topic presented
in order of intensity along a single continuum
 Questions about the construct are mixed up
with irrelevant questions
 Progressively stronger attitudes about an
issue are encountered
 Although popular, concerns exist over its
validity and difficulty in development

137
Attitude Scaling
 Semantic Differential Scales, developed by
Osgood, Suci, & Tannenbaum (1957)
 Assess emotions or feelings by presenting
them as dichotomies
 Common presentation is a statement with two
bipolar anchors
 Measures three factors: activity, potency and
evaluation

138
