Final Notes For Research

Chapter 1: The Building Blocks of Social Scientific

Research: Hypotheses, Concepts, and Variables

I. Defining Concepts

One of the primary purposes of political science research is to


provide scientific explanations for political phenomena. In this
section, we will consider what these phenomena are and how our
research can be sharpened so that the knowledge we acquire is
transmissible, empirical, and general.

II. The Importance of Concept Definitions

Concepts are developed through a process by which some human


group agrees to give a phenomenon or property a particular name.
The process is ongoing and somewhat arbitrary and does not ensure
that all peoples everywhere will give the same phenomena the
same names.

III. Examples of Concept Definitions

Consider the concept of democracy. To some, a country is


democratic if it has competing political parties, operating in free
elections, with some reasonable level of popular participation in the
process. To others, a country is democratic only if there are legal
guarantees protecting free speech, press, religion, and the like.

IV. Clarifying Concept Definitions

Researchers can clarify the concept definitions they use simply by


making the meanings of key concepts explicit. This requires
researchers to think carefully about the concepts used in their
research and to share their meanings with others.

V. Reviewing Existing Literature


Another way in which researchers get help in defining concepts is by
reviewing the definitions used by others and revising or borrowing
those definitions.

VI. Formulating Hypotheses

Hypotheses are explicit statements that indicate how a researcher


thinks the phenomena of interest are related. They are guesses (but
of an "educated" nature) that represent the proposed explanation
for some phenomena and that indicate how an independent variable
is thought to affect, influence, or alter a dependent variable.

VII. Characteristics of Hypotheses

To test a hypothesis adequately and persuasively, it must be


properly formulated. A hypothesis must be explicit, indicating how
the researcher thinks the phenomena of interest are related. It must
also be testable, allowing the researcher to assess its validity.

VIII. Conclusion

In this chapter, we have discussed the importance of defining


concepts and formulating hypotheses in political science research.
By carefully defining concepts and formulating hypotheses,
researchers can develop a clear and testable explanation for
political phenomena.

Types of Hypotheses

There are many ways of stating hypotheses, but there are


essentially four different types of hypotheses, depending upon what
the researcher is willing to propose about the relationship between
concepts. Hypotheses can be null, correlative, directional, or causal.

Null Hypotheses

A null hypothesis is simply a hypothesis that states that there is no


relationship between two variables. When we analyze data to test a
hypothesis, we often work with the null hypothesis and attempt to
disprove it. A null hypothesis posits the absence of a relationship
between two or more variables and usually represents the opposite
of the hypothesis we are actually trying to confirm.
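
As a concrete illustration, here is a minimal Python sketch of testing a null hypothesis of no relationship between two variables. The education and participation scores are hypothetical, invented only to show the mechanics, and the 0.05 cutoff is simply the conventional significance level.

    # Testing the null hypothesis of no relationship between years of
    # education and a political participation score (hypothetical data).
    from scipy import stats

    education = [8, 10, 12, 12, 14, 16, 16, 18, 20, 22]
    participation = [1, 2, 2, 3, 3, 4, 5, 5, 6, 7]

    # The p-value estimates how likely a correlation this large would be
    # if the null hypothesis (no relationship) were true.
    r, p_value = stats.pearsonr(education, participation)

    if p_value < 0.05:
        print(f"r = {r:.2f}, p = {p_value:.3f}: reject the null hypothesis")
    else:
        print(f"r = {r:.2f}, p = {p_value:.3f}: fail to reject the null hypothesis")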

Correlative Hypotheses

A correlative hypothesis states that there is a relationship between


two (or among more) concepts. However, it does not specify the
nature of this relationship.

Directional Hypotheses

With a directional hypothesis, the researcher makes a guess about


the nature or direction of the relationship between concepts. This
type of hypothesis contains more information about the proposed
relationship than a correlative hypothesis.

Causal Hypotheses

A causal hypothesis makes the boldest claim about the relationship


between two or more variables, yet it is also the most difficult to
confirm. Causal hypotheses may take a number of forms.

Characteristics of Good Hypotheses

Empirical Statements

Hypotheses should be empirical statements; that is, they should be


educated guesses about relationships that exist in the real world,
not statements about what ought to be true or about what a
researcher believes should be the case.

Generality

A hypothesis should explain a general phenomenon rather than one


particular occurrence of the phenomenon.

Plausibility
There should be some logical reason for thinking that a hypothesis
might be confirmed.

Specificity

A hypothesis should be stated in a manner that corresponds to the


way in which the researcher intends to test it—that is, it should be
consistent with the data.

Testability

A good hypothesis is testable; there must be some evidence that is


obtainable and that will indicate whether the hypothesis is correct.

Conclusion

Formulating hypotheses is a crucial step in the research process. A


well-crafted hypothesis provides a clear direction for research and
helps to ensure that the research is focused and productive. By
understanding the different types of hypotheses and the
characteristics of good hypotheses, researchers can develop
hypotheses that are informative, testable, and contribute to the
advancement of knowledge in their field.

The Building Blocks of Social Scientific Research: Hypotheses,


Concepts, and Variables

I. Specifying Units of Analysis

In addition to proposing a relationship between two or more


variables, a hypothesis also specifies the types or levels of political
actor to which the hypothesis is thought to apply. This is called the
unit of analysis of the hypothesis.

II. Examples of Units of Analysis

The unit of analysis can be an individual, group, state, governmental


agency, region, or nation. For example, the individual member of
the House of Representatives is the unit of analysis in the
hypothesis: "Members of the House of Representatives who belong
to the same party as the president are more likely to vote for
legislation desired by the president than members who belong to a
different party."

III. The Importance of Correspondence between Units of Analysis

A discrepancy between the unit of analysis specified in a hypothesis


and the entities whose behavior is actually empirically observed can
cause problems. An ecological fallacy occurs when the attributes of
groups lead to a misleading assessment of the attributes of
individuals.

IV. Avoiding Ecological Fallacies

To avoid ecological fallacies, the unit of analysis of the measures


and of the hypothesis should be the same. Sometimes data from a
group can be used as evidence regarding the characteristics of
individual members of the group, but generally, the unit of analysis
should be consistent.
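
The ecological fallacy can be made concrete with a small numerical sketch. The districts and literacy rates below are invented (echoing Robinson's classic example): at the district level, the share of immigrants and overall literacy rise together, even though within every district immigrants are less likely to be literate than natives.

    # Group-level (district) data suggest a positive relationship that does
    # not hold at the individual level. All numbers are hypothetical.
    from scipy import stats

    districts = [
        # (share immigrant, literacy among immigrants, literacy among natives)
        (0.10, 0.60, 0.70),
        (0.30, 0.70, 0.80),
        (0.50, 0.80, 0.90),
    ]

    share_immigrant = [d[0] for d in districts]
    overall_literacy = [d[0] * d[1] + (1 - d[0]) * d[2] for d in districts]

    r, _ = stats.pearsonr(share_immigrant, overall_literacy)
    print(f"District-level correlation: {r:.2f} (positive)")
    for share, lit_imm, lit_nat in districts:
        print(f"  {share:.0%} immigrant: immigrants {lit_imm:.0%} literate, "
              f"natives {lit_nat:.0%} literate")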

V. Avoiding Mixed Units of Analysis

Another mistake sometimes made by researchers is to mix different


units of analysis in the same hypothesis. A researcher must be
careful about the unit of analysis specified in a hypothesis and its
correspondence with the unit measured.

VI. Conclusion

In this chapter, we have discussed the importance of specifying


units of analysis in hypotheses and the potential problems that can
arise when there is a discrepancy between the unit of analysis
specified in a hypothesis and the entities whose behavior is actually
empirically observed.

CHAPTER 2: Conducting a Literature Review


I. Introduction

All sound research involves reviewing what has been written about a
research topic. In this chapter, we will discuss the reasons for
conducting a literature review and explain how to conduct one.

II. Where Do Research Topics Come From?

Potential research topics about politics come from many sources:


personal experiences, class readings, lectures, discussions,
newspapers, television, and magazines.

III. Sources of Research Topics

To become aware of current or recent issues in public affairs,


researchers can read journals and magazines with a government
and policy focus, such as Congressional Quarterly Weekly Report,
Environment, and Governing: The States and Localities.

IV. Reasons for a Literature Review

A literature review serves several purposes:

1. Develops general explanations for observed variations in a


behavior or phenomenon.
2. Identifies potential relationships between concepts and
researchable hypotheses.
3. Helps researchers learn how others have defined and measured
key concepts.
4. Identifies data sources that other researchers have used.
5. Develops alternative research designs.
6. Discovers how a research project is related to the work of others.

V. Conducting a Literature Review

A literature review involves reading and analyzing previous research


reports on a topic. This helps researchers develop a deeper
understanding of the topic, identify gaps in existing research, and
develop a research question or hypothesis.

VI. Example: The Impact of Television News on Political Efficacy

A literature review conducted by Richard Joslyn on the impact of


television news on political efficacy revealed four main
considerations:

1. The concept of political efficacy had been developed and


measured by previous researchers.
2. Political efficacy had been divided into two types: internal and
external.
3. The literature review revealed ways in which political efficacy had
been measured.
4. The review turned up numerous studies that had tested different
explanations for variations in people's political efficacy.

VII. Conclusion

Conducting a literature review is an essential step in the research


process. It helps researchers develop a deeper understanding of a
topic, identify gaps in existing research, and develop a research
question or hypothesis.

Conducting a Literature Review

I. Introduction

Conducting a literature review is an essential step in the research


process. It helps researchers develop a deeper understanding of a
topic, identify gaps in existing research, and develop a research
question or hypothesis.

II. Strategies for Conducting a Literature Review

1. Start with a broad search: Use indexes, bibliographies, and


abstracts to identify relevant sources.
2. Focus on key works: Identify the most important and relevant
research reports and focus on those first.
3. Use citation indexes: Use citation indexes to identify sources that
have cited key works in your area of research.
4. Take notes and organize sources: Take notes on relevant sources
and organize them in a way that makes sense for your research
project.

III. Sources for Conducting a Literature Review

1. Professional journals: Use journals such as the American Journal of


Political Science, American Political Science Review, and Journal of
Politics.
2. Indexes and bibliographies: Use indexes such as the Social
Sciences Citation Index, Public Affairs Information Service Bulletin,
and Sociological Abstracts.
3. Abstracts: Use abstracts such as the International Political
Science Abstracts and Urban Affairs Abstracts.
4. Compact disc databases: Use compact disc databases such as
the Social Sciences Citation Index and Sociological Abstracts.
5. Newspaper indexes: Use newspaper indexes such as the New
York Times Index and the National Newspaper Index.

IV. Exercises

1. Use the citation index of the 1992 Social Sciences Citation Index
to determine the number of times Richard Fenno's book, Home
Style: House Members in Their Districts, has been cited.
2. Use ABC Pol Sci to identify sources related to specific topics.
3. Use the Social Sciences Index to identify sources related to
specific topics.

V. Conclusion

Conducting a literature review is an essential step in the research


process. It helps researchers develop a deeper understanding of a
topic, identify gaps in existing research, and develop a research
question or hypothesis. By using a variety of sources and strategies,
researchers can conduct a thorough and effective literature review.
Conducting a Literature Review

Introduction

Conducting a literature review is an essential step in the research


process. It helps researchers develop a deeper understanding of a
topic, identify gaps in existing research, and develop a research
question or hypothesis.

Concepts Used as Independent Variables

1. Personality
2. Political cynicism
3. Opinion intensity
4. Interpersonal trust
5. Self-competence/personal efficacy
6. Political interest
7. Cosmopolitanism
8. Social Status
9. Education
10. Region
11. Size and place of residence
12. Age
13. Income
14. Occupation
15. Religion
16. Sex
17. Race
18. Relative deprivation
19. IQ
20. Social Cohesion
21. Marital status
22. Number of children
23. Years of residence
24. Church attendance
25. Organization membership/leadership
26. Political, social participation
27. Size of community
28. Political Environment/Experiences/Interaction
29. Ideology
30. Newspaper exposure

Professional Journals in Political Science and Related Fields

1. American Journal of Political Science


2. American Political Science Review
3. Canadian Journal of Political Science/Revue canadienne de
science politique
4. Journal of Politics
5. Polity
6. Social Science Quarterly
7. Western Political Quarterly

Specialized or Multidisciplinary Journals

1. Academy of Political Science Proceedings


2. American Politics Quarterly
3. Annals (American Academy of Political and Social Sciences)
4. The Brookings Review
5. Campaigns and Elections: The Journal of Political Action
6. Comparative Political Studies
7. Comparative Politics
8. Comparative Strategy: An International Journal
9. Congress and the Presidency
10. Electoral Studies

Major Journals in Related Disciplines

1. American Economic Review


2. American Journal of Sociology
3. American Psychologist
4. American Sociological Review
5. JAPA: Journal of the American Planning Association

Indexes, Bibliographies, and Abstracts


1. Use indexes to periodical literature, books, and government
publications.
2. Check the description of a particular index before using it.
3. Use abstracts to identify relevant sources.

Conducting a Literature Search

1. Start with a broad search using indexes and bibliographies.


2. Focus on key works and relevant research reports.
3. Use citation indexes to identify sources that have cited key works.
4. Take notes and organize sources systematically.

Chapter 3: Research Design

I. Introduction

In the previous two chapters, we discussed how to formulate a


testable research hypothesis and how to measure variables with
accuracy and precision. In this chapter, we consider how to observe
the relationship between independent and dependent variables to
draw conclusions about their relationship.

II. What is Research Design?

A research design is a plan that shows how a researcher intends to


fulfill the goals of a proposed study. It indicates what observations
will be made, how they will be made, and the analytical and
statistical procedures to be used.

III. Experimental Research Designs

Experimental research designs differ from nonexperimental designs


in that they allow the researcher to have more control over the
independent variable, units of analysis, and environment.

IV. Characteristics of an Ideal Experiment


An ideal experiment has five basic characteristics:

1. Experimental and control groups


2. Researcher determines composition of groups
3. Researcher controls introduction of experimental treatment
4. Measurement of dependent variable before and after treatment
5. Control over environment

V. Threats to Internal Validity

Internal validity deals with whether manipulation of the independent


variable makes a difference in the dependent variable. Threats to
internal validity include:

1. History
2. Maturation
3. Testing
4. Statistical regression
5. Experimental mortality
6. Instrumentation

VI. Threats to External Validity

External validity refers to the representativeness of research


findings. Threats to external validity include:

1. Population
2. Experimental treatment
3. Artificiality of experimental setting
4. Testing

VII. Control over Assignment of Subjects

Researchers attempt to exclude extraneous factors by assigning


subjects to control and experimental groups in three ways:

1. Random assignment
2. Precision matching
3. Frequency distribution control
VIII. Conclusion

Experimental research designs provide control over the subjects and


their exposure to various levels of the independent variable.
However, threats to internal and external validity can affect the
outcome of an experiment.

Research Design

I. Introduction

In the previous two chapters we discussed how to formulate a


testable research hypothesis that will serve as the basis for inquiry,
and how to measure the variables named in the hypothesis with
accuracy and precision. In this chapter we consider how to observe
the relationship between the independent and dependent variables
in a manner that enables us to draw appropriate conclusions about
the way, and extent to which, they are related. These decisions are
what we mean when we refer to a study's research design.

II. What is Research Design?

A research design is a plan that shows how a researcher intends to


fulfill the goals of a proposed study. It indicates what observations
will be made to provide answers to the questions posed by the
researcher, how the observations will be made, and the analytical
and statistical procedures to be used once the data are collected. If
the goal of the research is to test hypotheses, a research design will
also explain how the test is to be accomplished.

III. Experimental Research Designs

Experimental research designs differ from nonexperimental designs


in that they allow the researcher to have more control over the
independent variable, the units of analysis, and the environment in
which behavior occurs. Consequently, experimental designs allow
researchers to establish causal explanations for political behavior
more easily than nonexperimental designs do. However, very few
types of significant political behaviors are studied by political
scientists with experimentation.

IV. Characteristics of an Ideal Experiment

An ideal experiment has five basic characteristics. First, there are


experimental groups that receive an experimental or test stimulus
(the independent variable in a research hypothesis) and control
groups that do not. Second, the researcher determines the
composition of the experimental and control groups by choosing the
subjects and assigning them to one of the groups. In other words,
the researcher can control who experiences the independent
variable and who does not. Third, the researcher has control over
the introduction of the experimental treatment (the independent
variable); that is, the researcher can determine when, and under
what circumstances, the experimental group is exposed to the
experimental stimulus. Fourth, the researcher is able to measure the
dependent variable both before and after the experimental stimulus
is given. And finally, the researcher is able to control the
environment of the subjects to control or exclude extraneous factors
that might affect the dependent variable.

V. Threats to Internal Validity

Internal validity deals with whether the manipulation or variation in


the independent variable makes a difference in the dependent
variable. The internal validity of experimental research may be
adversely affected by history, maturation, testing, statistical
regression, and several other factors.

VI. Threats to External Validity

External validity refers to the representativeness of research


findings. Threats to external validity include population,
experimental treatment, artificiality of experimental setting, and
testing.

VII. Control over Assignment of Subjects


Researchers attempt to exclude extraneous factors by assigning
subjects to control and experimental groups in three different ways.
One way is to assign subjects to the groups at random under the
assumption that extraneous factors will affect all groups equally.
Assignment at random is the practical choice when the researcher is
not able to specify possible extraneous factors in advance or when
there are so many that it is not possible to assign subjects to
experimental and control groups equally.
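
A minimal sketch of random assignment is shown below; the forty subject IDs are hypothetical, and splitting the shuffled list in half is just one simple way to form equal-sized groups.

    # Randomly assign subjects to experimental and control groups.
    import random

    subjects = list(range(1, 41))        # 40 hypothetical subject IDs
    random.shuffle(subjects)             # randomize the order

    experimental_group = subjects[:20]   # first half receives the treatment
    control_group = subjects[20:]        # second half does not

    print("Experimental:", sorted(experimental_group))
    print("Control:     ", sorted(control_group))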

VIII. Conclusion

Experimental research designs provide control over the subjects and


their exposure to various levels of the independent variable.
However, threats to internal and external validity can affect the
outcome of an experiment.

Research Design

I. Introduction

In the previous two chapters we discussed how to formulate a


testable research hypothesis that will serve as the basis for inquiry,
and how to measure the variables named in the hypothesis with
accuracy and precision. In this chapter we consider how to observe
the relationship between the independent and dependent variables
in a manner that enables us to draw appropriate conclusions about
the way, and extent to which, they are related.

II. What is Research Design?

A research design is a plan that shows how a researcher intends to


fulfill the goals of a proposed study. It indicates what observations
will be made to provide answers to the questions posed by the
researcher, how the observations will be made, and the analytical
and statistical procedures to be used once the data are collected.

III. Experimental Research Designs


Experimental research designs differ from nonexperimental designs
in that they allow the researcher to have more control over the
independent variable, the units of analysis, and the environment in
which behavior occurs.

IV. Characteristics of an Ideal Experiment

An ideal experiment has five basic characteristics.

V. Threats to Internal Validity

Internal validity deals with whether the manipulation or variation in


the independent variable makes a difference in the dependent
variable.

VI. Threats to External Validity

External validity refers to the representativeness of research


findings.

VII. Control over Assignment of Subjects

Researchers attempt to exclude extraneous factors by assigning


subjects to control and experimental groups in three different ways.

VIII. Experimental Designs

A. Simple Post-test Design

B. Classic Pre-test and Post-test Design

C. Multigroup Design

IX. Conclusion

Experimental research designs provide control over the subjects and


their exposure to various levels of the independent variable.
However, threats to internal and external validity can affect the
outcome of an experiment.
Research Design

I. Introduction

In the previous two chapters we discussed how to formulate a


testable research hypothesis that will serve as the basis for inquiry,
and how to measure the variables named in the hypothesis with
accuracy and precision. In this chapter we consider how to observe
the relationship between the independent and dependent variables
in a manner that enables us to draw appropriate conclusions about
the way, and extent to which, they are related.

II. Experimental Designs

Experimental research designs differ from nonexperimental designs


in that they allow the researcher to have more control over the
independent variable, the units of analysis, and the environment in
which behavior occurs.

III. Classic Pre-test and Post-test Design

To test the effect of the hypothesized independent variable


(exposure to television news), researchers employed a classic
experimental research design.

A. Methodology

1. Participants were recruited and randomly divided into


experimental and control groups.
2. The experimental group was exposed to videotape recordings of
the preceding evening's network newscast, which included stories
about a particular public policy issue.
3. The control group did not watch the newscast.
4. Participants completed a questionnaire before and after the
experiment to measure their attitudes toward the policy issue.

B. Results
The researchers found that exposure to television news coverage
can significantly alter the public's sense of the importance of
different political issues.

IV. Multigroup Design

Multigroup designs involve multiple experimental and control groups


to test the effects of several independent variables on a dependent
variable.

A. Example

An experiment tested the effectiveness of token gifts and follow-up


reminders on the response rate in a mail survey.

B. Methodology

1. Four hundred respondents were randomly assigned to four


treatment groups.
2. The groups received different combinations of token gifts and
follow-up reminders.
3. Response rates were measured for each group.

C. Results

The results showed that token gifts and follow-up reminders can
increase response rates, and that the combination of both has a
greater effect than either one alone.

V. Multigroup Time Series Design

This design involves multiple measurements of the dependent


variable before and after the introduction of the independent
variable.

A. Example

An experiment tested the effect of a presidential debate on support


for the candidates.
B. Methodology

1. Participants were randomly assigned to experimental and control


groups.
2. The experimental group watched the debate, while the control
group did not.
3. Support for the candidates was measured before and after the
debate.

C. Results

The results showed that the debate had no effect on support for the
candidates, but that the rate of decline in support was consistent for
both groups.

VI. Factorial Designs

Factorial designs involve multiple independent variables and


measure their effects on a dependent variable, both singly and in
combination.

A. Example

An experiment tested the effect of token gifts and follow-up


reminders on response rates in a mail survey.

B. Methodology

1. Four hundred respondents were randomly assigned to four


treatment groups.
2. The groups received different combinations of token gifts and
follow-up reminders.
3. Response rates were measured for each group.

C. Results

The results showed that token gifts and follow-up reminders can
increase response rates, and that the combination of both has a
greater effect than either one alone.
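
The logic of a 2 x 2 factorial design can be sketched in a few lines of Python. The response rates below are invented placeholders, not the results of the mail-survey experiment described above; the point is only how main effects and the interaction are read off the four treatment groups.

    # A 2 x 2 factorial layout: token gift (yes/no) by follow-up reminder (yes/no).
    rates = {
        # (token gift?, follow-up reminder?) -> hypothetical response rate
        (False, False): 0.30,
        (True,  False): 0.40,
        (False, True):  0.45,
        (True,  True):  0.60,
    }

    # Main effect of each factor: its average effect across levels of the other.
    gift_effect = ((rates[(True, False)] - rates[(False, False)]) +
                   (rates[(True, True)] - rates[(False, True)])) / 2
    reminder_effect = ((rates[(False, True)] - rates[(False, False)]) +
                       (rates[(True, True)] - rates[(True, False)])) / 2
    # Interaction: does the gift add more when a reminder is also sent?
    interaction = ((rates[(True, True)] - rates[(False, True)]) -
                   (rates[(True, False)] - rates[(False, False)]))

    print(f"Main effect of token gift:   {gift_effect:+.2f}")
    print(f"Main effect of reminder:     {reminder_effect:+.2f}")
    print(f"Gift x reminder interaction: {interaction:+.2f}")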
VII. Solomon Four-Group Design

This design measures the interaction between a pre-test and an


experimental treatment.

A. Methodology

1. Participants were randomly assigned to four groups.


2. Two groups received a pre-test, while the other two did not.
3. One group from each pair received the experimental treatment.
4. All groups received a post-test.

B. Results

The results showed that the pre-test and experimental treatment


interacted to produce a greater effect than either one alone.

VIII. Conclusion

Experimental research designs provide control over the subjects and


their exposure to various levels of the independent variable.
However, threats to internal and external validity can affect the
outcome of an experiment.

Research Design

I. Experimental Designs

Experimental research designs differ from nonexperimental designs


in that they allow the researcher to have more control over the
independent variable, the units of analysis, and the environment in
which behavior occurs.

II. Field Experiments

Field experiments are experimental designs applied in a natural


setting. Researchers try to control the selection of subjects, their
assignment to treatment groups, and the manipulation of the
independent variable.
A. Example: New Jersey Income-Maintenance Experiment

The experiment tested the effects of a guaranteed minimum income


system on the work effort of poor families. The design included two
experimental factors: income guarantee level and tax rate.

III. Challenges of Field Experiments

Field experiments face challenges such as:

- Generalizability: The experiment was limited to New Jersey, which


may not be representative of the entire country.
- Instrumentation difficulties: Families had trouble distinguishing
between net and gross income, and control group families were
asked to report income less frequently than experimental group
families.
- Uncontrolled environment: The experiment was affected by
external factors such as the introduction of a new welfare program.
- Ethical issues: The researchers were concerned about the effect of
terminating the experiment on families that had been receiving
payments.

IV. Major Findings

The experiment found that:

- There was only a 5-6% reduction in average hours worked by male


heads of families who received negative income tax payments.
- Experimental families made larger investments in housing and
durable goods than control families.

V. Limitations of Experimental Designs

Experimental designs are often impractical or impossible in political


science research due to:

- Limited control over variables or subjects


- Ethical concerns
- Difficulty in generalizing findings to real-world settings
VI. Nonexperimental Designs

Nonexperimental designs are often used in political science research


because they:

- Allow researchers to study phenomena that cannot be manipulated


experimentally
- Provide a way to analyze data from natural settings
- Can be used to test hypotheses and accumulate knowledge about
political phenomena

Research Design

I. Introduction

In the previous two chapters, we discussed how to formulate a


testable research hypothesis and how to measure the variables
named in the hypothesis with accuracy and precision. In this
chapter, we consider how to observe the relationship between the
independent and dependent variables in a manner that enables us
to draw appropriate conclusions about the way and extent to which
they are related.

II. Experimental Designs

Experimental research designs differ from nonexperimental designs


in that they allow the researcher to have more control over the
independent variable, the units of analysis, and the environment in
which behavior occurs.

III. Nonexperimental Designs

Since experimental designs are often impractical or impossible in


political science research, political scientists have developed a
number of nonexperimental research designs. In these designs, only
a single group is used, or the researcher has no control over the
application of the independent variable.
A. Cross-Sectional Design

The nonexperimental research design used most often by political


scientists is the cross-sectional or correlational design. In a cross-
sectional research design, measurements of the independent and
dependent variables are taken at the same point in time, and the
researcher does not have any control over the introduction of the
independent variable, the assignment of subjects to treatment or
control groups, or the conditions under which the independent
variable is experienced.

B. Example: New Jersey Income-Maintenance Experiment

The experiment tested the effects of a guaranteed minimum income


system on the work effort of poor families. The design included two
experimental factors: income guarantee level and tax rate.

C. Challenges of Cross-Sectional Designs

While cross-sectional designs have several advantages, they also


have some limitations. Since the researcher does not control who is
exposed to the independent variable, the groups being compared
may not be equivalent. Additionally, since measurements are taken
at one point in time, it is difficult to determine the direction of
causality.

D. Panel Study Design

A panel study is a cross-sectional design that introduces a time


element. In a panel study, the researcher takes measurements of
the variables of interest on the same units of analysis at several
points in time.

E. Time Series Design

Another way of introducing time is with a time series design. In this


design, numerous measures of the dependent variable are taken
both before and after the introduction of the independent variable
for one or more groups.
F. Case Study Design

The final nonexperimental research design we will discuss is the


case study. In a case study, the researcher examines one or a few
cases of a phenomenon in considerable detail, typically using a
number of data collection methods.

Conclusion

In this chapter, we have discussed several research designs that are


commonly used in political science research. Each design has its
strengths and limitations, and the choice of design depends on the
research question, the availability of data, and the level of control
the researcher has over the variables.

Research Design

I. Introduction

In the previous two chapters, we discussed how to formulate a


testable research hypothesis and how to measure the variables
named in the hypothesis with accuracy and precision. In this
chapter, we consider how to observe the relationship between the
independent and dependent variables in a manner that enables us
to draw appropriate conclusions about the way and extent to which
they are related.

II. Case Study Designs

Case studies are a type of research design that involves an in-depth


examination of a single case or a small number of cases. Case
studies can be used for exploratory, descriptive, or explanatory
purposes.

A. Exploratory Case Studies

Exploratory case studies may be conducted when little is known


about some political phenomenon. Researchers initially may observe
only one or a few cases of that phenomenon. Careful observation of
a small number of cases may suggest possible general explanations
for the behavior or attributes that are observed.

B. Descriptive Case Studies

In the descriptive case, the purpose of a case study may be to find


out and describe what happened in a single or select few situations.
The emphasis is not on developing general explanations for what
happened.

C. Explanatory Case Studies

Explanatory case studies are used to test hypotheses and theories.


According to Yin, case studies are most appropriately used to
answer "how" or "why" questions. These questions direct our
attention toward explaining events.

III. Types of Case Study Designs

There are several types of case study designs, including single


holistic case studies, embedded single-case designs, and multiple
case study designs.

A. Single Holistic Case Study Design

In the single holistic case study design, the research is focused on a


single unit of analysis, such as a single group, neighborhood,
bureaucracy, or program.

B. Embedded Single-Case Design

An embedded single-case design involves studying subunits within


the single case.

C. Multiple Case Study Design

A comparative or multiple case study design is more likely to have


explanatory power than a single case study design because it
provides the opportunity for replication.
IV. Criticisms of Case Study Designs

Some researchers avoid using case study designs and do not give
researchers who use the case study as much recognition as those
who use other research designs. There are three main criticisms of
case study designs:

A. Lack of Rigor

One concern about case studies is the "lack of rigor" in presenting


evidence and the possibility for bias in the use of evidence.

B. Limited Generalizability

A second criticism of case studies is that one cannot generalize from


a single case.

C. Length and Complexity

The third criticism one often hears about case studies is that they
take ages to conduct and result in lengthy, unreadable reports.

V. Conclusion

In this chapter, we have discussed why choosing a research design


is an important step in the research process. A research design is a
plan that enables the researcher to achieve his or her research
objectives. A good research design produces definitive answers to
research questions and tests hypotheses in a way that minimizes
bias and error.

Chapter 4: The Building Blocks of Social Scientific Research: Measurement

Introduction
In this chapter, we show how to test empirically the hypotheses we
have advanced. This entails understanding a number of issues
involving the measurement of the variables we have decided to
investigate.

The Importance of Measurement

Scientific knowledge is based upon empirical observation. To test


empirically the accuracy and utility of a scientific explanation for a
political phenomenon, we will have to observe and measure the
presence of the concepts we are using to understand that
phenomenon.

Examples of Measurement in Political Science Research

Researchers have measured a variety of political phenomena,


including campaign spending, public opinion, judges' political
attitudes, land inequality, and democracy.

Devising Measurement Strategies

To measure a concept, researchers must provide an operational


definition of their concepts - deciding what kinds of empirical
observations should be made to measure the occurrence of an
attribute or behavior.

Examples of Operational Definitions

For example, a researcher interested in explaining the existence of


democracy in different nations might define literacy as "the
completion of six years of formal education" and democracy as "a
system of government in which public officials are selected in
competitive elections."
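
Applying an operational definition is essentially a coding rule. The sketch below codes hypothetical country records as democratic or not, using only the competitive-elections criterion quoted above; the country names and attributes are invented for illustration.

    # Apply an operational definition of "democracy" to hypothetical records.
    countries = [
        {"name": "Country A", "competitive_elections": True,  "free_press": True},
        {"name": "Country B", "competitive_elections": False, "free_press": True},
    ]

    for c in countries:
        # Under this operational definition, a country counts as democratic
        # only if public officials are selected in competitive elections.
        c["democratic"] = c["competitive_elections"]
        print(c["name"], "-> democratic" if c["democratic"] else "-> not democratic")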

Measurement Decisions and Substantive Conclusions

Our choice of measures, especially of abstract phenomena, is a


crucial part of the entire research process. Different measures of the
same concept can lead to different conclusions.
Conclusion

To be useful in providing scientific explanations for political


behavior, measurements of political phenomena must correspond
closely to the original meaning of a researcher's concepts. They
must also provide the researcher with enough information to make
valuable comparisons and contrasts.

References

Jacobson, Gary C. Money in Congressional Elections. New Haven,


Conn.: Yale University Press, 1980.
Page, Benjamin I., and Robert Y. Shapiro. "Effects of Public Opinion
on Policy." American Political Science Review 77 (March 1983): 175-
90.
Segal, Jeffrey A., and Albert D. Cover. "Ideological Values and the
Votes of U.S. Supreme Court Justices." American Political Science
Review 83 (June 1989): 557-65.

Measurement

Specifying Units of Analysis

In addition to proposing a relationship between two or more


variables, a hypothesis also specifies the types or levels of political
actor to which the hypothesis is thought to apply. This is called the
unit of analysis of the hypothesis, and it also must be selected
thoughtfully.

Units of Analysis in Hypotheses

Political scientists are interested in understanding the behavior or


properties of all sorts of political actors: individuals, groups, states,
governmental agencies, regions, and nations. The particular type of
actor whose political behavior is named in a hypothesis is the unit of
analysis for the research project.

Examples of Units of Analysis


For example, the individual member of the House of Representatives
is the unit of analysis in the following hypothesis: Members of the
House of Representatives who belong to the same party as the
president are more likely to vote for legislation desired by the
president than members who belong to a different party.

Mixing Units of Analysis

Another mistake sometimes made by researchers is to mix different


units of analysis in the same hypothesis. "The more education a
person has, the more democratic his country is" doesn't make much
sense since it mixes the individual and country as units of analysis.

The Accuracy of Measurements

Since we are going to use our measurements to test the validity of


explanations for political phenomena, those measurements must be
as accurate as possible. Inaccurate measurements may lead to
erroneous conclusions since they will interfere with our ability to
observe the actual relationship between two or more variables.

Reliability

Reliability "concerns the extent to which an experiment, test, or any


measuring procedure yields the same results on repeated trials....
The more consistent the results given by repeated measurements,
the higher the reliability of the measuring procedure; conversely,
the less consistent the results, the lower the reliability."
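
One common way to estimate reliability is a test-retest check: apply the same measure to the same cases twice and correlate the results. The sketch below uses ten hypothetical respondents answering the same five-point item in two waves.

    # Test-retest reliability as the correlation between two waves of the
    # same measure on the same (hypothetical) respondents.
    from scipy import stats

    wave_1 = [3, 5, 2, 4, 1, 5, 3, 2, 4, 5]
    wave_2 = [3, 4, 2, 4, 2, 5, 3, 1, 4, 5]

    reliability, _ = stats.pearsonr(wave_1, wave_2)
    print(f"Test-retest reliability estimate: {reliability:.2f}")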

Validity

Essentially, a valid measure is one that measures what it is


supposed to measure. Unlike reliability, which depends on whether
repeated applications of the same or equivalent measure yield the
same result, validity involves the correspondence between the
measure and the concept it is thought to measure.

Construct Validity
A third way to evaluate the validity of a measure is by empirically
demonstrating construct validity. When a measure of a concept is
related to a measure of another concept with which the original
concept is thought to be related, construct validity is demonstrated.

Interitem Association

A fourth way to demonstrate validity is through interitem association.


This is the type of validity test most often used by political
scientists. It relies on the similarity of outcomes of more than one
measure of a concept to demonstrate the validity of the entire
measurement scheme.
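
In practice, interitem association is checked by correlating the items with one another, much as in Table 4-2 below. The sketch uses invented scores on three items meant to tap a single concept; high pairwise correlations would support treating them as one measure.

    # Correlation matrix among several items intended to measure one concept.
    import numpy as np

    # Rows are hypothetical respondents, columns are three items scored 1-5.
    items = np.array([
        [5, 4, 5],
        [4, 4, 3],
        [2, 1, 2],
        [3, 3, 3],
        [1, 2, 1],
        [5, 5, 4],
    ])

    # rowvar=False treats each column (item) as a variable.
    print(np.corrcoef(items, rowvar=False).round(2))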

Conclusion

In this chapter, we have discussed the importance of specifying the


unit of analysis in a hypothesis. A researcher must be careful about
the unit of analysis specified in a hypothesis and its correspondence
with the unit measured.

Glossary

- Unit of analysis: The type of actor (individual, group, institution,


nation) specified in a researcher's hypothesis.

Chapter 4: Measurement
Problems with Reliability and Validity in Political Science
Measurement

An example of research performed at the Survey Research Center at


the University of Michigan illustrates the numerous threats to the
reliability and validity of political science measures. In 1980, the
Center conducted interviews with a national sample of eligible
voters and measured their income levels with the following
question:
Please look at this page and tell me the letter of the income group
that includes the income of all members of your family living here in
1979 before taxes. This figure should include salaries, wages,
pensions, dividends, interest, and all other income.

Table 4-2: Interitem Association Validity Test of a Measure of Liberalism

| | Welfare | Military Spending | Government Spending | Social Security | Urban Renewal | Income Tax | Busing |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Welfare | | .56 | .71 | .80 | .63 | .48 | .28 |
| Military Spending | .56 | | .60 | .51 | .38 | .67 | .08 |
| Government Spending | .71 | .60 | | .75 | .59 | .83 | .14 |
| Social Security | .80 | .51 | .75 | | .69 | .59 | .19 |
| Urban Renewal | .63 | .38 | .59 | .69 | | .45 | -.12 |
| Income Tax | .48 | .67 | .83 | .59 | .45 | | .18 |
| Busing | .28 | .08 | .14 | .19 | -.12 | .18 | |
| Rights of Accused | | | | | | | |

Reliability and Validity

Both the reliability and the validity of this method of measuring


income are questionable. Threats to the reliability of the measure
include the following:

1. Respondents may not know how much money they make and
therefore incorrectly guess their income.
2. Respondents may also not know how much money other family
members make and guess incorrectly.
3. Respondents may know how much they make but carelessly
select the wrong categories.
4. Interviewers may circle the wrong categories when listening to
the selections of the respondents.
5. Data entry personnel may touch the wrong numbers when
entering the answers into the computer.
6. Dishonest interviewers may incorrectly guess the income of a
respondent who does not complete the interview.
7. Respondents may not know which family members to include in
the income total; some respondents may include only a few family
members while others may include even distant relations.
8. Respondents whose income is on the border between two
categories may not know which one to pick. Some pick the higher
category, some the lower one.

Each of these problems may introduce some error into the


measurement of income, resulting in inaccurate measures that are
too high for some respondents and too low for others. Therefore, if
this measure were applied to the same people at two different
times, we could expect the results to vary.

In addition to these threats to reliability, there are numerous threats


to the validity of this measure:

1. Respondents may have illegal income they do not want to reveal


and, therefore, may systematically underestimate their income.
2. Respondents may try to impress the interviewer, or themselves,
by systematically overestimating their income.
3. Respondents may systematically underestimate their before-tax
income if they believe too much money is being withheld from their
paycheck.

The Precision of Measurements

Measurements should be not only accurate but also precise; that is,
measurements should contain as much information as possible
about the attribute or behavior being measured. The more precise
our measures, the more complete and informative can be our test of
the relationships between two or more variables.

Suppose, for example, that we were measuring the height of


political candidates to see if taller candidates usually win elections.
Height could be measured in many different ways. We could have
two categories of the variable height, tall and short, and assign
different candidates to the two categories based on whether they
were of above-average or below-average height. Or we could
compare the heights of candidates running for the same office and
measure which candidate was the tallest, which the next tallest, and
so on. Or we could take a tape measure and measure each
candidate's height in inches and record that measure. Clearly, the
last method of measurement captures the most information about
each candidate's height and is, therefore, the most precise measure
of the attribute.

Levels of Measurement

When we consider the precision of our measurements, we refer to


the level of measurement. The level of measurement involves the
type of information that we think our measurements contain and the
type of comparisons that can be made across a number of
observations on the same variable. The level of measurement also
refers to the claim we are willing to make when we assign numbers to our observations.

Chapter 4: Measurement

Problems with Reliability and Validity in Political Science


Measurement

Measurement is a crucial aspect of political science research.


However, researchers often face challenges in ensuring the
accuracy and precision of their measurements. This chapter
discusses the importance of reliable and valid measurements, the
different levels of measurement, and various techniques for
constructing multi-item measures.

The Precision of Measurements

Measurements should be not only accurate but also precise.


Precision refers to the amount of information contained in a
measurement. A precise measurement provides more information
about the attribute or behavior being measured. For example,
measuring the height of political candidates in inches provides more
information than simply categorizing them as tall or short.

Levels of Measurement
There are four levels of measurement: nominal, ordinal, interval,
and ratio. Nominal measurements involve categorizing observations
into mutually exclusive categories. Ordinal measurements involve
ranking observations in order of magnitude. Interval measurements
involve assigning numerical values to observations, with equal
intervals between consecutive values. Ratio measurements involve
assigning numerical values to observations, with equal intervals
between consecutive values, and a true zero point.
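
The four levels can be illustrated with a single hypothetical candidate record; the variable names are made up, and the point is only which comparisons each level supports.

    # One record with variables at each level of measurement (hypothetical).
    candidate = {
        "party": "Republican",     # nominal: categories only, no ordering
        "height_rank": 2,          # ordinal: 1st tallest, 2nd tallest, ...
        "temperature_f": 41,       # interval: equal units, but no true zero
        "height_inches": 70,       # ratio: equal units and a true zero point
    }

    # Only ratio-level measures support statements like "twice as tall":
    opponent_height_inches = 35
    print(candidate["height_inches"] / opponent_height_inches)   # 2.0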

Multi-Item Measures

Many political science concepts are complex and multifaceted,


requiring multiple items to measure them accurately. Multi-item
measures can enhance the accuracy and precision of
measurements. There are several types of multi-item measures,
including indexes, scales, and factor analysis.

Indexes

An index is a method of accumulating scores on individual items to


form a composite measure of a complex phenomenon. Indexes are
constructed by assigning a range of possible scores for a number of
items, determining the score for each item for each observation,
and then combining the scores for each observation across all of the
items.
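
A simple additive index can be sketched as follows; the participation items and the respondent's answers are hypothetical, and the index score is just the sum of the item scores.

    # Build an additive index of political participation from yes/no items.
    items = ["voted", "attended_rally", "contacted_official", "donated"]

    respondent = {"voted": 1, "attended_rally": 0,
                  "contacted_official": 1, "donated": 0}

    index_score = sum(respondent[item] for item in items)
    print(f"Participation index score: {index_score} out of {len(items)}")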

Scales

Scales are also multi-item measures, but they differ from indexes in
that the selection and combination of items are more systematically
accomplished. There are several types of scales,
including Likert scales, Guttman scales, Thurstone scales, and the
semantic differential.

Likert Scales

A Likert scale score is calculated from the scores obtained on


individual items. Each item asks a respondent to indicate a degree
of agreement or disagreement with the item. A Likert scale differs
from an index in that only some of the items are selected for
inclusion in the calculation of the final score.

Guttman Scales

A Guttman scale presents respondents with a range of attitude


choices that are increasingly difficult to agree with. Respondents
who agree with one of the "more difficult" attitude items will also
generally agree with the "less difficult" ones.
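
A quick way to see the Guttman logic is to check whether a respondent's answers fit the expected cumulative pattern; the sketch below assumes the items are already ordered from easiest to hardest to agree with, and the response patterns are hypothetical.

    # A "scalable" respondent agrees with every item up to some point and
    # with none of the harder items after it.
    def fits_guttman_pattern(responses):
        # responses: 1/0 answers ordered from easiest to hardest item
        first_disagree = responses.index(0) if 0 in responses else len(responses)
        return all(r == 0 for r in responses[first_disagree:])

    print(fits_guttman_pattern([1, 1, 1, 0, 0]))  # True: consistent pattern
    print(fits_guttman_pattern([1, 0, 1, 0, 0]))  # False: an "error" pattern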

Thurstone Scales

A Thurstone scale attempts to ensure that the items used in the


calculation of a scale score represent equally spaced intervals
across the range of an attitude.

Semantic Differential

The semantic differential presents respondents with a series of


adjective pairs to bring out the ways in which people respond to
some particular object.

Factor Analysis

Factor analysis is a statistical technique that may be used to


uncover patterns across a number of measures. It is especially
useful when a researcher has a large number of measures and when
there is uncertainty about how the measures are interrelated.

Conclusion

Measurement is a crucial aspect of political science research.


Researchers must ensure that their measurements are accurate,
precise, and valid. Multi-item measures, such as indexes, scales, and
factor analysis, can enhance the accuracy and precision of
measurements. However, researchers must carefully consider the
selection and combination of items, as well as the level of
measurement, to ensure that their measurements are reliable and
valid.
Chapter 7: Sampling

Introduction

"Democracy is in the blood of the Muslims, who have always


organized their affairs by consultation and representation." (Quaid-e-Azam Muhammad Ali Jinnah)

This chapter addresses the task of making empirical observations to


implement research design and test hypotheses. To test
hypotheses, researchers must decide what observations are
appropriate and whether to measure concepts for all or only some
pertinent observations.

Population or Sample?

A researcher's decision to collect data on a population or sample is


usually made on practical grounds. Collecting data on a population
provides accurate results, but is often costly and time-consuming.
Sampling is a more practical approach, but may yield less accurate
results.

The Basics of Sampling

A sample is a subset of a larger population. If selected properly,


information collected about the sample can be used to make
statements about the whole population.

Important Terms

- Element: The entity about which a researcher collects information


or the unit of analysis.
- Population: The collection of elements of interest to a researcher.
- Sampling frame: The population from which a sample is actually
drawn.
- Strata: Subdivisions or groups of similar elements within a
population.

Importance of Sampling Frames

A sampling frame should accurately represent the target population.


Lists of elements, such as university student lists or conference
attendee lists, may constitute a sampling frame, but may be out of
date, incorrect, or incomplete.

Conclusion

Sampling is a crucial aspect of research, allowing researchers to


make empirical observations and test hypotheses. Understanding
the basics of sampling, including key terms and the importance of
sampling frames, is essential for conducting accurate and reliable
research.

Types of Samples

Researchers make a basic distinction among types of samples


according to the amount of control over sample bias they provide.

Probability Samples

A probability sample is a sample for which each element in the total


population has a known probability of being selected. This allows a
researcher to calculate how accurately the sample reflects the
population from which it is drawn.

Nonprobability Samples


A nonprobability sample is a sample for which each element in the
population has an unknown probability of being selected, thus
preventing the calculation of how accurate the sample is.

Simple Random Samples

In a random sample, each element and combination of elements


must have an equal chance of being selected. A list of all the
elements in the population must be available, and a method of
selecting those elements must be used that ensures that each
element has an equal chance of being selected.
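
With a complete list of the population, a simple random sample is easy to draw; the population of 1,000 ID numbers and the sample size of 100 below are hypothetical.

    # Draw a simple random sample: every element has an equal chance.
    import random

    population = list(range(1, 1001))        # 1,000 hypothetical elements
    sample = random.sample(population, 100)  # sample without replacement
    print(sample[:10])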

Systematic Samples

Systematic sampling, an alternative to simple random sampling,


also requires a list of the target population. Elements are chosen off
the list systematically rather than randomly.
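
A systematic sample can be sketched as picking a random starting point and then every k-th element from the list; the population and sample size are again hypothetical.

    # Systematic sampling: random start, then every k-th element.
    import random

    population = list(range(1, 1001))
    sample_size = 100
    k = len(population) // sample_size   # sampling interval (10 here)

    start = random.randrange(k)          # random start within the first interval
    sample = population[start::k]
    print(len(sample), sample[:10])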

Stratified Samples

Stratified sampling takes advantage of the principle that the more


homogeneous the population, the easier it is to select from it a
representative sample.

Proportionate Sampling

To ensure a sample with each color represented in proportion to its


presence in the population, we would first stratify the balls
according to color.
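
A proportionate stratified sample draws from each stratum in proportion to its share of the population. The strata, counts, and overall sample size below are hypothetical.

    # Proportionate stratified sampling across hypothetical class-year strata.
    import random

    strata = {"freshman": 400, "sophomore": 300, "junior": 200, "senior": 100}
    population_size = sum(strata.values())
    sample_size = 100

    for stratum, count in strata.items():
        n_from_stratum = round(sample_size * count / population_size)
        members = [f"{stratum}_{i}" for i in range(count)]   # stand-in IDs
        drawn = random.sample(members, n_from_stratum)
        print(f"{stratum}: {n_from_stratum} of {count} drawn, e.g. {drawn[:2]}")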

Disproportionate Sampling

There may be occasions when we wish to take a disproportionate


sample. For example, suppose we are conducting a survey of 200
students at a college.
Cluster Samples

Cluster sampling involves dividing the population into groups or


clusters and then randomly selecting some of these clusters to be
included in the sample.

Telephone Samples

Telephone sampling involves selecting a sample of telephone


numbers from a telephone directory or a list of telephone numbers.

Nonprobability Samples

Nonprobability samples are used when it is not possible to select a


probability sample. There are several types of nonprobability
samples, including convenience samples, quota samples, and
snowball samples.

Types of Samples

Probability Samples

A probability sample is a sample for which each element in the total


population has a known probability of being selected.
Nonprobability Samples

A nonprobability sample is a sample for which each element in the


population has an unknown probability of being selected.

Specific Types of Samples

Simple Random Samples

In a random sample, each element and combination of elements


must have an equal chance of being selected.
Systematic Samples

Systematic sampling, an alternative to simple random sampling,


also requires a list of the target population.

Stratified Samples

Stratified sampling takes advantage of the principle that the more


homogeneous the population, the easier it is to select from it a
representative sample.

Cluster Samples

Cluster sampling involves dividing the population into groups or


clusters and then randomly selecting some of these clusters to be
included in the sample.

Telephone Samples

Telephone surveys are becoming a common sample survey practice.

Disproportionate Stratified Samples

Disproportionate stratified samples allow a researcher to represent


more accurately the elements in each stratum and ensure that the
overall sample is an accurate representation of important strata
within the target population.

Nonprobability Samples

Nonprobability samples are used when it is not possible to select a


probability sample.

Types of Nonprobability Samples

- Purposive or judgmental samples


- Convenience or haphazard samples
- Quota sampling
- Snowball sampling

Sample Information

Sample information provides estimates of attributes of, and


relationships within, the target population.

Sampling Error

Sampling error is the difference between the sample estimate and


the true population parameter.
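
For a sample proportion, sampling error is usually summarized as a margin of error at a given confidence level. The sketch below uses a hypothetical sample size and observed proportion; 1.96 is the standard multiplier for 95 percent confidence.

    # Margin of error for a sample proportion at the 95 percent confidence level.
    import math

    n = 1000        # hypothetical sample size
    p = 0.52        # hypothetical observed proportion (e.g., candidate support)

    standard_error = math.sqrt(p * (1 - p) / n)
    margin_of_error = 1.96 * standard_error

    print(f"Estimate: {p:.0%} +/- {margin_of_error:.1%}")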

Factors Affecting Sampling Error

- Sample size
- Type of sample
- Distribution of the attribute being measured
- Sampling fraction

Conclusion

When deciding whether to rely on a sample, consider the following


guidelines:

1. If cost is not a major consideration and the validity of one's measures will not suffer, it is generally better to collect data for one's entire target population.
2. If cost or validity considerations dictate that a sample be drawn, a
probability sample is usually preferable to a nonprobability sample.
3. Remember that probability samples yield estimates of the target
population.
4. Keep in mind that the accuracy of sample estimates is expressed
in terms of the margin of error and the confidence level.
Chapter 8: Making Empirical Observations: Direct
and Indirect Observation

Types of Data and Collection Techniques

Political scientists use three broad types of observations or data: interview data, data from archival records, and data collected through direct observation.

Interview data are collected through verbal or written questioning. This method may involve interviewing a representative sample of people or a select group of individuals, such as politicians.

Data from archival records come from existing information collected by government agencies, private organizations, or other entities, and may include statistics, documents, or other records.

Direct observation involves collecting data by observing people's behavior or physical traces of behavior. This method does not rely on verbal responses to verbal stimuli.

Advantages and Disadvantages of Data Collection Methods

The choice of data collection method depends on several factors, including validity, reactivity, population coverage, cost, and availability.

Validity refers to the accuracy of the measurements obtained through a particular method.

Reactivity refers to the effect of the data collection process on the phenomena being observed.

Population coverage refers to the extent to which a data collection method allows researchers to observe the behavior of a particular group of people.

Cost and availability refer to the financial and practical feasibility of using a particular data collection method.

Observation as a Method of Data Collection

Observation is a method of data collection that involves recording the behavior of people or physical traces of behavior.

Observation can be classified along four dimensions: direct or indirect, participant or nonparticipant, overt or covert, and structured or unstructured.

Direct observation involves observing behavior firsthand, while indirect observation involves observing physical traces of behavior.

Participant observation involves participating in the activities being observed, while nonparticipant observation involves observing from the outside.

Overt observation involves making the observation explicit, while covert observation involves disguising the observation.

Structured observation involves recording specific behaviors, while unstructured observation involves recording all behavior.
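
Structured observation is typically implemented with a predefined coding scheme that the observer uses to tally behaviors as they occur. The sketch below is a hypothetical tally for observing a council meeting, not a scheme from the text:

    from collections import Counter

    # Predefined behavior categories for a structured observation session.
    CATEGORIES = {"asks_question", "interrupts", "cites_evidence", "off_topic_remark"}

    def record(tally, behavior):
        """Tally one observed behavior, ignoring anything outside the coding scheme."""
        if behavior in CATEGORIES:
            tally[behavior] += 1

    # Hypothetical stream of coded observations from one meeting.
    observed = ["asks_question", "interrupts", "asks_question", "cites_evidence"]

    tally = Counter()
    for behavior in observed:
        record(tally, behavior)

    print(dict(tally))  # {'asks_question': 2, 'interrupts': 1, 'cites_evidence': 1}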

Direct Observation

Direct observation involves observing behavior firsthand. This method can be used in laboratory settings or natural settings.

Laboratory settings provide control over the environment, but may result in artificial behavior.

Natural settings provide a more realistic environment, but may be more difficult to control.

Field Studies

Field studies involve observing behavior in natural settings.

This method allows researchers to observe people for longer periods and in more realistic environments.

Field studies can be time-consuming and may require researchers to spend extended periods in the field.

Examples of Field Studies

William F. Whyte's study of life in an Italian slum, Street Corner Society, was based on three years of observation.

Marc Ross's study of political participation in Nairobi, Kenya, took more than a year of field observation.

Richard Fenno's study of the behavior of U.S. representatives in their districts involved intermittent visits over almost seven years.

Ruth Horowitz spent three years researching youth in an inner-city Chicano community.

Types of Observation

1. Direct Observation: Observing behavior firsthand, either in a laboratory or natural setting.
2. Indirect Observation: Observing physical traces of behavior.

Participant Observation

1. Participant Observer: The investigator participates in the activities being observed.
2. Nonparticipant Observer: The investigator observes from the outside.

Overt and Covert Observation

1. Overt Observation: The investigator's presence and intentions are known to the observed.
2. Covert Observation: The investigator's presence and intentions are hidden or disguised.

Structured and Unstructured Observation

1. Structured Observation: The investigator looks for and records specific behaviors.
2. Unstructured Observation: All behavior is considered relevant and recorded.

Advantages and Limitations of Observation

Advantages:

- Natural setting
- Opportunity to observe people for lengthy periods
- Accuracy and completeness of data

Limitations:

- Many significant instances of political behavior are not accessible for observation
- Lack of control over the environment
- Small number of cases
- Potential for biased or invalid data

Conducting Fieldwork

1. Gaining Access: Obtaining permission to observe and participate in the group or community.
2. Learning the Ropes: Understanding the norms, values, and
behaviors of the group or community.
3. Maintaining Relations: Building and maintaining relationships
with informants and other group members.
4. Leaving the Field: Ending the observation period and leaving the
group or community.

Ethical Considerations

1. Deception: The use of covert observation raises ethical concerns about deception and informed consent.
2. Harm to Subjects: Researchers must consider the potential harm to subjects and take steps to minimize it.
3. Going Native: Overidentifying with subjects or informants can lead to biased or invalid data.

Limitations of Participant Observation

Participant observation is limited by the small number of cases that are usually involved.

Conducting Fieldwork

Conducting fieldwork involves several stages, including gaining access, learning the ropes, maintaining relations, and leaving the field.

Gaining Access

Gaining access to those you wish to study can be challenging and may require permission or the assistance of a member of the subject group.

Learning the Ropes

Learning the ropes involves learning appropriate behavior and gaining acceptance from those being studied.

Maintaining Relations

Maintaining relations involves building upon the rapport established with the observed and taking care not to speak or act in a way that will damage this rapport.

Leaving the Field

Leaving the field can be problematic and may involve terminating relationships and coping with personal obligations that may have developed during the study.

Problems with Leaving the Field


Several problems associated with leaving the field are discussed,
including the possibility that research findings will offend the
observed and the need to honor promises of anonymity and
confidentiality.

Taking Field Notes

Note taking can be divided into three types: mental notes, jotted
notes, and field notes.

Indirect Observation

Indirect observation, the observation of physical traces of behavior, is essentially detective work.

Erosion Measures

An example of an erosion measure is the selective wearing of floor tiles in a museum, which indicates which exhibits attract the most visitors.

Accretion Measures

Accretion measures, created by the deposition and accumulation of materials, can also be used.

Validity Problems with Indirect Observation

Although physical-trace measures generally are not as subject to reactivity as are participant observation and survey research, threats to the validity of these measures do exist.

Ethical Issues in Observation

Ethical dilemmas arise primarily when there is a potential for harm to the observed.

Harm to the Observed

The potential for harm to those being observed is the greatest in covert observation studies, which, by definition, involve an invasion of privacy.

Protecting the Observed

Protecting the observed against harm and assessing the potential for harm to the observed prior to starting observation may be difficult.

Conclusion

Observation is an important research method for political scientists. Observational studies may be direct or indirect. Indirect observation is less common but has the advantage of being a nonreactive research method.
