Shivanand RM Chapter 1 2 and 4 Notes

The document provides an overview of research methodology, focusing on key concepts such as research problems, hypotheses, variables, and sampling techniques. It discusses various methods of data collection, observational methods, and survey research designs, highlighting their applications, strengths, and limitations. Additionally, it covers experimental designs, internal and external validity, and threats to validity in research.



EP 203: RESEARCH METHODOLOGY


CHAPTER 1, CHAPTER 2, & CHAPTER 4 NOTES

Shivanand R. Thorat
[email protected]
https://youtube.com/shivanandthorat

UNIT 1
OVERVIEW OF RESEARCH PROCESS AND SURVEY RESEARCH

1. Overview of Basic Research Concepts:


1.1 Research Problem:

It is an interrogative statement that expresses a relationship between two or more
variables.

1.2 Hypothesis: A hypothesis is a tentative statement/explanation for a phenomenon. It
attempts to answer questions such as why and how.

Characteristics of a good scientific hypothesis: Testability and Verifiability.

Three types of hypotheses fail the testability test:

i) When constructs are inadequately defined.
ii) When the hypothesis is circular (e.g., the chicken-or-egg causality dilemma).
iii) When it appeals to ideas not recognized by science.

1.3 Goals of Scientific Research (DPEA)

i) Description
ii) Prediction
iii) Explanation
iv) Application

1.4 Variables:

A variable is something that can be changed and varied, such as a characteristic or a value.

Variables are generally used in psychology experiments to determine if changes to one thing
result in changes to another.

• Independent variable (IV): the variable that is controlled and manipulated by the
experimenter.

• Dependent variable (DV): the variable that is measured by the experimenter.


• Controlled variables (CV): the variables that can impact the dependent variable but are not
the main interest of the researcher. Thus, they are held constant in the experiment.
• Extraneous variables: If the researcher fails to control a variable that can impact the
dependent variable, that variable is called an extraneous variable.
• Confounding variable: A variable that varies along with the independent variable and can
also impact the dependent variable, so its effect cannot be separated from that of the IV.
• Intervening variable: a hypothetical internal state/variable used to explain the
relationship between the IV and DV. Intervening variables are not real things; they are
interpretations of observed facts, not facts themselves, but they create the illusion of being
facts.

Example:

Poverty (IV) → Poor healthcare (intervening variable) → Low life expectancy (DV)

Poverty results in low life expectancy, but poverty does not directly impact life
expectancy: poverty leads to poor healthcare, and poor healthcare in turn leads to low life
expectancy. In other words, to explain the impact of poverty on life expectancy, the
researcher takes the help of an intervening variable (poor healthcare).

• Latent variable: A latent variable is a variable that cannot be observed. The presence of
latent variables, however, can be detected by their effects on variables that are observable.
For example: confidence, dedication, perseverance, morality, ethics, etc.

1.5 Operational Definition:

A statement of the procedures or ways in which a researcher is going to measure behaviors or
qualities. An operational definition assigns meaning to a construct or a variable by specifying
the activities or operations necessary to measure it.

1.6 APA style of preparing a research report (TAI che MR DR Ahe):

Title page
Abstract
Introduction
Method
Results
Discussion
References
Appendices (if any)

2. Sampling techniques:

Meaning of population: Set of all cases of interest.


Meaning of sampling frame: List of the confirmed members of the population.
Meaning of sample: A subset of population drawn from the sampling frame.

Selection bias: occurs when the procedures used to select the sample result in the
overrepresentation of some segment of the population or, conversely, in the exclusion or
underrepresentation of a significant segment.

Approaches to sampling
A) Probability Sampling Methods:

Each element has the same chance/probability of being included in the sample.

i) Simple random sampling: the fishbowl draw method. Overrepresentation or
underrepresentation of a stratum may occur; stratified random sampling overcomes this
problem.

ii) Stratified random sampling:


Proportionate stratified random sampling:

Population (N = 100): 60 males, 40 females → Sample (n = 10): 6 males, 4 females

Disproportionate (equal proportion) stratified random sampling:

Population (N = 100): 60 males, 40 females → Sample (n = 10): 5 males, 5 females

iii) Systematic sampling: Every nth person is selected in the sample (as long as the starting
point is randomized, it is a probability sampling).
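The "every nth person" rule can be sketched as follows; the helper name and the interval formula `len(frame) // n` are illustrative assumptions, not part of the notes:

```python
import random

def systematic_sample(frame, n, seed=0):
    """Select every k-th element of the sampling frame from a random starting point."""
    k = len(frame) // n                        # sampling interval
    start = random.Random(seed).randrange(k)   # random start keeps this a probability method
    return [frame[start + i * k] for i in range(n)]

frame = list(range(100))   # a sampling frame of 100 numbered members
picked = systematic_sample(frame, 10)
```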

iv) Area or cluster sampling: Cluster sampling is a probability sampling method in which you
divide a population into clusters, such as districts or schools, and then randomly select some of
these clusters as your sample.
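The cluster procedure can be sketched with invented data (the schools and their sizes are hypothetical):

```python
import random

def cluster_sample(clusters, n_clusters, seed=0):
    """Randomly choose whole clusters, then take every element of each chosen cluster."""
    chosen = random.Random(seed).sample(sorted(clusters), n_clusters)
    return [element for name in chosen for element in clusters[name]]

# 8 hypothetical schools of 5 students each; sample 3 whole schools
clusters = {f"school_{i}": [(i, j) for j in range(5)] for i in range(8)}
picked = cluster_sample(clusters, 3)
```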

B) Non-probability Sampling Methods:


Not every element necessarily has an equal chance/probability of being included in the
sample.

i) Accidental sampling: Selecting respondents primarily on the basis of their availability and
willingness to participate in the study.

ii) Purposive sampling or judgmental sampling: Elements in the sample are included on the
basis of their characteristics. Sample characteristics and the purpose of the study match with
each other.

iii) Systematic sampling: Every nth person is selected in the sample (with a preassigned,
non-random starting point, which makes it non-probability here)

iv) Snowball sampling: Existing study subjects recruit future subjects from among their
acquaintances.

v) Quota sampling: Same as stratified random sampling but the selection from the strata in the
sample is not random but with researcher’s convenience.

vi) Dense sampling: When the researcher selects more than 50% of the cases from the sampling
frame as the sample, it is called dense sampling.

vii) Saturation sampling: Refers to the point in data collection when no additional issues or
insights are identified and data begin to repeat so that further data collection is redundant,
signifying that an adequate sample size is reached.

viii) Double sampling: Double sampling is taking a second set of samples in a one-stage survey
because the retrospective power of the test did not meet design objectives.

3. Methods of data collection: (OM-PTI)


i) Observation – naturalistic, controlled, participatory, non-participatory

ii) Mail surveys (offline/hard copies)

Although mail surveys are quick and convenient, there may be a problem with the
response rate when individuals fail to complete and return the survey.

Due to problems with the response rate, the final sample for a mail survey may not
represent the population.

iii) Personal interviews

Although costly, personal interviews allow researchers to gain more control over how
the survey is administered.

Interviewer bias occurs when survey responses are recorded inaccurately or when
interviewers guide individuals’ responses.

iv) Telephonic interviews

Despite some disadvantages, telephone interviews are used frequently for brief surveys.

v) Internet survey

The Internet offers several advantages for survey research because it is an efficient,
low-cost method for obtaining survey responses from large, potentially diverse, and
underrepresented samples.

Disadvantages associated with Internet survey research include the potential for
response rate bias and selection bias, and lack of control over the research
environment.

4. Observational Methods

4.1 Observation without Intervention

• Direct observation of behavior in a natural setting without any attempt by the observer to
intervene is frequently called naturalistic observation.
• The goals of naturalistic observation are to describe behavior as it normally occurs and to
examine relationships among variables.
• Naturalistic observation helps to establish the external validity of laboratory findings.
• When ethical and moral considerations prevent experimental control, naturalistic
observation is an important research strategy.
• Online behavior can be observed without intervention.

4.2 Observation with Intervention

• Most psychological research uses observation with intervention.


• The three methods of observation with intervention are participant observation, structured
observation, and the field experiment.
• Whether “undisguised” or “disguised,” participant observation allows researchers to
observe behaviors and situations that are not usually open to scientific observation.
• If individuals change their behavior when they know they are being observed (“reactivity”),
their behavior may no longer be representative of their normal behavior.

• Participant Observation: In participant observation, observers play a dual role: they
observe people’s behaviour and they participate actively in the situation they are observing.
• In undisguised participant observation, individuals who are being observed know that the
observer is present for the purpose of collecting information about their behavior. This
method is used frequently by anthropologists who seek to understand the culture and
behavior of groups by living and working with members of the group.
• In disguised participant observation, those who are being observed do not know they are
being observed.

• Structured Observation: There are a variety of observational methods using intervention
that are not easily categorized. These procedures differ from naturalistic observation
because researchers intervene to exert some control over the events they are observing.

• Field Experiments: When a researcher manipulates one or more independent variables in
a natural setting in order to determine the effect on behavior, the procedure is called a field
experiment.

4.3 Indirect (unobtrusive) observational methods

• An important advantage of indirect observational methods is that they are nonreactive.


• Indirect, or unobtrusive, observations can be obtained by examining physical traces and
archival records.

Details of indirect observation:



5. Survey Research Designs


Uses of Survey
• Survey research is used to assess people’s thoughts, opinions, and feelings.
• Surveys can be specific and limited in scope or more global in their goals.
• Careful selection of a survey sample allows researchers to generalize findings from the
sample to the population.

i) Cross sectional design: (different sample; same time)


• One or more samples are drawn from the population “at one time”.
• Allows researchers to describe the characteristics of a population or the differences between
two or more populations.
• Correlational findings from cross-sectional designs allow researchers to make predictions.

ii) Successive independent samples design: (different sample; different time)


• In the successive independent sample design, different samples of the respondents from the
population complete the survey over time.
• It allows researchers to study changes in population over time.
• It does not allow researcher to find how individual respondents have changed over time.
• A problem occurs when the samples drawn from the population are not comparable.

iii) Longitudinal design: (same sample; different time)


• In the longitudinal design the same respondents are surveyed over time in order to examine
changes in individual respondents.
• Because of the correlational nature of survey data, it is difficult to identify the causes of
individuals’ changes over time.
• As people drop out of the study over time (attrition), the final sample may no longer be
comparable to the original sample or represent the population.

6. Problems, issues, and applications of survey research


Survey research is a widely used method in psychology to collect data from a large number of
individuals in a relatively short period of time. However, like any research method, it has its
own set of problems, issues, and applications. Here are some of the key ones:

Sampling Bias: One of the major issues with survey research is sampling bias. This occurs
when the sample of participants is not representative of the population as a whole. This can
lead to inaccurate conclusions about the population being studied.

Response Bias: Another issue with survey research is response bias, which occurs when
participants provide inaccurate or incomplete responses. This can happen due to social
desirability bias, where participants may respond in a way they think is socially acceptable, or
due to demand characteristics, where participants may respond in a way they think the
researcher wants them to.

Question Wording: The wording of survey questions can also be problematic. Poorly worded
questions can lead to confusion among participants, leading to inaccurate or incomplete
responses. It is essential to use clear and concise language and avoid leading or loaded
questions.

Validity and Reliability: Survey research is only as good as the questions being asked. The
questions must be valid, meaning they measure what they are intended to measure, and reliable,
meaning they produce consistent results over time.

Applications: Survey research can be used in a variety of applications in psychology, including


clinical research, market research, and social psychology research. It is a versatile and efficient
method for gathering data.

In conclusion, survey research is a valuable method for collecting data in psychology, but it is
not without its problems and issues. Researchers must be aware of the limitations of the method
and take steps to minimize bias and increase the validity and reliability of their results.

UNIT 2
EXPERIMENTAL DESIGNS –
INDEPENDENT GROUPS DESIGNS, REPEATED MEASURES DESIGNS,
& COMPLEX DESIGNS

Why do researchers conduct experiments?


• Researchers conduct experiments to test hypotheses about the causes of behaviour.
• Experiments allow researchers to determine whether a treatment or program effectively
changes behaviour.

Logic of experimental research


• Researchers manipulate an independent variable in an experiment to observe the effect on
behavior, as assessed by the dependent variable.
• Experimental control allows researchers to make the causal inference that the independent
variable caused the observed changes in the dependent variable.
• Control is the essential ingredient of experiments; experimental control is gained through
manipulation, holding conditions constant, and balancing.
• An experiment has internal validity when it fulfils the three conditions required for causal
inference: covariation, time-order relationship, and elimination of plausible alternative
causes.
• When confounding occurs, a plausible alternative explanation for the observed covariation
exists, and therefore, the experiment lacks internal validity. Plausible alternative
explanations are ruled out by holding conditions constant and balancing.

1. Independent Groups Designs (Between Groups Designs):


In an independent groups design, each group of subjects participates in only one condition of
the independent variable. There are three types of independent groups designs:

a) Random Groups Design (use when a large number of participants is available)

• Random assignment to conditions is used to form comparable groups by balancing or
averaging subject characteristics (individual differences) across the conditions of the
independent variable manipulation.

• When random assignment is used to form independent groups for the levels of the
independent variable, the experiment is called a random groups design.
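Random assignment to conditions can be sketched as follows (an illustrative helper, not from the notes: shuffle the pool, then deal participants into conditions in turn):

```python
import random

def random_groups(participants, conditions, seed=0):
    """Shuffle the pool, then deal participants into conditions in turn."""
    pool = list(participants)
    random.Random(seed).shuffle(pool)
    groups = {condition: [] for condition in conditions}
    for i, participant in enumerate(pool):
        groups[conditions[i % len(conditions)]].append(participant)
    return groups

# 30 participants randomly split into two comparable groups of 15
groups = random_groups(range(30), ["control", "treatment"])
```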

b) Matched Groups Design (use when fewer participants are available)

• A matched groups design may be used to create comparable groups when there are too few
subjects available for random assignment to work effectively.
• Matching subjects on the dependent variable (as a pretest) is the best approach for creating
matched groups, but scores on any matching variable must correlate with the dependent
variable.
• After subjects are matched on the matching variable, they should then be randomly
assigned to the conditions of the independent variable.
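The matching-then-randomizing procedure described above can be sketched as follows (a hypothetical helper; subjects are blocked into matched sets by pretest rank and then shuffled within each set):

```python
import random

def matched_groups(pretest_scores, n_conditions, seed=0):
    """Rank subjects on the pretest, block them into matched sets, then randomize within sets."""
    ranked = sorted(pretest_scores, key=pretest_scores.get)  # subjects ordered by pretest score
    groups = [[] for _ in range(n_conditions)]
    rng = random.Random(seed)
    for i in range(0, len(ranked), n_conditions):
        matched_set = ranked[i:i + n_conditions]
        rng.shuffle(matched_set)                 # random assignment within the matched set
        for group, subject in zip(groups, matched_set):
            group.append(subject)
    return groups

pretest_scores = {f"s{i}": 100 - 3 * i for i in range(12)}  # 12 subjects, 3 conditions
groups = matched_groups(pretest_scores, 3)
```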

c) Natural Groups Design (use when the IV is naturally or artificially categorical; thus,
matching or random assignment is not possible)

• Researchers are interested in independent variables that are called individual differences
variables, or subject variables.
• For example, religious affiliation is an individual differences variable. Researchers can’t
manipulate this variable by randomly assigning people to Catholic, Jewish, Muslim,
Protestant, or other groups. Instead, researchers “control” the religious affiliation variable
by systematically selecting individuals who naturally belong to these groups. Thus, such an
experimental design is called a natural groups design.

1.1 Internal validity:

Internal validity is the degree to which differences in performance on a dependent variable can
be attributed clearly and unambiguously to an effect of an independent variable, as opposed to
some other uncontrolled variable. These uncontrolled variables are often referred to as threats
to internal validity.

An experiment gains internal validity through the following three techniques of experimental
control:

a) Manipulation (of IV)

b) Holding conditions constant (controlled variables)

c) Balancing (Groups are balanced/made comparable)

1.2 Threats to internal validity (and their solutions):

1) Pre-existing differences between participants (Balancing)

2) Presence of different extraneous variables in experimental conditions (Block


randomization: increases internal validity by balancing extraneous variables across
conditions of the independent variable)

3) Subject loss – selective subject loss; mechanical subject loss. Subject loss destroys the
comparable groups that are essential to the logic of the random groups design and can
thus render the experiment uninterpretable. (Solution: compare pretest scores of lost and
remaining participants.)

4) Demand characteristics (Placebo control group)

5) Experimenter effects (Double blind experiment)

1.3 External Validity:

The findings of an experiment have external validity when they can be applied to other
individuals (samples), settings (location/place), and conditions (different experimental
conditions) beyond the scope of the specific experiment.

- How to increase external validity of an experiment?

1) Replication (Same experiment conducted on different samples).


2) Field experiment (Other settings)
3) Partial replication (Replication of the experiment with certain changes)

2. Repeated Measures Design (within groups design):

2.1 Why researchers use repeated measures designs:

Researchers choose to use a repeated measures design in order to (1) conduct an experiment
when few participants are available, (2) conduct the experiment more efficiently, (3) increase
the sensitivity of the experiment, and (4) study changes in participants’ behaviour over time.

2.2 The role of practice effects in repeated measures designs:

• Repeated measures designs cannot be confounded by individual differences variables
because the same individuals participate in each condition (level) of the independent
variable.
• Participants’ performance in repeated measures designs may change across conditions
simply because of repeated testing (not because of the independent variable); these changes
are called practice effects.
• Practice effects may threaten the internal validity of a repeated measures experiment when
the different conditions of the independent variable are presented in the same order to all
participants.
• The two types of repeated measures designs, complete and incomplete, differ in the specific
ways they control for practice effects.

2.3 Balancing Practice Effects in the Complete Design

• Practice effects are balanced in complete designs within each participant using block
randomization or ABBA counterbalancing.
• In block randomization, all of the conditions of the experiment (a block) are randomly
ordered each time they are presented.
• In ABBA counterbalancing, a random sequence of all conditions is presented, followed by
the opposite of the sequence.
• Block randomization is preferred over ABBA counterbalancing when practice effects are
not linear, or when participants’ performance can be affected by anticipation effects.
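Both counterbalancing schemes can be sketched as follows (illustrative helpers; the seeds are fixed only for reproducibility):

```python
import random

def block_randomized_order(conditions, n_blocks, seed=0):
    """Each block presents all conditions once, in a freshly randomized order."""
    rng = random.Random(seed)
    order = []
    for _ in range(n_blocks):
        block = list(conditions)
        rng.shuffle(block)
        order.extend(block)
    return order

def abba_order(conditions, seed=0):
    """One random sequence of all conditions, followed by its mirror image."""
    rng = random.Random(seed)
    sequence = list(conditions)
    rng.shuffle(sequence)
    return sequence + sequence[::-1]

order = block_randomized_order(["A", "B", "C"], n_blocks=4)
sequence = abba_order(["A", "B", "C", "D"])
```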

2.4 Balancing Practice Effects in the Incomplete Design

• Practice effects are balanced across subjects in the incomplete design rather than for each
subject, as in the complete design.
• The rule for balancing practice effects in the incomplete design is that each condition of
the experiment must be presented in each ordinal position (first, second, etc.) equally often.
• The best method for balancing practice effects in the incomplete design with four or fewer
conditions is to use all possible orders of the conditions.
• Two methods for selecting specific orders to use in an incomplete design are the Latin
Square and random starting order with rotation.
• Whether using all possible orders or selected orders, participants should be randomly
assigned to the different sequences.
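The "all possible orders" and Latin square approaches can be sketched as follows. Note this is the simple rotation square, in which each condition fills each ordinal position once; a balanced Latin square, which also balances immediate sequence effects, requires a different construction:

```python
from itertools import permutations

def all_possible_orders(conditions):
    """Practical only with four or fewer conditions (n! sequences)."""
    return [list(order) for order in permutations(conditions)]

def latin_square(conditions):
    """Rotation square: every condition occupies every ordinal position exactly once."""
    n = len(conditions)
    return [[conditions[(row + col) % n] for col in range(n)] for row in range(n)]

orders = all_possible_orders(["A", "B", "C", "D"])   # 4! = 24 sequences
square = latin_square(["A", "B", "C", "D"])          # 4 sequences suffice
```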

2.5 The problem of differential transfer:

Differential transfer occurs when the effects of one condition persist and influence performance
in subsequent conditions.

Variables that may lead to differential transfer should be tested using a random groups design
because differential transfer threatens the internal validity of repeated measures designs.

Differential transfer can be identified by comparing the results for the same independent
variable when tested in a repeated measures design and in a random groups design.

2.6 To sum up,

There are two types of repeated measures designs: complete repeated measures design and
incomplete repeated measures design.

Advantages

a) Fewer Ss are required


b) Efficiency
c) Sensitivity
d) We can study changes in participants’ behavior over time (trace individual change)

3. Complex Designs (Factorial Designs)

Complex designs can also be called factorial designs because they involve the factorial
combination of two or more independent variables. Factorial combination involves pairing each
level of one independent variable with each level of a second independent variable. This makes
it possible to determine the effect of each independent variable alone (main effect) and the
effect of the independent variables in combination (interaction effect).

3.1 Effects in a Complex Design

• Researchers use complex designs to study the effects of two or more independent variables
in one experiment.
• In complex designs, each independent variable can be studied with an independent groups
design or with a repeated measures design.
• The simplest complex design is a 2 x 2 design—two independent variables, each with two
levels.
• The number of different conditions in a complex design can be determined by multiplying
the number of levels for each independent variable (e.g., 2 x 2 = 4).
• More powerful and efficient complex designs can be created by including more levels of
an independent variable or by including more independent variables in the design.

An example of 2 x 2 complex design: Values are mean number of correct responses.

2 x 2 design Task Difficulty

Anxiety Level Easy Hard

Low 3.3 3.3

High 5.6 1.2

IV 1: Anxiety levels: Low and high

IV 2: Task difficulty: Easy and hard

DV: Correct responses



Effects:

Main effect 1: Anxiety level: High vs Low

Main effect 2: Task difficulty: Easy vs Hard

Interaction effect: Anxiety level * Task difficulty
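Using the cell means from the table above, the main effects and the subtraction method can be worked out directly (a worked illustration of the same table, not additional data):

```python
# Cell means from the 2 x 2 table above (mean number of correct responses)
means = {("low", "easy"): 3.3, ("low", "hard"): 3.3,
         ("high", "easy"): 5.6, ("high", "hard"): 1.2}

# Main effect of anxiety: collapse across task difficulty
low_anxiety = (means[("low", "easy")] + means[("low", "hard")]) / 2     # 3.3
high_anxiety = (means[("high", "easy")] + means[("high", "hard")]) / 2  # 3.4

# Main effect of task difficulty: collapse across anxiety level
easy = (means[("low", "easy")] + means[("high", "easy")]) / 2   # 4.45
hard = (means[("low", "hard")] + means[("high", "hard")]) / 2   # 2.25

# Subtraction method: effect of difficulty at each anxiety level
effect_at_low = means[("low", "easy")] - means[("low", "hard")]     # 0.0
effect_at_high = means[("high", "easy")] - means[("high", "hard")]  # about 4.4

# Unequal simple effects signal an interaction (to be confirmed with inferential statistics)
interaction_present = abs(effect_at_low - effect_at_high) > 1e-9
```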

3.2 Main Effects and Interaction Effects

• The overall effect of each independent variable in a complex design is called a main effect
and represents the differences among the average performance for each level of an
independent variable collapsed across the levels of the other independent variable.
• An interaction effect between independent variables occurs when the effect of one
independent variable differs depending on the levels of the second independent variable.

3.3 Describing Interaction Effects

• Evidence for interaction effects can be identified using descriptive statistics presented in
graphs (e.g., nonparallel lines) or tables (subtraction method).
• The presence of an interaction effect is confirmed using inferential statistics.

3.4 Interaction Effects and Ceiling and Floor Effects

When participants’ performance reaches a maximum (ceiling) or a minimum (floor) in one
or more conditions of an experiment, results for an interaction effect are uninterpretable.

3.5 To summarize, complex designs are:

• Used when we have more than one IV.


• Simplest complex design is 2 x 2 design.
• We can study main effects and interaction effects.

UNIT 4
QUASI EXPERIMENTAL DESIGNS AND SCALING

1. What are Quasi-Experimental Designs:

Problems That Even True Experiments May Not Control

• Threats to internal validity that can occur in any study include contamination, experimenter
expectancy effects, and novelty effects.
• Contamination occurs when information about the experiment is communicated between
groups of participants, which may lead to resentment, rivalry, or diffusion of treatment.
• Experimenter expectancy effects occur when researchers’ biases and expectancies
unintentionally influence the results of a study.
• Novelty effects, including Hawthorne effects, occur when people’s behavior changes
simply because an innovation (e.g., a treatment) produces excitement, energy, and
enthusiasm.
• Threats to external validity occur when treatment effects may not be generalized beyond
the particular people, setting, treatment, and outcome of the experiment.

Quasi-experiments:

• Quasi-experiments provide an important alternative when true experiments are not
possible.
• Quasi-experiments lack the degree of control found in true experiments; most notably,
quasi-experiments typically lack random assignment.
• Researchers must seek additional evidence to eliminate threats to internal validity when
they do quasi-experiments rather than true experiments.

2. Types of Quasi-Experimental Designs

a) One-group pretest-posttest design

• The one-group pretest-posttest design is called a pre-experimental design, or a bad
experiment, because it has so little internal validity.

O1 X O2

Where,

O1 = Observation one

O2 = Observation two

X = Intervention

b) Non-equivalent control group design

• In the nonequivalent control group design, a treatment group and a comparison group are
compared using pretest and posttest measures.
• If the two groups are similar in their pretest scores prior to treatment but differ in their
posttest scores following treatment, researchers can more confidently make a claim about
the effect of treatment.
• Threats to internal validity due to history, maturation, testing, instrumentation, and
regression can be controlled in a nonequivalent control group design.

O1 X O2
----------------
O1 O2

c) Interrupted Time-Series Designs

• In a simple interrupted time-series design, researchers examine a series of observations
both before and after a treatment.
• Evidence for treatment effects occurs when there are abrupt changes (discontinuities) in
the time-series data at the time treatment was implemented.

• The major threats to internal validity in the simple interrupted time-series design are
history effects and changes in measurement (instrumentation) that occur at the same time
as the treatment.

Threats to Internal Validity in the Nonequivalent Control Group Design

• To interpret the findings in quasi-experimental designs, researchers examine the study to
determine if any threats to internal validity are present.
• The threats to internal validity that must be considered when using the nonequivalent
control group design include additive effects with selection, differential regression,
observer bias, contamination, and novelty effects.
• Although groups may be comparable on a pretest measure, this does not ensure that the
groups are comparable in all possible ways that are relevant to the outcome of the study.

c1) Simple interrupted time-series design

Simple interrupted time-series design is possible when researchers can observe changes in a
dependent variable for some time before and after a treatment is introduced.

O1 O2 O3 O4 O5 X O6 O7 O8 O9 O10

c2) Time series with nonequivalent control group

In a time series with nonequivalent control group design, researchers make a series of
observations before and after treatment for both a treatment group and a comparable
comparison group.

O1 O2 O3 O4 O5 X O6 O7 O8 O9 O10
-------------------------------------------------------------

O1 O2 O3 O4 O5 O6 O7 O8 O9 O10
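The logic of comparing discontinuities in the treated and comparison series can be sketched with toy data (the series values below are invented purely for illustration):

```python
def treatment_shift(series, t):
    """Mean change from the observations before treatment to those after it."""
    pre, post = series[:t], series[t:]
    return sum(post) / len(post) - sum(pre) / len(pre)

# Invented data: O1-O5 before treatment (X), O6-O10 after
treated    = [10, 11, 10, 11, 10, 16, 17, 16, 17, 16]  # abrupt discontinuity at X
comparison = [10, 11, 10, 11, 10, 10, 11, 10, 11, 10]  # no discontinuity

# Treated series jumps by about 6; the comparison series does not shift
effect = treatment_shift(treated, 5) - treatment_shift(comparison, 5)
```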

The Issue of External Validity:

• Similar to internal validity, the external validity of research findings must be critically
examined.
• The best evidence for the external validity of research findings is replication with different
populations, settings, and times.

3. Discontinuity Promotion Design:

• Discontinuity promotion designs (also known as regression discontinuity designs) are a
type of quasi-experimental research design used to evaluate the effects of an
intervention or treatment. This design is based on the idea that a treatment can be assigned
to individuals based on whether they fall above or below a certain cutoff score on a
pre-existing variable, such as a test score or age.
• In a discontinuity promotion design, participants who score just above the cutoff score are
assigned to the treatment group, while those who score just below the cutoff score are
assigned to the control group. By comparing the outcomes of these two groups, researchers
can estimate the causal effect of the treatment.
• The key advantage of discontinuity promotion designs is that the assignment rule is
completely known, which allows researchers to estimate causal effects even though
participants are not randomly assigned to conditions. This design is particularly useful in
situations where it is not ethical or practical to randomly assign participants to a
treatment group, such as in educational or policy interventions.
• Discontinuity promotion designs are a valuable tool for evaluating the causal effects of
interventions in situations where random assignment is not possible.
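The cutoff-based comparison can be sketched with invented data. This is a naive difference of means near the cutoff; real regression discontinuity analyses fit regressions on each side of the cutoff rather than comparing raw means:

```python
def cutoff_comparison(scores, outcomes, cutoff, bandwidth):
    """Compare mean outcomes just above vs. just below the cutoff score."""
    above = [y for x, y in zip(scores, outcomes) if cutoff <= x < cutoff + bandwidth]
    below = [y for x, y in zip(scores, outcomes) if cutoff - bandwidth <= x < cutoff]
    return sum(above) / len(above) - sum(below) / len(below)

# Invented data: a flat baseline outcome of 20, plus a jump of 5 for treated scores
scores = list(range(40, 61))
outcomes = [20 + (5 if s >= 50 else 0) for s in scores]
estimate = cutoff_comparison(scores, outcomes, cutoff=50, bandwidth=5)  # 5.0
```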

4. Cohort Designs:
• Cohort research design is a type of observational study that involves following a group of
people (a cohort) over a period of time to observe changes in their health or other outcomes.
The cohort is defined by a common characteristic, such as age, gender, or exposure to a
specific risk factor.
• Cohort studies can be prospective or retrospective.
• In prospective studies, participants are followed forward in time from the start of the study.

• In retrospective studies, data is collected from records or archives for a specific time period
in the past.
• Cohort studies are useful in investigating the incidence of diseases and their potential risk
factors, and they can provide important insights into the natural history of a disease or
condition. They are particularly useful in studying rare exposures or outcomes that would
be difficult to study in a randomized controlled trial.
• However, cohort studies can be time-consuming and expensive, and there may be issues
with loss to follow-up or changes in the cohort over time. Additionally, it can be difficult
to establish causality in cohort studies, as there may be confounding variables that are
difficult to control for.
• Cohort studies are a valuable research design that can provide important insights into the
causes and natural history of diseases and conditions, but they must be carefully designed
and executed to minimize potential biases and confounding factors.

5. Program Evaluation:

• Program evaluation is used to assess the effectiveness of human service organizations and
provide feedback to administrators about their services.
• Program evaluators assess needs, process, outcome, and efficiency of social services.
• The relationship between basic research and applied research is reciprocal.
• Despite society’s reluctance to use experiments, true experiments and quasi-experiments
can provide excellent approaches for evaluating social reforms.
• Program evaluation comprises research methodology used to evaluate the need for human
services, the implementation of those services, the effect of the services on people who are
served, and the efficiency of the services. The overarching goal of program evaluation is to
provide feedback regarding human service activities.

5.1 Types of Program Evaluation:

• In assessing the effectiveness of an intervention, there are two main types of program
evaluation: process and outcome. A third, rarely used type is developmental evaluation.

• Process evaluation: also referred to as formative evaluation, it is used to assess whether
the program has reached the intended audience (as identified in the intervention hypothesis)
and whether the activities of the program (as outlined in the logic model of the program)
have been carried out as planned.

• Process evaluation attempts to answer the question: “Is the program being implemented in
the way in which it was planned or not?”

• Outcome evaluations are usually conducted after process evaluations.

• Outcome evaluation (also known as summative evaluation) assesses “how well a
program meets its objectives” (i.e., short-term outcomes as described in the program logic
model), and in a more comprehensive evaluation, it also assesses “how well the program is
achieving its goals” (i.e., long-term outcomes, also part of the logic model).

• Developmental evaluation: used when interventions are in the early stages of innovation
or in highly complex situations, such as poverty or homelessness, where the causes of and
solutions to the problem are ambiguous and intervention stakeholders are not on the same
page.

5.2 Reciprocal Relationship between Basic and Applied Research:
