
RESEARCH METHODOLOGY

Causal Research Design: Experimentation

CHAPTER-7
Section (PGDM-3)
Group-4

CHITTE NIKUNJ
(2024-2206-0001-0003)
BONTI TAMULI
(2024-2405-0001-0005)
DEEPU SHARMA
(2024-0306-0001-0020)
GANESH MAIN
(2024-2506-0001-0006)
Concept of Causality: Causality in marketing research refers to inferring a
probabilistic relationship between cause and effect, acknowledging that
multiple variables may influence the outcome. Unlike its everyday usage,
causality in science can never be conclusively proved, only inferred. The
true cause may remain unidentified, and experimentation helps uncover such
relationships under specific conditions.

Conditions for Causality: To infer causality, three conditions must be
satisfied: (1) concomitant variation (the cause and the effect occur or vary
together), (2) time order of occurrence (the cause precedes, or occurs
simultaneously with, the effect), and (3) elimination of other possible
causal factors. While these conditions are necessary, they alone do not
prove causation. Strong, consistent evidence and controlled experiments can
strengthen causal hypotheses.
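
As a minimal sketch of how concomitant variation might be checked in
practice (the monthly figures below are purely hypothetical, not from the
chapter), a correlation coefficient shows whether two variables move
together:

# Checking concomitant variation with a Pearson correlation.
# The ad-spend and sales figures are hypothetical, for illustration only.
from statistics import mean

ad_spend = [10, 12, 15, 18, 20, 25]        # hypothetical monthly advertising spend
sales    = [100, 108, 120, 131, 140, 158]  # hypothetical monthly sales

def pearson_r(x, y):
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(round(pearson_r(ad_spend, sales), 3))
# A high correlation demonstrates concomitant variation only; on its own it
# satisfies just the first condition and does not establish causality.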

Experimentation: An experiment involves manipulating independent variables
(e.g. price levels, advertising) and measuring their effect on dependent
variables (e.g. sales, profits), while controlling extraneous variables
(e.g. store size, location). Test units (e.g. consumers, stores) are the
entities whose response to the independent variables is measured. An
experimental design specifies the test units, the treatments, the dependent
variables, and how the extraneous variables are to be controlled.

Definitions and Concepts: In experimentation, independent variables are
manipulated to assess their effects on dependent variables, which measure
the outcomes. Test units are the subjects of the experiment, while
extraneous variables are uncontrolled factors that can affect the results.
An experimental design outlines the procedures for sampling, manipulation,
measurement, and control of these variables.

Definitions of Symbols: In marketing research, a standard set of symbols is
used to describe experimental designs: X denotes exposure of a group to an
independent variable or treatment, O represents an observation or
measurement of the dependent variable, and R indicates random assignment of
test units to groups. Movement from left to right indicates the progression
of time; horizontal alignment means the symbols refer to the same treatment
group, while vertical alignment means the activities occur at the same time.
Experiments aim for internal validity (correct conclusions about cause) and
external validity (generalizability of the results).
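
To illustrate the notation, the pretest-posttest control group design
discussed later in this summary can be written as:

Experimental group:  R   O1   X   O2
Control group:       R   O3        O4

Both groups are randomly assigned (R) and measured before and after (O1 to
O4), but only the experimental group is exposed to the treatment (X).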

Validity in Experimentation: Internal validity assesses whether the changes
in the dependent variable are actually caused by the manipulation of the
independent variables, rather than by other factors. It requires controlling
extraneous variables to ensure correct conclusions about causality. External
validity examines whether the findings can be generalized beyond the
experimental setting to other populations, times, and contexts. Achieving
both types of validity often involves trade-offs, because the controls that
improve internal validity may reduce external validity.

Extraneous Variables: Several categories of extraneous variables can
confound experimental results:

1. History: external events occurring at the same time as the experiment,
such as an economic downturn affecting sales. If a promotional campaign
shows no change in sales, the result might be due to these external events
rather than the campaign itself, making the true impact of the experiment
hard to determine.

2. Maturation: natural changes in the test units over time, like people
aging or stores evolving, which can affect the results.

3. Testing effects: initial measurements influence later ones, potentially
skewing results, especially if respondents adjust their attitudes to stay
consistent.

4. Instrumentation: changes in the measurement tools or methods affect the
results, making it hard to determine whether observed differences are due to
the treatment or to these changes.

5. Statistical regression: extreme scores naturally move toward the average
over time. For example, if some people initially have very high or very low
attitudes toward a brand, their opinions might become more moderate by the
end of the experiment, making it hard to attribute this change solely to the
treatment (see the sketch after this list).

6. Selection bias: the groups in an experiment differ from the start,
affecting the results.

7. Mortality: participants drop out during the experiment, which can distort
the findings if those who drop out differ from those who remain.

Selection bias and mortality in particular can complicate interpreting the
experiment's true effects.
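
A minimal simulation of statistical regression, using made-up attitude
scores (the numbers and the 70-point cutoff are illustrative assumptions,
not from the chapter):

# Regression toward the mean: respondents with extreme pretest scores drift
# back toward the average at posttest even with no treatment at all.
import random

random.seed(1)
true_attitude = [random.gauss(50, 10) for _ in range(1000)]
pretest  = [t + random.gauss(0, 10) for t in true_attitude]   # noisy measurement
posttest = [t + random.gauss(0, 10) for t in true_attitude]   # independent noise

extreme = [i for i, p in enumerate(pretest) if p > 70]        # extreme scorers
avg_pre  = sum(pretest[i]  for i in extreme) / len(extreme)
avg_post = sum(posttest[i] for i in extreme) / len(extreme)
print(round(avg_pre, 1), round(avg_post, 1))
# The extreme group's posttest average falls back toward 50 purely because of
# measurement noise, which is why such a shift cannot be credited to a treatment.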

Controlling Extraneous Variables: Randomization assigns participants or
conditions randomly to experimental groups so that the groups are similar,
reducing the impact of extraneous variables (see the sketch after this
paragraph). Matching compares participants on key variables before assigning
them to groups, to ensure they are similar across treatment conditions. Both
methods aim to improve the fairness and accuracy of experimental results.
Statistical control adjusts for the impact of extraneous variables by
analysing their effects separately, often using methods like ANCOVA. Design
control structures the experiment itself to account for specific extraneous
variables, ensuring they don't affect the results. Both approaches help
isolate the true effect of the treatment being studied.
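
A minimal sketch of random assignment, assuming a hypothetical list of 20
participant IDs (the IDs, seed, and group sizes are illustrative, not from
the chapter):

# Randomization: shuffle the test units and split them into two equal groups.
import random

participants = [f"P{i:02d}" for i in range(1, 21)]   # 20 hypothetical test units
random.seed(7)                # fixed seed only so the example is reproducible
random.shuffle(participants)

midpoint = len(participants) // 2
treatment_group = participants[:midpoint]
control_group   = participants[midpoint:]

print("Treatment:", treatment_group)
print("Control:  ", control_group)
# With random assignment, extraneous variables are expected to be spread
# roughly equally across the groups, rather than biasing one of them.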

A Classification of Experimental Designs: Experimental designs are
categorized into four types: pre-experimental (no randomization, like
one-shot case studies), true experimental (random assignment, such as the
pretest-posttest control group design), quasi-experimental (partial control
over variables, like time series), and statistical designs (which use
statistical methods to control and analyse variables). Each type varies in
how it controls for extraneous factors and analyses the results.

Pre-experimental Designs: Pre-experimental, true experimental, and
quasi-experimental designs are methods used to examine cause-and-effect
relationships, each with a different level of control. Pre-experimental
designs are basic and exploratory, often lacking randomization and control
groups. For example:

1. One-Shot Case Study: A group receives a treatment, and results are
measured afterward. With no comparison group or pretest, it is difficult to
determine whether the treatment caused the effect.

2. Static Group Design: Two groups are used - one receives the treatment,
the other does not - but without random assignment, pre-existing differences
between the groups can affect the results.

True experimental designs provide more control through randomization,
reducing bias and offering clearer results. Examples include:

1. Pretest-Posttest Control Group Design: Groups are tested both before and
after the treatment, controlling for many extraneous variables (see the
worked example after this list).

2. Posttest-Only Control Group Design: Groups are tested only after the
treatment, avoiding some biases from pretesting but leaving the possibility
of pre-existing group differences.
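
As a worked illustration of the pretest-posttest control group design, with
purely hypothetical attitude scores, the treatment effect is estimated by
subtracting the control group's change from the experimental group's change:

# Hypothetical scores (not from the chapter), using the O1-O4 notation above.
O1, O2 = 10, 16   # experimental group: before and after the treatment (X)
O3, O4 = 10, 12   # control group: before and after, no treatment

treatment_effect = (O2 - O1) - (O4 - O3)   # (16 - 10) - (12 - 10) = 4
print(treatment_effect)
# Subtracting the control group's change removes extraneous effects, such as
# history and maturation, that acted on both groups equally.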

Quasi-experimental designs offer some control but lack randomization. For
example, time series designs track results over a period of time, which
helps control for certain factors, though they still leave the experiment
open to external influences like historical events. These designs are often
quicker and more practical, but have weaker internal validity.
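
In the symbol notation introduced earlier, a basic time series design takes
a series of periodic measurements on the same group, with the treatment
introduced partway through, for example:

O1  O2  O3  O4  O5  X  O6  O7  O8  O9  O10

Comparing the pattern of observations before and after X suggests whether
the treatment had an effect, although events occurring between measurements
(history) remain uncontrolled.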

Statistical Designs: Statistical designs, like the randomised block, Latin
square, and factorial designs, provide more advanced control and analysis by
handling multiple variables. The randomised block design controls for one
external variable by grouping test units into blocks, while the Latin square
design controls for two. Factorial designs allow interactions between
multiple independent variables to be measured. These designs are efficient
but become complicated, especially with many variables.
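
A minimal sketch of the idea behind a Latin square: with three treatments
(A, B, C) and two blocking variables of three levels each, every treatment
appears exactly once in each row and each column. The 3x3 layout below is a
generic illustration, not an example from the chapter:

# Generic 3x3 Latin square built by cyclic rotation; rows and columns stand
# for the two external (blocking) variables, entries for the treatments.
treatments = ["A", "B", "C"]
n = len(treatments)

square = [[treatments[(row + col) % n] for col in range(n)] for row in range(n)]
for row in square:
    print("  ".join(row))
# Prints:
# A  B  C
# B  C  A
# C  A  B
# Because each treatment occurs once per row and once per column, the effects
# of both blocking variables can be separated from the treatment effect.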

Laboratory Versus Field Experiments: Laboratory experiments offer a high
degree of control and internal validity, but may suffer from artificiality,
leading to reactive error and lower external validity. They are often less
expensive and easier to conduct. Field experiments, though more realistic
and more generalisable, offer less control. The Internet allows controlled
experimentation in a laboratory-like setting. Causal research designs are
preferable for establishing cause-and-effect relationships compared with
descriptive designs, which lack control over the variables and the timing of
the measurements.

Experimental Versus Non-experimental Designs: Causal (experimental) designs
are best suited for inferring cause-and-effect relationships. Descriptive
research has limitations in establishing causality and in controlling
variables.

Limitations of Experimentation: Experiments are costly and time-consuming,
and are difficult to administer because of extraneous variables,
interference with normal operations, and potential contamination of
treatment groups. Experimentation in marketing research therefore has
limitations of time, cost, and administration. Measuring long-term effects,
like those of advertising, can be time-consuming, while the need for control
groups and multiple measurements adds to costs. Administering experiments is
challenging, especially in field settings, where controlling external
variables is difficult and cooperation from partners is hard to secure.
Internationally, these challenges are amplified by differences in
infrastructure and regulation. Despite these issues, test marketing and
other forms of controlled experiments remain viable ways of gaining insights
and refining strategies.

Marketing Research and Social Media: Marketing research in virtual worlds
and on social media is cost-effective and allows controlled experiments, but
the findings may differ from those in real-world settings and require
thorough testing for external validity.

Mobile Marketing Research: Mobile marketing research (MMR) allows controlled
experiments to be conducted through mobile devices, similar to
Internet-based methods. Different experimental treatments can be displayed
on different websites, and respondents recruited to complete questionnaires.
However, MMR operates in a laboratory-type environment and faces
limitations, particularly for conducting surveys, as discussed in Chapter 6.

Ethics in Marketing Research: Disguising the purpose of the research, as in
a study on Rice Krispies advertisements, helps avoid biased responses.
Ethical guidelines require informing participants that the experiment is
disguised, allowing them to opt out, and debriefing them afterwards. Proper
debriefing reveals the true purpose of the study, helps ensure the validity
of the research, and addresses potential participant concerns to alleviate
any stress caused.
