Expe Midterm Reviewer

The document outlines the characteristics and types of hypotheses in experimental research, including analytic, contradictory, testable, falsifiable, and synthetic statements. It discusses the inductive and deductive models of reasoning, the importance of operational definitions, and the evaluation of reliability and validity in measurement techniques. Additionally, it emphasizes the significance of building on prior research and the role of serendipity and intuition in generating hypotheses.

FORMULATING THE HYPOTHESIS

Hypothesis
- the thesis, or main idea, of an experiment
- a statement about a predicted relationship between at least 2 variables
- designed to fit the type of research design that has been selected

Nonexperimental hypothesis
- a statement of your predictions of how events, traits, or behaviors might be related; not a statement about cause and effect
- some nonexperimental designs, particularly those that do not restrict subjects' responses, do not typically include a hypothesis
- other nonexperimental designs, such as correlational and quasi-experimental studies, generally include hypotheses about predicted relationships between variables

THE CHARACTERISTICS OF AN EXPERIMENTAL HYPOTHESIS
- each experimental hypothesis is a tentative explanation of an event or behavior
- a statement that explains the effects of specified antecedent conditions on a measured behavior

Synthetic statements
- statements that can be either true or false
- each experimental hypothesis must be a synthetic statement so that there is some chance it is true and some chance it is false
- Analytic statement = one that is always true
- Contradictory statement = a statement with elements that oppose each other; always false
- a hypothesis meets the definition of a synthetic statement when it can be stated in what is known as the "if...then..." form

Testable statements
- the means for manipulating antecedent conditions and measuring the resulting behavior must exist
- untestable hypotheses are not necessarily useless

Falsifiable statements
- must be disprovable by the research findings
- hypotheses need to be worded so that failure to find the predicted effect must be considered evidence that the hypothesis is indeed false

Parsimonious statements
- a simple hypothesis is preferred over one that requires many supporting assumptions

Fruitful statements
- a fruitful hypothesis leads to new studies

THE INDUCTIVE MODEL
- the process of reasoning from specific cases to more general principles
- by examining individual instances, we may be able to construct an overall explanatory scheme to describe them
- research hypotheses often come from the use of inductive reasoning
- the basic tool of theory building
- Theory = a set of general principles that can be used to explain and predict behavior
- through induction, researchers construct theories by taking bits of empirical data and forming general explanatory schemes to accommodate those facts

THE DEDUCTIVE MODEL
- the process of reasoning from general principles to make predictions about specific instances
- most useful when we have a well-developed theory with clearly stated basic premises

COMBINING INDUCTION AND DEDUCTION
- in practice, these approaches are not so neatly separated
- through induction, we devise general principles and theories that can be used to organize, explain, and predict behavior until more satisfactory principles are found; through deduction, we rigorously test the implications of those theories

BUILDING ON PRIOR RESEARCH
- the most useful way of finding hypotheses is by working from research that has already been done
- sometimes, nonexperimental studies can suggest cause-and-effect explanations that can be translated into experimental hypotheses
- if you do not already have a specific hypothesis in mind, you will find the experimental literature useful in focusing your thinking on important issues

SERENDIPITY AND THE WINDFALL HYPOTHESIS

Serendipity
- the knack of finding things that are not being sought
- appeared in the work of Ivan Pavlov
- can be useful in generating new hypotheses only when we are open to new possibilities
- not just a matter of luck; it is also a matter of knowing enough to use an opportunity

INTUITION
- may be defined as knowing without reasoning
- closest to phenomenology
- we have a hunch about what might happen in a particular situation, so we set up an experiment to test it
- guides what we choose to study
- intuition is most accurate if it comes from experts
- intuition should not destroy objectivity

WHEN ALL ELSE FAILS
- one method is to pick a psychology journal from your library's shelves and just read through an issue
- observation
- turn your attention to a real-world problem and try to figure out what causes it

SEARCHING THE RESEARCH LITERATURE

Getting started
- once you have decided on a hypothesis, you will want to become familiar with other published studies in your topic area
- Psychological journals = periodicals that publish individual research reports and integrative research reviews
- Meta-analysis = a statistical reviewing procedure that uses data from many similar studies to summarize research findings about individual topics

Writing the report
- Introduction = consists of a selective review of relevant, recent research
- Discussion = integrates your experiment into the existing body of literature on your topic

Finding the articles you need
- PsycINFO
- PsycARTICLES
- SAGE Full Text

THE BASICS OF EXPERIMENTATION

INDEPENDENT AND DEPENDENT VARIABLES
- Variables = measurable elements that can vary or take on different values along some dimension
- Independent variable (IV) = the dimension that the experimenter intentionally manipulates; it is the antecedent the experimenter chooses to vary
- Levels of the independent variable = the multiple experimental conditions an independent variable takes on (2 levels = 2 different treatments, and so forth)
- in a true experiment, we test the effects of a manipulated IV, not the effects of different kinds of subjects
- Dependent variable (DV) = the particular behavior we expect to change because of our experimental treatment; it is the outcome we are trying to explain

OPERATIONAL DEFINITIONS
- Conceptual definition = the definition of a variable as used in everyday language
- Operational definition = specifies the precise meaning of a variable within an experiment; defines a variable in terms of observable operations, procedures, and measurements

Defining the IV: Experimental operational definitions
- Experimental operational definitions = explain the precise meaning of the independent variables
- these definitions describe exactly what was done to create the various treatment conditions of the experiment

Defining the DV: Measured operational definitions
- Measured operational definitions = describe exactly what procedures we follow to assess the impact of different treatment conditions
- these definitions include exact descriptions of the specific behaviors or responses recorded and explain how those responses are scored

Defining Constructs operationally
- Hypothetical constructs = unseen processes postulated to explain behavior (such as anxiety); they cannot be observed directly
- many psychological variables are hypothetical constructs
- we infer their existence from behaviors that we can observe

Defining Nonconstruct variables
- operational definitions are equally important when we are working with variables that can be observed more directly

Defining scales of measurement
- many variables can be measured in more than one way
- the measurement alternatives differ in the degree of information they provide (see the sketch below)
- when we have a choice between different levels of measurement, researchers generally choose the highest level possible because it provides more information about a variable
- interval and ratio data are more powerful than nominal and ordinal
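The point that measurement alternatives carry different amounts of information can be made concrete in code. The snippet below is a small illustrative sketch; the speech-anxiety example and all values are assumptions, not taken from the reviewer.

```python
# Hypothetical sketch: one participant's speech anxiety recorded at three
# levels of measurement. All names and values are illustrative assumptions.

# Nominal: category only (anxious vs. not anxious) -- least information
anxiety_nominal = "anxious"

# Ordinal: ordered categories (low < moderate < high) -- adds rank order,
# but not how far apart the categories are
anxiety_ordinal = "moderate"

# Ratio: heart rate in beats per minute has equal intervals and a true zero,
# so it carries the most information (interval data would lack the true zero)
anxiety_ratio = 96.0

# Higher-level data can always be collapsed downward, but never recovered upward
anxiety_from_ratio = "high" if anxiety_ratio > 100 else ("moderate" if anxiety_ratio > 80 else "low")
print(anxiety_nominal, anxiety_ordinal, anxiety_ratio, anxiety_from_ratio)
```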
EVALUATING OPERATIONAL DEFINITIONS

RELIABILITY
- means consistency and dependability
- if we apply them in more than one experiment, they ought to work in similar ways each time

Procedures for checking the reliability of measurement techniques

Interrater reliability
- to have different observers take measurements of the same responses (see the sketch below)
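A minimal sketch of an interrater check, assuming two observers have categorized the same set of responses. The reviewer only says that different observers measure the same responses; the made-up codings and the use of percent agreement plus Cohen's kappa below are illustrative choices, not the reviewer's procedure.

```python
import numpy as np

# Hypothetical codings of the same 10 responses by two independent observers
rater_a = np.array(["help", "ignore", "help", "help", "ignore", "help", "help", "ignore", "help", "help"])
rater_b = np.array(["help", "ignore", "help", "ignore", "ignore", "help", "help", "ignore", "help", "help"])

# Simple percent agreement between the two observers
p_observed = np.mean(rater_a == rater_b)

# Cohen's kappa (an assumption; one common chance-corrected agreement index)
categories = np.union1d(rater_a, rater_b)
p_expected = sum(np.mean(rater_a == c) * np.mean(rater_b == c) for c in categories)
kappa = (p_observed - p_expected) / (1 - p_expected)

print(f"percent agreement = {p_observed:.2f}, Cohen's kappa = {kappa:.2f}")
```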

Test-retest reliability
- comparing scores of people who have been measured twice with the same instrument
- they take the test once, then they take it again (after a reasonable interval); see the sketch below
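A brief sketch of the test-retest idea, assuming made-up scores for the same people tested twice. Correlating the two administrations is a common way to express their consistency, though the reviewer does not name a specific statistic.

```python
import numpy as np
from scipy import stats

# Hypothetical scores for 8 participants tested twice, two weeks apart
time_1 = np.array([12, 18, 9, 22, 15, 30, 11, 25])
time_2 = np.array([14, 17, 10, 21, 16, 28, 12, 26])

# A high positive correlation across the two administrations suggests the
# instrument yields consistent (reliable) scores over time.
r, p = stats.pearsonr(time_1, time_2)
print(f"test-retest r = {r:.2f}")
```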
Interitem reliability
- the extent to which different parts of a questionnaire, test, or other instrument designed to assess the same variable attain consistent results
- scores on different items designed to measure the same construct should be highly correlated

2 major approaches in evaluating interitem reliability
- Split-half reliability = involves splitting the test into 2 halves at random and computing a coefficient of reliability between the scores obtained on the 2 halves
- using statistical tests such as Cronbach's alpha
- both approaches measure the internal consistency of the test items (see the sketch below)
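Both approaches can be computed from an ordinary participants-by-items score matrix. The sketch below uses made-up data; splitting at random and correlating the halves follows the description above, while applying the Spearman-Brown formula to the half-test correlation is a common companion step that the reviewer does not mention.

```python
import numpy as np

# Hypothetical responses: 6 participants (rows) x 4 items (columns),
# all items intended to measure the same construct.
scores = np.array([
    [4, 5, 4, 5],
    [2, 1, 2, 2],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [1, 2, 1, 2],
    [4, 4, 3, 4],
], dtype=float)
n_items = scores.shape[1]

# Split-half reliability: correlate totals from two randomly assigned halves,
# then (an assumption) step the half-test correlation up with Spearman-Brown.
rng = np.random.default_rng(0)
order = rng.permutation(n_items)
half_a = scores[:, order[: n_items // 2]].sum(axis=1)
half_b = scores[:, order[n_items // 2:]].sum(axis=1)
r_half = np.corrcoef(half_a, half_b)[0, 1]
split_half = 2 * r_half / (1 + r_half)  # Spearman-Brown correction

# Cronbach's alpha: internal consistency from item and total-score variances
item_vars = scores.var(axis=0, ddof=1)
total_var = scores.sum(axis=1).var(ddof=1)
alpha = (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)

print(f"split-half (Spearman-Brown) = {split_half:.2f}, Cronbach's alpha = {alpha:.2f}")
```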

VALIDITY
- the principle of actually studying the variables that we intend to study
- Manipulation check = providing evidence for the validity of an experimental procedure

Face validity
- the degree to which an assessment or test subjectively appears to measure the variable or construct that it is supposed to measure
- considered the least stringent type of validity because it does not provide any real evidence

Content validity
- depends on whether we are taking a fair sample of the variable we intend to measure
- "does the content of our measure fairly reflect the content of the quality we are measuring? are all aspects of the content represented appropriately?"
- depends on the variable we want to measure
- the more specific the variable, the easier it will be

Predictive validity
- do our procedures yield information that enables us to predict future behavior or performance?

Concurrent validity
- compares scores on the measuring instrument with an outside criterion
- comparative rather than predictive
- evaluated by comparing scores on the measuring instrument with another known standard for the variable being studied

Construct validity
- deals with the transition from theory to research application
- the most important aspect of validity
- Convergent validity = how closely a test is related to other tests that measure the same or similar constructs
- Discriminant validity = the extent to which a test is not related to other tests that measure different constructs (see the sketch below)
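Convergent and discriminant validity are commonly examined by correlating a new measure with established tests of the same construct and of different constructs. The sketch below uses made-up scores and Pearson correlations as an illustration; it is not a procedure specified in the reviewer.

```python
import numpy as np
from scipy import stats

# Hypothetical scores for 8 participants on a new anxiety questionnaire,
# an established anxiety test (same construct), and a vocabulary test
# (a different construct). All values are made up for illustration.
new_anxiety = np.array([10, 25, 14, 30, 18, 22, 8, 27])
known_anxiety = np.array([12, 23, 15, 28, 20, 21, 9, 26])
vocabulary_test = np.array([48, 49, 40, 47, 51, 41, 43, 41])

convergent_r, _ = stats.pearsonr(new_anxiety, known_anxiety)    # expected to be high
discriminant_r, _ = stats.pearsonr(new_anxiety, vocabulary_test)  # expected to be near zero

print(f"convergent r = {convergent_r:.2f}, discriminant r = {discriminant_r:.2f}")
```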
