Class 2
1
Outline
• Teams
• Readiness Assessment 1
• Introduction to Path Analysis
• Model Specification
• Path diagrams
• Equations
• Mplus syntax
• Direct and Indirect Effects
• Homework 1 Assigned
2
Teams
3
Teams
• Take a moment for introductions and sharing contact information
Pasangi Perera Sarah Halvorson-Fried Emmanuel Amoako Thuy Dao Abby Morrison
Carolina Ruiz Quetzabel Benavides Eliana Armora Langoni Kenny Harris Danya Krueger
Dicky Baruah Imani Johnson Joan Wanyama Leiming Ding Martina Spain
4
Readiness Assessment 1
5
Readiness Assessment 1
1) Access Readiness Assessment 1 on Canvas > Quizzes
2) Complete Individual Readiness Assessment 1 on your own
3) Gather with your team to complete Team Readiness Assessment 1
6
Introduction to Path Analysis
7
Path Analysis
• Path analysis (PA) is a relatively simple type of SEM that examines
hypothesized relationships among observed variables (not latent
variables).
• The purpose of path analysis is to use empirical data to test a
theoretical model.
8
Path Analysis
• Unlike SEM with latent variables, PA uses observed variables as IVs
and DVs (i.e., “single-indicator technique”)
• Using observed variables means that all constructs are treated as if
they are perfectly measured (i.e., they do not contain error).
9
Path Analysis
• Path analysis is a building block of more sophisticated uses of SEM, so
it is a good starting place.
• Even though it is a relatively simple form of SEM, it has many
advantages over Ordinary Least Squares (OLS) regression (i.e.,
traditional multiple regression in SPSS, Stata, etc.).
10
Path Analysis – Advantages over OLS Reg.
• Simultaneous equations
o Models can have more than one dependent variable
o Variables can be dependent in one equation and independent in others
o Variables can have reciprocal effects
• More sophisticated tests of model quality
• More sophisticated tests of categorical moderators
11
Path Analysis and Causal Inference
• Path modeling may be referred to as causal modeling, BUT…
• There are 3 core requirements for inferring causality between two
variables:
• the cause precedes the effect
• the cause and effect are correlated
• other possible explanations can be ruled out
12
Path Model Example
[Path diagram: exogenous variables x1 and x2 (joined by a double-headed arrow); endogenous variables y1, y2, and y3, each with a disturbance (z1, z2, z3). x1 predicts y1 and y2; x2 predicts y3; y2 predicts y1 and y3; y3 predicts y1.]
x1: x1 is an observed variable. It is not predicted by any other variable in the model, so it is an "exogenous" variable. x1 predicts two other variables, y1 and y2. We are used to calling predictor variables independent variables.
x2: x2 is an observed variable. It is not predicted by any other variable in the model, so it is an "exogenous" variable. x2 predicts one other variable, y3.
The double-headed arrow between x1 and x2: A double-headed arrow designates a correlation or association between two variables. Kline (2005) calls a correlation an "unanalyzed" relationship between two variables. Remember: a double-headed arrow does NOT indicate that one variable predicts another. x1 and x2 are correlated; neither predicts the other. Because no other variables predict x1 or x2, they are both exogenous in this model.
y1: y1 is a dependent variable in the model because it is predicted by other observed variables (x1, y2, and y3). In SEM parlance, a dependent variable is an "endogenous" variable.
y2: y2 is a dependent variable in the model because it is predicted by another observed variable (x1). In SEM parlance, a dependent variable is an "endogenous" variable.
y3: y3 is a dependent variable in the model because it is predicted by other observed
variables (x2 and y2). In SEM parlance, a dependent variable is an “endogenous”
variable.
The single-headed arrows (x1 → y1, x1 → y2, x2 → y3, y2 → y3, y2 → y1, and y3 → y1): Each single-headed arrow represents a hypothesized directional effect of the variable at its tail on the variable at its head.
z1, z2, and z3: Each z term represents the part of the scores on the corresponding endogenous variable that is unexplained by its predictors: z1 for y1 (predicted by x1, y2, and y3), z2 for y2 (predicted by x1), and z3 for y3 (predicted by x2 and y2). Each is the error term in the regression equation predicting that variable. These error terms are also called "errors of prediction" or "disturbances."
The single-headed arrows from z1, z2, and z3 to y1, y2, and y3: Each arrow from an error term to an endogenous variable indicates that scores on that variable are partially explained by error; the rest is explained by the observed variables that predict it in the model.
13
Path Analysis – Model Types
• Two types of path models (Kline, 2015):
• Recursive: all causal paths are unidirectional (no reciprocal effects),
and disturbance terms (errors) are not correlated.
• Nonrecursive
14
Recursive Path Model Example
[Path diagram of a recursive model: x1 → y1 (γ11), x1 → y2 (γ21), y2 → y1 (β12); disturbances ζ1 and ζ2, with paths to y1 and y2 fixed at 1.]
15
Nonrecursive Path Model
• Nonrecursive (Type 1): the model contains reciprocal (two-way causal
or feedback) effects between one or more pairs of endogenous
variables
16
Nonrecursive: Direct Feedback Loop
[Path diagram of a nonrecursive model with a direct feedback loop: x1 → y1 (γ11), x2 → y2 (γ22), x1 ↔ x2 (φ21), y2 → y3 (β32), and reciprocal paths y1 → y3 (β31) and y3 → y1 (β13); disturbances for y1, y2, and y3 with paths fixed at 1.]
X1: Income
X2: Occupation prestige
Y1: Perceived income
Y2: Perceived occupation prestige
Y3: Perceived overall status
17
Nonrecursive Path Model
• Nonrecursive (Type 2): the model contains a bow pattern of correlated disturbances (i.e., two disturbances are correlated, and there is a directional path between the corresponding endogenous variables)
18
Nonrecursive: Bow Pattern
[Path diagram of a nonrecursive model with a bow pattern: x1 → y1 (γ11), x2 → y2 (γ22), y1 → y3 (β31), y2 → y3 (β32), x1 ↔ x2 (φ21); the disturbances of y1 and y3 are correlated (ψ13).]
19
Note: Recursive
• A model with a Bow-free pattern of correlated disturbances (i.e., two
disturbances are correlated but there is no link between the
corresponding endogenous variables) is considered recursive.
20
Recursive (No Bow)
[Path diagram of a recursive model with a bow-free pattern: x1 → y1 (γ11), x2 → y2 (γ22), y2 → y3 (β32), x1 ↔ x2 (φ21); the disturbances of y1 and y3 are correlated (ψ13), but there is no directional path between y1 and y3.]
21
Path Analysis
• Set of hypothesized relationships among observed variables.
• Relatively simple for SEM, but more sophisticated than traditional
regression:
o Models with multiple dependent variables and variables that are dependent
and independent can be tested.
o Reciprocal relationships among variables and error terms can be tested.
• Two types: recursive and nonrecursive
• Advantages make it a good choice for many analyses
22
Model Specification:
Path Diagrams
23
Specification
• The term “specification” simply means to describe or define the
model we plan to analyze. A “model” is a set of relationships among
variables. In path analysis, all of the variables are observed variables,
that is, variables in our dataset with values for our sample members.
• If you have ever run a regression, ANOVA, or exploratory factor
analysis, you specified a model when you gave the program
instructions for what variables to include and which to treat as
independent or dependent variables.
24
Specification in SEM
• There are many ways to specify models in SEM. By the end of the
course you will be able to specify various models using path diagrams,
equations, matrices, and Mplus syntax. You will also be able to
translate any one specification format into another.
25
Specification with Path Diagrams
• Squares for observed variables
• Circles for latent (unobserved) variables, including disturbances
• Single-headed arrows for hypothesized directional paths
• Double-headed arrows for correlations (unanalyzed associations)
26
Path Diagram of a Path Analysis
[Path diagram: exogenous variables x1 and x2 (joined by a double-headed arrow); endogenous variables y1, y2, and y3, each with a disturbance (z1, z2, z3). x1 predicts y1 and y2; x2 predicts y3; y2 predicts y1 and y3; y3 predicts y1.]
(The notes for this diagram are the same as those for slide 13.)
27
Path Diagram with Greek Notation
[Path diagram: x1 → y1 (γ11), x1 → y2 (γ21), x2 → y3 (γ32), y2 → y1 (β12), y3 → y1 (β13), y2 → y3 (β32), x1 ↔ x2 (φ12); disturbances ζ1, ζ2, ζ3 with paths to y1, y2, and y3 fixed at 1.]
γ11 is gamma one-one. It designates the coefficient for the path TO endogenous variable 1 (y1) FROM exogenous variable 1 (x1). Gammas relate to paths from exogenous to endogenous variables.
γ21 is gamma two-one. It designates the coefficient for the path TO endogenous variable 2 (y2) FROM exogenous variable 1 (x1).
γ32 is gamma three-two. It designates the coefficient for the path TO endogenous variable 3 (y3) FROM exogenous variable 2 (x2).
β12 is beta one-two. It designates a path TO endogenous variable 1 FROM endogenous variable 2. Betas relate to paths between endogenous variables.
β32 is beta three-two. It designates a path TO endogenous variable 3 FROM endogenous variable 2.
β13 is beta one-three. It designates a path TO endogenous variable 1 FROM endogenous variable 3.
ζ1 is the error term associated with y1. The predictors x1, y2, and y3 are unlikely to explain all the variance in y1; leftover or unexplained variance is defined as error in SEM. Between the predictors x1, y2, y3, and ζ1, all of the variance of y1 is explained.
ζ2 is the error term associated with y2. x1 is unlikely to explain all the variance in y2; between x1 and ζ2, all of the variance of y2 is explained.
ζ3 is the error term associated with y3. x2 and y2 are unlikely to explain all the variance in y3; between x2, y2, and ζ3, all of the variance of y3 is explained.
Paths from ζ1, ζ2, ζ3 with small 1's: The path from a zeta to an endogenous variable is fixed at 1 automatically by most SEM programs. With an error term, we are interested in what percentage of variance in the associated variable is explained by error, not in the path coefficient itself. By fixing the path, we obtain an estimate of the variance of zeta. We'll learn more about this later.
28
Greek Notation for a Nonrecursive Model
[Path diagram of the direct feedback loop model: x1 → y1 (γ11), x2 → y2 (γ22), x1 ↔ x2 (φ21), y2 → y3 (β32), and reciprocal paths y1 → y3 (β31) and y3 → y1 (β13); disturbances z1, z2, z3 with paths fixed at 1.]
X1: Income
X2: Occupation prestige
Y1: Perceived income
Y2: Perceived occupation prestige
Y3: Perceived overall status
Note the order of the numbers in the subscripts for the reciprocal paths. The first number refers to the dependent variable.
29
Another Nonrecursive Model
[Path diagram of the bow-pattern model: x1 → y1 (γ11), x2 → y2 (γ22), y1 → y3 (β31), y2 → y3 (β32), x1 ↔ x2 (φ21); the disturbances of y1 and y3 are correlated (ψ13).]
X1: Income
X2: Occupation prestige
Y1: Perceived income
Y2: Perceived occupation prestige
Y3: Perceived overall status
Note the Greek letters on the double-headed correlation arrows: φ (phi) for the correlation between two observed variables, and ψ (psi) for the correlation between two disturbances.
30
Practice with Greek in Path Diagrams
You'll need the following: phi (φ12), gamma (γ11, γ12), beta (β21), zeta (ζ1, ζ2)
31
Specification with Path Diagrams: Summary
• Path diagrams offer a precise way to represent path models. They
consist of single- and double-headed arrows, squares, and circles.
• The Greek letter beta (β) is used for paths between two endogenous
variables and gamma (γ) is used for paths from an exogenous variable
to an endogenous variable. Squares represent the exogenous and
endogenous variables and indicate they are observed variables.
• The Greek letter zeta (ζ) signifies the dependent variable's disturbance or error, which is unexplained variance. Zetas are represented with circles because they are latent, or unobserved, variables.
32
Model Specification:
Equations
33
Specification
• We’ve seen how to specify a model with a path diagram; now we’ll
see how to do so with equations.
• The Greek we used to label components of the path diagram will
help.
34
Path Analysis Specification with Equations
• The ability of SEM to estimate multiple equations simultaneously
makes it a valuable tool for models with hypothesized mediators.
• An OLS regression model permits only one dependent variable.
• In OLS, testing mediation hypotheses therefore requires running multiple models: one for each variable that is predicted by other variables in the model.
35
OLS Regression Specification with Equations
• An OLS regression model can be specified with one equation in which
the outcome variable (y) is predicted by an intercept, one or more
predictors (independent variables), and an error term.
36
OLS Regression Specification with Equations
• Using typical notation the equation might be:
yi = a + b1(x1i) + b2(x2i) + ei
• Where yi is the outcome for an individual, a is the intercept, b1 and b2 are the effects of the first and second predictors, x1i and x2i are the individual's scores on the two predictors, and ei is the difference between the individual's observed score on y and the score the model predicts for that individual.
37
Specification with Equations
[Path diagram: x1 → y1 (γ11), x1 → y2 (γ21), x2 → y3 (γ32), y2 → y1 (β12), y3 → y1 (β13), y2 → y3 (β32), x1 ↔ x2 (φ12); disturbances ζ1, ζ2, ζ3 with paths to y1, y2, and y3 fixed at 1.]
(The notes for this diagram are the same as those for slide 28.)
38
Specification with Equations
[Path diagram repeated (see slide 38).]
39
Specification with Equations
[Path diagram repeated (see slide 38).]
40
Specification with Equations
[Path diagram repeated (see slide 38).]
41
Specification with Equations
[Path diagram repeated (see slide 38).]
And y3: y3 = ?
42
Specification with Equations
y1 = b12(y2) + b13(y3) + g11(x1) + ζ1
y2 = g21(x1) + ζ2
y3 = b32(y2) + g32(x2) + ζ3
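As a preview of the Mplus syntax introduced later in this class, the same three equations could be written in a MODEL statement roughly as follows (a sketch; it assumes the variables are named x1, x2, y1, y2, and y3 in the dataset, and the disturbance terms do not need to be written out):
MODEL:
y1 on y2 y3 x1;    ! y1 = b12(y2) + b13(y3) + g11(x1) + zeta1
y2 on x1;          ! y2 = g21(x1) + zeta2
y3 on y2 x2;       ! y3 = b32(y2) + g32(x2) + zeta3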
43
Path Analysis Equations
• SEM programs usually center variables around their means, making
the intercept of these equations = 0.
• Recent versions of Mplus, however, include intercepts by default.
Intercepts are required in models using variables with missing values.
44
Specification with Equations
y1 = g11(x1) + b12(y2) + ζ1
y2 = g21(x1) + g22(x2) + ζ2
45
Specification with Equations: Summary
• The regression components of path models can be specified with a
set of equations: one equation for each endogenous (or dependent)
variable in the model.
• The Greek letter beta (β) is used for paths between two endogenous
variables and gamma (γ) is used for paths from an exogenous variable
to an endogenous variable.
• The equations end with the term zeta (ζ) signifying the dependent
variable’s disturbance or error, which is unexplained variance.
• SEM equations usually do not include intercepts because programs often mean-center the variables before analysis by default.
46
Model Specification:
Mplus Syntax
47
Specification in Mplus
• Specification in Mplus is very simple. A core section of Mplus syntax
starts with the command “MODEL:”
• Following this command (and colon), users write one line specifying
each outcome variable and its predictors, separated by the word ON
(short for “regressed on”). The line ends with a semi-colon.
48
Mplus Example
MODEL:
y1 on x1 x2 x3;
y2 on x1 y1;
"MODEL:" is the command. The second line says: "regress y1 on x1, x2, and x3."
Mplus is not case sensitive. The order in which the y1 and y2 statements are written does not matter. The order in which the independent variables are written also does not matter. The colon after the command and the semicolons at the end of each statement, however, are critical.
49
Practice Specifying a Model in Mplus
[Path diagram for practice: exogenous variables x1 and x2; endogenous variables y1, y2, and y3 with disturbances z1, z2, z3 (paths fixed at 1).]
50
Specification in Mplus
• SEM allows us to have control over more parts of our model than we
have in traditional OLS regression.
• We can specify that exogenous predictors are correlated with each
other, or that the disturbances of dependent variables are correlated.
51
Path Model with Correlated Xs
[Path diagram: x1 → y1 and y2; x2 → y3; y2 → y1 and y3; y3 → y1; x1 ↔ x2; disturbances z1, z2, z3.]
52
Path Model with Correlated Xs
[Path diagram: x1 → y1 and y2; x2 → y3; y2 → y1 and y3; y3 → y1; x1 ↔ x2; disturbances z1, z2, z3.]
MODEL:
y1 on x1 y2 y3;
y2 on x1;
y3 on x2 y2;
x1 with x2;
It could also be written "x2 with x1" because correlational relationships have no direction.
53
Path Model with Correlated Disturbances
[Path diagram: x1 → y1; x2 → y2; y1 → y3; y2 → y3; disturbances z1, z2, z3, with the disturbances of y1 and y3 correlated.]
MODEL:
y1 on x1;
y2 on x2;
y3 on y1 y2;
y1 with y3;
54
Practice Specifying a Model in Mplus
[Path diagram for practice: exogenous variables x1 and x2; endogenous variables y1, y2, and y3 with disturbances z1, z2, z3.]
55
Specification in Mplus: Summary
• The relationships among variables in a path model are specified with
simple phrases under the MODEL section of commands (or in the
MODEL statement).
• Statements start with the name of the dependent variable followed
by the word ON and the names of observed variables that predict it.
• Correlations among exogenous variables or disturbances of
endogenous variables are specified with the word WITH.
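For orientation, the MODEL statement sits inside a complete input file alongside other commands. A minimal sketch, assuming a hypothetical data file and variable names:
DATA:     FILE = "pathdata.dat";     ! hypothetical file name
VARIABLE: NAMES = x1 x2 y1 y2 y3;    ! hypothetical variable names
MODEL:
  y1 on x1 y2 y3;
  y2 on x1;
  y3 on x2 y2;
  x1 with x2;                        ! correlate the exogenous variables
OUTPUT:   STDYX;                     ! request standardized estimates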
56
Direct and Indirect Effects
57
Reminder: Path Analysis and Causal Inference
• Path modeling may be referred to as causal modeling. And in this discussion, we talk about the "effects" of one variable on another, BUT…
• To infer causality (i.e., that one construct causes another), the 3
criteria for causality must be met:
• the cause precedes the effect
• the cause and effect are correlated
• other possible explanations can be ruled out
58
Effect Decomposition
The path diagram below includes:
• A direct effect of X1 on Y1, and
• An indirect effect of X1 on Y1 through Y2
[Path diagram: x1 → y1 (γ11), x1 → y2 (γ21), y2 → y1 (β12); disturbances ζ1 and ζ2.]
59
Effect Decomposition
The term “effect decomposition” refers to this
process of breaking down (decomposing) the
effects of one variable on another into its direct
and indirect components.
[Path diagram: x1 → y1 (γ11), x1 → y2 (γ21), y2 → y1 (β12); disturbances ζ1 and ζ2.]
60
Effect Decomposition
• Direct effect of X1 on Y1 = γ11
• Indirect effect of X1 on Y1 (through Y2) = γ21 × β12
[Path diagram: x1 → y1 (γ11), x1 → y2 (γ21), y2 → y1 (β12); disturbances ζ1 and ζ2.]
61
Leveraging Mplus Syntax
When the BOOTSTRAP option is used alone, bootstrap standard errors of the model
parameter estimates are obtained. When the BOOTSTRAP option is used in
conjunction with the CINTERVAL(BOOTSTRAP) option of the OUTPUT command,
bootstrap standard errors of the model parameter estimates and non-symmetric
bootstrap confidence intervals for the model parameter estimates are obtained. The
BOOTSTRAP option can be used in conjunction with the MODEL INDIRECT command
to obtain bootstrap standard errors for indirect effects. When both MODEL INDIRECT
and CINTERVAL(BOOTSTRAP) are used, bootstrapped standard errors and bootstrap
confidence intervals are obtained for the indirect effects. By selecting
BOOTSTRAP=1000, bootstrapped standard errors will be computed using 1000 draws.
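A sketch of how these options fit together in a single input, using a hypothetical mediation model in which x1 predicts y2 and y2 predicts y1 (the variable names are placeholders, not from the course data):
ANALYSIS: BOOTSTRAP = 1000;          ! 1000 bootstrap draws
MODEL:
  y2 on x1;                          ! the a path
  y1 on y2 x1;                       ! the b path and the direct effect
MODEL INDIRECT:
  y1 IND x1;                         ! indirect effect(s) of x1 on y1
OUTPUT: CINTERVAL(BOOTSTRAP);        ! bootstrap confidence intervals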
62
Chi-square = 2.763 (3 df)
p = .430
[Path diagram with standardized estimates: GPA, height, weight, and rating predicting academic and attract (errors error1 and error2).]
63
[Path diagram with standardized estimates: GPA, height, weight, and rating predicting academic and attract (errors error1 and error2).]
Calculate the effects on Attract
             Direct Effect    Indirect Effect       Total Effect
GPA          .02              (.50*.47) = .235      .255
Height
Weight
Rating
Academic
64
Effect Decomposition
• It is also possible to determine whether an indirect path is statistically significant.
• Use the following formula to calculate the standard error of an indirect effect (Sobel, 1986):
SEab = √( b²(SEa)² + a²(SEb)² )
[Path diagram: x1 → y2 (path a), y2 → y1 (path b), x1 → y1 (γ11).]
65
Effect Decomposition
• Why calculate the Standard Error? Because in conjunction with the parameter estimate, it tells you if a path is significant.
• Is X1's indirect effect on Y1 significant? The parameter divided by its SE tells you.
[Path diagram: x1 → y2 (path a), y2 → y1 (path b), x1 → y1 (γ11).]
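Besides the hand calculation on the next slide, one option in Mplus is to label the a and b paths and define ab in MODEL CONSTRAINT; the output then reports the estimate of ab, its standard error, and their ratio (equivalent in spirit to the Sobel test). A sketch with hypothetical variable names:
MODEL:
  y2 on x1 (a);      ! label the a path
  y1 on y2 (b);      ! label the b path
  y1 on x1;          ! direct effect
MODEL CONSTRAINT:
  NEW(ab);
  ab = a*b;          ! the indirect effect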
66
[Path diagram: paeduc and maeduc predicting educ; educ and prestg80 predicting tax and each other; disturbances zeta1, zeta2, zeta3.]
                        Estimate    S.E.     C.R.      P
educ     <--- paeduc       .128     .030    4.256    ***
educ     <--- maeduc       .183     .037    4.927    ***
tax      <--- educ         .022     .009    2.369    .018
tax      <--- prestg80    -.002     .002   -1.240    .215
prestg80 <--- educ        1.433     .435    3.291    ***
educ     <--- prestg80     .052     .016    3.190    .001
Indirect effect of educ on tax through prestg80: a = 1.433 (SE = .435), b = -.002 (SE = .002), so ab = 1.433 × (-.002) = -.002866.
SEab = √( b²(SEa)² + a²(SEb)² ) = 0.002995
ab / SEab = (-.002866) / .002995 ≈ -0.96
The ab path is not significant at .05! To be statistically significant, the parameter/SE (critical ratio, or C.R.) must be greater than 1.96 in absolute value.
67
Percent of Variance Explained
[Path diagram: paeduc and maeduc predicting educ; educ and prestg80 predicting tax and each other; disturbances zeta1, zeta2, zeta3.]
68
Percent of Variance Explained
• SEM output provides us with values for residual variances in
endogenous variables. The unstandardized residual variances are the
amount of variance out of a variable’s original raw-metric variance that
is unexplained.
• If you divide the unstandardized residual variance by the original
variance, you obtain the percentage of the original variance that is
unexplained. If the variance of the variable “tax” in your dataset is .84
and zeta3 = .42, for example, 50% of the variance in tax is unexplained.
This value (.50) is also what you’ll see in the standardized output under
residual variance. Mplus will also report R² for each endogenous variable, which equals 1 minus the standardized residual variance.
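Written out, the worked example in the bullet above is:
\frac{\text{zeta3}}{\operatorname{Var}(\text{tax})} = \frac{.42}{.84} = .50 \quad \text{(unexplained)}, \qquad R^2 = 1 - .50 = .50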
69
Calculating R2
• If an endogenous variable has one predictor, its R-squared = the
square of beta (standardized path). If it has multiple UNCORRELATED
predictors and no indirect paths, its Squared Multiple Correlation
(SMC) equals the sum of the squared direct paths leading to it.
• If an endogenous variable is predicted by direct and indirect paths
from the same variable, or by correlated predictors, we cannot
calculate its SMC by squaring paths.
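For example (hypothetical standardized paths, not taken from the course data), an endogenous variable with two uncorrelated predictors whose direct paths are .50 and .30 would have:
R^2 = (.50)^2 + (.30)^2 = .25 + .09 = .34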
70
Caution
• Be sure to note the distinction between calculating effects and writing
the equation for an endogenous variable. Calculating indirect effects
involves looking at paths in a model that are not in the equation for
an endogenous variable. In the path diagram below, the indirect
effect of x1 on y1 is ab, but the equation for y1 is: y1 = c(x1) + b(y2).
[Path diagram: x1 → y1 (path c), x1 → y2 (path a), y2 → y1 (path b).]
71
Practice with Effects
• Answer the questions on the following slides. Check your answers on
the final slide.
72
Chi-square = 2.763 (3 df)
p = .430
[Path diagram with standardized estimates: GPA, height, weight, and rating predicting academic and attract (errors error1 and error2). Repeated from slide 63 for the practice questions.]
73
[Path diagram with standardized estimates relating knowledge, value, satisfaction, and performance (errors e1, e2, and error).]
74
[Path diagram with standardized estimates relating knowledge, value, satisfaction, and performance (errors e1, e2, and error).]
75
[Path diagram with standardized estimates relating knowledge, value, satisfaction, and performance (errors e1, e2, and error).]
76
[Path diagram with standardized estimates relating knowledge, value, satisfaction, and performance (errors e1, e2, and error).]
Worked values shown: .41, .1225, -.0021, .5304 (see answer 6).
77
[Path diagram with standardized estimates relating knowledge, value, satisfaction, and performance (errors e1, e2, and error).]
78
[Path diagram with standardized estimates relating knowledge, value, satisfaction, and performance (errors e1, e2, and error).]
79
Answers
1. .50*.50 = .25
2. Because it is predicted by both direct and indirect effects.
3. .41
4. False. (It is endogenous because it is predicted by knowledge.)
5. True
6. Sum the direct and indirect effects of the variables that predict it:
   .41 + (.35*.35) + (.35*-.06*.10) = .41 + .1225 - .0021 = .5304
   Note that negative products are subtracted in the calculation of total effects.
7. (-.06)² = .0036
8. (.35)² = .1225
80
Homework 1
81