Table Stat Test Example SPSS
| Number of Dependent Variables | Nature of Independent Variables | Nature of Dependent Variable(s) | Test(s) |
|---|---|---|---|
| 1 | 0 IVs (1 population) | interval & normal | one-sample t-test |
| 1 | 0 IVs (1 population) | ordinal or interval | one-sample median |
| 1 | 0 IVs (1 population) | categorical (2 categories) | binomial test |
| 1 | 0 IVs (1 population) | categorical | Chi-square goodness-of-fit |
| 1 | 1 IV with 2 levels (independent groups) | interval & normal | 2 independent sample t-test |
| 1 | 1 IV with 2 levels (independent groups) | ordinal or interval | Wilcoxon-Mann-Whitney test |
| 1 | 1 IV with 2 levels (independent groups) | categorical | Chi-square test / Fisher's exact test |
| 1 | 1 IV with 2 or more levels (independent groups) | interval & normal | one-way ANOVA |
| 1 | 1 IV with 2 or more levels (independent groups) | ordinal or interval | Kruskal Wallis |
| 1 | 1 IV with 2 or more levels (independent groups) | categorical | Chi-square test |
| 1 | 1 IV with 2 levels (dependent/matched groups) | interval & normal | paired t-test |
| 1 | 1 IV with 2 levels (dependent/matched groups) | ordinal or interval | Wilcoxon signed ranks test |
| 1 | 1 IV with 2 levels (dependent/matched groups) | categorical | McNemar |
| 1 | 1 IV with 2 or more levels (dependent/matched groups) | interval & normal | one-way repeated measures ANOVA |
| 1 | 1 IV with 2 or more levels (dependent/matched groups) | ordinal or interval | Friedman test |
| 1 | 1 IV with 2 or more levels (dependent/matched groups) | categorical (2 categories) | repeated measures logistic regression |
| 1 | 2 or more IVs (independent groups) | interval & normal | factorial ANOVA |
| 1 | 2 or more IVs (independent groups) | ordinal or interval | ordered logistic regression |
| 1 | 2 or more IVs (independent groups) | categorical (2 categories) | factorial logistic regression |
| 1 | 1 interval IV | interval & normal | correlation |
| 1 | 1 interval IV | interval & normal | simple linear regression |
Source:
Introduction to SAS. UCLA: Statistical Consulting Group (2016). From https://stats.idre.ucla.edu/sas/modules/sas-learning-module-introduction-to-the-features-of-sas/ (accessed Sept. 2, 2018).
Introduction
This document shows how to perform a number of statistical tests using SPSS. Each section gives
a brief description of the aim of the statistical test, when it is used, and an example showing the
SPSS commands and (often abbreviated) SPSS output, with a brief interpretation of the output. You
can see the page Choosing the Correct Statistical Test for a table that gives an overview of when each
test is appropriate to use. In deciding which test is appropriate, it is important to consider the type
of variables that you have (i.e., whether your variables are categorical, ordinal or interval, and
whether they are normally distributed); see What is the difference between categorical, ordinal and
interval variables? for more information on this.
Most of the examples on this page use a data file called hsb2 (high school and beyond). This
data file contains 200 observations from a sample of high school students with demographic
information about the students, such as their gender (female), socio-economic status (ses) and
ethnic background (race). It also contains a number of scores on standardized tests, including tests
of reading (read), writing (write), mathematics (math) and social studies (socst). You can get the
hsb2 data file by clicking on hsb2.
One sample t-test
A one sample t-test allows us to test whether a sample mean (of a normally distributed interval
variable) significantly differs from a hypothesized value. For example, using the hsb2 data file, say
we wish to test whether the average writing score (write) differs significantly from 50. We can do
this as shown below.
t-test
/testval = 50
/variable = write.
One sample median test
A one sample median test allows us to test whether a sample median differs significantly from a
hypothesized value. We will use the same variable, write, as we did in the one sample t-
test example above, but we do not need to assume that it is interval and normally distributed (we
only need to assume that write is an ordinal variable).
nptests
/onesample test (write) wilcoxon(testvalue = 50).
Binomial test
A one sample binomial test allows us to test whether the proportion of successes on a two-level
categorical dependent variable significantly differs from a hypothesized value. For example, using
the hsb2 data file, say we wish to test whether the proportion of females (female) differs
significantly from 50%, i.e., from .5. We can do this as shown below.
npar tests
/binomial (.5) = female.
The results indicate that there is no statistically significant difference (p = .229). In other words,
the proportion of females in this sample does not significantly differ from the hypothesized value
of 50%.
Chi-square goodness of fit
A chi-square goodness of fit test allows us to test whether the observed proportions for a
categorical variable differ from hypothesized proportions. For example, let’s suppose that we
believe that the general population consists of 10% Hispanic, 10% Asian, 10% African American
and 70% White folks. We want to test whether the observed proportions from our sample differ
significantly from these hypothesized proportions.
npar test
/chisquare = race
/expected = 10 10 10 70.
Two independent samples t-test
An independent samples t-test is used when you want to compare the means of a normally
distributed interval dependent variable for two independent groups. For example, using the hsb2
data file, say we wish to test whether the mean for write is the same for males and females.
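A minimal version of the syntax (assuming female is coded 0 and 1, as in the hsb2 data file) would be:

* female is assumed to be coded 0 (male) and 1 (female).
t-test groups = female(0 1)
/variables = write.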
Because the standard deviations for the two groups are similar (10.3 and 8.1), we will use the
“equal variances assumed” test. The results indicate that there is a statistically significant
difference between the mean writing score for males and females (t = -3.734, p = .000). In other
words, females have a statistically significantly higher mean score on writing (54.99) than males
(50.12).
Wilcoxon-Mann-Whitney test
The Wilcoxon-Mann-Whitney test is a non-parametric analog to the independent samples t-test and
can be used when you do not assume that the dependent variable is a normally distributed interval
variable (you only assume that the variable is at least ordinal). We will use the same data file and
variables as in the independent samples t-test example above, but we will not assume that write is
normally distributed.
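The syntax would look something like this (again with female, coded 0 and 1, as the grouping variable):

npar test
/m-w = write by female(0 1).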
The results suggest that there is a statistically significant difference between the underlying
distributions of the write scores of males and the write scores of females (z = -3.329, p = 0.001).
See also
FAQ: Why is the Mann-Whitney significant when the medians are equal?
Chi-square test
A chi-square test is used when you want to see if there is a relationship between two categorical
variables. In SPSS, the chisq option is used on the statistics subcommand of
the crosstabs command to obtain the test statistic and its associated p-value. Using the hsb2 data
file, let’s see if there is a relationship between the type of school attended (schtyp) and students’
gender (female). Remember that the chi-square test assumes that the expected value for each
cell is five or higher. This assumption is easily met in the examples below. However, if this
assumption is not met in your data, please see the section on Fisher’s exact test below.
crosstabs
/tables = schtyp by female
/statistic = chisq.
Let’s look at another example, this time looking at the relationship between gender (female)
and socio-economic status (ses). The point of this example is that one (or both) variables may
have more than two levels, and that the variables do not have to have the same number of levels.
In this example, female has two levels (male and female) and ses has three levels (low, medium
and high).
crosstabs
/tables = female by ses
/statistic = chisq.
Fisher’s exact test
The Fisher’s exact test is used when you want to conduct a chi-square test but one or more of your
cells has an expected frequency of five or less. Remember that the chi-square test assumes that
each cell has an expected frequency of five or more, but the Fisher’s exact test has no such
assumption and can be used regardless of how small the expected frequency is. In SPSS, unless you
have the SPSS Exact Test Module, you can only perform a Fisher’s exact test on a 2×2 table, and
these results are presented by default. Please see the results from the chi-square example above.
One-way ANOVA
A one-way analysis of variance (ANOVA) is used when you have a categorical independent variable
(with two or more categories) and a normally distributed interval dependent variable and you
wish to test for differences in the means of the dependent variable broken down by the levels of
the independent variable. For example, using the hsb2 data file, say we wish to test whether the
mean of write differs between the three program types (prog). The command for this test would
be:
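oneway write by prog.

(This is the minimal form of the command; a subcommand such as /statistics = descriptives can be added to obtain the group means referred to below.)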
From this we can see that the students in the academic program have the highest mean writing
score, while students in the vocational program have the lowest.
Kruskal Wallis test
The Kruskal Wallis test is used when you have one independent variable with two or more levels
and an ordinal dependent variable. In other words, it is the non-parametric version of ANOVA and
a generalized form of the Mann-Whitney test, since it permits two or more groups. We will use the
same data file as in the one-way ANOVA example above (the hsb2 data file) and the same variables,
but we will not assume that write is a normally distributed interval variable.
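The syntax, assuming prog is coded 1 through 3, would be along these lines:

npar tests
/k-w = write by prog (1,3).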
If some of the scores receive tied ranks, then a correction factor is used, yielding a slightly
different value of chi-squared. With or without ties, the results indicate that there is a statistically
significant difference among the three types of programs.
Paired t-test
A paired (samples) t-test is used when you have two related observations (i.e., two observations
per subject) and you want to see if the means on these two normally distributed interval variables
differ from one another. For example, using the hsb2 data file we will test whether the mean
of read is equal to the mean of write.
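We can do this with syntax of the following form:

t-test pairs = read with write (paired).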
Wilcoxon signed rank sum test
The Wilcoxon signed rank sum test is the non-parametric version of a paired samples t-test. You
use the Wilcoxon signed rank sum test when you do not wish to assume that the difference
between the two variables is interval and normally distributed (but you do assume the difference
is ordinal). We will use the same example as above, but we will not assume that the difference
between read and write is interval and normally distributed.
npar test
/wilcoxon = write with read (paired).
The results suggest that there is not a statistically significant difference between read and write.
Sign test
If you believe the differences between read and write were not ordinal but could merely be
classified as positive and negative, then you may want to consider a sign test in lieu of the signed
rank test. Again, we will use the same variables in this example and assume that this difference is
not ordinal.
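The corresponding syntax would be something like:

npar test
/sign = read with write (paired).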
McNemar test
You would perform McNemar’s test if you were interested in the marginal frequencies of two
binary outcomes. These binary outcomes may be the same outcome variable on matched pairs
(like a case-control study) or two outcome variables from a single group. Continuing with the hsb2
dataset used in several above examples, let us create two binary outcomes in our
dataset: himath and hiread. These outcomes can be considered in a two-way contingency table.
The null hypothesis is that the proportion of students in the himath group is the same as the
proportion of students in the hiread group (i.e., that the contingency table is symmetric).
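The two binary variables can be created along these lines (the cut-point of 60 is arbitrary and chosen purely for illustration):

* flag scores above 60 as "high"; pick cut-points that suit your research question.
compute himath = (math > 60).
compute hiread = (read > 60).
execute.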
crosstabs
/tables=himath BY hiread
/statistic=mcnemar
/cells=count.
One-way repeated measures ANOVA
You would perform a one-way repeated measures analysis of variance if you had one categorical
independent variable and a normally distributed interval dependent variable that was repeated at
least twice for each subject. This is the equivalent of the paired samples t-test, but allows for two
or more levels of the categorical variable. This tests whether the mean of the dependent variable
differs by the categorical variable. We have an example data set called rb4wide, which is used in
Kirk’s book Experimental Design. In this data set, y is the dependent variable, a is the repeated
measure and s is the variable that indicates the subject number.
glm y1 y2 y3 y4
/wsfactor a(4).
Repeated measures logistic regression
If you have a binary outcome measured repeatedly for each subject and you wish to run a logistic
regression that accounts for the effect of multiple measures from single subjects, you can perform
a repeated measures logistic regression. In SPSS, this can be done using the GENLIN command
and indicating binomial as the probability distribution and logit as the link function to be used in
the model. The exercise data file contains 3 pulse measurements from each of 30 people assigned
to 2 different diet regimens and 3 different exercise regimens. If we define a “high” pulse as
being over 100, we can then predict the probability of a high pulse using diet regimen.
* the exercise data file is available from https://stats.idre.ucla.edu/wp-content/uploads/2016/02/exercise.sav.
GET FILE='C:\mydata\exercise.sav'.
GENLIN highpulse (REFERENCE=LAST)
BY diet (order = DESCENDING)
/MODEL diet
DISTRIBUTION=BINOMIAL
LINK=LOGIT
/REPEATED SUBJECT=id CORRTYPE = EXCHANGEABLE.
Factorial ANOVA
A factorial ANOVA has two or more categorical independent variables (either with or without the
interactions) and a single normally distributed interval dependent variable. For example, using
the hsb2 data file we will look at writing scores (write) as the dependent variable and gender
(female) and socio-economic status (ses) as independent variables, and we will include an
interaction of female by ses. Note that in SPSS, you do not need to have the interaction term(s) in
your data set. Rather, you can have SPSS create it/them temporarily by placing an asterisk
between the variables that will make up the interaction term(s).
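One way to specify this model is shown below; the /design subcommand lists the two main effects and the female*ses interaction (if /design is omitted, GLM fits the full factorial model by default).

glm write by female ses
/design = female ses female*ses.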
Friedman test
You perform a Friedman test when you have one within-subjects independent variable with two
or more levels and a dependent variable that is not interval and normally distributed (but at least
ordinal). We will use this test to determine if there is a difference in the reading, writing and math
scores. The null hypothesis in this test is that the distributions of the ranks of each type of score
(i.e., reading, writing and math) are the same. To conduct a Friedman test, the data need to be in
a long format. SPSS handles this for you, but in other statistical packages you will have to reshape
the data before you can conduct this test.
npar tests
/friedman = read write math.
Ordered logistic regression
Ordered logistic regression is used when the dependent variable is ordered, but not continuous.
For example, using the hsb2 data file we will create an ordered variable called write3. This
variable will have the values 1, 2 and 3, indicating a low, medium or high writing score. We do not
generally recommend categorizing a continuous variable in this way; we are simply creating a
variable to use for this example. We will use gender (female), reading score (read) and social
studies score (socst) as predictor variables in this model. We will use a logit link and on
the print subcommand we have requested the parameter estimates, the (model) summary
statistics and the test of the parallel lines assumption.
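A sketch of the syntax, assuming the recoded variable write3 has already been created, with female, read and socst entered as covariates:

* write3 (1 = low, 2 = medium, 3 = high) must exist before this command is run.
plum write3 with female read socst
/link = logit
/print = parameter summary tparallel.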
Factorial logistic regression
A factorial logistic regression is used when you have two or more categorical independent
variables but a dichotomous dependent variable. For example, using the hsb2 data file we will
use female as our dependent variable, because it is the only dichotomous variable in our data set;
certainly not because it is common practice to use gender as an outcome variable. We will use type
of program (prog) and school type (schtyp) as our predictor variables. Because prog is a
categorical variable (it has three levels), we need to create dummy codes for it. SPSS will do this
for you by making dummy codes for all variables listed after the keyword with. SPSS will also
create the interaction term; simply list the two variables that will make up the interaction
separated by the keyword by.
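One plausible way to set this up (the exact subcommands may differ slightly across SPSS versions) is:

* prog is declared categorical with indicator (dummy) coding; prog by schtyp is the interaction term.
logistic regression female with prog schtyp prog by schtyp
/categorical = prog
/contrast(prog) = indicator.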
Correlation
A correlation is useful when you want to see the relationship between two (or more) normally
distributed interval variables. For example, using the hsb2 data file we can run a correlation
between two continuous variables, read and write.
correlations
/variables = read write.
In the second example, we will run a correlation between a dichotomous variable, female, and a
continuous variable, write. Although it is assumed that the variables are interval and normally
distributed, we can include dummy variables when performing correlations.
correlations
/variables = female write.
In the first example above, we see that the correlation between read and write is 0.597. By
squaring the correlation and then multiplying by 100, you can determine what percentage of the
variability is shared. Let’s round 0.597 to be 0.6, which when squared would be .36, multiplied by
100 would be 36%. Hence read shares about 36% of its variability with write. In the output for
the second example, we can see the correlation between write and female is 0.256. Squaring this
number yields .065536, meaning that female shares approximately 6.5% of its variability
with write.
Simple linear regression
Simple linear regression allows us to look at the linear relationship between one normally
distributed interval predictor and one normally distributed interval outcome variable. For
example, using the hsb2 data file, say we wish to look at the relationship between writing scores
(write) and reading scores (read); in other words, predicting write from read.
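The syntax would look something like this:

regression
/variables = write read
/dependent = write
/method = enter.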
We see that the relationship between write and read is positive (.552) and based on the t-value
(10.47) and p-value (0.000), we would conclude this relationship is statistically significant. Hence,
there is a statistically significant positive linear relationship between reading and writing.
Non-parametric correlation
A Spearman correlation is used when one or both of the variables are not assumed to be normally
distributed and interval (but are assumed to be ordinal). The values of the variables are converted
into ranks and then correlated. In our example, we will look for a relationship
between read and write. We will not assume that both of these variables are normal and interval.
nonpar corr
/variables = read write
/print = spearman.
Simple logistic regression
Logistic regression assumes that the outcome variable is binary (i.e., coded as 0 and 1). We have
only one variable in the hsb2 data file that is coded 0 and 1, and that is female. We understand
that female is a silly outcome variable (it would make more sense to use it as a predictor variable),
but we can use female as the outcome variable to illustrate how the code for this command is
structured and how to interpret the output. The first variable listed after the logistic command is
the outcome (or dependent) variable, and all of the rest of the variables are predictor (or
independent) variables. In our example, female will be the outcome variable, and read will be the
predictor variable. As with OLS regression, the predictor variables must be either dichotomous or
continuous; they cannot be categorical.
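The syntax for this example can be written as:

logistic regression female with read.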
The results indicate that reading score (read) is not a statistically significant predictor of gender
(i.e., being female), Wald = .562, p = 0.453. Likewise, the test of the overall model is not
statistically significant, LR chi-squared = 0.56, p = 0.453.
Multiple regression
Multiple regression is very similar to simple regression, except that in multiple regression you have
more than one predictor variable in the equation. For example, using the hsb2 data file we will
predict writing score from gender (female), reading, math, science and social studies (socst)
scores.
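The syntax follows the same pattern as the simple regression above:

regression
/variables = write female read math science socst
/dependent = write
/method = enter.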
The results indicate that the overall model is statistically significant (F = 58.60, p = 0.000).
Furthermore, all of the predictor variables are statistically significant except for read.
Analysis of covariance
Analysis of covariance is like ANOVA, except that in addition to the categorical predictors you also
have continuous predictors. For example, the one-way ANOVA example used write as the
dependent variable and prog as the independent variable. Let’s add read as a continuous variable
to this model, as shown below.
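One way to specify this model with the glm command (prog as the factor, read as the covariate) is:

glm write by prog with read.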
The results indicate that even after adjusting for reading score (read), writing scores still
significantly differ by program type (prog), F = 5.867, p = 0.003.
Multiple logistic regression
Multiple logistic regression is like simple logistic regression, except that there are two or more
predictors. The predictors can be interval variables or dummy variables, but cannot be categorical
variables. If you have categorical predictors, they should be coded into one or more dummy
variables. We have only one variable in our data set that is coded 0 and 1, and that is female. We
understand that female is a silly outcome variable (it would make more sense to use it as a
predictor variable), but we can use female as the outcome variable to illustrate how the code for
this command is structured and how to interpret the output. The first variable listed after
the logistic regression command is the outcome (or dependent) variable, and all of the rest of the
variables are predictor (or independent) variables.
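Assuming, for illustration, that read and write are the predictor variables, the syntax would be:

* read and write serve purely as illustrative predictors here.
logistic regression female with read write.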
Discriminant analysis
Discriminant analysis is used when you have one or more normally distributed interval
independent variables and a categorical dependent variable. It is a multivariate technique that
considers the latent dimensions in the independent variables for predicting group membership in
the categorical dependent variable. For example, using the hsb2 data file, say we wish to
use read, write and math scores to predict the type of program a student belongs to (prog).
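The syntax would be along these lines (assuming prog is coded 1 through 3):

discriminant
/groups = prog(1, 3)
/variables = read write math.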
One-way MANOVA
MANOVA (multivariate analysis of variance) is like ANOVA, except that there are two or more
dependent variables. In a one-way MANOVA, there is one categorical independent variable and
two or more dependent variables. For example, using the hsb2 data file, say we wish to examine
the differences in read, write and math broken down by program type (prog).
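The syntax is along these lines:

glm read write math by prog.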
Multivariate multiple regression
Multivariate multiple regression is used when you have two or more dependent variables that are
to be predicted from two or more independent variables. In our example, we will
predict write and read from female, math, science and social studies (socst) scores.
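In SPSS syntax this can be written as:

glm write read with female math science socst.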
Canonical correlation
Canonical correlation is a multivariate technique used to examine the relationship between two
groups of variables. For each set of variables, it creates latent variables and looks at the
relationships among the latent variables. It assumes that all variables in the model are interval and
normally distributed. SPSS requires that each of the two groups of variables be separated by the
keyword with. There need not be an equal number of variables in the two groups (before and
after the with).
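A sketch of the syntax, using the older manova command (the discrim subcommand requests the raw and standardized canonical coefficients and the correlations shown in the abbreviated output below; exact keywords may vary by SPSS version):

manova read write with math science
/discrim raw stan estim cor alpha(1).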
Abbreviated output from the manova command:

EFFECT .. WITHIN CELLS Regression
Univariate F-tests with (2,197) D. F.
Variable    Sig. of F
READ        .000
WRITE       .000

Raw canonical coefficients for DEPENDENT variables (Function No. 1)
READ        .063
WRITE       .049

Standardized canonical coefficients for DEPENDENT variables (Function No. 1)
READ        .927
WRITE       .854

Raw canonical coefficients for COVARIATES (Function No. 1)
MATH        .067
SCIENCE     .048

Correlations between COVARIATES and canonical variables (Function No. 1)
MATH        .929
SCIENCE     .873

EFFECT .. CONSTANT
Multivariate Tests of Significance (S = 1, M = 0, N = 97)
Univariate F-tests with (1,197) D. F.
READ        .041
WRITE       .124

Standardized discriminant function coefficients (Function No. 1)
READ        .293
WRITE       .889

Estimates of effects for canonical variables
Canonical Variable 1, Parameter 1:  2.196

READ        .504
WRITE       .959
Factor analysis
Factor analysis is a form of exploratory multivariate analysis that is used to either reduce the
number of variables in a model or to detect relationships among variables. All variables involved
in the factor analysis need to be interval and are assumed to be normally distributed. The goal of
the analysis is to try to identify factors which underlie the variables. There may be fewer factors
than variables, but there may not be more factors than variables. For our example, let’s suppose
that we think that there are some common factors underlying the various test scores. We will
include subcommands for varimax rotation and a plot of the eigenvalues. We will use a principal
components extraction and will retain two factors. (Using these options will make our results
compatible with those from SAS and Stata and are not necessarily the options that you will want
to use.)
factor
/variables read write math science socst
/criteria factors(2)
/extraction pc
/rotation varimax
/plot eigen.