
Psych Stats

The document compares parametric and non-parametric statistics, highlighting that parametric tests assume a normal distribution and require homogeneity of variance, while non-parametric tests do not assume a specific distribution and are used when homogeneity is violated. Examples of parametric tests include T-tests and ANOVA, whereas non-parametric tests include Chi-Square tests, Mann-Whitney U tests, and Wilcoxon Signed-Rank tests. The document provides formulas and explanations for various statistical tests used in research.


PARAMETRIC VS NON-PARAMETRIC

​ Parametric statistics assume an underlying distribution that is normal. They require data at the interval or ratio level (Hodge & Gillespie, 2005). Additionally, parametric statistics require homogeneity of variance, which is also referred to as homoscedasticity. According to the American Psychological Association, this means that the average squared distance, or spread, of scores from the mean is the same across all groups compared in the study. In t-tests and ANOVA, we assume that each group has equal variances; otherwise the results can be misleading. For example, when all groups have a similar spread of scores, whether the scores are high or low, we can say that there is homogeneity of variance.

Here are some examples of parametric statistics:


●​ T-Test (Independent or Paired) - This is used to determine whether the difference between two mean scores is statistically significant (Williamson & Bow, 2002).

(Paired T-test formula)

t = X̄d / (sd / √n)

Where:
●​ t = Student's t statistic
●​ X̄d = mean of the pairwise differences
●​ sd = standard deviation of the differences
●​ n = number of pairs

(Independent T-test formula)

t = (X̄1 − X̄2) / √(s1²/n1 + s2²/n2)

Where:
●​ t = Student's t statistic
●​ X̄1 = mean of the 1st group
●​ X̄2 = mean of the 2nd group
●​ s1 = standard deviation of group 1
●​ s2 = standard deviation of group 2
●​ n1 = no. of observations in group 1
●​ n2 = no. of observations in group 2

(One-sample T-test formula)

t = (X̄ − µ) / (σ/√n)

Where:
●​ X̄ = mean of the sample
●​ µ = the assumed (hypothesized) mean
●​ σ = the standard deviation
●​ n = the no. of observations
●​ σ/√n = the standard error
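As an illustration (my own sketch, not part of the original notes), the paired and independent t statistics can be computed directly from their formulas in plain Python; the function names here are made up for the example:

```python
import math

def paired_t(before, after):
    """Paired t statistic: t = X̄d / (sd / √n), computed on the pairwise differences."""
    d = [a - b for a, b in zip(after, before)]
    n = len(d)
    mean_d = sum(d) / n
    s_d = math.sqrt(sum((v - mean_d) ** 2 for v in d) / (n - 1))  # sd of differences
    return mean_d / (s_d / math.sqrt(n))

def independent_t(x, y):
    """Independent t statistic: t = (X̄1 - X̄2) / √(s1²/n1 + s2²/n2)."""
    n1, n2 = len(x), len(y)
    m1, m2 = sum(x) / n1, sum(y) / n2
    v1 = sum((v - m1) ** 2 for v in x) / (n1 - 1)  # sample variance of group 1
    v2 = sum((v - m2) ** 2 for v in y) / (n2 - 1)  # sample variance of group 2
    return (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)
```

Note that this independent-samples denominator uses unpooled variances (the Welch form); `scipy.stats.ttest_ind` pools the variances by default and matches this form only with `equal_var=False`. In practice `scipy.stats.ttest_rel` and `scipy.stats.ttest_ind` also return the p-value.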

●​ Pearson R - This compares two variables, typically quantifying the degree of association between them (Williamson & Bow, 2002).

(Pearson R Correlation Formula)

r = Σ(X − X̄)(Y − Ȳ) / √[Σ(X − X̄)² · Σ(Y − Ȳ)²]
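A minimal sketch (my own, with a made-up function name) of the Pearson correlation computed from deviations about each mean; `scipy.stats.pearsonr` gives the same coefficient plus a p-value:

```python
import math

def pearson_r(x, y):
    """Pearson r: r = Σ(X - X̄)(Y - Ȳ) / √[Σ(X - X̄)² · Σ(Y - Ȳ)²]."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))   # sum of cross-deviations
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den
```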


●​ ANOVA (Analysis of Variance) - This is used to compare the means of two or more groups. It tests whether at least one group mean is statistically significantly different from the others (DATAtab, 2025). One-Way ANOVA tests whether there is a significant difference between the means of two or more groups; it has one independent variable that affects a dependent variable. Two-Way ANOVA is a statistical method used to analyze data with two independent variables (factors) and one dependent variable (the outcome).

Alternatively:
●​ F = MST / MSE
●​ MST = SST / (p − 1)
●​ MSE = SSE / (N − p)
●​ SSE = Σ(n − 1)s²

Where:
●​ F = ANOVA coefficient
●​ MST = mean sum of squares due to treatment (also called MSB, mean sum of squares between groups)
●​ MSE = mean sum of squares due to error (also called MSW, mean sum of squares within groups)
●​ SST = sum of squares due to treatment (also called SSB, sum of squares between groups)
●​ SSE = sum of squares due to error (also called SSW, sum of squares within groups)
●​ p = total number of populations (groups)
●​ N = total number of observations
●​ s = standard deviation of the samples

(ONE-WAY ANOVA FORMULA)


(TWO-WAY ANOVA FORMULA)
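The one-way F ratio can be sketched directly from the between/within decomposition above (an illustrative helper of my own, not from the source); `scipy.stats.f_oneway` performs the same computation and also returns a p-value:

```python
def one_way_f(*groups):
    """One-way ANOVA: F = MST / MSE, with MST = SSB/(p-1) and MSE = SSE/(N-p)."""
    pooled = [v for g in groups for v in g]
    N, p = len(pooled), len(groups)
    grand = sum(pooled) / N
    means = [sum(g) / len(g) for g in groups]
    # between-groups sum of squares (SSB / SST due to treatment)
    ssb = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    # within-groups sum of squares (SSW / SSE due to error)
    sse = sum(sum((v - m) ** 2 for v in g) for g, m in zip(groups, means))
    return (ssb / (p - 1)) / (sse / (N - p))
```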

If one group is mostly similar and the other group has a wide range, then there is a violation of homogeneity. If homogeneity is violated, we can use non-parametric statistics or adjusted versions of parametric tests.

Non-parametric tests are considered distribution-free, meaning they do not assume a specific distribution (Williamson & Bow, 2002). As mentioned above, they are used when the assumptions of parametric tests, such as homogeneity of variance, are violated. According to Faizi & Alvi (2023), these tests allow statistical inference without assuming that the samples drawn from a population are normally distributed. They often use nominal or ordinal data.

Here are some examples of non-parametric tests:


●​ Chi-Square - This examines whether there is a significant association between 2 categorical variables, using nominal or ordinal data (Sharpe, 2015). You basically test whether your data are as expected: you compare the observed values in your data to the expected values to see whether the null hypothesis holds. There are 2 commonly used Chi-square tests:
○​ Chi-square goodness of fit test - According to Agresti (2018), this is where you test whether the observed frequencies match the expected frequencies (e.g., you want to test whether a die shows equal counts across all six faces). Because the data are frequency counts for a single variable, goodness of fit is the appropriate test.
○​ Chi-square test of independence - This is where you test whether two categorical variables are related. For instance, you survey 50 people whose favourite colors are pink, blue, or black, and you want to find out whether preferences differ by gender, i.e., the relationship between gender and color preference.

(Chi-Square Test Formula)

χ² = Σ (O − E)² / E, where O = observed frequency and E = expected frequency.
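A minimal sketch (my own illustration) of the χ² statistic for the die example, summing (O − E)²/E over categories; `scipy.stats.chisquare` does the same for goodness of fit, and `scipy.stats.chi2_contingency` handles the test of independence:

```python
def chi_square(observed, expected):
    """χ² = Σ (O - E)² / E over matching category counts."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Die example: 60 rolls, so a fair die expects 10 counts per face.
stat = chi_square([12, 8, 10, 10, 10, 10], [10] * 6)
```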

●​ Mann-Whitney U Test - This test can be used in place of an independent-samples t-test when your data are ordinal or not normally distributed (Krishan et al., 2023). For instance, you might want to compare stress levels (on a 1-10 or Likert scale) between people who hoard and those who do not. Stress scales are ordinal and may not be normally distributed, so we use this test.

(Mann-Whitney U Test Formula)

U1 = n1n2 + n1(n1 + 1)/2 − R1
U2 = n1n2 + n2(n2 + 1)/2 − R2

Where:
●​ R1 = sum of the ranks for the first sample
●​ n1 = sample size of the first sample
●​ R2 = sum of the ranks for the second sample
●​ n2 = sample size of the second sample
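A rough sketch of the U statistic (my own helper names), ranking the pooled observations with tied values given average ranks and reporting the smaller of U1 and U2; `scipy.stats.mannwhitneyu` is the standard implementation:

```python
def average_ranks(values):
    """1-based ranks; tied values share the mean of their rank positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        for k in range(i, j + 1):       # positions i..j hold tied values
            r[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return r

def mann_whitney_u(x, y):
    """U1 = n1·n2 + n1(n1+1)/2 - R1; return the smaller of U1 and U2."""
    n1, n2 = len(x), len(y)
    r = average_ranks(list(x) + list(y))
    R1 = sum(r[:n1])                    # rank sum of the first sample
    U1 = n1 * n2 + n1 * (n1 + 1) / 2 - R1
    return min(U1, n1 * n2 - U1)
```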

●​ Wilcoxon Signed-Rank Test - This is an alternative to the paired-samples t-test. It is used to assess the difference between data obtained before and after a study, to find out the changes caused by the procedure. You use it on paired data, for example before and after treatment (Pirelli & Fanucci, 2010). A more detailed example: you measure the stress levels of 10 people before and after group therapy, and you want to find out whether the change is statistically significant.

(Wilcoxon Test Formula)

W = min(W⁺, W⁻), the smaller of the sums of ranks of the positive and negative differences.
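A simplified sketch of the signed-rank statistic (my own function; it assumes no tied absolute differences, for brevity): drop zero differences, rank the remaining |differences|, and sum ranks by sign. `scipy.stats.wilcoxon` handles ties and returns a p-value:

```python
def wilcoxon_w(before, after):
    """W = min(W⁺, W⁻): rank |differences|, then sum the ranks by sign."""
    d = [a - b for a, b in zip(after, before) if a != b]    # drop zero differences
    order = sorted(range(len(d)), key=lambda i: abs(d[i]))  # assumes no tied |d|
    rank = {idx: pos + 1 for pos, idx in enumerate(order)}
    w_plus = sum(rank[i] for i, v in enumerate(d) if v > 0)
    w_minus = sum(rank[i] for i, v in enumerate(d) if v < 0)
    return min(w_plus, w_minus)
```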


●​ Kruskal-Wallis H Test - This is an alternative to one-way ANOVA. This tests if three
or more independent groups have different medians. You use this test when you
have one independent variable with 3+ groups and when your dependent variable is
ordinal/continuous but not normally distributed.

(Kruskal-Wallis H Test Formula)

H = [12 / (N(N + 1))] Σ (Rᵢ²/nᵢ) − 3(N + 1)

Where:
●​ N = total number of observations across all groups
●​ Rᵢ = sum of the ranks in group i
●​ nᵢ = number of observations in group i
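The H statistic can be sketched by ranking all observations together and summing Rᵢ²/nᵢ per group (my own helper, assuming no ties for brevity); `scipy.stats.kruskal` applies the tie correction and returns a p-value:

```python
def kruskal_h(*groups):
    """H = [12 / (N(N+1))] Σ Rᵢ²/nᵢ - 3(N+1), ranking the pooled data."""
    pooled = [v for g in groups for v in g]
    N = len(pooled)
    order = sorted(range(N), key=lambda i: pooled[i])  # assumes no ties
    rank = [0] * N
    for pos, i in enumerate(order):
        rank[i] = pos + 1
    total, start = 0.0, 0
    for g in groups:
        R = sum(rank[start:start + len(g)])  # rank sum Rᵢ of this group
        total += R ** 2 / len(g)
        start += len(g)
    return 12 / (N * (N + 1)) * total - 3 * (N + 1)
```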

●​ Friedman Test - According to IBM (2021), this test is an extension of the Wilcoxon signed-rank test. You use it to find differences in treatments across multiple test attempts. It is helpful when you have three or more matched (paired) groups.
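As a sketch of the idea (my own illustration, assuming no ties within a row): each subject's scores are ranked across the treatments, and the standard Friedman statistic χ²F = [12 / (n·k(k+1))] Σ Rⱼ² − 3n(k+1) is computed from the column rank sums. `scipy.stats.friedmanchisquare` is the standard implementation:

```python
def friedman_stat(rows):
    """χ²F from rows of data: one row per subject, one column per treatment."""
    n, k = len(rows), len(rows[0])
    col_rank_sums = [0.0] * k
    for row in rows:
        order = sorted(range(k), key=lambda j: row[j])  # rank within the subject
        for pos, j in enumerate(order):
            col_rank_sums[j] += pos + 1
    s = sum(R ** 2 for R in col_rank_sums)              # Σ Rⱼ² over treatments
    return 12 / (n * k * (k + 1)) * s - 3 * n * (k + 1)
```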

●​ Spearman’s Rank Correlation - This test is an alternative to Pearson’s correlation. It is also referred to as Spearman’s rho. It measures the strength and direction of the association between two ranked variables: it gives a measure of the monotonicity of the relation between two variables, i.e., how well the relationship could be represented using a monotonic function (Gupta, 2025). A monotonic function is one that either never increases or never decreases as its independent variable changes (Gupta, 2025).

(Formula when there are no tied ranks)

ρ = 1 − (6 Σd²) / (n(n² − 1)), where d = the difference between the two ranks of each observation and n = the number of observations.
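A minimal sketch of the no-ties formula (my own function name): rank each variable, take the rank differences d, and apply ρ = 1 − 6Σd²/(n(n² − 1)). `scipy.stats.spearmanr` also handles tied ranks:

```python
def spearman_rho(x, y):
    """Spearman's rho via the no-tied-ranks formula: 1 - 6Σd² / (n(n² - 1))."""
    n = len(x)
    def rank(vals):  # 1-based ranks, assumes no ties
        order = sorted(range(n), key=lambda i: vals[i])
        r = [0] * n
        for pos, i in enumerate(order):
            r[i] = pos + 1
        return r
    rx, ry = rank(x), rank(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))  # Σd² of rank differences
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```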
