Lecture 8 Merged

The document provides an introduction to matrices, covering their definitions, types, properties, and operations such as addition, subtraction, multiplication, and inversion. It explains the significance of determinants, minors, and cofactors in matrix operations, and discusses their application in solving linear equations. Additionally, it outlines the modeling of translational and rotational mechanical systems, detailing their basic elements and the forces acting upon them.


Lecture-8 Biomedical

Modeling and Simulation


Matrices - Introduction
Matrix algebra has at least two advantages:
•Reduces complicated systems of equations to simple
expressions
•Adaptable to systematic method of mathematical treatment
and well suited to computers

Definition:
A matrix is a set or group of numbers arranged in a square or
rectangular array enclosed by two brackets
Matrices - Introduction
Properties:
•A specified number of rows and a specified number of
columns
•Two numbers (rows x columns) describe the dimensions
or size of the matrix.

Examples:
3x3 matrix
2x4 matrix
1x2 matrix
Matrices - Introduction
A matrix is denoted by a bold capital letter and the elements
within the matrix are denoted by lower case letters
e.g. matrix [A] with elements aij

A = Amxn = [aij]

where i goes from 1 to m
and j goes from 1 to n
Matrices - Introduction
TYPES OF MATRICES

1. Column matrix or vector:


The number of rows may be any integer but the number of
columns is always 1
Matrices - Introduction
TYPES OF MATRICES

2. Row matrix or vector


Any number of columns but only one row
Matrices - Introduction
TYPES OF MATRICES

3. Rectangular matrix
Contains more than one element and number of rows is not
equal to the number of columns
Matrices - Introduction
TYPES OF MATRICES
4. Square matrix
The number of rows is equal to the number of columns
(a square matrix A of size m x m has order m)

The principal or main diagonal of a square matrix is composed of all


elements aij for which i=j
Matrices - Introduction
TYPES OF MATRICES

5. Diagonal matrix
A square matrix where all the elements are zero except those on
the main diagonal

i.e. aij = 0 for all i ≠ j

(the diagonal elements aij with i = j may take any value, zero or nonzero)
Matrices - Introduction
TYPES OF MATRICES

6. Unit or Identity matrix - I


A diagonal matrix with ones on the main diagonal

i.e. aij = 0 for all i ≠ j

aij = 1 for all i = j
Matrices - Introduction
TYPES OF MATRICES

7. Null (zero) matrix - 0


All elements in the matrix are zero

i.e. aij = 0 for all i, j


Matrices - Introduction
TYPES OF MATRICES

8. Triangular matrix
A square matrix whose elements above or below the main
diagonal are all zero
Matrices - Introduction
TYPES OF MATRICES

8a. Upper triangular matrix


A square matrix whose elements below the main
diagonal are all zero

i.e. aij = 0 for all i > j


Matrices - Introduction
TYPES OF MATRICES

8b. Lower triangular matrix

A square matrix whose elements above the main diagonal are all
zero

i.e. aij = 0 for all i < j


Matrices - Introduction
TYPES OF MATRICES
9. Scalar matrix
A diagonal matrix whose main diagonal elements are
equal to the same scalar
A scalar is defined as a single number or constant

i.e. aij = 0 for all i ≠ j

aij = a for all i = j
Matrices
Matrix Operations
Matrices - Operations

EQUALITY OF MATRICES
Two matrices are said to be equal only when all
corresponding elements are equal
Therefore their size or dimensions are equal as well

A= B= A=B
Matrices - Operations
Some properties of equality:
•If A = B, then B = A for all A and B
•If A = B, and B = C, then A = C for all A, B and C

A= B=

If A = B then
Matrices - Operations
ADDITION AND SUBTRACTION OF MATRICES

The sum or difference of two matrices, A and B of the same


size yields a matrix C of the same size

Matrices of different sizes cannot be added or subtracted


Matrices - Operations
Commutative Law:
A+B=B+A

Associative Law:
A + (B + C) = (A + B) + C = A + B + C

A B C
2x3 2x3 2x3
Matrices - Operations
A+0=0+A=A
A + (-A) = 0 (where –A is the matrix composed of –aij as elements)
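These addition laws can be checked numerically; a minimal sketch assuming NumPy is available, with hypothetical 2x2 matrices:

```python
import numpy as np

# Hypothetical matrices to illustrate the laws above
A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])
C = np.array([[1, 0],
              [0, 1]])

commutative = np.array_equal(A + B, B + A)            # A + B = B + A
associative = np.array_equal(A + (B + C), (A + B) + C)
zero = np.zeros_like(A)
additive_identity = np.array_equal(A + zero, A)       # A + 0 = A
additive_inverse = np.array_equal(A + (-A), zero)     # A + (-A) = 0
```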
Matrices - Operations
SCALAR MULTIPLICATION OF MATRICES

Matrices can be multiplied by a scalar (constant or single


element)
Let k be a scalar quantity; then
kA = Ak

Ex. If k=4 and


Matrices - Operations

Properties:
• k (A + B) = kA + kB
• (k + g)A = kA + gA
• k(AB) = (kA)B = A(kB)
• k(gA) = (kg)A
Matrices - Operations
MULTIPLICATION OF MATRICES

The product of two matrices is another matrix


Two matrices A and B must be conformable for multiplication to
be possible
i.e. the number of columns of A must equal the number of rows
of B
Example.
A x B = C
(1x3) (3x1) (1x1)
Matrices - Operations
B x A = Not possible!
(2x1) (4x2)

A x B = Not possible!
(6x2) (6x3)

Example
A x B = C
(2x3) (3x2) (2x2)
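The conformability rule above can be sketched with NumPy (the matrix values are hypothetical):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])            # 2x3
B = np.array([[ 7,  8],
              [ 9, 10],
              [11, 12]])             # 3x2

# Conformable: the 3 columns of A match the 3 rows of B.
C = A @ B
# C is 2x2; e.g. c11 = 1*7 + 2*9 + 3*11 = 58
```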
Matrices - Operations

Successive multiplication of row i of A with column j of


B – row by column multiplication
Matrices - Operations

Remember also:
IA = A
Matrices - Operations
Assuming that matrices A, B and C are conformable for
the operations indicated, the following are true:
1. AI = IA = A
2. A(BC) = (AB)C = ABC - (associative law)
3. A(B+C) = AB + AC - (first distributive law)
4. (A+B)C = AC + BC - (second distributive law)

Caution!
1. AB not generally equal to BA, BA may not be conformable
2. If AB = 0, neither A nor B necessarily = 0
3. If AB = AC, B not necessarily = C
Matrices - Operations
AB not generally equal to BA, BA may not be conformable
Matrices - Operations
If AB = 0, neither A nor B necessarily = 0
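Both cautions can be demonstrated with small hypothetical matrices (sketch assumes NumPy):

```python
import numpy as np

# Caution 1: AB is not generally equal to BA.
A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])
not_commutative = not np.array_equal(A @ B, B @ A)

# Caution 2: AB = 0 even though neither factor is the zero matrix.
P = np.array([[0, 1],
              [0, 0]])
Q = np.array([[1, 0],
              [0, 0]])
product_is_zero = np.array_equal(P @ Q, np.zeros((2, 2)))
```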
Matrices - Operations
TRANSPOSE OF A MATRIX

If :

2x3

Then transpose of A, denoted AT is:

(AT)ij = aji, for all i and j


Matrices - Operations
To transpose:
Interchange rows and columns
The dimensions of AT are the reverse of the dimensions of A

2x3

3x2
Matrices - Operations
Properties of transposed matrices:
1. (A+B)T = AT + BT
2. (AB)T = BT AT
3. (kA)T = kAT
4. (AT)T = A
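These four transpose properties can be verified numerically; a sketch with hypothetical matrices, assuming NumPy:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])            # 2x3
B = np.array([[1, 0],
              [2, 1],
              [0, 3]])               # 3x2
k = 5

sum_rule     = np.array_equal((A + A).T, A.T + A.T)
reverse_rule = np.array_equal((A @ B).T, B.T @ A.T)   # note the reversed order
scalar_rule  = np.array_equal((k * A).T, k * A.T)
involution   = np.array_equal(A.T.T, A)
```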
Matrices - Operations
1. (A+B)T = AT + BT
Matrices - Operations
(AB)T = BT AT
Matrices - Operations
SYMMETRIC MATRICES
A Square matrix is symmetric if it is equal to its
transpose:
A = AT
Matrices - Operations

When the original matrix is square, transposition does not


affect the elements of the main diagonal

The identity matrix, I, a diagonal matrix D, and a scalar matrix, K,


are equal to their transpose since the diagonal is unaffected.
Matrices - Operations
INVERSE OF A MATRIX
Consider a scalar k. The inverse is the reciprocal or division of 1
by the scalar.
Example:
k=7 the inverse of k or k-1 = 1/k = 1/7
Division of matrices is not defined, since there may be AB = AC
while B ≠ C
Instead matrix inversion is used.
The inverse of a square matrix, A, if it exists, is the unique matrix
A-1 where:
AA-1 = A-1 A = I
Matrices - Operations
Example:

Because:
Matrices - Operations
Properties of the inverse:

A square matrix that has an inverse is called a nonsingular matrix


A matrix that does not have an inverse is called a singular matrix
Square matrices have inverses except when the determinant is zero
When the determinant of a matrix is zero the matrix is singular
Matrices - Operations
DETERMINANT OF A MATRIX

To compute the inverse of a matrix, the determinant is required


Each square matrix A has a unique scalar value called the
determinant of A, denoted by det A or |A|

If

then
Matrices - Operations
If A = [A] is a single element (1x1), then the determinant is
defined as the value of the element
Then |A| =det A = a11
If A is (n x n), its determinant may be defined in terms of order
(n-1) or less.
Matrices - Operations
MINORS
If A is an n x n matrix and one row and one column are deleted,
the resulting matrix is an (n-1) x (n-1) submatrix of A.
The determinant of such a submatrix is called a minor of A and
is designated by mij , where i and j correspond to the deleted
row and column, respectively.
mij is the minor of the element aij in A.
Matrices - Operations
eg.

Each element in A has a minor


Delete first row and column from A .
The determinant of the remaining 2 x 2 submatrix is the minor
of a11
Matrices - Operations
Therefore the minor of a12 is:

And the minor for a13 is:


Matrices - Operations
COFACTORS

The cofactor cij of an element aij is defined as:

cij = (−1)^(i+j) mij

When the sum of the row number i and the column number j is even, cij = mij,
and when i+j is odd, cij = −mij
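Minors and cofactors can be computed directly from these definitions; a sketch assuming NumPy (the function names `minor` and `cofactor` are illustrative, not from the slides):

```python
import numpy as np

def minor(A, i, j):
    """Determinant of A with row i and column j deleted (0-based indices)."""
    sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return np.linalg.det(sub)

def cofactor(A, i, j):
    """Signed minor: c_ij = (-1)**(i+j) * m_ij."""
    return (-1) ** (i + j) * minor(A, i, j)

# Hypothetical 3x3 matrix
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])
m11 = minor(A, 0, 0)       # det([[5, 6], [8, 10]]) = 2
c12 = cofactor(A, 0, 1)    # -det([[4, 6], [7, 10]]) = 2
```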
Matrices - Operations
DETERMINANTS CONTINUED

The determinant of an n x n matrix A can now be defined as

The determinant of A is therefore the sum of the products of the


elements of the first row of A and their corresponding cofactors.
(It is possible to define |A| in terms of any other row or column
but for simplicity, the first row only is used)
Matrices - Operations
Therefore the 2 x 2 matrix :

Has cofactors :

And:

And the determinant of A is:


Matrices - Operations
Example 1:
Matrices - Operations
For a 3 x 3 matrix:

The cofactors of the first row are:


Matrices - Operations
The determinant of a matrix A is:

Which by substituting for the cofactors in this case is:


Matrices - Operations

Example 2:
Matrices - Operations
ADJOINT MATRICES

A cofactor matrix C of a matrix A is the square matrix of the same


order as A in which each element aij is replaced by its cofactor cij .

Example:

If

The cofactor C of A is
Matrices - Operations
The adjoint matrix of A, denoted by adj A, is the transpose of its
cofactor matrix

It can be shown that:


A(adj A) = (adjA) A = |A| I

Example:
Matrices - Operations
Matrices - Operations
USING THE ADJOINT MATRIX IN MATRIX INVERSION
Since
AA-1 = A-1 A = I

and
A(adj A) = (adj A) A = |A| I

then
A-1 = (1/|A|) adj A
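The resulting formula, A-1 = (1/|A|) adj A, can be sketched as follows (assuming NumPy; `adjugate` is an illustrative helper name and the matrix values are hypothetical):

```python
import numpy as np

def adjugate(A):
    """Transpose of the cofactor matrix of a square matrix A."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(sub)
    return C.T

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 4.0],
              [5.0, 6.0, 0.0]])      # det = 1
A_inv = adjugate(A) / np.linalg.det(A)

identity_ok = np.allclose(A @ A_inv, np.eye(3))   # checks AA^-1 = I
```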
Matrices - Operations
Example

A=

To check AA-1 = A-1 A = I


Matrices - Operations
Example 2

The determinant of A is
|A| = (3)(-1-0)-(-1)(-2-0)+(1)(4-1) = -2

The elements of the cofactor matrix are


Matrices - Operations
The cofactor matrix is therefore

so

and
Matrices - Operations
The result can be checked using

AA-1 = A-1 A = I

The determinant of a matrix must not be zero for the inverse to


exist as there will not be a solution
Nonsingular matrices have non-zero determinants
Singular matrices have zero determinants
Matrix Inversion
Simple 2 x 2 case
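For a 2 x 2 matrix [[a, b], [c, d]], the adjoint method reduces to swapping the diagonal entries, negating the off-diagonal entries, and dividing by the determinant ad − bc. A minimal plain-Python sketch:

```python
def inverse_2x2(a, b, c, d):
    """Inverse of [[a, b], [c, d]]: swap the diagonal, negate the
    off-diagonal, and divide by the determinant ad - bc."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular (determinant is zero)")
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

inv = inverse_2x2(4, 7, 2, 6)   # det = 4*6 - 7*2 = 10
```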
Matrices and Linear
Equations
Linear Equations
Linear Equations
Linear equations are common and important for survey
problems
Matrices can be used to express these linear equations and
aid in the computation of unknown values
Example
n equations in n unknowns, the aij are numerical coefficients,
the bi are constants and the xj are unknowns
Linear Equations
The equations may be expressed in the form
AX = B
where

and

nxn nx1 nx1

Number of unknowns = number of equations = n


Linear Equations
If the determinant is nonzero, the equation can be solved to produce
n numerical values for x that satisfy all the simultaneous equations
To solve, premultiply both sides of the equation by A-1, which exists
because |A| ≠ 0

A-1 AX = A-1 B
Now since
A-1 A = I

We get
X = A-1 B

So if the inverse of the coefficient matrix is found, the unknowns,


X would be determined
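As a sketch (assuming NumPy, with a hypothetical pair of equations), `np.linalg.solve` computes X without forming A-1 explicitly, which is numerically preferable:

```python
import numpy as np

# Hypothetical system of equations:
#   2x +  y = 5
#    x + 3y = 10
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([5.0, 10.0])

X = np.linalg.solve(A, B)   # equivalent to A^-1 B, but more stable
# X = [1, 3]
```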
Linear Equations
Example

The equations can be expressed as


Linear Equations
When A-1 is computed the equation becomes

Therefore
Linear Equations
The values for the unknowns should be checked by substitution
back into the initial equations
Lecture-9 Biomedical
Modeling and Simulation
• There are two types of mechanical systems based on the type of
motion.
• Translational mechanical systems
• Rotational mechanical systems

Modeling of Translational Mechanical Systems


• Translational mechanical systems move along a straight line.
• These systems mainly consist of three basic elements.
Mass, Spring and Dashpot or damper.
• If a force is applied to a translational mechanical system, then it is
opposed by opposing forces due to mass, elasticity and friction of the
system.
• Since the applied force and the opposing forces are in opposite
directions, the algebraic sum of the forces acting on the system is
zero.
Mass
Mass is the property of a body, which stores kinetic energy. If a
force is applied on a body having mass M, then it is opposed by an
opposing force due to mass. This opposing force is proportional to
the acceleration of the body. Assume elasticity and friction are
negligible.
Spring
Spring is an element, which stores potential energy. If a force is
applied on spring K, then it is opposed by an opposing force due
to elasticity of spring. This opposing force is proportional to the
displacement of the spring. Assume mass and friction are
negligible.
Dashpot
If a force is applied on dashpot B, then it is opposed by an
opposing force due to friction of the dashpot. This opposing force
is proportional to the velocity of the body. Assume mass and
elasticity are negligible
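Putting the three elements together, the force balance gives M·x″ + B·x′ + K·x = F(t). A minimal forward-Euler simulation sketch (the parameter values M, B, K, F and the step size are illustrative assumptions, not from the slides):

```python
# Simulate M*x'' + B*x' + K*x = F for a step force F.
# All parameter values below are illustrative assumptions.
M, B, K, F = 1.0, 0.5, 4.0, 1.0
dt, steps = 0.001, 20000            # 20 s of simulated time

x, v = 0.0, 0.0                     # initial displacement and velocity
for _ in range(steps):
    a = (F - B * v - K * x) / M     # acceleration from the force balance
    v += a * dt                     # update velocity first (semi-implicit Euler)
    x += v * dt

# At steady state the spring carries the whole force: K*x = F, so x -> F/K = 0.25
```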
Modeling of Rotational Mechanical Systems
• Rotational mechanical systems move about a fixed axis. These
systems mainly consist of three basic elements. Those
are moment of inertia, torsional spring and dashpot.
• If a torque is applied to a rotational mechanical system, then it
is opposed by opposing torques due to moment of inertia,
elasticity and friction of the system.
• Since the applied torque and the opposing torques are in
opposite directions, the algebraic sum of torques acting on the
system is zero.
Moment of Inertia
• In translational mechanical system, mass stores kinetic energy. Similarly, in
rotational mechanical system, moment of inertia stores kinetic energy.
• If a torque is applied on a body having moment of inertia J, then it is
opposed by an opposing torque due to the moment of inertia. This opposing
torque is proportional to angular acceleration of the body. Assume elasticity
and friction are negligible.
Torsional Spring
In translational mechanical system, spring stores potential energy. Similarly,
in rotational mechanical system, torsional spring stores potential energy.
If a torque is applied on torsional spring K, then it is opposed by an opposing
torque due to the elasticity of torsional spring. This opposing torque is
proportional to the angular displacement of the torsional spring. Assume that
the moment of inertia and friction are negligible.
Dashpot
If a torque is applied on dashpot B, then it is opposed by an
opposing torque due to the rotational friction of the dashpot. This
opposing torque is proportional to the angular velocity of the
body. Assume the moment of inertia and elasticity are negligible.
Electrical Analogies of Mechanical Systems
Two systems are said to be analogous to each other if the
following two conditions are satisfied.
• The two systems are physically different
• Differential equation modelling of these two systems are same
Electrical systems and mechanical systems are two physically
different systems.
There are two types of electrical analogies of translational
mechanical systems.
Those are force voltage analogy and force current analogy.
Model of Electrical Systems
Basic components:

resistor: v(t) = R i(t)

inductor: v(t) = L di/dt

capacitor: i(t) = C dv/dt
Impedance
Basic components (s-domain):

resistor: ZR = R

inductor: ZL = sL

capacitor: ZC = 1/(sC)
Circuit Systems

Force Voltage Analogy
In force voltage analogy, the mathematical equations of translational
mechanical system are compared with mesh equations of the electrical
system.
• Consider the following translational mechanical system as shown in the
following figure.
Torque Voltage Analogy
In this analogy, the mathematical equations of rotational
mechanical system are compared with mesh equations of the
electrical system.
Force Current Analogy
In force current analogy, the mathematical equations of
the translational mechanical system are compared with the nodal
equations of the electrical system.
Lecture-10 Biomedical
Modeling and Simulation
Gaussian (Normal) Distribution
•The Gaussian Distribution is one of the
most used distributions in all of science.
It is also called the “bell curve” or the
Normal Distribution.
•The Gaussian
Distribution is
also called the
“Normal
Distribution”.
•The Gaussian
Distribution is
also called the
“Normal
Distribution”.
•Less well known
is that there is
also a
“Paranormal
Distribution!”
Normal or Gaussian Distribution

It is symmetric. Its mean, median,
and mode are equal.
A 2-Dimensional Gaussian
Gaussian or Normal Distribution
• It is a symmetrical, bell-shaped curve.
• It has a point of inflection at a position 1 standard
deviation from the mean. Formula:

f(x) = (1 / (σ√(2π))) e^(−(x−μ)² / (2σ²))
The Normal Distribution

This is a bell shaped curve, with different centers and
spreads depending on μ and σ.

Note the constants:
π = 3.14159
e = 2.71828

• There are only 2 parameters that determine the curve
shape: the mean μ and the variance σ².
The rest are constants.
• For “z scores” (μ = 0, σ = 1), the equation
becomes:

f(z) = (1/√(2π)) e^(−z²/2)

•The negative exponent means that big |z|
values give small function values in the tails.
Normal Distribution
•It’s a probability function, so no matter what
the values of μ and σ, it must integrate to 1!
The Normal Distribution is Defined
by its Mean & Standard Deviation.

Mean = μ
Variance = σ²
Standard Deviation = σ
Normal Distribution
• Can take on an infinite
number of possible values.
• The probability of any one
of those values occurring
is essentially zero.
• Curve has area or
probability = 1

•A normal distribution with a mean μ


= 0 & a standard deviation σ = 1 is
called
The standard normal
distribution.
•Z Value: The distance between a
selected value, designated X, and
the population mean μ, divided by
the population standard deviation, σ
Example 1

• The monthly incomes of recent MBA graduates in a


large corporation are normally distributed with a
mean of $2000 and a standard deviation of $200.
What is the Z value for an income of $2200? An
income of $1700?
• For X = $2200, Z= (2200-2000)/200 = 1.
• For X = $1700, Z = (1700-2000)/200 = -1.5
• A Z value of 1 indicates that the value of $2200 is 1
standard deviation above the mean of $2000, while
the Z value of −1.5 indicates that $1700 is 1.5 standard
deviations below the mean of $2000.
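The computation in Example 1 can be sketched as:

```python
def z_value(x, mu, sigma):
    """Number of standard deviations x lies from the mean."""
    return (x - mu) / sigma

# Monthly-income example from the slides: mu = $2000, sigma = $200
z_high = z_value(2200, 2000, 200)   # 1.0
z_low  = z_value(1700, 2000, 200)   # -1.5
```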
Probabilities Depicted by Areas
Under the Curve
• Total area under the curve is 1
• The area in red is equal to
p(z > 1)
• The area in blue is equal to
p(-1< z <0)
• Since the properties of the
normal distribution are known,
areas can be looked up on
tables or calculated on a
computer.
Probability of an Interval
Cumulative Probability
•Given any positive value for z, the corresponding
probability can be looked up in standard tables.

A table will give this


probability

Given positive z
The probability found using a table is the
probability of having a standard normal variable
between 0 & the given positive z.
Areas Under the Standard Normal Curve
Areas and Probabilities
•The Table shows cumulative normal
probabilities. Some selected entries:
z F(z) z F(z) z F(z)

0 .50 .3 .62 1 .84


.1 .54 .4 .66 2 .98
.2 .58 .5 .69 3 .99
• About 54 % of scores fall below z of .1. About 46 % of
scores fall below a z of -.1 (1-.54 = .46). About 14% of
scores fall between z of 1 and 2 (.98-.84).
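Instead of a table lookup, the cumulative probability F(z) can be computed from the error function in Python's standard library; a minimal sketch:

```python
from math import erf, sqrt

def phi(z):
    """Cumulative probability F(z) of the standard normal, via erf."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

below_point1    = phi(0.1)           # ~0.54, matching the table entry
between_1_and_2 = phi(2) - phi(1)    # ~0.14, scores between z = 1 and z = 2
```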
Areas Under the Normal Curve

• About 68 percent of the area under the normal
curve is within one standard deviation of the
mean:
μ − σ < X < μ + σ
• About 95 percent is within two standard
deviations of the mean:
μ − 2σ < X < μ + 2σ
• About 99.74 percent is within three standard
deviations of the mean:
μ − 3σ < X < μ + 3σ
Areas Under the Normal Curve (standard normal: μ = 0, σ² = 1)

Between ±1σ: 68.26%
Between ±2σ: 95.44%
Between ±3σ: 99.74%

Irwin/McGraw-Hill © The McGraw-Hill Companies, Inc., 1999


Key Areas Under the
Curve
For normal
distributions
±1σ ~ 68%
±2σ ~ 95%
±3σ ~ 99.7%
“68-95-99.7 Rule”

68% of the data lie within ±1σ,
95% of the data within ±2σ, and
99.7% of the data within ±3σ.

68.26-95.44-99.74 Rule
For a Normally distributed variable:
1. About 68.26% of all possible observations lie within
one standard deviation on either side of the
mean
(between μ−σ and μ+σ).
2. About 95.44% of all possible observations lie within
two standard deviations on either side of the
mean
(between μ−2σ and μ+2σ).
3. About 99.74% of all possible observations lie within
three standard deviations on either side of the
mean (between μ−3σ and μ+3σ).
• Using the unit normal (z), we can find areas and
probabilities for any normal distribution.
• Suppose X = 120, μ =100, σ =10.
• Then z = (120-100)/10 = 2.
• About 98 % of cases fall below a score of 120 if the
distribution is normal. In the normal, most (95%) are
within 2 σ of the mean. Nearly everybody (99%) is
within 3 σ of the mean.
68.26-95.44-99.74 Rule
68-95-99.7 Rule in Math terms…
Central Limit Theorem
• Flip coin N times
• Each outcome has an associated random variable Xi
(= 1, if heads, otherwise 0)
• Number of heads:

NH = x1 + x2 + …. + xN
• NH is a random variable
Central Limit Theorem
• Coin flip problem.
• Probability function of NH
• P(Head) = 0.5 (fair coin)

(distributions of NH shown for N = 5, N = 10, and N = 40)
Central Limit Theorem
The distribution of the sum of N random variables
becomes increasingly Gaussian as N grows.
Example: N uniform [0,1] random variables.
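A quick simulation sketch (plain Python; the choice of N and the trial count are illustrative) shows the sum of uniforms behaving like a Gaussian:

```python
import random

random.seed(0)   # fixed seed so the sketch is reproducible

# The sum of N uniform[0,1] variables has mean N/2 and variance N/12.
N, trials = 12, 20000
sums = [sum(random.random() for _ in range(N)) for _ in range(trials)]

sample_mean = sum(sums) / trials     # should be close to N/2 = 6
# With variance N/12 = 1, roughly 68% of the sums should land within
# one standard deviation of the mean if the distribution is near-Gaussian.
within_1sd = sum(1 for s in sums if abs(s - N / 2) <= 1) / trials
```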
Normal Distribution
Why are normal distributions so
important?
• Many dependent variables are commonly
assumed to be normally distributed in the
population
• If a variable is approximately normally distributed
we can make inferences about values of that
variable
• Example: Sampling distribution of the mean
• So what?
• Remember the Binomial distribution
•With a few trials we were able to calculate
possible outcomes and the probabilities of
those outcomes
Normal Distribution
Why are normal distributions so important?
• Remember the Binomial distribution
• With a few trials we were able to calculate possible
outcomes and the probabilities of those outcomes
• Now try it for a continuous distribution with an infinite
number of possible outcomes. Yikes!
• The normal distribution and its properties are well
known, and if our variable of interest is normally
distributed, we can apply what we know about the
normal distribution to our situation, and find the
probabilities associated with particular outcomes.
•Since we know the shape of the normal
curve, we can calculate the area under the
curve
•The percentage of that area can be used to
determine the probability that a given value
could be pulled from a given distribution.
•The area under the curve tells us about the
probability- in other words we can obtain a
p-value for our result (data) by treating it as a
normally distributed data set.
Lecture-11 Biomedical
Modeling and Simulation
Meaning of Correlation Analysis
Correlation analysis is a process to find out the degree of relationship
between two or more variables by applying various statistical tools and
techniques.
According to Conner
“If two or more quantities vary in sympathy, so that movements in one
tend to be accompanied by corresponding movements in the other,
then they are said to be correlated.”
Correlation : On the basis of number of
variables
Partial correlation
When three or more variables are considered for
analysis but only two influencing variables are studied
and rest influencing variables are kept constant.
For example
Correlation analysis is done with demand, supply and
income. Where income is kept constant.
Correlation : On the basis of number of variables

Multiple correlation
In case of multiple correlation three or more variables are
studied simultaneously.
For example :
Rainfall, production of rice and price of rice are studied
simultaneously will be known are multiple correlation.
Correlation : On the basis of linearity

Linear correlation:
If a change in the amount of one variable tends to
produce a change in the amount of the other variable
in a constant ratio, the correlation is said to be
linear.
For example :
Income ( Rs.) : 350 360 370 380
Weight ( Kg.) : 30 40 50 60
Correlation : On the basis of
linearity
Non-linear correlation:
If a change in the amount of one variable tends to produce a
change in the amount of the other variable, but not in a
constant ratio, the correlation is said to be
non-linear.
For example :
Income ( Rs.) : 320 360 410 490
Weight ( Kg.) : 21 33 49 56
Three Stages to solve correlation problem

1. Determination of relationship, if yes, measure it.

2. Significance of correlation.

3. Establishing the cause and effect relationship, if any.


Uses of Correlation Analysis
1. It is used in deriving the degree and direction of
relationship within the variables.
2. It is used in reducing the range of uncertainty in matter
of prediction.
3. It is used in presenting the average relationship between
any two variables through a single value, the coefficient of
correlation.
Uses of Correlation Analysis

• In the field of science and philosophy these methods are used


for making progressive conclusions.

• In the field of nature also, it is used in observing the multiplicity


of the inter related forces.
Importance of correlation analysis

Measures the degree of relation i.e. whether it is positive or


negative.
Estimating values of variables, i.e. if two variables are highly
correlated, then we can estimate the value of one variable from a
given value of the other.
Helps in understanding economic behavior.
Correlation and Causation
The correlation may be due to pure chance,
especially in a small sample.

Both the correlated variables may be influenced


by one or more other variables.

Both the variables may be mutually influencing each
other, so that neither can be designated as the cause
and the other as the effect.
Conditions under Probable error
❑ if the value of r is less than the probable error there is no
evidence of correlation, i.e. the value of r is not at all significant.

❑ If the value of r is more than six times the probable error, the
coefficient of correlation is practically certain i.e. the value of r is
significant.
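The slides do not state the formula for the probable error; the standard expression is P.E. = 0.6745 (1 − r²)/√n for n paired observations. A sketch under that assumption:

```python
from math import sqrt

def probable_error(r, n):
    """Probable error of r: 0.6745 * (1 - r**2) / sqrt(n),
    where n is the number of paired observations (assumed formula)."""
    return 0.6745 * (1 - r ** 2) / sqrt(n)

def interpret(r, n):
    """Rules of thumb from the slides."""
    pe = probable_error(r, n)
    if abs(r) < pe:
        return "no evidence of correlation"
    if abs(r) > 6 * pe:
        return "significant"
    return "inconclusive"
```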
Conditions under Probable error

By adding and subtracting the value of the probable error from the
coefficient of correlation, we get the upper and lower limits
between which the population correlation lies:

ρ = r + P.E. (upper limit)
ρ = r − P.E. (lower limit)
Coefficient of Determination
Coefficient of determination also helps in interpreting the
value of coefficient of correlation. Square of value of
correlation is used to find out the proportionate
relationship or dependence of dependent variable on
independent variable.

For e.g. r= 0.9 then r2 = .81 or 81% dependence of


dependent variable on independent variable
Coefficient of Determination = Explained variation / Total variation
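A sketch computing r and r² from hypothetical paired data (plain Python; the dataset is illustrative only):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical paired observations
xs = [1, 2, 3, 4, 5]
ys = [2, 4, 5, 4, 5]
r = pearson_r(xs, ys)
r_squared = r ** 2   # proportion of the variation in y explained by x
```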
Lecture-12 Biomedical
Modeling and Simulation
Lecture-13 Biomedical
Modeling and Simulation
Regression Analysis
Lecture-14 Biomedical
Modeling and Simulation
