
Introduction to Matrix Algebra I

1 Matrices and Vectors

Matrix: A matrix is simply an arrangement of numbers in rectangular form. Generally, a (j × k) matrix A can be written as follows:

$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1k} \\ a_{21} & a_{22} & \cdots & a_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ a_{j1} & a_{j2} & \cdots & a_{jk} \end{bmatrix}$$

Note: There are j rows and k columns. The elements are double subscripted, with the row
number first, and the column number second.
Note: You should always keep the dimensionality of matrices in mind. The dimensionality of the
matrix is also called the order of a matrix. That is, the A above is of order (j, k).

Examples:
1 3
W=  
1 − 6

is of order (2, 2). This is also called a square matrix, because the row dimension equals the column dimension (j = k). There are also rectangular matrices (j ≠ k), such as:

$$\begin{bmatrix} 1 & 4 \\ 1 & 3 \\ 1 & -2 \\ 0 & 3 \end{bmatrix}$$

which is of order (4, 2).

Vector: There exists a special kind of matrix called a vector. Vectors are matrices that have
either one row or one column. Of course, a matrix with one row and one column is the same as
a scalar – a regular number.

Note: Row vectors are those that have a single row and multiple columns.
Example: An order (1, k) row vector looks like this:
$$\beta = \begin{bmatrix} \beta_1 & \beta_2 & \beta_3 & \cdots & \beta_k \end{bmatrix}$$
Similarly, column vectors are those that have a single column and multiple rows.
Example: An order (k, 1) column vector looks like:
$$y = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_k \end{bmatrix}$$

2 Matrix Addition and Subtraction


Given a matrix of numbers, one can extend regular scalar algebra in a straightforward way. Scalar addition is simply:

m + n = 2 + 5 = 7

Addition is similarly defined for matrices. If matrices or vectors are of the same order, then they
can be added. One performs the addition element by element.

Thus, for a pair of order (2, 2) matrices, addition proceeds as follows for the problem A+B = C:

$$\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} + \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix} = \begin{bmatrix} a_{11}+b_{11} & a_{12}+b_{12} \\ a_{21}+b_{21} & a_{22}+b_{22} \end{bmatrix} = \begin{bmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{bmatrix}$$

Subtraction follows similarly. It is important to keep in mind that matrix addition and subtraction are defined only if the matrices are of the same order, or, in other words, share the same dimensionality. If they do, they are said to be conformable for addition. If not, they are non-conformable.

Example:
1 4 −2   −3 2 8   4 2 −10 
5 −3 3  −  2 2 −3 =  3 −5 6 
     

Note: There are two important properties of matrix addition that are worth noting:

▪ A+B=B+A Matrix addition is commutative.


▪ (A + B) + C = A + (B + C) Matrix addition is associative.
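The element-by-element rule and the two properties above can be sketched in a few lines of Python, using plain lists of rows (the helper names are ours, for illustration):

```python
# A minimal sketch of element-wise matrix addition and subtraction.
def mat_add(A, B):
    """Add two matrices of the same order, element by element."""
    assert len(A) == len(B) and len(A[0]) == len(B[0]), "non-conformable"
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_sub(A, B):
    """Subtract B from A, element by element."""
    assert len(A) == len(B) and len(A[0]) == len(B[0]), "non-conformable"
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

# The subtraction example from the text:
A = [[1, 4, -2], [5, -3, 3]]
B = [[-3, 2, 8], [2, 2, -3]]
print(mat_sub(A, B))              # [[4, 2, -10], [3, -5, 6]]
print(mat_add(A, B) == mat_add(B, A))  # True: addition is commutative
```

Adding a matrix of a different order trips the conformability assertion, mirroring the rule that addition is undefined for matrices of different dimensionality.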

Transpose of a Matrix
Another operation that is often useful is transposition, denoted with a prime: A′. In this operation, the order subscripts are exchanged for each element of the matrix A. Thus, an order (j, k) matrix becomes an order (k, j) matrix.

Example:
$$Q = \begin{bmatrix} q_{1,1} & q_{1,2} \\ q_{2,1} & q_{2,2} \\ q_{3,1} & q_{3,2} \end{bmatrix}, \qquad Q' = \begin{bmatrix} q_{1,1} & q_{2,1} & q_{3,1} \\ q_{1,2} & q_{2,2} & q_{3,2} \end{bmatrix}$$
Note: The subscripts in the transpose remain the same; they are just exchanged. Transposition makes more sense when using numbers. Here is an example for a row vector:

1
3
 = 1 3 2 − 5  =  
2
 
 −5

Note: Transposing a row vector turns it into a column vector, and vice versa.

There are a couple of results regarding transposition that are important to remember:
▪ An order (j, j) matrix A is said to be symmetric ⟺ A = A′. That is, all symmetric matrices are square matrices. Here is an example:

$$W = \begin{bmatrix} 1 & .2 & -.5 \\ .2 & 1 & .4 \\ -.5 & .4 & 1 \end{bmatrix}, \qquad W' = \begin{bmatrix} 1 & .2 & -.5 \\ .2 & 1 & .4 \\ -.5 & .4 & 1 \end{bmatrix}$$

▪ ( A) = A . The transpose of the transpose is the original matrix.


▪ For a scalar k, (kA) = kA .
▪ ( A + B) = A + B. Transposition is also commutative.
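These transposition results can be checked with a short sketch (a hypothetical `transpose` helper of ours, not from any library):

```python
# A sketch of transposition: exchange row and column subscripts.
def transpose(A):
    return [list(row) for row in zip(*A)]

beta = [[1, 3, 2, -5]]               # an order (1, 4) row vector
print(transpose(beta))               # the (4, 1) column vector [[1], [3], [2], [-5]]

W = [[1, .2, -.5], [.2, 1, .4], [-.5, .4, 1]]
print(transpose(W) == W)             # True: W is symmetric, W' = W

A = [[1, 4], [1, 3], [1, -2], [0, 3]]
print(transpose(transpose(A)) == A)  # True: (A')' = A
```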

3 Matrices and Multiplication


The first type of multiplication is a scalar times a matrix. In other words, a scalar α times a matrix A equals the scalar times each element of A. Thus,

$$\alpha A = \alpha \begin{bmatrix} a_{1,1} & a_{1,2} \\ a_{2,1} & a_{2,2} \end{bmatrix} = \begin{bmatrix} \alpha a_{1,1} & \alpha a_{1,2} \\ \alpha a_{2,1} & \alpha a_{2,2} \end{bmatrix}$$

So, for:

$$A = \begin{bmatrix} 4 & 8 & 2 \\ 6 & 8 & 10 \end{bmatrix}, \qquad \frac{1}{2} A = \begin{bmatrix} 2 & 4 & 1 \\ 3 & 4 & 5 \end{bmatrix}$$

Given A of order (m, n) and B of order (n, r), the product AB = C is the order (m, r) matrix whose entries are defined by:

$$c_{ij} = \sum_{k=1}^{n} a_{i,k} b_{k,j}$$

where i = 1, . . ., m and j = 1, . . ., r. Note that for matrices to be multiplication conformable, the number of columns in the first matrix (n) must equal the number of rows in the second matrix (n).

Examples:

$$A = \begin{bmatrix} -2 & 1 & 3 \\ 4 & 1 & 6 \end{bmatrix}, \qquad B = \begin{bmatrix} 3 & -2 \\ 2 & 4 \\ 1 & -3 \end{bmatrix}$$

Here, we would say that B is pre-multiplied by A, or that A is post-multiplied by B:

$$AB = \begin{bmatrix} -2(3) + 1(2) + 3(1) & -2(-2) + 1(4) + 3(-3) \\ 4(3) + 1(2) + 6(1) & 4(-2) + 1(4) + 6(-3) \end{bmatrix} = \begin{bmatrix} -1 & -1 \\ 20 & -22 \end{bmatrix}$$

Note that A is of order (2, 3), and B is of order (3, 2). Thus, the product AB is of order (2, 2).

Problem: Compute the product BA, which will be of order (3, 3).

$$BA = \begin{bmatrix} -14 & 1 & -3 \\ 12 & 6 & 30 \\ -14 & -2 & -15 \end{bmatrix}$$

Note: This shows that the multiplication of matrices is not commutative. That is, AB ≠ BA.
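The summation definition translates directly into a triple loop; a minimal sketch (our own `mat_mul` helper, assuming conformable inputs):

```python
# A direct sketch of the summation definition c_ij = sum_k a_ik * b_kj.
def mat_mul(A, B):
    m, n, r = len(A), len(B), len(B[0])
    assert len(A[0]) == n, "columns of A must equal rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(r)]
            for i in range(m)]

A = [[-2, 1, 3], [4, 1, 6]]
B = [[3, -2], [2, 4], [1, -3]]
print(mat_mul(A, B))  # [[-1, -1], [20, -22]], order (2, 2)
print(mat_mul(B, A))  # order (3, 3); AB and BA differ
```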

There are a couple of identities worth noting:

▪ Matrix multiplication is not commutative. That is, AB ≠ BA.
▪ Matrix multiplication is associative. That is, (AB)C = A(BC).
▪ Matrix multiplication is distributive. That is, A(B + C) = AB + AC.
▪ Scalar multiplication is commutative, associative, and distributive.
▪ The transpose of a product takes an interesting form that can easily be proven: (AB)′ = B′A′.
Note: Just as with scalar algebra, the exponentiation operator is used to denote repeated multiplication. For a square matrix (why square? because it is the only type of matrix conformable with itself), we use the notation

A⁴ = A · A · A · A

to denote exponentiation.

4 Vectors and Multiplication


The same formula used for matrix multiplication also applies to vectors. Let e be a column vector of order (N, 1). Then the inner product of e with itself is simply:

$$e'e = \begin{bmatrix} e_1 & e_2 & \cdots & e_N \end{bmatrix} \begin{bmatrix} e_1 \\ e_2 \\ \vdots \\ e_N \end{bmatrix}$$

which can be simplified as:

$$e'e = e_1 e_1 + e_2 e_2 + \cdots + e_N e_N = \sum_{i=1}^{N} e_i^2$$

Note: The inner product of a column vector with itself is simply equal to the sum of the squared values of the vector, which is used quite often in the regression model. Geometrically, the square root of the inner product is the length of the vector.
Note: Similarly, the outer product of a column vector e, denoted ee′, yields an order (N, N) matrix.

Note: There are a couple of other vector products that are interesting to note.
Let i denote an order (N, 1) vector of ones, and x denote an order (N, 1) vector of data. The following is an interesting quantity:

$$\frac{1}{N} i'x = \frac{1}{N}(x_1 + x_2 + \cdots + x_N) = \frac{1}{N} \sum x_i = \bar{x}$$

From this, it follows that:

$$i'x = \sum x_i$$

Similarly, let y denote another (N, 1) vector of data. The following is also interesting:

$$x'y = x_1 y_1 + x_2 y_2 + \cdots + x_N y_N = \sum_{i=1}^{N} x_i y_i$$
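All of these vector products reduce to one operation; a sketch with a single `inner` helper (an illustrative function of ours, not a library call):

```python
# The inner product of two vectors: sum of element-wise products.
def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

e = [1.0, -2.0, 2.0]
print(inner(e, e))               # 9.0, the sum of squares e'e
print(inner(e, e) ** 0.5)        # 3.0, the length of e

x = [2, 4, 6]
ones = [1] * len(x)              # the vector i of ones
print(inner(ones, x))            # 12, i.e. i'x = sum of the x_i
print(inner(ones, x) / len(x))   # 4.0, the mean x-bar

y = [1, 0, 2]
print(inner(x, y))               # 14 = 2(1) + 4(0) + 6(2), i.e. x'y
```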

One can employ matrix multiplication using vectors. Let A be an order (2, 3) matrix that looks like this:

$$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{bmatrix}$$

Let one row vector be $a_1' = \begin{bmatrix} a_{11} & a_{12} & a_{13} \end{bmatrix}$ and the other row vector $a_2' = \begin{bmatrix} a_{21} & a_{22} & a_{23} \end{bmatrix}$. So, A can be expressed as follows:

$$A = \begin{bmatrix} a_1' \\ a_2' \end{bmatrix}$$

It is most common to represent all vectors as column vectors, so to write a row vector you use the transposition operator. Let B be a matrix of order (3, 2). It can be written as:

$$B = \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \\ b_{31} & b_{32} \end{bmatrix} = \begin{bmatrix} b_1 & b_2 \end{bmatrix}$$

where b₁ and b₂ represent the columns of B. The product of the matrices, then, can be expressed in terms of four inner products:

$$AB = C = \begin{bmatrix} a_1' b_1 & a_1' b_2 \\ a_2' b_1 & a_2' b_2 \end{bmatrix}$$
This is the same as the summation definition of multiplication.

5 Special Matrices and Their Properties


In scalar algebra, x · 1 = x is known as the identity relationship. A similar relationship is found in matrix algebra: AI = A, where I is a diagonal square matrix with ones on the main diagonal and zeros on the off diagonal. For example,

$$I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

The subscript denotes its dimensionality. Example of the use of an identity matrix:

$$\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$$

The identity matrix is commutative, associative, and distributive with respect to multiplication.
That is,

AIB = IAB = ABI = AB

Parallel to the identity x + 0 = x in scalar algebra, the same idea generalizes to matrix algebra with the definition of the null matrix, which is simply a matrix of zeros, denoted 0ⱼ,ₖ. Here is an example:

$$A + 0_{2,2} = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} = A$$

There are two other matrices worth mentioning. The first is a diagonal matrix, which takes values on the main diagonal and zeros on the off diagonal. Formally, matrix A is diagonal if a_{i,j} = 0 ∀ i ≠ j. The identity matrix is an example; so is the matrix Ω:

$$\Omega = \begin{bmatrix} 2.5667 & 0 & 0 \\ 0 & 3.4126 & 0 \\ 0 & 0 & 7.4329 \end{bmatrix}$$

Introduction to Matrix Algebra II


1 Computing Determinants
So far we have discussed addition, subtraction, and multiplication, but not division. Remember this simple algebraic problem:

$$2x = 6$$
$$\frac{1}{2} \cdot 2x = \frac{1}{2} \cdot 6$$
$$x = 3$$

The quantity $\frac{1}{2} = 2^{-1}$ can be called the inverse of 2. This is exactly what we are doing when we divide in scalar algebra.

Let A⁻¹ be the inverse matrix of the matrix A. In scalar algebra, a number times its inverse equals one. In matrix algebra, then, we must find the matrix A⁻¹ where AA⁻¹ = A⁻¹A = I.

Given this matrix, one can solve a system of equations:

$$Ax = b$$
$$A^{-1} A x = A^{-1} b$$
$$x = A^{-1} b$$

because A⁻¹A = I.

The question remains as to how to compute A⁻¹. To do so, we first compute the determinants of square matrices using the cofactor expansion. Then, we use these determinants to form the matrix A⁻¹.

Note: Determinants are defined only for square matrices, and are scalars. Determinants are important in determining whether a matrix is invertible, and what the inverse is. For an order (2, 2) matrix, the determinant is defined as follows:

$$|A| = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{11} a_{22} - a_{12} a_{21}$$

Examples:

$$G = \begin{bmatrix} 2 & 4 \\ 6 & 3 \end{bmatrix}, \quad |G| = 2(3) - 4(6) = -18$$

$$\Omega = \begin{bmatrix} 2 & 4 \\ 1 & 2 \end{bmatrix}, \quad |\Omega| = 2(2) - 4(1) = 0$$

For an order (n, n) square matrix A, we can define the cofactor θ_{r,s} for each element a_{r,s} of A. The cofactor of a_{r,s} is denoted:

$$\theta_{r,s} = (-1)^{(r+s)} \left| A_{r,s} \right|$$

where A_{r,s} is the matrix formed after deleting row r and column s of the matrix (sometimes called the minor of A). Thus, each element of the matrix A has its own cofactor, so we can compute the matrix of cofactors for a matrix.

Example:

$$B = \begin{bmatrix} 1 & 3 & 2 \\ 4 & 5 & 6 \\ 8 & 7 & 9 \end{bmatrix}$$

then the matrix of cofactors:

$$\Theta = \begin{bmatrix} \begin{vmatrix} 5 & 6 \\ 7 & 9 \end{vmatrix} & -\begin{vmatrix} 4 & 6 \\ 8 & 9 \end{vmatrix} & \begin{vmatrix} 4 & 5 \\ 8 & 7 \end{vmatrix} \\ -\begin{vmatrix} 3 & 2 \\ 7 & 9 \end{vmatrix} & \begin{vmatrix} 1 & 2 \\ 8 & 9 \end{vmatrix} & -\begin{vmatrix} 1 & 3 \\ 8 & 7 \end{vmatrix} \\ \begin{vmatrix} 3 & 2 \\ 5 & 6 \end{vmatrix} & -\begin{vmatrix} 1 & 2 \\ 4 & 6 \end{vmatrix} & \begin{vmatrix} 1 & 3 \\ 4 & 5 \end{vmatrix} \end{bmatrix} = \begin{bmatrix} 3 & 12 & -12 \\ -13 & -7 & 17 \\ 8 & 2 & -7 \end{bmatrix}$$
We can use any row or column of the matrix of cofactors Θ to compute the determinant of a matrix A. For any row i,

$$|A| = \sum_{j=1}^{n} a_{ij} \theta_{ij}$$

Or, for any column j,

$$|A| = \sum_{i=1}^{n} a_{ij} \theta_{ij}$$

These are handy formulas, because they allow the determinant of an order n matrix to be reduced to determinants of order (n − 1). Note that one can use any row or column to do the expansion and compute the determinant. This process is called cofactor expansion. It can be repeated on very large matrices many times to get down to an order 2 matrix.

Let us return to our example. We will do the cofactor expansion on the first row, and then on the second column.

$$|B| = \sum_{j=1}^{n} b_{1j} \theta_{1j} = 1(3) + 3(12) + 2(-12) = 15$$

$$|B| = \sum_{i=1}^{n} b_{i2} \theta_{i2} = 3(12) + 5(-7) + 7(2) = 15$$

You can do this for the other two rows, or the other two columns, and get the same result.

Note: For any given square matrix, you can perform the cofactor expansion and compute the determinant.
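Cofactor expansion along the first row lends itself to a short recursive sketch (our own `det` and `minor` helpers, for illustration):

```python
# Recursive cofactor expansion along the first row:
# |A| = sum_j a_1j * (-1)^(1+j) * |A_1j|, where A_1j deletes row 1, column j.
def minor(A, r, s):
    """The matrix A with row r and column s deleted (0-indexed)."""
    return [row[:s] + row[s+1:] for i, row in enumerate(A) if i != r]

def det(A):
    if len(A) == 1:
        return A[0][0]
    # (-1) ** j matches (-1)^(1+j) because j is 0-indexed here.
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j))
               for j in range(len(A)))

B = [[1, 3, 2], [4, 5, 6], [8, 7, 9]]
print(det(B))               # 15, matching both expansions above
print(det([[2, 4], [6, 3]]))  # -18, the 2x2 example G
```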

2 Matrix Inversion
Recall that A⁻¹ denotes the inverse matrix of the matrix A: the matrix satisfying AA⁻¹ = A⁻¹A = I.

The determinant of a matrix A is denoted by |A|, and the matrix of cofactors is denoted by Θ_A. The adjoint matrix is simply the matrix of cofactors transposed. Thus,

$$\operatorname{adj}(A) = \Theta' = \left[ \theta_{r,s} \right]' = \left[ (-1)^{(r+s)} |A_{r,s}| \right]'$$

where A_{r,s} is the matrix formed after deleting row r and column s of the matrix (sometimes called the minor of A).

Now, the inverse of a matrix A is defined as follows:

$$A^{-1} = \frac{1}{|A|} \operatorname{adj}(A)$$

Thus, for any matrix A which is invertible, we can compute the inverse. This is trivial for order (2, 2) matrices, and only takes a few minutes for order (3, 3) matrices.

3 Examples of Inversion
First, find the inverse of an order (2, 2) matrix:

$$B = \begin{bmatrix} 2 & 4 \\ 6 & 3 \end{bmatrix}$$

The determinant of B:

$$|B| = 2(3) - 4(6) = -18$$

The matrix of cofactors for B, which we then transpose to get the adjoint:

$$\Theta_B = \begin{bmatrix} 3 & -6 \\ -4 & 2 \end{bmatrix}, \qquad \operatorname{adj}(B) = \Theta_B' = \begin{bmatrix} 3 & -4 \\ -6 & 2 \end{bmatrix}$$

Given the determinant and the adjoint, the inverse of B is:

$$B^{-1} = -\frac{1}{18} \begin{bmatrix} 3 & -4 \\ -6 & 2 \end{bmatrix} = \begin{bmatrix} -\tfrac{3}{18} & \tfrac{4}{18} \\ \tfrac{6}{18} & -\tfrac{2}{18} \end{bmatrix}$$

To make sure, let us check that the product B⁻¹B equals the identity matrix I₂:

$$B^{-1} B = \begin{bmatrix} -\tfrac{3}{18} & \tfrac{4}{18} \\ \tfrac{6}{18} & -\tfrac{2}{18} \end{bmatrix} \begin{bmatrix} 2 & 4 \\ 6 & 3 \end{bmatrix} = \begin{bmatrix} \tfrac{18}{18} & 0 \\ 0 & \tfrac{18}{18} \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = I_2$$

Example:
Here we use the same order (3, 3) matrix B as we used earlier:

$$B = \begin{bmatrix} 1 & 3 & 2 \\ 4 & 5 & 6 \\ 8 & 7 & 9 \end{bmatrix}$$

The cofactor matrix is:

$$\Theta_B = \begin{bmatrix} 3 & 12 & -12 \\ -13 & -7 & 17 \\ 8 & 2 & -7 \end{bmatrix}$$

The determinant of B is |B| = 15. Now, the inverse of B:

$$B^{-1} = \frac{1}{|B|} \Theta_B' = \frac{1}{15} \begin{bmatrix} 3 & -13 & 8 \\ 12 & -7 & 2 \\ -12 & 17 & -7 \end{bmatrix}$$

Again, let us check to make sure this is indeed the inverse of B.

$$B^{-1} B = \frac{1}{|B|} \operatorname{adj}(B) \, B = \frac{1}{15} \begin{bmatrix} 3 & -13 & 8 \\ 12 & -7 & 2 \\ -12 & 17 & -7 \end{bmatrix} \begin{bmatrix} 1 & 3 & 2 \\ 4 & 5 & 6 \\ 8 & 7 & 9 \end{bmatrix} = \frac{1}{15} \begin{bmatrix} 15 & 0 & 0 \\ 0 & 15 & 0 \\ 0 & 0 & 15 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = I_3$$

4 Diagonal Matrices
Let A denote a diagonal matrix of order (k, k). Remember that diagonal matrices must be square, and their inverses are nearly trivial:

$$A = \begin{bmatrix} a_{11} & 0 & 0 & \cdots & 0 \\ 0 & a_{22} & 0 & \cdots & 0 \\ 0 & 0 & a_{33} & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & a_{kk} \end{bmatrix}$$

The inverse of A is the following:

$$A^{-1} = \begin{bmatrix} a_{11}^{-1} & 0 & 0 & \cdots & 0 \\ 0 & a_{22}^{-1} & 0 & \cdots & 0 \\ 0 & 0 & a_{33}^{-1} & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & a_{kk}^{-1} \end{bmatrix}$$

To show this is the case, simply multiply AA-1 and you will get Ik.
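A sketch of this near-trivial inverse, inverting each diagonal entry in place (the helper name is ours, for illustration):

```python
# Invert a diagonal matrix by taking reciprocals along the main diagonal.
def diag_inverse(A):
    n = len(A)
    return [[1 / A[i][i] if i == j else 0 for j in range(n)] for i in range(n)]

A = [[2.0, 0, 0], [0, 4.0, 0], [0, 0, 5.0]]
Ainv = diag_inverse(A)

# Multiplying A by its inverse recovers the identity I3:
product = [[sum(Ainv[i][k] * A[k][j] for k in range(3)) for j in range(3)]
           for i in range(3)]
print(product == [[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]])  # True
```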

5 Conditions for Singularity


In a simple algebraic problem ax = b, one pre-multiplies by a⁻¹ to solve for x. This is the same operation as division.

In scalar algebra, there is one number whose inverse is not defined: 0. That is, the quantity 1/0 is not defined.
Similarly, in matrix algebra there is a set of matrices for which an inverse does not exist. Remember the formula to compute the inverse of matrix A:

$$A^{-1} = \frac{1}{|A|} \operatorname{adj}(A)$$

Question: Under what conditions will this not be defined?
If the determinant of A equals zero, then A⁻¹ is not defined.

Question: For what sorts of matrices is this a problem?

Matrices that have rows or columns that are linearly dependent on other rows or columns have determinants equal to zero. For these matrices, the inverse is undefined.

Consider an order (k, k) matrix A, using column vectors:

$$A = \begin{bmatrix} a_1 & a_2 & \cdots & a_k \end{bmatrix}$$

Each of the vectors a_i is of order (k, 1). A column a_i of A is said to be linearly independent of the others if there exists no set of scalars α_j such that:

$$a_i = \sum_{j \neq i} \alpha_j a_j$$

Thus, given the rest of the columns, if we cannot find a weighted sum to get the column we are interested in, we say that it is linearly independent.

Rank: The rank of a matrix is defined as the number of linearly independent columns (or rows) of a matrix. If all of the columns are independent, the matrix is of full rank. The rank of a matrix is denoted r(A). By definition, r(A) is an integer that can take values from 1 to k.

Here are some examples:

$$\begin{vmatrix} 2 & -4 \\ 3 & -6 \end{vmatrix} = 2(-6) - 3(-4) = -12 + 12 = 0$$

Notice that the second column is −2 times the first column. The rank of this matrix is 1; it is not of full rank. Here is another example:

$$\begin{vmatrix} 2 & -7 \\ 4 & -14 \end{vmatrix} = 2(-14) - 4(-7) = -28 + 28 = 0$$

Notice that the second row is 2 times the first row. Again, the rank of this matrix is 1; it is not of full rank. Here is a final example, for an order (3, 3) matrix:

$$\begin{vmatrix} 1 & 2 & 4 \\ 3 & 0 & 6 \\ 5 & 3 & 13 \end{vmatrix} = 0$$

Notice that the first column times 2 plus the second column equals the third column. The rank of this matrix is 2; it is not of full rank.
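The singularity checks above can be run directly with hard-coded determinant helpers (our own illustrative functions):

```python
# Determinants of 2x2 and 3x3 matrices, expanded by hand.
def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def det3(A):
    (a, b, c), (d, e, f), (g, h, i) = A
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

print(det2([[2, -4], [3, -6]]))    # 0: second column is -2 times the first
print(det2([[2, -7], [4, -14]]))   # 0: second row is 2 times the first
print(det3([[1, 2, 4], [3, 0, 6], [5, 3, 13]]))  # 0: column 3 = 2*(column 1) + column 2
print(det2([[2, 4], [6, 3]]))      # -18: nonsingular, so the inverse exists
```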

Here are a series of important statements about inverses and nomenclature:

▪ A must be square. It is a necessary, but not sufficient, condition that A is square for A⁻¹ to exist. That is, sometimes the inverse of a matrix does not exist. If the inverse of a matrix does not exist, the matrix is called singular.
▪ These statements are equivalent: full rank ⟺ nonsingular ⟺ invertible. Each implies that A⁻¹ exists.
▪ If the determinant of A equals zero, then A is said to be singular, or not invertible. That is, |A| = 0 ⟺ A singular.
▪ If the determinant of A is non-zero, then A is said to be nonsingular, or invertible; that is, the inverse exists. That is, |A| ≠ 0 ⟺ A nonsingular.
▪ If a matrix A is not of full rank, it is not invertible; that is, it is singular.

6 Some Important Properties of Inverses

▪ AA⁻¹ = I, A⁻¹A = I, and A⁻¹ is unique.
▪ A must be square. It is a necessary, but not sufficient, condition that A is square for A⁻¹ to exist. That is, sometimes the inverse of a matrix does not exist; such a matrix is called singular.
▪ (A⁻¹)⁻¹ = A. That is, the inverse of an inverse is the original matrix.
▪ Just as with transposition, (AB)⁻¹ = B⁻¹A⁻¹.
▪ The inverse of the transpose is the transpose of the inverse: (A′)⁻¹ = (A⁻¹)′.

7 Solving Systems of Equations Using Matrices

Matrices are particularly useful when solving systems of equations. In the following example, there are three equations and three unknowns:

$$x + 2y + z = 3$$
$$3x - y - 3z = -1$$
$$2x + 3y + z = 4$$

There are various techniques to solve such a system, including substitution, and multiplying equations by constants and adding them to get single variables to cancel.
The easier way, however, is to use a matrix. Note that this system of equations can be represented as follows:

$$\begin{bmatrix} 1 & 2 & 1 \\ 3 & -1 & -3 \\ 2 & 3 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 3 \\ -1 \\ 4 \end{bmatrix} \qquad \Leftrightarrow \qquad Ax = b$$

This problem Ax = b can be solved by pre-multiplying both sides by A⁻¹ and simplifying. That is:

$$Ax = b \;\rightarrow\; A^{-1} A x = A^{-1} b \;\rightarrow\; x = A^{-1} b$$

So the system of equations can be solved by computing the inverse of A and multiplying it by b:

$$A = \begin{bmatrix} 1 & 2 & 1 \\ 3 & -1 & -3 \\ 2 & 3 & 1 \end{bmatrix}, \qquad A^{-1} = \begin{bmatrix} 8 & 1 & -5 \\ -9 & -1 & 6 \\ 11 & 1 & -7 \end{bmatrix}$$

So,

$$x = A^{-1} b = \begin{bmatrix} 8 & 1 & -5 \\ -9 & -1 & 6 \\ 11 & 1 & -7 \end{bmatrix} \begin{bmatrix} 3 \\ -1 \\ 4 \end{bmatrix} = \begin{bmatrix} 3 \\ -2 \\ 4 \end{bmatrix}$$
Note:
This approach only works, however, if the matrix A is nonsingular. If it is not invertible, then this will not work.

In fact, if a row or a column of the matrix A is a linear combination of the others, there are either no solutions to the system of equations, or many solutions to the system of equations. In either case, the system is said to be under-determined. The determinant of A can be computed to see whether the system is in fact under-determined.
