Matrix

Properties
Matrix Multiplication
Determinant
Inverse Matrix

Properties

A matrix is composed of several rows and columns of numbers:

    \mathbf{a} = [a_{ij}] = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}    [1]

As shown in [1], an m x n matrix has m rows and n columns. The elements of the matrix are denoted by a_ij, where i = the row number and j = the column number. In biomechanics and motion analysis, 3 x 3 and 3 x 1 matrices are the most commonly used types.

A matrix with m = n is called a square matrix. A diagonal matrix is a square matrix in which all elements except the diagonal ones (i = j) are zero:

    \mathbf{a} = \begin{bmatrix} a_{11} & 0 & \cdots & 0 \\ 0 & a_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{bmatrix}    [2]

Matrices of the same dimension can be added or subtracted, element by element:

    \mathbf{a} \pm \mathbf{b} = [a_{ij}] \pm [b_{ij}] = [a_{ij} \pm b_{ij}], \qquad \mathbf{a} + \mathbf{b} = \mathbf{b} + \mathbf{a}    [3]

As shown in [3], matrix addition is commutative.

When a matrix is multiplied by a scalar, all the elements of the matrix are multiplied by the scalar value:

    c\,\mathbf{a} = c\,[a_{ij}] = [c\,a_{ij}] = \mathbf{a}\,c    [4]

This operation is commutative as shown in [4].

The transpose of an m x n matrix is an n x m matrix whose columns are identical to the corresponding rows of the original matrix:

    \mathbf{a}^t = [a_{ij}]^t = [a_{ji}]    [5]

Note in [5] that a superscripted t, ( )^t, is used as the symbol for the transpose. The transpose satisfies the following:

    (\mathbf{a}^t)^t = \mathbf{a}, \qquad (\mathbf{a} + \mathbf{b})^t = \mathbf{a}^t + \mathbf{b}^t, \qquad (c\,\mathbf{a})^t = c\,\mathbf{a}^t    [6]
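
The properties in [3]-[6] can also be checked numerically. Below is a minimal sketch in Python with NumPy; the code and the numbers in it are illustrative additions, not part of the original text:

    import numpy as np

    a = np.array([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0]])   # a 2 x 3 matrix
    b = np.array([[7.0, 8.0, 9.0],
                  [1.0, 2.0, 3.0]])   # another 2 x 3 matrix

    # [3]: same-dimension matrices add element by element, commutatively
    assert np.array_equal(a + b, b + a)

    # [4]: a scalar multiplies every element; the operation is commutative
    assert np.array_equal(2.0 * a, a * 2.0)

    # [5], [6]: the transpose of a 2 x 3 matrix is 3 x 2, and (a + b)^t = a^t + b^t
    assert a.T.shape == (3, 2)
    assert np.array_equal((a + b).T, a.T + b.T)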


Matrix Multiplication

One of the most useful matrix operations is matrix multiplication. Consider a system of linear equations:

    \begin{aligned} a_{11}x + a_{12}y + a_{13}z &= c_1 \\ a_{21}x + a_{22}y + a_{23}z &= c_2 \\ a_{31}x + a_{32}y + a_{33}z &= c_3 \end{aligned}    [7]

where a's & c's = scalars, and x, y & z = the unknowns. [7] can be simplified as:

    \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix}    [8]

[8] expresses [7] in matrix multiplication form. The general form of matrix multiplication is

    \mathbf{c}_{(m \times p)} = \mathbf{a}_{(m \times n)}\,\mathbf{b}_{(n \times p)}    [9]

or

    c_{ij} = \sum_{k=1}^{n} a_{ik}\,b_{kj} \qquad (i = 1, \ldots, m;\; j = 1, \ldots, p)    [10]

Note that the number of columns in the left matrix (n in [9]) must be the same as the number of rows in the right matrix. The dimension of the resulting matrix is m (rows of the left matrix) x p (columns of the right matrix), as shown in [9].
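
A quick shape check of this dimension rule, sketched in Python with NumPy (illustrative only):

    import numpy as np

    a = np.ones((2, 3))        # m x n: 2 rows, 3 columns
    b = np.ones((3, 4))        # n x p: 3 rows, 4 columns
    c = a @ b                  # columns of a (3) match rows of b (3)
    assert c.shape == (2, 4)   # result is m x p

    # Reversing the order violates the dimension rule
    # (4 columns of b vs. 2 rows of a): b @ a raises a ValueError.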

Matrix multiplication is not commutative:

    \mathbf{a}\,\mathbf{b} \neq \mathbf{b}\,\mathbf{a}    [11]

but is distributive:

    \mathbf{a}\,(\mathbf{b} + \mathbf{c}) = \mathbf{a}\,\mathbf{b} + \mathbf{a}\,\mathbf{c}    [12]

Similarly:

    (\mathbf{a} + \mathbf{b})\,\mathbf{c} = \mathbf{a}\,\mathbf{c} + \mathbf{b}\,\mathbf{c}    [13]

Matrix multiplication is also associative:

    (\mathbf{a}\,\mathbf{b})\,\mathbf{c} = \mathbf{a}\,(\mathbf{b}\,\mathbf{c})    [14]

For any scalar d:

    d\,(\mathbf{a}\,\mathbf{b}) = (d\,\mathbf{a})\,\mathbf{b} = \mathbf{a}\,(d\,\mathbf{b})    [15]
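
Rules [11]-[15] are easy to verify numerically; the following sketch uses random 3 x 3 matrices (an illustrative addition, assuming NumPy):

    import numpy as np

    rng = np.random.default_rng(0)
    a, b, c = (rng.random((3, 3)) for _ in range(3))
    d = 2.5

    assert not np.allclose(a @ b, b @ a)            # [11]: not commutative
    assert np.allclose(a @ (b + c), a @ b + a @ c)  # [12]: distributive
    assert np.allclose((a + b) @ c, a @ c + b @ c)  # [13]
    assert np.allclose((a @ b) @ c, a @ (b @ c))    # [14]: associative
    assert np.allclose(d * (a @ b), (d * a) @ b)    # [15]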

An identity matrix (I) is a square matrix whose diagonal elements are all 1 while the off-diagonal elements are all 0:

    \mathbf{I} = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}    [16]

Then:

    \mathbf{I}\,\mathbf{a} = \mathbf{a}\,\mathbf{I} = \mathbf{a}, \qquad \mathbf{I} = [\delta_{ij}]    [17]

where δ_ij shown in [17] is the Kronecker delta:

    \delta_{ij} = \begin{cases} 1, & i = j \\ 0, & i \neq j \end{cases}    [18]

From [5] and [9]:

    (\mathbf{a}\,\mathbf{b})^t = \mathbf{b}^t\,\mathbf{a}^t    [19]
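
The identity behavior in [17] and the transpose rule in [19], again as an illustrative NumPy sketch:

    import numpy as np

    a = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
    b = np.array([[0.0, 1.0],
                  [2.0, 3.0]])
    i = np.eye(2)                      # [16]: 2 x 2 identity matrix

    assert np.array_equal(i @ a, a)    # [17]: Ia = a
    assert np.array_equal(a @ i, a)    #       aI = a
    assert np.array_equal((a @ b).T, b.T @ a.T)   # [19]: (ab)^t = b^t a^t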


Determinant

The determinant of a 2 x 2 matrix is defined as:

    \det(\mathbf{a}) = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{11}a_{22} - a_{12}a_{21}    [20]

The determinant of a 3 x 3 matrix can be reduced to a series of the determinants of 2 x 2 matrices:

    \det(\mathbf{a}) = a_{11}\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} - a_{12}\begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} + a_{13}\begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix}    [21]

To generalize [21] for any square matrix, a new matrix needs to be defined:

    \mathbf{a}_{ij} \equiv \mathbf{a} \text{ with the } i\text{-th row and the } j\text{-th column deleted}    [22]

In other words, matrix a_ij is matrix a less the i-th row and the j-th column. [21] can then be generalized to

    \det(\mathbf{a}) = \sum_{j=1}^{n} (-1)^{1+j}\,a_{1j}\,\det(\mathbf{a}_{1j})    [23]

[23] is called a cofactor expansion across the first row of a. In fact, the determinant of a can be cofactor-expanded across any row or column:

    \det(\mathbf{a}) = \sum_{j=1}^{n} (-1)^{i+j}\,a_{ij}\,\det(\mathbf{a}_{ij}) = \sum_{i=1}^{n} (-1)^{i+j}\,a_{ij}\,\det(\mathbf{a}_{ij})    [24]
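
The cofactor expansion in [22]-[24] translates directly into a short recursive routine. The sketch below (a hypothetical Python/NumPy helper, not part of the original text) mirrors the first-row expansion of [23]; its cost grows factorially, so it illustrates the formula rather than replacing a library routine such as np.linalg.det:

    import numpy as np

    def det_cofactor(a):
        """Determinant by cofactor expansion across the first row ([23])."""
        n = a.shape[0]
        if n == 1:
            return a[0, 0]
        total = 0.0
        for j in range(n):
            # minor: matrix a less row 1 and column j+1, i.e. a_1j in [22]
            minor = np.delete(np.delete(a, 0, axis=0), j, axis=1)
            total += (-1) ** j * a[0, j] * det_cofactor(minor)
        return total

    a = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    assert np.isclose(det_cofactor(a), np.linalg.det(a))   # both give 8.0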

The following properties of the determinant hold:

    \det(\mathbf{a}^t) = \det(\mathbf{a}), \qquad \det(\mathbf{a}\,\mathbf{b}) = \det(\mathbf{a})\,\det(\mathbf{b})    [25]
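
Both identities in [25] can be confirmed numerically (illustrative NumPy sketch):

    import numpy as np

    rng = np.random.default_rng(1)
    a, b = rng.random((3, 3)), rng.random((3, 3))

    assert np.isclose(np.linalg.det(a.T), np.linalg.det(a))  # det(a^t) = det(a)
    assert np.isclose(np.linalg.det(a @ b),
                      np.linalg.det(a) * np.linalg.det(b))   # det(ab) = det(a) det(b)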


Inverse Matrix

The inverse of a square matrix satisfies the following relationship:

    \mathbf{a}\,\mathbf{a}^{-1} = \mathbf{a}^{-1}\,\mathbf{a} = \mathbf{I}    [26]

where a^-1 = the inverse matrix of a, and I = the identity matrix. For a square matrix to be invertible, its determinant must not be 0. The inverse of square matrix a can be expressed as

    \mathbf{a}^{-1} = \frac{1}{\det(\mathbf{a})}\,\operatorname{adj}(\mathbf{a}) = \frac{1}{\det(\mathbf{a})} \left[ (-1)^{i+j}\,\det(\mathbf{a}_{ji}) \right]    [27]

The matrix of cofactors of a on the right-hand side of [27] is called the adjugate of a. Note in [27] that element (i, j) of the adjugate is associated with det(a_ji) rather than det(a_ij).
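
Formula [27] can be coded directly. The sketch below (a hypothetical Python/NumPy helper, added for illustration) builds the adjugate element by element, using the (j, i) minor for element (i, j) exactly as noted above; in practice np.linalg.inv is the better choice:

    import numpy as np

    def inv_adjugate(a):
        """Inverse via the adjugate, following [27]. Assumes det(a) != 0."""
        n = a.shape[0]
        adj = np.empty((n, n))
        for i in range(n):
            for j in range(n):
                # element (i, j) of the adjugate uses the (j, i) minor
                minor = np.delete(np.delete(a, j, axis=0), i, axis=1)
                adj[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
        return adj / np.linalg.det(a)

    a = np.array([[4.0, 7.0],
                  [2.0, 6.0]])
    assert np.allclose(inv_adjugate(a) @ a, np.eye(2))   # [26] holds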

A system of linear equations such as that in [8] can be generalized as

    \mathbf{a}\,\mathbf{x} = \mathbf{b}    [28]

where a = the known square coefficient matrix, b = the known column matrix, and x = the unknown parameter matrix. If matrix a is invertible, the unknown parameters can be obtained as follows:

    \mathbf{x} = \mathbf{a}^{-1}\,\mathbf{b}    [29]
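
A small worked example of [28] and [29] with hypothetical numbers (NumPy sketch; np.linalg.solve is numerically preferable to forming the inverse explicitly):

    import numpy as np

    a = np.array([[3.0, 1.0],
                  [1.0, 2.0]])       # known square coefficient matrix
    b = np.array([9.0, 8.0])         # known column matrix

    x = np.linalg.inv(a) @ b         # [29]: x = a^-1 b, giving x = (2, 3)
    assert np.allclose(a @ x, b)     # check that [28] holds

    # Equivalent and more stable in practice:
    assert np.allclose(np.linalg.solve(a, b), x)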

In motion analysis, matrix a in [28] is generally not square. If a is not square, the system of linear equations can instead be solved as

    \mathbf{x} = (\mathbf{a}^t\,\mathbf{a})^{-1}\,\mathbf{a}^t\,\mathbf{b}    [30]

[30] is the so-called least-squares solution. See Least Square Method for more details.
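
A minimal sketch of [30] for an overdetermined system (more equations than unknowns), with hypothetical data; NumPy's np.linalg.lstsq solves the same problem more robustly:

    import numpy as np

    # Overdetermined system: 4 equations, 2 unknowns (a is not square)
    a = np.array([[1.0, 1.0],
                  [1.0, 2.0],
                  [1.0, 3.0],
                  [1.0, 4.0]])
    b = np.array([6.0, 5.0, 7.0, 10.0])

    # [30]: x = (a^t a)^-1 a^t b, the least-squares solution
    x = np.linalg.inv(a.T @ a) @ a.T @ b

    x_ref, *_ = np.linalg.lstsq(a, b, rcond=None)
    assert np.allclose(x, x_ref)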


 

© Young-Hoo Kwon, 1998-