# Matrix / GNU Octave Basics

## Create a Matrix in GNU Octave

$\small \mathbf{A} = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$

In GNU Octave, a matrix is entered row by row: commas (or spaces) separate the entries within a row, and semicolons separate the rows:

[victoria@victoria ~]$ date
Fri Sep 28 13:08:45 PDT 2018
[victoria@victoria ~]$ octave
GNU Octave, version 4.4.1
...
Octave was configured for "x86_64-pc-linux-gnu".
...

octave:>> a=[1,2;3,4]
a =
1   2
3   4

## Octave Packages

• The Octave-Forge linear-algebra package provides additional linear algebra code, including general SVD and matrix functions.

• The Octave-Forge Symbolic package adds symbolic calculation features to GNU Octave. These include common Computer Algebra System tools such as algebraic operations, calculus, equation solving, Fourier and Laplace transforms, variable precision arithmetic and other features. Internally, the package uses SymPy, but no knowledge of Python is required. Compatibility with other symbolic toolboxes is intended.

• Other packages are available as well. I installed the two above, but I am unsure how useful they will be.

## Determinants

The determinant of a matrix is a special number that can be calculated from a square matrix. The determinant tells us things about the matrix that are useful in systems of linear equations, helps us find the inverse of a matrix, is useful in calculus, and more.

Like the symbol for an absolute value $\small \vert -1 \vert = 1$,  $\small \vert \mathbf{A} \vert$ means “the determinant of matrix $\small \mathbf{A}$.” However, note that in this case, the vertical lines do not mean absolute value: the determinant can be negative.

While the methods for calculating the determinant of $\small [2 \times 2]$ and $\small [3 \times 3]$ matrices are relatively straightforward, approaches for addressing larger matrices through minor and cofactor expansion – while not overly complicated – are omitted here in the interest of brevity (refer instead to Sources [1-5],  below).

#### Examples

$\small [2 \times 2]$ matrix:

$\small \mathbf{A} = \begin{bmatrix} a & b \\\ c & d \end{bmatrix}$

Determinant $\small \vert A \vert = ad - bc$

$\small \mathbf{B} = \begin{bmatrix} 4 & 6 \\\ 3 & 8 \end{bmatrix}$

$\small \vert B \vert = (4 \times 8) - (6 \times 3) = 32 - 18 = 14$
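The $\small [2 \times 2]$ formula is easy to script as a sanity check. A minimal sketch in Python (the helper name `det2` is mine, not part of any library):

```python
def det2(a, b, c, d):
    """Determinant of [[a, b], [c, d]] via the ad - bc formula."""
    return a * d - b * c

# B = [[4, 6], [3, 8]] from the example above:
print(det2(4, 6, 3, 8))  # 14
```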

$\small [3 \times 3]$ matrix:

$\small \mathbf{A} = \begin{bmatrix} a & b & c \\\ d & e & f \\\ g & h & i \end{bmatrix}$

$\small \vert A \vert = a \cdot \begin{vmatrix} e & f \\\ h & i \end{vmatrix} - b \cdot \begin{vmatrix} d & f \\\ g & i \end{vmatrix} + c \cdot \begin{vmatrix} d & e \\\ g & h \end{vmatrix}$

Determinant $\small \vert A \vert = a(ei - fh) - b(di - fg) + c(dh - eg)$

A similar "expansion" holds for larger matrices, with the signs alternating in the order "$\small + \ - \ + \ - \ \ldots$" (the cofactor "sign chart") -- which is why the "$\small b(di - fg)$" term, above, is subtracted.

octave:>> det(a)
ans = -2
octave:>> 
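The cofactor expansion described above can be sketched directly. A hypothetical Python helper (function names are mine), applied to the $\small [3 \times 3]$ matrix used in the eigenvector example later in these notes:

```python
def det2(m):
    """Determinant of a 2x2 matrix given as nested lists."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def det3(m):
    """Determinant of a 3x3 matrix via cofactor expansion along the first
    row, with the alternating + - + signs described above."""
    a, b, c = m[0]
    # Minor: delete row 0 and the given column from the remaining rows.
    minor = lambda col: [[row[j] for j in range(3) if j != col] for row in m[1:]]
    return a * det2(minor(0)) - b * det2(minor(1)) + c * det2(minor(2))

print(det3([[5, 8, 16], [4, 1, 8], [-4, -4, -11]]))  # 9
```

The result 9 agrees with the product of that matrix's eigenvalues, $\small 1 \times (-3) \times (-3) = 9$.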

### Kirchhoff’s Theorem

In the mathematical field of graph theory, Kirchhoff’s theorem or Kirchhoff’s matrix tree theorem (named after Gustav Kirchhoff) is a theorem about the number of spanning trees in a graph, showing that this number can be computed in polynomial time as the determinant of a matrix derived from the graph. It is a generalization of Cayley’s formula which provides the number of spanning trees in a complete graph.

Kirchhoff’s theorem relies on the notion of the Laplacian matrix of a graph that is equal to the difference between the graph’s degree matrix (a diagonal matrix with vertex degrees on the diagonals) and its adjacency matrix (a $\small (0,1)$-matrix with 1’s at places corresponding to entries where the vertices are adjacent and 0’s otherwise).

For a given connected graph $\small \mathcal{G}$ with $\small n$ labeled vertices, let $\small \lambda_1, \lambda_2, \ldots, \lambda_{n-1}$ be the non-zero eigenvalues of its Laplacian matrix. Then the number of spanning trees of $\small \mathcal{G}$ is

$\small t(\mathcal{G}) = \frac{1}{n} \lambda_1 \lambda_2 \cdots \lambda_{n-1}$.

Equivalently the number of spanning trees is equal to any cofactor of the Laplacian matrix of $\small \mathcal{G}$.
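The cofactor form is simple to try on a tiny example. A minimal sketch for the triangle graph $\small K_3$ (all three vertices pairwise adjacent); the function names are mine:

```python
def laplacian(adj):
    """Degree matrix minus adjacency matrix."""
    n = len(adj)
    return [[sum(adj[i]) if i == j else -adj[i][j] for j in range(n)]
            for i in range(n)]

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

adj = [[0, 1, 1],
       [1, 0, 1],
       [1, 1, 0]]
L = laplacian(adj)
# Any cofactor of L counts the spanning trees; delete row 0 and column 0:
minor = [row[1:] for row in L[1:]]
print(det2(minor))  # 3
```

Three spanning trees, matching Cayley's formula $\small n^{n-2} = 3^1 = 3$ for the complete graph on three vertices.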

## Identity Matrix

In linear algebra, the identity matrix, or sometimes ambiguously called a unit matrix, of size $\small n$ is the $\small [n \times n]$ square matrix with ones on the main diagonal and zeros elsewhere. It is denoted by $\small I_n$, or simply by $\small I$ if the size is immaterial or can be trivially determined by the context.

$\small I_{1}={\begin{bmatrix}1\end{bmatrix}},\ \small I_{2}={\begin{bmatrix}1&0\\0&1\end{bmatrix}},\ \small I_{3}={\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}},\ \cdots ,\ \small I_{n}={\begin{bmatrix}1&0&0&\cdots &0\\0&1&0&\cdots &0\\0&0&1&\cdots &0 \\ \vdots &\vdots &\vdots &\ddots &\vdots \\0&0&0&\cdots &1 \end{bmatrix}}$

The product of any matrix and the appropriately sized identity matrix is always the original matrix, regardless of the order in which the multiplication was performed! Multiplying by the identity matrix $\small I$ doesn’t change anything, just like multiplying a number by 1 doesn’t change anything: $\small A \cdot I = I \cdot A = A$.

Restated, when $\small A$ is $\small [m \times n]$, it is a property of matrix multiplication that $\small I_{m}A = AI_{n} = A$. This is illustrated in the Octave session below.

The identity matrix also has the property that, when the product of two square matrices equals the identity matrix, those matrices are said to be inverses of one another.

octave:>> eye
ans =  1
octave:>> eye(1)
ans =  1
octave:>> eye(2)
ans =
Diagonal Matrix
1   0
0   1

octave:>> eye(5)
ans =
Diagonal Matrix
1   0   0   0   0
0   1   0   0   0
0   0   1   0   0
0   0   0   1   0
0   0   0   0   1

octave:>> a
a =
1   2
3   4

octave:>> a * eye(2)
ans =
1   2
3   4

octave:>>

## Inverse Matrix

$\small \mathbf{A} = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$

$\small \mathbf{A^{-1}} = \frac{1}{\det(\mathbf{A})} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$

$\small \mathbf{A^{-1}} = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$
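The adjugate formula above translates directly into code. A minimal sketch in Python (the helper name `inv2` is mine):

```python
def inv2(A):
    """Inverse of a 2x2 matrix [[a, b], [c, d]] via the adjugate formula:
    swap a and d, negate b and c, divide by the determinant."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular; no inverse exists")
    return [[d / det, -b / det], [-c / det, a / det]]

print(inv2([[2, 3], [4, 5]]))  # [[-2.5, 1.5], [2.0, -1.0]]
```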

### Example

$\small \mathbf{A} = \begin{bmatrix} 2 & 3 \\ 4 & 5 \end{bmatrix}$

$\small \mathbf{A^{-1}} = \frac{1}{2(5) - 3(4)} \begin{bmatrix} 5 & -3 \\ -4 & 2 \end{bmatrix} = \frac{1}{-2} \begin{bmatrix} 5 & -3 \\ -4 & 2 \end{bmatrix} = \begin{bmatrix} -2.5 & 1.5 \\ 2 & -1 \end{bmatrix}$

octave:>> a=[2,3;4,5]
a =
2   3
4   5

octave:>> a^-1
ans =
-2.5000   1.5000
2.0000  -1.0000

octave:>>

$\mathbf{A} \cdot \mathbf{A}^{-1} = \mathbf{I}$

#### Example

$\small \mathbf{A} = \begin{bmatrix} 2 & 3 \\ 4 & 5 \end{bmatrix}$

$\small \mathbf{A} \cdot \mathbf{A}^{-1} = \begin{bmatrix} 2 & 3 \\ 4 & 5 \end{bmatrix} \begin{bmatrix} -2.5 & 1.5 \\ 2 & -1 \end{bmatrix}$

$\small \begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} e & f \\ g & h \end{bmatrix} = \begin{bmatrix} ae + bg & af + bh \\ ce + dg & cf + dh \end{bmatrix}$

$\small = \begin{bmatrix} 2(-2.5) + 3(2) & 2(1.5) + 3(-1) \\ 4(-2.5) + 5(2) & 4(1.5) + 5(-1) \end{bmatrix} = \begin{bmatrix} -5 + 6 & 3 - 3 \\ -10 + 10 & 6 - 5 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$

octave:>> a
a =
2   3
4   5

octave:>> a^-1
ans =
-2.5000   1.5000
2.0000  -1.0000

octave:>> a * a^-1
ans =
1   0
0   1

octave:>>

#### Inverse Matrices: More Examples

Not all matrices are invertible! A matrix has an inverse only if it is square and its determinant is non-zero.
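To illustrate with a hypothetical example (helper name mine): when one row is a multiple of another, the determinant is zero and the $\small \frac{1}{ad - bc}$ factor in the inverse formula is undefined.

```python
def det2(A):
    (a, b), (c, d) = A
    return a * d - b * c

# Row 2 is twice row 1, so the rows are linearly dependent:
singular = [[1, 2], [2, 4]]
print(det2(singular))          # 0 -> no inverse exists
print(det2([[2, 3], [4, 5]]))  # -2 -> invertible
```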

## Transpose Matrix

“Rows become columns …”

$\small \mathbf{A} = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$

$\small \mathbf{A}^T = \begin{bmatrix} a & c \\ b & d \end{bmatrix}$

In GNU Octave, $\small \mathbf{A}^T$ is written $\small \mathbf{A}'$, i.e., “A prime.” (Strictly, the ' operator is the complex-conjugate transpose; for real matrices it coincides with the plain transpose, .'.)

octave:>> a
a =
2   3
4   5

octave:>> a'
ans =
2   4
3   5

octave:>>
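For cross-checking outside Octave, the same "rows become columns" operation in Python, using `zip` to pair up the i-th elements of every row:

```python
A = [[2, 3],
     [4, 5]]
# zip(*A) yields the columns of A as tuples; convert them back to lists:
At = [list(col) for col in zip(*A)]
print(At)  # [[2, 4], [3, 5]]
```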

## Matrix Affine Transformations

To represent affine transformations with matrices, we can use homogeneous coordinates. This means representing a 2-vector $\small (x,y)$ as a 3-vector $\small (x,y,1)$, and similarly for higher dimensions. Using this system, translation can be expressed with matrix multiplication. The functional form

$\small x' = x + t_x$;
$\small y' = y + t_y$

becomes:

$\small \begin{bmatrix} x' \\\ y' \\\ 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & t_{x} \\\ 0 & 1 & t_{y} \\\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\\ y \\\ 1 \end{bmatrix}$.
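The translation matrix above can be applied mechanically. A minimal sketch (the function name `translate` is mine), assuming a point $\small (3, 2)$ translated by $\small (t_x, t_y) = (10, -5)$:

```python
def translate(x, y, tx, ty):
    """Apply the 3x3 homogeneous translation matrix to the point (x, y, 1)."""
    M = [[1, 0, tx],
         [0, 1, ty],
         [0, 0, 1]]
    v = [x, y, 1]
    xp, yp, w = [sum(M[i][k] * v[k] for k in range(3)) for i in range(3)]
    return xp, yp  # w stays 1 for affine transformations

print(translate(3, 2, 10, -5))  # (13, -3)
```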

All ordinary linear transformations are included in the set of affine transformations, and can be described as a simplified form of affine transformations. Therefore, any linear transformation can also be represented by a general transformation matrix. The latter is obtained by expanding the corresponding linear transformation matrix by one row and column, filling the extra space with zeros except for the lower-right corner, which must be set to $\small 1$. For example, the $\small [2 \times 2]$ counter-clockwise rotation matrix becomes:

$\small \begin{bmatrix} \cos \theta & -\sin \theta & 0 \\\ \sin \theta & \cos \theta & 0 \\\ 0 & 0 & 1 \end{bmatrix}$

Using transformation matrices containing homogeneous coordinates, translations become linear transformations, and thus can be seamlessly intermixed with all other types of transformations. The reason is that the real plane is mapped to the $\small w = 1$ plane in real projective space, and so translation in real Euclidean space can be represented as a shear in real projective space. Although a translation is a non-linear transformation in a 2-D or 3-D Euclidean space described by Cartesian coordinates (i.e. it can’t be combined with other transformations while preserving commutativity and other properties), it becomes, in a 3-D or 4-D projective space described by homogeneous coordinates, a simple linear transformation (a shear).

More affine transformations can be obtained by composition of two or more affine transformations.

#### Example: 2-D Transformation

When a transformation takes place on a 2-D plane, it is called a 2-D transformation.

We can represent a point, $\small \mathbf{p} = (x,y)$, in the plane as a row vector $\small \begin{bmatrix} x & y \end{bmatrix}$ or as a column vector $\small \begin{bmatrix} x \\ y \end{bmatrix}$.

We can represent a 2-D transformation $\small \mathbf{M}$ by a matrix $\small \mathbf{M} = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$.

If $\small \mathbf{p}$ is a column vector ($\small [2 \times 1]$), then we need to place $\small \mathbf{M}$ on the left:

Dimensionally, $\small [\color{Blue}2 \times \color{Magenta}2] [\color{Magenta}2 \times \color{Blue}1] = [\color{Blue}2 \times \color{Blue}1]$; we cannot multiply $\small [2 \times 1] [2 \times 1]$, due to the mismatched matrix dimensions.

$\small \begin{bmatrix} x' \\\ y' \end{bmatrix} = \begin{bmatrix} a & b \\\ c & d \end{bmatrix} \begin{bmatrix} x \\\ y \end{bmatrix} = \begin{bmatrix} ax + by \\\ cx + dy \end{bmatrix}$

$\small \begin{bmatrix} x' \\\ y' \end{bmatrix} = \begin{bmatrix} 1 & 2 \\\ 3 & 4 \end{bmatrix} \begin{bmatrix} 3 \\\ 2 \end{bmatrix} = \begin{bmatrix} 1(3) + 2(2) \\\ 3(3) + 4(2) \end{bmatrix} = \begin{bmatrix} 7 \\\ 17 \end{bmatrix}$

If $\small \mathbf{p}$ is a row vector ($\small [1 \times 2]$), then we need to place $\small \mathbf{M}^T$ on the right ($\small [1 \times 2][2 \times 2]^\color{Magenta}T$):

$\small \mathbf{M} = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$,

$\small \mathbf{M}^T = \begin{bmatrix} a & c \\ b & d \end{bmatrix}$.

$\small \begin{bmatrix} x' & y' \end{bmatrix} = \begin{bmatrix} x & y \end{bmatrix} \begin{bmatrix} a & c \\\ b & d \end{bmatrix} = \begin{bmatrix} xa + yb & xc + yd \end{bmatrix}$

$\small \begin{bmatrix} x' & y' \end{bmatrix} = \begin{bmatrix} 3 & 2 \end{bmatrix} \begin{bmatrix} 1 & 3 \\\ 2 & 4 \end{bmatrix}$ $\small = \begin{bmatrix} 3(1) + 2(2) & 3(3) + 2(4) \end{bmatrix} = \begin{bmatrix} 7 & 17 \end{bmatrix}$

which is basically the same result as above (same point, same transformation!).

[ If we had not transposed $\small \mathbf{M}$, we would have obtained $\small \begin{bmatrix} 9 & 14 \end{bmatrix}$. ]
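Both orderings, and the pitfall of forgetting the transpose, can be checked in a few lines of Python (the helper name `row_times` is mine):

```python
M = [[1, 2], [3, 4]]
Mt = [[1, 3], [2, 4]]  # M transposed
p = [3, 2]             # the point as a row vector

def row_times(v, A):
    """Row vector v times 2x2 matrix A."""
    return [sum(v[k] * A[k][j] for k in range(2)) for j in range(2)]

print(row_times(p, Mt))  # [7, 17] -- matches M times the column vector [3; 2]
print(row_times(p, M))   # [9, 14] -- the wrong answer if we skip the transpose
```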

octave:>> p=[3,2]
p =
3   2

octave:>> q=[3;2]
q =
3
2

octave:>> M=[1,2;3,4]
M =
1   2
3   4

octave:>> M'
ans =
1   3
2   4

octave:>> p*M'
ans =
7   17

octave:>> M*q
ans =
7
17

octave:>>

## Reduced Row Echelon

• I previously used this approach with graph Laplacians.
• The reduced row echelon is also used to determine the rank of a matrix (further below).

The reduced row echelon form, available in GNU Octave as the rref function, greatly simplifies matrix calculations.

As can be seen from that discussion (which determines the eigenvalues and eigenvectors from the graph Laplacian matrix, derived from a graph adjacency matrix), rather than determining the eigenvectors for eigenvalues $\small \lambda_2 = -3$ and $\small \lambda_3 = -3$ from

$\small \begin{bmatrix} 8 & 8 & 16 \\ 4 & 4 & 8 \\ -4 & -4 & -8 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$

we can much more tractably work with the reduced row echelon version:

$\small \begin{bmatrix} 1 & 1 & 2 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$

which, remarkably, preserves the solution set (the null space) of the source system!

$\small v_1 + v_2 + 2v_3 = 0$; let $\small v_2 = t$ and $\small v_3 = s$
$\small v_1 + t + 2s = 0$
$\small v_1 = -2s - t$

Therefore,

$\small \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} = \begin{bmatrix} -2s - t \\ t \\ s \end{bmatrix} = \begin{bmatrix} -2 \\ 0 \\ 1 \end{bmatrix} \cdot s + \begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix} \cdot t$
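The two basis vectors read off from the reduced row echelon form can be verified as genuine eigenvectors of the original matrix, since $\small \mathbf{A}v = \lambda v$ must hold with $\small \lambda = -3$. A quick cross-check in Python (the helper name `matvec` is mine):

```python
A = [[5, 8, 16],
     [4, 1, 8],
     [-4, -4, -11]]

def matvec(M, v):
    """3x3 matrix times 3-vector."""
    return [sum(M[i][k] * v[k] for k in range(3)) for i in range(3)]

# Check A v == -3 v for both basis vectors from the rref parametrization:
for v in ([-2, 0, 1], [-1, 1, 0]):
    print(matvec(A, v), [-3 * x for x in v])  # the two lists match
```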

octave:>> e
e =                           %% THIS WAS THE SOURCE MATRIX
5    8   16
4    1    8
-4   -4  -11

octave:>> ## CALCULATE (e - lambda*I) for lambda = -3:
octave:>> eye(3)
ans =
Diagonal Matrix
1   0   0
0   1   0
0   0   1

octave:>> g = e-(-3*eye(3))   %% THIS IS (e - lambda*I)
g =                           %% FOR EIGENVALUE (LAMBDA) = -3
8    8   16
4    4    8
-4   -4   -8

octave:>> ## CALCULATE REDUCED ROW ECHELON:
octave:>> h = rref(g)
h =
1   1   2
0   0   0
0   0   0

octave:>>

So … the set of non-normalized eigenvectors for

$\small \mathbf{A} = \begin{bmatrix} 5 & 8 & 16 \\\ 4 & 1 & 8 \\\ -4 & -4 & -11 \end{bmatrix}$ with eigenvalues $\small (1, -3, -3)$ is $\small \begin{bmatrix} -2 & -2 & -1 \\\ -1 & 0 & 1 \\\ 1 & 1 & 0 \end{bmatrix}$.

Normalized eigenvectors:

octave:>> a=[5,8,16;4,1,8;-4,-4,-11]
a =
5    8   16
4    1    8
-4   -4  -11

octave:>> [evec,eval] = eig(a)
evec =
0.81650 + 0.00000i   0.14037 - 0.55581i   0.14037 + 0.55581i
0.40825 + 0.00000i  -0.71521 + 0.00000i  -0.71521 - 0.00000i
-0.40825 + 0.00000i   0.28742 + 0.27790i   0.28742 - 0.27790i

eval =
Diagonal Matrix
   1.00000 + 0.00000i                    0                    0
                    0  -3.00000 + 0.00000i                    0
                    0                    0  -3.00000 - 0.00000i

octave:>> 

## Matrix Rank

This website explains it well! They describe a method for finding the rank of any matrix, assuming familiarity with echelon matrices and echelon transformations. [See also my Reduced Row Echelon notes, above! ]

The maximum number of linearly independent vectors in a matrix
is equal to the number of non-zero rows in its row echelon matrix.

Therefore, to find the rank of a matrix, we simply transform the matrix to its row echelon form and count the number of non-zero rows.

Consider matrix $\small \mathbf{A}$ and its row echelon matrix, $\small \mathbf{A}_{ref}$. First, find the row echelon form for matrix $\small \mathbf{A}$:

$\large {\begin{bmatrix} 0 & 1 & 2 \\ 1 & 2 & 1 \\ 2 & 7 & 8 \end{bmatrix} \atop \mathbf{A}} {\begin{matrix} \ \\ \Rightarrow \\ \ \end{matrix} \atop {\ }} {\begin{bmatrix} 1 & 2 & 1 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{bmatrix} \atop \mathbf{A}_{ref}}$

Because the row echelon form $\small \mathbf{A}_{ref}$ has two non-zero rows, we know that matrix $\small \mathbf{A}$ has two independent row vectors; and we know that the rank of matrix $\small \mathbf{A}$ is 2.

You can verify that this is correct. Row 1 and Row 2 of $\small \mathbf{A}$ are linearly independent. However, Row 3 is a linear combination of Rows 1 and 2. Specifically, Row 3 = 3( Row 1 ) + 2( Row 2). Therefore, matrix $\small \mathbf{A}$ has only two independent row vectors.
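The "count the non-zero rows of the row echelon form" recipe can be sketched end to end. A minimal, unoptimized Gauss-Jordan implementation in Python (function names are mine; Octave's built-in rref and rank are the practical tools):

```python
def rref(M):
    """Reduce M to reduced row echelon form via Gauss-Jordan elimination."""
    M = [[float(x) for x in row] for row in M]
    rows, cols, pivot_row = len(M), len(M[0]), 0
    for col in range(cols):
        # Find a row with a non-zero entry in this column, at or below pivot_row.
        pivot = next((r for r in range(pivot_row, rows)
                      if abs(M[r][col]) > 1e-12), None)
        if pivot is None:
            continue
        M[pivot_row], M[pivot] = M[pivot], M[pivot_row]
        M[pivot_row] = [x / M[pivot_row][col] for x in M[pivot_row]]
        # Zero out this column in every other row.
        for r in range(rows):
            if r != pivot_row and abs(M[r][col]) > 1e-12:
                M[r] = [a - M[r][col] * b for a, b in zip(M[r], M[pivot_row])]
        pivot_row += 1
    return M

def rank(M):
    """Rank = number of non-zero rows in the reduced row echelon form."""
    return sum(any(abs(x) > 1e-9 for x in row) for row in rref(M))

print(rank([[0, 1, 2], [1, 2, 1], [2, 7, 8]]))  # 2
```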

### Full Rank Matrices

When all of the vectors in a matrix are linearly independent, the matrix is said to be full rank. Consider the matrices $\small \mathbf{A}$ and $\small \mathbf{B}$ below.

$\small {\mathbf{A} = \begin{bmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \end{bmatrix}}$       $\small {\mathbf{B} = \begin{bmatrix} 1 & 0 & 2 \\ 2 & 1 & 0 \\ 3 & 2 & 1 \end{bmatrix}}$

Notice that row 2 of matrix $\small \mathbf{A}$ is a scalar multiple of row 1; that is, row 2 is equal to twice row 1. Therefore, rows 1 and 2 are linearly dependent. Matrix $\small \mathbf{A}$ has only one linearly independent row, so its rank is 1. Hence, matrix $\small \mathbf{A}$ is not full rank.

Now, look at matrix $\small \mathbf{B}$. All of its rows are linearly independent, so the rank of matrix $\small \mathbf{B}$ is 3. Matrix $\small \mathbf{B}$ is full rank.

octave:>> a=[1,2,3;4,5,6;7,8,9]
a =
1   2   3
4   5   6
7   8   9

octave:>> rank(a)
ans =  2
octave:>> rref(a)
ans =
1.00000   0.00000  -1.00000
0.00000   1.00000   2.00000
0.00000   0.00000   0.00000

octave:>> b=[0,1,2;1,2,1;2,7,8]
b =
0   1   2
1   2   1
2   7   8

octave:>> rref(b)
ans =
1   0  -3
0   1   2
0   0   0

octave:>> rank(b)
ans =  2
octave:>> c=[1,0,2;2,1,0;3,2,1]
c =
1   0   2
2   1   0
3   2   1

octave:>> rref(c)
ans =
1   0   0
0   1   0
0   0   1

octave:>> rank(c)
ans =  3
octave:>>