
13 Answers
13.1 Introduction
Graphical solution
Exercise 1.
Graphical solution:
Algebraic solution: from the first equation we derive \(y=x\). Substitution in the second equation yields \(3x=6\), or \(x=2\). Since \(y=x\), also \(y=2\).
Exercise 2.
Graphical solution:

Algebraic solution: from the first equation we get \(y=2-x\). Substitution in the second equation yields \(3x-4=0\), or \(x=\tfrac{4}{3}\). Since \(y=2-x\), \(y=\tfrac{2}{3}\).
Exercise 3.
Graphical solution:

Algebraic solution: The solution set is the empty set (there is no solution).
Exercise 4.
Subtract the third equation from the second and subtract the result (\(u + v + w = 1\)) from the first equation. This leaves \(0 = 1\). Hence, the system of equations is inconsistent (it has no solution for \(u\), \(v\), and \(w\)).
Exercise 5.
The graphical solution yields two lines on top of each other:

The solution set corresponds to the entire line, in set notation: \(\{(x,y) | x \in \mathbb{R} \textrm{ and } y=2-x \}\). The second equation is equivalent to the first equation with both sides of the equation multiplied by \(-2\). Therefore the second equation does not add extra information to specify \(x\) and \(y\), and we can fully specify the solution set using the first equation only.
13.2 Gaussian elimination
Gaussian elimination
Exercise 6.
\[ \begin{array}{cc|c} 1 & 1 & 0 \\ 3 & -5 & 0 \end{array} \;\yields\; \begin{array}{cc|c} 1 & 1 & 0 \\ 0 & -8 & 0 \end{array} \] Back-substitution yields \(y=0\) and \(x=0\).
Exercise 7.
\[ \begin{array}{rr|r} 2 & 1 & 5 \\ 4 & 2 & 10 \end{array} \;\yields\; \begin{array}{rr|r} 2 & 1 & 5 \\ 0 & 0 & 0 \end{array} \] The second equation is equivalent to the first equation (both sides times 2), hence it yields no extra information. We can define the solution set as \[ \{(x,y)|x \in \mathbb{R} \textrm{ and } y = 5 - 2x \} \]
Exercise 8.
\[ \begin{array}{rrr|r} 1 & -2 & 1 & 5 \\ 3 & 2 & -1 & 12 \\ -1 & 2 & 1 & 10 \end{array} \;\yields\; \begin{array}{rrr|r} 1 & -2 & 1 & 5 \\ 0 & 8 & -4 & -3 \\ 0 & 0 & 2 & 15 \end{array} \] From the last equation we obtain \(z = 7 \tfrac{1}{2}\). Substitution in the second equation gives \(8y - 30 = -3\), or \(y = \tfrac{27}{8}\). Substitution in the first equation gives \(x - \tfrac{27}{4} + \tfrac{15}{2} = 5\), or \(x = \tfrac{20}{4} + \tfrac{27}{4} - \tfrac{30}{4} = \tfrac{17}{4}\).
Exercise 9.
\[ \begin{array}{rrr|r} 2 & 3 & 1 & 8 \\ 4 & 7 & -5 & 20 \\ 0 & -2 & 2 & 0 \end{array} \;\yields\; \begin{array}{rrr|r} 2 & 3 & 1 & 8 \\ 0 & 1 & -7 & 4 \\ 0 & -2 & 2 & 0 \end{array} \;\yields\; \begin{array}{rrr|r} 2 & 3 & 1 & 8 \\ 0 & 1 & -7 & 4 \\ 0 & 0 & -12 & 8 \end{array} \] or, written as a system of equations again (pivots underlined) \[ \begin{align*} \underline{2x} + 3y + z & = 8 \\ \underline{y} - 7z & = 4 \\ \underline{-12 z} & = 8 \end{align*} \] Back substitution in the third equation yields \(z = \frac{-8}{12} = \frac{-2}{3}\). Substitution in the second equation yields \(y = 4 + 7 \cdot \frac{-2}{3} = \frac{12}{3} - \frac{14}{3} = \frac{-2}{3}\). Substitution in the first equation yields \(2x= \frac{24}{3} + \frac{6}{3} + \frac{2}{3} = \frac{32}{3}\), or \(x = \frac{16}{3}\). Check by substituting in the original equations.
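The check can also be done numerically, for example with base R's built-in solver:

```r
A <- matrix(c(2, 3, 1,
              4, 7, -5,
              0, -2, 2), nrow = 3, byrow = TRUE)
b <- c(8, 20, 0)
solve(A, b)   # (16/3, -2/3, -2/3), matching the back substitution above
```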
Exercise 10.
\[ \begin{array}{rrr|r} 2 & 3 & 0 & 0 \\ 4 & 5 & 1 & 3 \\ 2 & -1 & -3 & 5 \end{array} \;\yields\; \begin{array}{rrr|r} 2 & 3 & 0 & 0 \\ 0 & -1 & 1 & 3 \\ 0 & -4 & -3 & 5 \end{array} \;\yields\; \begin{array}{rrr|r} \underline{2} & 3 & 0 & 0 \\ 0 & \underline{-1} & 1 & 3 \\ 0 & 0 & \underline{-7} & -7 \end{array} \] Pivots are underlined. The single solution is \(w = 1\), \(v = -2\) and \(u = 3\).
Exercise 11.
\[ \begin{split} \begin{array}{rrrr|r} 2 & -1 & 0 & 0 & 0 \\ -1 & 2 & -1 & 0 & 0 \\ 0 & -1 & 2 & -1 & 0 \\ 0 & 0 & -1 & 2 & 5 \end{array} \;\yields\; \begin{array}{rrrr|r} 2 & -1 & 0 & 0 & 0 \\ 0 & \tfrac{3}{2} & -1 & 0 & 0 \\ 0 & -1 & 2 & -1 & 0 \\ 0 & 0 & -1 & 2 & 5 \end{array} \;\yields\; \begin{array}{rrrr|r} 2 & -1 & 0 & 0 & 0 \\ 0 & \tfrac{3}{2} & -1 & 0 & 0 \\ 0 & 0 & \tfrac{4}{3} & -1 & 0 \\ 0 & 0 & -1 & 2 & 5 \end{array} \;\yields\; \\ \\ \begin{array}{rrrr|r} \underline{2} & -1 & 0 & 0 & 0 \\ 0 & \underline{\tfrac{3}{2}} & -1 & 0 & 0 \\ 0 & 0 & \underline{\tfrac{4}{3}} & -1 & 0 \\ 0 & 0 & 0 & \underline{\tfrac{5}{4}} & 5 \end{array} \end{split} \] This yields the single solution \(z = 4\), \(w = 3\), \(v = 2\) and \(u = 1\).
Exercise 12.
The intersection of the three hyperplanes (three-dimensional structures in four-dimensional space) is a line in four-dimensional space, but only if after Gaussian elimination we are still left with 3 equations. The original set of equations is then called an independent set of equations. \[ \begin{array}{rrrr|r} 1 & 1 & 1 & 1 & 6 \\ 1 & 0 & 1 & 1 & 4 \\ 1 & 0 & 1 & 0 & 2 \end{array} \;\yields\; \begin{array}{rrrr|r} 1 & 1 & 1 & 1 & 6 \\ 0 & -1 & 0 & 0 & -2 \\ 0 & -1 & 0 & -1 & -4 \end{array} \;\yields\; \begin{array}{rrrr|r} 1 & 1 & 1 & 1 & 6 \\ 0 & -1 & 0 & 0 & -2 \\ 0 & 0 & 0 & -1 & -2 \end{array} \] which corresponds to the set of 3 equations that defines a line in four-dimensional space \[ \begin{align*} u + v + w + z & = 6 \\ -v & = -2 \\ -z & = -2 \end{align*} \] The fourth hyperplane (\(u=-1\)) intersects the line in a point (the point \((u,v,w,z) = (-1,2,3,2)\)).
Exercise 13.
a. \[ \begin{align*} u + \ v + \ w & = 2 \\ 2 v + 2 w & = -2 \\ 2 w & = 2 \end{align*} \]
b. \(u=3\), \(v=-2\), \(w=1\)
c. No constraints necessary. This set of equations will always yield a single solution in \(\mathbb{R}\) for \(u\), \(v\), and \(w\), and, hence always a single solution for \(x=e^{u}\), \(y=e^{v}\), and \(z=e^{w}\).
13.3 Matrix notation
Multiplying a vector by a matrix
Exercise 14.
There are two ways of doing this.
- Use the rule of multiplying and adding corresponding positions in the rows of the matrix and in the column vector: \[ \begin{bmatrix} 4 & 0 & 1 \\ 0 & 1 & 0 \\ 4 & 0 & 1 \end{bmatrix} \begin{bmatrix} 3 \\ 4 \\ 5 \end{bmatrix} = \begin{bmatrix} 4 \times 3 + 0 \times 4 + 1 \times 5 \\ 0 \times 3 + 1 \times 4 + 0 \times 5 \\ 4 \times 3 + 0 \times 4 + 1 \times 5 \end{bmatrix} = \begin{bmatrix} 17 \\ 4 \\ 17 \end{bmatrix} \]
- Or, which is fully equivalent, re-write the matrix equation as \[ \begin{bmatrix} 4 & 0 & 1 \\ 0 & 1 & 0 \\ 4 & 0 & 1 \end{bmatrix} \begin{bmatrix} 3 \\ 4 \\ 5 \end{bmatrix} = \begin{bmatrix} 4 \\ 0 \\ 4 \end{bmatrix} \times 3 + \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} \times 4 + \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} \times 5 = \begin{bmatrix} 12 \\ 0 \\ 12 \end{bmatrix} + \begin{bmatrix} 0 \\ 4 \\ 0 \end{bmatrix} + \begin{bmatrix} 5 \\ 0 \\ 5 \end{bmatrix} = \begin{bmatrix} 17 \\ 4 \\ 17 \end{bmatrix} \]
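Both views are easy to reproduce numerically; a quick sketch in R:

```r
A <- matrix(c(4, 0, 1,
              0, 1, 0,
              4, 0, 1), nrow = 3, byrow = TRUE)
A %*% c(3, 4, 5)                 # row-times-column rule: (17, 4, 17)
A[, 1]*3 + A[, 2]*4 + A[, 3]*5   # the same result as a combination of the columns
```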
Exercise 15.
\[ \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 5 \\ -2 \\ 3 \end{bmatrix} = \begin{bmatrix} 5 \\ -2 \\ 3 \end{bmatrix} \] It is called an identity matrix because when multiplying a column vector by it, an identical column vector is obtained.
Exercise 16.
\[ \begin{bmatrix} 2 & 0 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 2 \\ 4 \end{bmatrix} \]
Dual representation
Exercise 17.
Graphical solution from the equation perspective:

yields \(x=2\) and \(y=3\). Writing the equations in column vector format: \[ \begin{bmatrix}2\\1\end{bmatrix} x + \begin{bmatrix}-1\\1\end{bmatrix} y = \begin{bmatrix}1\\5\end{bmatrix} \] Graphical solution from the column-vector perspective:

Exercise 18.
a. \(x = 2\) and \(y = -1\)
b. \(x = -2\) and \(y = -1\)
c. \(x = \frac{5}{4}\) and \(y = \frac{1}{4}\)
d. \(x = \frac{1}{2}\) and \(y = 4\)
Exercise 19.
a. Missing figure
b. No
c. Yes, the trivial solution \((0,0)\)
d. \[ \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} x + \begin{bmatrix} 2 \\ -1 \\ 1 \end{bmatrix} y = \begin{bmatrix} 2 \\ 2 \\ 1 \end{bmatrix} \]
e. Yes, for example \(\begin{bmatrix} 3 \\ 0 \\ 1 \end{bmatrix}\)
Pivots
Exercise 20.
If \(c=0\) we have a row of zeros. If \(c=2\), the first and second equations become the same, and their difference, after Gaussian elimination, becomes a row of zeros. If \(c=7\) we have two equal columns: then by Gaussian elimination we can subtract the third from the second equation to get \((-1,0,0)\). If we add 6 times this to the last equation we get \((2,7,7)\), which is the same as the first row, so we can again get a row of zeros.
Exercise 21.
If \(c=0\) we have a row of zeros. If \(c=5\), the first and second equations become the same, and their difference, after Gaussian elimination, becomes a row of zeros. If \(c=3\) we have two equal columns. Then we can subtract 3 times the last row from the first row to get \((0,-6,-6)\), and 5 times the third row from the second to get \((0,-12,-12)\), which is then twice the first row. Hence we would get a row of zeros.
13.4 Solving the matrix equation \(\mat{A} \vec{x} = \vec{0}\)
Vector space
Exercise 22.
Only one, namely the vector space \(\{[0]\}\).
Null space
Exercise 23.
a. \[ \begin{bmatrix} 3 & -6 & 0 \\ 0 & 2 & -2 \\ 1 & -1 & -1 \end{bmatrix} \begin{bmatrix}2 \\ 1 \\ 1\end{bmatrix} = \begin{bmatrix} 6 - 6 + 0 \\ 0 + 2 - 2 \\ 2 - 1 - 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \] Therefore, \((2,1,1)\) is a solution to \(\mat{A} \vec{x} = \vec{0}\). In fact any multiple \(\alpha \begin{bmatrix} 2 \\ 1 \\ 1 \end{bmatrix}\) with \(\alpha\) being a real value will be a solution.
b. To see whether there might be more solutions (i.e. whether the null space of \(\mat{A}\) has a dimension larger than \(1\)) we perform Gaussian elimination on \(\mat{A}\): \[ \begin{array}{rrr} 3 & -6 & 0 \\ 0 & 2 & -2 \\ 1 & -1 & -1 \end{array} \;\yields\; \begin{array}{rrr} 3 & -6 & 0 \\ 0 & 2 & -2 \\ 0 & 1 & -1 \end{array} \;\yields\; \begin{array}{rrr} \underline{3} & -6 & 0 \\ 0 & \underline{2} & -2 \\ 0 & 0 & 0 \end{array} \] We see that there are two pivots (underlined), hence the dimension of the nullspace is \(3 - 2 = 1\), and we can conclude that \(\begin{bmatrix}2 \\ 1 \\ 1\end{bmatrix}\) must span the complete null space.
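Both parts can be checked numerically in R; the rank follows, for example, from the QR decomposition:

```r
A <- matrix(c(3, -6, 0,
              0, 2, -2,
              1, -1, -1), nrow = 3, byrow = TRUE)
A %*% c(2, 1, 1)   # the zero vector, so (2,1,1) lies in the null space
qr(A)$rank         # 2, hence the null space has dimension 3 - 2 = 1
```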
Exercise 24.
\[ \begin{array}{rrr} 1 & 1 & 1 \\ 1 & 2 & 4 \\ 2 & 4 & 8 \end{array} \;\yields\; \begin{array}{rrr} 1 & 1 & 1 \\ 0 & 1 & 3 \\ 0 & 2 & 6 \end{array} \;\yields\; \begin{array}{rrr} 1 & 1 & 1 \\ 0 & 1 & 3 \\ 0 & 0 & 0 \end{array} \;\yields\; \begin{array}{rrr} 1 & 0 & -2 \\ 0 & 1 & 3 \\ 0 & 0 & 0 \end{array} \] which is equivalent to the system of equations \[ \begin{align*} u - 2w &= 0 \\ v + 3w &= 0 \end{align*} \] which means that \[ \left\{ (2w,-3w,w) | w \in \mathbb{R} \right\} \] is the null space. Or alternatively formulated: \[ \begin{bmatrix} 2 \\ -3 \\ 1 \end{bmatrix} w, \quad \textrm{with } w \in \mathbb{R} \]
Exercise 25.
\[ \begin{array}{rrr} 1 & 2 & 1 \\ 2 & 6 & 3 \\ 0 & 2 & 5 \end{array} \;\yields\; \begin{array}{rrr} 1 & 2 & 1 \\ 0 & 2 & 1 \\ 0 & 2 & 5 \end{array} \;\yields\; \begin{array}{rrr} 1 & 2 & 1 \\ 0 & 2 & 1 \\ 0 & 0 & 4 \end{array} \;\yields\; \begin{array}{rrr} 1 & 2 & 1 \\ 0 & 1 & \tfrac{1}{2} \\ 0 & 0 & 1 \end{array} \;\yields\; \begin{array}{rrr} 1 & 2 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \;\yields\; \begin{array}{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \] which is equivalent to the system of equations \[ \begin{align*} u &= 0 \\ v &= 0 \\ w &= 0 \end{align*} \] which means that the single solution \(\{(0,0,0)\}\) or \(\vec{0}\) is the null space.
Exercise 26.
We need to solve:
\[\begin{equation*} \begin{bmatrix} 1 & 2 & -1 & 1 \\ 2 & 4 & -3 & 2 \\ -1 & -2 & 1 & -1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \end{equation*}\]
Reduce the matrix to the reduced row echelon form:
\[\begin{equation*} \begin{bmatrix} 1 & 2 & -1 & 1 \\ 2 & 4 & -3 & 2 \\ -1 & -2 & 1 & -1 \end{bmatrix} \;\yields\; \begin{bmatrix} 1 & 2 & -1 & 1 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \;\yields\; \begin{bmatrix} 1 & 2 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \end{equation*}\]
\(x_2\) and \(x_4\) are the free variables, and \(x_1 = -2 x_2 - x_4\) with \(x_3 = 0\), so the general solution is:
\[\begin{equation*} \vec{x} = x_2 \begin{bmatrix} -2 \\ 1 \\ 0 \\ 0 \end{bmatrix} + x_4 \begin{bmatrix} -1 \\ 0 \\ 0 \\ 1 \end{bmatrix} \end{equation*}\]
Exercise 27.
We need to solve:
\[\begin{equation*} \begin{bmatrix} 1 & 2 & -1 & 3 \\ 2 & 4 & -2 & 6\\ 3 & 6 & -3 & 9 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \end{equation*}\]
Reduce the matrix to the reduced row echelon form:
\[\begin{equation*} \begin{bmatrix} 1 & 2 & -1 & 3 \\ 2 & 4 & -2 & 6\\ 3 & 6 & -3 & 9 \end{bmatrix} \;\yields\; \begin{bmatrix} 1 & 2 & -1 & 3\\ 0 & 0 & 0 & 0\\ 3 & 6 & -3 & 9 \end{bmatrix} \;\yields\; \begin{bmatrix} 1 & 2 & -1 & 3\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \end{bmatrix} \end{equation*}\]
\(x_2\), \(x_3\) and \(x_4\) are the free variables, and \(x_1 = -2 x_2 + x_3 -3 x_4\), so the general solution is:
\[\begin{equation*} \vec{x} = x_2 \begin{bmatrix} -2 \\ 1 \\ 0 \\ 0 \end{bmatrix} + x_3 \begin{bmatrix} 1 \\ 0 \\ 1 \\ 0 \end{bmatrix} + x_4 \begin{bmatrix} -3 \\ 0 \\ 0 \\ 1 \end{bmatrix} \end{equation*}\]
Exercise 28.
a. Since we have 3 equations with 4 variables, at least 1 variable must be free!
b. To find out whether there are more free variables, let’s do Gaussian elimination to obtain the row echelon form and find the rank: \[ \begin{array}{rrrr} 1 & 2 & 0 & 1 \\ 0 & 1 & 1 & 0 \\ 1 & 2 & 0 & 1 \end{array} \;\yields\; \begin{array}{rrrr} \underline{1} & 2 & 0 & 1 \\ 0 & \underline{1} & 1 & 0 \\ 0 & 0 & 0 & 0 \end{array} \] The number of pivots (underlined), which equals the rank, is \(2\). Hence there will be \(\# columns - rank = 4 - 2 = 2\) free variables. Continuing elimination to obtain the reduced-row echelon form: \[ \;\yields\; \begin{bmatrix} 1 & 0 & -2 & 1 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \] An expression for the null space is obtained by expressing the pivot variables in terms of free variables using this reduced system: \(x_1 = 2 x_3 - x_4\) and \(x_2 = -x_3\). Hence, a description of the complete null space is \[ \vec{x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} 2 \\ -1 \\ 1 \\ 0 \end{bmatrix} x_3 + \begin{bmatrix} -1 \\ 0 \\ 0 \\ 1 \end{bmatrix} x_4, \quad \textrm{with } x_3, x_4 \in \mathbb{R} \]
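A numerical check of the rank and of the two spanning vectors, for example in base R:

```r
A <- matrix(c(1, 2, 0, 1,
              0, 1, 1, 0,
              1, 2, 0, 1), nrow = 3, byrow = TRUE)
qr(A)$rank                                   # 2, so there are 4 - 2 = 2 free variables
A %*% cbind(c(2, -1, 1, 0), c(-1, 0, 0, 1))  # both columns map to the zero vector
```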
Exercise 29.
Gaussian elimination yields \[ \begin{array}{rrr} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{array} \;\yields\; \begin{array}{rrr} 1 & 2 & 3 \\ 0 & -3 & -6 \\ 0 & -6 & -12 \end{array} \;\yields\; \begin{array}{rrr} \underline{1} & 2 & 3 \\ 0 & \underline{-3} & -6 \\ 0 & 0 & 0 \end{array} \] We have two pivots, three columns, hence the dimension of the nullspace equals 1. \(x_3\) is a free variable. Continuing elimination: \[ \yields \begin{array}{rrr} 1 & 0 & -1 \\ 0 & -3 & -6 \\ 0 & 0 & 0 \end{array} \;\yields\; \begin{array}{rrr} \underline{1} & 0 & -1 \\ 0 & \underline{1} & 2 \\ 0 & 0 & 0 \end{array} \] The last operation, dividing the second equation by \(-3\), was necessary to make its pivot coefficient equal to \(1\). We obtain a spanning vector and full description of the nullspace from the third column (the one without a pivot) by switching signs and putting a \(1\) at the third position, or equivalently, by just solving this set of equations for the pivot variables (\(x_1\), \(x_2\)) in terms of the free variable \(x_3\) with right-hand sides equal to \(0\): \(x_1 = x_3\) and \(x_2 = -2 x_3\). Or \[ \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 \\ -2 \\ 1 \end{bmatrix} x_3, \quad \textrm{with } x_3 \in \mathbb{R} \]
Exercise 30.
When it is a square matrix with full column rank, every column contains a pivot, so there are no free variables and the null space contains only the null vector \(\vec{0}\). When it is a non-square matrix, the number of rows must be larger than the number of columns, otherwise the matrix could not have full column rank. Every column again contains a pivot, hence there are no free variables, and the null space is \(\{\vec{0}\}\) in this case as well.
Application
Exercise 31.
a. \[ \begin{align} \frac{\mathrm{d}}{\mathrm{d}t} \begin{bmatrix} \mathrm{[G]} \\ \mathrm{[P]} \\ \mathrm{[ATP]} \\ \mathrm{[ADP]} \end{bmatrix} = \begin{bmatrix} 1 & -1 & 0 & 0 & 0 & 0 \\ 0 & 2 & -1 & -1 & 1 & -1 \\ 0 & 2 & 0 & 10 & 0 & -100 \\ 0 & -2 & 0 & -10 & 0 & 100 \end{bmatrix} \begin{bmatrix} v_0 \\ v_1 \\ v_2 \\ v_3 \\ v_4 \\ v_{BM} \end{bmatrix} \end{align} \]
b. \[ \begin{align} \frac{\mathrm{d}}{\mathrm{d}t} \begin{bmatrix} \mathrm{[G]} \\ \mathrm{[P]} \\ \mathrm{[ATP]} \\ \mathrm{[ADP]} \end{bmatrix} = \begin{bmatrix} 1 & -1 & 0 & 0 & 0 & 0 \\ 0 & 2 & -1 & -1 & 1 & -1 \\ 0 & 2 & 0 & 10 & 0 & -100 \\ 0 & -2 & 0 & -10 & 0 & 100 \end{bmatrix} \begin{bmatrix} v_0 \\ v_1 \\ v_2 \\ v_3 \\ v_4 \\ v_{BM} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0\end{bmatrix} \end{align} \] \[ \begin{bmatrix} 1 & -1 & 0 & 0 & 0 & 0 \\ 0 & 2 & -1 & -1 & 1 & -1 \\ 0 & 2 & 0 & 10 & 0 & -100 \\ 0 & -2 & 0 & -10 & 0 & 100 \end{bmatrix} \yields \begin{bmatrix} 1 & -1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 5 & 0 & -50 \\ 0 & -2 & 1 & 1 & -1 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix} \yields \begin{bmatrix} 1 & 0 & 0 & 5 & 0 & -50 \\ 0 & 1 & 0 & 5 & 0 & -50 \\ 0 & 0 & 1 & 11 & -1 & -99 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix} \]
\(v_3\), \(v_4\) and \(v_{BM}\) are the free variables, and \(v_0 = -5 v_3 + 50 v_{BM}\), \(v_1 = -5 v_3 + 50 v_{BM}\) and \(v_2 = -11 v_3 + v_4 + 99 v_{BM}\). The nullspace is described by:
\[ \begin{bmatrix} -5 \\ -5 \\ -11 \\ 1 \\ 0 \\ 0 \end{bmatrix} \cdot v_3 + \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \\ 1 \\ 0 \end{bmatrix} \cdot v_4 + \begin{bmatrix} 50 \\ 50 \\ 99 \\ 0 \\ 0 \\ 1 \end{bmatrix} \cdot v_{BM} \]
c. They can be formed by choosing the right values for \(v_3\), \(v_4\) and \(v_{BM}\) from the vectors that span the nullspace. EFM1: \(v_3 = 9\), \(v_4=0\) and \(v_{BM} = 1\); EFM2: \(v_3 = 0\), \(v_4=0\) and \(v_{BM} = 1\); EFM3: \(v_3 = 10\), \(v_4=11\) and \(v_{BM} = 1\). Alternatively, you can multiply the stoichiometry matrix with the elementary flux modes and show that this yields the zero vector, which is the defining property of vectors in the nullspace.
d. The rank of the stoichiometry matrix is 3 (3 pivot elements) and there are 6 variables (6 columns in the matrix). Therefore the dimension of the nullspace is \(6 - 3 = 3\). There are 3 EFMs and they are independent (this is clear from how they are formed from the vectors that span the nullspace, which are linearly independent) and all in the nullspace; therefore they form a basis of the nullspace.
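These claims are easy to check numerically; a sketch in R that reconstructs the EFMs from the coefficients in part (c) and verifies that they lie in the nullspace:

```r
N <- matrix(c(1, -1,  0,   0, 0,    0,
              0,  2, -1,  -1, 1,   -1,
              0,  2,  0,  10, 0, -100,
              0, -2,  0, -10, 0,  100), nrow = 4, byrow = TRUE)
# spanning vectors of the nullspace found in part (b)
n3  <- c(-5, -5, -11, 1, 0, 0)
n4  <- c( 0,  0,   1, 0, 1, 0)
nBM <- c(50, 50,  99, 0, 0, 1)
EFM <- cbind(9*n3 + nBM, nBM, 10*n3 + 11*n4 + nBM)
N %*% EFM    # all three columns are zero vectors
qr(N)$rank   # 3, so the nullspace has dimension 6 - 3 = 3
```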
13.5 Solving the matrix equation \(\mat{A} \vec{x}=\vec{b}\)
Solving \(\mat{A}\vec{x}=\vec{b}\)
Exercise 32.
Gaussian elimination to reduced row echelon form yields \[ \begin{array}{rrrr|r} 1 & 3 & 3 & 2 & 1\\ 2 & 6 & 9 & 7 & 5\\ -1 & -3 & 3 & 4 & 5 \end{array} \;\yields\; \begin{array}{rrrr|r} 1 & 3 & 3 & 2 & 1\\ 0 & 0 & 3 & 3 & 3\\ 0 & 0 & 6 & 6 & 6 \end{array} \;\yields\; \begin{array}{rrrr|r} 1 & 3 & 3 & 2 & 1\\ 0 & 0 & 3 & 3 & 3\\ 0 & 0 & 0 & 0 & 0 \end{array} \;\yields\; \begin{array}{rrrr|r} \underline{1} & 3 & 0 & -1 & -2\\ 0 & 0 & \underline{1} & 1 & 1\\ 0 & 0 & 0 & 0 & 0 \end{array} \] Our pivot variables are \(u\) and \(w\) and the free variables are \(v\) and \(x\). The reduced row echelon form contains the following equations: \[ \begin{align*} u + 3 v - x &= -2 \\ w + x &= 1 \end{align*} \] or, expressing the pivot variables in terms of free variables: \[ \begin{align*} u &= - 2 - 3 v + x\\ w &= 1 - x \end{align*} \] which means that the solution set to the original set of equations can be described as \[ \begin{bmatrix} u \\ v \\ w \\ x \end{bmatrix} = \begin{bmatrix} -2 \\ 0 \\ 1 \\ 0 \end{bmatrix} + \begin{bmatrix} -3 \\ 1 \\ 0 \\ 0 \end{bmatrix} v + \begin{bmatrix} 1 \\ 0 \\ -1 \\ 1 \end{bmatrix} x \] with \(v,x \in \mathbb{R}\).
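A quick numerical check of the particular solution and of the two null-space directions:

```r
A <- matrix(c( 1,  3, 3, 2,
               2,  6, 9, 7,
              -1, -3, 3, 4), nrow = 3, byrow = TRUE)
A %*% c(-2, 0, 1, 0)                         # reproduces b = (1, 5, 5)
A %*% cbind(c(-3, 1, 0, 0), c(1, 0, -1, 1))  # zero columns: solutions of A x = 0
```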
Exercise 33.
Perform Gaussian elimination \[ \begin{array}{rrr|l} 1 & 1 & 2 & 2 \\ 2 & 3 & -1 & 5 \\ 3 & 4 & 1 & c \end{array} \;\yields\; \begin{array}{rrr|l} 1 & 1 & 2 & 2 \\ 0 & 1 & -5 & 1 \\ 0 & 1 & -5 & c - 6 \end{array} \;\yields\; \begin{array}{rrr|l} 1 & 1 & 2 & 2 \\ 0 & 1 & -5 & 1 \\ 0 & 0 & 0 & c - 7 \end{array} \] This implies that only \(c = 7\) is compatible with the system (yielding \(0 = 0\) as the third equation). We continue the elimination process (after substituting \(c = 7\)) to obtain the reduced row-echelon form: \[ \begin{array}{rrr|r} 1 & 1 & 2 & 2 \\ 0 & 1 & -5 & 1 \\ 0 & 0 & 0 & 0 \end{array} \;\yields\; \begin{array}{rrr|r} 1 & 0 & 7 & 1 \\ 0 & 1 & -5 & 1 \\ 0 & 0 & 0 & 0 \end{array} \] Or as a system of equations (equivalent to the original system!): \[ \begin{align*} u + 7 w &= 1 \\ v - 5 w &= 1 \\ \end{align*} \] This system, or the reduced row-echelon form, shows a few things. First, that the general solution to \(\mat{A} \vec{x} = \begin{bmatrix} 2 \\ 5 \\ 7 \end{bmatrix}\) (the original equation with \(c=7\) substituted) will have one free variable, here \(w\), and second, its right-hand side column vector \(\begin{bmatrix}u \\ v \\ w \end{bmatrix} = \begin{bmatrix}1 \\ 1 \\ 0 \end{bmatrix}\) is a particular solution to this equation (just fill in \(w=0\) in the reduced system of equations: \(w\) can have any value, so why not use \(0\)). The general solution also follows from the reduced set of equations: just bring the terms with \(w\) to the right side: \[ \begin{align*} u &= 1 - 7 w \\ v &= 1 + 5 w \\ \end{align*} \] which yields for the complete solution \[ \begin{bmatrix}u \\ v \\ w \end{bmatrix} = \begin{bmatrix}1 - 7 w \\ 1 + 5 w \\ w \end{bmatrix} = \begin{bmatrix}1 \\ 1 \\ 0 \end{bmatrix} + w \begin{bmatrix}-7 \\ 5 \\ 1 \end{bmatrix} \] with \(w \in \mathbb{R}\). The vector \(\begin{bmatrix}-7 \\ 5 \\ 1 \end{bmatrix}\) is a solution to \(\mat{A} \vec{x} = \vec{0}\).
Exercise 34.
The system of equations has two equations and four variables \((x_{1},x_{2},x_{3},x_{4})\). So, at least two variables will remain undetermined, or the nullspace of \(\mat{A}\) will have at least dimension 2. Gaussian elimination on the system yields \[ \begin{array}{rrrr|l} 1 & 2 & 0 & 3 & b_1 \\ 2 & 4 & 0 & 7 & b_2 \end{array} \;\yields\; \begin{array}{rrrr|l} 1 & 2 & 0 & 3 & b_1 \\ 0 & 0 & 0 & 1 & b_2 - 2 b_1 \end{array} \] as the echelon form, so \(\mat{A}\) has rank 2, and its columns span the full vector space \(\mathbb{R}^{2}\) (the set of all real-valued two-component vectors). Therefore, any \(\vec{b}\) yields a solution (there are no solvability conditions). The complete solution can be most easily deduced from the reduced row echelon form \[ \begin{array}{rrrr|l} 1 & 2 & 0 & 0 & 7 b_1 - 3 b_2 \\ 0 & 0 & 0 & 1 & b_2 - 2 b_1 \end{array} \] which, with \(x_2\) and \(x_3\) as free variables, yields \[ \vec{x} = \begin{bmatrix} -2\\ 1\\ 0\\ 0 \end{bmatrix} x_{2} + \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix} x_{3} + \begin{bmatrix} 7b_{1} - 3b_{2} \\ 0 \\ 0 \\ b_{2}-2b_{1} \end{bmatrix} \]
Exercise 35.
a. This matrix equation corresponds to a system of 3 equations with 4 variables.
b. There must be at least 1 free variable
c. Use Gaussian elimination: \[ \begin{array}{rrrr} 1 & 1 & 2 & 2\\ 1 & 1 & 1 & 0\\ 0 & 0 & 1 & 2 \end{array} \yields \begin{array}{rrrr} 1 & 1 & 2 & 2\\ 0 & 0 & -1 & -2\\ 0 & 0 & 1 & 2 \end{array} \yields \begin{array}{rrrr} 1 & 1 & 2 & 2\\ 0 & 0 & -1 & -2\\ 0 & 0 & 0 & 0 \end{array} \] Hence, the rank (number of pivots) equals 2.
d. There are \(4-2=2\) free variables, so the null space has dimension 2.
e. TODO
f. \(b_3 + b_2 - b_1 = 0\)
Exercise 36.
a. \[ \begin{array}{rrrr|l} 1 & 2 & 3 & 5 & b_1 \\ 2 & 4 & 8 & 12 & b_2 \\ 3 & 6 & 7 & 13 & b_3 \end{array} \;\yields\; \begin{array}{rrrr|l} 1 & 2 & 3 & 5 & b_1 \\ 0 & 0 & 2 & 2 & b_2 - 2b_1 \\ 0 & 0 & -2 & -2 & b_3 - 3b_1 \end{array} \;\yields\; \begin{array}{rrrr|l} 1 & 2 & 3 & 5 & b_1 \\ 0 & 0 & 2 & 2 & b_2 - 2b_1 \\ 0 & 0 & 0 & 0 & b_3 + b_2 - 5b_1 \end{array} \]
b. Clearly, \(b_{3}+b_{2}-5b_{1}=0\).
c. \[ \begin{array}{rrrr|l} 1 & 2 & 3 & 5 & b_1 \\ 0 & 0 & 1 & 1 & \frac{b_2 - 2b_1}{2} \\ 0 & 0 & 0 & 0 & b_3 + b_2 - 5b_1 \end{array} \;\yields\; \begin{array}{rrrr|l} 1 & 2 & 0 & 2 & b_1 - \frac{3}{2} (b_2 - 2b_1)\\ 0 & 0 & 1 & 1 & \frac{b_2 - 2b_1}{2} \\ 0 & 0 & 0 & 0 & b_3 + b_2 - 5b_1 \end{array} \]
d. Well, that would be the solution to \(\mat{A}\vec{x}=\vec{0}\) of the original system, and hence also of the reduced system \[ \begin{align*} x_1 + 2 x_2 + 2 x_4 &= 0 \\ x_3 + x_4 &= 0 \end{align*} \] with \(x_2\) and \(x_4\) being our free variables we obtain the equation describing the nullspace \[ \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} -2 \\ 1 \\ 0 \\ 0 \end{bmatrix} x_2 + \begin{bmatrix} -2 \\ 0 \\ -1 \\ 1 \end{bmatrix} x_4 \] (Check this!)
e. The system is solvable (has a non-empty solution set) when \(b_{3}+b_{2}-5b_{1} = 0\). That is true for \(b_1=0\), \(b_2=6\) and \(b_{3}=-6\). Substituting these values in the reduced row echelon form yields \[ \begin{bmatrix} 1 & 2 & 0 & 2 & -9 \\ 0 & 0 & 1 & 1 & 3 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix} \] Filling in \(0\) for our free variables yields \(x_1 = -9\) and \(x_3 = 3\), or \[ \begin{bmatrix} -9 \\ 0 \\ 3 \\ 0 \end{bmatrix} \] as a “particular” solution (Check this!). The complete solution equals the particular solution \(+\) the null space: \[ \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} -9 \\ 0 \\ 3 \\ 0 \end{bmatrix} + \begin{bmatrix} -2 \\ 1 \\ 0 \\ 0 \end{bmatrix} x_2 + \begin{bmatrix} -2 \\ 0 \\ -1 \\ 1 \end{bmatrix} x_4 \]
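The two “Check this!” invitations can be carried out in one stroke in R:

```r
A <- matrix(c(1, 2, 3, 5,
              2, 4, 8, 12,
              3, 6, 7, 13), nrow = 3, byrow = TRUE)
A %*% c(-9, 0, 3, 0)                          # the particular solution gives (0, 6, -6)
A %*% cbind(c(-2, 1, 0, 0), c(-2, 0, -1, 1))  # the null-space directions give zero columns
```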
Applications
Exercise 37. Deconvolution
a. \[ \begin{align*} 2x + y & = 8.0 \\ x + 1.5y & = 4.5 \end{align*} \]
b. By Gaussian elimination \[ \begin{bmatrix} 2 & 1 & 8.0 \\ 1 & 1.5 & 4.5 \end{bmatrix} \yields \begin{bmatrix} 2 & 1 & 8.0 \\ 0 & 1 & 0.5 \end{bmatrix} \] So, from the last equation \(y=0.5\,\text{mM}\). Back substitution in the first equation yields \(2x+0.5=8.0\) or \(2x=7.5\) or \(x=3.75\,\text{mM}\).
c. It would have failed if the extinction coefficients of one of the compounds had been a multiple (possibly 1) of those of the other compound.
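As a numerical check of part (b), a minimal sketch in base R:

```r
E <- matrix(c(2, 1,
              1, 1.5), nrow = 2, byrow = TRUE)  # extinction coefficients
absorbance <- c(8.0, 4.5)
solve(E, absorbance)   # concentrations x = 3.75 mM and y = 0.5 mM
```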
Exercise 38.
a. \[ \begin{align*} 41 p_1 + 37 p_2 + 39 p_3 + 40 p_4 & = 39 \\ 37 p_1 + 34 p_2 + 36 p_3 + 34 p_4 & = 35 \\ p_1 + p_2 + p_3 + p_4 & = 1 \end{align*} \] The last equation says that the fractions have to add up to \(1\). In matrix form this is: \[ \begin{bmatrix} 41 & 37 & 39 & 40 \\ 37 & 34 & 36 & 34 \\ 1 & 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} p_1 \\ p_2 \\ p_3 \\ p_4\end{bmatrix} = \begin{bmatrix} 39 \\ 35 \\ 1 \end{bmatrix} \]
b. Since \(p_1 \ldots p_4\) are fractions they have to have values between \(0\) and \(1\): \(0 \leq p_1 \leq 1\), \(0 \leq p_2 \leq 1\), \(0 \leq p_3 \leq 1\) and \(0 \leq p_4 \leq 1\).
c. A solution to the matrix equation might yield values of \(p_1 \ldots p_4\) outside these boundaries. We would have to formulate the problem in a manner such that the first two equations yield final fat and protein concentrations as close as possible to the target values without violating the other constraints on the fractions (for example by minimizing the sum of squares between the target value and the best possible value). Another, “technical” solution that may sometimes help to gain an additional degree of freedom (free variable) is the addition of water as a fifth component.
Exercise 39.
a. \[ \begin{align*} \dd{a}{t} & = - v_1 + v_3 \\ \dd{b}{t} & = 2 v_1 - v_2 \\ \dd{c}{t} & = v_2 - 2 v_3 \end{align*} \] or in matrix notation, using the dot notation for derivatives (\(\dd{a}{t} \equiv \dot{a}\)): \[ \begin{bmatrix}\dot{a} \\ \dot{b} \\ \dot{c} \end{bmatrix} = \begin{bmatrix} -1 & 0 & 1 \\ 2 & -1 & 0 \\ 0 & 1 & -2 \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} \]
b. We use Gaussian elimination again \[ \begin{split} \begin{array}{rrrr} -1 & 0 & 1 & \dot{a} \\ 2 & -1 & 0 & \dot{b} \\ 0 & 1 & -2 & \dot{c} \end{array} \;\yields\; \begin{array}{rrrr} -1 & 0 & 1 & \dot{a} \\ 0 & -1 & 2 & \dot{b} + 2\dot{a} \\ 0 & 1 & -2 & \dot{c} \end{array} \;\yields \\ \\ \begin{array}{rrrr} -1 & 0 & 1 & \dot{a} \\ 0 & -1 & 2 & \dot{b} + 2\dot{a} \\ 0 & 0 & 0 & \dot{c} + \dot{b} + 2 \dot{a} \end{array} \;\yields\; \begin{array}{rrrr} 1 & 0 & -1 & -\dot{a} \\ 0 & 1 & -2 & -\dot{b} - 2\dot{a} \\ 0 & 0 & 0 & \dot{c} + \dot{b} + 2 \dot{a} \end{array} \end{split} \] We see that the free variable is the third rate \(v_3\) and that the system is only solvable if \[ \dot{c} + \dot{b} + 2\dot{a} = \dd{c}{t} + \dd{b}{t} + 2 \dd{a}{t} = \dd{(c+b+2a)}{t} = 0 \] This is a conservation relation stating that the sum of concentrations \(c+b+2a\) must be constant. This is called a conserved moiety. Obvious conserved moieties are often enzyme cofactors like \(\ce{NAD + NADH}\) or \(\ce{ATP + ADP + AMP}\) but the example above shows that conserved moieties are not limited to cofactors. The dimension of the null space (solutions to the steady state equation \(\mat{N} \vec{v} = \vec{0}\)) equals \(\#columns - \#pivots = 3 - 2 = 1\). It can be found in the column corresponding to the third (free) variable, namely (1,2,1). So, the complete steady state solution is \[ \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \\ 1\end{bmatrix} v_3 \] It says that in steady state \(v_1\) and \(v_3\) are equal, and that \(v_2\) is twice their value. In the metabolic network analysis terminology the matrix \((1,2,1)\) of vectors spanning the null space (all solutions to the steady state equation!) is the K-matrix. The complete non-steady state solution is \[ \begin{bmatrix}v_1 \\ v_2 \\ v_3 \end{bmatrix} = \begin{bmatrix}1 \\ 2 \\ 1 \end{bmatrix} v_3 + \begin{bmatrix} -\dot{a} \\ -\dot{b} - 2 \dot{a} \\ 0 \end{bmatrix} = \begin{bmatrix}1 \\ 2 \\ 1 \end{bmatrix} v_3 + \begin{bmatrix} -1 & 0 \\ -2 & -1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} \dot{a} \\ \dot{b} \end{bmatrix} \] but in metabolic system analysis actually we would usually write \(\dot{a}\) and \(\dot{b}\) as functions of \(v_1\), \(v_2\) and \(v_3\), which, by the enzyme rate equations are again functions of \(A\) and \(B\), if we’re interested in the dynamics of the system. The L-matrix used in metabolic network analysis records the conservation relations, which as we saw, follow from the solvability conditions in the \(\vec{b}\)-vector. Here \(\begin{bmatrix} 2 & 1 & 1 \end{bmatrix} \begin{bmatrix} \dot{a} \\ \dot{b} \\ \dot{c} \end{bmatrix} = 0\). This dependency can also be written as \(\begin{bmatrix} \dot{a} \\ \dot{b} \\ \dot{c} \end{bmatrix} = \begin{bmatrix*}[r] 1 & 0 \\ 0 & 1 \\ -2 & -1 \end{bmatrix*} \begin{bmatrix} \dot{a} \\ \dot{b} \end{bmatrix}\) where \(\begin{bmatrix} 1 & 0 \\ 0 & 1 \\ -2 & -1 \end{bmatrix}\) is called the L-matrix.
c. \[ \begin{array}{rrr|rrr} -1 & 0 & 1 & 1 & 0 & 0 \\ 2 & -1 & 0 & 0 & 1 & 0 \\ 0 & 1 & -2 & 0 & 0 & 1 \end{array} \;\yields\; \begin{array}{rrr|rrr} -1 & 0 & 1 & 1 & 0 & 0 \\ 0 & -1 & 2 & 2 & 1 & 0 \\ 0 & 1 & -2 & 0 & 0 & 1 \end{array} \;\yields\; \begin{array}{rrr|rrr} -1 & 0 & 1 & 1 & 0 & 0 \\ 0 & -1 & 2 & 2 & 1 & 0 \\ 0 & 0 & 0 & 2 & 1 & 1 \end{array} \] The last row, in which the left-hand side equals \(\vec{0}\), has the conserved moiety (\(2\dot{a} + \dot{b} + \dot{c}\)) in the augmented part. Notice that this is actually not much different from what we did above when augmenting the vector \((\dot{a},\dot{b},\dot{c})\).
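A numerical check in R of the K-matrix (spanning the steady-state fluxes) and of the conservation relation found above:

```r
N <- matrix(c(-1,  0,  1,
               2, -1,  0,
               0,  1, -2), nrow = 3, byrow = TRUE)
K <- c(1, 2, 1)   # spans the null space: the steady-state flux distribution
N %*% K           # the zero vector
g <- c(2, 1, 1)   # conservation relation 2a + b + c
g %*% N           # the zero row: d(2a + b + c)/dt = 0 for any flux vector
```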
Exercise 40.
a. \[ \begin{align} \frac{\mathrm{d}}{\mathrm{d}t} \begin{bmatrix} \mathrm{C} \\ \mathrm{H} \\ \mathrm{O} \end{bmatrix} = \begin{bmatrix} -7 & 0 & 1 & 0 \\ -16 & 0 & 0 & 2\\ 0 & -2 & 2 & 1 \end{bmatrix} \begin{bmatrix} a \\ b \\ c \\ d \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \end{align} \]
b. \[ \begin{bmatrix} -16 & 0 & 0 & 2 \\ 0 & -2 & 2 & 1 \\ -7 & 0 & 1 & 0 \end{bmatrix} \yields \begin{bmatrix} -1 & 0 & 0 & \frac{1}{8} \\ 0 & -1 & 1 & \frac{1}{2} \\ 0 & 0 & 1 & -\frac{7}{8} \end{bmatrix} \yields \begin{bmatrix} 1 & 0 & 0 & -\frac{1}{8} \\ 0 & 1 & 0 & -\frac{11}{8} \\ 0 & 0 & 1 & -\frac{7}{8} \end{bmatrix} \] The free variable is \(d\), so the solutions are multiples of \((\tfrac{1}{8}, \tfrac{11}{8}, \tfrac{7}{8}, 1)\); choosing \(d = 8\) gives the smallest whole-number coefficients: \[ \begin{align}\begin{bmatrix} a \\ b \\ c \\ d \end{bmatrix} = \begin{bmatrix} 1 \\ 11 \\ 7 \\ 8 \end{bmatrix} \end{align} \]
c. \[ \begin{align} \frac{\mathrm{d}}{\mathrm{d}t} \begin{bmatrix} \mathrm{C} \\ \mathrm{H} \\ \mathrm{O} \end{bmatrix} = \begin{bmatrix} -7 \\ -16 \\ 0 \end{bmatrix} + \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 2\\ -2 & 2 & 1 \end{bmatrix} \begin{bmatrix} b \\ c \\ d \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \end{align} \] \[ \begin{align} \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 2 \\ -2 & 2 & 1 \end{bmatrix} \begin{bmatrix} b \\ c \\ d \end{bmatrix} = \begin{bmatrix} 7 \\ 16 \\ 0 \end{bmatrix} \end{align} \]
\[ \begin{align} \left[ \begin{array}{ccc|c} -2 & 2 & 1 & 0 \\ 0 & 1 & 0 & 7 \\ 0 & 0 & 2 & 16 \end{array} \right] \yields \left[ \begin{array}{ccc|c} -2 & 0 & 0 & -22 \\ 0 & 1 & 0 & 7 \\ 0 & 0 & 2 & 16 \end{array} \right] \yields \left[ \begin{array}{ccc|c} 1 & 0 & 0 & 11 \\ 0 & 1 & 0 & 7 \\ 0 & 0 & 1 & 8 \end{array} \right]\end{align} \]
d. \[ \begin{align} \begin{bmatrix} b \\ c \\ d \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 2\\ -2 & 2 & 1 \end{bmatrix}^{-1} \begin{bmatrix} 7 \\ 16 \\ 0 \end{bmatrix} \end{align} \]
\[ \begin{align} \left[ \begin{array}{ccc|ccc} 0 & 1 & 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 & 1 & 0 \\ -2 & 2 & 1 & 0 & 0 & 1 \end{array} \right] \yields \left[ \begin{array}{ccc|ccc} -2 & 2 & 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 & 1 & 0 \end{array} \right] \yields \left[ \begin{array}{ccc|ccc} 1 & 0 & 0 & 1 & \frac{1}{4} & -\frac{1}{2} \\ 0 & 1 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & \frac{1}{2} & 0 \end{array} \right] \end{align} \] \[ \begin{align} \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 2\\ -2 & 2 & 1 \end{bmatrix}^{-1} = \begin{bmatrix} 1 & \frac{1}{4} & -\frac{1}{2} \\ 1 & 0 & 0\\ 0 & \frac{1}{2} & 0 \end{bmatrix} \end{align} \]
\[ \begin{align} \begin{bmatrix} b \\ c \\ d \end{bmatrix} = \begin{bmatrix} 1 & \frac{1}{4} & -\frac{1}{2} \\ 1 & 0 & 0\\ 0 & \frac{1}{2} & 0 \end{bmatrix} \begin{bmatrix} 7 \\ 16 \\ 0 \end{bmatrix} = \begin{bmatrix} 11 \\ 7 \\ 8 \end{bmatrix} \end{align} \]
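Both the inverse and the final multiplication can be verified in R:

```r
M <- matrix(c( 0, 1, 0,
               0, 0, 2,
              -2, 2, 1), nrow = 3, byrow = TRUE)
solve(M)               # reproduces the inverse computed above
solve(M, c(7, 16, 0))  # (b, c, d) = (11, 7, 8)
```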
Exercise 41.
a. TODO: drawing
b. \[ \begin{align*} \dd{x}{t} & = - v_1 + v_3 \\ \dd{y}{t} & = 3 v_1 - 3 v_2 \\ \dd{z}{t} & = 2 v_2 - 2 v_3 \end{align*} \] or in matrix notation, using the dot notation for derivatives (\(\dd{x}{t} \equiv \dot{x}\)): \[ \begin{bmatrix}\dot{x} \\ \dot{y} \\ \dot{z} \end{bmatrix} = \begin{bmatrix} -1 & 0 & 1 \\ 3 & -3 & 0 \\ 0 & 2 & -2 \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} \]
c. We use Gaussian elimination again \[ \begin{split} \begin{array}{rrr|l} -1 & 0 & 1 & \dot{x} \\ 3 & -3 & 0 & \dot{y} \\ 0 & 2 & -2 & \dot{z} \end{array} \;\yields\; \begin{array}{rrr|l} -1 & 0 & 1 & \dot{x} \\ 0 & -3 & 3 & \dot{y} + 3 \dot{x} \\ 0 & 2 & -2 & \dot{z} \end{array} \;\yields\; \\ \begin{array}{rrr|l} -1 & 0 & 1 & \dot{x} \\ 0 & -3 & 3 & \dot{y} + 3 \dot{x} \\ 0 & 0 & 0 & \dot{z} + \frac{2}{3}(\dot{y} + 3 \dot{x}) \end{array} \end{split} \] We see that the free variable is the third rate \(v_3\) and that the system is only solvable if \[ \begin{align*} \dot{z} + \frac{2}{3}(\dot{y} + 3 \dot{x}) & = 0 \\ 3 \dot{z} + 2 \dot{y} + 6 \dot{x} & = 0 \\ 3 \dd{z}{t} + 2 \dd{y}{t} + 6 \dd{x}{t} & = 0 \\ \dd{(3 z + 2 y + 6 x)}{t} & = 0 \end{align*} \] This is a conservation relation stating that the sum of concentrations \(3z + 2y + 6 x\) must be constant.
d. \[ \begin{bmatrix} -1 & 0 & 1 \\ 3 & -3 & 0 \\ 0 & 2 & -2 \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \]
e. The complete steady state solution can be deduced from the Gaussian reduced form: \[ \begin{bmatrix} -1 & 0 & 1 \\ 0 & -1 & 1 \\ \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \] yielding \(v_1 = v_3\) and \(v_2 = v_3\) \[ \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 1\end{bmatrix} v_3 \] with \(v_3 \in \mathbb{R}\)
Exercise 42. The perceptron binary classifier
a. The solution to \(f(\vec{x})=0\) corresponds to the solution space of the inhomogeneous linear equation \(\mat{W}\vec{x}^T = \vec{b}\), where \(\mat{W}\) is a matrix with one row (a row vector) and \(\vec{b}\) is a vector with one element (a scalar). The solution space is an \((n-1)\)-dimensional hyperplane in \(n\)-dimensional space.
b. These points correspond to the points on one or the other side of the hyperplane defined by \(f(\vec{x})=0\). Ideally, the training samples of the two classes lie on either side of this hyperplane.
13.6 Working with matrices
Multiplying matrices
Exercise 43.
\[ \begin{bmatrix} 17 & 1 & 0 \\ 4 & 8 & 0 \end{bmatrix} \]
Exercise 44.
That’s funny, the rows are exchanged! \[ \begin{bmatrix} 7 & -1 & 1 \\ 3 & 2 & 4 \end{bmatrix} \]
Exercise 45.
\[ \begin{bmatrix} a & b \\ 2a + 3c & 2b + 3d \end{bmatrix} \]
Exercise 46.
\(1^2 + (-2)^2 + 7^2 = 54\). This corresponds to the square of the length of the line segment.
Exercise 47.
\(3 + (-10) + 7 = 0\)
Exercise 48.
\[ \begin{bmatrix} 6 & 3 \\ 1 & -1 \end{bmatrix} \]
Exercise 49.
\[ \begin{bmatrix} 6 & 1 \\ 3 & -1 \end{bmatrix} \]
Inverses of matrices
Exercise 50.
Augment matrix \(\mat{A}\) with the identity matrix:
\[\begin{equation*} \left[ A | I \right] = \left[ \begin{array}{cc|cc} 2 & 1 & 1 & 0 \\ 1 & 3 & 0 & 1 \end{array} \right] \end{equation*}\]
Divide the first row by 2:
\[\begin{equation*} \left[ \begin{array}{cc|cc} 1 & \frac{1}{2} & \frac{1}{2} & 0 \\ 1 & 3 & 0 & 1 \end{array} \right] \end{equation*}\]
Subtract the first row from the second row:
\[\begin{equation*} \left[ \begin{array}{cc|cc} 1 & \frac{1}{2} & \frac{1}{2} & 0 \\ 0 & \frac{5}{2} & -\frac{1}{2} & 1 \end{array} \right] \end{equation*}\]
Divide the second row by \(\frac{5}{2}\):
\[\begin{equation*} \left[ \begin{array}{cc|cc} 1 & \frac{1}{2} & \frac{1}{2} & 0 \\ 0 & 1 & -\frac{1}{5} & \frac{2}{5} \end{array} \right] \end{equation*}\]
Subtract \(\frac{1}{2}\) times the second row from the first row:
\[\begin{equation*} \left[ \begin{array}{cc|cc} 1 & 0 & \frac{3}{5} & -\frac{1}{5} \\ 0 & 1 & -\frac{1}{5} & \frac{2}{5} \end{array} \right] \end{equation*}\]
The matrix on the right side of the augmented matrix is the inverse of \(\mat{A}\):
\[\begin{equation*} \mat{A}^{-1} = \begin{bmatrix} \frac{3}{5} & -\frac{1}{5} \\ -\frac{1}{5} & \frac{2}{5} \end{bmatrix} \end{equation*}\]
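In R the same inverse follows directly from solve():

```r
A <- matrix(c(2, 1,
              1, 3), nrow = 2, byrow = TRUE)
solve(A)   # matches (3/5, -1/5; -1/5, 2/5)
```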
Exercise 51.
Augment matrix \(\mat{B}\) with the identity matrix:
\[\begin{equation*} \left[ B | I \right] = \left[ \begin{array}{ccc|ccc} 4 & 2 & 1 & 1 & 0 & 0 \\ 0 & 1 & 2 & 0 & 1 & 0 \\ 1 & 3 & 5 & 0 & 0 & 1 \end{array} \right] \end{equation*}\]
Subtract \(2\) times the second row from the first row:
\[\begin{equation*} \left[ \begin{array}{ccc|ccc} 4 & 0 & -3 & 1 & -2 & 0 \\ 0 & 1 & 2 & 0 & 1 & 0 \\ 1 & 3 & 5 & 0 & 0 & 1 \end{array} \right] \end{equation*}\]
Subtract \(3\) times the second row from the third row:
\[\begin{equation*} \left[ \begin{array}{ccc|ccc} 4 & 0 & -3 & 1 & -2 & 0 \\ 0 & 1 & 2 & 0 & 1 & 0 \\ 1 & 0 & -1 & 0 & -3 & 1 \end{array} \right] \end{equation*}\]
Subtract \(3\) times the third row from the first row:
\[\begin{equation*} \left[ \begin{array}{ccc|ccc} 1 & 0 & 0 & 1 & 7 & -3 \\ 0 & 1 & 2 & 0 & 1 & 0 \\ 1 & 0 & -1 & 0 & -3 & 1 \end{array} \right] \end{equation*}\]
Subtract the first row from the third row:
\[\begin{equation*} \left[ \begin{array}{ccc|ccc} 1 & 0 & 0 & 1 & 7 & -3 \\ 0 & 1 & 2 & 0 & 1 & 0 \\ 0 & 0 & -1 & -1 & -10 & 4 \end{array} \right] \end{equation*}\]
Add \(2\) times the third row to the second row:
\[\begin{equation*} \left[ \begin{array}{ccc|ccc} 1 & 0 & 0 & 1 & 7 & -3 \\ 0 & 1 & 0 & -2 & -19 & 8 \\ 0 & 0 & -1 & -1 & -10 & 4 \end{array} \right] \end{equation*}\]
Multiply the third row by \(-1\):
\[\begin{equation*} \left[ \begin{array}{ccc|ccc} 1 & 0 & 0 & 1 & 7 & -3 \\ 0 & 1 & 0 & -2 & -19 & 8 \\ 0 & 0 & 1 & 1 & 10 & -4 \end{array} \right] \end{equation*}\]
The matrix on the right side of the augmented matrix is the inverse of \(\mat{B}\):
\[\begin{equation*} \mat{B}^{-1} = \begin{bmatrix} 1 & 7 & -3 \\ -2 & -19 & 8 \\ 1 & 10 & -4 \end{bmatrix} \end{equation*}\]
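A quick check that this is indeed the inverse:

```r
B <- matrix(c(4, 2, 1,
              0, 1, 2,
              1, 3, 5), nrow = 3, byrow = TRUE)
Binv <- matrix(c( 1,   7, -3,
                 -2, -19,  8,
                  1,  10, -4), nrow = 3, byrow = TRUE)
B %*% Binv   # the 3 x 3 identity matrix
```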
Exercise 52.
a. No, \(\mat{0}\) does not have an inverse, because we can not reconstruct the original equations from a set of equations \(0=0\).
b. In general, no: unless the original equation already had that solution set, it increases the solution set to all vectors in \(\mathbb{R}^n\), where \(n\) is the number of variables.
Applications
Exercise 53. Representing graphs as matrices
a. The adjacency matrix of the network is \[ \begin{bmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix} \]
b. If node \(i\) is connected to node \(j\), then in an undirected graph node \(j\) is also connected to node \(i\). Conversely, if node \(i\) is not connected to node \(j\), then neither is node \(j\) to node \(i\). Therefore, the entry \(a_{ij} = a_{ji}\), hence the matrix is symmetric.
c. Multiplying this matrix by itself yields \[ \begin{bmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix} \cdot \begin{bmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 1 & 0 \\ 0 & 2 & 0 & 1 \\ 1 & 0 & 2 & 0 \\ 0 & 1 & 0 & 1 \end{bmatrix} \]
d. Entry \((i,j)\) of the resulting matrix gives the number of paths of length 2 between node \(i\) and node \(j\).
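In R, parts (c) and (d) can be reproduced with one matrix product:

```r
A <- matrix(c(0, 1, 0, 0,
              1, 0, 1, 0,
              0, 1, 0, 1,
              0, 0, 1, 0), nrow = 4, byrow = TRUE)
A %*% A   # entry (i, j) counts the paths of length 2 from node i to node j
```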
13.7 Orthogonal vectors and vector spaces
Inner product
Exercise 54.
Proving the commutative property: \[ \vec{x} \cdot \vec{y} = \sum x_i y_i = \sum y_i x_i = \vec{y} \cdot \vec{x} \] Proving the distributive property \[ \vec{x} \cdot (\vec{a} + \vec{b}) = \sum x_i (a_i + b_i) = \sum \left( x_i a_i + x_i b_i \right) = \sum x_i a_i + \sum x_i b_i = \vec{x} \cdot \vec{a} + \vec{x} \cdot \vec{b} \]
Norms
Exercise 55.
\[ ||a\vec{x}|| = \sqrt{\sum_{i=1}^n a^2 x_i^2} = \sqrt{a^2 \sum_{i=1}^n x_i^2} = |a| \sqrt{\sum_{i=1}^n x_i^2} = |a| ||\vec{x}|| \]
Exercise 56.
Since \(||\vec{x}'||\) is a (positive) scalar
\[ \left| \left| \frac{\vec{x}'}{||\vec{x}'||} \right| \right| = \frac{1}{||\vec{x}'||} ||\vec{x}'|| = 1 \]
Exercise 57.
\[ \overline{x} = \frac{\vec{1}^T \vec{x} }{n} = \frac{\vec{1} \cdot \vec{x}}{n} \] Hence \[ z(\vec{x}) \equiv \frac{ \vec{x} - \frac{1}{n} \vec{1}^T \vec{x} \vec{1}}{||\vec{x} - \frac{1}{n} \vec{1}^T \vec{x} \vec{1}||} \]
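A sketch of this z-score in R; note that it divides by the vector norm of the centered data, not by the standard deviation as R's built-in scale() does:

```r
zscore <- function(x) {
  xc <- x - mean(x)      # subtract the mean from every component
  xc / sqrt(sum(xc^2))   # divide by the Euclidean norm of the centered vector
}
z <- zscore(c(2, 4, 6, 8))
mean(z)                  # 0: the z-scored vector is centered
sqrt(sum(z^2))           # 1: and has unit norm
```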
Least squares
Exercise 58.
\(\mat{X}^T\) and \(\mat{X}\) are \((m+1) \times n\) and \(n \times (m+1)\) matrices, respectively, hence \(\mat{X}^T \mat{X}\) is an \((m+1) \times (m+1)\) matrix. Then \(\mat{X}^T \mat{X} \vec{b}\) is an \((m+1) \times 1\) vector and so is \(\mat{X}^T \vec{y}\). This shows that the dimensions are compatible. Furthermore, if the columns of \(\mat{X}\) are independent then \(\mat{X}\) has full column rank (\(\rank{\mat{X}}=m+1\)) and \(\mat{X}^T\mat{X}\) will have rank \(m+1\) as well.
Applications
Exercise 59. Deconvolution using single cell RNA signatures
a. Call the unknown numbers of cells the vector \(\vec{b}\). Then, if the experiment were infinitely accurate, the following would hold:
\[ \begin{bmatrix} 5 & 300 & 500 \\ 395 & 100 & 190 \\ 200 & 540 & 10 \\ 400 & 60 & 300 \\ \end{bmatrix} \vec{b} = \begin{bmatrix} 321000 \\ 145000 \\ 362000 \\ 171000 \end{bmatrix} \] This equation is generally not solvable because we have 4 equations with 3 unknown variables. The corresponding least squares problem is
\[ \begin{align*} \begin{bmatrix} 5 & 395 & 200 & 400 \\ 300 & 100 & 540 & 60 \\ 500 & 190 & 10 & 300 \end{bmatrix} \begin{bmatrix} 5 & 300 & 500 \\ 395 & 100 & 190 \\ 200 & 540 & 10 \\ 400 & 60 & 300 \\ \end{bmatrix} \vec{b} &= \begin{bmatrix} 5 & 395 & 200 & 400 \\ 300 & 100 & 540 & 60 \\ 500 & 190 & 10 & 300 \end{bmatrix} \begin{bmatrix} 321000 \\ 145000 \\ 362000 \\ 171000 \end{bmatrix} \\ \begin{bmatrix} 356050 & 173000 & 199550 \\ 173000 & 395200 & 192400 \\ 199550 & 192400 & 376200 \end{bmatrix} \vec{b} &= \begin{bmatrix} 199680000 \\ 316540000 \\ 242970000 \end{bmatrix} \end{align*} \]
b. The numbers of cells should be \((a, b, c) \approx (106, 622, 271)\).
For example, the R code is

```r
A <- matrix(c(5, 300, 500, 395, 100, 190, 200, 540, 10, 400, 60, 300),
            nrow = 4, byrow = TRUE)
x <- matrix(c(321000, 145000, 362000, 171000), ncol = 1)
solve(t(A) %*% A, t(A) %*% x)
```

where the solve() command performs the Gauss-Jordan elimination.
c. Because we would be throwing away measurements. Since every measurement has an error, the more measurements we add the better our estimates of the numbers of cells would become. The least squares procedure “averages out” the errors between the measurements of RNA molecules. Compare it to calculating a sample mean as an estimate of the population mean: the more measurements we have, the better our estimate will become.