Linear Algebra

\[\newcommand{\qzero}[0]{\left|0\right\rangle} \newcommand{\qone}[0]{\left|1\right\rangle} \newcommand{\qplus}[0]{\left| + \right\rangle} \newcommand{\qminus}[0]{\left| - \right\rangle} \newcommand{\dirac}[1]{\left| #1 \right\rangle} \newcommand{\ident}[0]{\mathrm{I}}\]

Vectors

A column vector is an \(n\) by 1 ordered array of complex numbers.

\( \begin{align} v =& \begin{bmatrix} a \\ b \end{bmatrix} \end{align} \)

An 8 by 1 column vector will have 8 values.

\( \begin{align} v =& \begin{bmatrix} a \\ b \\ c \\ d \\ e \\ f \\ g \\ h \end{bmatrix} \end{align} \)

The magnitude \(\left| v \right|\) of a vector \(v\) is its length. For a vector with entries \(a_1,\cdots,a_{n}\), where \(\left| a_i \right|\) denotes the modulus of the complex entry \(a_i\), the magnitude is

\( \begin{align} \left| v \right| =& \sqrt{\left|a_{1}\right|^2+\left|a_{2}\right|^2+\cdots+\left|a_{n}\right|^2} \end{align} \)

For example, the following vector has a magnitude of 1.

\( \begin{align} \left| v \right| =& \left| \begin{bmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{bmatrix} \right| \\ =& \sqrt{ \left(\frac{1}{\sqrt{2}}\right)^2 + \left(\frac{1}{\sqrt{2}}\right)^2 } \\ =& \sqrt{1} = 1 \end{align} \)

Square Matrix

A square matrix is an \(n\) by \(n\) array of complex numbers. A matrix represents a linear transformation of a vector.

\( \begin{align} M=& \begin{bmatrix} w & x \\ y & z \end{bmatrix} \end{align} \)

Multiplying a vector by a matrix applies a linear transformation to that vector.

\( \begin{align} Mv =& \begin{bmatrix} w & x \\ y & z \end{bmatrix} \begin{bmatrix} a \\ b \end{bmatrix} \\ =& \begin{bmatrix} aw+bx \\ ay+bz \end{bmatrix} \end{align} \)
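
For example, with matrix entries chosen purely for illustration, the following matrix swaps the two components of the vector it acts on.

\( \begin{align} \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} a \\ b \end{bmatrix} =& \begin{bmatrix} 0 \cdot a + 1 \cdot b \\ 1 \cdot a + 0 \cdot b \end{bmatrix} \\ =& \begin{bmatrix} b \\ a \end{bmatrix} \end{align} \)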

Operations

The transpose \(M^{T}\) of a matrix is formed by turning all rows into columns and all columns into rows.

\( \begin{align} M=& \begin{bmatrix} w & x \\ y & z \end{bmatrix} \\ M^{T}=& \begin{bmatrix}w & y \\ x & z \end{bmatrix} \end{align} \)
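
As a small example with entries chosen for illustration, the rows of a matrix become the columns of its transpose.

\( \begin{align} \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}^{T} =& \begin{bmatrix} 1 & 3 \\ 2 & 4 \end{bmatrix} \end{align} \)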

The Hermitian conjugate \(M^{\dagger}\) of a matrix is its transpose with every entry replaced by its complex conjugate.

\( \begin{align} M=& \begin{bmatrix} w & x \\ y & z \end{bmatrix} \\ M^{\dagger}=& \begin{bmatrix}w^{*} & y^{*} \\ x^{*} & z^{*} \end{bmatrix} \end{align} \)
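
For example, with complex entries chosen purely for illustration, each entry is both moved and conjugated.

\( \begin{align} \begin{bmatrix} 1 & i \\ 2-i & 3 \end{bmatrix}^{\dagger} =& \begin{bmatrix} 1 & 2+i \\ -i & 3 \end{bmatrix} \end{align} \)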

The dot product gives one way to multiply two matrices: each entry of the product is the dot product of a row of the first matrix with the corresponding column of the second.

\( \begin{align} \begin{bmatrix} a & b \\ c & d \\ \end{bmatrix} \begin{bmatrix} w & x \\ y & z \\ \end{bmatrix} = \begin{bmatrix} aw+by & ax+bz \\ cw+dy & cx+dz \\ \end{bmatrix} \end{align} \)
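
As a numerical check, with values chosen purely for illustration, the entry in row 1, column 1 of the product is the dot product of row 1 of the first matrix with column 1 of the second.

\( \begin{align} \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} =& \begin{bmatrix} 1 \cdot 5 + 2 \cdot 7 & 1 \cdot 6 + 2 \cdot 8 \\ 3 \cdot 5 + 4 \cdot 7 & 3 \cdot 6 + 4 \cdot 8 \end{bmatrix} \\ =& \begin{bmatrix} 19 & 22 \\ 43 & 50 \end{bmatrix} \end{align} \)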

The tensor product is another way to multiply two matrices. Every element of the first matrix is multiplied by every element of the second matrix.

\( \begin{align} \begin{bmatrix} a_{1,1} & a_{1,2} \\ a_{2,1} & a_{2,2} \end{bmatrix} \otimes \begin{bmatrix} b_{1,1} & b_{1,2} \\ b_{2,1} & b_{2,2} \end{bmatrix} =& \begin{bmatrix} a_{1,1} b_{1,1} & a_{1,1} b_{1,2} & a_{1,2}b_{1,1} & a_{1,2}b_{1,2} \\ a_{1,1} b_{2,1} & a_{1,1} b_{2,2} & a_{1,2}b_{2,1} & a_{1,2}b_{2,2} \\ a_{2,1} b_{1,1} & a_{2,1} b_{1,2} & a_{2,2}b_{1,1} & a_{2,2}b_{1,2} \\ a_{2,1} b_{2,1} & a_{2,1} b_{2,2} & a_{2,2}b_{2,1} & a_{2,2}b_{2,2} \\ \end{bmatrix} \end{align} \)
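
For example, with values chosen purely for illustration, the tensor product of two 2 by 2 matrices is a 4 by 4 matrix.

\( \begin{align} \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \otimes \begin{bmatrix} 0 & 5 \\ 6 & 7 \end{bmatrix} =& \begin{bmatrix} 1 \cdot 0 & 1 \cdot 5 & 2 \cdot 0 & 2 \cdot 5 \\ 1 \cdot 6 & 1 \cdot 7 & 2 \cdot 6 & 2 \cdot 7 \\ 3 \cdot 0 & 3 \cdot 5 & 4 \cdot 0 & 4 \cdot 5 \\ 3 \cdot 6 & 3 \cdot 7 & 4 \cdot 6 & 4 \cdot 7 \end{bmatrix} \\ =& \begin{bmatrix} 0 & 5 & 0 & 10 \\ 6 & 7 & 12 & 14 \\ 0 & 15 & 0 & 20 \\ 18 & 21 & 24 & 28 \end{bmatrix} \end{align} \)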

A Hilbert space, named after David Hilbert, is a vector space of \(n\) dimensions equipped with an inner product, which allows lengths and angles to be computed. The inner product of vectors \(v\) and \(w\) in our Hilbert space is defined below.

\( \begin{align} (v,w) =& v^{\dagger} w \\ =& \sum_{i=1}^{n} v_{i}^{*}w_{i} \end{align} \)
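
For example, the inner product of the unit vector from the magnitude example with itself is 1. Because its entries are real, taking the complex conjugate leaves them unchanged; in general \(\left| v \right| = \sqrt{(v,v)}\).

\( \begin{align} (v,v) =& \begin{bmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{bmatrix} \begin{bmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{bmatrix} \\ =& \frac{1}{2} + \frac{1}{2} = 1 \end{align} \)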