Chapter 2

Operators and Eigenfunctions

Problem 1:

For vector \(\vec{A}=A_x\hat{\imath}+A_y\hat{\jmath}\) and matrix \(\bar{\bar{R}}=\begin{pmatrix} \cos{\theta}&\sin{\theta}\\-\sin{\theta}&\cos{\theta}\end{pmatrix}\), what does operator \(\bar{\bar{R}}\) do to vector \(\vec{A}\)? (Hint: consider \(\bar{\bar{R}}\vec{A}\) for the cases \(\theta=90^\circ\) and \(\theta=180^\circ\).)

The process of a matrix operating on a vector is shown in Eq. 2.3:
\begin{equation*}
\bar{\bar{R}}\vec{A}=\begin{pmatrix}R_{11}&R_{12}\\R_{21}&R_{22}\end{pmatrix}\begin{pmatrix}A_1\\A_2\end{pmatrix}=\begin{pmatrix}R_{11}A_1+R_{12}A_2\\R_{21}A_1+R_{22}A_2\end{pmatrix}.
\end{equation*}

In this case, operating on vector \(\vec{A}=A_x\hat{\imath}+A_y\hat{\jmath}\) with matrix \(\bar{\bar{R}}=\begin{pmatrix} \cos{\theta}&\sin{\theta}\\-\sin{\theta}&\cos{\theta}\end{pmatrix}\) looks like this:
\begin{equation*}
\bar{\bar{R}}\vec{A}=\begin{pmatrix}\cos{\theta}&\sin{\theta}\\-\sin{\theta}&\cos{\theta}\end{pmatrix}\begin{pmatrix}A_x\\A_y\end{pmatrix}=\begin{pmatrix}A_x\cos{\theta}+A_y\sin{\theta}\\-A_x\sin{\theta}+A_y\cos{\theta}\end{pmatrix},
\end{equation*}

and using \(\theta=90^\circ\) in this expression gives
\begin{align*}
\bar{\bar{R}}\vec{A}&=\begin{pmatrix}A_x\cos{90^\circ}+A_y\sin{90^\circ}\\-A_x\sin{90^\circ}+A_y\cos{90^\circ}\end{pmatrix}\\
&=\begin{pmatrix}A_x(0)+A_y(1)\\-A_x(1)+A_y(0)\end{pmatrix}=\begin{pmatrix}A_y\\-A_x\end{pmatrix}.
\end{align*}

So operating on vector \(\vec{A}\) with matrix \(\bar{\bar{R}}\) has produced a new vector, and the x-component of that new vector is the y-component of \(\vec{A}\), while the y-component of the new vector is the negative of the x-component of \(\vec{A}\).

That means that if vector \(\vec{A}\) originally pointed along the x-axis (so \(A_x\) was positive and \(A_y\) was zero), the new vector will point along the negative y-axis (since its x-component will be zero and its y-component will be negative).

Note also that the new vector has the same magnitude as \(\vec{A}\), since the sum of the squares of the components has not changed. Thus operating on \(\vec{A}\) with \(\bar{\bar{R}}\) using \(\theta=90^\circ\) rotates vector \(\vec{A}\) through an angle of \(90^\circ\) in the clockwise direction.

For \(\theta=180^\circ\), the operation is
\begin{align*}
\bar{\bar{R}}\vec{A}&=\begin{pmatrix}A_x\cos{180^\circ}+A_y\sin{180^\circ}\\-A_x\sin{180^\circ}+A_y\cos{180^\circ}\end{pmatrix}\\
&=\begin{pmatrix}A_x(-1)+A_y(0)\\-A_x(0)+A_y(-1)\end{pmatrix}=\begin{pmatrix}-A_x\\-A_y\end{pmatrix}=-\vec{A}.
\end{align*}

So in this case the new vector produced by operating on vector \(\vec{A}\) with matrix \(\bar{\bar{R}}\) is equal to \(-\vec{A}\), that is, it has the same magnitude as \(\vec{A}\) but points in the opposite direction. Just as in the \(\theta=90^\circ\) case, matrix \(\bar{\bar{R}}\) has rotated vector \(\vec{A}\) through angle \(\theta\) in the clockwise direction.
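If you'd like a quick numerical check of these two cases, here is a minimal NumPy sketch (an illustration, not part of the text; the sample vector is arbitrary):
\begin{verbatim}
import numpy as np

def R(theta):
    # Rotation matrix from Problem 1 (theta in radians)
    return np.array([[ np.cos(theta), np.sin(theta)],
                     [-np.sin(theta), np.cos(theta)]])

A = np.array([3.0, 1.0])    # sample vector (Ax, Ay)

print(R(np.pi/2) @ A)       # ~ [ 1, -3]: (Ay, -Ax), a 90-degree clockwise rotation
print(R(np.pi) @ A)         # ~ [-3, -1]: (-Ax, -Ay), the vector reversed
\end{verbatim}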

Problem 2:

Show that the complex vectors \(\begin{pmatrix} 1\\i \end{pmatrix}\) and \(\begin{pmatrix} 1\\-i \end{pmatrix}\) are eigenvectors of matrix \(\bar{\bar{R}}\) in Problem 1, and find the eigenvalues of each eigenvector.

Eigenvalues and eigenvectors must satisfy the eigenvalue equation (Eq. 2.6):
\begin{equation*}
\bar{\bar{R}}\vec{A}=\lambda\vec{A}.
\end{equation*}

Inserting \(\bar{\bar{R}}=\begin{pmatrix} \cos{\theta}&\sin{\theta}\\-\sin{\theta}&\cos{\theta}\end{pmatrix}\) and vector \(\vec{A}=\begin{pmatrix} 1\\i \end{pmatrix}\) into Eq. 2.6 gives
\begin{equation*}
\begin{pmatrix} \cos{\theta}&\sin{\theta}\\-\sin{\theta}&\cos{\theta}\end{pmatrix}\begin{pmatrix} 1\\i \end{pmatrix}=\lambda\begin{pmatrix} 1\\i \end{pmatrix},
\end{equation*}
and performing the matrix multiplication gives
\begin{equation*}
\begin{pmatrix} \cos{\theta}(1)+\sin{\theta}(i)\\-\sin{\theta}(1)+\cos{\theta}(i) \end{pmatrix}=\lambda\begin{pmatrix} 1\\i \end{pmatrix}.
\end{equation*}

But Euler’s relation tells you that \(\cos{\theta}+i\sin{\theta}=e^{i\theta}\) and that \(-\sin{\theta}+i\cos{\theta}=ie^{i\theta}\), so the eigenvalue equation in this case is
\begin{equation*}
\begin{pmatrix} e^{i\theta}\\ie^{i\theta} \end{pmatrix}=\lambda\begin{pmatrix} 1\\i \end{pmatrix},
\end{equation*}

and pulling a factor of \(e^{i\theta}\) out of the left side of this expression gives
\begin{equation*}
e^{i\theta}\begin{pmatrix} 1\\i \end{pmatrix}=\lambda\begin{pmatrix} 1\\i \end{pmatrix}
\end{equation*}

which is true if \(\lambda=e^{i\theta}\).

The same process for the vector \(\vec{A}=\begin{pmatrix} 1\\-i \end{pmatrix}\) gives
\begin{equation*}
\begin{pmatrix} \cos{\theta}&\sin{\theta}\\-\sin{\theta}&\cos{\theta}\end{pmatrix}\begin{pmatrix} 1\\-i \end{pmatrix}=\lambda\begin{pmatrix} 1\\-i \end{pmatrix}
\end{equation*}

or
\begin{equation*}
\begin{pmatrix} \cos{\theta}(1)+\sin{\theta}(-i)\\-\sin{\theta}(1)+\cos{\theta}(-i) \end{pmatrix}=\lambda\begin{pmatrix} 1\\-i \end{pmatrix}.
\end{equation*}

Using the fact that \(\cos{\theta}-i\sin{\theta}=e^{-i\theta}\) and that \(-\sin{\theta}-i\cos{\theta}=-ie^{-i\theta}\) makes the eigenvalue equation
\begin{equation*}
\begin{pmatrix} e^{-i\theta}\\-ie^{-i\theta} \end{pmatrix}=\lambda\begin{pmatrix} 1\\-i \end{pmatrix},
\end{equation*}

and pulling a factor of \(e^{-i\theta}\) out of the left side of this expression gives
\begin{equation*}
e^{-i\theta}\begin{pmatrix} 1\\-i \end{pmatrix}=\lambda\begin{pmatrix} 1\\-i \end{pmatrix}
\end{equation*}

which is true if \(\lambda=e^{-i\theta}\).

Hence the vectors \(\begin{pmatrix} 1\\i \end{pmatrix}\) and \(\begin{pmatrix} 1\\-i \end{pmatrix}\) are eigenvectors of the rotation matrix from Problem 1, with eigenvalues of \(e^{i\theta}\) and \(e^{-i\theta}\), respectively.
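As a numerical sanity check (a sketch, not from the text; the angle is arbitrary), you can verify that \(\bar{\bar{R}}\) merely multiplies each of these complex vectors by \(e^{\pm i\theta}\):
\begin{verbatim}
import numpy as np

theta = 0.7    # any angle (radians)
R = np.array([[ np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]], dtype=complex)

for v, lam in [(np.array([1,  1j]), np.exp( 1j*theta)),
               (np.array([1, -1j]), np.exp(-1j*theta))]:
    # R @ v should equal lam * v if v is an eigenvector with eigenvalue lam
    print(np.allclose(R @ v, lam * v))    # True, True
\end{verbatim}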

Problem 3:

The discussion around Eq. 2.8 shows that \(\sin{(kx)}\) is not an eigenfunction of the spatial first-derivative operator \(d/dx\). Is \(\cos{(kx)}\) an eigenvector of that operator? What about \(\cos{(kx)}+i\sin{(kx)}\) or \(\cos{(kx)}-i\sin{(kx)}\)? If so, find the eigenvalues for these eigenvectors.

As described in Section 2.1, to determine whether the function \(f(x)=\cos{kx}\) is an eigenfunction of the spatial first-derivative operator \(d/dx=\widehat{D}\), apply \(\widehat{D}\) to \(f(x)\) and see if the result is proportional to \(f(x)\):
\begin{equation*}
\widehat{D}f(x)=\frac{d(\cos{kx})}{dx}=-k\sin{kx}\overset{?}{=}\lambda(\cos{kx}),
\end{equation*}

and since there’s no constant \(\lambda\) that, multiplied by \(\cos{kx}\), can produce \(-k\sin{kx}\), the function \(\cos{(kx)}\) is not an eigenvector of the spatial first-derivative operator \(d/dx\).

Using the same logic for the functions \(\cos{(kx)}+i\sin{(kx)}\) and \(\cos{(kx)}-i\sin{(kx)}\), applying the spatial first-derivative operator \(d/dx=\widehat{D}\) gives
\begin{align*}
\widehat{D}f(x)&=\frac{d(\cos{kx}+i\sin{kx})}{dx}\\
&=-k\sin{kx}+ik\cos{kx}\overset{?}{=}\lambda(\cos{kx}+i\sin{kx})
\end{align*}

and
\begin{align*}
\widehat{D}f(x)&=\frac{d(\cos{kx}-i\sin{kx})}{dx}\\
&=-k\sin{kx}-ik\cos{kx}\overset{?}{=}\lambda(\cos{kx}-i\sin{kx}).
\end{align*}

And since \(-k\sin{kx}+ik\cos{kx}=ik(\cos{kx}+i\sin{kx})\), the first expression is
\begin{equation*}
\widehat{D}f(x)=ik(\cos{kx}+i\sin{kx})\overset{?}{=}\lambda(\cos{kx}+i\sin{kx})
\end{equation*}

which is true if \(\lambda=ik\).

Likewise, since \(-k\sin{kx}-ik\cos{kx}=-ik(\cos{kx}-i\sin{kx})\), the second expression is
\begin{equation*}
\widehat{D}f(x)=-ik(\cos{kx}-i\sin{kx})\overset{?}{=}\lambda(\cos{kx}-i\sin{kx})
\end{equation*}

which is true if \(\lambda=-ik\).

Thus both \(\cos{kx}+i\sin{kx}\) and \(\cos{kx}-i\sin{kx}\) are eigenvectors of the spatial first-derivative operator \(d/dx\) with eigenvalues of \(ik\) and \(-ik\), respectively.
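A symbolic check of these results (a SymPy sketch, not part of the text):
\begin{verbatim}
import sympy as sp

x, k = sp.symbols('x k', real=True)
f_plus  = sp.cos(k*x) + sp.I*sp.sin(k*x)    # equals exp(i k x)
f_minus = sp.cos(k*x) - sp.I*sp.sin(k*x)    # equals exp(-i k x)

# d/dx f minus lambda*f should simplify to zero for lambda = ik and -ik
print(sp.simplify(sp.diff(f_plus, x)  - sp.I*k*f_plus))     # 0
print(sp.simplify(sp.diff(f_minus, x) + sp.I*k*f_minus))    # 0

# For cos(kx) alone, the ratio of the derivative to the function
# is not a constant, so cos(kx) is not an eigenfunction:
print(sp.simplify(sp.diff(sp.cos(k*x), x) / sp.cos(k*x)))   # -k*tan(k*x)
\end{verbatim}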

Problem 4:

If operator \(\widehat{M}\) has matrix representation \(\bar{\bar{M}}=\begin{pmatrix} 2&1+i\\1-i&3\end{pmatrix}\) in 2-D Cartesian coordinates,

  1. Show that \(\begin{pmatrix} 1+i\\-1 \end{pmatrix}\) and \(\begin{pmatrix} \frac{1+i}2\\1 \end{pmatrix}\) are eigenvectors of \(\widehat{M}\).
  2. Normalize these eigenvectors and show that they’re orthogonal.
  3. Find the eigenvalues for these eigenvectors.
  4. Find the matrix representation of operator \(\widehat{M}\) in the basis system of these eigenvectors.

Part (a):

If vectors \(\begin{pmatrix} 1+i\\-1 \end{pmatrix}\) and \(\begin{pmatrix} \frac{1+i}2\\1 \end{pmatrix}\) are eigenvectors of \(\widehat{M}\), they must satisfy the eigenvalue equation (Eq. 2.6):
\begin{equation*}
\bar{\bar{R}}\vec{A}=\lambda\vec{A}.
\end{equation*}

Inserting \(\bar{\bar{M}}=\begin{pmatrix} 2&1+i\\1-i&3\end{pmatrix}\) and vectors \(\begin{pmatrix} 1+i\\-1 \end{pmatrix}\) and \(\begin{pmatrix} \frac{1+i}2\\1 \end{pmatrix}\) into Eq. 2.6 gives
\begin{equation*}
\begin{pmatrix} 2&1+i\\1-i&3\end{pmatrix}\begin{pmatrix} 1+i\\-1 \end{pmatrix}=\lambda\begin{pmatrix} 1+i\\-1 \end{pmatrix}
\end{equation*}

and
\begin{equation*}
\begin{pmatrix} 2&1+i\\1-i&3\end{pmatrix}\begin{pmatrix} \frac{1+i}2\\1 \end{pmatrix}=\lambda\begin{pmatrix} \frac{1+i}2\\1 \end{pmatrix}.
\end{equation*}

Performing the matrix multiplications gives
\begin{equation*}
\begin{pmatrix} (2)(1+i)+(1+i)(-1)\\(1-i)(1+i)+(3)(-1) \end{pmatrix}=\lambda\begin{pmatrix} 1+i\\-1 \end{pmatrix}
\end{equation*}

and
\begin{equation*}
\begin{pmatrix} (2)(\frac{1+i}2)+(1+i)(1)\\(1-i)(\frac{1+i}2)+(3)(1) \end{pmatrix}=\lambda\begin{pmatrix} \frac{1+i}2\\1 \end{pmatrix},
\end{equation*}

or

\begin{equation*}
\begin{pmatrix} 1+i\\-1 \end{pmatrix}=\lambda\begin{pmatrix} 1+i\\-1 \end{pmatrix}
\end{equation*}

and
\begin{equation*}
\begin{pmatrix} 2(1+i)\\4 \end{pmatrix}=\lambda\begin{pmatrix} \frac{1+i}2\\1 \end{pmatrix}.
\end{equation*}

Since each of these results is a constant multiple of the original vector, both vectors are eigenvectors of \(\widehat{M}\).

Part (b):

Recall that you can normalize a vector or function by dividing by its magnitude, and that the inner product between two orthogonal vectors or functions must be zero.

The vectors specified in this problem have magnitudes given by
\begin{equation*}
\vert \vec{\epsilon}_1 \vert =\sqrt{\braket{\epsilon_1\vert\epsilon_1}}=\sqrt{\begin{pmatrix}1-i & -1\end{pmatrix}\begin{pmatrix} 1+i\\-1 \end{pmatrix}}
\end{equation*}

and

\begin{equation*}
\vert \vec{\epsilon}_2 \vert =\sqrt{\braket{\epsilon_2\vert\epsilon_2}}=\sqrt{\begin{pmatrix}\frac{1-i}{2}&1\end{pmatrix}\begin{pmatrix} \frac{1+i}{2}\\1 \end{pmatrix}}.
\end{equation*}

Carrying out these multiplications gives
\begin{equation*}
\vert \vec{\epsilon}_1 \vert =\sqrt{(1-i)(1+i)+(-1)(-1)}=\sqrt{3}
\end{equation*}

and

\begin{equation*}
\vert \vec{\epsilon}_2 \vert =\sqrt{\left(\frac{1-i}{2}\right)\left(\frac{1+i}{2}\right)+(1)(1)}=\sqrt{\frac{3}{2}}.
\end{equation*}

So dividing \(\vec{\epsilon}_1\) and \(\vec{\epsilon}_2\) by these values will normalize them. Thus the normalized eigenvectors are
\begin{equation*}
\vec{\epsilon}_1=\frac{1}{\sqrt{3}}\begin{pmatrix} 1+i\\-1 \end{pmatrix}
\end{equation*}

and
\begin{equation*}
\vec{\epsilon}_2=\sqrt{\frac{2}{3}}\begin{pmatrix} \frac{1+i}2\\1 \end{pmatrix}.
\end{equation*}

To show that these eigenvectors are orthogonal, take the inner product \(\braket{\epsilon_1\vert\epsilon_2}\) (remembering to complex-conjugate the components of the bra):
\begin{equation*}
\braket{\epsilon_1\vert\epsilon_2}=\frac{1}{\sqrt{3}}\sqrt{\frac{2}{3}}\left[(1-i)\left(\frac{1+i}{2}\right)+(-1)(1)\right]=\frac{\sqrt{2}}{3}(1-1)=0,
\end{equation*}
so the eigenvectors are indeed orthogonal.

Part (c):

In Part (a) of this problem, the eigenvalue equation led to the following expressions:
\begin{equation*}
\begin{pmatrix} 1+i\\-1 \end{pmatrix}=\lambda\begin{pmatrix} 1+i\\-1 \end{pmatrix}
\end{equation*}

and
\begin{equation*}
\begin{pmatrix} 2(1+i)\\4 \end{pmatrix}=\lambda\begin{pmatrix} \frac{1+i}2\\1 \end{pmatrix}.
\end{equation*}

Solving these equations for the \(\lambda\)’s gives the eigenvalues, and those values are 1 for the vector \(\begin{pmatrix} 1+i\\-1 \end{pmatrix}\) and 4 for the vector \(\begin{pmatrix} \frac{1+i}2\\1 \end{pmatrix}\).

Part (d):

You can determine the matrix form of an operator in a basis system with basis vectors \(\hat{\epsilon}_i\) using Eq. 2.16:
\begin{equation*}
A_{ij}=\bra{\epsilon_i}\widehat{A}\ket{\epsilon_j}
\end{equation*}

Inserting the normalized basis vector \(\vec{\epsilon}_1=\frac{1}{\sqrt{3}}\begin{pmatrix} 1+i\\-1 \end{pmatrix}\) into this expression gives
\begin{equation*}
M_{11}=\bra{\epsilon_1}\widehat{M}\ket{\epsilon_1}=\frac{1}{\sqrt{3}}\begin{pmatrix} 1-i&-1 \end{pmatrix}\begin{pmatrix} 2&1+i\\1-i&3\end{pmatrix}\frac{1}{\sqrt{3}}\begin{pmatrix} 1+i\\-1 \end{pmatrix},
\end{equation*}

and pulling out the constants and performing the right-side matrix multiplication gives
\begin{align*}
M_{11}&=\frac{1}{3}\begin{pmatrix} 1-i&-1 \end{pmatrix}\begin{pmatrix} (2)(1+i)+(1+i)(-1)\\(1-i)(1+i)+(3)(-1)\end{pmatrix}\\
&=\frac{1}{3}\begin{pmatrix} 1-i&-1 \end{pmatrix}\begin{pmatrix} 1+i\\-1\end{pmatrix}.
\end{align*}

Performing this matrix multiplication gives
\begin{equation*}
M_{11}=\frac{1}{3}[(1-i)(1+i)+(-1)(-1)]=\frac{1}{3}(3)=1.
\end{equation*}

Using the same process for elements \(M_{12}\), \(M_{21}\), and \(M_{22}\) gives
\begin{equation*}
M_{12}=\bra{\epsilon_1}\widehat{M}\ket{\epsilon_2}=\frac{1}{\sqrt{3}}\begin{pmatrix} 1-i&-1 \end{pmatrix}\begin{pmatrix} 2&1+i\\1-i&3\end{pmatrix}\sqrt{\frac{2}{3}}\begin{pmatrix} \frac{1+i}2\\1 \end{pmatrix}
\end{equation*}

\begin{equation*}
M_{21}=\bra{\epsilon_2}\widehat{M}\ket{\epsilon_1}=\sqrt{\frac{2}{3}}\begin{pmatrix} \frac{1-i}2&1 \end{pmatrix}\begin{pmatrix} 2&1+i\\1-i&3\end{pmatrix}\frac{1}{\sqrt{3}}\begin{pmatrix} 1+i\\-1 \end{pmatrix}
\end{equation*}

\begin{equation*}
M_{22}=\bra{\epsilon_2}\widehat{M}\ket{\epsilon_2}=\sqrt{\frac{2}{3}}\begin{pmatrix} \frac{1-i}2&1 \end{pmatrix}\begin{pmatrix} 2&1+i\\1-i&3\end{pmatrix}\sqrt{\frac{2}{3}}\begin{pmatrix} \frac{1+i}2\\1 \end{pmatrix}.
\end{equation*}

Performing these matrix multiplications gives
\begin{align*}
M_{12}&=\frac{\sqrt{2}}{3}\begin{pmatrix} 1-i&-1 \end{pmatrix}\begin{pmatrix} (2)(\frac{1+i}{2})+(1+i)(1)\\(1-i)(\frac{1+i}{2})+(3)(1)\end{pmatrix}\\
&=\frac{\sqrt{2}}{3}\begin{pmatrix} 1-i&-1 \end{pmatrix}\begin{pmatrix} (2)(1+i)\\4\end{pmatrix}\\
&=\frac{\sqrt{2}}{3}[(1-i)(2)(1+i)+(-1)(4)]=0.
\end{align*}
\begin{align*}
M_{21}&=\frac{\sqrt{2}}{3}\begin{pmatrix} \frac{1-i}2&1 \end{pmatrix}\begin{pmatrix} (2)(1+i)+(1+i)(-1)\\(1-i)(1+i)+(3)(-1)\end{pmatrix}\\
&=\frac{\sqrt{2}}{3}\begin{pmatrix} \frac{1-i}2&1 \end{pmatrix}\begin{pmatrix}1+i\\-1 \end{pmatrix}\\
&=\frac{\sqrt{2}}{3}\left[\left(\frac{1-i}{2}\right)(1+i)+(1)(-1)\right]=0
\end{align*}

and

\begin{align*}
M_{22}&=\frac{2}{3}\begin{pmatrix} \frac{1-i}{2}&1 \end{pmatrix}\begin{pmatrix} (2)\left(\frac{1+i}{2}\right)+(1+i)(1)\\(1-i)\left(\frac{1+i}{2}\right)+(3)(1)\end{pmatrix}\\
&=\frac{2}{3}\begin{pmatrix} \frac{1-i}{2}&1 \end{pmatrix}\begin{pmatrix} (2)(1+i)\\4\end{pmatrix}\\
&=\frac{2}{3}\left[\left(\frac{1-i}{2}\right)(2)(1+i)+(1)(4)\right]=4.
\end{align*}

So the matrix representation of operator \(\widehat{M}\) in the basis system of its normalized eigenvectors is
\begin{equation*}
\bar{\bar{M}}=\begin{pmatrix} 1&0\\0&4\end{pmatrix},
\end{equation*}

as you may have guessed, since any matrix expressed in the basis of its own normalized eigenvectors must be diagonal, with the diagonal elements equal to the eigenvalues of those eigenvectors.
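All four parts of this problem can be verified numerically; here is a minimal NumPy sketch (an illustration, not part of the text):
\begin{verbatim}
import numpy as np

M  = np.array([[2, 1+1j], [1-1j, 3]])
e1 = np.array([1+1j, -1]) / np.sqrt(3)            # normalized eigenvectors
e2 = np.array([(1+1j)/2, 1]) * np.sqrt(2/3)

# Parts (a) and (c): eigenvectors with eigenvalues 1 and 4
print(np.allclose(M @ e1, 1*e1), np.allclose(M @ e2, 4*e2))    # True True

# Part (b): orthogonality (np.vdot conjugates its first argument, forming the bra)
print(np.isclose(np.vdot(e1, e2), 0))                          # True

# Part (d): matrix elements M_ij = <e_i| M |e_j> in the eigenvector basis
basis  = [e1, e2]
Mprime = np.array([[np.vdot(bi, M @ bj) for bj in basis] for bi in basis])
print(np.round(Mprime.real, 10))                               # [[1. 0.] [0. 4.]]
\end{verbatim}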

Problem 5:

Consider the matrices \(\bar{\bar{A}}=\begin{pmatrix} 5&0\\0&i\end{pmatrix}\) and \(\bar{\bar{B}}=\begin{pmatrix} 3+i&0\\0&2\end{pmatrix}\).

  1. Do these matrices commute?
  2. Do matrices \(\bar{\bar{C}}=\begin{pmatrix} a&0\\0&b\end{pmatrix}\) and \(\bar{\bar{D}}=\begin{pmatrix} c&0\\0&d\end{pmatrix}\) commute?
  3. For matrices \(\bar{\bar{E}}=\begin{pmatrix} 2&i\\3&5i\end{pmatrix}\) and \(\bar{\bar{F}}=\begin{pmatrix} a&b\\c&d\end{pmatrix}\) find the relationships between \(a\), \(b\), \(c\), and \(d\) that ensure that \(\bar{\bar{E}}\) and \(\bar{\bar{F}}\) commute.

Part (a):

As explained in Section 2.2, two operators (and the matrices representing them) commute if their commutator, defined by Eq. 2.18, equals zero.

The commutator \([\bar{\bar{A}},\bar{\bar{B}}]\) is defined as
\begin{equation*}
[\bar{\bar{A}},\bar{\bar{B}}]=\bar{\bar{A}}\bar{\bar{B}}-\bar{\bar{B}}\bar{\bar{A}}
\end{equation*}

and plugging in the two matrices given in this problem makes the commutator
\begin{equation*}
[\bar{\bar{A}},\bar{\bar{B}}]=\begin{pmatrix}5&0\\0&i\end{pmatrix}\begin{pmatrix}3+i&0\\0&2\end{pmatrix}-\begin{pmatrix}3+i&0\\0&2\end{pmatrix}\begin{pmatrix}5&0\\0&i\end{pmatrix}.
\end{equation*}

Performing the matrix multiplications gives
\begin{align*}
[\bar{\bar{A}},\bar{\bar{B}}]&=\begin{pmatrix}(5)(3+i)+(0)(0)&(5)(0)+(0)(2)\\(0)(3+i)+(i)(0)&(0)(0)+(i)(2)\end{pmatrix}\\
&\hspace{0.5cm}-\begin{pmatrix}(3+i)(5)+(0)(0)&(3+i)(0)+(0)(i)\\(0)(5)+(2)(0)&(0)(0)+(2)(i)\end{pmatrix}
\end{align*}

or
\begin{equation*}
[\bar{\bar{A}},\bar{\bar{B}}]=\begin{pmatrix}15+5i&0\\0&2i\end{pmatrix}-\begin{pmatrix}15+5i&0\\0&2i\end{pmatrix}=0
\end{equation*}
so these matrices do commute.

Part (b):

Using the same approach as in Part (a), to determine whether matrices \(\bar{\bar{C}}=\begin{pmatrix} a&0\\0&b\end{pmatrix}\) and \(\bar{\bar{D}}=\begin{pmatrix} c&0\\0&d\end{pmatrix}\) commute, form the commutator
\begin{equation*}
[\bar{\bar{C}},\bar{\bar{D}}]=\bar{\bar{C}}\bar{\bar{D}}-\bar{\bar{D}}\bar{\bar{C}}
\end{equation*}
to see if it equals zero. In this case the commutator is
\begin{equation*}
[\bar{\bar{C}},\bar{\bar{D}}]=\begin{pmatrix} a&0\\0&b\end{pmatrix}\begin{pmatrix} c&0\\0&d\end{pmatrix}-\begin{pmatrix} c&0\\0&d\end{pmatrix}\begin{pmatrix} a&0\\0&b\end{pmatrix}
\end{equation*}

and performing the matrix multiplications gives
\begin{equation*}
[\bar{\bar{C}},\bar{\bar{D}}]=\begin{pmatrix} ac&0\\0&bd\end{pmatrix}-\begin{pmatrix} ca&0\\0&db\end{pmatrix}=0,
\end{equation*}

since \(ac=ca\) and \(bd=db\). So these matrices do commute, which illustrates the fact that any two diagonal matrices (that is, matrices in which all off-diagonal elements are zero) always commute.

Part (c):

If matrices \(\bar{\bar{E}}\) and \(\bar{\bar{F}}\) commute, the commutator \([\bar{\bar{E}},\bar{\bar{F}}]=\bar{\bar{E}}\bar{\bar{F}}-\bar{\bar{F}}\bar{\bar{E}}\) must equal zero.

In this case, the commutator is
\begin{equation*}
[\bar{\bar{E}},\bar{\bar{F}}]=\begin{pmatrix} 2&i\\3&5i\end{pmatrix}\begin{pmatrix} a&b\\c&d\end{pmatrix}-\begin{pmatrix} a&b\\c&d\end{pmatrix}\begin{pmatrix} 2&i\\3&5i\end{pmatrix}
\end{equation*}

or
\begin{align*}
[\bar{\bar{E}},\bar{\bar{F}}]&=\begin{pmatrix} (2)(a)+(i)(c)&(2)(b)+(i)(d)\\(3)(a)+(5i)(c)&(3)(b)+(5i)(d)\end{pmatrix}\\
&\hspace{0.5cm}-\begin{pmatrix} (a)(2)+(b)(3)&(a)(i)+(b)(5i)\\(c)(2)+(d)(3)&(c)(i)+(d)(5i)\end{pmatrix}.
\end{align*}

For the difference between these two matrices to equal zero, the difference between each pair of corresponding elements must equal zero. This leads to four (non-independent) equations involving the four unknowns \(a\), \(b\), \(c\), and \(d\):
\begin{align*}
2a+ic&=2a+3b\\
2b+id&=ia+5ib\\
3a+5ic&=2c+3d\\
3b+5id&=ic+5id
\end{align*}
which can be solved to give two of the variables (for example, \(c\) and \(d\)) in terms of the other two, \(a\) and \(b\).

The first of these equations is readily solved for \(c\) in terms of \(b\), giving
\begin{equation*}
c=\frac{3b}{i}=-3ib
\end{equation*}

and the second equation can be solved to give \(d\) in terms of \(a\) and \(b\):
\begin{equation*}
d=\frac{ia}{i}+\frac{5i-2}{i}b=a+(5+2i)b
\end{equation*}
so as long as these relationships are satisfied, these matrices will commute.
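A numerical spot-check of Parts (a) and (c) (a sketch, not from the text; the values of \(a\) and \(b\) below are arbitrary):
\begin{verbatim}
import numpy as np

def comm(X, Y):
    # Commutator [X, Y] = XY - YX
    return X @ Y - Y @ X

# Part (a): the two diagonal matrices
A = np.array([[5, 0], [0, 1j]])
B = np.array([[3+1j, 0], [0, 2]])
print(np.allclose(comm(A, B), 0))    # True

# Part (c): build F from arbitrary a, b using c = -3ib and d = a + (5+2i)b
E = np.array([[2, 1j], [3, 5j]])
a, b = 1.3, 0.4 - 0.2j               # arbitrary test values
F = np.array([[a, b], [-3j*b, a + (5+2j)*b]])
print(np.allclose(comm(E, F), 0))    # True
\end{verbatim}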

Problem 6:

Specify whether each of the following matrices is Hermitian (for parts d through f, fill in the missing elements to make these matrices Hermitian, if possible):

a) \(\bar{\bar{A}}=\begin{pmatrix} 5&1\\1&2\end{pmatrix}\)
b) \(\bar{\bar{B}}=\begin{pmatrix} i&-3i\\3i&0\end{pmatrix}\)
c) \(\bar{\bar{C}}=\begin{pmatrix} 2&1+i\\1-i&3\end{pmatrix}\)
d) \(\bar{\bar{D}}=\begin{pmatrix} 0&\frac{i}{2}\\ &4\end{pmatrix}\)
e) \(\bar{\bar{E}}=\begin{pmatrix} i&3\\3& \end{pmatrix}\)
f) \(\bar{\bar{F}}=\begin{pmatrix} 2& \\5i&1\end{pmatrix}\).

Recall from the discussion before and after Eq. 2.28 that Hermitian operators must equal their own adjoints, which means that the matrix representation of a Hermitian operator must have real diagonal elements and that each off-diagonal element must be the complex conjugate of the corresponding element on the other side of the diagonal.

So for each of the three matrices \(\bar{\bar{A}}\), \(\bar{\bar{B}}\), and \(\bar{\bar{C}}\), ask yourself:

– Is every diagonal element real?

– Is every off-diagonal element the complex conjugate of the corresponding element across the diagonal?

For each matrix, if the answer to both of these questions is “Yes”, then the matrix is Hermitian. If the answer to either question is “No”, then the matrix is not Hermitian.

Thus matrix \(\bar{\bar{A}}\) is Hermitian (remember that real numbers have zero imaginary component, so every real number is its own complex conjugate). But matrix \(\bar{\bar{B}}\) has an imaginary diagonal element, so it cannot be Hermitian. Matrix \(\bar{\bar{C}}\) has real diagonal elements and the off-diagonal element \(1-i\) is the complex conjugate of the corresponding element \(1+i\), so this matrix is Hermitian.

For matrix \(\bar{\bar{D}}\), the diagonal elements are real, and the missing element must be the complex conjugate of the corresponding element across the diagonal. Hence the missing element must be \(-\frac{i}{2}\).

For matrix \(\bar{\bar{E}}\), no matter what value you choose for the missing diagonal element, the first diagonal element is not real, so there is no value that you can insert for the missing element to make the matrix Hermitian.

For matrix \(\bar{\bar{F}}\), the diagonal elements are real, and the missing element must be the complex conjugate of the corresponding element across the diagonal. Hence the missing element must be \(-5i\).
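Hermiticity is also easy to test numerically by comparing each matrix to its own conjugate transpose (a sketch, not part of the text):
\begin{verbatim}
import numpy as np

def is_hermitian(M):
    # A matrix is Hermitian if it equals its adjoint (conjugate transpose)
    return np.allclose(M, M.conj().T)

print(is_hermitian(np.array([[5, 1], [1, 2]])))           # True  (matrix A)
print(is_hermitian(np.array([[1j, -3j], [3j, 0]])))       # False (matrix B: imaginary diagonal)
print(is_hermitian(np.array([[2, 1+1j], [1-1j, 3]])))     # True  (matrix C)
print(is_hermitian(np.array([[0, 0.5j], [-0.5j, 4]])))    # True  (matrix D with -i/2 filled in)
print(is_hermitian(np.array([[2, -5j], [5j, 1]])))        # True  (matrix F with -5i filled in)
\end{verbatim}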

Problem 7:

Find the elements of the matrices representing the projection operators \(\widehat{P}_1\), \(\widehat{P}_2\), and \(\widehat{P}_3\) in the coordinate system with orthogonal basis vectors \(\vec{\epsilon}_1=4\hat{\imath}-2\hat{\jmath}\), \(\vec{\epsilon}_2=3\hat{\imath}+6\hat{\jmath}\), and \(\vec{\epsilon}_3=\hat{k}\).

The matrix representing the projection operator for a normalized vector \(\vec{\epsilon}_i\) can be determined using Eq. 2.41:
\begin{equation*}
\widehat{P}_i=\ket{\epsilon_i}\bra{\epsilon_i}.
\end{equation*}

The three orthogonal basis vectors given in this problem can be normalized by dividing each by its magnitude. Those magnitudes are
\begin{equation*}
\vert \vec{\epsilon}_1 \vert=\sqrt{\braket{\epsilon_1\vert\epsilon_1}}=\sqrt{{4}^2+(-2)^2+(0)^2}=\sqrt{20}=2\sqrt{5}
\end{equation*}

\begin{equation*}
\vert \vec{\epsilon}_2 \vert=\sqrt{\braket{\epsilon_2\vert\epsilon_2}}=\sqrt{{3}^2+(6)^2+(0)^2}=\sqrt{45}=3\sqrt{5}
\end{equation*}

and

\begin{equation*}
\vert \vec{\epsilon}_3 \vert=\sqrt{\braket{\epsilon_3\vert\epsilon_3}}=\sqrt{{0}^2+(0)^2+(1)^2}=1,
\end{equation*}

and plugging the normalized basis vectors into Eq. 2.41 gives
\begin{equation*}
\widehat{P}_1=\frac{1}{\braket{\epsilon_1\vert\epsilon_1}}\ket{\epsilon_1}\bra{\epsilon_1}=\frac{1}{20}\begin{pmatrix}4\\-2\\0\end{pmatrix}\begin{pmatrix}4&-2&0\end{pmatrix}
\end{equation*}

\begin{equation*}
\widehat{P}_2=\frac{1}{\braket{\epsilon_2\vert\epsilon_2}}\ket{\epsilon_2}\bra{\epsilon_2}=\frac{1}{45}\begin{pmatrix}3\\6\\0\end{pmatrix}\begin{pmatrix}3&6&0\end{pmatrix}
\end{equation*}

and

\begin{equation*}
\widehat{P}_3=\frac{1}{\braket{\epsilon_3\vert\epsilon_3}}\ket{\epsilon_3}\bra{\epsilon_3}=\frac{1}{1}\begin{pmatrix}0\\0\\1\end{pmatrix}\begin{pmatrix}0&0&1\end{pmatrix}.
\end{equation*}

Performing the outer products gives
\begin{align*}
\widehat{P}_1&=\frac{1}{20}\begin{pmatrix}(4)(4)&(4)(-2)&(4)(0)\\(-2)(4)&(-2)(-2)&(-2)(0)\\(0)(4)&(0)(-2)&(0)(0)\end{pmatrix}\\
&=\frac{1}{20}\begin{pmatrix}16&-8&0\\-8&4&0\\0&0&0\end{pmatrix}
\end{align*}

\begin{align*}
\widehat{P}_2&=\frac{1}{45}\begin{pmatrix}(3)(3)&(3)(6)&(3)(0)\\(6)(3)&(6)(6)&(6)(0)\\(0)(3)&(0)(6)&(0)(0)\end{pmatrix}\\
&=\frac{1}{45}\begin{pmatrix}9&18&0\\18&36&0\\0&0&0\end{pmatrix}
\end{align*}

and

\begin{equation*}
\widehat{P}_3=\frac{1}{1}\begin{pmatrix}(0)(0)&(0)(0)&(0)(1)\\(0)(0)&(0)(0)&(0)(1)\\(1)(0)&(1)(0)&(1)(1)\end{pmatrix}
=\begin{pmatrix}0&0&0\\0&0&0\\0&0&1\end{pmatrix}.
\end{equation*}
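These projection matrices can also be generated directly from the outer products (a minimal NumPy sketch, not part of the text):
\begin{verbatim}
import numpy as np

def projector(eps):
    # P = |e><e| / <e|e>: outer product of the basis vector with itself, normalized
    return np.outer(eps, eps.conj()) / np.vdot(eps, eps)

e1 = np.array([4, -2, 0])
e2 = np.array([3, 6, 0])
e3 = np.array([0, 0, 1])

P1, P2, P3 = projector(e1), projector(e2), projector(e3)
print(P1)                                      # (1/20)[[16,-8,0],[-8,4,0],[0,0,0]]
print(np.allclose(P1 @ P1, P1))                # True: projection operators are idempotent
print(np.allclose(P1 + P2 + P3, np.eye(3)))    # True: the three projectors sum to the identity
\end{verbatim}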

Problem 8:

Use the projection operators from Problem 7 to project vector \(\vec{A}=7\hat{\imath}-3\hat{\jmath}+2\hat{k}\) onto the directions of \(\vec{\epsilon}_1\), \(\vec{\epsilon}_2\), and \(\vec{\epsilon}_3\).

As described in Section 2.4, applying a projection operator to a vector produces a new vector in the direction of the vector used to form the projection operator.

Application of a projection operator to a vector is illustrated in Eq. 2.42:
\begin{equation*}
\widehat{P}_1\ket{A}=\ket{\epsilon_1}\braket{\epsilon_1|A}=A_1\ket{\epsilon_1},
\end{equation*}
and applying the projection operator \(\widehat{P}_1\) from Problem 7 to vector \(\vec{A}=7\hat{\imath}-3\hat{\jmath}+2\hat{k}\) looks like this:
\begin{equation*}
\widehat{P}_1\ket{A}=\frac{1}{20}\begin{pmatrix}16&-8&0\\-8&4&0\\0&0&0\end{pmatrix}\begin{pmatrix}7\\-3\\2\end{pmatrix}.
\end{equation*}

Performing the matrix multiplication gives
\begin{equation*}
\widehat{P}_1\ket{A}=\frac{1}{20}\begin{pmatrix}136\\-68\\0\end{pmatrix},
\end{equation*}

which is a vector in the direction of \(\vec{\epsilon}_1\) (as you can tell by the ratio of the components) and with magnitude equal to the projection of vector \(\vec{A}\) onto the direction of \(\vec{\epsilon}_1\). That magnitude is
\begin{equation*}
\vert \widehat{P}_1\ket{A} \vert=\sqrt{\left(\frac{136}{20}\right)^2+\left(\frac{-68}{20}\right)^2+\left(\frac{0}{20}\right)^2}=7.6
\end{equation*}

which you can verify by taking the inner product between vector \(\vec{A}\) and the normalized basis vector \(\vec{\epsilon}_1/\vert \vec{\epsilon}_1\vert\).

Using the same approach with \(\widehat{P}_2\) and \(\widehat{P}_3\) gives
\begin{equation*}
\widehat{P}_2\ket{A}=\frac{1}{45}\begin{pmatrix}9&18&0\\18&36&0\\0&0&0\end{pmatrix}\begin{pmatrix}7\\-3\\2\end{pmatrix}=\frac{1}{45}\begin{pmatrix}9\\18\\0\end{pmatrix},
\end{equation*}

which has magnitude
\begin{equation*}
\vert \widehat{P}_2\ket{A} \vert=\sqrt{\left(\frac{9}{45}\right)^2+\left(\frac{18}{45}\right)^2+\left(\frac{0}{45}\right)^2}=0.45
\end{equation*}

and
\begin{equation*}
\widehat{P}_3\ket{A}=\begin{pmatrix}0&0&0\\0&0&0\\0&0&1\end{pmatrix}\begin{pmatrix}7\\-3\\2\end{pmatrix}=\begin{pmatrix}0\\0\\2\end{pmatrix},
\end{equation*}

which has magnitude
\begin{equation*}
\vert \widehat{P}_3\ket{A} \vert=\sqrt{0^2+0^2+2^2}=2.
\end{equation*}

So as you may have guessed by comparing the components of vector \(\vec{A}\) to the components of each of the basis vectors \(\vec{\epsilon}_1\), \(\vec{\epsilon}_2\), and \(\vec{\epsilon}_3\), the projection of \(\vec{A}\) onto the direction of \(\vec{\epsilon}_1\) is considerably larger than the projection of \(\vec{A}\) onto the direction of \(\vec{\epsilon}_2\), and the projection of \(\vec{A}\) onto the direction of \(\vec{\epsilon}_3\) is just the \(\hat{k}\)-component of \(\vec{A}\).
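Here is a numerical check of all three projections and their magnitudes (a sketch, not from the text):
\begin{verbatim}
import numpy as np

A  = np.array([7, -3, 2])
P1 = np.array([[16, -8, 0], [-8, 4, 0], [0, 0, 0]]) / 20
P2 = np.array([[9, 18, 0], [18, 36, 0], [0, 0, 0]]) / 45
P3 = np.array([[0, 0, 0], [0, 0, 0], [0, 0, 1]])

for P in (P1, P2, P3):
    proj = P @ A
    # Each projection lies along its basis vector; its length is the
    # component of A in that direction
    print(proj, np.linalg.norm(proj))
# -> [6.8 -3.4 0.] 7.60..., [0.2 0.4 0.] 0.447..., [0 0 2] 2.0
\end{verbatim}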

Problem 9:

Consider a six-sided die labeled with numbers 1 through 6.

(a) If the die is fair, the probability of occurrence of any number (1 through 6) is equal. Find the expectation value and standard deviation in this case.
(b) If the die is “loaded”, the probability of occurrence might be:

  1. 10%
  2. 70%
  3. 15%
  4. 3%
  5. 1%
  6. 1%

What are the expectation value and standard deviation in this case?

Part (a):

As explained in Section 2.5, the expectation value can be determined by multiplying each possible outcome by the probability of that outcome, as shown in Eq. 2.56:
\begin{equation*}
\braket{g}=\sum_{n=1}^N\lambda_nP_n.
\end{equation*}

If each of the six possible outcomes is equally likely, the probability of any one outcome is \(1/6\), and inserting this probability for each outcome of the fair die into Eq. 2.56 gives
\begin{align*}
\braket{x}&=\sum_{n=1}^6\lambda_nP_n\\
&=(1)(\frac{1}{6})+(2)(\frac{1}{6})+(3)(\frac{1}{6})+(4)(\frac{1}{6})+(5)(\frac{1}{6})+(6)(\frac{1}{6})\\
&=21(\frac{1}{6})=3.5.
\end{align*}

To find the standard deviation of the outcomes for the fair die, remember that the standard deviation is the square root of the variance, and you can find the variance using Eq. 2.63:
\begin{equation*}
\textrm{Variance of \(x\)}=(\Delta x)^2 \equiv \braket{(x-\braket{x})^2}
\end{equation*}
in which \(\braket{x}\) is the expectation value of \(x\).

Inserting the values of the possible outcomes and the expectation value of 3.5 for the fair die into Eq. 2.63 gives the variance:
\begin{align*}
(\Delta x)^2 &= \braket{(x-\braket{x})^2}=\sum_nP_n(x_n-3.5)^2\\
&=\frac{1}{6}[(1-3.5)^2+(2-3.5)^2+(3-3.5)^2\\
&\hspace{0.5cm}+(4-3.5)^2+(5-3.5)^2+(6-3.5)^2]=2.92
\end{align*}
and taking the square root of this variance gives the standard deviation of 1.71.

Part (b):

Using the same approach for the loaded die,
\begin{align*}
\braket{x}&=\sum_{n=1}^6\lambda_nP_n\\
&=(1)(0.1)+(2)(0.7)+(3)(0.15)+(4)(0.03)+(5)(0.01)+(6)(0.01)\\
&=2.18
\end{align*}

and

\begin{align*}
(\Delta x)^2 &= \braket{(x-\braket{x})^2}=\sum_nP_n(x_n-2.18)^2\\
&=[(0.1)(1-2.18)^2+(0.7)(2-2.18)^2+(0.15)(3-2.18)^2\\
&\hspace{0.5cm}+(0.03)(4-2.18)^2+(0.01)(5-2.18)^2+(0.01)(6-2.18)^2]\\
&=0.59
\end{align*}

and taking the square root of this variance gives the standard deviation of 0.77 in this case.
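Both the fair and loaded cases can be checked in a few lines (a sketch, not part of the text):
\begin{verbatim}
import numpy as np

outcomes = np.arange(1, 7)    # faces 1 through 6

for p in (np.full(6, 1/6),                                    # fair die
          np.array([0.10, 0.70, 0.15, 0.03, 0.01, 0.01])):    # loaded die
    mean = np.sum(outcomes * p)                 # expectation value <x>
    var  = np.sum(p * (outcomes - mean)**2)     # variance <(x - <x>)^2>
    print(mean, np.sqrt(var))                   # 3.5, 1.708...; 2.18, 0.766...
\end{verbatim}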

Problem 10:

Operating on the orthonormal basis kets \(\ket{\epsilon_1}\), \(\ket{\epsilon_2}\), and \(\ket{\epsilon_3}\) with operator \(\widehat{O}\) produces the results \(\widehat{O}\ket{\epsilon_1}=2\ket{\epsilon_1}\), \(\widehat{O}\ket{\epsilon_2}=-i\ket{\epsilon_1}+\ket{\epsilon_2}\), and \(\widehat{O}\ket{\epsilon_3}=\ket{\epsilon_3}\). If \(\psi=4\ket{\epsilon_1}+2\ket{\epsilon_2}+3\ket{\epsilon_3}\), what is the expectation value \(\braket{o}\)?

As described in Section 2.5, the expectation value can be found using Eq. 2.60, which looks like this for operator \(\widehat{O}\):
\begin{equation*}
\braket{o}=\bra{\psi}\widehat{O}\ket{\psi}.
\end{equation*}

The columns of the matrix representation of operator \(\widehat{O}\) can be found by applying the operator to each basis vector. Since \(\widehat{O}\ket{\epsilon_1}=2\ket{\epsilon_1}\), the first column of the matrix \(\bar{\bar{O}}\) is \(\begin{pmatrix}2\\0\\0\end{pmatrix}\).

Since \(\widehat{O}\ket{\epsilon_2}=-i\ket{\epsilon_1}+\ket{\epsilon_2}\), the second column of \(\bar{\bar{O}}\) is \(\begin{pmatrix}-i\\1\\0\end{pmatrix}\), and since \(\widehat{O}\ket{\epsilon_3}=\ket{\epsilon_3}\), the third column of \(\bar{\bar{O}}\) is \(\begin{pmatrix}0\\0\\1\end{pmatrix}\).

Inserting the values for \(\psi\) and \(\widehat{O}\) into Eq. 2.60 gives
\begin{equation*}
\braket{o}=\bra{\psi}\widehat{O}\ket{\psi}=\begin{pmatrix}4&2&3\end{pmatrix}\begin{pmatrix}2&-i&0\\0&1&0\\0&0&1\end{pmatrix}\begin{pmatrix}4\\2\\3\end{pmatrix},
\end{equation*}

and performing the matrix multiplications gives
\begin{align*}
\braket{o}&=\bra{\psi}\widehat{O}\ket{\psi}=\begin{pmatrix}4&2&3\end{pmatrix}\begin{pmatrix}2&-i&0\\0&1&0\\0&0&1\end{pmatrix}\begin{pmatrix}4\\2\\3\end{pmatrix}\\
&=\begin{pmatrix}4&2&3\end{pmatrix}\begin{pmatrix}(2)(4)+(-i)(2)+(0)(3)\\(0)(4)+(1)(2)+(0)(3)\\(0)(4)+(0)(2)+(1)(3)\end{pmatrix}\\
&=\begin{pmatrix}4&2&3\end{pmatrix}\begin{pmatrix}8-2i\\2\\3\end{pmatrix}\\
&=(4)(8-2i)+(2)(2)+(3)(3)=45-8i.
\end{align*}

Note that the expectation value \(\braket{o}\) is complex, but if the operator \(\widehat{O}\) had been Hermitian (for example, if \(\widehat{O}\ket{\epsilon_1}=2\ket{\epsilon_1}+i\ket{\epsilon_2}\)), the expectation value would have been real.
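The same computation in NumPy (a sketch, not from the text), including the Hermitian variant mentioned above:
\begin{verbatim}
import numpy as np

psi = np.array([4, 2, 3], dtype=complex)
O   = np.array([[2, -1j, 0],
                [0,  1,  0],
                [0,  0,  1]])

# <o> = <psi| O |psi>; np.vdot conjugates its first argument, forming the bra
print(np.vdot(psi, O @ psi))       # (45-8j): complex, since O is not Hermitian

O_herm = O.copy()
O_herm[1, 0] = 1j                  # set O_21 = conj(O_12) to make O Hermitian
print(np.vdot(psi, O_herm @ psi))  # (45+0j): real, as expected for a Hermitian operator
\end{verbatim}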

After working through this chapter, readers will be able to explain the effect of applying an operator to a vector or function, define eigenvectors, eigenfunctions, and eigenvalues, manipulate operators using Dirac notation, explain the characteristics of Hermitian and projection operators, and calculate expectation values.

Express linear operators as matrices

Determine whether a vector is an eigenvector and a function is an eigenfunction

Manipulate operators within inner products using Dirac notation

Find the matrix representation of an operator in a given basis system

Determine whether two operators commute

Explain why Hermitian operators must have real eigenvalues

Construct and apply projection operators

Determine expectation values given possible outcomes and probabilities

Chapter 2 Quiz

1) Operating on a vector with a matrix produces

2) If a vector is an eigenvector of a certain matrix, operating on the eigenvector with that matrix produces a vector in the same direction and with the same length as the original eigenvector.

3) "Sandwiching" an operator between a bra and a ket produces

4) If an operator is represented as a matrix in the basis system of its own non-degenerate eigenvectors, you can be sure that

5) To find the adjoint (Hermitian conjugate) of a matrix, you must

6) Which of the following properties must the matrix representation of a Hermitian operator have?

7) Hermitian operators are useful in quantum mechanics because

8) The result of applying a projection operator to a ket is

9) In quantum mechanics, the expectation value is best defined as

10) You can determine the uncertainty in a measurement of a quantum observable if you know the expectation value of the observable and the expectation value of the square of the observable.