Similarity transformation

A similarity transformation of a matrix $A$ to a matrix $A'$ is expressed as $A' = S^{-1}AS$, where

    1. $S$ is an invertible matrix called the change of basis matrix.
    2. $A$ is a linear transformation matrix with respect to the basis $B$.
    3. $A'$ is the transformed representation of $A$, such that $A'$ performs the same linear transformation as $A$ but with respect to another basis $B'$.

Let a vector be $\boldsymbol{x}$ with respect to the basis $B$ and $\boldsymbol{x}'$ with respect to the basis $B'$. Let another vector be $\boldsymbol{y}$ with respect to the basis $B$ and $\boldsymbol{y}'$ with respect to the basis $B'$. Consider the transformation of these vectors as follows:

$$\boldsymbol{x} = S\boldsymbol{x}' \qquad \boldsymbol{y} = S\boldsymbol{y}' \qquad \boldsymbol{y} = A\boldsymbol{x}$$

where the first two equations describe change of basis transformations and the last equation is a linear transformation of $\boldsymbol{x}$ to $\boldsymbol{y}$ in the same basis $B$.

Combining the three equations, we have $S\boldsymbol{y}' = AS\boldsymbol{x}'$. If $S$ is invertible, we can multiply both sides by $S^{-1}$ on the left to give $\boldsymbol{y}' = S^{-1}AS\boldsymbol{x}' = A'\boldsymbol{x}'$, where $A' = S^{-1}AS$. Comparing $\boldsymbol{y} = A\boldsymbol{x}$ and $\boldsymbol{y}' = A'\boldsymbol{x}'$, $A'$ is the transformed representation of $A$, where $A'$ performs the same linear transformation as $A$ but with respect to another basis $B'$. We say that $A'$ is similar to $A$ because $A'$ has properties that are similar to those of $A$. For example, the trace of $A$, which is defined as $\mathrm{tr}(A) = \sum_i A_{ii}$, is the same as the trace of $A'$.
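These relations are easy to verify numerically. The following is a minimal NumPy sketch (the matrices $A$ and $S$ here are arbitrary examples, not taken from the text):

```python
import numpy as np

# Arbitrary example: a linear transformation A and an invertible
# change of basis matrix S (det S = 1)
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
S = np.array([[1.0, 1.0],
              [0.0, 1.0]])
S_inv = np.linalg.inv(S)

A_prime = S_inv @ A @ S  # A' = S^{-1} A S

# y = A x in basis B; the same vectors in basis B' are x' = S^{-1} x, y' = S^{-1} y
x = np.array([1.0, 2.0])
y = A @ x
x_prime = S_inv @ x
y_prime = S_inv @ y

# A' performs the same transformation, expressed in basis B'
print(np.allclose(A_prime @ x_prime, y_prime))  # True

# Similar matrices have the same trace
print(np.isclose(np.trace(A_prime), np.trace(A)))  # True
```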

To show that $\mathrm{tr}(A') = \mathrm{tr}(A)$, we have

$$\mathrm{tr}(A') = \mathrm{tr}(S^{-1}AS) = \mathrm{tr}(SS^{-1}A) = \mathrm{tr}(IA) = \mathrm{tr}(A)$$

where $I$ is the identity matrix and where we have used the identity $\mathrm{tr}(XY) = \mathrm{tr}(YX)$ for the second equality.


Proof that $\mathrm{tr}(XY) = \mathrm{tr}(YX)$.

$$\mathrm{tr}(XY) = \sum_i (XY)_{ii} = \sum_i \sum_j X_{ij}Y_{ji} = \sum_j \sum_i Y_{ji}X_{ij} = \sum_j (YX)_{jj} = \mathrm{tr}(YX)$$

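The identity can also be checked numerically. A quick NumPy sketch (with arbitrary random matrices) illustrates that it holds even for non-square factors, as long as both products are defined:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4))
Y = rng.standard_normal((4, 3))

# XY is 3x3 and YX is 4x4, yet their traces agree
print(np.isclose(np.trace(X @ Y), np.trace(Y @ X)))  # True
```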
The common properties of similar matrices are useful in explaining certain group theory concepts, e.g. why there are exactly 32 crystallographic point groups.

One of the most common applications of similarity transformations is to transform a matrix $A$ to a diagonal matrix $\Lambda$. Consider the eigenvalue problem $A\boldsymbol{v}_i = \lambda_i\boldsymbol{v}_i$ and let the eigenvectors of $A$ be the columns of $X$:

$$X = \begin{pmatrix} \boldsymbol{v}_1 & \boldsymbol{v}_2 & \cdots & \boldsymbol{v}_n \end{pmatrix}$$

Since $AX = \begin{pmatrix} A\boldsymbol{v}_1 & \cdots & A\boldsymbol{v}_n \end{pmatrix} = \begin{pmatrix} \lambda_1\boldsymbol{v}_1 & \cdots & \lambda_n\boldsymbol{v}_n \end{pmatrix}$, where $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \dots, \lambda_n)$, we have

$$AX = X\Lambda$$

If the eigenvectors of $A$ are linearly independent, then $X$ is non-singular (i.e. invertible). This allows us to multiply $AX = X\Lambda$ on the left by $X^{-1}$ to give $X^{-1}AX = \Lambda$, with the diagonal entries of $\Lambda$ being the eigenvalues of $A$.
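As a sketch of this diagonalisation in NumPy (the matrix $A$ below is an arbitrary example), `np.linalg.eig` returns the eigenvalues and the eigenvector matrix $X$ directly:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Columns of X are the (normalised) eigenvectors of A
eigenvalues, X = np.linalg.eig(A)

# X^{-1} A X is the diagonal matrix of eigenvalues
Lambda = np.linalg.inv(X) @ A @ X
print(np.allclose(Lambda, np.diag(eigenvalues)))  # True
```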


Why is $X$ non-singular if the eigenvectors of $A$ are linearly independent?


The eigenvectors $\boldsymbol{v}_i$ are linearly independent if the only solution to $c_1\boldsymbol{v}_1 + c_2\boldsymbol{v}_2 + \dots + c_n\boldsymbol{v}_n = \boldsymbol{0}$ is when $c_i = 0$ for all $i$. In other words,

$$X\boldsymbol{c} = \begin{pmatrix} \boldsymbol{v}_1 & \boldsymbol{v}_2 & \cdots & \boldsymbol{v}_n \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix} = \boldsymbol{0} \quad\Longrightarrow\quad \boldsymbol{c} = \boldsymbol{0}$$

or simply $\mathrm{null}(X) = \{\boldsymbol{0}\}$.

A square matrix whose null space contains only the zero vector has full rank and is therefore non-singular (i.e. invertible). Conversely, if $X$ is invertible, the only solution to $X\boldsymbol{c} = \boldsymbol{0}$ is $\boldsymbol{c} = X^{-1}\boldsymbol{0} = \boldsymbol{0}$.

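Conversely, a matrix without $n$ linearly independent eigenvectors cannot be diagonalised, because $X$ is then singular. A standard illustration (an arbitrary example, not from the text) is the defective matrix $\begin{pmatrix}1 & 1\\ 0 & 1\end{pmatrix}$:

```python
import numpy as np

# Defective matrix: eigenvalue 1 is repeated, but there is only one
# linearly independent eigenvector, (1, 0)^T
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

eigenvalues, X = np.linalg.eig(A)

# The two eigenvector columns are (numerically) parallel, so det X ~ 0
# and X^{-1} does not exist: A is not diagonalisable
print(abs(np.linalg.det(X)) < 1e-8)  # True
```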

Using the same eigenvalue problem, we can show that Hermitian operators $H$ are diagonalisable, i.e. $U^{\dagger}HU = \Lambda$ (see this article).


Show that a Hermitian matrix $H$ can be diagonalised by $U^{\dagger}HU$, i.e. $U^{\dagger}HU = \Lambda$, where $U$ is a unitary matrix, and that $\Lambda$ is also Hermitian.


A unitary matrix has the property $U^{\dagger}U = UU^{\dagger} = I$. If a complete set of orthonormal eigenvectors of $H$ are the columns of $U$, we have $HU = U\Lambda$, and $U$ is non-singular because orthonormal eigenvectors are linearly independent. The remaining step is to show that $U^{-1} = U^{\dagger}$.

Since the columns of $U$ are orthonormal, $(U^{\dagger}U)_{ij} = \boldsymbol{u}_i^{\dagger}\boldsymbol{u}_j = \delta_{ij}$, i.e. $U^{\dagger}U = I$. Since $U$ is non-singular, multiplying $U^{\dagger}U = I$ on the right by $U^{-1}$ gives $U^{\dagger} = U^{-1}$. So, $U^{-1}HU = U^{\dagger}HU = \Lambda$.

To show that $\Lambda$ is also Hermitian, we have

$$\Lambda^{\dagger} = (U^{\dagger}HU)^{\dagger} = U^{\dagger}H^{\dagger}(U^{\dagger})^{\dagger} = U^{\dagger}HU = \Lambda$$

where we have used the identity $(ABC)^{\dagger} = C^{\dagger}B^{\dagger}A^{\dagger}$ and the fact that $H^{\dagger} = H$.

As mentioned above, $HU = U\Lambda$. The $i$-th column of $U\Lambda$ is $\lambda_i\boldsymbol{u}_i$, while the $i$-th column of $HU$ is $H\boldsymbol{u}_i$. So, $H\boldsymbol{u}_i = \lambda_i\boldsymbol{u}_i$, where $\boldsymbol{u}_i$ is an eigenvector of $H$ and $\lambda_i$ is the corresponding eigenvalue. Therefore, the order of the columns of the change of basis matrix corresponds to the order of the diagonal entries in the diagonal matrix.
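These last two points can be sketched in NumPy (the Hermitian matrix $H$ below is an arbitrary example): `np.linalg.eigh` is designed for Hermitian matrices and returns orthonormal eigenvectors as the columns of $U$, so $U^{\dagger}HU$ is diagonal, and permuting the columns of $U$ permutes the diagonal entries of $\Lambda$ in the same order.

```python
import numpy as np

# An arbitrary 2x2 Hermitian matrix (equal to its conjugate transpose)
H = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])

# eigh returns real eigenvalues and orthonormal eigenvector columns
eigenvalues, U = np.linalg.eigh(H)

print(np.allclose(U.conj().T @ U, np.eye(2)))                 # True: U is unitary
print(np.allclose(U.conj().T @ H @ U, np.diag(eigenvalues)))  # True: U†HU = Λ

# Swapping the columns of U swaps the diagonal entries of Λ accordingly
U_swapped = U[:, ::-1]
Lambda_swapped = U_swapped.conj().T @ H @ U_swapped
print(np.allclose(Lambda_swapped, np.diag(eigenvalues[::-1])))  # True
```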


