Determinant of a matrix

The determinant is a number associated with an $n \times n$ matrix $A$. It is defined as

$$\det A = \sum_{j=1}^{n} a_{1j}C_{1j}$$

where

    1. $a_{1j}$ is the element in the first row and $j$-th column of $A$.
    2. $C_{1j} = (-1)^{1+j}M_{1j}$ is the cofactor associated with $a_{1j}$.
    3. $M_{1j}$, the minor of the element $a_{1j}$, is the determinant of the matrix obtained by removing the 1st row and $j$-th column of $A$.

In this case, we say that the summation is a cofactor expansion along row 1. For example, the determinant of $A = \begin{pmatrix} a_{11} & a_{12}\\ a_{21} & a_{22}\end{pmatrix}$ is $\det A = a_{11}a_{22} - a_{12}a_{21}$.

For any square matrix, the cofactor expansion along any row and any column results in the same determinant, i.e.

$$\det A = \sum_{j=1}^{n} a_{ij}C_{ij} = \sum_{i=1}^{n} a_{ij}C_{ij}$$

To prove this, we begin with the proof that the cofactor expansion along any row results in the same determinant, i.e. $\det A = \sum_{j=1}^{n} a_{ij}C_{ij}$ for $i = 1, \dots, n$. Consider a matrix $B$, which is obtained from $A$ by swapping row $i$ consecutively with the rows above it $i - 1$ times until it resides in row 1. According to property 8 (see below), we have

$$\det A = (-1)^{i-1}\det B = (-1)^{i-1}\sum_{j=1}^{n} b_{1j}(-1)^{1+j}M_{1j}(B)$$

Since $b_{1j} = a_{ij}$, and since the minor $M_{1j}(B)$ is the determinant of $A$ with its $i$-th row and $j$-th column removed, i.e. $M_{1j}(B) = M_{ij}(A)$, we have $\det A = \sum_{j} a_{ij}(-1)^{i+j}M_{ij}(A) = \sum_{j} a_{ij}C_{ij}$. Hence, the cofactor expansion along any row results in the same determinant.

According to property 10, $\det A^T = \det A$, and a cofactor expansion of $\det A^T$ along one of its rows is a cofactor expansion of $\det A$ along the corresponding column. Therefore, the cofactor expansion along any column also results in the same determinant. This concludes the proof.

In short, to calculate the determinant of an $n \times n$ matrix, we can carry out the cofactor expansion along any row or column. If we expand along a row, we have $\det A = \sum_{j=1}^{n} a_{ij}C_{ij}$, where we can select any row $i$ to execute the summation. Conversely, if we expand along a column, we get $\det A = \sum_{i=1}^{n} a_{ij}C_{ij}$ for any column $j$.
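The recursive structure of the cofactor expansion translates directly into code. The following is a minimal sketch in Python (the helper names `det` and `minor` are illustrative, not from the text):

```python
def minor(A, i, j):
    """Submatrix of A (a list of lists) with row i and column j removed (0-indexed)."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    """Determinant via cofactor expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    # det A = sum_j a_{1j} C_{1j}, where C_{1j} = (-1)^{1+j} M_{1j};
    # with 0-indexed j the sign factor reduces to (-1)**j.
    return sum(A[0][j] * (-1) ** j * det(minor(A, 0, j)) for j in range(n))

print(det([[1, 2], [3, 4]]))                    # -2
print(det([[2, 0, 0], [0, 3, 0], [0, 0, 4]]))   # 24, the product of the diagonal
```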

The following are some useful properties of determinants, numbered as in the proofs below (a numeric spot-check of a few of them follows the list):

    1. $\det I = 1$, where $I$ is the identity matrix. If one of the diagonal elements of $I$ is replaced by $0$, then the determinant is $0$.
    2. If the elements of one of the rows of $A$ are all zero, $\det A = 0$.
    3. If the elements of one of the columns of $A$ are all zero, $\det A = 0$.
    4. If $B$ is obtained from $A$ by multiplying the $i$-th row of $A$ by $k$, $\det B = k\det A$.
    5. If $E$ is an elementary matrix, $\det(EA) = \det E\det A$.
    6. $\det(AB) = \det A\det B$.
    7. If $A$ is invertible, $\det A^{-1} = 1/\det A$.
    8. If $B$ is obtained from $A$ by swapping two rows or two columns of $A$, then $\det B = -\det A$.
    9. If two rows or two columns of $A$ are the same, $\det A = 0$.
    10. $\det A^T = \det A$.
    11. $A^{-1} = \frac{1}{\det A}\operatorname{adj}A$, so the inverse of a matrix exists only if $\det A \neq 0$.
    12. If $A$ is singular, then $AB$ is also singular.
    13. $A^T$ is invertible if and only if $A$ is invertible.
    14. If $A$ is diagonal, then $\det A$ is the product of its diagonal elements.
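A few of these properties can be spot-checked numerically. The sketch below assumes NumPy and uses an arbitrary random matrix; it is a sanity check, not a proof:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

assert np.isclose(np.linalg.det(np.eye(4)), 1.0)          # property 1: det I = 1
assert np.isclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B))    # property 6
assert np.isclose(np.linalg.det(A.T), np.linalg.det(A))   # property 10

S = A.copy()
S[[0, 1]] = S[[1, 0]]                                     # swap two rows
assert np.isclose(np.linalg.det(S), -np.linalg.det(A))    # property 8

Z = A.copy()
Z[:, 2] = 0.0                                             # zero out a column
assert np.isclose(np.linalg.det(Z), 0.0)                  # property 3
```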

 

Proof of property 1

We shall prove this property by induction.

For $n = 1$, $\det I = 1$.

For $n = 2$, $\det I = 1 \cdot 1 - 0 \cdot 0 = 1$.

Let's assume that for $n = k$, $\det I = 1$. Then for $n = k + 1$, the cofactor expansion along the first row gives $\det I = 1 \cdot \det I' = 1$, where $I'$ is the $k \times k$ identity matrix.

We can repeat the above induction logic to prove that the determinant is $0$ if one of the diagonal elements of $I$ is replaced by $0$.

 

Proof of property 2

Again, we shall prove this property by induction.

For $n = 1$, a zero row means that $a_{11} = 0$, so $\det A = 0$.

For $n = 2$, if either row of $A$ is zero, $\det A = a_{11}a_{22} - a_{12}a_{21} = 0$.

Let's assume that for $n = k$, we have $\det A = 0$ whenever a row of $A$ is zero. Then for $n = k + 1$, if the zero row is the first row, the cofactor expansion along row 1 gives $\det A = 0$ directly; otherwise, every minor $M_{1j}$ contains a zero row and vanishes by the induction hypothesis, so $\det A = \sum_{j} a_{1j}C_{1j} = 0$.

 

Proof of property 3

For $n = 1$, where $a_{11} = 0$, we have $\det A = 0$. For $n \geq 2$, the definition allows us to sum by row or by column. Since we are allowed to choose any column $j$ to execute the summation, we can always select the column $j$ whose elements are all zero, giving $\det A = \sum_{i=1}^{n} a_{ij}C_{ij} = 0$. Therefore, $\det A = 0$ if the elements of one of the columns of $A$ are all zero.

 

Proof of property 4

Let's suppose $B$ is obtained from $A$ by multiplying the $i$-th row of $A$ by $k$. If we expand $\det B$ and $\det A$ along row $i$, each cofactor $C_{ij}$ of $B$ is equal to the corresponding cofactor $C_{ij}$ of $A$, because the two matrices differ only in row $i$, which is removed in forming the minors. Therefore,

$$\det B = \sum_{j=1}^{n} ka_{ij}C_{ij} = k\sum_{j=1}^{n} a_{ij}C_{ij} = k\det A$$

 

Proof of property 5

For a type I elementary matrix, $E$ transforms $A$ by swapping two rows of $A$. So, $\det(EA) = -\det A$ due to property 8. Since $E$ is obtained from $I$ by swapping two rows of $I$, we have $\det E = -1$ according to property 1 and property 8, which implies that $\det(EA) = \det E\det A$.

For a type II elementary matrix, which multiplies a row of $A$ by $k$, $\det(EA) = k\det A$ due to property 4, and $\det E = k$ because of properties 1 and 4. So, $\det(EA) = \det E\det A$.

For a type III elementary matrix, which adds $k$ times row $j$ of $A$ to row $i$,

$$\det(EA) = \sum_{l=1}^{n}(a_{il} + ka_{jl})C_{il} = \sum_{l=1}^{n} a_{il}C_{il} + k\sum_{l=1}^{n} a_{jl}C_{il}$$

is computed by expanding along row $i$, where the cofactors $C_{il}$ of $EA$ equal those of $A$ because the two matrices differ only in row $i$. The sum $\sum_{l} a_{jl}C_{il}$ is the cofactor expansion along row $i$ of a matrix that is identical to $A$ except that its $i$-th row is replaced by a copy of its $j$-th row. Such a matrix has two equal rows, which according to property 9, gives:

$$\det(EA) = \det A + k \cdot 0 = \det A \tag{eq5}$$

Setting $A = I$ in eq5, and since $E = EI$,

$$\det E = \det I = 1 \tag{eq6}$$

according to eq5 and property 1.

Comparing eq5 and eq6, $\det(EA) = \det E\det A$.

 

Proof of property 6

Case 1

If $A$ is singular, where $\det A = 0$, then $AB$ is also singular according to property 12. So, $\det(AB) = 0 = \det A\det B$.

Case 2

If $A$ is non-singular, it can always be expressed as a product of elementary matrices: $A = E_1E_2\cdots E_k$. So,

$$\det(AB) = \det(E_1E_2\cdots E_kB)$$

Since property 5 states that $\det(EA) = \det E\det A$,

$$\det(AB) = \det E_1\det E_2\cdots\det E_k\det B$$

Similarly, $\det A = \det E_1\det E_2\cdots\det E_k$. Substitute this in the above equation, and $\det(AB) = \det A\det B$.

 

Proof of property 7

Using property 6 and the fact that $\det I = 1$ (property 1),

$$\det A\det A^{-1} = \det(AA^{-1}) = \det I = 1$$

and so $\det A^{-1} = 1/\det A$.

 

Proof of property 8

We shall prove this property by induction. $n = 1$ is the trivial case, where $n$ is the order of the square matrix.

For $n = 2$, let $A = \begin{pmatrix} a_{11} & a_{12}\\ a_{21} & a_{22}\end{pmatrix}$ and $B = \begin{pmatrix} a_{21} & a_{22}\\ a_{11} & a_{12}\end{pmatrix}$, which is obtained from $A$ by swapping two adjacent rows. Furthermore, let $\det A = a_{11}a_{22} - a_{12}a_{21}$ and $\det B = a_{21}a_{12} - a_{22}a_{11}$. Clearly, $\det B = -\det A$.

Let's assume that for $n = k$, $\det B = -\det A$ when two adjacent rows are swapped. For $n = k + 1$, we have:

Case 1: Suppose that the first row of $A$ is not swapped when making $B$.

Each minor $M_{1j}(B)$ is the determinant of an order-$k$ matrix, which is the same as $M_{1j}(A)$ except for two adjacent rows being swapped. Therefore, $M_{1j}(B) = -M_{1j}(A)$ and $\det B = \sum_{j} a_{1j}(-1)^{1+j}M_{1j}(B) = -\det A$.

Case 2: If the first two rows of $A$ are swapped when making $B$,

We have $\det A = \sum_{j}(-1)^{1+j}a_{1j}M_{1j}(A)$ and $\det B = \sum_{j}(-1)^{1+j}a_{2j}M_{1j}(B)$, since the first row of $B$ is the second row of $A$. The minors $M_{1j}(A)$ and $M_{1j}(B)$ can be expressed as

$$M_{1j}(A) = \sum_{l<j}(-1)^{1+l}a_{2l}N_{jl} + \sum_{l>j}(-1)^{l}a_{2l}N_{jl}$$

$$M_{1j}(B) = \sum_{l<j}(-1)^{1+l}a_{1l}N_{jl} + \sum_{l>j}(-1)^{l}a_{1l}N_{jl}$$

where $N_{jl}$ is the determinant of $A$ with the first two rows, and the $j$-th and $l$-th columns removed.

Question

Why is each of the minors expressed as two separate sums?

Answer

The minor $M_{1j}$ is the determinant of a submatrix of $A$ with the first row and the $j$-th column of $A$ removed. When this minor is in turn expanded along its own first row, a column $l$ of $A$ with $l < j$ retains its position $l$ in the submatrix, contributing the sign $(-1)^{1+l}$, whereas a column $l$ with $l > j$ is shifted one place to the left to position $l - 1$, contributing the sign $(-1)^{1+(l-1)} = (-1)^{l}$. Therefore, both summations are employed in determining $M_{1j}$, with the first running over $l < j$ and the second over $l > j$. If $j = 1$, the first sum is empty, and if $j = k + 1$, the second sum is empty. The same logic explains the formula for $M_{1j}(B)$. You can validate the formulae of both minors using a small $3 \times 3$ example.

 

Therefore,

$$\det A = \sum_{j}\sum_{l<j}(-1)^{j+l}a_{1j}a_{2l}N_{jl} + \sum_{j}\sum_{l>j}(-1)^{j+l+1}a_{1j}a_{2l}N_{jl}$$

$$\det B = \sum_{j}\sum_{l<j}(-1)^{j+l}a_{2j}a_{1l}N_{jl} + \sum_{j}\sum_{l>j}(-1)^{j+l+1}a_{2j}a_{1l}N_{jl}$$

For any pair of values of $j$ and $l$, where $l < j$ (noting that $N_{jl} = N_{lj}$), the terms in $\det A$ are $(-1)^{j+l}a_{1j}a_{2l}N_{jl}$, which differ from the corresponding terms in $\det B$, i.e. $(-1)^{j+l+1}a_{1j}a_{2l}N_{jl}$, by a factor of $-1$. Similarly, for any pair of values of $j$ and $l$, where $l > j$, the terms in $\det A$ are $(-1)^{j+l+1}a_{1j}a_{2l}N_{jl}$, which again differ from the corresponding terms in $\det B$ by a factor of $-1$. Since all terms in $\det A$ differ from all corresponding terms in $\det B$ by a factor of $-1$, $\det B = -\det A$.

In general, the swapping of any two rows $i$ and $j$ of $A$, where $j > i$, is equivalent to the swapping of adjacent rows $2(j - i) - 1$ times, with each swap changing the determinant by a factor of $-1$. Therefore,

$$\det B = (-1)^{2(j-i)-1}\det A = -\det A$$

 

Question

How do we get $2(j - i) - 1$?

Answer

Firstly, we swap row $i$ consecutively with each row below it until it reaches row $j$, resulting in $j - i$ swaps. Then, the original row $j$, which now resides in row $j - 1$, is swapped consecutively with each row above it until it occupies row $i$, resulting in $j - i - 1$ swaps. These two actions combined are equivalent to the swapping of row $i$ with row $j$, with a total of $2(j - i) - 1$ swaps of adjacent rows.

 

Finally, the swapping of any two columns is proven in a similar way.

 

Proof of property 9

Consider the swapping of two equal rows of $A$ to form $B$. Since the swapped rows are identical, $B = A$, resulting in $\det B = \det A$. However, property 8 states that $\det B = -\det A$ if any two rows of $A$ are swapped. Therefore, $\det A = -\det A$, i.e. $\det A = 0$, if two rows of $A$ are equal. The same logic applies to proving $\det A = 0$ if there are two equal columns of $A$.

 

Proof of property 10

Case 1: $A$ is singular.

If $\det A = 0$, then $A^T$ is also singular according to property 13. So, $\det A^T = 0 = \det A$.

Case 2: $A$ is non-singular.

Let's first consider elementary matrices $E$. A type I elementary matrix is symmetrical about its diagonal, while a type II elementary matrix is diagonal with one diagonal element equal to $k$. Therefore, $E^T = E$ and thus $\det E^T = \det E$ for type I or II elementary matrices. A type III elementary matrix is an identity matrix with one of the non-diagonal elements replaced by a constant $k$. Therefore, if $E$ is a type III elementary matrix, then $E^T$ is also one. According to eq6, $\det E^T = \det E = 1$ for a type III elementary matrix. Hence, $\det E^T = \det E$ for all elementary matrices.

Next, consider an invertible matrix $A$, which (as proven in the previous article) can be expressed as $A = E_1E_2\cdots E_k$. Thus, $A^T = E_k^T\cdots E_2^TE_1^T$ (see Q&A in the proof of property 13). According to property 5,

$$\det A = \det E_1\det E_2\cdots\det E_k$$

and

$$\det A^T = \det E_k^T\cdots\det E_1^T = \det E_k\cdots\det E_1$$

Therefore, $\det A^T = \det A$.

 

Proof of property 11

We have $A^{-1} = \frac{1}{\det A}\operatorname{adj}A$, where $(\operatorname{adj}A)_{ij} = C_{ji}$, or in terms of matrix components:

$$(A^{-1})_{ij} = \frac{C_{ji}}{\det A} \tag{eq7}$$

Consider the matrix $B$ that is obtained from the matrix $A$ by replacing the $j$-th column of $A$ with the $i$-th column, i.e. $b_{kj} = a_{ki}$ for all $k$ and $j \neq i$. According to property 9, $\det B = 0$ because $B$ has two equal columns. Furthermore, each cofactor $C_{kj}$ of $B$ is equal to the cofactor $C_{kj}$ of $A$, because the two matrices differ only in column $j$, which is removed in forming these cofactors. Therefore, expanding $\det B$ along column $j$,

$$0 = \det B = \sum_{k=1}^{n} a_{ki}C_{kj}, \qquad i \neq j \tag{eq8}$$

When $i = j$, the last summation in eq8 becomes

$$\sum_{k=1}^{n} a_{ki}C_{ki} = \det A \tag{eq9}$$

Combining eq8 and eq9, we have $\sum_{k} a_{ki}C_{kj} = \delta_{ij}\det A$, which when substituted in eq7 gives:

$$(A^{-1}A)_{ji} = \sum_{k}(A^{-1})_{jk}a_{ki} = \frac{\sum_{k} a_{ki}C_{kj}}{\det A} = \delta_{ij}$$

Therefore, $A^{-1}A = I$. Since eq7 involves division by $\det A$, the inverse of a matrix $A$ is undefined if $\det A = 0$. We call such a matrix a singular matrix, and a matrix with an associated inverse, a non-singular matrix.
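Property 11 is easy to check numerically. A brute-force sketch, assuming NumPy (the helper `adjugate` is illustrative):

```python
import numpy as np

def adjugate(A):
    """Transpose of the cofactor matrix of A."""
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            M = np.delete(np.delete(A, i, axis=0), j, axis=1)  # minor matrix
            C[i, j] = (-1) ** (i + j) * np.linalg.det(M)       # cofactor C_ij
    return C.T

A = np.array([[2.0, 1.0], [5.0, 3.0]])                          # det A = 1
assert np.allclose(adjugate(A) / np.linalg.det(A), np.linalg.inv(A))
```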

 

Proof of property 12

We shall prove by contradiction. According to property 11, $A$ has no inverse if $\det A = 0$. Suppose $A$ has no inverse but $AB$ has an inverse $C$, so that $(AB)C = I$. This implies that $A$ has an inverse, namely $BC$, since $A(BC) = I$, which contradicts the initial assumption that $A$ has no inverse. Therefore, if $A$ has no inverse, then $AB$ must also have no inverse.

 

Proof of property 13

Question

Show that $(AB)^T = B^TA^T$.

Answer

$$[(AB)^T]_{ij} = (AB)_{ji} = \sum_{k} a_{jk}b_{ki}$$

because the $(j, i)$ entry of $AB$ is the sum of products of the $j$-th row of $A$ and the $i$-th column of $B$, and

$$(B^TA^T)_{ij} = \sum_{k}(B^T)_{ik}(A^T)_{kj} = \sum_{k} b_{ki}a_{jk}$$

because of the definitions of matrix multiplication and the transpose. The two expressions are equal, so $(AB)^T = B^TA^T$, which can be extended to $(ABC\cdots)^T = \cdots C^TB^TA^T$.

 

Using the identity in the above Q&A, $(A^{-1}A)^T = A^T(A^{-1})^T$. If $A$ is invertible, then $A^T(A^{-1})^T = (A^{-1}A)^T = I^T = I$. This implies that $(A^{-1})^T$ is the inverse of $A^T$, and therefore that $A^T$ is invertible if $A$ is invertible.

The last part shall be proven by contradiction. Suppose $A$ is singular and $A^T$ is non-singular; there would then be a matrix $B$ such that $A^TB = I$. Furthermore, $(A^TB)^T = B^TA = I^T = I$, which implies that $A$ has an inverse, $B^T$. This contradicts our initial assumption that $A$ is singular. Therefore, if $A$ is singular, $A^T$ must also be singular.

 

Proof of property 14

We shall prove this property by induction. For $n = 2$,

$$\det\begin{pmatrix} d_1 & 0\\ 0 & d_2\end{pmatrix} = d_1d_2 - 0 = d_1d_2$$

Let's assume that $\det A = \prod_{i=1}^{k} d_i$ for an order-$k$ diagonal matrix $A$ with diagonal elements $d_i$. Then for $n = k + 1$, the cofactor expansion along the first row is

$$\det A = d_1\det A' = d_1\prod_{i=2}^{k+1} d_i = \prod_{i=1}^{k+1} d_i$$

where $A'$ is the order-$k$ diagonal matrix obtained by removing the first row and first column of $A$.

 


Schur’s first lemma

Schur’s first lemma states that a non-zero matrix that commutes with all matrices of an irreducible representation of a group is a multiple of the identity matrix.

The proof of Schur’s first lemma involves the following steps:

    1. Consider a representation of a group $G = \{g_1, g_2, \dots, g_h\}$, i.e. $\Gamma = \{D(g_1), D(g_2), \dots, D(g_h)\}$, where each element of $\Gamma$ is an $n \times n$ matrix, which can be regarded as a unitary matrix according to a previous article.
    2. Prove that a Hermitian matrix $H$ that commutes with every irreducible representation element $D(g_i)$, where $i = 1, \dots, h$, is a constant multiple of the identity matrix.
    3. Infer from step 2 that any arbitrary non-zero matrix that commutes with every irreducible representation element $D(g_i)$ is a multiple of the identity matrix.

Step 1 is self-explanatory. For step 2, we begin with a Hermitian matrix $H$ that commutes with every $D(g_i)$:

$$HD(g_i) = D(g_i)H$$

Multiplying the above equation on the left and right by $U^{\dagger}$ and $U$ respectively, where $U$ is a unitary matrix,

$$U^{\dagger}HD(g_i)U = U^{\dagger}D(g_i)HU$$

Since $UU^{\dagger} = I$,

$$U^{\dagger}HU\,U^{\dagger}D(g_i)U = U^{\dagger}D(g_i)U\,U^{\dagger}HU$$

or

$$H'D'(g_i) = D'(g_i)H'$$

where $H' = U^{\dagger}HU$ and $D'(g_i) = U^{\dagger}D(g_i)U$.

Question

Show that $\Gamma' = \{D'(g_i)\}$ is also a representation of $G$.

Answer

If $\Gamma'$ is also a representation of $G$, its elements must multiply according to the multiplication table of $G$. Since $D(g_i)D(g_j) = D(g_ig_j)$, we have

$$D'(g_i)D'(g_j) = U^{\dagger}D(g_i)UU^{\dagger}D(g_j)U = U^{\dagger}D(g_i)D(g_j)U = U^{\dagger}D(g_ig_j)U = D'(g_ig_j)$$

The third equality ensures that the closure property of $G$ is satisfied for $\Gamma'$ and hence $D'(g_i)D'(g_j) = D'(g_ig_j)$. In other words, the elements of $\Gamma'$ multiply according to the multiplication table of $G$.

 

As a Hermitian matrix can undergo a similarity transformation by a unitary matrix to give another Hermitian matrix $\Lambda$ which is diagonal, i.e. $\Lambda = U^{\dagger}HU$, we have

$$\Lambda D'(g_i) = D'(g_i)\Lambda$$

Rewriting $\Lambda D'(g_i) = D'(g_i)\Lambda$ in terms of its matrix elements, we have $\sum_{k}\Lambda_{mk}[D'(g_i)]_{kn} = \sum_{k}[D'(g_i)]_{mk}\Lambda_{kn}$ or $\lambda_m[D'(g_i)]_{mn} = [D'(g_i)]_{mn}\lambda_n$, which can be rearranged to

$$[D'(g_i)]_{mn}(\lambda_m - \lambda_n) = 0$$

Consider the following cases for the above equation:

Case 1: All diagonal elements of $\Lambda$ are distinct, i.e. $\lambda_m \neq \lambda_n$ if $m \neq n$.

We have $[D'(g_i)]_{mn} = 0$ for $m \neq n$, which means that all off-diagonal elements of $D'(g_i)$ are zero. In other words, $D'(g_i)$ is an element of a reducible representation that is a direct sum of elements of one-dimensional matrix representations. Furthermore, the definition of a reducible representation implies that $D(g_i)$ is also an element of a reducible representation of $G$ because $D(g_i) = UD'(g_i)U^{\dagger}$.

Case 2: All diagonal elements of $\Lambda$ are equal, i.e. $\lambda_m = \lambda_n = \lambda$.

$[D'(g_i)]_{mn}$ can be any finite number, and consequently $D'(g_i)$ may be either an element of a reducible or an irreducible representation. However, the diagonal matrix $\Lambda$ must be a multiple of the identity matrix if $\lambda_m = \lambda_n$ for all $m$ and $n$.

Case 3: Some but not all diagonal elements of $\Lambda$ are equal.

Instead of considering all possible permutations of equal and unequal diagonal entries in $\Lambda$, we rearrange the columns of $U$ such that equal diagonal entries of $\Lambda$ are in adjacent columns of $\Lambda$. This is always possible as the order of the columns of $U$ corresponds to the order of the diagonal entries in $\Lambda$ (see this article). Let's suppose the first $p$ diagonal entries are the same, while the rest are distinct. With reference to Case 1 and Case 2, $D'(g_i)$ must be an element of a reducible representation with a block diagonal form: one $p \times p$ block followed by one-dimensional blocks.

For example, if $\lambda_1 = \lambda_2 \neq \lambda_3$ in the following $3 \times 3$ matrix,

$$\Lambda = \begin{pmatrix}\lambda_1 & 0 & 0\\ 0 & \lambda_2 & 0\\ 0 & 0 & \lambda_3\end{pmatrix}$$

then $[D'(g_i)]_{12}$ and $[D'(g_i)]_{21}$ can be any finite numbers, while all other off-diagonal elements are zero.

Combining all three cases, if $\Gamma$ is an irreducible representation, the diagonal matrix $\Lambda$ must be a multiple of the identity matrix, $\Lambda = cI$. Since $\Lambda = U^{\dagger}HU$, where $H$ is Hermitian, we have $H = U\Lambda U^{\dagger} = cUU^{\dagger} = cI$, and we have proven step 2.
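Step 2 can be illustrated numerically: group-averaging an arbitrary Hermitian matrix over an irreducible representation forces it to be a multiple of the identity. A sketch assuming NumPy, using the familiar two-dimensional irreducible representation of $C_{3v}$ (three rotations and three reflections) as an example:

```python
import numpy as np

def rot(t):
    """2x2 rotation matrix."""
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

sigma = np.diag([1.0, -1.0])                       # a reflection
reps = [rot(k * 2 * np.pi / 3) for k in range(3)] \
     + [rot(k * 2 * np.pi / 3) @ sigma for k in range(3)]

# Average an arbitrary Hermitian (here real symmetric) matrix over the group;
# the result H commutes with every D(R) by construction.
X = np.array([[1.0, 0.3], [0.3, -2.0]])
H = sum(D @ X @ D.T for D in reps) / len(reps)

assert all(np.allclose(H @ D, D @ H) for D in reps)   # H commutes with all D(R)
assert np.allclose(H, H[0, 0] * np.eye(2))            # Schur: H is c * identity
```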

For the last step, let's consider an arbitrary non-zero matrix $M$ that commutes with $D(g_i)$:

$$MD(g_i) = D(g_i)M$$

Taking the conjugate transpose of the above equation gives $D(g_i)^{\dagger}M^{\dagger} = M^{\dagger}D(g_i)^{\dagger}$. Since $D(g_i)$ is unitary, $D(g_i)^{\dagger} = D(g_i)^{-1}$, and so $D(g_i)^{-1}M^{\dagger} = M^{\dagger}D(g_i)^{-1}$, which when multiplied from the left and right by $D(g_i)$ gives $M^{\dagger}D(g_i) = D(g_i)M^{\dagger}$. This implies that if $M$ commutes with $D(g_i)$, then $M^{\dagger}$ also commutes with $D(g_i)$.

Question

i) Show that if $M$ and $M^{\dagger}$ commute with $D(g_i)$, then any linear combination of $M$ and $M^{\dagger}$ also commutes with $D(g_i)$.
ii) Show that the linear combinations $H_1 = M + M^{\dagger}$ and $H_2 = i(M - M^{\dagger})$ are Hermitian.
iii) Show that $M = \frac{1}{2}(H_1 - iH_2)$.

Answer

i) $(aM + bM^{\dagger})D(g_i) = aMD(g_i) + bM^{\dagger}D(g_i) = aD(g_i)M + bD(g_i)M^{\dagger} = D(g_i)(aM + bM^{\dagger})$

ii) $(M + M^{\dagger})^{\dagger} = M^{\dagger} + M = H_1$

$[i(M - M^{\dagger})]^{\dagger} = -i(M^{\dagger} - M) = i(M - M^{\dagger}) = H_2$

iii) Substitute $H_1 = M + M^{\dagger}$ and $H_2 = i(M - M^{\dagger})$ in $\frac{1}{2}(H_1 - iH_2)$, and we get $\frac{1}{2}(M + M^{\dagger} + M - M^{\dagger}) = M$.

 

With reference to step 2, $H_1$ must be a constant multiple of the identity matrix and so must $H_2$. Therefore, $M = \frac{1}{2}(H_1 - iH_2)$ is also a constant multiple of the identity matrix. This concludes the proof of Schur's first lemma, which together with Schur's second lemma, is used to prove the great orthogonality theorem.

 


Schur’s second lemma

Schur’s second lemma describes the restrictions on a matrix that commutes with elements of two distinct irreducible representations, which may have different dimensions.

Consider an arbitrary matrix $M$ and two irreducible representations of a group $G$, $\Gamma_1$ of dimension $l_1$ and $\Gamma_2$ of dimension $l_2$, such that

$$MD^{(1)}(g_i) = D^{(2)}(g_i)M \tag{eq10}$$

where $i = 1, \dots, h$.

Taking the conjugate transpose of eq10 and using the matrix identity $(AB)^{\dagger} = B^{\dagger}A^{\dagger}$, we have $[D^{(1)}(g_i)]^{\dagger}M^{\dagger} = M^{\dagger}[D^{(2)}(g_i)]^{\dagger}$. As every element of a representation of a group can be expressed as a unitary matrix via a similarity transformation without any loss of generality, and as $D^{\dagger} = D^{-1}$,

$$[D^{(1)}(g_i)]^{-1}M^{\dagger} = M^{\dagger}[D^{(2)}(g_i)]^{-1} \tag{eq11}$$

Since the inverse property of a group states that $g_i^{-1} \in G$, and since $[D(g_i)]^{-1} = D(g_i^{-1})$, we can express eq10 as

$$MD^{(1)}(g_i^{-1}) = D^{(2)}(g_i^{-1})M \tag{eq12}$$

Multiplying eq11 on the left by $M$, the left-hand side becomes $MD^{(1)}(g_i^{-1})M^{\dagger}$, which by eq12 equals $D^{(2)}(g_i^{-1})MM^{\dagger}$. We therefore have $D^{(2)}(g_i^{-1})MM^{\dagger} = MM^{\dagger}D^{(2)}(g_i^{-1})$, which implies that $MM^{\dagger}$ commutes with all elements of an irreducible representation of $G$. With reference to Schur's first lemma,

$$MM^{\dagger} = cI \tag{eq13}$$

where $c$ is a constant and $I$ is the identity matrix.

If we multiply eq11 on the right by $M$ and repeat the steps above, we have

$$M^{\dagger}M = c'I \tag{eq14}$$

where $c'l_1 = cl_2$, since $\operatorname{tr}(MM^{\dagger}) = \operatorname{tr}(M^{\dagger}M)$.

Let’s consider the following cases for eq13:

Case 1: $l_1 = l_2 = l$

Let the $(i, k)$-th entry of $M$ be $M_{ik}$. If $c = 0$, we can rewrite eq13 in terms of matrix entries: $\sum_{k} M_{ik}M_{ik}^* = \sum_{k}\lvert M_{ik}\rvert^2 = 0$, which implies that $M$ is the zero matrix because $\lvert M_{ik}\rvert^2 \geq 0$ for all $i$ and $k$.

Combining eq13 and eq14 and taking determinants, we have $\det M\det M^{\dagger} = c^{l}$ or $\lvert\det M\rvert^2 = c^{l}$. This implies that $M^{-1}$ exists if $c \neq 0$. We can therefore rewrite eq10 as $D^{(1)}(g_i) = M^{-1}D^{(2)}(g_i)M$, which is a similarity transformation if $c \neq 0$, i.e. the two representations are equivalent.

Case 2: $l_1 \neq l_2$

If $l_1 \neq l_2$, the arbitrary matrix (denoted by $M$) is an $l_2 \times l_1$ matrix with reference to eq10. Suppose $l_1 < l_2$; we have:

$$MM^{\dagger} = cI$$

where $I$ is the $l_2 \times l_2$ identity matrix. If we enlarge $M$ to form an $l_2 \times l_2$ matrix $N$ with the additional elements equal to zero, we have

$$N = \begin{pmatrix} M & 0\end{pmatrix}$$

Due to the zeroes, $NN^{\dagger} = MM^{\dagger} = cI$. Taking the determinants,

$$\det(NN^{\dagger}) = c^{l_2}$$

Using the determinant identities $\det(NN^{\dagger}) = \det N\det N^{\dagger}$ and $\det N^{\dagger} = (\det N)^*$, we have

$$\lvert\det N\rvert^2 = c^{l_2}$$

Since one of the columns of $N$ is zero, $\det N = 0$. So, $c = 0$, which implies that $M$ must be a zero matrix according to the results of case 1.

Finally, we can summarise Schur’s second lemma as follows:

Given an arbitrary matrix $M$ and two irreducible representations, $\Gamma_1$ of dimension $l_1$ and $\Gamma_2$ of dimension $l_2$, where $MD^{(1)}(g_i) = D^{(2)}(g_i)M$ for all $g_i \in G$, then

    1. if $l_1 = l_2$, either $M = 0$ or the representations are related by a similarity transformation, i.e. they are equivalent representations.
    2. If $l_1 \neq l_2$, $M = 0$.

 


Great orthogonality theorem

The great orthogonality theorem establishes the orthogonality relation between entries of matrices of irreducible representations of a group. Mathematically, it is expressed as

$$\sum_{R}\left[\Gamma_i(R)_{mn}\right]^*\Gamma_j(R)_{m'n'} = \frac{h}{l_i}\delta_{ij}\delta_{mm'}\delta_{nn'} \tag{eq14}$$

where

    1. $\Gamma_i(R)_{mn}$ refers to the matrix entry in the $m$-th row and $n$-th column of the matrix of the symmetry operation $R$ of the $i$-th irreducible representation.
    2. $h$ is the order of the group $G$.
    3. $l_i$ is the dimension of the $i$-th irreducible representation (a numeric check of eq14 is sketched after this list).
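The theorem can be verified numerically for a small group. A sketch assuming NumPy, using the $C_{3v}$ group ($h = 6$) with its two one-dimensional irreducible representations and the two-dimensional one built from rotations and reflections (all matrices here are real, so the complex conjugation is omitted):

```python
import numpy as np

def rot(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

sigma = np.diag([1.0, -1.0])
E2 = [rot(k * 2 * np.pi / 3) for k in range(3)] \
   + [rot(k * 2 * np.pi / 3) @ sigma for k in range(3)]  # 2-D irrep
A1 = [np.eye(1)] * 6                                     # totally symmetric
A2 = [np.eye(1)] * 3 + [-np.eye(1)] * 3                  # antisymmetric in reflections

irreps, h = [A1, A2, E2], 6
for i, Gi in enumerate(irreps):
    for j, Gj in enumerate(irreps):
        li, lj = Gi[0].shape[0], Gj[0].shape[0]
        for m in range(li):
            for n in range(li):
                for mp in range(lj):
                    for nP in range(lj):
                        s = sum(Gi[R][m, n] * Gj[R][mp, nP] for R in range(h))
                        expected = (h / li) * (i == j) * (m == mp) * (n == nP)
                        assert np.isclose(s, expected)   # eq14
```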

The proof of eq14 involves analysing two cases and then combining the results. Consider the matrix

$$M = \sum_{R}\Gamma_i(R)X\Gamma_j(R^{-1}) \tag{eq15}$$

where $\Gamma_i(R)$ is an $l_i \times l_i$ matrix associated with representation $\Gamma_i$, $\Gamma_j(R^{-1})$ is an $l_j \times l_j$ matrix associated with representation $\Gamma_j$ and $X$ is an arbitrary matrix with $l_i$ rows and $l_j$ columns.

Multiplying eq15 on the left by some matrix $\Gamma_i(S)$ associated with representation $\Gamma_i$,

$$\Gamma_i(S)M = \sum_{R}\Gamma_i(SR)X\Gamma_j(R^{-1})$$

Using the matrix identity $R^{-1} = (SR)^{-1}S$,

$$\Gamma_i(S)M = \sum_{R}\Gamma_i(SR)X\Gamma_j((SR)^{-1})\Gamma_j(S)$$

 

Question

Prove the matrix identity $(SR)^{-1} = R^{-1}S^{-1}$.

Answer

$(SR)(R^{-1}S^{-1}) = S(RR^{-1})S^{-1} = SS^{-1} = I$ and so $(SR)^{-1} = R^{-1}S^{-1}$, which can be extended to $(ABC\cdots)^{-1} = \cdots C^{-1}B^{-1}A^{-1}$. In particular, $R^{-1} = (SR)^{-1}S$.

 

According to the closure property of a group, $SR = R' \in G$, and as $R$ runs over all $h$ elements of the group, so does $R'$. Thus

$$\Gamma_i(S)M = \left[\sum_{R'}\Gamma_i(R')X\Gamma_j(R'^{-1})\right]\Gamma_j(S) = M\Gamma_j(S) \tag{eq16}$$

Case 1: $i = j$.

Eq15 and eq16 become $M = \sum_{R}\Gamma_i(R)X\Gamma_i(R^{-1})$ and $\Gamma_i(S)M = M\Gamma_i(S)$ respectively (we have changed the dummy index from $R'$ back to $R$ in eq16). As every element of a representation can undergo a similarity transformation to a unitary matrix, we shall assume $\Gamma_i(R)$ and its inverse are unitary matrices. According to Schur's first lemma, eq16 implies that $M = cI$, and eq15 becomes

$$cI = \sum_{R}\Gamma_i(R)X\Gamma_i(R^{-1}) \tag{eq17}$$

$\Gamma_i(R)X\Gamma_i(R^{-1})$ is a similarity transformation, where $X$ is similar to some other arbitrary matrix. Since the traces of similar matrices are the same,

$$cl_i = \operatorname{tr}(cI) = \sum_{R}\operatorname{tr}\left[\Gamma_i(R)X\Gamma_i(R^{-1})\right] = h\operatorname{tr}X \qquad\Longrightarrow\qquad c = \frac{h}{l_i}\operatorname{tr}X \tag{eq18}$$

where $l_i$ is the identity matrix's dimension, which is equal to the dimension of the matrix $\Gamma_i(R)$.

Substitute eq18 in eq17, and we have $\frac{h\operatorname{tr}X}{l_i}I = \sum_{R}\Gamma_i(R)X\Gamma_i(R^{-1})$, or in terms of matrix entries,

$$\frac{h}{l_i}\left(\sum_{k}X_{kk}\right)\delta_{mm'} = \sum_{R}\sum_{n}\sum_{n'}\Gamma_i(R)_{mn}X_{nn'}\Gamma_i(R^{-1})_{n'm'}$$

The RHS of the above equation is a finite summation of the product of three scalars and their order can be changed. So,

$$\frac{h}{l_i}\left(\sum_{k}X_{kk}\right)\delta_{mm'} = \sum_{n}\sum_{n'}X_{nn'}\sum_{R}\Gamma_i(R)_{mn}\Gamma_i(R^{-1})_{n'm'}$$

With $\sum_{k}X_{kk} = \sum_{n}\sum_{n'}X_{nn'}\delta_{nn'}$,

$$\sum_{n}\sum_{n'}X_{nn'}\left[\frac{h}{l_i}\delta_{nn'}\delta_{mm'} - \sum_{R}\Gamma_i(R)_{mn}\Gamma_i(R^{-1})_{n'm'}\right] = 0$$

Since $X$ is an arbitrary matrix, the above equation must be satisfied for any $X$. This is only possible if

$$\sum_{R}\Gamma_i(R)_{mn}\Gamma_i(R^{-1})_{n'm'} = \frac{h}{l_i}\delta_{nn'}\delta_{mm'}$$

Since $\Gamma_i(R)$ is unitary, $\Gamma_i(R^{-1})_{n'm'} = [\Gamma_i(R)^{-1}]_{n'm'} = [\Gamma_i(R)_{m'n'}]^*$, and so

$$\sum_{R}\left[\Gamma_i(R)_{m'n'}\right]^*\Gamma_i(R)_{mn} = \frac{h}{l_i}\delta_{mm'}\delta_{nn'} \tag{eq19}$$

Case 2: $i \neq j$ and $\Gamma_i$ is not equivalent to $\Gamma_j$.

According to Schur's second lemma, eq16 implies that $M = 0$, and eq15 becomes $0 = \sum_{R}\Gamma_i(R)X\Gamma_j(R^{-1})$, or in terms of matrix entries,

$$\sum_{n}\sum_{n'}X_{nn'}\sum_{R}\Gamma_i(R)_{mn}\Gamma_j(R^{-1})_{n'm'} = 0$$

Since $X$ is an arbitrary matrix, the above equation must be satisfied for any $X$. This is only possible if

$$\sum_{R}\Gamma_i(R)_{mn}\Gamma_j(R^{-1})_{n'm'} = 0$$

Since $\Gamma_j(R)$ is unitary,

$$\sum_{R}\left[\Gamma_j(R)_{m'n'}\right]^*\Gamma_i(R)_{mn} = 0, \qquad i \neq j \tag{eq20}$$

Combining eq19 and eq20, we have the expression for the great orthogonality theorem:

$$\sum_{R}\left[\Gamma_i(R)_{mn}\right]^*\Gamma_j(R)_{m'n'} = \frac{h}{l_i}\delta_{ij}\delta_{mm'}\delta_{nn'}$$

which can also be expressed as

$$\sum_{R}\left[\Gamma_i(R)_{mn}\right]^*\Gamma_j(R)_{m'n'} = \frac{h}{\sqrt{l_il_j}}\delta_{ij}\delta_{mm'}\delta_{nn'} \tag{eq20b}$$

because the RHS vanishes if $i \neq j$, which renders the subscript of $l_i$ unnecessary for $i = j$, where $l_i = l_j$.

Question

What about the case where $i \neq j$ and $\Gamma_i$ is equivalent to $\Gamma_j$?

Answer

In this case, $\Gamma_j$ can undergo a similarity transformation to become $\Gamma_i$, which in turn can undergo a similarity transformation to become elements of a unitary representation. This implies that both $\Gamma_i$ and $\Gamma_j$ can be expressed as the same unitary representation, because if $A$ is similar to $B$ and $B$ is similar to $C$, then $A$ is similar to $C$. In other words, we have $\Gamma_i(R) = \Gamma_j(R) = U(R)$ (where $U(R)$ is unitary), which is case 1.

 

Let's rewrite eq20b as

$$\sum_{R}\left[\sqrt{\frac{l_i}{h}}\,\Gamma_i(R)_{mn}\right]^*\left[\sqrt{\frac{l_j}{h}}\,\Gamma_j(R)_{m'n'}\right] = \delta_{ij}\delta_{mm'}\delta_{nn'} \tag{eq21}$$

Eq21 has the form of the inner product of two vectors, where $\sqrt{l_i/h}\,\Gamma_i(R)_{mn}$ and $\sqrt{l_j/h}\,\Gamma_j(R)_{m'n'}$ are the $h$ components of vectors $\mathbf{u}$ and $\mathbf{v}$ respectively in an $h$-dimensional vector space. We can regard the components of the vectors as functions of three indices $i$, $m$ and $n$, such that the two vectors are orthogonal to each other when $(i, m, n) \neq (j, m', n')$. This orthogonal relation of matrix entries is why eq21 is called the great orthogonality theorem.

 

Question

Verify eq21 for

i) ,
ii)  with ,  and
iii) , with .

Answer

i)
ii)
iii)

 


Little orthogonality theorem

The little orthogonality theorem consists of two relations that are reduced forms of the mathematical expression for the great orthogonality theorem.

1st little orthogonality relation

The 1st relation is derived from eq21 by letting $n = m$ and $n' = m'$, and then summing over $m$ and $m'$, resulting in

$$\sum_{R}\left[\sum_{m}\Gamma_i(R)_{mm}\right]^*\left[\sum_{m'}\Gamma_j(R)_{m'm'}\right] = \frac{h}{\sqrt{l_il_j}}\,\delta_{ij}\sum_{m}\sum_{m'}\delta_{mm'}$$

Since $\chi_i(R) = \sum_{m}\Gamma_i(R)_{mm}$, we have $\frac{h}{\sqrt{l_il_j}}\,\delta_{ij}\sum_{m}\sum_{m'}\delta_{mm'} = h\delta_{ij}$ (see Q&A below) and

$$\sum_{R}\left[\chi_i(R)\right]^*\chi_j(R) = h\delta_{ij} \tag{eq22}$$

Question

Show that $\frac{h}{\sqrt{l_il_j}}\,\delta_{ij}\sum_{m=1}^{l_i}\sum_{m'=1}^{l_j}\delta_{mm'} = h\delta_{ij}$.

Answer

If $i = j$, then $l_j = l_i$ and $\sum_{m=1}^{l_i}\sum_{m'=1}^{l_i}\delta_{mm'} = \sum_{m=1}^{l_i}1 = l_i$, so the LHS becomes $\frac{h}{l_i}\,l_i = h$. If $i \neq j$, the factor $\delta_{ij}$ makes both sides zero.

Since the traces of matrices of the same class are the same, eq22 is equivalent to

$$\sum_{k=1}^{N_c}N_k\left[\chi_i(k)\right]^*\chi_j(k) = h\delta_{ij} \tag{eq23}$$

where

    1. $N_c$ is the number of classes.
    2. $N_k$ is the number of elements in the $k$-th class.
    3. $\chi_i(k)$, called the character of a class, is the trace of a matrix belonging to the $k$-th class of the $i$-th irreducible representation.

Eq23 is known as the 1st little orthogonality relation.
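Eq23 is easy to check against a known character table. A sketch assuming NumPy, using the characters of the three irreducible representations of $C_{3v}$ (classes $E$, $2C_3$, $3\sigma_v$; the characters here are real, so conjugation is omitted):

```python
import numpy as np

h = 6
N = np.array([1, 2, 3])          # class sizes: E, 2C3, 3sigma_v
chars = np.array([[1,  1,  1],   # A1
                  [1,  1, -1],   # A2
                  [2, -1,  0]])  # E

# eq23: sum_k N_k * chi_i(k)* * chi_j(k) = h * delta_ij
for i in range(3):
    for j in range(3):
        assert np.sum(N * chars[i] * chars[j]) == h * (i == j)
```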

Question

Determine whether the representations  and  of the point group are reducible or irreducible.

Answer

Since the 1st little orthogonality relation is derived from the great orthogonality theorem, which pertains only to irreducible representations, characters of a representation that do not satisfy the relation belong to a reducible representation. Using eq23,

Therefore,  is an irreducible representation of the  point group, while  is a reducible representation of the  point group.

 

Let's rewrite eq23 as

$$\sum_{k=1}^{N_c}\left[\sqrt{\frac{N_k}{h}}\,\chi_i(k)\right]^*\left[\sqrt{\frac{N_k}{h}}\,\chi_j(k)\right] = \delta_{ij} \tag{eq24}$$

Eq24 has the form of the inner product of two vectors $\mathbf{u}$ and $\mathbf{v}$ in an $N_c$-dimensional vector space, with components $\sqrt{N_k/h}\,\chi_i(k)$ and $\sqrt{N_k/h}\,\chi_j(k)$ respectively. The components are functions of $k$, and the two vectors are orthogonal to each other when $i \neq j$. An $N_c$-dimensional vector space is spanned by $N_c$ orthogonal vectors, each with $N_c$ components. Since the number of vectors and the number of components of each vector are the number of irreducible representations and the number of classes respectively, the number of irreducible representations of a group is equal to the number of classes of that group. We say that the irreducible representations of a group form a complete set of basis vectors in the $N_c$-dimensional vector space, with the components of each basis vector being $\sqrt{N_k/h}\,\chi_i(k)$.

2nd little orthogonality relation

Consider the matrices $P$ and $Q$ with entries $P_{ik} = \sqrt{\frac{N_k}{h}}\,\chi_i(k)$ and $Q_{kj} = \sqrt{\frac{N_k}{h}}\left[\chi_j(k)\right]^*$ respectively.

Both are square matrices because the number of irreducible representations of a group is equal to the number of classes of that group. Comparing with eq24, the entries of the matrix product $PQ$ are given by:

$$(PQ)_{ij} = \sum_{k}P_{ik}Q_{kj} = \sum_{k}\frac{N_k}{h}\chi_i(k)\left[\chi_j(k)\right]^* = \delta_{ij}$$

i.e. $PQ = I$. Since $P$ and $Q$ are square matrices and $PQ = I$, then $QP = I$, i.e.

$$\sum_{i}\left[\chi_i(k)\right]^*\chi_i(k') = \frac{h}{N_k}\delta_{kk'} \tag{eq24b}$$

Eq24b is the 2nd little orthogonality relation.

Question

Show that if $P$ and $Q$ are square matrices and $PQ = I$, then $QP = I$.

Answer

Using the determinant identities of $\det(PQ) = \det P\det Q$ and $\det I = 1$, we have $\det P\det Q = 1$, which implies that $\det P$ and $\det Q$ are not zero and $P$ and $Q$ are therefore non-singular. So, $P^{-1}$ exists, and multiplying $PQ = I$ on the left by $P^{-1}$ gives $Q = P^{-1}$. Hence, $QP = P^{-1}P = I$.

 


Decomposition of group representations

The decomposition of a reducible representation of a group reduces it to the direct sum of irreducible representations of the group. We have shown in an earlier article that this involves decomposing a block diagonal matrix into the direct sum of its constituent square matrices. In this article, we shall derive some useful equations involving the characters of the representations of a group (see eq25, eq27a, eq29 and eq30 below).

Consider an element $D(R)$ of a reducible representation that has undergone a similarity transformation to the following block diagonal matrix:

$$D(R) = \begin{pmatrix}D^{(1)}(R) & & \\ & D^{(2)}(R) & \\ & & \ddots\end{pmatrix}$$

where each $D^{(i)}(R)$ is an element of an irreducible representation $\Gamma_i$ of the group.

Clearly, $\chi(R) = \sum_{i}\chi_i(R)$. Since the trace of a matrix belonging to the $k$-th class of the $i$-th irreducible representation is called the character of a class, $\chi_i(k)$, we have $\chi(k) = \sum_{i}\chi_i(k)$. However, a particular constituent irreducible representation may appear more than once in the decomposition. Therefore,

$$\chi(k) = \sum_{i}n_i\chi_i(k) \tag{eq25}$$

where $n_i$ is the number of times $\Gamma_i$ appears in the direct sum.

Multiplying eq25 by $N_k\left[\chi_j(k)\right]^*$, where $N_k$ is the number of elements in the $k$-th class, and summing over $k$,

$$\sum_{k}N_k\left[\chi_j(k)\right]^*\chi(k) = \sum_{i}n_i\sum_{k}N_k\left[\chi_j(k)\right]^*\chi_i(k) \tag{eq26}$$

Substituting eq23 in eq26,

$$\sum_{k}N_k\left[\chi_j(k)\right]^*\chi(k) = h\sum_{i}n_i\delta_{ij}$$

Since $\sum_{i}n_i\delta_{ij} = n_j$,

$$n_j = \frac{1}{h}\sum_{k}N_k\left[\chi_j(k)\right]^*\chi(k) \tag{eq27}$$

Eq27 is sometimes written as a sum over the individual symmetry operations $R$ rather than over the various classes:

$$n_j = \frac{1}{h}\sum_{R}\left[\chi_j(R)\right]^*\chi(R) \tag{eq27a}$$
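Eq27 turns the decomposition of a reducible representation into simple arithmetic. A sketch assuming NumPy, for $C_{3v}$ with an arbitrary example reducible character $\chi = (4, 1, 0)$:

```python
import numpy as np

h = 6
N = np.array([1, 2, 3])                  # class sizes: E, 2C3, 3sigma_v
irreps = {'A1': np.array([1,  1,  1]),
          'A2': np.array([1,  1, -1]),
          'E':  np.array([2, -1,  0])}
chi = np.array([4, 1, 0])                # a reducible character

# eq27: n_i = (1/h) * sum_k N_k * chi_i(k)* * chi(k)
n = {name: int(np.sum(N * c * chi)) // h for name, c in irreps.items()}
print(n)                                 # {'A1': 1, 'A2': 1, 'E': 1}
```

Here $\chi = A_1 \oplus A_2 \oplus E$, consistent with $\chi(E) = 1 + 1 + 2 = 4$.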

Question

Show that $\sum_{k}N_k\lvert\chi(k)\rvert^2 = h\sum_{i}n_i^2$.

Answer

The complex conjugate of eq25, with a change of the dummy index from $i$ to $j$, is $\left[\chi(k)\right]^* = \sum_{j}n_j\left[\chi_j(k)\right]^*$. Multiply this by eq25 and by $N_k$ and sum over $k$,

$$\sum_{k}N_k\left[\chi(k)\right]^*\chi(k) = \sum_{j}\sum_{i}n_jn_i\sum_{k}N_k\left[\chi_j(k)\right]^*\chi_i(k)$$

Swap the dummy indices of eq23 to give $\sum_{k}N_k\left[\chi_j(k)\right]^*\chi_i(k) = h\delta_{ji}$, which when substituted in the above equation, gives

$$\sum_{k}N_k\lvert\chi(k)\rvert^2 = h\sum_{j}\sum_{i}n_jn_i\delta_{ji}$$

Since $\sum_{i}n_i\delta_{ji} = n_j$,

$$\sum_{k}N_k\lvert\chi(k)\rvert^2 = h\sum_{j}n_j^2 \tag{eq28}$$

 

With reference to eq25 and eq28, if one of the $n_i$ is equal to 1, with the rest of the $n_i$ equal to zero, then $\chi(k) = \chi_i(k)$ and the representation is irreducible. This implies that, for an irreducible representation,

$$\sum_{k}N_k\lvert\chi(k)\rvert^2 = h \tag{eq29}$$

 

Question

Show that if two reducible representations $\Gamma$ and $\Gamma'$ of a group $G$ are equivalent, they decompose into the same direct sum of irreducible representations of $G$.

Answer

Since $\Gamma$ and $\Gamma'$ are related by a similarity transformation, and similar matrices have equal traces, $\chi(R) = \chi'(R)$ for every $R$. Using eq27a,

$$n_j = \frac{1}{h}\sum_{R}\left[\chi_j(R)\right]^*\chi(R) = \frac{1}{h}\sum_{R}\left[\chi_j(R)\right]^*\chi'(R) = n_j' \tag{eq30}$$

 

 


Regular representation

The regular representation of a group is a reducible representation that is generated from a rearranged multiplication table of the group. It is used to derive an important property (see eq40 below) for constructing character tables.

Consider the rearranged multiplication table for the $C_{3v}$ point group such that all the identity elements are along the diagonal, which is achieved by labelling the rows by the inverses $g_i^{-1}$ and the columns by $g_j$ (table II).

An element of the regular representation of the group $G$ is derived from table II in the form of an $h \times h$ matrix, whose entries are 1 when the element of $G$ occurs in table II and zero otherwise.

In other words, we have

$$D^{reg}(g_k)_{ij} = \begin{cases}1, & \text{if } g_i^{-1}g_j = g_k\\ 0, & \text{otherwise}\end{cases} \tag{eq31}$$

where $D^{reg}(g_k)_{ij}$ is the $i$-th row and $j$-th column matrix entry of the $g_k$-th element of the regular representation of $G$.

Question

Show that each matrix of the regular representation of the $C_{3v}$ point group has an inverse.

Answer

The entry 1 appears only once in every row (or column) of a matrix, e.g. $D^{reg}(g_k)$, of the regular representation. We can therefore swap the rows (or columns) of $D^{reg}(g_k)$ to form $I$. According to determinant property 1, $\det I = 1$, and consequently, according to determinant property 8, $\det D^{reg}(g_k) = \pm 1$. Therefore, $D^{reg}(g_k)$ is non-singular, according to determinant property 11. The same logic applies to the rest of the matrices.

 

If these derived matrices are truly elements of a representation of $G$, then they must satisfy the closure property of $G$, i.e.

$$D^{reg}(g_a)D^{reg}(g_b) = D^{reg}(g_ag_b) \tag{eq32}$$

where

$$\left[D^{reg}(g_a)D^{reg}(g_b)\right]_{ij} = \sum_{k=1}^{h}D^{reg}(g_a)_{ik}D^{reg}(g_b)_{kj} \tag{eq35}$$

Therefore, the RHS of eq35 needs to be equal to $D^{reg}(g_ag_b)_{ij}$ to satisfy eq32. To prove this, note that a term in eq35 is non-zero only if $g_i^{-1}g_k = g_a$ and $g_k^{-1}g_j = g_b$ (eq31). We multiply the condition $g_i^{-1}g_k = g_a$ on the left by $g_i$ to give $g_k = g_ig_a$. Similarly, we multiply the condition $g_k^{-1}g_j = g_b$ on the left by $g_k$ to give $g_j = g_kg_b$. Combining the two results, we have $g_j = g_ig_ag_b$, which when multiplied on the left by $g_i^{-1}$ gives $g_i^{-1}g_j = g_ag_b$. In other words, the $(i, j)$ entry of the product is 1 exactly when $g_i^{-1}g_j = g_ag_b$, i.e. exactly when $D^{reg}(g_ag_b)_{ij} = 1$, which completes the proof.
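The construction and its closure property can be demonstrated in a few lines of code. A sketch assuming NumPy, using the cyclic group $C_3 = \{e, a, a^2\}$ (with $a^ia^j = a^{(i+j)\bmod 3}$) for brevity:

```python
import numpy as np

h = 3
def D(g):
    """Regular-representation matrix: D(g)[i, j] = 1 if g * g_j = g_i."""
    M = np.zeros((h, h))
    for j in range(h):
        M[(g + j) % h, j] = 1.0
    return M

# Closure: D(g) D(g') = D(g g')
for g in range(h):
    for gp in range(h):
        assert np.allclose(D(g) @ D(gp), D((g + gp) % h))

# Characters: chi(E) = h and chi(R) = 0 for R != E (compare eq36)
assert [np.trace(D(g)) for g in range(h)] == [3.0, 0.0, 0.0]
```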

By inspecting table II, the character of the regular representation is

$$\chi^{reg}(R) = \begin{cases}h, & R = E\\ 0, & R \neq E\end{cases} \tag{eq36}$$

Question

Show that the regular representation is reducible.

Answer

If the regular representation is reducible, the LHS of eq28 must be greater than $h$. Applying the LHS of eq28 to the regular representation and using eq36,

$$\sum_{k}N_k\lvert\chi^{reg}(k)\rvert^2 = 1\cdot h^2 = h\sum_{i}n_i^2$$

For non-single-element groups, $h^2 > h$ and hence $\sum_{i}n_i^2 > 1$, i.e. the regular representation is reducible.

 

Finally, we shall prove that $n_i = l_i$. From eq25 and eq27, we have

$$\chi^{reg}(k) = \sum_{i}n_i\chi_i(k), \qquad n_i = \frac{1}{h}\sum_{k}N_k\left[\chi_i(k)\right]^*\chi^{reg}(k) \tag{eq38}$$

where $\chi^{reg}(k)$ and $\chi_i(k)$ are the characters for the $k$-th class of the regular representation and irreducible representation respectively.

Expanding eq38 and using eq36, only the class of the identity ($N_E = 1$, $\chi^{reg}(E) = h$, $\chi_i(E) = l_i$) contributes:

$$n_i = \frac{1}{h}\left[\chi_i(E)\right]^*h = l_i \tag{eq39}$$

Question

Why is $\chi_i(E) = l_i$?

Answer

The regular representation matrix for $E$ is a reducible representation element that is the direct sum of irreducible representation elements under the class of $E$. Each of these constituent irreducible representation elements is an $l_i \times l_i$ identity matrix with a trace of $l_i$.

 

Eq39 therefore states that each constituent irreducible representation occurs in the regular representation a number of times that is equal to the dimension of the corresponding irreducible representation. For example, with reference to the $C_{3v}$ character table below, $n_{A_1} = l_{A_1} = 1$, $n_{A_2} = l_{A_2} = 1$ and $n_E = l_E = 2$. In other words, each of the irreducible representations $A_1$ and $A_2$ appears once in the decomposition of the $C_{3v}$ regular representation, while the irreducible representation $E$ appears twice.

Since the matrix dimension for each element of the regular representation is $h \times h$, we have $\chi^{reg}(E) = h$ and so

$$h = \sum_{i}n_il_i = \sum_{i}l_i^2 \tag{eq40}$$

where we have used eq39 and where $l_i$ refers to the dimension of the $i$-th irreducible representation of a group.

 


Character table

A character table is a square matrix whose rows and columns are the irreducible representations $\Gamma_i$ of a point group $G$ and the classes $C_k$ of $G$, respectively, with the entries being the characters $\chi_i(k)$ of elements of the corresponding irreducible representation.

It has the general form:

$$\begin{array}{c|ccc} G & N_1C_1 & N_2C_2 & \cdots\\ \hline \Gamma_1 & \chi_1(1) & \chi_1(2) & \cdots\\ \Gamma_2 & \chi_2(1) & \chi_2(2) & \cdots\\ \vdots & \vdots & \vdots & \ddots\end{array}$$

where $N_k$ is the number of elements in the $k$-th class.

The construction of a character table is based on the following properties, some of which are related to the little orthogonality theorem:

    1. There is always a one-dimensional irreducible representation, called a totally symmetric representation, in which all the characters are 1. It is conventionally denoted by $\Gamma_1$. This is because a representation of a group is a collection of matrices that multiply according to the multiplication table of the group, and the collection of $1 \times 1$ matrices, each with entry 1, always multiply according to the multiplication table of that group.
    2. The number of irreducible representations is equal to the number of classes of the group. This is a consequence of the 1st little orthogonality relation.
    3. The sum of the squares of the dimensions of all the irreducible representations of a group is equal to the order of the group, i.e. $\sum_i l_i^2 = h$, which is eq40.
    4. Each irreducible representation is regarded as a basis vector in an $N_c$-dimensional vector space, with the weighted characters of an irreducible representation being the components of the corresponding basis vector. Therefore, irreducible representations have orthonormal relations with one another. These relations are characterised by either
      a)     $\sum_k \frac{N_k}{h}\left[\chi_i(k)\right]^*\chi_j(k) = \delta_{ij}$ (see eq24); or
      b)     $\sum_i \left[\chi_i(k)\right]^*\chi_i(k') = \frac{h}{N_k}\delta_{kk'}$ (see eq24b)
    5. The weighted sum of the squares of the characters of an irreducible representation is equal to the order of the group, i.e. $\sum_k N_k\lvert\chi_i(k)\rvert^2 = h$, which is eq29.

For example, the character table for the $C_{3v}$ point group is constructed by first noting that there are six symmetry operations ($E$, $2C_3$, $3\sigma_v$) that belong to three classes. Using properties 1 and 2 mentioned above, there are three irreducible representations, with the first being the totally symmetric representation.

Using the 3rd property, $l_1^2 + l_2^2 + l_3^2 = 6$. Since $l_1 = 1$, let $l_2 = 1$ and $l_3 = 2$ (see the sketch below for a check that this is the only solution). Furthermore, the characters of a $1 \times 1$ identity matrix and a $2 \times 2$ identity matrix are 1 and 2 respectively. Therefore, we have

$$\begin{array}{c|ccc} C_{3v} & E & 2C_3 & 3\sigma_v\\ \hline \Gamma_1 & 1 & 1 & 1\\ \Gamma_2 & 1 & \chi_2(C_3) & \chi_2(\sigma_v)\\ \Gamma_3 & 2 & \chi_3(C_3) & \chi_3(\sigma_v)\end{array}$$
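A brute-force check, assuming Python, that $(1, 1, 2)$ is the only set of dimensions consistent with property 3 for a group of order 6 with three classes:

```python
from itertools import product

# l1 = 1 is fixed by the totally symmetric representation; search l2, l3
solutions = {tuple(sorted((1, l2, l3)))
             for l2, l3 in product(range(1, 3), repeat=2)
             if 1 + l2**2 + l3**2 == 6}
print(solutions)   # {(1, 1, 2)}
```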

For the orthogonal vectors corresponding to $\Gamma_1$ and $\Gamma_2$, property 4a states that

$$1 + 2\chi_2(C_3) + 3\chi_2(\sigma_v) = 0 \tag{eq41}$$

with the 5th property requiring

$$1 + 2\left[\chi_2(C_3)\right]^2 + 3\left[\chi_2(\sigma_v)\right]^2 = 6 \tag{eq42}$$

Since a representation of a group is a collection of matrices that multiply according to the multiplication table of the group, the $1 \times 1$ matrices of $\Gamma_2$ (where the character itself is a matrix) must multiply according to the $C_{3v}$ multiplication table.

In other words, since $\sigma_v\sigma_v = E$ and the elements (in matrix form) of the same class in a group have the same trace, $\left[\chi_2(\sigma_v)\right]^2 = \chi_2(E) = 1$, i.e. $\chi_2(\sigma_v) = \pm 1$. Substituting $\chi_2(\sigma_v) = \pm 1$ in eq42 gives $\left[\chi_2(C_3)\right]^2 = 1$. The solution $\chi_2(C_3) = -1$ is rejected (see Q&A below for explanation), and substituting $\chi_2(C_3) = 1$ in eq41 gives $\chi_2(\sigma_v) = -1$. We now have:

$$\begin{array}{c|ccc} C_{3v} & E & 2C_3 & 3\sigma_v\\ \hline \Gamma_1 & 1 & 1 & 1\\ \Gamma_2 & 1 & 1 & -1\\ \Gamma_3 & 2 & \chi_3(C_3) & \chi_3(\sigma_v)\end{array}$$

Question
    1. How about the solution set of $\chi_2(C_3) = -1$ and $\chi_2(\sigma_v) = \frac{1}{3}$ from eq41?
    2. Can we use the same logic in determining the characters of $\Gamma_2$ to derive the characters of $\Gamma_3$?
Answer
    1. This solution set is rejected, as it is not consistent with property 5: $1 + 2(-1)^2 + 3(\tfrac{1}{3})^2 \neq 6$.
    2. No, because we will end up with more variables than we can solve for. The characters of $\Gamma_3$ are traces of $2 \times 2$ matrices, and the trace of a product of matrices is not in general the product of their traces. Therefore, the method used to determine the characters of $\Gamma_2$ only works if the representation is one-dimensional.

 

The remaining characters are obtained using property 4b, where $\sum_i\left[\chi_i(E)\right]^*\chi_i(C_3) = 0$ and $\sum_i\left[\chi_i(E)\right]^*\chi_i(\sigma_v) = 0$. Therefore, $1 + 1 + 2\chi_3(C_3) = 0$ gives $\chi_3(C_3) = -1$, and $1 - 1 + 2\chi_3(\sigma_v) = 0$ gives $\chi_3(\sigma_v) = 0$, with

$$\begin{array}{c|ccc} C_{3v} & E & 2C_3 & 3\sigma_v\\ \hline \Gamma_1 & 1 & 1 & 1\\ \Gamma_2 & 1 & 1 & -1\\ \Gamma_3 & 2 & -1 & 0\end{array}$$


The character table of a point group can also be generated using basis functions, which are defined in the next article. Some point groups, e.g. $C_4$ and $C_6$, have a "two-dimensional" representation labelled $E$ that contains two rows of characters that are complex conjugates of each other. Each of these rows of characters is an irreducible representation in its own right, with the total number of irreducible representations equal to the number of classes of the group (property 2).

Finally, the generic irreducible representation symbols $\Gamma_i$ are replaced by Mulliken symbols in character tables for clarity, where

    1. $A$: one-dimensional representation that is symmetric with respect to rotation about the principal axis, i.e. $\chi(C_n) = 1$.
    2. $B$: one-dimensional representation that is antisymmetric with respect to rotation about the principal axis, i.e. $\chi(C_n) = -1$.
    3. $E$: two-dimensional representation (not to be confused with the identity element of the group).
    4. $T$: three-dimensional representation.
    5. If there is more than one representation of a given dimension, subscripts are used to differentiate them, e.g. $A_1$ and $A_2$.
    6. Molecular term symbols ($\Sigma$, $\Pi$, $\Delta$ and $\Phi$), in addition to the Mulliken $A$ and $E$, are used to label the irreducible representations of the axial point groups $C_{\infty v}$ and $D_{\infty h}$. The molecular term symbol for an irreducible representation corresponds to the magnitude of the quantum number $m_l$ of a hydrogenic basis wavefunction that is used to generate the representation (see this article for derivation), with $\Sigma$, $\Pi$, $\Delta$ and $\Phi$ corresponding to $\lvert m_l\rvert = 0, 1, 2, 3$ respectively. The use of molecular term symbols in character tables has useful applications in spectroscopy.
    7. The subscript $g$ (from the German gerade, meaning even) refers to representations that are symmetric with respect to inversion, while the subscript $u$ (from the German ungerade, meaning odd) refers to representations that are antisymmetric with respect to inversion.
    8. The superscript $'$ refers to representations that are symmetric with respect to $\sigma_h$, while the superscript $''$ refers to representations that are antisymmetric with respect to $\sigma_h$.
    9. The superscript $+$ refers to representations that are symmetric with respect to $\sigma_v$, while the superscript $-$ refers to representations that are antisymmetric with respect to $\sigma_v$.

So, we have

$$\begin{array}{c|ccc} C_{3v} & E & 2C_3 & 3\sigma_v\\ \hline A_1 & 1 & 1 & 1\\ A_2 & 1 & 1 & -1\\ E & 2 & -1 & 0\end{array}$$

The last part of a character table lists basis functions that transform according to the irreducible representations of the group. We shall elaborate on this in the next article.

 


Basis

A basis of a group is a set of objects that transforms according to representations of the group. The objects may be vectors, pseudovectors, functions, bond angles, etc. They are not necessarily orthogonal but are usually chosen to be orthogonal for certain applications.

To explain the definition of a basis, we consider the $C_{2v}$ point group with the following character table:

$$\begin{array}{c|cccc} C_{2v} & E & C_2 & \sigma_v(xz) & \sigma_v'(yz)\\ \hline A_1 & 1 & 1 & 1 & 1\\ A_2 & 1 & 1 & -1 & -1\\ B_1 & 1 & -1 & 1 & -1\\ B_2 & 1 & -1 & -1 & 1\end{array}$$

Let's examine how the symmetry operators of $C_{2v}$ transform the $2p$-orbital wavefunctions.

$p_z$, by inspection, is invariant when acted upon by each of the four symmetry operators of $C_{2v}$ (see diagram below). Mathematically, we obtain the same eigenvalue of $+1$ after each operation, e.g. $C_2p_z = (+1)p_z$. We say that $p_z$ transforms according to the totally symmetric representation $A_1$. Repeating the logic, $p_x$ and $p_y$ transform according to $B_1$ and $B_2$ respectively.

From this article,

$$p_x = xf(r), \qquad p_y = yf(r), \qquad p_z = zf(r)$$

where $f(r)$ is a function of $r$ only and $r$ is the distance from the origin.

Hence, the function $f(r)$ is invariant to all symmetry operations of a point group. This implies that the $p$-orbital wavefunctions transform in the same way as the linear functions $x$, $y$ and $z$. We call these functions basis functions and include them in the character table as follows:

The basis functions $x$, $y$ and $z$ in the character table of a point group also represent the independent translational motions of a molecule of that point group. To elaborate further, let's define the independent translational motions of a molecule as the displacement of the molecule in the $x$, $y$ and $z$ directions. For example, the diagram below describes the displacement of $H_2O$ in the $x$-direction in terms of unit instantaneous displacement vectors that are centred on the atoms, or equivalently, a single instantaneous displacement vector on the centre of mass.

Similarly, the translational motions of $H_2O$ in the $y$-direction and $z$-direction are described by corresponding displacement vectors on the centre of mass. These three linearly independent displacement vectors form a basis set of a representation $\Gamma_{trans}$ of $C_{2v}$ because any one of the vectors is transformed by the symmetry operations of $C_{2v}$ into a linear combination of the vectors in the set. Symmetry operations acting on the basis set produce the following results:

The corresponding matrix transformation equations are:

The transformation matrices form $\Gamma_{trans}$, which is reducible and in block-diagonal form, with $x$, $y$ and $z$ transforming according to $B_1$, $B_2$ and $A_1$ respectively. Therefore, the bases $x$, $y$ and $z$ in the character table represent the independent translational motions of $H_2O$.

We can also use instantaneous displacement vectors to describe rotational motions of a molecule. The rotation of $H_2O$ about the $z$-axis is shown in the diagram below, with three instantaneous displacement vectors tangent to the two circular paths of motion.

Just like the way we reduce the three translational displacement vectors to a single vector centred on the centre of mass of $H_2O$, we represent the three rotational vectors with $R_z$, which is a pseudovector characterised by two quantities: a rotation direction that is defined by the right-hand rule and a magnitude that represents the rotation angle $\phi$. Therefore, the transformation of the three instantaneous vectors by the symmetry operation $\sigma_v(xz)$ is equivalent to the reflection of the rotation direction of $R_z$ (curved grey arrow) about the $xz$ plane. The symmetry operation $C_2$ leaves the instantaneous vectors, and hence the rotation direction of $R_z$, unchanged.

Symmetry operations acting on the basis set produce the following results:

The corresponding matrix transformation equations are:

The transformation matrices form $\Gamma_{rot}$, which is again reducible and in block-diagonal form, with $R_z$, $R_x$ and $R_y$ transforming according to $A_2$, $B_2$ and $B_1$ respectively. Therefore, the bases $R_x$, $R_y$ and $R_z$ in the character table represent the independent rotational motions of $H_2O$.

As such, we can revise the character table as follows:

$d$-orbital wavefunctions, which contain quadratic polynomials in $x$, $y$ and $z$, also form a basis. For example, the $d_{xy}$ orbital wavefunction transforms according to $A_2$ (see diagram below).

Consequently, we have

Question

Can $f$-orbital wavefunctions form a basis for a group?

Answer

Yes, they are classified as a basis of cubic functions.

 

Lastly, character tables for some point groups have certain elements of a basis in parentheses. An example is the $C_{3v}$ point group:

which in the matrix representation form is

The notation $(x, y)$ means that $x$ and $y$ transform together but not independently under a symmetry operation. To elaborate on this, let's consider the wavefunctions $p_x$ and $p_y$. Separately, each wavefunction does not transform according to any of the three irreducible representations of $C_{3v}$. This can be verified by letting any point on either of the wavefunctions undergo the $C_3$ symmetry operation. The eigenvalue obtained does not match any element of any irreducible representation of $C_{3v}$.

However, if we subject both wavefunctions together to the symmetry operation $C_3$, e.g. by choosing a pair of corresponding points on $p_x$ and $p_y$ respectively (see diagram below), we obtain, after some simple geometry, the transformed points on both wavefunctions.

In matrix form, we have

or, in a new notation,

Repeating the same logic for the rest of the symmetry operators and comparing the results with the table depicting the matrix representation of $C_{3v}$, we find that the wavefunctions $p_x$ and $p_y$, which are equivalent to the basis functions $x$ and $y$, transform together according to the irreducible representation $E$. We also find that a basis can be used to give rise to matrix representations of a group, since the matrix form of $E$ is generated in the process.

Using the same logic for the rotation vectors and the $d$-orbitals, we find that the pairs $(R_x, R_y)$, $(x^2 - y^2, xy)$ and $(xz, yz)$ transform according to the irreducible representation $E$.

An important implication of the basis functions $x$ and $y$ transforming together under the symmetry operations of a particular point group is that there is no difference in symmetry between them with respect to that point group. In other words, the two basis functions and certain properties they possess may be treated as equivalent and indistinguishable. An example is the common energy state that corresponds to $p_x$ and $p_y$, i.e. a degenerate energy state. Therefore, group theory plays a useful role in spectroscopic analysis.

 


Symmetry of the Hamiltonian

The Hamiltonian is invariant under a symmetry operation if its expression in different bases related by the symmetry operation is the same.

Our objective is to prove the above statement and to show that the Hamiltonian commutes with a symmetry operation $R$ if it is invariant under $R$.

Let's consider the non-relativistic multi-electron Hamiltonian $\hat{H}$:

$$\hat{H} = -\frac{\hbar^2}{2m_e}\sum_{i}\nabla_i^2 - \sum_{i,N}\frac{Z_Ne^2}{4\pi\varepsilon_0 r_{iN}} + \sum_{i<j}\frac{e^2}{4\pi\varepsilon_0 r_{ij}} \tag{eq43}$$

where $\nabla_i^2 = \frac{\partial^2}{\partial x_i^2} + \frac{\partial^2}{\partial y_i^2} + \frac{\partial^2}{\partial z_i^2}$, $r_{iN} = \lvert\mathbf{r}_i - \mathbf{R}_N\rvert$ is the distance between electron $i$ and nucleus $N$, and $r_{ij} = \lvert\mathbf{r}_i - \mathbf{r}_j\rvert$ is the distance between electrons $i$ and $j$.

The first term on the RHS of eq43 is the kinetic energy operator. To prove that it is invariant under any symmetry operation, we need to show that it is rotation-invariant and reflection-invariant. As mentioned in this article, we can analyse a symmetry operation in terms of the change of basis of a coordinate system. The following matrix equation expresses the change of basis of the coordinates by a rotation about the $z$-axis:

$$\begin{pmatrix}x\\ y\\ z\end{pmatrix} = \begin{pmatrix}\cos\theta & -\sin\theta & 0\\ \sin\theta & \cos\theta & 0\\ 0 & 0 & 1\end{pmatrix}\begin{pmatrix}x'\\ y'\\ z'\end{pmatrix} \tag{eq44}$$

where $(x, y, z)$ are components of a vector in the old basis, while $(x', y', z')$ are components of the same vector in the new basis.

Since $z = z'$, we have $\frac{\partial^2}{\partial z^2} = \frac{\partial^2}{\partial z'^2}$. That leaves us to show that $\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} = \frac{\partial^2}{\partial x'^2} + \frac{\partial^2}{\partial y'^2}$. From eq44,

$$x = x'\cos\theta - y'\sin\theta \tag{eq45}$$

$$y = x'\sin\theta + y'\cos\theta \tag{eq46}$$

Using the multivariable chain rule $\frac{\partial}{\partial x'} = \frac{\partial x}{\partial x'}\frac{\partial}{\partial x} + \frac{\partial y}{\partial x'}\frac{\partial}{\partial y}$ (eq47),

$$\frac{\partial}{\partial x'} = \cos\theta\frac{\partial}{\partial x} + \sin\theta\frac{\partial}{\partial y} \qquad\text{and}\qquad \frac{\partial}{\partial y'} = -\sin\theta\frac{\partial}{\partial x} + \cos\theta\frac{\partial}{\partial y} \tag{eq48}$$

where we have substituted eq45 and eq46 in eq47 to obtain $\frac{\partial x}{\partial x'} = \cos\theta$, $\frac{\partial y}{\partial x'} = \sin\theta$, $\frac{\partial x}{\partial y'} = -\sin\theta$ and $\frac{\partial y}{\partial y'} = \cos\theta$. Applying $\frac{\partial}{\partial x'}$ twice using eq48 gives:

$$\frac{\partial^2}{\partial x'^2} = \cos^2\theta\frac{\partial^2}{\partial x^2} + 2\sin\theta\cos\theta\frac{\partial^2}{\partial x\partial y} + \sin^2\theta\frac{\partial^2}{\partial y^2} \tag{eq49}$$

Repeating the above logic for $\frac{\partial^2}{\partial y'^2}$, we have

$$\frac{\partial^2}{\partial y'^2} = \sin^2\theta\frac{\partial^2}{\partial x^2} - 2\sin\theta\cos\theta\frac{\partial^2}{\partial x\partial y} + \cos^2\theta\frac{\partial^2}{\partial y^2} \tag{eq50}$$

Adding eq49 and eq50 completes the proof, since $\sin^2\theta + \cos^2\theta = 1$.

The change of basis by a reflection, e.g. in the $xz$-plane, is given by

$$\begin{pmatrix}x\\ y\\ z\end{pmatrix} = \begin{pmatrix}1 & 0 & 0\\ 0 & -1 & 0\\ 0 & 0 & 1\end{pmatrix}\begin{pmatrix}x'\\ y'\\ z'\end{pmatrix} \tag{eq51}$$

Since $x = x'$ and $z = z'$, we only need to show that $\frac{\partial^2}{\partial y^2} = \frac{\partial^2}{\partial y'^2}$, which is achieved using the above logic for proving $\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} = \frac{\partial^2}{\partial x'^2} + \frac{\partial^2}{\partial y'^2}$.

The second and third terms on the RHS of eq43 (the potential energy operators) depend on the electron-nuclear distances $r_{iN}$ and the inter-electron distances $r_{ij}$ respectively. Since electron-nuclear distances and inter-electron distances do not change under symmetry operations, these two terms are invariant as well.
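The rotation-invariance of the Laplacian can also be checked symbolically. A sketch assuming SymPy, which verifies the identity for one arbitrary test function (the function choice is illustrative, not from the text):

```python
import sympy as sp

t, xp, yp = sp.symbols('t xp yp')   # rotation angle and primed coordinates

# eq45 and eq46: old coordinates in terms of the rotated (primed) ones
x = xp * sp.cos(t) - yp * sp.sin(t)
y = xp * sp.sin(t) + yp * sp.cos(t)

u, v = sp.symbols('u v')
f = sp.exp(u) * sp.sin(v) + u**3 * v             # an arbitrary test function
lap = sp.diff(f, u, 2) + sp.diff(f, v, 2)        # f_uu + f_vv in old coordinates

g = f.subs({u: x, v: y}, simultaneous=True)      # same function, primed basis
lap_primed = sp.diff(g, xp, 2) + sp.diff(g, yp, 2)

# the difference simplifies to zero, consistent with adding eq49 and eq50
print(sp.simplify(lap_primed - lap.subs({u: x, v: y}, simultaneous=True)))
```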

Hence, $\hat{H}$ is invariant to a change of basis related by $R$, and is therefore invariant under a symmetry operation $R$. Since $\hat{H}$ is invariant to a change of basis, it undergoes the following similarity transformation:

$$\hat{H} = R^{-1}\hat{H}R \tag{eq52}$$

or equivalently,

$$R\hat{H} = \hat{H}R \tag{eq53}$$

Eq53 states that $\hat{H}$ commutes with $R$, which is a consequence of $\hat{H}$ being invariant under the symmetry operation $R$.

 
