Class

A class consists of elements of a group that are conjugate to one another. Two elements $A$ and $B$ are conjugate to each other if $B = XAX^{-1}$, where $X \in G$. If $A$ is conjugate to $B$, then $B$ is conjugate to $A$ because

$A = X^{-1}BX = YBY^{-1}$

where $Y = X^{-1}$ (note that $X^{-1} \in G$ according to the inverse property of a group).

The identity element $E$ is a class by itself since $XEX^{-1} = XX^{-1} = E$ for every $X \in G$.

If the elements of $G$ are represented by matrices, $B = XAX^{-1}$ is called a similarity transformation. Furthermore, if $A$ and $B$ are conjugate to each other, and $B$ and $C$ are conjugate to each other, then $A$ and $C$ are conjugate to each other. This is because $B = XAX^{-1}$ and $C = YBY^{-1}$, and therefore $C = YXAX^{-1}Y^{-1} = (YX)A(YX)^{-1} = ZAZ^{-1}$, where $Z = YX \in G$.
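To make the conjugation relations concrete, here is a minimal numerical sketch, assuming the $C_{3v}$ example used in the question below and a faithful $2\times2$ matrix representation of it; the element labels and the helper function name_of are illustrative and not part of the original article. Conjugating each element by every element of the group sorts the group into its classes:

```python
import numpy as np

def rot(theta):
    """2x2 rotation matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def refl(theta):
    """2x2 reflection about a line at angle theta."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[c, s], [s, -c]])

# A faithful 2x2 matrix representation of C3v (assumed example group).
ops = {
    "E": np.eye(2),
    "C3": rot(2 * np.pi / 3),
    "C3^2": rot(4 * np.pi / 3),
    "sigma_v": refl(0.0),
    "sigma_v'": refl(np.pi / 3),
    "sigma_v''": refl(2 * np.pi / 3),
}

def name_of(M):
    """Identify a matrix with a group element (within numerical tolerance)."""
    for name, A in ops.items():
        if np.allclose(M, A):
            return name
    raise ValueError("matrix is not an element of the group")

# The class of A is the set {X A X^-1 : X in G}.
for name, A in ops.items():
    cls = {name_of(X @ A @ np.linalg.inv(X)) for X in ops.values()}
    print(f"class of {name}: {sorted(cls)}")
```

Running this prints one class containing only the identity, one containing the two rotations, and one containing the three reflections, in line with the worked example below.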

Question

Show that the symmetry operators $\sigma_v$, $\sigma_v'$ and $\sigma_v''$ of the $C_{3v}$ point group belong to the same class.

Answer

With reference to the multiplication table,

we have

$C_3\,\sigma_v\,C_3^{-1} = \sigma_v'$

Similarly, $C_3\,\sigma_v'\,C_3^{-1} = \sigma_v''$. Since $\sigma_v$ and $\sigma_v'$ are conjugate to each other, and $\sigma_v'$ and $\sigma_v''$ are conjugate to each other, then $\sigma_v$ and $\sigma_v''$ are conjugate to each other. Therefore, $\sigma_v$, $\sigma_v'$ and $\sigma_v''$ form a class. Using the same logic, we find that $C_3$ and $C_3^2$ form another class.

 

All elements of the same class in a group $G$ have the same order, which is defined as the smallest value of $n$ such that $A^n = E$, where $A \in G$. This is because if $B$ is conjugate to $A$, we have

$B^n = (XAX^{-1})^n = XA^nX^{-1} = XEX^{-1} = E$

The above equation of $B^n = E$ is valid if and only if $A^n = E$. This means that the smallest value of $n$ in $A^n = E$ and in $B^n = E$ must be the same. Therefore, elements $A$ and $B$ of the same class in a group have the same order, which we denote by $n$.

Question

Verify that the symmetry operators $\sigma_v$, $\sigma_v'$ and $\sigma_v''$ of the $C_{3v}$ point group have the same order of 2.

Answer

It is clear that when the reflection operator $\sigma_v$ acts on a shape twice, it sends the shape into itself. The same goes for $\sigma_v'$ and $\sigma_v''$. Hence, $\sigma_v^2 = \sigma_v'^2 = \sigma_v''^2 = E$ and all three operators have order 2.

 

As mentioned in an earlier article, the similarity transformation of a matrix $A$ to a matrix $B = XAX^{-1}$ leaves the trace of $A$, which is defined as $\mathrm{tr}(A) = \sum_i a_{ii}$, invariant. This implies that elements of the same class in a group have the same trace.

 


Group representations

A representation of a group $G$ is a collection of square matrices that multiply according to the multiplication table of the group.

Consider the multiplication table for the point group $C_{3v}$:

Clearly, the collections of $1\times1$ matrices $\Gamma_1$ (with every element equal to 1) and $\Gamma_2$ (with $+1$ for the identity and the rotations, and $-1$ for the reflections) are representations of the $C_{3v}$ point group because they multiply according to the above multiplication table:

An example of a representation of the $C_{3v}$ point group consisting of $2\times2$ matrices is $\Gamma_3$:

It is easy to verify that the elements of $\Gamma_3$ multiply according to the multiplication table of $C_{3v}$.

These three representations of $C_{3v}$ can be summarised as follows:

Question

Show that there are three classes for the $C_{3v}$ point group.

Answer

This is easily accomplished by inspecting the elements of the two-dimensional representation $\Gamma_3$ and noting that group elements of the same class have the same trace.

 

The dimension of a representation refers to the common order of its square matrices, e.g. the dimensions of $\Gamma_1$ and $\Gamma_3$ are 1 and 2 respectively. A particular element of a representation is denoted by $\Gamma_i(R)$, where $i$ refers to the representation and $R$ refers to the corresponding symmetry operator. For example, $\Gamma_3(C_3)$ is the $2\times2$ matrix representing $C_3$ in $\Gamma_3$. The matrix element of a particular element of a representation is denoted by $[\Gamma_i(R)]_{mn}$, e.g. $[\Gamma_3(C_3)]_{12}$.

There are other representations of the $C_{3v}$ point group, e.g. $\Gamma_4$ with the following elements:

If we inspect these matrices, we realise that they are of the same form, in the sense that we can group the matrix elements into submatrices that lie along the diagonals as illustrated. We call such submatrices blocks, and matrices containing blocks along their diagonals block diagonal matrices.

The consequence of the submatrices in each of the $\Gamma_4$ matrices being isolated by zeros is that the smaller submatrices in the multiplication of any two $\Gamma_4$ matrices do not interfere with the larger submatrices. Since the collection of elements in the $1\times1$ blocks is the same as the collection of elements of $\Gamma_1$, and the $2\times2$ submatrices of the bigger blocks are the same as the collection of elements of $\Gamma_3$, we can conclude that the $\Gamma_4$ matrices multiply according to the multiplication table of $C_{3v}$ without actually having to work out all the multiplications.

Another consequence of the way block diagonal matrices multiply with one another is that an infinite number of representations can be formed by adding elements of $\Gamma_1$ and $\Gamma_3$ to the matrices of $\Gamma_4$ to form larger matrices. For example, the addition of the elements of $\Gamma_1$ to the $\Gamma_4$ matrices gives $\Gamma_5$, with the following elements:

We call this matrix operation of adding blocks to form a larger block diagonal matrix a direct sum, which is denoted by the symbol $\oplus$. So, $\Gamma_4 = \Gamma_1 \oplus \Gamma_3$ and $\Gamma_5 = \Gamma_1 \oplus \Gamma_4$.
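As a quick check of how direct sums behave under multiplication, here is a minimal sketch (the matrices are arbitrary placeholders, not the article's $\Gamma_1$ and $\Gamma_3$). Block diagonal matrices multiply block by block, so a direct sum of two representations is again a representation:

```python
import numpy as np

def direct_sum(A, B):
    """Return the block-diagonal matrix A (+) B."""
    top = np.hstack([A, np.zeros((A.shape[0], B.shape[1]))])
    bottom = np.hstack([np.zeros((B.shape[0], A.shape[1])), B])
    return np.vstack([top, bottom])

# Two pairs of representation matrices (placeholders; any square matrices work):
A1, A2 = np.array([[1.0]]), np.array([[-1.0]])        # 1x1 blocks
B1 = np.array([[0.0, -1.0], [1.0, 0.0]])              # 2x2 blocks
B2 = np.array([[1.0, 0.0], [0.0, -1.0]])

# (A1 (+) B1)(A2 (+) B2) equals (A1 A2) (+) (B1 B2):
lhs = direct_sum(A1, B1) @ direct_sum(A2, B2)
rhs = direct_sum(A1 @ A2, B1 @ B2)
print(np.allclose(lhs, rhs))   # True
```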

With an infinite number of representations, we have to figure out which handful of representations of a group is sufficient for classifying molecules by symmetry and for analysing molecular properties. To do so, we first need to understand the difference between a reducible representation and an irreducible representation.

 


Reducible and irreducible representations

A reducible representation is a group representation whose elements either have the same block diagonal matrix form or can undergo similarity transformations with the same invertible matrix to form block diagonal matrices of the same form.

Consider the following representations of the $C_{3v}$ point group:

By inspection, all the elements of $\Gamma_4$ have the same block diagonal matrix form (a $1\times1$ block followed by a $2\times2$ block along the diagonal). Therefore, $\Gamma_4$ is a reducible representation of the $C_{3v}$ point group.

Let's consider another representation of the $C_{3v}$ point group:

The elements do not have the same block diagonal matrix form. However, all of them undergo similarity transformations with the same invertible matrix to give block diagonal matrices of the same form. Hence, this representation is also a reducible representation of the $C_{3v}$ point group. Representations that are related by a similarity transformation are called equivalent representations. It is evident that the elements of a reducible representation may not be in the same block diagonal form, and will only have this form if the appropriate basis is chosen.

A final point about reducible representations is that an element of a reducible representation of a group $G$ is composed of the direct sum of the matrices of other representations of $G$ that correspond to the same element of $G$. For example, each matrix of $\Gamma_4$ of the $C_{3v}$ point group is the direct sum of the corresponding matrices of $\Gamma_1$ and $\Gamma_3$. In other words, a reducible representation can be decomposed or reduced to representations of lower dimensions.
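The following sketch illustrates the point numerically (the matrices and the change-of-basis matrix $S$ are arbitrary placeholders): conjugating a block diagonal matrix by an invertible matrix hides the block structure, and applying the same similarity transformation in reverse recovers it.

```python
import numpy as np

# A block diagonal matrix (direct sum of a 1x1 and a 2x2 block),
# standing in for one element of a reducible representation:
D = np.array([[1.0, 0.0,             0.0],
              [0.0, -0.5,           -np.sqrt(3) / 2],
              [0.0, np.sqrt(3) / 2, -0.5]])

# An arbitrary invertible (non-unitary) change of basis:
S = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

D_equiv = S @ D @ np.linalg.inv(S)       # equivalent element; no longer block diagonal
D_back = np.linalg.inv(S) @ D_equiv @ S  # the same similarity transformation recovers the block form

print(np.round(D_equiv, 3))
print(np.allclose(D_back, D))            # True
```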

An irreducible representation is a group representation whose elements cannot undergo similarity transformations with the same invertible matrix to form block diagonal matrices of the same form. Hence, an irreducible representation cannot be decomposed or reduced further to a representation of lower dimension. $\Gamma_1$, $\Gamma_2$ and $\Gamma_3$ are examples of irreducible representations of the $C_{3v}$ point group. Every point group has a trivial, one-dimensional irreducible representation with each element being 1.

 


Unitary representation

A unitary representation of a group consists of elements that are unitary matrices.

Every representation of a group can be described in terms of unitary matrices. Specifically, matrices of a representation of a group can be expressed as unitary matrices via a common similarity transformation without any loss of generality. The proof is as follows:

Consider a group $G = \{A_1, A_2, \ldots, A_h\}$, where each element is an $n \times n$ matrix. Let's construct a new matrix $H$ out of the elements of $G$, where $H = \sum_i A_iA_i^{\dagger}$.

Question

Prove the matrix identity $(AB)^{\dagger} = B^{\dagger}A^{\dagger}$.

Answer

 

Using the identity mentioned above,

$H^{\dagger} = \Big(\sum_i A_iA_i^{\dagger}\Big)^{\dagger} = \sum_i \big(A_i^{\dagger}\big)^{\dagger}A_i^{\dagger} = \sum_i A_iA_i^{\dagger} = H$

Therefore, $H$ is a Hermitian matrix, which can be diagonalised by a unitary matrix $U$, i.e. $D = U^{-1}HU$. Using $U^{-1} = U^{\dagger}$ and the above identity again, we have,

or

where $\tilde{A}_i = U^{-1}A_iU$.

The diagonal elements of $D$ are $d_{kk} = \sum_i\sum_j (\tilde{A}_i)_{kj}(\tilde{A}_i^{\dagger})_{jk}$ and hence, the diagonal elements of $D$ are $d_{kk} = \sum_i\sum_j \lvert(\tilde{A}_i)_{kj}\rvert^2 \geq 0$. Moreover, one of the elements $A_i$ of $G$ is the identity matrix, for which $\tilde{A}_i = I$ and $\sum_j \lvert(I)_{kj}\rvert^2 = 1$. So, $d_{kk} > 0$.

Question

Why is $(\tilde{A}_i)_{kj}(\tilde{A}_i^{\dagger})_{jk} = \lvert(\tilde{A}_i)_{kj}\rvert^2$?

Answer

$(\tilde{A}_i^{\dagger})_{jk} = (\tilde{A}_i)_{kj}^*$ implies that $(\tilde{A}_i)_{kj}(\tilde{A}_i^{\dagger})_{jk} = (\tilde{A}_i)_{kj}(\tilde{A}_i)_{kj}^*$, where $(\tilde{A}_i)_{kj}$ is a complex number. The modulus of a complex number $z$ is $\lvert z\rvert = \sqrt{zz^*}$. Hence, $(\tilde{A}_i)_{kj}(\tilde{A}_i^{\dagger})_{jk} = \lvert(\tilde{A}_i)_{kj}\rvert^2$.

 

Let $D^{1/2}$ and $D^{-1/2}$ be the diagonal matrices

$D^{1/2} = \mathrm{diag}\big(\sqrt{d_{11}}, \ldots, \sqrt{d_{nn}}\big) \qquad D^{-1/2} = \mathrm{diag}\big(1/\sqrt{d_{11}}, \ldots, 1/\sqrt{d_{nn}}\big)$

where $d_{kk} > 0$, $(D^{\pm1/2})^{\dagger} = D^{\pm1/2}$ and $D^{1/2}D^{-1/2} = D^{-1/2}D^{1/2} = I$.

Consider a new set of matrices $\{\hat{A}_i\}$, where $\hat{A}_i = D^{-1/2}\tilde{A}_iD^{1/2}$. Since $D^{1/2}$ is diagonal and each $\sqrt{d_{kk}}$ is real and positive, we have $(D^{1/2})^{\dagger} = D^{1/2}$. Therefore, $\hat{A}_i\hat{A}_i^{\dagger} = D^{-1/2}\tilde{A}_iD^{1/2}D^{1/2}\tilde{A}_i^{\dagger}D^{-1/2} = D^{-1/2}\tilde{A}_iD\tilde{A}_i^{\dagger}D^{-1/2}$ and

Substituting eq1 into the above equation and changing the dummy index from $i$ to $j$,

Question

Show that the set  is also a representation of .

Answer

If the set  is also a representation of , its elements must multiply according to the multiplication table of . Since, , we have . The third equality ensures that the closure property of  is satisfied for the set  and hence the set . In other words, the elements of  multiply according to the multiplication table of .

If , then . Since, , the only possibility is that  for . Therefore, the set  has the identity element.

To show that each  has an inverse, we have

Finally, the associativity property of the group is evident, due to the fact that the set consists of matrices, which are associative.

 

Since the set  is also a representation of , we can express eq2 as:

Question

Explain why the 2nd equality in the above equation is valid.

Answer

According to the rearrangement theorem, each summand in  is a unique element of , which is denoted by . Therefore, the 2nd equality in the equation before the Q&A holds.

 

Repeating the steps from eq2 for , we have . Therefore,  is a unitary matrix.

Since ,  and , we have

In other words, every element $A_i$ of a representation of $G$ can undergo a similarity transformation that results in $\hat{A}_i = D^{-1/2}U^{-1}A_iUD^{1/2}$, which is unitary. This result is needed in proving Schur's first lemma and Schur's second lemma.
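A numerical sketch of this construction is shown below, assuming the $C_{3v}$ example of the earlier articles; the deliberately non-unitary change of basis $T$ is an illustrative choice, not something from the original text. It builds $H = \sum_i A_iA_i^{\dagger}$, diagonalises it, and checks that every $\hat{A}_i = D^{-1/2}U^{-1}A_iUD^{1/2}$ is unitary:

```python
import numpy as np

def rot(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def refl(t):
    return np.array([[np.cos(2 * t), np.sin(2 * t)], [np.sin(2 * t), -np.cos(2 * t)]])

# Start from a unitary 2-D representation of C3v and spoil it with a
# non-unitary change of basis T, giving a representation {A_i} that is not unitary:
unitary_rep = [np.eye(2), rot(2 * np.pi / 3), rot(4 * np.pi / 3),
               refl(0.0), refl(np.pi / 3), refl(2 * np.pi / 3)]
T = np.array([[1.0, 1.0], [0.0, 1.0]])
A = [T @ M @ np.linalg.inv(T) for M in unitary_rep]

# Step 1: H = sum_i A_i A_i^dagger is Hermitian and positive definite.
H = sum(Ai @ Ai.conj().T for Ai in A)

# Step 2: diagonalise H with a unitary matrix U, so that D = U^-1 H U is diagonal.
d_eigs, U = np.linalg.eigh(H)
D_half = np.diag(np.sqrt(d_eigs))          # D^{1/2}
D_mhalf = np.diag(1.0 / np.sqrt(d_eigs))   # D^{-1/2}

# Step 3: A_hat_i = D^{-1/2} U^-1 A_i U D^{1/2} is unitary for every element.
A_hat = [D_mhalf @ U.conj().T @ Ai @ U @ D_half for Ai in A]
print(all(np.allclose(Ah @ Ah.conj().T, np.eye(2)) for Ah in A_hat))   # True
```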

Question

Show that the set  is also a representation of .

Answer

If the set  is also a representation of , its elements must multiply according to the multiplication table of . Since , we have

The third equality ensures that the closure property of  is satisfied for the set  and hence the set . In other words, the elements of  multiply according to the multiplication table of .

 

 


Elementary row operation and elementary matrix

An elementary row operation is a linear transformation of a matrix $X$ in which the transformation matrix performs one of the following on $X$: swapping two rows (type 1), multiplying a row by a non-zero constant (type 2), or adding a multiple of one row to another row (type 3).

If $X$ is the identity matrix $I$, the transformed matrix is called an elementary matrix, which is denoted by $E$. In other words, an elementary matrix $E$ is a square matrix that is related to an identity matrix by a single elementary row operation.

For example,

are elementary matrices, where $E_1$, $E_2$ and $E_3$ are obtained from $I$ by

Type 1. Swapping rows 1 and 2 of $I$.
Type 2. Multiplying row 2 of $I$ by 7.
Type 3. Adding 4 times row 2 of $I$ to row 1 of $I$.

respectively.

Interestingly, $E$ itself acts as a transformation matrix, because multiplying any matrix $A$ on the left by $E$ performs the corresponding elementary row operation on $A$.
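For example, here is a minimal numerical sketch using the three operations listed above; the $3\times3$ test matrix is an arbitrary placeholder:

```python
import numpy as np

I = np.eye(3)

# Type 1: swap rows 1 and 2 of I (rows numbered from 1, as in the text).
E1 = I[[1, 0, 2], :]
# Type 2: multiply row 2 of I by 7.
E2 = np.diag([1.0, 7.0, 1.0])
# Type 3: add 4 times row 2 of I to row 1 of I.
E3 = np.eye(3)
E3[0, 1] = 4.0

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

print(E1 @ A)   # rows 1 and 2 of A swapped
print(E2 @ A)   # row 2 of A multiplied by 7
print(E3 @ A)   # 4 times row 2 of A added to row 1 of A

# Each elementary matrix has an inverse that undoes its row operation:
E3_inv = np.eye(3)
E3_inv[0, 1] = -4.0
print(np.allclose(np.linalg.inv(E3), E3_inv))   # True
```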

An elementary matrix $E$ of dimension $n$ has an inverse if $E^{-1}E = EE^{-1} = I$, where the inverse $E^{-1}$ is a matrix that reverses the transformation carried out by $E$. Every elementary matrix has an inverse because

Type 1. Two successive row swapping operations on a matrix $X$ return $X$, i.e. $EEX = X$. Comparing $EEX = X$ with $E^{-1}EX = X$, we have $E^{-1} = E$.

Type 2. It is always possible to satisfy $E^{-1}E = I$ when $E^{-1}$ and $E$ differ by one diagonal matrix element, with $(E)_{ii} = c$, $(E^{-1})_{ii} = 1/c$ and $c \neq 0$.

Type 3. It is always possible to satisfy $E^{-1}E = I$ when $E^{-1}$ and $E$ differ by one off-diagonal matrix element, with $(E)_{ij} = c$ and $(E^{-1})_{ij} = -c$, where $i \neq j$.

Thus, all elementary matrices have corresponding inverses, which are themselves elementary matrices. For example, the inverses of $E_2$ and $E_3$ are

Finally, a non-singular matrix can always be expressed as a product of elementary matrices. The proof is as follows:

Let . Since every elementary matrix is non-singular, we can multiply the inverses of the elementary matrices successively on the left of  to give:

Similarly, we can multiply the inverses of the elementary matrices successively on the right of  to give:

Combining eq3 and eq4, we have , where , which completes the proof.
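A sketch of this factorisation as an algorithm (a Gauss-Jordan style reduction; the function name and the example matrix are illustrative, not from the article): reduce $A$ to the identity while recording each elementary matrix, then recover $A$ as the product of their inverses in reverse order.

```python
import numpy as np

def elementary_factors(A, tol=1e-12):
    """Reduce a non-singular matrix A to the identity with elementary row
    operations and return the list [E_1, E_2, ..., E_k] in the order applied,
    so that E_k ... E_2 E_1 A = I."""
    A = A.astype(float).copy()
    n = A.shape[0]
    recorded = []

    def apply(E):
        nonlocal A
        recorded.append(E)
        A = E @ A

    for j in range(n):
        # Type 1: swap in a non-zero pivot if necessary.
        if abs(A[j, j]) < tol:
            k = j + np.argmax(np.abs(A[j:, j]))
            E = np.eye(n)
            E[[j, k]] = E[[k, j]]
            apply(E)
        # Type 2: scale the pivot row so the pivot becomes 1.
        E = np.eye(n)
        E[j, j] = 1.0 / A[j, j]
        apply(E)
        # Type 3: clear the other entries of column j.
        for i in range(n):
            if i != j and abs(A[i, j]) > tol:
                E = np.eye(n)
                E[i, j] = -A[i, j]
                apply(E)
    return recorded

M = np.array([[0.0, 2.0], [3.0, 1.0]])
factors = elementary_factors(M)

# Since E_k ... E_1 M = I, we have M = E_1^-1 E_2^-1 ... E_k^-1:
product = np.eye(2)
for E in factors:
    product = product @ np.linalg.inv(E)
print(np.allclose(product, M))   # True
```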

 


Determinant of a matrix

The determinant is a number associated with an $n \times n$ matrix $A$. It is defined as

$\det A = \sum_{j=1}^{n} a_{1j}C_{1j}$

where

    1. $a_{1j}$ is an element in the first row of $A$.
    2. $C_{1j} = (-1)^{1+j}M_{1j}$ is the cofactor associated with $a_{1j}$.
    3. $M_{1j}$, the minor of the element $a_{1j}$, is the determinant of the matrix obtained by removing the 1st row and $j$-th column of $A$.

In the case of $\det A = \sum_j a_{1j}C_{1j}$, we say that the summation is a cofactor expansion along row 1. For example, the determinant of a $3 \times 3$ matrix is

For any square matrix, the cofactor expansion along any row and any column results in the same determinant, i.e.

To prove this, we begin with the proof that the cofactor expansion along any row results in the same determinant, i.e. $\det A = \sum_j a_{ij}C_{ij}$ for any row $i$. Consider a matrix $A'$, which is obtained from $A$ by swapping row $i$ consecutively with the rows above it $i-1$ times until it resides in row 1. According to property 8 (see below), we have

According to property 10, $\det A^{T} = \det A$, and therefore the cofactor expansion along any column also results in the same determinant. This concludes the proof.

In short, to calculate the determinant of an $n \times n$ matrix, we can carry out the cofactor expansion along any row or column. If we expand along a row, we have $\det A = \sum_{j} a_{ij}C_{ij}$, where we then select any row $i$ to execute the summation. Conversely, if we expand along a column, we get $\det A = \sum_{i} a_{ij}C_{ij}$ for any column $j$.
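A minimal sketch of the cofactor expansion (the function name and the example matrix are illustrative): expanding along any row gives the same value, matching numpy's determinant.

```python
import numpy as np

def det_cofactor(A, row=0):
    """Determinant by cofactor expansion along a chosen row (0-indexed)."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for col in range(n):
        # Minor: remove the chosen row and the current column.
        minor = np.delete(np.delete(A, row, axis=0), col, axis=1)
        cofactor = (-1) ** (row + col) * det_cofactor(minor)
        total += A[row, col] * cofactor
    return total

A = np.array([[2.0, -1.0, 0.0],
              [1.0,  3.0, 4.0],
              [0.0,  5.0, 1.0]])

print([det_cofactor(A, r) for r in range(3)])   # same value for every row
print(np.linalg.det(A))                          # matches
```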

The following are some useful properties of determinants:

    1. $\det I = 1$, where $I$ is the identity matrix. If one of the diagonal elements of $I$ is replaced by $0$, then the determinant is $0$.
    2. If the elements of one of the columns of $A$ are all zero, $\det A = 0$.
    3. If $B$ is obtained from $A$ by multiplying the $i$-th row of $A$ by $c$, $\det B = c\det A$.
    4. If $B$ is obtained from $A$ by swapping two rows or columns of $A$, then $\det B = -\det A$.
    5. If two rows or two columns of $A$ are the same, $\det A = 0$.
    6. The inverse of a matrix $A$ exists only if $\det A \neq 0$.
    7. If , then .
    8. If , then . If , then .
    9. If $A$ is diagonal, then $\det A$ is the product of its diagonal elements.

 

Proof of property 1

We shall prove this property by induction.

For ,

For ,

Let’s assume that for , . Then for ,

We can repeat the above induction logic to prove that  if one of the diagonal elements of is .

 

Proof of property 2

Again, we shall prove this property by induction.

For ,

For ,

Let’s assume that for , we have . Then for ,

 

Proof of property 3

For $n = 1$, where the single element of $A$ is zero, we have $\det A = 0$. For $n \geq 2$, the definition allows us to sum by row or by column. Suppose we expand along a column; since we are allowed to choose any column $j$ to execute the summation, we can always select the column $j$ whose elements are all zero, so that $\det A = \sum_i a_{ij}C_{ij} = 0$. Therefore, $\det A = 0$ if the elements of one of the columns of $A$ are all zero.

 

Proof of property 4

Let's suppose $B$ is obtained from $A$ by multiplying the $i$-th row of $A$ by $c$. If we expand $\det B$ and $\det A$ along row $i$, each cofactor of $B$ is equal to the corresponding cofactor of $A$. Therefore,

 

Proof of property 5

For a type I elementary matrix,  transforms  by swapping two rows of . So,  due to property 8. Since  is obtained from by swapping two rows of , we have  according to property 1 and property 8, which implies that . Therefore, .

For a type II elementary matrix,  due to property 4 and  because of property 1. So, .

For a type III elementary matrix,

is computed by expanding along row . The equation  means that when  is computed by expanding along row , it has the same cofactor as when  is computed by expanding along row . This implies that . Since the definition of the determinant of  is , which in our case is equivalent to , we have . Thus , which according to property 9, gives:

Since ,

according to eq5 and property 1.

Comparing eq5 and eq6, .

 

Proof of property 6

Case 1

If  is singular, where , then  is also singular according to property 12. So, .

Case 2

If  is non-singular, it can always be expressed as a product of elementary matrices: . So,

Since property 5 states that ,

Similarly, . Substitute this in the above equation, .

 

Proof of property 7

Using property 6 and then property 2,

 

Proof of property 8

We shall prove this property by induction. $n = 2$ is the trivial case, where $n$ is the rank of a square matrix.

For , let and , which is obtained from  by swapping two adjacent rows. Furthermore, let  and . Clearly,

Let’s assume that for , when two adjacent rows are swapped. For , we have:

Case 1: Suppose that the first row of  is not swapped when making .

is the determinant of a rank  matrix, which is the same as  except for two adjacent rows being swapped. Therefore,  and .

Case 2: If the first two rows of  are swapped when making ,

We have  and . The minors and  can be expressed as

where  is  with the first two rows, and the -th and -th columns removed.

Question

Why is each of the minors expressed as two separate sums?

Answer

The minor  is the determinant of a submatrix of with the first row and the -th column of  removed. If , the term with the Kronecker delta disappears and , where  is the determinant of  with the first two rows, and the -th and 1st columns removed. If , one of the columns between the 1st and the last columns of  is removed in forming the submatrix. Therefore, both summations are employed in determining , with the first from  to  and the second from  to . The two summations also apply to the case when . Finally, the same logic can be used to explain the formula for . You can validate the formulae of both minors using the example of:

 

Therefore,

For any pair of values of and , where , the terms in are , which differ from the terms in , i.e. , by a factor of -1. Similarly, for any pair of values of and , where , the terms in  are , which again differ from the terms in , i.e. , by a factor of -1. Since all terms in differ from all corresponding terms in  by a factor of -1, .

In general, the swapping of any two rows and of , where , is equivalent to the swapping of  adjacent rows of , with each swap changing  by a factor of -1. Therefore,

 

Question

How do we get ?

Answer

Firstly, we swap row  consecutively with each row below it until row is swapped, resulting in swaps. Then, swap the previous row , which now resides in what was row , consecutively with each row above it until it becomes what used to be row , resulting in swaps. These two actions combined are equivalent to the swapping of  with , with a total of  swaps of adjacent rows. The diagram below illustrates an example of the swaps:

 

Finally, the swapping of any two columns is proven in a similar way.

 

Proof of property 9

Consider the swapping of two equal rows of  to form , resulting in  and . However, property 8 states that  if any two rows of  are swapped. Therefore,  if two rows of  are equal. The same logic applies to proving  if there are two equal columns of .

 

Proof of property 10

Case 1:

If , then according to property 13. So, .

Case 2:

Let’s first consider elementary matrices . A type I elementary matrix is symmetrical about its diagonal, while a type II elementary matrix has one diagonal element equal to . Therefore,  and thus for type I or II elementary matrices. A type III elementary matrix is an identity matrix with one of the non-diagonal elements replaced by a constant . Therefore, if  is a type III elementary matrix, then  is also one. According to eq6,  for a type III elementary matrix. Hence,  for all elementary matrices.

Next, consider an invertible matrix , which (as proven in the previous article) can be expressed as . Thus,  (see Q&A in the proof of property 13). According to property 5,

and

Therefore, .

 

Proof of property 11

We have , or in terms of matrix components:

Consider the matrix  that is obtained from the matrix  by replacing the -th column of  with the -th column, i.e.  for and . According to property 9,  because  has two equal columns. Furthermore, cofactor is equal to cofactor  for . Therefore,

When , the last summation in eq8 becomes

Combining eq8 and eq9, we have , which when substituted in eq7 gives:

Therefore, $A^{-1} = \frac{1}{\det A}\,\mathrm{adj}\,A$, which implies that the inverse of a matrix is undefined if $\det A = 0$. In other words, the inverse of a matrix $A$ is undefined if $\det A = 0$. We call such a matrix a singular matrix, and a matrix with an associated inverse a non-singular matrix.

 

Proof of property 12

We shall prove by contradiction. According to property 11, $A$ has no inverse if $\det A = 0$. If $A$ has no inverse and $AB$ has an inverse $C$, then $ABC = I$. This implies that $A$ has an inverse $BC$, where $A(BC) = I$, which contradicts the initial assumption that $A$ has no inverse. Therefore, if $A$ has no inverse, then $AB$ must also have no inverse.

 

Proof of property 13

Question

Show that .

Answer

because

because

which can be extended to .

 

Using the identity in the above Q&A, . If is invertible, then . This implies that  is the inverse of and therefore that is invertible if is invertible.

The last part shall be proven by contradiction. Suppose $A$ is singular and $AB$ is non-singular; then there would be a matrix $C$ such that $(AB)C = I$. Furthermore, $A(BC) = I$, which implies that $A$ is non-singular. This contradicts our initial assumption that $A$ is singular. Therefore, if $A$ is singular, $AB$ must also be singular.

 

Proof of property 14

We shall prove this property by induction. For $n = 1$,

Let’s assume that for . Then for , the cofactor expansion along the first row is

 


Schur’s first lemma

Schur’s first lemma states that a non-zero matrix that commutes with all matrices of an irreducible representation of a group is a multiple of the identity matrix.
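A quick numerical illustration of the lemma, assuming the 2-dimensional irreducible representation of $C_{3v}$ as the working example; the group-averaging trick used below to manufacture a commuting matrix is a standard device and not a construction from this article:

```python
import numpy as np

def rot(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def refl(t):
    return np.array([[np.cos(2 * t), np.sin(2 * t)], [np.sin(2 * t), -np.cos(2 * t)]])

# The 2-D irreducible representation of C3v (assumed running example):
rep = [np.eye(2), rot(2 * np.pi / 3), rot(4 * np.pi / 3),
       refl(0.0), refl(np.pi / 3), refl(2 * np.pi / 3)]

# Group-averaging any matrix M produces a matrix S that commutes with every
# representation matrix; by Schur's first lemma S must be a multiple of I.
rng = np.random.default_rng(0)
M = rng.normal(size=(2, 2))
S = sum(D @ M @ np.linalg.inv(D) for D in rep) / len(rep)

print(all(np.allclose(S @ D, D @ S) for D in rep))   # True: S commutes with the whole irrep
print(np.round(S, 6))                                # numerically a multiple of the identity
```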

The proof of Schur’s first lemma involves the following steps:

    1. Consider a representation of a group $G$, where each element of the representation is an $n \times n$ matrix, which can be regarded as a unitary matrix $\hat{A}_i$ according to a previous article.
    2. Prove that a Hermitian matrix that commutes with the irreducible representation elements $\hat{A}_i$, where $i = 1, \ldots, h$, is a constant multiple of the identity matrix.
    3. Infer from step 2 that any arbitrary non-zero matrix that commutes with the irreducible representation elements $\hat{A}_i$ is a multiple of the identity matrix.

Step 1 is self-explanatory. For step 2, we begin with a Hermitian matrix $H$ that commutes with $\hat{A}_i$:

Multiplying the above equation on the left and right by  and  respectively,

Since

or

where .

Question

Show that is also a representation of .

Answer

If  is also a representation of , its elements must multiply according to the multiplication table of . Since , we have

The third equality ensures that the closure property of  is satisfied for  and hence . In other words, the elements of  multiply according to the multiplication table of .

 

As a Hermitian matrix can undergo a similarity transformation by a unitary matrix to give another Hermitian matrix  which is diagonal, i.e. , we have

Rewriting  in terms of its matrix elements, we have  or , which can be rearranged to

Consider the following cases for the above equation:

Case 1: All diagonal elements of are distinct, i.e.  if .

We have  for , which means that all off-diagonal elements of are zero. In other words, is an element of a reducible representation that is a direct sum of elements of one-dimensional matrix representations. Furthermore, the definition of a reducible representation implies that  is also an element of a reducible representation of  because .

Case 2: All diagonal elements of  are equal, i.e. .

can be any finite number, and consequently  may be either an element of a reducible or an irreducible representation. However, the diagonal matrix  must be a multiple of the identity matrix if .

Case 3: Some but not all diagonal elements of  are equal.

Instead of considering all possible permutations of equal and unequal diagonal entries in , we rearrange the columns of  such that equal diagonal entries of  are in adjacent columns of . This is always possible as the order of the columns of  corresponds to the order of the diagonal entries in  (see this article). Let’s suppose the first  diagonal entries are the same, while the rest are distinct, i.e. . With reference to Case 1 and Case 2,  must be an element of a reducible representation with the block diagonal form:

For example, if  in the following  matrix,

then  can be any finite number, while all other off-diagonal elements are zero.

Combining all three cases, if  is an irreducible representation, the diagonal matrix  must be a multiple of the identity matrix. Since , where  is Hermitian, we have proven step 2.

For the last step, let’s consider an arbitrary non-zero matrix  that commutes with :

Since  is unitary,  and so , which when multiplied from the left and right by  gives . This implies that if commutes with , then also commutes with .

Question

i) Show that if $M$ and $M^{\dagger}$ commute with $\hat{A}_i$, then any linear combination of $M$ and $M^{\dagger}$ also commutes with $\hat{A}_i$.
ii) Show that the linear combinations $H_1 = M + M^{\dagger}$ and $H_2 = i(M - M^{\dagger})$ are Hermitian.
iii) Show that $M = \frac{1}{2}(H_1 - iH_2)$.

Answer

i)

ii)



iii) Substituting $H_1 = M + M^{\dagger}$ and $H_2 = i(M - M^{\dagger})$ into $\frac{1}{2}(H_1 - iH_2)$, we get $M$.

 

With reference to step 2, $H_1$ and $H_2$ must each be a constant multiple of the identity matrix, and so must any linear combination of them. Therefore, $M = \frac{1}{2}(H_1 - iH_2)$ is also a constant multiple of the identity matrix. This concludes the proof of Schur's first lemma, which, together with Schur's second lemma, is used to prove the great orthogonality theorem.

 


Schur’s second lemma

Schur’s second lemma describes the restrictions on a matrix that commutes with elements of two distinct irreducible representations, which may have different dimensions.

Consider an arbitrary matrix $M$ and two irreducible representations of a group, $\Gamma_1$ of dimension $l_1$ and $\Gamma_2$ of dimension $l_2$, such that

where .

Taking the conjugate transpose of eq10 and using the matrix identity , we have . As every element of a representation of a group can be expressed as a unitary matrix via a similarity transformation without any loss of generality, and as ,

Since the inverse property of a group states that  and , we can express eq10 as

Multiplying eq11 on the left by  and using eq12, we have  and therefore , which implies that  commutes with all elements of an irreducible representation of . With reference to Schur’s first lemma,

where  is a constant and  is the identity matrix.

If we multiply eq11 on the right by  and repeat the steps above, we have

Let’s consider the following cases for eq13:

Case 1:

Let the -th entry of  be . If , we can rewrite eq13 in terms of matrix entries: . If , we have , which implies that  is the zero matrix because  for all .

Combining eq13 and eq14, we have  or . This implies that  exists if . We can therefore rewrite eq10 as , which is a similarity transformation if .

Case 2:

If , the arbitrary matrix (denoted by ) is an  matrix with reference to eq10. Suppose ; we have:

If we enlarge  to form an  matrix  with the additional elements equal to zero, we have

Due to the zeroes,  . Taking the determinants,

Using the determinant identities ,  and  , we have

Since one of the columns of  is zero, . So, , which implies that  must be a zero matrix according to the results of case 1.

Finally, we can summarise Schur’s second lemma as follows:

Given an arbitrary matrix $M$ and two irreducible representations, $\Gamma_1$ of dimension $l_1$ and $\Gamma_2$ of dimension $l_2$, where eq10 holds for every element of the group, then

    1. if $l_1 = l_2$, either $M = 0$ or the representations are related by a similarity transformation, i.e. they are equivalent representations.
    2. If $l_1 \neq l_2$, $M = 0$.
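The following sketch illustrates the second case numerically, assuming two inequivalent irreducible representations of $C_{3v}$ (a one-dimensional one and the two-dimensional one); the group-averaging construction of $M$ is an illustrative device, not part of the original proof:

```python
import numpy as np

def rot(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def refl(t):
    return np.array([[np.cos(2 * t), np.sin(2 * t)], [np.sin(2 * t), -np.cos(2 * t)]])

# Two inequivalent irreducible representations of C3v (assumed example):
# a 1-D irrep (+1 for E and rotations, -1 for reflections) and the 2-D irrep,
# listed in the same element order.
rep1 = [np.array([[1.0]])] * 3 + [np.array([[-1.0]])] * 3
rep2 = [np.eye(2), rot(2 * np.pi / 3), rot(4 * np.pi / 3),
        refl(0.0), refl(np.pi / 3), refl(2 * np.pi / 3)]

# Group-averaging any 1x2 matrix M0 yields an M satisfying rep1(R) M = M rep2(R)
# for every R; Schur's second lemma then forces M to be the zero matrix.
rng = np.random.default_rng(1)
M0 = rng.normal(size=(1, 2))
M = sum(D1 @ M0 @ np.linalg.inv(D2) for D1, D2 in zip(rep1, rep2)) / 6

print(all(np.allclose(D1 @ M, M @ D2) for D1, D2 in zip(rep1, rep2)))   # True
print(np.round(M, 12))                                                  # the zero matrix
```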

 


Great orthogonality theorem

The great orthogonality theorem establishes the orthogonal relation between entries of matrices of irreducible representations of a group. Mathematically, it is expressed as

where

    1. $[\Gamma_i(R)]_{mn}$ refers to the matrix entry in the $m$-th row and $n$-th column of the matrix of the $i$-th irreducible representation that corresponds to the group element $R$.
    2. $h$ is the order of the group $G$.
    3. $l_i$ is the dimension of the $i$-th irreducible representation.
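A brute-force numerical check of the theorem, assuming the three irreducible representations of $C_{3v}$ as the running example; the dictionary labels G1, G2 and G3 are illustrative:

```python
import numpy as np
from itertools import product

def rot(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def refl(t):
    return np.array([[np.cos(2 * t), np.sin(2 * t)], [np.sin(2 * t), -np.cos(2 * t)]])

# The three irreducible representations of C3v (assumed labels), listed in the
# same element order: E, C3, C3^2, then the three reflections.
irreps = {
    "G1": [np.array([[1.0]])] * 6,
    "G2": [np.array([[1.0]])] * 3 + [np.array([[-1.0]])] * 3,
    "G3": [np.eye(2), rot(2 * np.pi / 3), rot(4 * np.pi / 3),
           refl(0.0), refl(np.pi / 3), refl(2 * np.pi / 3)],
}
h = 6  # order of the group

def lhs(i, j, m, n, mp, nq):
    """Sum over all group elements of [Gamma_i(R)]*_{mn} [Gamma_j(R)]_{mp,nq}."""
    return sum(irreps[i][k][m, n].conjugate() * irreps[j][k][mp, nq] for k in range(h))

ok = True
for i, j in product(irreps, irreps):
    li, lj = irreps[i][0].shape[0], irreps[j][0].shape[0]
    for m, n, mp, nq in product(range(li), range(li), range(lj), range(lj)):
        expected = h / li if (i == j and m == mp and n == nq) else 0.0
        ok = ok and np.isclose(lhs(i, j, m, n, mp, nq), expected)
print(ok)   # True: the orthogonality relation holds for every combination of entries
```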

The proof of eq14 involves analysing two cases and then combining the results. Consider the matrix

where $\Gamma_1(R)$ is an $l_1 \times l_1$ matrix associated with representation $\Gamma_1$, $\Gamma_2(R)$ is an $l_2 \times l_2$ matrix associated with representation $\Gamma_2$, and $X$ is an arbitrary matrix with $l_1$ rows and $l_2$ columns.

Multiplying eq15 on the left by some matrix  associated with representation ,

Using the matrix identity ,

 

Question

Prove the matrix identity $(AB)^{-1} = B^{-1}A^{-1}$.

Answer

$(AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = I$ and so $(AB)^{-1} = B^{-1}A^{-1}$.

 

According to the closure property of a group,  and  and thus

Case 1: .

Eq15 and eq16 become  and  respectively (we have changed the dummy index from  to  in eq16). As every element of a representation can undergo a similarity transformation to a unitary matrix, we shall assume  and its inverse are unitary matrices. According to Schur's first lemma, eq16 implies that  and eq15 becomes

is a similarity transformation, where  is similar to some other arbitrary matrix. Since the traces of similar matrices are the same,

where  is the identity matrix’s dimension, which is equal to the dimension of the matrix .

Substituting eq18 into eq17, we have , or in terms of matrix entries,

The RHS of the above equation is a finite summation of the product of three scalars and their order can be changed. So,

With ,

Since  is an arbitrary matrix, the above equation must hold for any choice of its entries. This is only possible if

Since  is unitary,

Case 2: and  is not equivalent to .

According to Schur’s second lemma, eq15 becomes , or in terms of matrix entries,

Since  is an arbitrary matrix, the above equation must hold for any choice of its entries. This is only possible if

Since  is unitary,

Combining eq19 and eq20, we have the expression for the great orthogonality theorem:

which can also be expressed as

because the RHS vanishes if , which renders the subscript  unnecessary for  .

Question

What about the case where  and  is equivalent to ?

Answer

In this case,  can undergo a similarity transformation to become , which in turn can undergo a similarity transformation to become elements of a unitary representation. This implies that both  and  can be expressed as the same unitary representation because if  is similar to  and  is similar to , then  is similar to . In other words, we have  (where is  unitary), which is case 1.

 

Let’s rewrite eq20b as

Eq21 has the form of the inner product of two vectors, where  and  are  components of vectors  and  respectively in a -dimensional vector space. We can regard the components of the vectors as a function of three indices ,  and  such that the two vectors are orthogonal to each other when . This orthogonal relation of matrix entries is why eq21 is called the great orthogonality theorem.

 

Question

Verify eq21 for

i) ,
ii)  with ,  and
iii) , with .

Answer

i)
ii)
iii)

 


Little orthogonality theorem

The little orthogonality theorem consists of two relations that are reduced forms of the mathematical expression for the great orthogonality theorem.

1st little orthogonality relation

The 1st relation is derived from eq21 by letting  and , resulting in

Since , we have  and

Question

Show that .

Answer

 

Since the traces of matrices of the same class are the same, eq22 is equivalent to

where

    1. $c$ is the number of classes.
    2. $g_k$ is the number of elements in the $k$-th class.
    3. $\chi_i(k)$, called the character of a class, is the trace of a matrix belonging to the $k$-th class of the $i$-th irreducible representation.

Eq23 is known as the 1st little orthogonality relation.
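A direct check of eq23 using the character table of $C_{3v}$ (an assumed example; the class sizes and characters below are the standard ones for that group, with the same labels as the earlier articles):

```python
import numpy as np

# Characters of the irreducible representations of C3v, listed per class;
# the classes are {E}, {2 C3}, {3 sigma_v} with sizes g_k = 1, 2, 3.
g = np.array([1, 2, 3])
chi = {
    "G1": np.array([1.0,  1.0,  1.0]),
    "G2": np.array([1.0,  1.0, -1.0]),
    "G3": np.array([2.0, -1.0,  0.0]),
}
h = g.sum()   # order of the group = 6

# 1st little orthogonality relation: sum_k g_k chi_i(k)* chi_j(k) = h * delta_ij
for i in chi:
    for j in chi:
        total = np.sum(g * np.conj(chi[i]) * chi[j])
        print(i, j, total)   # 6 when i == j, 0 otherwise
```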

Question

Determine whether the representations $\Gamma_3$ and $\Gamma_4$ of the $C_{3v}$ point group are reducible or irreducible.

Answer

Since the 1st little orthogonality relation is derived from the great orthogonality theorem, which pertains only to irreducible representations, characters of a representation that do not satisfy the relation belong to a reducible representation. Using eq23,

$\sum_k g_k\lvert\chi_3(k)\rvert^2 = (1)(2)^2 + (2)(-1)^2 + (3)(0)^2 = 6 = h$

$\sum_k g_k\lvert\chi_4(k)\rvert^2 = (1)(3)^2 + (2)(0)^2 + (3)(1)^2 = 12 \neq h$

Therefore, $\Gamma_3$ is an irreducible representation of the $C_{3v}$ point group, while $\Gamma_4$ is a reducible representation of the $C_{3v}$ point group.

 

Let’s rewrite eq23 as

Eq24 has the form of the inner product of two vectors  and  in a -dimensional vector space, with components  and respectively. The components are functions of  and the two vectors are orthogonal to each other when . A -dimensional vector space is spanned by  orthogonal vectors, each with  components. Since the number of vectors and the number of components of each vector are denoted by  and  respectively, the number of irreducible representations of a group is equal to the number of classes of that group. We say that the irreducible representations of a group form a complete set of basis vectors in the -dimensional vector space, with the components of each basis vector being .

2nd little orthogonality relation

Consider the matrices  and  with entries  and respectively.

Both are square matrices because the number of irreducible representations of a group is equal to the number of classes of that group. Comparing with eq24, the entries of the matrix product are given by:

Since  and  are square matrices and , then , i.e.

Eq24b is the 2nd little orthogonality relation.
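And a check of eq24b on the same (assumed) $C_{3v}$ character table, now summing down the columns:

```python
import numpy as np

g = np.array([1, 2, 3])                  # class sizes for {E}, {2 C3}, {3 sigma_v}
table = np.array([[1.0,  1.0,  1.0],     # rows: the three irreducible representations
                  [1.0,  1.0, -1.0],
                  [2.0, -1.0,  0.0]])
h = g.sum()

# 2nd little orthogonality relation: sum_i chi_i(k)* chi_i(k') = (h / g_k) * delta_kk'
for k in range(3):
    for kp in range(3):
        total = np.sum(np.conj(table[:, k]) * table[:, kp])
        print(k, kp, total, (h / g[k]) if k == kp else 0.0)
```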

Question

Show that if  and  are square matrices and , then .

Answer

Using the determinant identities $\det(AB) = \det(A)\det(B)$ and $\det I = 1$, we have $\det(A)\det(B) = 1$, which implies that $\det A$ and $\det B$ are not zero and that $A$ and $B$ are therefore non-singular. So,

 
