Tight-binding model

The tight-binding model incorporates the Hückel approximation to describe the electronic structure of solids by treating electrons as localised around atoms and hopping between neighbouring sites.

Consider a linear chain of N identical atoms, with |n⟩ representing the s-orbital of the atom at position n. According to the Hückel approximation, the Hamiltonian Ĥ is defined by two parameters:

1) α = ⟨n|Ĥ|n⟩ is the energy of an electron localised on a single atom.

2) β = ⟨n|Ĥ|n ± 1⟩ is the energy associated with an electron hopping between adjacent atoms.

3) All other matrix elements are zero.

To find the eigenvalues and eigenstates satisfying the Schrödinger equation Ĥ|ψ⟩ = E|ψ⟩, we adopt the linear combination of atomic orbitals (LCAO) approach, in which

|ψ⟩ = Σₙ cₙ|n⟩ (eq1)

Multiplying the Schrödinger equation on the left by the bra gives:

Substituting eq1 into eq2, and noting that the states |n⟩ are orthonormal, yields:

Expanding the first summation in eq3 and applying the Hückel approximation results in:

Substituting the trial solution into eq4, and imposing the boundary conditions of , gives:

This rearranges to:

Using the trigonometric identity , with  and yields:

which simplifies to:

Since , we have . So, , which when substituted into eq5 gives:

where .

Although k is an integer due to the boundary conditions, its specific range in eq6 follows from the fact that each eigenvalue is associated with one of N linearly independent eigenstates described by eq1.

 

Question

Show that k = 0 and k = N + 1 are trivial solutions.

Answer

If k = 0, then sin[nkπ/(N + 1)] = 0 and cₙ = 0 everywhere, which is a trivial solution. If k = N + 1, then sin(nπ) = 0 and cₙ is again zero everywhere. Therefore, the non-trivial solutions correspond to k = 1, 2, …, N.

 

An electron localised on an atom resides in a bound state with energy α measured relative to the vacuum level (E = 0). Therefore, the conventional value of α is negative. For s-orbitals, β is also negative because it corresponds to an integral of the form (positive wavefunction) × (negative potential) × (positive wavefunction). It follows that the lower-energy states occur when k is small in eq6 (so that the cosine term is close to +1 for large N). In other words, k = 1 and k = N represent the lowest and highest energy states of the system respectively (see diagram below).
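As a numerical cross-check of this discussion, the sketch below (Python, with illustrative values α = −10 eV and β = −1 eV that are not taken from the article) diagonalises the finite-chain Hückel Hamiltonian and compares the result with the assumed fixed-end expression Eₖ = α + 2β cos[kπ/(N + 1)]. With β negative, the lowest state sits near α + 2β and the highest near α − 2β.

```python
import numpy as np

# Illustrative Hueckel parameters in eV (both negative, as discussed above).
alpha, beta = -10.0, -1.0
N = 8  # number of atoms in the chain

# Tridiagonal tight-binding Hamiltonian: alpha on the diagonal,
# beta between nearest neighbours, zero elsewhere.
H = np.diag([alpha] * N) + np.diag([beta] * (N - 1), 1) + np.diag([beta] * (N - 1), -1)
numeric = np.linalg.eigvalsh(H)  # eigenvalues in ascending order

# Assumed analytic result for a finite chain with fixed ends:
# E_k = alpha + 2*beta*cos(k*pi/(N + 1)), k = 1, ..., N
k = np.arange(1, N + 1)
analytic = np.sort(alpha + 2 * beta * np.cos(k * np.pi / (N + 1)))

print(np.allclose(numeric, analytic))   # True
print(analytic[0], analytic[-1])        # lowest ~ alpha + 2*beta, highest ~ alpha - 2*beta
```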

One important application of the tight-binding model is its role in explaining band theory, which will be explored in the next article.

 

Next article: Band theory
Content page of solid-state chemistry
Content page of advanced chemistry
Main content page

Band theory

Band theory describes how atomic orbitals in a solid combine to form extended molecular orbitals, whose closely spaced energies effectively create continuous bands.

Consider a linear chain of N identical atoms, with |n⟩ representing the s-orbital of the atom at position n. When two atoms overlap, molecular orbital (MO) theory states that two MOs are formed: a bonding orbital and an antibonding orbital. For three atoms, three MOs are produced — bonding, non-bonding and antibonding — while the orbital overlap of four atoms results in four MOs: two bonding and two antibonding orbitals. In general, a chain of N atoms generates N molecular orbitals. If N is large, the energy levels of these MOs merge to form an energy band (see diagram above).

Mathematically, the energies are derived using the tight-binding model:

Eₖ = α + 2β cos[kπ/(N + 1)]

where k = 1, 2, …, N, α is the energy of an electron localised on a single atom, and β is the hopping energy between adjacent atoms.

To show that the energies merge to form an energy band, we analyse the energy separation:

Using the trigonometric identity ,

As N → ∞, the energy separation in eq7 approaches zero, so the discrete levels merge into a continuous band.
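To make this limit concrete, the short sketch below evaluates the assumed finite-chain energies Eₖ = α + 2β cos[kπ/(N + 1)] for increasing N (with the same illustrative α and β as before) and prints the largest gap between adjacent levels: the band width saturates at about 4|β| while the level spacing collapses towards zero.

```python
import numpy as np

alpha, beta = -10.0, -1.0  # illustrative parameters in eV

for N in (2, 10, 100, 1000, 10000):
    k = np.arange(1, N + 1)
    E = alpha + 2 * beta * np.cos(k * np.pi / (N + 1))
    spacing = np.abs(np.diff(np.sort(E)))
    print(f"N = {N:>5}: band width = {E.max() - E.min():.4f} eV, "
          f"largest gap between adjacent levels = {spacing.max():.6f} eV")
```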

For example, the 1s-orbitals of Na, which has the electronic configuration 1s²2s²2p⁶3s¹, overlap to form the 1s band. Similarly, the 2s orbitals form the 2s band, and so on (see diagram above). The various Na bands do not overlap because the energy differences between different Na atomic orbitals (AOs) are much greater than the separation within each type of AO. Another characteristic of the band-MO diagram of Na is that the 3s band is only partially filled. In contrast, the 3s and 3p bands of Mg overlap to form a continuous band. Nevertheless, this merged band is still only partially filled because each 3p AO is initially unoccupied.

In metals, the Fermi energy — the energy of the highest occupied electronic state at absolute zero — lies within a partially filled band. However, it is more meaningful to associate frontier energies with the Fermi level , defined as the energy level at which the probability of finding an electron is 50% at thermodynamic equilibrium according to the Fermi-Dirac distribution, for any temperature above 0 K.

Electronic states of metals near the Fermi energy at absolute zero arise from the overlap of many AOs and are extremely closely spaced in energy. Because the density of states varies only weakly in this region and the gaps between occupied and unoccupied states are extremely small, increasing the temperature from 0 K to room temperature excites only a tiny fraction of electrons into slightly higher-energy levels. This redistribution is so small that the energy at which the occupation probability is 50% at room temperature remains essentially the same as the Fermi energy at 0 K. Consequently, electrons can easily move into the empty states just above the Fermi level when an electric field is applied, giving metals their high electrical conductivity. This conductivity decreases at higher temperatures due to increased collisions between moving electrons and the vibrating lattice atoms (phonons), which disrupt the flow of charge.

On the other hand, solid Ne is a non-conductor. Each Ne atom has the electronic configuration 1s²2s²2p⁶, with the 2p band completely filled in the solid. The next available band (derived from 3s orbitals) lies far higher in energy, creating a large band gap. With no empty states near the top of the filled band, electrons cannot be thermally promoted into any empty states, and the material remains insulating. In such insulators, the Fermi level lies within the band gap.

The main types of semiconductors are intrinsic (undoped), p-type and n-type (see diagram above). They also possess a band gap between the valence band and the conduction band. For an intrinsic semiconductor, such as GaAs, the valence band is completely filled at absolute zero, while the conduction band is completely empty. Therefore, the Fermi level lies near the middle of the band gap. Although the gap (1.42 eV) is much larger than the thermal energy at room temperature (0.026 eV), it is still small enough that a small but statistically significant number of electrons can be thermally excited across it. Once promoted into the conduction band, these electrons can be driven by an applied electric field to produce a current. Unlike metals, the conductivity of semiconductors increases with temperature because the number of thermally generated charge carriers grows exponentially, easily outweighing the increased scattering from lattice vibrations.
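As a rough illustration of this point, the sketch below evaluates the simple intrinsic-carrier factor exp[−E_g/(2k_BT)] for the GaAs gap quoted above at a few temperatures. This factor ignores the density-of-states prefactor and is an assumption of the simplest intrinsic-semiconductor estimate, not a result from the article; it nonetheless shows the steep growth with temperature.

```python
import numpy as np

k_B = 8.617e-5   # Boltzmann constant in eV/K
E_g = 1.42       # band gap of GaAs in eV

for T in (250, 300, 350, 400):
    factor = np.exp(-E_g / (2 * k_B * T))
    print(f"T = {T} K: exp(-Eg/2kT) = {factor:.2e}")
# The factor rises steeply with T, so the number of thermally generated carriers
# multiplies far faster than phonon scattering increases.
```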

When dopants are introduced into intrinsic semiconductors, they create additional electronic states within the band gap. In a p-type semiconductor, acceptor impurities introduce energy levels slightly above the valence band. At 0 K, these acceptor levels are empty, while the valence band remains fully occupied. It follows that the Fermi level lies between the valence band and the acceptor levels. Electrical conductivity arises when electrons are thermally promoted from the valence band into the acceptor levels, leaving behind mobile holes in the valence band. These holes serve as the majority carriers responsible for current flow.

n-type semiconductors contain donor levels positioned just below the conduction band. At 0 K, these donor levels are fully occupied by electrons supplied by the dopant atoms, with the Fermi level lying between the donor levels and the conduction band. When electrons are promoted from the donor levels into the conduction band, they become free to move, producing a current.

In conclusion, band theory provides a molecular-orbital perspective on how electronic structure governs conductivity across different materials and forms the foundation for devices such as transistors and solar cells.

 

Next article: Superconductor
Previous article: Tight-binding model
Content page of solid-state chemistry
Content page of advanced chemistry
Main content page

Superconductor

A superconductor is a material that, below a certain critical temperature, conducts electricity with zero electrical resistance and expels magnetic fields.

The first superconductor was discovered in 1911, when Dutch physicist Heike Kamerlingh Onnes observed that mercury’s electrical resistance suddenly vanished at about 4.2 K. In the years that followed, additional elements were identified as superconductors, with critical temperatures below 23 K. These superconducting elements are mostly transition metals that are bunched into certain parts of the periodic table (elements highlighted in yellow in the above diagram). In contrast, the noble metals (such as copper, silver and gold) and the alkali metals generally do not become superconducting at ambient pressure.

Metals normally resist the flow of electricity. Electrons, the carriers of current, constantly scatter off vibrating atoms (phonons), impurities and even each other. Yet, below a certain temperature, some metals suddenly lose all electrical resistance.

How can electrons stop scattering entirely? The answer lies in a profound insight developed by John Bardeen, Leon Cooper and Robert Schrieffer, known as the BCS theory.

According to this theory, electrons stop acting like isolated particles. Instead, they form highly coordinated pairs that move in perfect harmony, allowing current to flow without energy loss. At first glance, the idea that electrons can pair seems impossible. After all, they repel each other electrically. But the key insight is that electrons can interact indirectly through the crystal lattice. When an electron moves through a lattice, it slightly attracts the positively charged ions nearby, creating a tiny distortion. This distortion, in turn, can attract a second electron (see diagram below). This effective attraction is extremely weak, only acts for electrons near the Fermi surface, and operates within a narrow energy range. Such a phonon-mediated interaction is generally isotropic (the same in all directions) in momentum space, or at least does not have a strong directional dependence.

In 1956, Leon Cooper showed that if even the tiniest attraction exists between two electrons just above the Fermi level, they will inevitably form a bound pair — a Cooper pair. Essentially, the enormous density of available electronic states near the Fermi surface acts like a magnifying glass for even the tiniest attractions. Because so many states are packed closely together, two electrons with a weak attractive interaction have an overwhelming number of ways to pair up. This abundance of possibilities amplifies the effect of the weak attraction, making it sufficient to bind electrons into a Cooper pair. Even interactions that would be negligible elsewhere in the energy spectrum become decisive near the Fermi surface, ensuring that pairing always occurs when conditions are right.

A Cooper pair isn’t two electrons clinging together like magnets. It is a quantum superposition extending over thousands of lattice spacings, characterised by:

    • A total spin of 0 (singlet)
    • A total momentum of 0 (electrons with momenta k and −k)
    • A highly delocalised wavefunction that strongly overlaps with others

The wavefunction of the Cooper pair is given by:

|ψ⟩ = Σ_k g(k) c†(k↑) c†(−k↓) |0⟩

where

g(k) is a function describing the amplitude of each momentum component of the pair
c†(k↑) is an operator that creates an electron with momentum k and spin up
c†(−k↓) is an operator that creates an electron with momentum −k and spin down
|0⟩ is the vacuum state where the system has zero particles

 

Question

Why does the Cooper pair have zero momentum?

Answer

In conventional superconductors, a Cooper pair with a total linear momentum of zero minimises kinetic energy, making this configuration energetically favourable. As the phonon-mediated interaction is approximately isotropic in momentum space, the Cooper pair’s spatial wavefunction is spherical with zero angular momentum (l = 0).

Electrons are fermions and the total wavefunction must be antisymmetric under exchange of the two electrons. The total wavefunction of the pair is the product of a spatial part, which is symmetric (l = 0), and a spin part, which must then be antisymmetric, forming a singlet.

 

Because the wavefunctions overlap so extensively, all Cooper pairs in the metal lock into a single quantum state. This creates a kind of “quantum super-traffic-jam,” where every pair is aware of every other pair through a shared phase. When billions of Cooper pairs occupy the same quantum state, they form a macroscopic quantum condensate, a unified wavefunction stretching across the entire metal. This condensate is rigid and tiny disturbances cannot knock individual pairs out of step.

How then is the material able to conduct electricity without resistance? Resistance arises when electrons scatter off imperfections or vibrations. Cooper pairs, however:

    • Are spread out over huge distances
    • Share a common quantum phase
    • Cannot scatter individually

For a Cooper pair to scatter, all pairs must do so simultaneously, which requires a prohibitive amount of energy. Small impurities or low-energy phonons simply cannot disrupt this concerted motion. Once the condensate begins moving, it keeps moving forever unless something strong enough breaks the pairs. Furthermore, breaking a Cooper pair costs energy, creating the superconducting energy gap. This gap shields the condensate from thermal fluctuations, explaining why superconductors exhibit zero electrical resistance and persistent currents at low temperatures where phonon-mediated interactions are not disrupted by thermal motion of lattice ions. In other words, electrons in a superconductor below its critical temperature move together forever, without resistance.

This remarkable ability to carry persistent currents without energy loss is directly exploited in technologies that require stable, intense magnetic fields. For example, in magnetic resonance imaging (MRI) machines, niobium-titanium coils are cooled below their critical temperature (10 K) with liquid helium to form persistent currents. These currents generate strong and extremely stable magnetic fields, which are essential for producing high-resolution images of internal tissues. Because the currents circulate indefinitely without decay, MRI magnets do not require continuous power input to maintain their fields, making them highly efficient and reliable. Beyond medical imaging, the same principle is exploited in quantum computing, where persistent, lossless currents are critical.

The second defining property of a superconductor is its ability to expel magnetic fields from its interior, such that B = 0 inside the bulk. This phenomenon is called the Meissner effect, and it occurs even if an external magnetic field was already present before the material was cooled.

In a normal conductor, magnetic fields penetrate the material almost uniformly (see diagram above), limited only by weak and short-lived effects such as induced eddy currents. If the external magnetic field changes with time, Faraday’s law generates eddy currents in the conductor, but these currents quickly decay because the material has finite electrical resistance. As they die out, they no longer oppose the applied field, allowing the magnetic field to fully penetrate the bulk.

In contrast, when a superconductor cools below its critical temperature, Cooper pairs form and create a coherent superconducting condensate. In the presence of an external magnetic field, the condensate develops spatial variations that raise its kinetic energy, which in turn induces circulating screening currents along the surface. These currents generate magnetic fields that exactly oppose the applied field inside the material. Because the resistance is zero, the screening currents persist indefinitely without any power source, maintaining complete magnetic field expulsion.

Consequently, when a magnet is brought near a superconducting material, the induced screening currents produce a magnetic field that repels the magnet. This repulsive force can counteract gravity, allowing the magnet to float or levitate above the superconductor. Conversely, if the superconductor is placed above a magnet, it can also hover, held in place by the interaction between its induced currents and the magnetic field.

This principle of magnetic levitation can be harnessed to create frictionless transportation. In 1986, the first high-temperature superconductors were discovered; critical temperatures in this family now reach up to 138 K. These materials, typically containing copper oxide and other metals such as barium and yttrium, are exemplified by YBa₂Cu₃Oₓ (critical temperature of about 92 K), where x is close to 7.

In maglev trains, YBa₂Cu₃Oₓ magnets are mounted on the train and cooled inside well-insulated chambers using liquid nitrogen. These magnets interact with permanent magnets or electromagnets embedded in the track, causing the train to levitate and eliminating friction with the track. By carefully controlling the magnetic fields along the track, the train can also be propelled forward. Changing the position or intensity of the track’s magnetic fields induces forces on the superconducting magnets via electromagnetic induction, pushing or pulling the train along its path. The combination of frictionless levitation and contactless propulsion allows maglev trains to achieve very high speeds with minimal energy loss, making them a highly efficient transportation technology.

 

Question

Why is YBa₂Cu₃Oₓ a superconductor when the individual elements are not superconducting?

Answer

YBa₂Cu₃Oₓ has a layered, perovskite-like crystal structure, with alternating layers of CuO₂ planes and other layers containing yttrium and barium (see above diagram for the unit cell of YBa₂Cu₃O₇). The CuO₂ planes are where superconductivity primarily occurs, while the other layers act as charge reservoirs and provide structural support for the lattice. This arrangement creates an energetically favourable environment in which the electronic properties necessary for superconductivity can emerge.

Within the CuO₂ planes, strong hybridisation between copper orbitals and oxygen orbitals creates an extensive two-dimensional network of electronic states. The superconductivity itself is enabled by controlling the oxygen content through a process known as doping, which introduces holes (positively charged carriers) into the planes. These holes are the microscopic charge carriers that pair up to form Cooper pairs.

Furthermore, the two charge carriers that form a Cooper pair in the CuO₂ planes are much more tightly bound than in conventional superconductors. This results in a very short coherence length (the characteristic size of a Cooper pair) of only a few nanometres, compared with the tens to hundreds of nanometres typical in phonon-mediated superconductors such as NbTi. The short coherence length reflects a much stronger effective attractive interaction, likely arising from magnetic (spin-fluctuation) mechanisms rather than phonons. This stronger pairing interaction is one of the key reasons YBa₂Cu₃Oₓ can sustain superconductivity at temperatures far higher than those of conventional superconductors.

In essence, superconductivity in YBa₂Cu₃Oₓ is not due to the individual elements, but emerges from the specific arrangement of copper and oxygen atoms in the planes, combined with proper doping and crystal structure.

 

Previous article: Band theory
Content page of solid-state chemistry
Content page of advanced chemistry
Main content page

Fermi-Dirac distribution

The Fermi–Dirac distribution gives the probability that a quantum state of a given energy is occupied by a fermion at any temperature above absolute zero, accounting for the Pauli exclusion principle.

It is essential for describing the behaviour of electrons in solids and underpins our understanding of electrical and thermal conductivity in metals and semiconductors. Because many modern technologies, such as transistors, lasers and integrated circuits, depend on the behaviour of electrons in materials, the Fermi–Dirac distribution is a foundational tool in both solid-state chemistry and electronic engineering.

To derive the Fermi-Dirac distribution, consider a system of fixed volume containing N electrons occupying discrete single-particle energy levels εᵢ, each with degeneracy gᵢ. If nᵢ electrons occupy the gᵢ available states of energy εᵢ, where nᵢ ≤ gᵢ, then the total number of electrons and the total energy of the system are

where

Replicas of this system form a microcanonical ensemble, meaning that only configurations with the same fixed N and E are allowed.

Because electrons are indistinguishable fermions and each single-particle state can be occupied by at most one electron (Pauli exclusion principle), the number of ways to place nᵢ electrons among the gᵢ states at energy εᵢ is

For example, if two electrons occupy three degenerate states (nᵢ = 2, gᵢ = 3), the system can be found in any of the three microstates 110, 101, or 011 at different instants in time.

 

Question

Is eq2 a combination?

Answer

Yes, it is. It counts the number of ways to choose nᵢ occupied states out of gᵢ available single-particle states, where the electrons are indistinguishable (order does not matter) and the degenerate states are distinct because each state is defined by a unique set of quantum numbers.

 

It follows that the total number of microstates corresponding to a particular configuration is

Taking the natural logarithm of eq4 and substituting eq3 into it gives:

Using Stirling’s approximation, ln x! ≈ x ln x − x, where x ≫ 1, yields:
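For readers who want to see how good this approximation is, the short sketch below compares ln x! (computed via math.lgamma) with x ln x − x; the relative error is already small for modest x and becomes negligible for the particle numbers relevant to a macroscopic electron gas.

```python
import math

for x in (10, 100, 1000, 10_000):
    exact = math.lgamma(x + 1)        # ln(x!)
    stirling = x * math.log(x) - x    # Stirling's approximation
    rel_err = (exact - stirling) / exact
    print(f"x = {x:>6}: ln x! = {exact:12.2f}, x ln x - x = {stirling:12.2f}, "
          f"relative error = {rel_err:.2e}")
```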

The possible configurations that define the microcanonical ensemble are restricted by eq1 and eq2. For example, two different configurations generally have different total energies and therefore cannot both belong to the same microcanonical ensemble (see above diagram for an illustration). Within the allowed set of configurations, all corresponding microstates are equally probable. The most probable equilibrium configuration is therefore the one with the largest number of ways of achieving it, i.e. the one whose W (or ln W) is maximal.

The total differential of is:

Hence, we want to solve for . With reference to Step 2 of the derivation of the Boltzmann distribution using the Lagrange method of undetermined multipliers, eq6 becomes:

where we have chosen minus signs for the 2nd and 3rd terms for convenience.

Since each dnᵢ varies independently, eq7 holds only if each coefficient is zero:

Substituting eq5 into eq8 gives:

where we have changed the summation index from i to j in eq9 to distinguish the summation variable from the differentiation variable.

Since does not depend on ,

All terms in the summation of eq10 vanish except the one with j = i. So,

Carrying out the differentiation and rearranging the result yields:

As mentioned above, nᵢ is the number of electrons at energy εᵢ, which is equivalent to the number of occupied states at that level. To transform eq11 into a probability distribution, we write:

where p(εᵢ) is the probability (a fraction between 0 and 1) that a single state at energy εᵢ is occupied. Thus, gᵢ p(εᵢ) gives the expected number of electrons occupying that energy level.

Substituting eq12 into eq11 gives:

To evaluate the two Lagrange multipliers, consider a system (with energy E) of the microcanonical ensemble that is partitioned by a rigid but permeable divider into two subsystems A and B, where A and B have energies E_A and E_B respectively. The total entropy of the system is

Because the total system is also isolated, the second law of thermodynamics demands that be maximised at equilibrium. The total differential of with respect to is:

From , we have . Thus, the condition for maximum entropy under energy exchange between the two subsystems at equilibrium is

Eq14 suggests that the two subsystems share a common physical property at thermal equilibrium. If we regard the electrons in each of the subsystems as a collection of non-interacting particles that move freely, much like molecules in a gas, they collectively form an electron gas. We can then use the fundamental thermodynamic equation dU = T dS − p dV + μ dN to describe system A, where μ is the chemical potential (also known as the Fermi level) of the electron gas, which has units of energy per particle instead of energy per mole. Since dV_A = 0 and dN_A = 0, we have ∂S_A/∂U_A = 1/T_A. Similarly, ∂S_B/∂U_B = 1/T_B. This means that the change in entropy with respect to the change in energy is the same throughout the system:

Substituting the statistical entropy into eq15 yields:

Substituting eq8 into eq6 gives:

Substituting the derivatives of eq1 and eq2 into eq17 results in:

Substituting eq18 back into eq16 yields

Repeating the above logic, the total differential of with respect to , where , is:

Substituting into the above equation gives , and using the fundamental thermodynamic equation results in:

Substituting and eq18 into eq20 yields:

Substituting eq19 and eq21 back into eq13 gives:

which is the Fermi-Dirac distribution function.

Eq22 gives the probability that an energy state is occupied by a fermion at temperature T. It plays a central role in solid-state chemistry. When the energy of the state equals the Fermi level (E = μ), the occupancy becomes 1/2 at any T. In other words, the Fermi level is the energy level at which the probability of finding an electron in a material is 50% at thermodynamic equilibrium for any temperature above 0 K.
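A minimal coded sketch of this behaviour, assuming the standard form f(E) = 1/{exp[(E − μ)/k_BT] + 1} and an illustrative Fermi level of 5 eV: the occupancy at E = μ is exactly 0.5 at every temperature, while states above and below the Fermi level approach 0 and 1 as the temperature falls.

```python
import numpy as np

def fermi_dirac(E, mu, T):
    """Occupation probability of a state at energy E (eV) for Fermi level mu (eV)."""
    k_B = 8.617e-5  # Boltzmann constant, eV/K
    return 1.0 / (np.exp((E - mu) / (k_B * T)) + 1.0)

mu = 5.0  # illustrative Fermi level in eV
for T in (30, 300, 3000):
    E = mu + np.array([-0.2, -0.05, 0.0, 0.05, 0.2])
    print(f"T = {T:>4} K:", np.round(fermi_dirac(E, mu, T), 3))
# The middle entry (E = mu) is always 0.5; the others approach 1 or 0 as T falls.
```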

The Fermi level, together with band theory, is particularly useful for understanding the electrical conductivity in metals. In semiconductors, it determines how the occupation of states in the conduction and valence bands changes with temperature and doping.

 

Question

What is the definition of Fermi energy?

Answer

The Fermi energy is the energy of the highest occupied single-particle state in a system of non-interacting electrons at 0 K. In metals, the Fermi level approaches the Fermi energy at absolute zero. Although the terms Fermi energy and Fermi level are often used interchangeably in chemistry and physics, especially when discussing properties at or near absolute zero, their strict theoretical definitions distinguish them.

 

Previous article: Equilibrium constant in statistical thermodynamics
Content page of statistical thermodynamics
Content page of advanced chemistry
Main content page

Scanning tunnelling microscopy and spectroscopy (STM/STS)

Scanning tunnelling microscopy (STM) is a high-resolution imaging technique that maps a sample’s surface structure by measuring electron tunnelling between a sharp metal tip and the sample.

STM enables the imaging, manipulation, and characterisation of materials at the atomic scale. Invented in 1981 by Gerd Binnig and Heinrich Rohrer at IBM Zurich, for which they received the 1986 Nobel Prize in Physics, STM marked the beginning of modern nanotechnology. Unlike optical or electron microscopes, which rely on the reflection or transmission of light or electrons, STM measures a quantum mechanical current that flows through a vacuum gap between a sharp conductive tip and a conducting or semiconducting surface. This unique approach allows STM to resolve individual atoms and probe their electronic properties.

A typical STM system comprises several key components: a sharp metallic tip, a piezoelectric scanner, a feedback control system, and associated electronics for current detection and image acquisition (see diagram below). Each plays a crucial role in achieving atomic-scale precision.

The STM tip must be atomically sharp, ideally terminating in a single atom, because the tunnelling current is dominated by the atom closest to the sample. It is usually made of tungsten, platinum–iridium, or another conductive material. Tips are often prepared by electrochemical etching, which produces a fine point, followed by gentle cleaning to remove contaminants.

When the tip is brought extremely close — on the order of a nanometre — to a conductive surface and a fixed bias voltage is applied between them, the probability of electrons tunnelling through the vacuum barrier varies with the width of that barrier. Since an electric current is defined as the net flow of electric charge, the successful tunnelling of electrons through the barrier gives rise to a current I:

where T is the transmission probability and n is the number of incident electrons per unit time.
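The extreme height sensitivity of STM follows from the exponential dependence of the transmission probability on barrier width (derived in the quantum-tunnelling article). The sketch below, assuming an effective barrier height of about 4.5 eV and the wide-barrier form T ∝ exp(−2κd), estimates how strongly the current responds to a 1 Å change in the tip–surface gap; the numbers are illustrative.

```python
import numpy as np

hbar = 1.055e-34        # J s
m_e = 9.109e-31         # electron mass, kg
phi = 4.5 * 1.602e-19   # effective barrier height ~ work function, J (illustrative)

kappa = np.sqrt(2 * m_e * phi) / hbar   # decay constant of the wavefunction in the gap, 1/m
ratio = np.exp(2 * kappa * 1e-10)       # current change for a 1 Angstrom change in gap width

print(f"kappa = {kappa:.3e} m^-1")
print(f"I(d) / I(d + 1 Angstrom) ~ {ratio:.1f}")  # roughly an order of magnitude per Angstrom
```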

 

Question

What is the purpose of introducing a bias voltage between the tip and the surface?

Answer

When no bias voltage is applied, the circuit containing the tip and the surface is in thermodynamic equilibrium, with both tip and surface sharing the same electrochemical potential. At this equilibrium, the probability of electrons tunnelling from the tip to the surface is equal to the probability of tunnelling from the surface to the tip, resulting in no net electron flow.

When a non-zero bias voltage V is applied, the tip and the surface are held at different potentials. This application of voltage introduces a thermodynamic driving force in the circuit by shifting the entire electronic structure of the surface (valence band, conduction band and Fermi level) as a rigid block by eV relative to that of the tip. The tunnelling probability then becomes greater in one direction, creating a net electron flow. In essence, the bias voltage creates the energy difference between the tip and the surface required to generate a measurable tunnelling current.

Notably, the direction of electron flow depends on the polarity of the bias voltage:

    • Positive surface bias (V > 0): Electrons tunnel from the tip’s occupied states into the surface’s unoccupied states.
    • Negative surface bias (V < 0): Electrons tunnel from the surface’s occupied states into the tip’s unoccupied states.

 

An STM experiment can be performed in two distinct modes: constant-height or constant-current. The choice between modes depends on the sample’s roughness and conductivity, as well as the desired imaging speed.

In the constant-height mode, the feedback loop, consisting of a proportional–integral–derivative (PID) controller, is briefly engaged at the start to bring the tip to the initial tunnelling distance. The desired constant height is established by moving the tip until the setpoint current is reached, after which the loop is disabled.

The movement of the tip relative to the sample is controlled via a piezoelectric scanner, usually a ceramic tube or stack, that expands or contracts linearly in response to applied voltages. This scanner allows sub-ångström precision in all three spatial directions (x, y, and z). During scanning in the constant-height mode, the x– and y-voltages move the tip laterally across the surface. When the tip passes over a higher region of the surface (closer to atoms), the effective barrier distance decreases, and the tunnelling current increases sharply; when it passes over a lower region (further from atoms), the distance increases slightly and current decreases. This variation in current reflects the surface topography.

In the constant-current mode, the feedback loop maintains the set tunnelling current by adjusting the z-position of the tip. The PID controller compares the measured current with a predefined setpoint and modifies the vertical voltage accordingly as the tip moves in the x– or y-directions, with the resulting z-signal representing surface height.

Finally, the varying current or voltage is processed by highly sensitive amplifiers and low-noise electronics. The resulting amplified signal is then analysed by computer software to generate topographic maps, which are often displayed as three-dimensional images (see diagram below).

Modern STM systems can also perform spectroscopic measurements, leading to scanning tunnelling spectroscopy (STS). In this mode, the tip is positioned over a fixed point on the surface and the feedback loop is turned off to maintain a constant tip height. The bias voltage between the tip and the surface is then varied, while the tunnelling current is recorded as a function of the applied voltage.

As mentioned in the above Q&A, varying the bias voltage shifts the electronic structure of one electrode as a rigid block relative to the other. As the positive surface bias increases, electrons in the tip can tunnel into unoccupied electronic states of progressively higher energy in the surface. By analysing how the tunnelling current I changes with voltage V, STS reveals the relative number of states available for tunnelling at each energy level. Peaks in the dI/dV versus V spectrum correspond to discrete energy levels, much like the features observed in optical or photoemission spectroscopy. The higher the peaks, the greater the density of states.
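In practice, a dI/dV spectrum is obtained either with a lock-in amplifier or by numerically differentiating the recorded current–voltage curve. The sketch below shows the simplest numerical route; the I(V) array is a synthetic stand-in for measured STS data, not data from the article.

```python
import numpy as np

# Synthetic stand-in for a measured I-V curve (replace with real STS data).
V = np.linspace(-2.0, 2.0, 401)                   # bias voltage, V
I = 1e-9 * (V + 0.3 * np.tanh((V - 0.8) / 0.1))   # tunnelling current, A (illustrative)

dIdV = np.gradient(I, V)   # numerical dI/dV, proportional to the local density of states

peak_bias = V[np.argmax(dIdV)]
print(f"Strongest feature in dI/dV at about {peak_bias:+.2f} V")
```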

STS is often used to study crystal and semiconductor surfaces. For example, when a germanium crystal is cut (e.g. into Ge(111)), the surface atoms — originally tetrahedrally bonded in the bulk —  find themselves missing some of their bonding partners, creating energetically unfavourable “dangling bonds”. To reduce surface energy, the atoms near the surface spontaneously rearrange, forming a stable periodic structure that differs from the bulk lattice.

During this surface reconstruction, some atoms are displaced upwards to become adatoms, which sit above the surface layer (see diagram above). Other atoms, known as rest atoms, are not displaced upwards, but lose some of their bonds and relocate slightly from their original positions on the surface.

Although both adatoms and rest atoms possess dangling bonds, their electronic environments differ. Rest-atom dangling-bond states lie at lower energy because rest atoms are more strongly bound to the underlying crystal. These states tend to be occupied by electrons. In contrast, adatom dangling-bond states lie at higher energy owing to their weaker bonding to the substrate, and thus remain unoccupied. Consequently, in an STS measurement we observe a strong adatom peak at positive surface bias (electrons tunnelling from tip to surface into empty adatom states) and a weaker rest-atom peak at negative bias (electrons tunnelling from occupied rest-atom states into the tip).

Thus, the STS spectrum (see diagram below) provides direct insight into the nature of the surface atoms and their electronic states, and it serves as an important tool for validating theoretical surface reconstruction models.

In conclusion, the standard function of an STM is to image surfaces at atomic resolution. However, when operated under varying bias voltages, it can probe the electronic properties of the surface as a function of energy, becoming a spectroscopic technique.

 

Previous article: Quantum tunnelling
Content page of scanning tunnelling spectroscopy (STS)
Content page of advanced chemistry
Main content page

Quantum tunnelling

Quantum tunnelling is a phenomenon in which a particle passes through a potential energy barrier that it classically does not have enough energy to overcome.

In classical terms, a ball rolling up a hill without enough energy to reach the top would simply roll back down. However, in the quantum world, particles such as electrons are described by wavefunctions that represent probabilities of where they might be found. Because these wavefunctions extend slightly into and beyond barriers, there is a finite probability that the particle will appear on the other side, as though it has “tunnelled” through.

Consider a particle in the potential energy regions shown in the diagram above, where

where V is a finite potential energy value.

The Schrödinger equation for the regions and is , while that for is . It follows that the general solutions (which can be verified by substituting them into the respective Schrödinger equations) are:

where and , with .

 

Question

Show that e^(ikx) and e^(−ikx) are eigenfunctions of the linear momentum operator −iħ d/dx, and interpret the eigenvalues with regard to the motion of a particle.

Answer

−iħ d[e^(ikx)]/dx = ħk e^(ikx) and −iħ d[e^(−ikx)]/dx = −ħk e^(−ikx). Since the two wavefunctions are associated with momentum eigenvalues of equal magnitude but opposite signs, e^(ikx) represents a particle travelling in the positive x-direction with momentum ħk, while e^(−ikx) represents a particle travelling in the negative x-direction with momentum −ħk.

 

Therefore, is the complete wavefunction of the particle moving towards the barrier from the left, consisting of an incident wave and a reflected wave from the barrier. If the particle is able to penetrate the barrier, it can only move to the right of the barrier, which implies that .

For , and to be acceptable solutions to their respective Schrödinger equations, they must be continuous at the boundaries of the potential regions (at and ), and their first derivatives must also be continuous. Specifically, this requires:

1) and or

2) and or

To solve the four simultaneous equations, we begin by substituting eq2 into eq4 to give

and

Solving eq5 and eq6 yields:

and

Substituting eq1 into eq3 results in , or equivalently:

Substituting eq7 and eq8 into eq9 and rearranging gives

Since, , or equivalently , eq10 becomes

Expanding the denominator of eq11 and simplifying yields:

The probability that the particle is travelling in the positive -direction towards the left of the barrier is , while the probability of it travelling in the same direction on the right of the barrier is . Therefore, the ratio represents the transmission probability .

Multiplying eq12 with its complex conjugate, and using the identities and , yields:

which rearranges to:

Substituting and  into eq13 and simplifying results in:

where .

For high potential and wide barriers, where κL ≫ 1, eq14 simplifies to:

Since T ∝ e^(−2κL), the transmission probability decreases exponentially with the thickness of the barrier (see diagram above) and decreases with increasing particle mass when E < V. In other words, lighter particles are more likely to tunnel through barriers than heavier ones. For example, electrons can tunnel efficiently through a few nanometres of GaAs semiconductor with a barrier potential of V ∼ 0.1 − 1 eV, or through 0.5 − 1 nm of vacuum with a barrier potential of V ∼ 4 − 5 eV in scanning tunnelling microscopy (STM).
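The sketch below evaluates the wide-barrier approximation quoted above, taken here in the common form T ≈ 16ε(1 − ε)e^(−2κL) with ε = E/V, for an electron tunnelling through a vacuum gap with V ≈ 4.5 eV and E ≈ 1 eV; the specific numbers are illustrative rather than values from the article.

```python
import numpy as np

hbar = 1.055e-34   # J s
m_e = 9.109e-31    # electron mass, kg
eV = 1.602e-19     # J

E, V = 1.0 * eV, 4.5 * eV                 # illustrative particle energy and barrier height
kappa = np.sqrt(2 * m_e * (V - E)) / hbar # decay constant inside the barrier, 1/m
eps = E / V

for L in (0.3e-9, 0.5e-9, 1.0e-9):        # barrier widths in metres
    T = 16 * eps * (1 - eps) * np.exp(-2 * kappa * L)
    print(f"L = {L*1e9:.1f} nm: T ~ {T:.2e}")
# Each extra ~0.1 nm of vacuum suppresses the transmission by roughly an order of magnitude.
```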

 

Question

Why is in the region depicted as a standing wave in the above diagram?

Answer

is depicted as a standing wave for simplicity. When the transmission probability is small, , and is the condition for a perfect standing wave. In reality, it represents a partial standing wave, since the amplitudes are not exactly equal.

 

In conclusion, quantum tunnelling has profound implications across physics, chemistry, and technology. It explains how nuclear fusion occurs in stars, where protons can tunnel through repulsive energy barriers despite not having enough kinetic energy to overcome them. In modern technology, tunnelling is exploited in devices such as tunnel diodes and scanning tunnelling microscopes, which can image surfaces at the atomic level. It also plays a role in radioactive decay and quantum computing, where tunnelling effects can influence qubit stability and transitions. Overall, quantum tunnelling is a cornerstone of quantum theory and a striking demonstration of how the microscopic world defies classical intuition.

 

Next article: Scanning Tunnelling microscopy and spectroscopy (STM/STS)
Content page of scanning tunnelling spectroscopy (STS)
Content page of advanced chemistry
Main content page

Entropy and the arrow of time

Entropy is a measure of the amount of disorder or the number of possible microscopic arrangements in a system, and the arrow of time is the one-way direction in which time flows. But how are the two concepts related? To unravel this, we must first understand what entropy really is.

Have you ever wondered why a tablespoon of salt dissolves spontaneously in water, or why certain foods turn rancid over time? These everyday occurrences may seem unrelated, but they are all tied to a powerful concept in physics and chemistry: entropy. Though the word may conjure images of complex equations or scientific jargon, entropy is something we all experience daily. It’s the quiet force behind why hot coffee cools, why ice melts at room temperature, and even why dust spreads everywhere instead of obediently accumulating in the dustpan.

But what exactly is entropy? Many spontaneous processes that occur in our homes involve a dispersal of energy. When salt dissolves in water, the ordered solid lattice structure of sodium chloride breaks apart into freely moving ions, which can occupy many more positions within the liquid (see diagram below). Similarly, when butter turns rancid due to oxidation, gases and volatile organic compounds (such as aldehydes, ketones and short-chain fatty acids) are released, dispersing energy and matter into the surrounding air.

Entropy, denoted by , is a measure of such energy dispersal at a specific temperature. Because spontaneous processes tend towards greater disorganisation, entropy is often loosely described as a measure of disorder or randomness in a system. In fact, the Second Law of Thermodynamics, which states that the entropy of an isolated system increases over time, is rooted in countless everyday observations like these.

On closer examination, the second law is not an absolute rule but a statistical tendency that applies to systems with vast numbers of particles — atoms and molecules that move in countless ways. It describes the probabilistic tendency of these particles to evolve towards more disordered, more probable arrangements.

Consider the melting of ice at room temperature. The highly ordered crystalline structure of ice (low entropy) spontaneously transitions into liquid water (higher entropy) because there are far more ways for the molecules to arrange themselves in the fluid state than in a rigid crystal lattice. This transformation increases the total entropy of the system (ice plus water).

How then is entropy related to the arrow of time? On the macroscopic level, Newton’s equations are time-symmetric — they work just as well backward as forward. For example, if t and v are replaced by −t and −v in Newton’s second law F = m(dv/dt), the same valid solution is obtained. But when we consider systems with enormous numbers of particles, like salt dissolving in water or ice melting, the laws of probability take over. The number of possible disordered (high-entropy) configurations vastly exceeds the number of ordered (low-entropy) ones. Thus, while nothing in the laws of physics forbids entropy from decreasing, it is astronomically unlikely. In other words, entropy increases not because it must, but because it is overwhelmingly probable that it will. This statistical bias gives rise to what physicist Arthur Eddington famously called the Arrow of Time. The arrow points in the direction of increasing entropy, defining the “forward” direction of time that we perceive in memory, causality and the evolution of the universe.

If the arrow of time points in the direction of increasing entropy, then it must have had a beginning — a moment when entropy was at its lowest. To understand the flow of time in the universe, we must look back to its earliest moments: the Big Bang.

At first glance, the Big Bang might not seem like a low-entropy event. After all, the early universe was extremely hot, dense, and filled with high-energy radiation — conditions we might intuitively associate with disorder. But entropy is not just about temperature; it also depends on how matter and energy are arranged. The early universe, though energetic, was remarkably smooth and uniform, with matter and radiation spread almost evenly in all directions. This evenness means it had very few possible configurations, and therefore, very low entropy.

The evidence for this comes from the cosmic microwave background (CMB), the faint afterglow of the Big Bang. Unlike radiation from stars or galaxies, the CMB is remarkably uniform in all directions. Observations of the CMB across the sky, interpreted using the Planck radiation law, reveal temperature variations of only about one part in 100,000. This extraordinary uniformity indicates that, in its infancy, the universe was astonishingly smooth and ordered. Had the universe begun in a random, high-entropy state, matter and radiation would have been far clumpier and more chaotic, and the arrow of time, the steady progression towards greater disorder, might never have emerged as we observe it.

As the universe expanded from the Big Bang, gravity began to play an increasingly important role in shaping entropy. Unlike most physical systems, gravity behaves counterintuitively with respect to order and disorder. In an ordinary gas, for example, molecules spread out to maximise entropy. But in a gravitational system, clumping actually increases entropy.

To see why, imagine a cloud of gas floating in space. If left alone, gravity will cause it to collapse and form a star or planet. The resulting structure seems more ordered, but in reality, the overall entropy has increased. That’s because the gravitational potential energy lost during collapse is converted into heat and radiation, which disperse into space. The final configuration, a hot star emitting light, represents a far greater number of microscopic possibilities than the original, uniform gas cloud.

This process explains how the universe could start with low entropy and still give rise to the complex, structured cosmos we see today — galaxies, stars, planets and eventually life. As gravity amplified tiny irregularities in the early universe, matter clumped together and entropy grew. The formation and evolution of stars, black holes and galaxies are all milestones on this cosmic journey towards higher entropy.

If entropy keeps increasing, what happens in the end? The answer depends on how the universe evolves, whether it will expand forever, halt and contract, or oscillate in cycles. These possibilities are described in four scenarios:

    • Big Freeze (or heat death):

Observations over the past two decades, particularly of distant supernovae and the cosmic microwave background, reveal that the universe’s expansion is accelerating. In this case, the universe will grow ever colder and more diffuse. Over trillions of years, stars will burn out, stellar remnants will cool, and galaxies will fade into darkness. Eventually, the cosmos will approach a state of maximum entropy — a condition known as the heat death of the universe. At that point, the universe will be a thin, uniform haze of particles and radiation, with no free energy left to drive physical processes or sustain life. In such a state, the universe reaches thermodynamic equilibrium, and with no further increase in entropy possible, the arrow of time would effectively lose its direction.

    • Big Crunch:

If gravity were strong enough, it could eventually halt the expansion of the universe and reverse it into a contraction, ending in what is known as a “Big Crunch.” In this scenario, galaxies and stars would merge, black holes would coalesce, and even atoms would be torn apart as the universe collapses into an increasingly dense and hot state. The gravitational collapse releases potential energy as heat and radiation, driving the total entropy of the universe ever higher. As densities and temperatures approach extreme values, known laws of physics break down, and space and time themselves may lose their classical meaning.

    • Big Bounce:

The Big Bounce is a speculative but fascinating alternative to both the Big Freeze and Big Crunch scenarios. It proposes that the universe did not begin from a singular, one-time Big Bang, but rather from the collapse of a previous universe. In this picture, the cosmos undergoes a perpetual cycle of expansion and contraction — each “Big Crunch” giving rise to a new “Big Bang.” Instead of a final end, the universe continually renews itself through an endless series of bounces. This idea is based on the hypothesis that at extremely high densities, quantum effects could generate a repulsive force that halts the collapse before a singularity forms.

However, the Big Bounce faces a fundamental challenge from the Second Law of Thermodynamics. Entropy can only increase, so if each cycle carries forward the entropy produced in the previous one, the total disorder of the universe would accumulate from bounce to bounce. Over many iterations, the universe would grow larger and last longer with each cycle, trending towards a maximum entropy state, which leads to a heat-death-like equilibrium even within this cyclic framework. In other words, while a Big Bounce could prevent a singular end, it may not fully escape the thermodynamic arrow of time.

    • Big Rip:

The Big Rip is one of the most dramatic and unsettling possibilities for the fate of the universe. In this model, the universe, fuelled by dark energy, keeps accelerating until it overcomes all other forces. First, distant galaxies would drift out of view as their light can no longer reach us. Then, the gravitational bonds holding galaxies together would fail. Stars and planets would be torn from their orbits, followed by the disintegration of solar systems and the destruction of individual stars. In the final moments, even atoms and subatomic particles would be ripped apart as the fabric of space itself expands faster than light can travel. The entire process would culminate in a singular event when spacetime and all known structures are shredded into oblivion.

Of all the scenarios, the Big Freeze is consistent with many observations of accelerated expansion. In this case, time unquestionably continues, but it leads to a static, featureless epoch where there is no change and thus no discernible arrow of time. However, recent evidence suggesting that the universe may be decelerating raises the possibility of a Big Crunch.

Whatever the ultimate explanation, one fact remains: we live in a universe where entropy increases, where time moves inexorably forward, and where each passing moment marks a small but irreversible step in the great unfolding of cosmic history. The arrow of time, born in the furnace of the Big Bang, continues to guide the evolution of everything, from the orbits of galaxies to the beating of our own hearts.

 

Previous article: Statistical entropy
Content page of chemical thermodynamics
Content page of advanced chemistry
Main content page

Atomic clock

An atomic clock is a highly precise timekeeping device that measures time based on the frequency of electromagnetic radiation emitted or absorbed during transitions between specific energy levels of atoms.

Traditional mechanical and even early electronic clocks are limited in accuracy because their timekeeping depends on macroscopic oscillations, such as pendulum swings or quartz vibrations, which are affected by temperature, friction, air pressure and other environmental factors. These variations cause conventional clocks to drift over time, making them unsuitable for applications that require extreme precision.

The idea of using atomic phenomena to measure time dates back to the 19th century. In his 1873 Treatise on Electricity and Magnetism, James Clerk Maxwell was among the first scientists to suggest that the oscillations of light waves could serve as a fundamental standard for time, laying the conceptual groundwork for the development of atomic clocks nearly a century later.

Building on Maxwell’s insight, modern atomic clocks achieve extraordinary precision by measuring the natural “ticks” of atoms rather than relying on macroscopic oscillations. One of the most widely used types is the caesium-133 atomic clock, which defines the second in the International System of Units (SI).

In the ground state, a caesium-133 atom has a single unpaired electron in its valence s-orbital, giving it the electronic configuration [Xe] 6s¹. For this electron, the orbital angular momentum is l = 0, and the spin angular momentum is s = 1/2, resulting in a total electronic angular momentum of J = 1/2 and the term symbol ²S₁/₂. The nucleus, composed of protons and neutrons, has a collective nuclear spin of I = 7/2.

Both the electron and the nucleus generate magnetic fields associated with their respective magnetic dipole moments, and these fields interact with one another. This interaction, known as magnetic dipole coupling, causes the nuclear spin angular momentum I to combine vectorially with the total electronic angular momentum J (see diagram above), producing two total angular momentum states, F = 3 and F = 4, where F ranges from |I − J| to I + J in integer steps and m_F is the projection of F onto the laboratory z-axis. The small energy difference between these two states (see diagram below), known as the hyperfine splitting, forms the basis for the caesium atomic clock: by precisely measuring the frequency of radiation corresponding to transitions between these two states, the clock can maintain an exceptionally stable and accurate measure of time.

 

Question

Why is caesium-133 chosen as the standard for atomic clocks?

Answer

Caesium-133 is a stable isotope with a well-defined hyperfine splitting. Its hyperfine transition frequency of 9,192,631,770 Hz lies in the microwave region and is relatively insensitive to small variations in temperature, magnetic fields and electric fields compared to other atoms. This allows the transition to be generated and measured with exceptionally high precision in laboratory conditions.

 

The main components of a caesium atomic clock include an atomiser, a magnetic state selector, a resonance chamber, a second magnetic selector, a detector and an electronics module (see diagram above). In the atomiser, an oven heats a small amount of metallic caesium, releasing atoms that pass through a narrow aperture to form a beam within a vacuum chamber. The oven temperature is carefully maintained at around 120°C to produce an adequate vapour pressure of caesium. At this temperature, the population ratio of atoms in the upper and lower hyperfine states follows the Boltzmann distribution:

Substituting ΔE = hν, ν = 9,192,631,770 Hz and T = 393 K into the above equation gives:
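A short calculation of this ratio, assuming ΔE = hν with ν = 9,192,631,770 Hz and an oven temperature of about 393 K, shows that the two hyperfine levels are almost equally populated at the oven temperature, which is why a state selector is needed before the resonance chamber.

```python
import math

h = 6.626e-34        # Planck constant, J s
k_B = 1.381e-23      # Boltzmann constant, J/K
nu = 9_192_631_770   # Cs-133 hyperfine transition frequency, Hz
T = 393              # oven temperature (~120 degrees C), K

ratio = math.exp(-h * nu / (k_B * T))       # N_upper / N_lower
print(f"N_upper / N_lower = {ratio:.5f}")   # ~0.99888: populations nearly equal
```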

Atoms in the lower hyperfine state are then selected and directed into the resonance chamber by a magnetic selector acting as a Stern-Gerlach device. Inside the chamber, the atoms are exposed to microwave radiation tuned near 9,192,631,770 Hz, corresponding to the transition between the two hyperfine levels. This frequency is generated by an electronic oscillator connected to both the resonance chamber and the detector in a feedback loop. When the microwave frequency exactly matches the transition, atoms in the lower state are driven to the upper state with maximum probability. After leaving the chamber, the beam passes through a second Stern–Gerlach device, which deflects unexcited atoms so that only those in the excited state reach the detector.

The detector measures the number of caesium atoms that arrive. Typically, this involves the atoms striking a hot surface, where they are ionised. The resulting ions or electrons are collected at an electrode, generating an electric current in the detector circuit that is proportional to the number of excited atoms reaching the detector. When the oscillator frequency is exactly equal to the transition frequency, the detector current reaches its maximum value.

If temperature or other effects cause the oscillator frequency to drift from 9,192,631,770 Hz, the detector current decreases. A servo-control circuit detects this deviation and adjusts the oscillator frequency to restore the current to its maximum value. This continuous feedback ensures that the oscillator remains precisely locked to the hyperfine transition frequency. The locked oscillator output is then sent to a counter circuit, which counts the oscillations and produces one output pulse each time exactly 9,192,631,770 cycles are completed — defining one second. These pulses drive a digital display, allowing the clock to show the precise time corresponding to the oscillations of the caesium atoms.

Modern caesium atomic clocks maintain accuracy to within a few billionths of a second per day and form the basis of Coordinated Universal Time (UTC), as well as global positioning and communication systems.

 

Question

Must the sample of caesium be replenished for the clock to run indefinitely?

Answer

Not often. In a caesium atomic clock, the atomisation process in the oven is extremely gentle. The oven warms a small quantity of metallic caesium, typically only a few grammes, to around 100–150 °C, producing a very low vapour pressure. This allows caesium atoms to slowly evaporate and form a steady atomic beam within the vacuum chamber. Only a tiny fraction of the atoms is emitted per second, perhaps 10¹² to 10¹⁵ atoms per second. Compared to roughly 10²¹ atoms in one gramme of caesium (1 mole of caesium = 132.91 g), the sample is consumed at an exceedingly slow rate. The same small reservoir can therefore support continuous operation for many years, often 5 to 20 years or more, before the caesium becomes depleted or the oven performance degrades.

When that happens, the caesium source is simply replaced or refilled, and the clock continues operating as normal. In practice, the electronics or vacuum system usually require maintenance long before the caesium itself runs out. Thus, while the clock cannot run truly indefinitely, the atomisation process can indeed continue steadily for decades without needing frequent replenishment.
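As an illustrative order-of-magnitude check (the 5 g reservoir and the flux of \(10^{14}\) atoms per second are assumed values within the ranges quoted above): 5 g of caesium contains \( (5/132.91)\times 6.022\times10^{23} \approx 2\times10^{22} \) atoms, so at \(10^{14}\) atoms per second the reservoir lasts roughly \( 2\times10^{8}\ \text{s} \), i.e. about seven years.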

 

Previous article: f-orbital
Content page of quantum mechanics
Content page of advanced chemistry
Main content page

The energy-time uncertainty relation

The energy–time uncertainty relation states that the more precisely a system’s energy is defined, the longer it takes for the system to undergo a significant change.

Mathematically, it is expressed as:

It can be derived from the general form of the uncertainty principle , where and are the observables corresponding to the Hermitian operators and , respectively. In this case, we let be a system’s energy , which is the observable associated with the Hamiltonian operator . Any uncertainty in therefore corresponds to a change in .
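For reference, the relation in question and the general uncertainty principle from which it is derived presumably take the standard forms (the notation below is assumed rather than reproduced from the original equations):

\[ \Delta E\,\Delta t \ge \frac{\hbar}{2} \qquad \text{(cf. eq27a)}, \qquad\qquad \Delta A\,\Delta B \ge \tfrac{1}{2}\big|\langle[\hat A,\hat B]\rangle\big|. \]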

However, in non-relativistic quantum mechanics, time is a parameter, not a Hermitian operator. Hence, we cannot replace with a time operator . Instead, we begin with the time evolution of the expectation value of an operator that does not explicitly depend on time:

 

Question

What is an operator that does not explicitly depend on time?

Answer

An operator that does not explicitly depend on time is one whose definition does not contain time as a variable. For example, the angular momentum operator depends only on spatial coordinates and not on time. However, the state on which the operator acts may evolve with time according to the Schrödinger equation. Therefore, even though the operator itself is time-independent, its expectation value can still change over time as the state evolves.

 

Differentiating eq27b with respect to time using the product rule gives:

The time evolution of the state is governed by the time-dependent Schrödinger equation . Taking its Hermitian conjugate yields . Substituting these two equations into eq27c results in:
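In the standard notation for a state \( |\psi(t)\rangle \) and a time-independent operator \( \hat\Omega \) (assumed here), the steps labelled eq27b–eq27d presumably read:

\[ \langle\hat\Omega\rangle = \langle\psi|\hat\Omega|\psi\rangle \qquad \text{(cf. eq27b)} \]
\[ \frac{d\langle\hat\Omega\rangle}{dt} = \left(\frac{d\langle\psi|}{dt}\right)\hat\Omega\,|\psi\rangle + \langle\psi|\,\hat\Omega\left(\frac{d|\psi\rangle}{dt}\right) \qquad \text{(cf. eq27c)} \]

and, with \( i\hbar\,\dfrac{d|\psi\rangle}{dt} = \hat H|\psi\rangle \) and \( -i\hbar\,\dfrac{d\langle\psi|}{dt} = \langle\psi|\hat H \),

\[ \frac{d\langle\hat\Omega\rangle}{dt} = \frac{i}{\hbar}\,\big\langle[\hat H,\hat\Omega]\big\rangle \qquad \text{(cf. eq27d)}. \]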

 

Question

Explain the Hermitian conjugate forms of and .

Answer

Linear operators acting on the Hilbert space vectors can be represented by square matrices. The Hermitian conjugate (or conjugate transpose) of a product of two such matrices is given by (see property 13 of this article for proof). Therefore, , where . However, is a scalar operator acting on a scalar parameter. If , then

 

Comparing eq27d with the general uncertainty principle gives:

where .

Since , eq27e becomes the energy-time uncertainty relation. Here, corresponds to the time scale for the system’s evolution, i.e. the time required for the expectation value of to change by one standard deviation .
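Written out in the same assumed notation, the comparison presumably gives

\[ \Delta E\,\Delta\Omega \ge \tfrac{1}{2}\big|\langle[\hat H,\hat\Omega]\rangle\big| = \frac{\hbar}{2}\left|\frac{d\langle\hat\Omega\rangle}{dt}\right| \qquad \text{(cf. eq27e)}, \]

so that, with \( \Delta t = \Delta\Omega \big/ \left|d\langle\hat\Omega\rangle/dt\right| \),

\[ \Delta E\,\Delta t \ge \frac{\hbar}{2}. \]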

An important example of this relation occurs in the excited state of a molecule, where corresponds to the lifetime of the excited state, and is the uncertainty in the transition energy between the excited and relaxed states. In other words, the shorter the lifetime of an unstable state, the larger the uncertainty in its transition energy. A large means the emitted photon’s energy is not a single, sharp value, but a range of values, leading to a broadened line in the spectrum.
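As an illustrative numerical estimate (the lifetime value below is assumed for the example, not taken from the text): for an excited state with \( \Delta t \approx 10\ \text{ns} \),

\[ \Delta E \gtrsim \frac{\hbar}{2\,\Delta t} \approx \frac{1.055\times10^{-34}\ \text{J s}}{2\times10^{-8}\ \text{s}} \approx 5\times10^{-27}\ \text{J}, \]

which corresponds to a frequency spread of \( \Delta\nu = \Delta E/h \approx 8\ \text{MHz} \) in the emitted line.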

The broadening of spectral lines can also be caused by molecular interactions. For example, collisions between atoms or molecules lead to shortened excited-state lifetimes by inducing transitions via a non-radiative pathway. When two particles approach closely enough to interact, their potential energy varies according to the internuclear distance, producing a perturbation that couples their internal energy levels. If part of the internal energy of particle A, which is in an excited state, is transferred to particle B during the collision, particle A undergoes collisional de-excitation, and the excess energy is converted into additional kinetic energy of the colliding pair rather than being emitted as a photon. This process effectively reduces the lifetime of the excited state, leading to spectral broadening.

Finally, since , eq27a is sometimes written less precisely as:

or

 

Next article: spectral decomposition of an operator
Previous article: The uncertainty principle (derivation)
Content page of quantum mechanics
Content page of advanced chemistry
Main content page

Wigner D-matrix

The Wigner D-matrix , defined for each value of the total angular momentum , is a unitary matrix that represents all rotation symmetry operations corresponding to the irreducible representations of the SU(2) group.

The irreducible representations of SU(2), the special unitary group of degree 2, consist of unitary matrices with determinant 1. They describe angular momentum transformations for particles with both integer and half-integer values of . In the case where is an integer, the corresponding Wigner D-matrices also represent the irreducible representations of the SO(3) group, which is the group of all proper rotations in 3D space.

As shown in the previous article, the total rotation operator transforms a quantum state with total angular momentum projection along the molecular -axis into a linear combination of states with the projection along the lab -axis:

Since forms a complete orthogonal basis set for SO(3), the coefficients of , according to group theory, are the matrix elements of . Multiplying the above equation on the left by the bra gives the Wigner D-matrix elements:

Substituting eq115 into eq120 yields:

where is an eigenstate of by convention.

If , then (see this article for proof). So,  and eq121 becomes:

Since and is Hermitian, i.e. , we have . It follows that

where is the Wigner small-d matrix element.

The corresponding Wigner small-d matrix is a single-axis, single-angle rotation operator of the SO(3) group in the basis.
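For concreteness, in the commonly used z-y-z Euler-angle convention (assumed here, since sign conventions vary between texts), these results take the form

\[ D^{j}_{m'm}(\alpha,\beta,\gamma) = \langle j,m'|\,e^{-i\alpha\hat J_z}\,e^{-i\beta\hat J_y}\,e^{-i\gamma\hat J_z}\,|j,m\rangle = e^{-im'\alpha}\, d^{j}_{m'm}(\beta)\, e^{-im\gamma}, \]

and the smallest non-trivial case, \( j=\tfrac{1}{2} \), gives the small-d matrix

\[ d^{1/2}(\beta) = \begin{pmatrix} \cos\tfrac{\beta}{2} & -\sin\tfrac{\beta}{2} \\ \sin\tfrac{\beta}{2} & \cos\tfrac{\beta}{2} \end{pmatrix}. \]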

 

Question

Why is expressed as a matrix element without explicitly carrying out the operation ?

Answer

is an eigenstate of , but not of . This can be shown by combining the angular momentum raising and lowering operators ( and ) to give . Substituting eq144 and eq147 into yields:

with expressed in units.

Since mixes the states when acting on , the state is not an eigenstate of . Therefore, is conveniently expressed as a matrix element, rather than by explicitly carrying out the exponential operation, which would involve a linear combination of multiple states.
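A brief sketch of the algebra behind this answer, using the standard relations (assumed here rather than reproduced from the missing equations):

\[ \hat J_y = \frac{\hat J_+ - \hat J_-}{2i}, \qquad \hat J_\pm\,|j,m\rangle = \sqrt{j(j+1)-m(m\pm1)}\;|j,m\pm1\rangle \quad (\text{in } \hbar \text{ units}), \]

so \( \hat J_y|j,m\rangle \) is a combination of \( |j,m+1\rangle \) and \( |j,m-1\rangle \), confirming that \( |j,m\rangle \) is not an eigenstate of \( \hat J_y \).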

 

The functions , other than being matrix elements of Wigner D-matrices, are also rotational wavefunctions. To explain why, we refer to the great orthogonality theorem for finite groups, given by:

where

    • refers to the matrix entry in -th row and -th column of the -th matrix of the -th irreducible representation.
    • is the order of the group, and is also the normalisation factor for the sum.
    • is the dimension of the irreducible representation.

The theorem can be extended to infinite groups like SO(3), with the normalised sum over group elements replaced by a normalised integral over all rotation angles :

where

    • , the total volume of the SO(3) manifold (intrinsic rotation space), is the normalisation factor for the integral, i.e. .
    • (see this article for further explanation).
    • is a specific set of values.

Eq124 reveals that the functions are complex and orthonormal. They are eigenfunctions of and (in units), where:

and

Since there are infinitely many SO(3) irreducible representations, the functions are associated with all possible combinations of the three rotational quantum numbers and hence all possible eigenvalues. In other words, they form a complete orthonormal set of wavefunctions for symmetric rotors.
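In the standard notation (assumed here, including the convention that the two lower indices refer to the lab-fixed and molecule-fixed projections respectively), these statements presumably read

\[ \int_{0}^{2\pi}\!\!\int_{0}^{\pi}\!\!\int_{0}^{2\pi} D^{j'\,*}_{m'k'}(\alpha,\beta,\gamma)\, D^{j}_{mk}(\alpha,\beta,\gamma)\,\sin\beta\; d\alpha\, d\beta\, d\gamma = \frac{8\pi^{2}}{2j+1}\,\delta_{j'j}\,\delta_{m'm}\,\delta_{k'k}, \]

with the symmetric-rotor wavefunctions proportional to \( D^{j\,*}_{mk} \) satisfying (in \( \hbar \) units)

\[ \hat J^{2}\, D^{j\,*}_{mk} = j(j+1)\, D^{j\,*}_{mk}, \qquad \hat J_{z,\text{lab}}\, D^{j\,*}_{mk} = m\, D^{j\,*}_{mk}, \qquad \hat J_{z,\text{mol}}\, D^{j\,*}_{mk} = k\, D^{j\,*}_{mk}. \]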

 

Previous article: Rotation operator
Content page of rotational spectroscopy
Content page of advanced chemistry
Main content page