Completeness of a vector space

A complete vector space is one that has no “missing elements” (e.g. no missing coordinates).

In the previous article, we learned that the function d(\boldsymbol{\mathit{u}},\boldsymbol{\mathit{v}}) defines the distance between two elements of a vector space. Such a function is called a metric and it measures the ‘closeness’ of elements (or points) in a vector space. Since a vector space is a collection of elements, we can use a sequence, e.g. \left \{ x_n \right \}_{n=1}^{\infty}, to represent elements of a vector space X. If the members of the sequence get arbitrarily close to one another as m and n get larger, i.e.

\lim_{m,n\rightarrow \infty}d(x_m,x_n)=0

we call the sequence a Cauchy sequence.

Cauchy sequences are useful in determining the completeness of a vector space. A vector space V is complete if every Cauchy sequence in V converges to an element of V. For example, the sequence \left \{ x_n \right \}_{n=1}^{\infty}, where x_n=\sum_{k=1}^{n}\frac{(-1)^{k+1}}{k}, is one of many Cauchy sequences of rational numbers in the rational number space \mathbb{Q}. However, the sequence converges to ln2, which is not an element of \mathbb{Q}. Therefore, \mathbb{Q} is not complete, and has “missing elements” or “gaps”, as compared to the real number space \mathbb{R}, which is complete.
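As a quick numerical illustration (a minimal Python sketch; the numbers of terms are chosen arbitrarily), the partial sums x_n of the alternating harmonic series can be computed directly: the distance between members shrinks as m and n grow (the Cauchy property), while the terms approach ln2, which lies outside \mathbb{Q}.

import math

def x(n):
    # Partial sum x_n = sum_{k=1}^{n} (-1)^(k+1)/k; every partial sum is rational.
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

# The sequence is Cauchy: d(x_m, x_n) = |x_m - x_n| shrinks as m and n grow.
for m, n in [(10, 20), (100, 200), (1000, 2000)]:
    print(f"|x_{m} - x_{n}| = {abs(x(m) - x(n)):.6f}")

# Its limit, ln 2, is irrational and therefore not an element of Q.
print(f"x_100000 = {x(100000):.6f}, ln 2 = {math.log(2):.6f}")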

 

Question

Show that \sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k}=ln2.

Answer

If \left | x \right |<1, then (1-x)(1+x+x^{2}+\cdots)=1. So, \frac{1}{1-x}=1+x+x^{2}+\cdots or

\frac{1}{1-(-x)}=\frac{1}{1+x}=1-x+x^{2}-x^{3}+\cdots

Integrating both sides of the second equality from 0 to x gives

ln\left | 1+x \right |=x-\frac{x^{2}}{2}+\frac{x^{3}}{3}+\cdots=\sum_{k=1}^{\infty}(-1)^{k+1}\frac{x^{k}}{k}

Strictly, the expansion was derived for \left | x \right |<1, but the series also converges at the endpoint x=1 (by Abel’s theorem, the equality extends there), so substituting x=1 in the above equation yields ln2=\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k}.

 

A vector space with no “missing elements” is essential for scientists to formulate theories (e.g. kinematics) and to solve problems associated with those theories. Furthermore, the ability to compute limits in a complete vector space implies that we can apply calculus to solve problems defined on the space. For example, a complete inner product space called a Hilbert space, which will be discussed in the next article, is used to formulate the theories of quantum mechanics.

 

 

Next article: hilbert space
Previous article: Vector subspace and eigenspace
Content page of quantum mechanics
Content page of advanced chemistry
Main content page

Inner product space

An inner product space is a vector space with an inner product.

An inner product is an operation that assigns a scalar to a pair of vectors \langle\boldsymbol{\mathit{u}}\vert\boldsymbol{\mathit{v}}\rangle, or a scalar to a pair of functions \langle f\vert g\rangle. The way to assign the scalar may be through the matrix multiplication of the pair of vectors, for instance

\langle\boldsymbol{\mathit{u}}\vert\boldsymbol{\mathit{v}}\rangle=\begin{pmatrix} u_{1}^{*} &u_{2}^{*}& \cdots &u_{N}^{*} \end{pmatrix}\begin{pmatrix} v_1\\v_2 \\ \vdots \\ v_N \end{pmatrix}=\sum_{i=1}^{N}u_{i}^{*}v_i\; \; \; \; \; \; \; \; 3

or it may be through an integral of the pair of functions:

\langle f\vert g\rangle=\int_{-\infty}^{\infty}f(x)^{*}g(x)dx

You may notice that eq3 resembles a dot product. The dot product pertains to vectors in \mathbb{R}^{3}, where \boldsymbol{\mathit{A}}\cdot\boldsymbol{\mathit{B}}=\sum_{i=1}^{3}A_iB_i, which can be extended to N-dimensions, where \langle\boldsymbol{\mathit{A}}\vert\boldsymbol{\mathit{B}}\rangle=\sum_{i=1}^{N}A_iB_i, and to include complex and real functions, \langle f\vert g\rangle=\int_{-\infty}^{\infty}f(x)^{*}g(x)dx. Therefore, an inner product is a generalisation of the dot product.

An inner product space has the following properties:

    1. Conjugate symmetry: \langle f\vert g\rangle=\langle g\vert f\rangle^{*}
    2. Additivity: \langle f+g\vert h\rangle=\langle f\vert h\rangle+\langle g\vert h\rangle
    3. Positive semi-definiteness: \langle f\vert f\rangle\geq 0, with \langle f\vert f\rangle= 0 if and only if f=0
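These properties can be checked numerically. Below is a minimal Python sketch that uses eq3 as the inner product; the complex vectors are arbitrary values chosen purely for illustration.

import numpy as np

def inner(u, v):
    # Inner product of eq3: conjugate of u, multiplied element-wise with v, then summed.
    return np.vdot(u, v)  # np.vdot conjugates its first argument

u = np.array([1 + 2j, 3 - 1j])
v = np.array([2 - 1j, 0 + 1j])
w = np.array([4 + 0j, -2 + 3j])

# 1. Conjugate symmetry: <u|v> = <v|u>*
print(inner(u, v), np.conj(inner(v, u)))

# 2. Additivity: <u+v|w> = <u|w> + <v|w>
print(inner(u + v, w), inner(u, w) + inner(v, w))

# 3. Positive semi-definiteness: <u|u> is real and non-negative
print(inner(u, u))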

 

Question

i) Why is the inner product positive semi-definite?
ii) Show that orthogonal vectors are linearly independent.
iii) Prove that \langle\boldsymbol{\mathit{u}}\vert\boldsymbol{\mathit{v}}\rangle=\left | \boldsymbol{\mathit{u}} \right |\left | \boldsymbol{\mathit{v}} \right |cos\:\theta for real vectors \boldsymbol{\mathit{u}} and \boldsymbol{\mathit{v}}, where \theta is the angle between them.

Answer

i) For a general pairing of a vector with itself, \langle\boldsymbol{\mathit{a}}\vert\boldsymbol{\mathit{a}}\rangle could be positive or negative. The inner product is defined such that \langle f\vert f\rangle\geq 0, with \langle f\vert f\rangle= 0 if and only if f=0, which is useful in quantum mechanics, e.g. it allows \sqrt{\langle f\vert f\rangle} to be interpreted as a norm (length).

ii) Let the set of vectors \left \{ \boldsymbol{\mathit{v_k}} \right \} in eq1 be orthogonal. Taking the dot product of both sides of eq1 with \boldsymbol{\mathit{v_i}} (all cross terms vanish by orthogonality) gives c_i\boldsymbol{\mathit{v_i}}\cdot\boldsymbol{\mathit{v_i}} =c_i\left |\boldsymbol{\mathit{v_i}} \right |^{2}=0. Since the magnitudes of orthogonal vectors are non-zero, c_i=0 for every i. Hence, orthogonal vectors are linearly independent.

iii) Let’s consider two real vectors \boldsymbol{\mathit{u}} and \boldsymbol{\mathit{v}} as position vectors starting from the origin. The vector \boldsymbol{\mathit{u}}-\boldsymbol{\mathit{v}} then forms a triangle with them. According to the law of cosines, we have:

\left \| \boldsymbol{\mathit{u}}-\boldsymbol{\mathit{v}} \right \|^{2}=\left | \boldsymbol{\mathit{u}} \right |^{2}+\left | \boldsymbol{\mathit{v}} \right |^{2}-2\left | \boldsymbol{\mathit{u}} \right |\left | \boldsymbol{\mathit{v}} \right |cos\:\theta

Substituting \left \| \boldsymbol{\mathit{u}}-\boldsymbol{\mathit{v}} \right \|^{2}=\langle\boldsymbol{\mathit{u}}-\boldsymbol{\mathit{v}}\vert\boldsymbol{\mathit{u}}-\boldsymbol{\mathit{v}}\rangle=\langle\boldsymbol{\mathit{u}}\vert\boldsymbol{\mathit{u}}\rangle+\langle\boldsymbol{\mathit{v}}\vert\boldsymbol{\mathit{v}}\rangle-2\langle\boldsymbol{\mathit{u}}\vert\boldsymbol{\mathit{v}}\rangle=\left | \boldsymbol{\mathit{u}} \right |^{2}+\left | \boldsymbol{\mathit{v}} \right |^{2}-2\langle\boldsymbol{\mathit{u}}\vert\boldsymbol{\mathit{v}}\rangle into the above equation gives:

\langle\boldsymbol{\mathit{u}}\vert\boldsymbol{\mathit{v}}\rangle=\left | \boldsymbol{\mathit{u}} \right |\left | \boldsymbol{\mathit{v}} \right |cos\:\theta

which completes the proof.

 

Two functions (or two vectors) are orthogonal if \langle f\vert g\rangle= 0. Elements of a set of basis functions are orthonormal if \langle \phi_i\vert \phi_j\rangle=\delta_{ij} where

\delta_{ij}=\begin{cases} 1 & for\; \; i=j\\ 0 & for\; \; i\neq j \end{cases}

In other words, two functions (or two vectors) are orthonormal if they are orthogonal and normalised.

Finally, the norm (or length) of a vector \boldsymbol{\mathit{u}} is denoted by \left \|\boldsymbol{\mathit{u}}\right \| and is defined as \left \|\boldsymbol{\mathit{u}}\right \|=\sqrt{\langle\boldsymbol{\mathit{u}}\vert\boldsymbol{\mathit{u}}\rangle}=\sqrt{\left |\boldsymbol{\mathit{u}}\right |\left |\boldsymbol{\mathit{u}}\right |cos\: 0^{\circ} }=\left |\boldsymbol{\mathit{u}}\right |. With this association of inner product and the length of a vector, we can establish the relationship between inner product \langle\boldsymbol{\mathit{u}}\vert\boldsymbol{\mathit{v}}\rangle and the Euclidean distance d(\boldsymbol{\mathit{u}},\boldsymbol{\mathit{v}}) between 2 vectors \boldsymbol{\mathit{u}} and \boldsymbol{\mathit{v}}. Using the \mathbb{R}^{2} space as an example, where \boldsymbol{\mathit{u}}=\begin{pmatrix} 6\\9 \end{pmatrix} and \boldsymbol{\mathit{v}}=\begin{pmatrix} 3\\5 \end{pmatrix}, we have

d(\boldsymbol{\mathit{u}},\boldsymbol{\mathit{v}})=\left \|\boldsymbol{\mathit{u}}-\boldsymbol{\mathit{v}}\right \|=\sqrt{\langle\boldsymbol{\mathit{u}}- \boldsymbol{\mathit{v}}\vert\boldsymbol{\mathit{u}}- \boldsymbol{\mathit{v}}\rangle}=\sqrt{\langle\begin{pmatrix} 3\\4 \end{pmatrix}\vert \begin{pmatrix} 3\\4 \end{pmatrix}\rangle}

=\sqrt{\begin{pmatrix} 3 &4 \end{pmatrix}\begin{pmatrix} 3\\4 \end{pmatrix}}=5
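The same arithmetic can be reproduced in a few lines of Python (a sketch of the worked example above; numpy is used only for the vector algebra).

import numpy as np

u = np.array([6, 9])
v = np.array([3, 5])

diff = u - v                # (3, 4)
d = np.sqrt(diff @ diff)    # sqrt(<u - v | u - v>) = sqrt(3*3 + 4*4)
print(d)                    # 5.0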

 

 

Next article: vector subspace and eigenspace
Previous article: vector space of functions
Content page of quantum mechanics
Content page of advanced chemistry
Main content page

Vector space of functions

As mentioned in the previous article, a vector space \textit{V} is a set of objects that follows certain rules of addition and multiplication. This implies that a set of functions \textit{V}=\left \{ f(x),g(x),h(x)\cdots \right \} that follows the same rules, forms a vector space of functions. The properties of a vector space of functions are:

1) Commutative and associative addition for all functions of the closed set \textit{V}.

f(x)+g(x)=g(x)+f(x)

\left [f(x)+g(x)\right ]+h(x)=f(x)+\left [g(x)+h(x)\right ]

2) Associativity and distributivity of scalar multiplication for all functions of the closed set.

\gamma\left [ \delta f(x) \right ]=(\gamma\delta)f(x)

\gamma\left [ f(x)+g(x) \right ]=\gamma f(x)+\gamma g(x)

(\gamma +\delta)f(x)=\gamma f(x)+\delta f(x)

where \gamma and \delta are scalars.

3) Scalar multiplication identity.

1f(x)=f(x)

4) Additive inverse.

f(x)+[-f(x)]=0

5) Existence of null vector \boldsymbol{\mathit{0}}, such that

\boldsymbol{\mathit{0}}+f(x)=f(x)

where \boldsymbol{\mathit{0}} in this case is a zero function that returns zero for any input.

Similarly, a set of linearly independent functions y_0(x),y_1(x),\cdots,y_n(x) forms a set of basis functions. We have a complete set of basis functions if any well-behaved function in the domain a\leq x\leq b can be written as a linear combination of these basis functions, i.e.

f(x)=\sum_{n=0}^{\infty}c_ny_n(x)

In quantum chemistry, physical states of a system are expressed as functions called wavefunctions.
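As a rough illustration of such an expansion (a Python sketch; the target function e^{x}, the interval -1\leq x\leq 1 and the Legendre polynomial basis are arbitrary choices, not taken from this article), the coefficients c_n of a truncated basis set can be obtained by a least-squares fit:

import numpy as np
from numpy.polynomial import legendre

# Target function and domain a <= x <= b (both chosen arbitrarily for illustration).
x = np.linspace(-1, 1, 201)
f = np.exp(x)

# Coefficients c_n in f(x) ~ sum_n c_n P_n(x), with Legendre polynomials P_n
# as a truncated basis set.
coeffs = legendre.legfit(x, f, deg=6)
approx = legendre.legval(x, coeffs)

print("max |f - expansion| on [-1, 1]:", np.max(np.abs(f - approx)))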

 

 

Next article: inner product space
Previous article: vector space
Content page of quantum mechanics
Content page of advanced chemistry
Main content page

Vector space

A vector space is a set of objects  that follows specific rules of addition and multiplication.

These objects are called vectors and the rules are:

1) Commutative and associative addition for all elements of the closed set.

\boldsymbol{\mathit{v_{1}}}+\boldsymbol{\mathit{v_{2}}}=\boldsymbol{\mathit{v_{2}}}+\boldsymbol{\mathit{v_{1}}}

(\boldsymbol{\mathit{v_{1}}}+\boldsymbol{\mathit{v_{2}}})+\boldsymbol{\mathit{v_{3}}}=\boldsymbol{\mathit{v_{1}}}+(\boldsymbol{\mathit{v_{2}}}+\boldsymbol{\mathit{v_{3}}})

2) Associativity and distributivity of scalar multiplication for all elements of the closed set

c_1(c_2\boldsymbol{\mathit{v_{1}}})=(c_1c_2)\boldsymbol{\mathit{v_{1}}}

c_1(\boldsymbol{\mathit{v_{1}}}+\boldsymbol{\mathit{v_{2}}})=c_1\boldsymbol{\mathit{v_{1}}}+c_1\boldsymbol{\mathit{v_{2}}}

(c_1+c_2)\boldsymbol{\mathit{v_{1}}}=c_1\boldsymbol{\mathit{v_{1}}}+c_2\boldsymbol{\mathit{v_{1}}}

where c_1 and c_2 are scalars.

3) Scalar multiplication identity.

1\boldsymbol{\mathit{v_{1}}}=\boldsymbol{\mathit{v_{1}}}

4) Additive inverse.

\boldsymbol{\mathit{v_{1}}}+(-\boldsymbol{\mathit{v_{1}}})=\boldsymbol{\mathit{0}}

5) Existence of null vector \boldsymbol{\mathit{0}}, such that

\boldsymbol{\mathit{0}}+\boldsymbol{\mathit{v_{1}}}=\boldsymbol{\mathit{v_{1}}}

In a vector space, a set of vectors V=\left \{\boldsymbol{\mathit{v_{1}}},\boldsymbol{\mathit{v_{2}}},\cdots ,\boldsymbol{\mathit{v_{k}}}\right \} can be combined linearly to form another vector, e.g.:

\boldsymbol{\mathit{z}}=c_1\boldsymbol{\mathit{v_{1}}}+c_2\boldsymbol{\mathit{v_{2}}}+\cdots+c_k\boldsymbol{\mathit{v_{k}}}

The span of a set of vectors V is the set of all vectors that can be written as a linear combination of the vectors in V. For example, the span of the set of unit vectors \boldsymbol{\mathit{\hat{i}}} and \boldsymbol{\mathit{\hat{j}}} in the \mathbb{R}^{2} space is the set of all vectors (including the null vector) in the \mathbb{R}^{2} space. Alternatively, we say that \boldsymbol{\mathit{\hat{i}}} and \boldsymbol{\mathit{\hat{j}}} span \mathbb{R}^{2}.

If we vary c_1,c_2,\cdots c_k (but not the trivial case where all scalars are zero) such that \boldsymbol{\mathit{z}} is equal to \boldsymbol{\mathit{0}},

c_1\boldsymbol{\mathit{v_1}}+c_2\boldsymbol{\mathit{v_2}}+\cdots+c_k\boldsymbol{\mathit{v_k}}=\boldsymbol{\mathit{0}}\; \; \; \; \; \; \;\; 1

the set of vectors \boldsymbol{\mathit{v_1}},\boldsymbol{\mathit{v_2}},\cdots\boldsymbol{\mathit{v_k}} is said to be linearly dependent, because a vector with a non-zero coefficient can then be written as a linear combination of the others, e.g. for c_1\neq 0:

\boldsymbol{\mathit{v_1}}=\frac{-c_2}{c_1}\boldsymbol{\mathit{v_2}}+\cdots+\frac{-c_k}{c_1}\boldsymbol{\mathit{v_k}}\; \; \; \; \; \; \; \; 2

If the only way to satisfy eq1 is when c_k=0 for all k, the set of vectors is said to be linearly independent. In this case, we can no longer express any vector as a linear combination of the other vectors (as c_1=0 would render the RHS of eq2 undefined). An example of a set of linearly independent vectors is the set of unit vectors \boldsymbol{\mathit{\hat{i}}}, \boldsymbol{\mathit{\hat{j}}} in the \mathbb{R}^{2} space.
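In practice, linear independence can be tested by stacking the vectors as columns of a matrix and checking whether its rank equals the number of vectors. The Python sketch below uses this rank criterion (the vectors are arbitrary examples; the criterion itself is standard linear algebra rather than anything stated above).

import numpy as np

def linearly_independent(*vectors):
    # The vectors are linearly independent iff the matrix with them as columns has full column rank.
    M = np.column_stack(vectors)
    return np.linalg.matrix_rank(M) == len(vectors)

i_hat = np.array([1.0, 0.0])
j_hat = np.array([0.0, 1.0])

print(linearly_independent(i_hat, j_hat))                      # True
print(linearly_independent(i_hat, j_hat, i_hat + 2 * j_hat))   # False: the third vector is a combination of the first two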

 

Question

Can a set of linearly independent vectors include the zero vector?

Answer

No, because if, say, \boldsymbol{\mathit{v_1}}=\boldsymbol{\mathit{0}} in eq1, then c_1 can be any number. Since c_1 is not necessarily 0, eq1 has non-trivial solutions, which contradicts the definition of linear independence.

 

A set of \textit{N} linearly independent vectors in an \textit{N}-dimensional vector space \textit{V} forms a set of basis vectors, \boldsymbol{\mathit{e_1}},\boldsymbol{\mathit{e_2}},\cdots,\boldsymbol{\mathit{e_N}}. A complete basis set is formed by a set of basis vectors of \textit{V} if any vector \boldsymbol{\mathit{x}} in the span of \textit{V} can be written as a linear combination of those basis vectors, i.e.

\boldsymbol{\mathit{x}}=\sum_{i=1}^{N}x_i\boldsymbol{\mathit{e_i}}
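If the basis is known, the coefficients x_i can be obtained by solving a linear system. A short Python sketch follows, with an arbitrarily chosen (non-orthogonal) basis of \mathbb{R}^{2}; the basis vectors and the target vector are illustrative values only.

import numpy as np

# Two linearly independent basis vectors of R^2 (not orthonormal, chosen arbitrarily).
e1 = np.array([1.0, 1.0])
e2 = np.array([1.0, -2.0])

x = np.array([5.0, 2.0])

# Solve E @ c = x, where the columns of E are the basis vectors.
E = np.column_stack([e1, e2])
c = np.linalg.solve(E, x)

print(c)                       # expansion coefficients x_1, x_2
print(c[0] * e1 + c[1] * e2)   # reconstructs x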

 

Next article: vector space of functions
Previous article: dirac bra-ket notation
Content page of quantum mechanics
Content page of advanced chemistry
Main content page

Dirac bra-ket notation

The Dirac bra-ket notation is a concise way to represent objects in a complex vector space \mathbb{C}^{n}.

A ket, denoted by \vert \textbf{\textit{v}} \rangle, is a vector \textbf{\textit{v}}. Since a linear operator \hat{O} maps a vector to another vector, we have \hat{O}\vert \boldsymbol{\mathit{v_{1}}}\rangle=\vert \boldsymbol{\mathit{v_{2}}} \rangle.

A bra, denoted by \langle\boldsymbol{\mathit{u}}\vert , is often associated with a ket in the form of an inner product, denoted by \langle\boldsymbol{\mathit{u}}\vert\boldsymbol{\mathit{v}}\rangle. If a ket is expressed as a column vector, the corresponding bra is the conjugate transpose of its ket, i.e. \langle\boldsymbol{\mathit{u}}\vert=\vert\boldsymbol{\mathit{u}}\rangle^{\dagger}. The inner product can therefore be written as the following matrix multiplication:

\langle\boldsymbol{\mathit{u}}\vert\boldsymbol{\mathit{v}}\rangle=\begin{pmatrix} u_{1}^{*} &u_{2}^{*}& \cdots &u_{N}^{*} \end{pmatrix}\begin{pmatrix} v_1\\v_2 \\ \vdots \\ v_N \end{pmatrix}=\sum_{i=1}^{N}u_{i}^{*}v_i

or in the case of functions:

\langle\boldsymbol{\mathit{u}}\vert\boldsymbol{\mathit{v}}\rangle=\int_{-\infty}^{\infty}f_{u}^{*}(x)f_{v}(x)dx

Since a linear operator acting on a ket is another ket, we can express an inner product as:

\langle\phi_{i}\vert\phi_{k}\rangle=\langle\phi_{i}\vert\hat{O}\vert\phi_{j}\rangle=\int \phi_{i}^{*}\hat{O}\phi_{j}d\tau

where \hat{O}\vert\phi_{j}\rangle=\vert\phi_{k}\rangle.

If i=j, then \langle\phi_i\vert\hat{O}\vert\phi_i\rangle is the expectation value (or average value) of the operator \hat{O} for the normalised state \phi_i.

As mentioned above, bras and kets can be represented by matrices. Therefore, the multiplication of a bra and a ket that involves a linear operator is associative, e.g.:

\langle\boldsymbol{\mathit{u}}\vert(\hat{O}\vert\boldsymbol{\mathit{v}}\rangle)=(\langle\boldsymbol{\mathit{u}}\vert\hat{O})\vert\boldsymbol{\mathit{v}}\rangle\equiv\langle\boldsymbol{\mathit{u}}\vert\hat{O}\vert\boldsymbol{\mathit{v}}\rangle

(\vert\boldsymbol{\mathit{u}}\rangle\langle\boldsymbol{\mathit{v}}\vert)\vert\boldsymbol{\mathit{w}}\rangle=\vert\boldsymbol{\mathit{u}}\rangle(\langle\boldsymbol{\mathit{v}}\vert\boldsymbol{\mathit{w}}\rangle)

(\hat{O}\vert\boldsymbol{\mathit{u}}\rangle)\langle\boldsymbol{\mathit{v}}\vert=\hat{O}(\vert\boldsymbol{\mathit{u}}\rangle\langle\boldsymbol{\mathit{v}}\vert)\equiv\hat{O}\vert\boldsymbol{\mathit{u}}\rangle\langle\boldsymbol{\mathit{v}}\vert

You can verify the above examples using a 2×2 matrix with complex elements to represent the operator acting on a vector in \mathbb{C}^{2}. The three examples reveal that:

    1. \hat{O}\vert\boldsymbol{\mathit{v}}\rangle produces another ket.
    2. \langle\boldsymbol{\mathit{u}}\vert\hat{O} results in another bra. This is because (\langle\boldsymbol{\mathit{u}}\vert\hat{O})\vert\boldsymbol{\mathit{v}}\rangle=\langle\boldsymbol{\mathit{u}}\vert(\hat{O}\vert\boldsymbol{\mathit{v}}\rangle)=\langle\boldsymbol{\mathit{u}}\vert\boldsymbol{\mathit{v'}}\rangle=c, where c is a scalar; and if (\langle\boldsymbol{\mathit{u}}\vert\hat{O})\vert\boldsymbol{\mathit{v}}\rangle=c, the only possible identity of (\langle\boldsymbol{\mathit{u}}\vert\hat{O}) is a bra.
    3. \vert\boldsymbol{\mathit{u}}\rangle\langle\boldsymbol{\mathit{v}}\vert , which is called an outer product, is an operator because (\vert\boldsymbol{\mathit{u}}\rangle\langle\boldsymbol{\mathit{v}}\vert)\vert\boldsymbol{\mathit{w}}\rangle=\vert\boldsymbol{\mathit{u}}\rangle(\langle\boldsymbol{\mathit{v}}\vert\boldsymbol{\mathit{w}}\rangle)=c\vert\boldsymbol{\mathit{u}}\rangle, i.e. \vert\boldsymbol{\mathit{u}}\rangle\langle\boldsymbol{\mathit{v}}\vert maps the ket \vert\boldsymbol{\mathit{w}}\rangle to another ket c\vert\boldsymbol{\mathit{u}}\rangle. In other words, the operator \vert\boldsymbol{\mathit{u}}\rangle\langle\boldsymbol{\mathit{v}}\vert transforms the vector \vert\boldsymbol{\mathit{w}}\rangle in the direction of the vector \vert\boldsymbol{\mathit{u}}\rangle, i.e. \vert\boldsymbol{\mathit{u}}\rangle\langle\boldsymbol{\mathit{v}}\vert projects \vert\boldsymbol{\mathit{w}}\rangle onto \vert\boldsymbol{\mathit{u}}\rangle.
    4. The product of two linear operators is another linear operator: \hat{O}\hat{O'}=\hat{O}(\vert\boldsymbol{\mathit{u}}\rangle\langle\boldsymbol{\mathit{v}}\vert)=(\hat{O}\vert\boldsymbol{\mathit{u}}\rangle)\langle\boldsymbol{\mathit{v}}\vert=\vert\boldsymbol{\mathit{u'}}\rangle\langle\boldsymbol{\mathit{v}}\vert=\hat{O''}.
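Following the suggestion above, the Python sketch below uses an arbitrary complex 2×2 matrix and arbitrary vectors in \mathbb{C}^{2} (illustrative values only) to verify the associativity statements and the outer-product observation numerically.

import numpy as np

O = np.array([[1 + 1j, 2 - 1j],
              [0 + 3j, 4 + 0j]])       # operator as a 2x2 complex matrix (arbitrary)
u = np.array([1 - 2j, 3 + 1j])         # ket |u>
v = np.array([2 + 0j, -1 + 1j])        # ket |v>
w = np.array([0 + 1j, 5 - 2j])         # ket |w>

bra_u = u.conj()                       # <u| is the conjugate transpose of |u>

# <u|(O|v>) = (<u|O)|v>
print(bra_u @ (O @ v), (bra_u @ O) @ v)

# (|u><v|)|w> = |u>(<v|w>)
outer_uv = np.outer(u, v.conj())       # outer product |u><v|
print(outer_uv @ w, (v.conj() @ w) * u)

# (O|u>)<v| = O(|u><v|)
print(np.outer(O @ u, v.conj()))
print(O @ outer_uv)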

Next article: vector space
Previous article: stationary state
Content page of quantum mechanics
Content page of advanced chemistry
Main content page

Stationary state

A stationary state is described by a wavefunction that is associated with a probability density and an expectation value that are both independent of time.

For the time-independent Schrödinger equation, \hat{H}\psi =E\psi, the solutions are stationary states. This implies that stationary states of the time-independent \hat{H} can be represented by individual energy eigenfunctions, but not by linear combinations (with non-zero coefficients) of eigenfunctions belonging to different energies. For example, the wavefunction \psi\left ( x,t \right )=\phi\left ( x \right )e^{-iEt/\hbar} describes a stationary state because:

\left | \psi\left ( x,t \right ) \right |^{2}=\psi^{*}\psi=\phi^{*}e^{iEt/\hbar}\phi e^{-iEt/\hbar}=\left | \phi\left ( x \right ) \right |^{2}

and

\left \langle H \right \rangle=\int \psi^{*}\hat{H}\psi dx=\int \phi^{*}e^{iEt/\hbar}\hat{H}\phi e^{-iEt/\hbar}dx=E\int \phi^{*}\phi dx=E

whereas \psi\left ( x,t \right )=\sum_{n=1}^{2}c_n\phi_n\left ( x \right )e^{-iE_nt/\hbar}=c_1\phi_1\left ( x \right )e^{-iE_1t/\hbar}+c_2\phi_2\left ( x \right )e^{-iE_2t/\hbar} does not describe a stationary state because:

\left | \psi\left ( x,t \right ) \right |^{2}=c_{1}^{*}c_1\phi_{1}^{*}(x)\phi_{1}(x)+c_{2}^{*}c_2\phi_{2}^{*}(x)\phi_{2}(x)+c_{1}^{*}c_2\phi_{1}^{*}(x)\phi_{2}(x)e^{i(E_1-E_2)t/\hbar}+c_{2}^{*}c_1\phi_{2}^{*}(x)\phi_{1}(x)e^{i(E_2-E_1)t/\hbar}

where the last two terms on the RHS are time-dependent.

If \phi_1 and \phi_2 describe a degenerate state, where E_1=E_2=E, then \psi\left ( x,t \right ) describes a stationary state. Since an observable of a stationary state, e.g. H, is independent of time, every measurement of H of systems in such a state results in the same value E.
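The two cases can be checked numerically. The sketch below uses particle-in-a-box eigenfunctions as a convenient example, with \hbar, the particle mass and the box length all set to 1; these choices are illustrative and not part of the discussion above.

import numpy as np

L = 1.0
hbar = 1.0
x = np.linspace(0, L, 200)

def phi(n):
    # Normalised particle-in-a-box eigenfunction.
    return np.sqrt(2 / L) * np.sin(n * np.pi * x / L)

def E(n):
    # Energy levels with mass = 1.
    return n ** 2 * np.pi ** 2 * hbar ** 2 / (2 * L ** 2)

def density(t, coeffs):
    # |psi(x, t)|^2 for psi = sum_n c_n phi_n(x) exp(-i E_n t / hbar).
    psi = sum(c * phi(n) * np.exp(-1j * E(n) * t / hbar) for n, c in coeffs.items())
    return np.abs(psi) ** 2

single = {1: 1.0}                                   # one eigenfunction: stationary
superpos = {1: 1 / np.sqrt(2), 2: 1 / np.sqrt(2)}   # two different energies: not stationary

print(np.allclose(density(0.0, single), density(0.7, single)))      # True
print(np.allclose(density(0.0, superpos), density(0.7, superpos)))  # False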

 

Next article: dirac bra-ket notation
Previous article: ground state
Content page of quantum mechanics
Content page of advanced chemistry
Main content page

Ground state (quantum mechanics)

The ground state of a system is the lowest energy state of the system.

The state of a system depends on the way electrons are distributed in a chemical species. A distribution that results in the system having the lowest energy is the ground state of the system. Every other configuration is associated with a higher energy state known as an excited state. For example, the ground state of carbon is given by the term symbol ^{3}P, while the term symbols ^{1}D and ^{1}S denote excited states of carbon.

 

Next article: stationary state
Content page of quantum mechanics
Content page of advanced chemistry
Main content page

1st order consecutive reaction

A 1st order consecutive reaction of the type A\rightarrow B\rightarrow C is composed of the reactions:

A\rightarrow B\; \; \; \; \; \; \; v_1=k_1[A]

B\rightarrow C\; \; \; \; \; \; \; v_2=k_2[B]

The rate laws are:

\frac{d[A]}{dt}=-k_1[A]\; \; \; \; \; \; \; \; 20

\frac{d[B]}{dt}=k_1[A]-k_2[B]

\frac{d[C]}{dt}=k_2[B]

To understand how a 1st order consecutive reaction proceeds over time, we need to develop expressions for [A], [B] and [C]. The expression for [A] is the solution of eq20, i.e. [A]=[A_0]e^{-k_1t}. Substituting this into the 2nd rate law above and rearranging gives:

\frac{d[B]}{dt}+k_2[B]=k_1[A_0]e^{-k_1t}\; \; \; \; \; \; \; \; 21

Eq21 is a linear first order differential equation of the form y’ + P(t)y = f(t). Multiplying eq21 with the integrating factor e^{k_2t} , we have

e^{k_2t}\frac{d[B]}{dt}+k_2e^{k_2t}[B]=k_1e^{k_2t}[A_0]e^{-k_1t}\; \; \; \; \; \; \; \; 22

The LHS of eq22 is the derivative of the product of e^{k_2t} and [B], i.e. \frac{d\left ( e^{k_2t}[B] \right )}{dt}. So,

\frac{d\left ( e^{k_2t}[B] \right )}{dt}=k_1e^{k_2t}[A_0]e^{-k_1t}

Integrating both sides with respect to time, noting that [B] = 0 at t = 0, and rearranging, we have

[B]=\frac{k_1}{k_2-k_1}\left ( e^{-k_1t}-e^{-k_2t} \right )[A_0]\; \; \; \; \; \; \; \; 23

As t → ∞, [B] → 0. Note that eq23 assumes k_1\neq k_2.

At all times, [A] + [B] + [C] = [A0], so from eq23,

[A_0]-[A]-[C]=\frac{k_1}{k_2-k_1}\left ( e^{-k_1t}-e^{-k_2t} \right )[A_0]

Substituting [A]=[A_0]e^{-k_1t} in the above equation and rearranging yields:

[C]=\left( 1+\frac{k_1e^{-k_2t}-k_2e^{-k_1t}}{k_2-k_1} \right )[A_0]

As t → ∞, [C] = [A0].
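The time evolution described by eq23 and the expression for [C] can be spot-checked numerically. The Python sketch below uses hypothetical values k_1 = 1.0 s^{-1}, k_2 = 0.3 s^{-1} and [A_0] = 1 (chosen only for illustration; eq23 requires k_1\neq k_2).

import numpy as np

k1, k2, A0 = 1.0, 0.3, 1.0         # hypothetical rate constants and initial concentration
t = np.linspace(0, 20, 6)

A = A0 * np.exp(-k1 * t)
B = (k1 / (k2 - k1)) * (np.exp(-k1 * t) - np.exp(-k2 * t)) * A0                  # eq23
C = (1 + (k1 * np.exp(-k2 * t) - k2 * np.exp(-k1 * t)) / (k2 - k1)) * A0

# Mass balance [A] + [B] + [C] = [A0] holds at every time point,
# and [B] -> 0, [C] -> [A0] as t grows.
print(np.allclose(A + B + C, A0))
print(B[-1], C[-1])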

 

Next article: 2nd order reaction of the type (A+B→P)
Previous article: 1st order reversible reaction
Content page of advanced chemical kinetics
Content page of Advanced chemistry
Main content page

1st order reversible reaction

A 1st order reversible reaction of the type A\rightleftharpoons B is composed of the reactions:

A\rightarrow B\; \; \; \; \; \; \; v_1=k_1[A]

B\rightarrow A\; \; \; \; \; \; \; v_2=k_2[B]

The rate law is:

\frac{d[A]}{dt}=-k_1[A]+k_2[B]

Substituting [B] = [A0] – [A], where [A0] is the initial concentration of A, in the above equation and rearranging yields:

\frac{d[A]}{dt}=-(k_1+k_2)[A]+k_2[A_0]\; \; \; \; \; \; \; \; 16

Let

x=(k_1+k_2)[A]-k_2[A_0]\; \; \; \; \; \; \; \; 17

Eq16 becomes \frac{d[A]}{dt}=-x. Differentiating eq17 with respect to [A] gives \frac{dx}{d[A]}=k_1+k_2, i.e. d[A]=\frac{dx}{k_1+k_2}. Dividing by dt, we have \frac{d[A]}{dt}=\frac{1}{k_1+k_2}\frac{dx}{dt}, which is equivalent to

-x=\frac{1}{k_1+k_2}\frac{dx}{dt}\; \; \Rightarrow \; \; -(k_1+k_2)dt=\frac{dx}{x}

Let x = x0 when t =0 and integrate the above expression. We have

-(k_1+k_2)t=lnx-lnx_0\; \; \; \; \; \; \; \; 18

Substituting eq17 in eq18, noting that the 2nd term on RHS of eq18 refers to concentrations at t = 0, where [A] = [A0], gives

-(k_1+k_2)t=ln\frac{(k_1+k_2)[A]-k_2[A_0]}{(k_1+k_2)[A_0]-k_2[A_0]}

which rearranges to

[A]=\frac{k_2+k_1e^{-(k_1+k_2)t}}{k_1+k_2}[A_0]\; \; \; \; \; \; \; \; 19

When t = 0, eq19 becomes [A] = [A0]. As t → ∞,

[A_\infty ]=[A_{eqm}]=\frac{k_2}{k_1+k_2}[A_0]

Since [B_{eqm}]=[A_0]-[A_\infty ] ,

[B_{eqm}]=[A_0]-\frac{k_2}{k_1+k_2}[A_0]=[A_0]\frac{k_1}{k_1+k_2}

Therefore, the equilibrium constant for the reaction is:

K=\frac{[B_{eqm}]}{[A_{eqm}]}=\frac{k_1}{k_2}

This is the link between chemical kinetics and thermodynamics.
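Eq19 and the result K = k_1/k_2 can be verified with a few lines of Python (a sketch with hypothetical rate constants k_1 = 2.0 s^{-1}, k_2 = 0.5 s^{-1} and [A_0] = 1, chosen only for illustration).

import numpy as np

k1, k2, A0 = 2.0, 0.5, 1.0               # hypothetical values
t = np.linspace(0, 30, 7)

A = (k2 + k1 * np.exp(-(k1 + k2) * t)) / (k1 + k2) * A0    # eq19
B = A0 - A

print(A[0])                    # [A] = [A0] at t = 0
print(B[-1] / A[-1], k1 / k2)  # at long times [B]/[A] approaches K = k1/k2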

 

Next article: 1st order consecutive reaction
Previous article: Transition state theory
Content page of advanced chemical kinetics
Content page of Advanced chemistry
Main content page

3rd order reaction of the type (A + B + C → P)

The rate law for the 3rd order reaction of the type A+B+C\rightarrow P is:

\frac{d[A]}{dt}=-k[A][B][C]

Using the same logic described in a previous article, we can rewrite the rate law as:

\frac{dx}{dt}=k(a-x)(b-x)(c-x)

where a = [A0], b = [B0] and c = [C0].

Integrating the above equation throughout gives

\int_{0}^{x}\frac{dx}{(a-x)(b-x)(c-x)}=k\int_{0}^{t}dt

Substituting the partial fraction expression \small \frac{1}{(a-x)(b-x)(c-x)}=\frac{1}{(a-x)(b-a)(c-a)}+\frac{1}{(b-x)(a-b)(c-b)}+\frac{1}{(c-x)(a-c)(b-c)}  in the above integral and working out some algebra yields

kt=\frac{1}{(b-a)(c-a)}ln\frac{a}{a-x}+\frac{1}{(a-b)(c-b)}ln\frac{b}{b-x}+\frac{1}{(a-c)(b-c)}ln\frac{c}{c-x}
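The integrated expression (which assumes a, b and c are all distinct) can be checked against a direct numerical quadrature of the integral above. The Python sketch below uses hypothetical values a = 1.0, b = 0.8, c = 0.5 and x = 0.3, chosen only for illustration.

import numpy as np

a, b, c = 1.0, 0.8, 0.5      # hypothetical initial concentrations
x = 0.3                      # an extent of reaction with x < c

# Left side: trapezoidal quadrature of the integral, which should equal kt.
s = np.linspace(0, x, 100001)
f = 1.0 / ((a - s) * (b - s) * (c - s))
lhs = np.sum((f[1:] + f[:-1]) / 2 * np.diff(s))

# Right side: the integrated rate law above.
kt = (np.log(a / (a - x)) / ((b - a) * (c - a))
      + np.log(b / (b - x)) / ((a - b) * (c - b))
      + np.log(c / (c - x)) / ((a - c) * (b - c)))

print(lhs, kt)    # the two values agree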

Next article: Maxwell-Boltzmann distribution
Previous article: 3rd order reaction of the type (A+2B→P)
Content page of advanced chemical kinetics
Content page of Advanced chemistry
Main content page