Vector space of functions

As mentioned in the previous article, a vector space \textit{V} is a set of objects that follows certain rules of addition and multiplication. This implies that a set of functions \textit{V}=\left \{ f(x),g(x),h(x),\cdots \right \} that follows the same rules forms a vector space of functions. The properties of a vector space of functions are:

1) Commutative and associative addition for all functions of the closed set \textit{V}.

f(x)+g(x)=g(x)+f(x)

\left [f(x)+g(x)\right ]+h(x)=f(x)+\left [g(x)+h(x)\right ]

2) Associativity and distributivity of scalar multiplication for all functions of the closed set.

\gamma\left [ \delta f(x) \right ]=(\gamma\delta)f(x)

\gamma\left [ f(x)+g(x) \right ]=\gamma f(x)+\gamma g(x)

(\gamma +\delta)f(x)=\gamma f(x)+\delta f(x)

where \gamma and \delta are scalars.

3) Scalar multiplication identity.

1f(x)=f(x)

4) Additive inverse.

f(x)+[-f(x)]=0

5) Existence of null vector \boldsymbol{\mathit{0}}, such that

\boldsymbol{\mathit{0}}+f(x)=f(x)

where \boldsymbol{\mathit{0}} in this case is the zero function, which returns zero for any input.

Similarly, a set of linearly independent functions y_0(x),y_1(x),y_2(x),\cdots forms a set of basis functions. We have a complete set of basis functions if any well-behaved function in the domain a\leq x\leq b can be written as a linear combination of these basis functions, i.e.

f(x)=\sum_{n=0}^{\infty}c_ny_n(x)
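
As an illustration (not part of the original derivation), the short Python sketch below expands an arbitrary well-behaved function, f(x) = x(π − x) on 0 ≤ x ≤ π, in the sine basis y_n(x) = sin(nx), which is complete for functions that vanish at the endpoints; the coefficients c_n are obtained numerically from the orthogonality of the basis.

```python
import numpy as np

# Sine basis y_n(x) = sin(nx) on 0 <= x <= pi (complete for functions
# vanishing at the endpoints); f(x) = x(pi - x) is an arbitrary test function.
x = np.linspace(0.0, np.pi, 2001)
f = x * (np.pi - x)

# c_n = (2/pi) * integral of f(x) sin(nx) dx, by orthogonality of the sine basis
N = 20
approx = np.zeros_like(x)
for n in range(1, N + 1):
    y_n = np.sin(n * x)
    c_n = (2.0 / np.pi) * np.trapz(f * y_n, x)
    approx += c_n * y_n

# Truncating the sum at N = 20 terms already reproduces f(x) closely
print(np.max(np.abs(f - approx)))
```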

In quantum chemistry, physical states of a system are expressed as functions called wavefunctions.

 

 

Next article: inner product space
Previous article: vector space
Content page of quantum mechanics
Content page of advanced chemistry
Main content page

Vector space

A vector space is a set of objects that follows specific rules of addition and multiplication.

These objects are called vectors and the rules are:

1) Commutative and associative addition for all elements of the closed set.

\boldsymbol{\mathit{v_{1}}}+\boldsymbol{\mathit{v_{2}}}=\boldsymbol{\mathit{v_{2}}}+\boldsymbol{\mathit{v_{1}}}

(\boldsymbol{\mathit{v_{1}}}+\boldsymbol{\mathit{v_{2}}})+\boldsymbol{\mathit{v_{3}}}=\boldsymbol{\mathit{v_{1}}}+(\boldsymbol{\mathit{v_{2}}}+\boldsymbol{\mathit{v_{3}}})

2) Associativity and distributivity of scalar multiplication for all elements of the closed set

c_1(c_2\boldsymbol{\mathit{v_{1}}})=(c_1c_2)\boldsymbol{\mathit{v_{1}}}

c_1(\boldsymbol{\mathit{v_{1}}}+\boldsymbol{\mathit{v_{2}}})=c_1\boldsymbol{\mathit{v_{1}}}+c_1\boldsymbol{\mathit{v_{2}}}

(c_1+c_2)\boldsymbol{\mathit{v_{1}}}=c_1\boldsymbol{\mathit{v_{1}}}+c_2\boldsymbol{\mathit{v_{1}}}

where c_1 and c_2 are scalars.

3) Scalar multiplication identity.

1\boldsymbol{\mathit{v_{1}}}=\boldsymbol{\mathit{v_{1}}}

4) Additive inverse.

\boldsymbol{\mathit{v_{1}}}+(-\boldsymbol{\mathit{v_{1}}})=0

5) Existence of null vector \boldsymbol{\mathit{0}}, such that

\boldsymbol{\mathit{0}}+\boldsymbol{\mathit{v_{1}}}=\boldsymbol{\mathit{v_{1}}}

Given a vector space V=\left \{\boldsymbol{\mathit{v_{1}}},\boldsymbol{\mathit{v_{2}}},\cdots ,\boldsymbol{\mathit{v_{k}}}\right \}, a vector can be expressed as a linear combination of the vectors in the set, e.g.:

\boldsymbol{\mathit{z}}=c_1\boldsymbol{\mathit{v_{1}}}+c_2\boldsymbol{\mathit{v_{2}}}+\cdots+c_k\boldsymbol{\mathit{v_{k}}}

The span of a set of vectors V is the set of all vectors that can be written as a linear combination of vectors in the set. For example, the span of the set of unit vectors \boldsymbol{\mathit{\hat{i}}} and \boldsymbol{\mathit{\hat{j}}} in the \mathbb{R}^{2} space is the set of all vectors (including the null vector) in the \mathbb{R}^{2} space. Alternatively, we say that \boldsymbol{\mathit{\hat{i}}} and \boldsymbol{\mathit{\hat{j}}} span \mathbb{R}^{2}.

If we can choose scalars c_1,c_2,\cdots,c_k that are not all zero (i.e. excluding the trivial case) such that \boldsymbol{\mathit{z}} is equal to \boldsymbol{\mathit{0}},

c_1\boldsymbol{\mathit{v_1}}+c_2\boldsymbol{\mathit{v_2}}+\cdots+c_k\boldsymbol{\mathit{v_k}}=\boldsymbol{\mathit{0}}\; \; \; \; \; \; \;\; 1

the set of vectors \boldsymbol{\mathit{v_1}},\boldsymbol{\mathit{v_2}},\cdots,\boldsymbol{\mathit{v_k}} is said to be linearly dependent, because any vector with a non-zero coefficient (say \boldsymbol{\mathit{v_1}}, with c_1\neq 0) can be written as a linear combination of the others:

\boldsymbol{\mathit{v_1}}=\frac{-c_2}{c_1}\boldsymbol{\mathit{v_2}}+\cdots+\frac{-c_k}{c_1}\boldsymbol{\mathit{v_k}}\; \; \; \; \; \; \; \; 2

If the only way to satisfy eq1 is when c_k=0 for all k, the set of vectors is said to be linearly independent. In this case, we can no longer express any vector as a linear combination of the other vectors (as c_1=0, resulting in RHS of eq2 being undefined). An example of a set of linearly independent vectors is the set of unit vectors \boldsymbol{\mathit{\hat{i}}}, \boldsymbol{\mathit{\hat{j}}} in the \mathbb{R}^{2} space.
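
As a numerical aside (an added sketch, not part of the original article), linear independence of a finite set of vectors can be tested with NumPy: the vectors are independent exactly when the only solution of eq1 is the trivial one, i.e. when the matrix whose columns are the vectors has full column rank.

```python
import numpy as np

def linearly_independent(vectors):
    """Return True if the given vectors (all of equal length) are linearly independent."""
    A = np.column_stack(vectors)                 # columns are v_1, ..., v_k
    return np.linalg.matrix_rank(A) == len(vectors)

print(linearly_independent([[1, 0], [0, 1]]))    # True: i-hat and j-hat are independent
print(linearly_independent([[1, 0], [0, 0]]))    # False: a set containing the zero vector (cf. the question below)
```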

 

Question

Can a set of linearly independent vectors include the zero vector?

Answer

No, because if \boldsymbol{\mathit{v_1}}=\boldsymbol{\mathit{0}} and c_2=c_3=\cdots=c_k=0 in eq1, then c_1 can be any number. Since c_1 is not necessarily 0, it contradicts the definition of linear independence.

 

A set of \textit{N} linearly independent vectors in an \textit{N}-dimensional vector space \textit{V} forms a set of basis vectors, \boldsymbol{\mathit{e_1}},\boldsymbol{\mathit{e_2}},\cdots,\boldsymbol{\mathit{e_N}}. A complete basis set is formed by a set of basis vectors of \textit{V} if any vector \boldsymbol{\mathit{x}} in the span of \textit{V} can be written as a linear combination of those basis vectors, i.e.

\boldsymbol{\mathit{x}}=\sum_{i=1}^{N}x_i\boldsymbol{\mathit{e_i}}
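
A minimal numerical sketch (added here for illustration) shows this expansion for the standard orthonormal basis of \mathbb{R}^{3}, where each coefficient x_i is simply the projection of \boldsymbol{\mathit{x}} onto \boldsymbol{\mathit{e_i}}.

```python
import numpy as np

e = np.eye(3)                     # standard orthonormal basis e_1, e_2, e_3 of R^3
x = np.array([2.0, -1.0, 4.0])    # an arbitrary vector

coeffs = [np.dot(e_i, x) for e_i in e]                    # x_i = e_i . x for an orthonormal basis
reconstructed = sum(c * e_i for c, e_i in zip(coeffs, e))
print(np.allclose(reconstructed, x))                      # True: x = sum_i x_i e_i
```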

 

Next article: vector space of functions
Previous article: dirac bra-ket notation
Content page of quantum mechanics
Content page of advanced chemistry
Main content page

Dirac bra-ket notation

The Dirac bra-ket notation is a concise way to represent objects in a complex vector space \mathbb{C}^{n}.

A ket, denoted by \vert \textbf{\textit{v}} \rangle, is a vector \textbf{\textit{v}}. Since a linear operator \hat{O} maps a vector to another vector, we have \hat{O}\vert \boldsymbol{\mathit{v_{1}}}\rangle=\vert \boldsymbol{\mathit{v_{2}}} \rangle.

A bra, denoted by \langle\boldsymbol{\mathit{u}}\vert , is often associated with a ket in the form of an inner product, denoted by \langle\boldsymbol{\mathit{u}}\vert\boldsymbol{\mathit{v}}\rangle. If a ket is expressed as a column vector, the corresponding bra is the conjugate transpose of its ket, i.e. \langle\boldsymbol{\mathit{u}}\vert=\vert\boldsymbol{\mathit{u}}\rangle^{\dagger}. The inner product can therefore be written as the following matrix multiplication:

\langle\boldsymbol{\mathit{u}}\vert\boldsymbol{\mathit{v}}\rangle=\begin{pmatrix} u_1^{*} & u_2^{*} & \cdots & u_n^{*} \end{pmatrix}\begin{pmatrix} v_1\\ v_2\\ \vdots \\ v_n \end{pmatrix}=\sum_{i=1}^{n}u_i^{*}v_i

or in the case of functions:

\langle\boldsymbol{\mathit{u}}\vert\boldsymbol{\mathit{v}}\rangle=\int_{-\infty}^{\infty}f_{u}^{*}(x)f_{v}(x)dx
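
Both forms of the inner product can be evaluated numerically; the sketch below (an added illustration using arbitrary example vectors and Gaussian example functions) computes the vector case as a conjugate transpose times a column vector and the function case as a discretised integral.

```python
import numpy as np

# Vector case: <u|v> is the conjugate transpose of u multiplied by v
u = np.array([1 + 2j, 3 - 1j])
v = np.array([2 - 1j, 1j])
print(np.vdot(u, v))                         # np.vdot conjugates its first argument

# Function case: <u|v> = integral of f_u*(x) f_v(x) dx (Gaussians as example functions)
x = np.linspace(-10, 10, 4001)
f_u = np.exp(-x**2) * np.exp(1j * x)
f_v = np.exp(-x**2)
print(np.trapz(np.conj(f_u) * f_v, x))
```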

Since a linear operator acting on a ket is another ket, we can express an inner product as:

\langle\phi_{i}\vert\phi_{k}\rangle=\langle\phi_{i}\vert\hat{O}\vert\phi_{j}\rangle=\int \phi_{i}^{*}\hat{O}\phi_{j}d\tau

where \hat{O}\vert\phi_{j}\rangle=\vert\phi_{k}\rangle.

If i=j, then \langle\phi\vert\hat{O}\vert\phi\rangle is the expectation value (or average value) of the operator \hat{O}.

As mentioned above, bras and kets can be represented by matrices. Therefore, the multiplication of a bra and a ket that involves a linear operator is associative, e.g.:

\langle\boldsymbol{\mathit{u}}\vert(\hat{O}\vert\boldsymbol{\mathit{v}}\rangle)=(\langle\boldsymbol{\mathit{u}}\vert\hat{O})\vert\boldsymbol{\mathit{v}}\rangle\equiv\langle\boldsymbol{\mathit{u}}\vert\hat{O}\vert\boldsymbol{\mathit{v}}\rangle

(\vert\boldsymbol{\mathit{u}}\rangle\langle\boldsymbol{\mathit{v}}\vert)\vert\boldsymbol{\mathit{w}}\rangle=\vert\boldsymbol{\mathit{u}}\rangle(\langle\boldsymbol{\mathit{v}}\vert\boldsymbol{\mathit{w}}\rangle)

(\hat{O}\vert\boldsymbol{\mathit{u}}\rangle)\langle\boldsymbol{\mathit{v}}\vert=\hat{O}(\vert\boldsymbol{\mathit{u}}\rangle\langle\boldsymbol{\mathit{v}}\vert)\equiv\hat{O}\vert\boldsymbol{\mathit{u}}\rangle\langle\boldsymbol{\mathit{v}}\vert

You can verify the above examples using a 2×2 matrix with complex elements to represent the operator acting on a vector in \mathbb{C}^{2}. The three examples reveal that:

    1. \hat{O}\vert\boldsymbol{\mathit{v}}\rangle produces another ket.
    2. \langle\boldsymbol{\mathit{u}}\vert\hat{O} results in another bra. This is because (\langle\boldsymbol{\mathit{u}}\vert\hat{O})\vert\boldsymbol{\mathit{v}}\rangle=\langle\boldsymbol{\mathit{u}}\vert(\hat{O}\vert\boldsymbol{\mathit{v}}\rangle)=\langle\boldsymbol{\mathit{u}}\vert\boldsymbol{\mathit{v'}}\rangle=c, where c is a scalar; and if (\langle\boldsymbol{\mathit{u}}\vert\hat{O})\vert\boldsymbol{\mathit{v}}\rangle=c, the only possible identity of (\langle\boldsymbol{\mathit{u}}\vert\hat{O}) is a bra.
    3. \vert\boldsymbol{\mathit{u}}\rangle\langle\boldsymbol{\mathit{v}}\vert , which is called an outer product, is an operator because (\vert\boldsymbol{\mathit{u}}\rangle\langle\boldsymbol{\mathit{v}}\vert)\vert\boldsymbol{\mathit{w}}\rangle=\vert\boldsymbol{\mathit{u}}\rangle(\langle\boldsymbol{\mathit{v}}\vert\boldsymbol{\mathit{w}}\rangle)=c\vert\boldsymbol{\mathit{u}}\rangle, i.e. \vert\boldsymbol{\mathit{u}}\rangle\langle\boldsymbol{\mathit{v}}\vert maps the ket \vert\boldsymbol{\mathit{w}}\rangle to another ket c\vert\boldsymbol{\mathit{u}}\rangle. In other words, the operator \vert\boldsymbol{\mathit{u}}\rangle\langle\boldsymbol{\mathit{v}}\vert transforms the vector \vert\boldsymbol{\mathit{w}}\rangle in the direction of the vector \vert\boldsymbol{\mathit{u}}\rangle, i.e. \vert\boldsymbol{\mathit{u}}\rangle\langle\boldsymbol{\mathit{v}}\vert projects \vert\boldsymbol{\mathit{w}}\rangle onto \vert\boldsymbol{\mathit{u}}\rangle.
    4. The product of two linear operators is another linear operator: \hat{O}\hat{O'}=\hat{O}(\vert\boldsymbol{\mathit{u}}\rangle\langle\boldsymbol{\mathit{v}}\vert)=(\hat{O}\vert\boldsymbol{\mathit{u}}\rangle)\langle\boldsymbol{\mathit{v}}\vert=\vert\boldsymbol{\mathit{u'}}\rangle\langle\boldsymbol{\mathit{v}}\vert=\hat{O''}.
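
Following the suggestion above, here is a minimal NumPy check (an added sketch, not part of the original text) of the three associativity examples, with the operator represented by an arbitrary 2×2 complex matrix and each bra taken as the conjugate transpose of the corresponding ket.

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_ket():
    """An arbitrary ket in C^2, represented as a 2x1 column vector."""
    return rng.normal(size=(2, 1)) + 1j * rng.normal(size=(2, 1))

u, v, w = rand_ket(), rand_ket(), rand_ket()
O = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))   # an arbitrary linear operator
bra_u, bra_v = u.conj().T, v.conj().T                        # bras are conjugate transposes of kets

print(np.allclose(bra_u @ (O @ v), (bra_u @ O) @ v))   # <u|(O|v>) = (<u|O)|v>
print(np.allclose((u @ bra_v) @ w, u @ (bra_v @ w)))   # (|u><v|)|w> = |u>(<v|w>)
print(np.allclose((O @ u) @ bra_v, O @ (u @ bra_v)))   # (O|u>)<v| = O(|u><v|)
```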

Next article: vector space
Previous article: stationary state
Content page of quantum mechanics
Content page of advanced chemistry
Main content page

Stationary state

A stationary state is described by a wavefunction that is associated with a probability density and an expectation value that are both independent of time.

The solutions of the time-independent Schrödinger equation, \hat{H}\psi =E\psi, are stationary states. This implies that a stationary state of the time-independent \hat{H} can be represented by a single basis wavefunction, but not by a linear combination of basis wavefunctions of different energies with non-zero coefficients. For example, the wavefunction \psi\left ( x,t \right )=\phi\left ( x \right )e^{-iEt/\hbar} describes a stationary state because:

\left | \psi\left ( x,t \right ) \right |^{2}=\psi^{*}\psi=\phi^{*}e^{iEt/\hbar}\phi e^{-iEt/\hbar}=\left | \phi\left ( x \right ) \right |^{2}

and

\left \langle H \right \rangle=\int \psi^{*}\hat{H}\psi dx=\int \phi^{*}e^{iEt/\hbar}\hat{H}\phi e^{-iEt/\hbar}dx=E\int \phi^{*}\phi dx=E

whereas \psi\left ( x,t \right )=\sum_{n=1}^{2}c_n\phi_n\left ( x \right )e^{-iE_nt/\hbar}=c_1\phi_1\left ( x \right )e^{-iE_1t/\hbar}+c_2\phi_2\left ( x \right )e^{-iE_2t/\hbar} does not describe a stationary state because:

\left | \psi\left ( x,t \right ) \right |^{2}=c_{1}^{*}c_1\phi_{1}^{*}(x)\phi_{1}(x)+c_{2}^{*}c_2\phi_{2}^{*}(x)\phi_{2}(x)+c_{1}^{*}c_2\phi_{1}^{*}(x)\phi_{2}(x)e^{i(E_1-E_2)t/\hbar}+c_{2}^{*}c_1\phi_{2}^{*}(x)\phi_{1}(x)e^{i(E_2-E_1)t/\hbar}

where the last two terms of the RHS of the last equality are time-dependent.

If \phi_1 and \phi_2 describe a degenerate state, where E_1=E_2=E, then \psi\left ( x,t \right ) describes a stationary state. Since an observable of a stationary state, e.g. H, is independent of time, every measurement of H on systems in such a state results in the same value E.
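
The contrast can be made concrete with a short numerical sketch (an added illustration using particle-in-a-box eigenfunctions with \hbar = m = L = 1): the probability density of the superposition changes with time when E_1 \neq E_2, but becomes time-independent when the two energies are set equal.

```python
import numpy as np

# Example eigenfunctions: particle in a box of length L = 1 (hbar = m = 1)
L = 1.0
x = np.linspace(0.0, L, 500)
phi1 = np.sqrt(2 / L) * np.sin(1 * np.pi * x / L)
phi2 = np.sqrt(2 / L) * np.sin(2 * np.pi * x / L)
E1, E2 = (1 * np.pi) ** 2 / 2, (2 * np.pi) ** 2 / 2

def density(t, E2=E2):
    """|psi(x, t)|^2 for an equal superposition of phi1 and phi2."""
    psi = (phi1 * np.exp(-1j * E1 * t) + phi2 * np.exp(-1j * E2 * t)) / np.sqrt(2)
    return np.abs(psi) ** 2

print(np.allclose(density(0.0), density(0.3)))                 # False: superposition is not stationary
print(np.allclose(density(0.0, E2=E1), density(0.3, E2=E1)))   # True: degenerate case is stationary
```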

 

Next article: dirac bra-ket notation
Previous article: ground state
Content page of quantum mechanics
Content page of advanced chemistry
Main content page

Ground state (quantum mechanics)

The ground state of a system is the lowest energy state of the system.

The state of a system depends on the way electrons are distributed in a chemical species. The distribution that results in the system having the lowest energy is the ground state of the system. Every other configuration is associated with a higher energy state known as an excited state. For example, the ground state of carbon is given by the term symbol ^{3}P, while the term symbols ^{1}D and ^{1}S denote excited states of carbon.

 

Next article: stationary state
Content page of quantum mechanics
Content page of advanced chemistry
Main content page

1st order consecutive reaction

A 1st order consecutive reaction of the type A → B → C is composed of the reactions:

A\rightarrow B\; \; \; \; \; \; \; v_1=k_1[A]

B\rightarrow C\; \; \; \; \; \; \; v_2=k_2[B]

The rate laws are:

\frac{d[A]}{dt}=-k_1[A]\; \; \; \; \; \; \; \; 20

\frac{d[B]}{dt}=k_1[A]-k_2[B]

\frac{d[C]}{dt}=k_2[B]

To understand how a 1st order consecutive reaction proceeds over time, we need to develop expressions for [A], [B] and [C]. The expression for [A] is the solution of eq20, i.e. [A]=[A_0]e^{-k_1t}. Substituting this in the 2nd rate law above and rearranging gives:

\frac{d[B]}{dt}+k_2[B]=k_1[A_0]e^{-k_1t}\; \; \; \; \; \; \; \; 21

Eq21 is a linear first order differential equation of the form y' + P(t)y = f(t). Multiplying eq21 by the integrating factor e^{k_2t}, we have

e^{k_2t}\frac{d[B]}{dt}+k_2e^{k_2t}[B]=k_1e^{k_2t}[A_0]e^{-k_1t}\; \; \; \; \; \; \; \; 22

The LHS of eq22 is the derivative of the product of e^{k_2t} and [B], i.e. \frac{d\left ( e^{k_2t}[B] \right )}{dt}. So,

\frac{d\left ( e^{k_2t}[B] \right )}{dt}=k_1e^{k_2t}[A_0]e^{-k_1t}

Integrating both sides with respect to time, noting that [B] = 0 at t = 0, and rearranging, we have

[B]=\frac{k_1}{k_2-k_1}\left ( e^{-k_1t}-e^{-k_2t} \right )[A_0]\; \; \; \; \; \; \; \; eq23

As t → ∞, [B] = 0.

At all times, [A] + [B] + [C] = [A0], so from eq23,

[A_0]-[A]-[C]=\frac{k_1}{k_2-k_1}\left ( e^{-k_1t}-e^{-k_2t} \right )[A_0]

Substituting [A]=[A_0]e^{-k_1t} in the above equation and rearranging yields:

[C]=\left( 1+\frac{k_1e^{-k_2t}-k_2e^{-k_1t}}{k_2-k_1} \right )[A_0]

As t → ∞, [C] = [A0].
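
A brief numerical sketch (added here; the rate constants and [A0] are arbitrary example values) evaluates these expressions and checks that mass balance holds at all times.

```python
import numpy as np

k1, k2, A0 = 1.0, 0.5, 1.0             # arbitrary example values
t = np.linspace(0.0, 20.0, 500)

A = A0 * np.exp(-k1 * t)
B = (k1 / (k2 - k1)) * (np.exp(-k1 * t) - np.exp(-k2 * t)) * A0            # eq23
C = (1 + (k1 * np.exp(-k2 * t) - k2 * np.exp(-k1 * t)) / (k2 - k1)) * A0

print(np.allclose(A + B + C, A0))      # True: [A] + [B] + [C] = [A0] at all times
print(B[-1], C[-1])                    # at long times, [B] -> 0 and [C] -> [A0]
```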

 

Next article: 2nd order reaction of the type (A+B→P)
Previous article: 1st order reversible reaction
Content page of advanced chemical kinetics
Content page of Advanced chemistry
Main content page

1st order reversible reaction

A 1st order reversible reaction of the type A\rightleftharpoons B is composed of the reactions:

A\rightarrow B\; \; \; \; \; \; \; v_1=k_1[A]

B\rightarrow A\; \; \; \; \; \; \; v_2=k_2[B]

The rate law is:

\frac{d[A]}{dt}=-k_1[A]+k_2[B]

Substituting [B] = [A0] - [A], where [A0] is the initial concentration of A, in the above equation and rearranging yields:

\frac{d[A]}{dt}=-(k_1+k_2)[A]+k_2[A_0]\; \; \; \; \; \; \; \; 16

Let

x=(k_1+k_2)[A]-k_2[A_0]\; \; \; \; \; \; \; \; 17

Eq16 becomes \frac{d[A]}{dt}=-x. Differentiating eq17 with respect to [A], we have d[A]=\frac{dx}{k_1+k_2}, and differentiating this expression with respect to time, we have \frac{d[A]}{dt}=\frac{1}{k_1+k_2}\frac{dx}{dt}, which is equivalent to

-x=\frac{1}{k_1+k_2}\frac{dx}{dt}\; \; \Rightarrow \; \; -(k_1+k_2)dt=\frac{dx}{x}

Let x = x0 when t =0 and integrate the above expression. We have

-(k_1+k_2)t=lnx-lnx_0\; \; \; \; \; \; \; \; 18

Substituting eq17 in eq18, noting that the 2nd term on RHS of eq18 refers to concentrations at t = 0, where [A] = [A0], gives

-(k_1+k_2)t=ln\frac{(k_1+k_2)[A]-k_2[A_0]}{(k_1+k_2)[A_0]-k_2[A_0]}

which rearranges to

[A]=\frac{k_2+k_1e^{-(k_1+k_2)t}}{k_1+k_2}[A_0]\; \; \; \; \; \; \; \; 19

When t = 0, eq19 becomes [A] = [A0]. As t → ∞,

[A_\infty ]=[A_{eqm}]=\frac{k_2}{k_1+k_2}[A_0]

Since [B_{eqm}]=[A_0]-[A_\infty ] ,

[B_{eqm}]=[A_0]-\frac{k_2}{k_1+k_2}[A_0]=[A_0]\frac{k_1}{k_1+k_2}

Therefore, the equilibrium constant for the reaction is:

K=\frac{[B_{eqm}]}{[A_{eqm}]}=\frac{k_1}{k_2}

This is the link between chemical kinetics and thermodynamics.
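
A quick numerical sketch (an added check; k1, k2 and [A0] are arbitrary example values) compares eq19 with a direct step-by-step integration of the rate law and confirms that the long-time ratio [B]/[A] approaches k1/k2.

```python
import numpy as np

k1, k2, A0 = 0.8, 0.2, 1.0
t = np.linspace(0.0, 40.0, 40001)
dt = t[1] - t[0]

# eq19: analytical [A](t)
A_analytic = (k2 + k1 * np.exp(-(k1 + k2) * t)) / (k1 + k2) * A0

# Simple Euler integration of d[A]/dt = -k1[A] + k2([A0] - [A]) for comparison
A_num = np.empty_like(t)
A_num[0] = A0
for i in range(1, len(t)):
    A_num[i] = A_num[i - 1] + dt * (-k1 * A_num[i - 1] + k2 * (A0 - A_num[i - 1]))

print(np.max(np.abs(A_num - A_analytic)))                 # small discretisation error
print((A0 - A_analytic[-1]) / A_analytic[-1], k1 / k2)    # [B]/[A] at long times vs k1/k2
```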

 

Next article: 1st order consecutive reaction
Previous article: Transition state theory
Content page of advanced chemical kinetics
Content page of Advanced chemistry
Main content page

3rd order reaction of the type (A + B + C → P)

The rate law for the 3rd order reaction of the type A + B + C → P is:

\frac{d[A]}{dt}=-k[A][B][C]

Using the same logic described in a previous article, we can rewrite the rate law as:

\frac{dx}{dt}=k(a-x)(b-x)(c-x)

where a = [A0], b = [B0] and c = [C0].

Integrating the above equation throughout gives

\int_{0}^{x}\frac{dx}{(a-x)(b-x)(c-x)}=k\int_{0}^{t}dt

Substituting the partial fraction expression \small \frac{1}{(a-x)(b-x)(c-x)}=\frac{1}{(a-x)(b-a)(c-a)}+\frac{1}{(b-x)(a-b)(c-b)}+\frac{1}{(c-x)(a-c)(b-c)}  in the above integral and working out some algebra yields

kt=\frac{1}{(b-a)(c-a)}ln\frac{a}{a-x}+\frac{1}{(a-b)(c-b)}ln\frac{b}{b-x}+\frac{1}{(a-c)(b-c)}ln\frac{c}{c-x}
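
The implicit result above can be sanity-checked numerically (an added sketch with arbitrary values of k, a, b and c): integrate dx/dt = k(a − x)(b − x)(c − x) step by step and substitute the resulting x into the right-hand side, which should reproduce kt.

```python
import numpy as np

k, a, b, c = 0.5, 1.0, 2.0, 3.0        # arbitrary example values
dt, steps = 1e-4, 20000
x, t = 0.0, 0.0
for _ in range(steps):                 # simple Euler integration of dx/dt = k(a-x)(b-x)(c-x)
    x += dt * k * (a - x) * (b - x) * (c - x)
    t += dt

rhs = (np.log(a / (a - x)) / ((b - a) * (c - a))
       + np.log(b / (b - x)) / ((a - b) * (c - b))
       + np.log(c / (c - x)) / ((a - c) * (b - c)))
print(k * t, rhs)                      # the two values should agree closely
```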

Next article: Maxwell-Boltzmann distribution
Previous article: 3rd order reaction of the type (A+2B→P)
Content page of advanced chemical kinetics
Content page of Advanced chemistry
Main content page

3rd order reaction of the type (A + 2B → P)

The rate law for the 3rd order reaction of the type A + 2B → P is:

\frac{d[A]}{dt}=-k[A][B]^2

Using the same logic described in a previous article, we can rewrite the rate law as:

\frac{dx}{dt}=k(a-x)(b-2x)^2

where a = [A0] and b = [B0].

Integrating the above equation throughout yields

\int_{0}^{x}\frac{dx}{(a-x)(b-2x)^2}=k\int_{0}^{t}dt\; \; \; \; \; \; \; \; 15

Substituting the partial fraction expression \frac{1}{(a-x)(b-2x)^2}=\frac{2}{(2a-b)(b-2x)^2}+\frac{1}{(2a-b)^2(a-x)}-\frac{2}{(2a-b)^2(b-2x)}  in eq15 and working out some algebra gives

kt=\frac{2x}{b(2a-b)(b-2x)}+\frac{1}{(2a-b)^2}ln\frac{a(b-2x)}{b(a-x)}
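
Both the partial-fraction decomposition and the integrated rate law can be verified symbolically; the short SymPy sketch below (an added check, not part of the original) confirms that the decomposition is exact and that the derivative of the integrated law recovers the original integrand.

```python
import sympy as sp

a, b, x = sp.symbols('a b x', positive=True)
integrand = 1 / ((a - x) * (b - 2 * x) ** 2)

# Partial-fraction decomposition quoted in the text
decomposition = (2 / ((2 * a - b) * (b - 2 * x) ** 2)
                 + 1 / ((2 * a - b) ** 2 * (a - x))
                 - 2 / ((2 * a - b) ** 2 * (b - 2 * x)))
print(sp.simplify(integrand - decomposition))      # 0: the decomposition is exact

# Integrated rate law: kt as a function of x; its derivative must equal the integrand
kt = (2 * x / (b * (2 * a - b) * (b - 2 * x))
      + sp.log(a * (b - 2 * x) / (b * (a - x))) / (2 * a - b) ** 2)
print(sp.simplify(sp.diff(kt, x) - integrand))     # 0: consistent with eq15
```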

 

Question

How to compute \int_{0}^{x}\frac{2}{(2a-b)(b-2x)^2}dx ?

Answer

\int_{0}^{x}\frac{2}{(2a-b)(b-2x)^2}dx=\int_{0}^{x}\frac{2b}{b(2a-b)(b-2x)^2}dx

=\int_{0}^{x}\frac{2[(b-2x)+2x]}{b(2a-b)(b-2x)^2}dx=\int_{0}^{x}\frac{2[(2a-b)(b-2x)+2x(2a-b)]}{b(2a-b)^2(b-2x)^2}dx

=\int_{0}^{x}\frac{2b(2a-b)(b-2x)-4bx(b-2a)}{b^2(2a-b)^2(b-2x)^2}dx=\frac{2x}{b(2a-b)(b-2x)}

Note that the integrand in the final integral can be obtained by differentiating the last expression using the quotient rule.

 

 

Next article: 3rd order reaction of the type (A+B+C→P)
Previous article: 2nd order autocatalytic reaction
Content page of advanced chemical kinetics
Content page of Advanced chemistry
Main content page

2nd order autocatalytic reaction

An autocatalytic reaction is one in which a product catalyses the reaction. An example is the Mn2+-catalysed oxidation of oxalic acid by potassium manganate (VII):

2MnO_4^{\; -}+16H^++5C_2O_4^{\; 2-}\rightarrow 2Mn^{2+}+10CO_2+8H_2O

The equation can be reduced to A + P → 2P, or simply, A → P, with the rate law:

\frac{d[A]}{dt}=-k[A][P]\; \; \; \; \; \; \; \; 13a

Using the same logic described in a previous article, we can rewrite the rate law as:

\frac{dx}{dt}=k(a-x)(p+x)

where a = [A0] and p = [P0].

Integrating throughout, we have

\int_{0}^{x}\frac{dx}{(a-x)(p+x)}=k\int_{0}^{t}dt\; \; \; \; \; \; \; \; 14

Substituting the partial fraction expression \frac{1}{(a-x)(p+x)}=\frac{1}{a+p}\left ( \frac{1}{a-x}+\frac{1}{p+x} \right ) in eq14 and working out the algebra yields

kt=\frac{1}{a+p}ln\frac{a(p+x)}{p(a-x)}

which is equivalent to

kt=\frac{1}{[A_0]+[P_0]}ln\frac{[A_0][P]}{[P_0][A]}
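
As a numerical illustration (added here; the rate constant and initial concentrations are arbitrary example values), the integrated law can be inverted for x(t), which shows the characteristic sigmoidal growth of an autocatalytic product: slow at first while little catalyst is present, then rapid, and finally levelling off as A is depleted.

```python
import numpy as np

k, a, p = 1.0, 1.0, 0.01              # arbitrary example values; p = [P0] is small
t = np.linspace(0.0, 15.0, 7)

# Invert kt = ln( a(p+x) / (p(a-x)) ) / (a+p) for x(t)
E = np.exp((a + p) * k * t)
x = a * p * (E - 1) / (a + p * E)
P = p + x

for ti, Pi in zip(t, P):
    print(f"t = {ti:5.1f}   [P] = {Pi:.4f}")      # slow start, rapid rise, plateau near a + p
```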

 

Question

The mechanism of the above reaction is found to include the following steps:

A+P\rightleftharpoons AP

AP\rightarrow P+P\; \; \; \; \; \; (rds)

With this in mind, derive the rate law and show that it is consistent with eq13a.

Answer

We can write the rate law as:

\frac{d[P]}{dt}=2k_{rds}[AP]=2k_{rds}K[A][P]=k[A][P]

where K is the equilibrium constant for the 1st step and k = 2k_{rds}K. Furthermore, the overall reaction is A → P, which means that

\frac{d[P]}{dt}=-\frac{d[A]}{dt}=k[A][P]

 

 

Next article: 3rd order reaction of the type (A+2B→P)
Previous article: 2nd order reaction of the type (A+2B→P)
Content page of advanced chemical kinetics
Content page of Advanced chemistry
Main content page