$$ \require{cancel} \newcommand{\given}{ \,|\, } \renewcommand{\vec}[1]{\mathbf{#1}} \newcommand{\vecg}[1]{\boldsymbol{#1}} \newcommand{\mat}[1]{\mathbf{#1}} \newcommand{\bbone}{\unicode{x1D7D9}} $$

Assignment 4 Solutions


Q1

The Exponential distribution has probability density function (pdf):

\[\tilde{f}(y \,|\, \lambda) = \begin{cases} \lambda e^{-\lambda y} & \mbox{if } y \in [0,\infty) \\ 0 & \mbox{otherwise} \end{cases}\]

where \(\lambda > 0\).

Simulate three values (pen-and-paper, not R) from this pdf via inverse transform sampling using the following values simulated from the Uniform\((0,1)\) distribution:

\[0.56, \ \ 0.85, \ \ 0.26\]

\[\begin{align*} \tilde{F}(y) = \mathbb{P}(Y \le y) &= \int_0^y \lambda e^{-\lambda t} \,dt \\ &= \left. -e^{-\lambda t} \right|_{t=0}^y \quad \text{by substitution ($u=\lambda t$) or inspection} \\ &= 1 - e^{-\lambda y} \\[1em] \implies \tilde{F}^{-1}(u) &= -\lambda^{-1} \log(1-u) \end{align*}\]

Therefore, we can use the three uniform simulations provided in the question to generate three Exponentially distributed simulations: \[\begin{align*} u = 0.56 & \implies y = -\lambda^{-1} \log(1-0.56) = 0.821 \lambda^{-1} \\ u = 0.85 & \implies y = -\lambda^{-1} \log(1-0.85) = 1.897 \lambda^{-1} \\ u = 0.26 & \implies y = -\lambda^{-1} \log(1-0.26) = 0.301 \lambda^{-1} \end{align*}\]
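Although the question asks for pen-and-paper working, the arithmetic can be checked with a minimal R sketch; it simply evaluates \(-\log(1-u)\) for the three supplied uniform draws, giving the coefficients of \(\lambda^{-1}\) above.

``` r
# Check of the pen-and-paper arithmetic for Q1 (inverse transform sampling).
u <- c(0.56, 0.85, 0.26)   # supplied Uniform(0,1) draws
-log(1 - u)                # 0.8210, 1.8971, 0.3011  ->  y = these values / lambda
```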

Q2

Let the random variable \(X\) have pdf:

\[f(x \,|\, \mu) = \begin{cases} \mu^2 x e^{-\mu x} & \mbox{if } x \in [0,\infty) \\ 0 & \mbox{otherwise} \end{cases}\]

where \(\mu > 0\).

Show that the Exponential distribution can be used as a proposal distribution in a rejection sampler to generate simulations of \(X\). Ensure you state any conditions on \(\lambda\) and \(\mu\).

We require \(c < \infty\) such that \[ \mu^2 x e^{-\mu x} \le c \lambda e^{-\lambda x} \quad \forall\ x \in [0, \infty) \] In other words, the smallest such constant is: \[ c = \sup_{x \in [0,\infty)} \frac{f(x \,|\, \mu)}{\tilde{f}(x \,|\, \lambda)} = \sup_{x \in [0,\infty)} \frac{\mu^2}{\lambda} x e^{(\lambda-\mu) x} \] Firstly, we note that at \(x=0\) the expression is zero, and as \(x \to +\infty\) the exponential term decays faster than the linear term in \(x\) grows provided the exponent is negative, that is \(\lambda - \mu < 0\), meaning \(\mu > \lambda\). Under this condition the supremum is finite.

Next, to find \(c\), \[\begin{align*} \frac{\partial}{\partial x}\left( \frac{\mu^2}{\lambda} x e^{(\lambda-\mu) x} \right) &= \frac{\mu^2}{\lambda} \left( e^{(\lambda-\mu) x} + (\lambda-\mu) x e^{(\lambda-\mu) x} \right) \quad \text{(product rule)} \\ &= \frac{\mu^2}{\lambda} \left( 1 + (\lambda-\mu) x \right) e^{(\lambda-\mu) x} \end{align*}\] This derivative is zero only if \[ 1 + (\lambda-\mu) x = 0 \implies x = \frac{1}{\mu - \lambda} \] which is well defined and positive since we require \(\mu > \lambda\) from above. Because the ratio is zero at \(x=0\) and tends to zero as \(x \to +\infty\), this stationary point is its maximum.

Therefore, we have shown \(\tilde{f}\) is a suitable proposal in a rejection sampler for \(f\) when \(\mu > \lambda\) and has bounding constant: \[\begin{align*} c &= \frac{\mu^2}{\lambda} \frac{1}{\mu - \lambda} e^{(\lambda-\mu) \frac{1}{\mu - \lambda}} \\ &= \frac{\mu^2}{\lambda (\mu - \lambda)} e^{-1} \end{align*}\]
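As an aside (not asked for in the question), a short R sketch of the resulting rejection sampler is given below. The function name `rgamma2_reject` and the example values \(\mu = 3\), \(\lambda = 1\) are illustrative assumptions only.

``` r
# A minimal sketch of the rejection sampler derived above.
rgamma2_reject <- function(n, mu, lambda) {
  stopifnot(lambda > 0, mu > lambda)                      # condition from Q2
  c_bound <- mu^2 / (lambda * (mu - lambda)) * exp(-1)    # bounding constant from Q2
  out <- numeric(n)
  for (i in seq_len(n)) {
    repeat {
      x <- rexp(1, rate = lambda)                         # Exponential(lambda) proposal
      # accept with probability f(x) / (c * f~(x))
      accept_prob <- (mu^2 * x * exp(-mu * x)) /
                     (c_bound * lambda * exp(-lambda * x))
      if (runif(1) < accept_prob) { out[i] <- x; break }
    }
  }
  out
}

# Example: simulate 1000 draws with mu = 3 and a valid proposal rate lambda = 1
x <- rgamma2_reject(1000, mu = 3, lambda = 1)
```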

Q3

For any choice of \(\mu\), what is the optimal \(\lambda\) to choose as the parameter in the proposal distribution?

We recall that \(c\) is the expected number of iterations of the rejection sampling algorithm until acceptance. Therefore, the optimal choice of \(\lambda\) will be the one which leads to the smallest value for \(c\). To find this, we differentiate wrt \(\lambda\):

\[\begin{align*} \frac{\partial }{\partial \lambda} \left[ \frac{\mu^2 e^{-1}}{\lambda (\mu - \lambda)} \right] &= \frac{\lambda (\mu - \lambda) \times 0 - \mu^2 e^{-1} \times \frac{\partial }{\partial \lambda}\left[ \lambda (\mu - \lambda) \right]}{\lambda^2 (\mu - \lambda)^2} \quad \text{(quotient rule)} \\ &= \frac{-e^{-1} \mu^2 (\mu - 2\lambda)}{\lambda^2 (\mu - \lambda)^2} \quad \text{since } \frac{\partial }{\partial \lambda}\left[ \lambda\mu - \lambda^2 \right] = \mu - 2\lambda \end{align*}\]

The above is only zero if \(\mu-2\lambda = 0\). Note from the question that \(\mu > 0\), and from solving Q2 that \(\mu > \lambda\), so there is no risk that \(\mu - \lambda = 0\) in the denominator. Moreover, \(c \to \infty\) as \(\lambda \to 0^+\) and as \(\lambda \to \mu^-\), so this stationary point is indeed a minimum of \(c\).

Therefore, the optimal choice is \(\lambda = \frac{\mu}{2} \ \forall\,\mu > 0\).
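As a quick check (not part of the required working), the minimisation can be verified numerically in R; the value \(\mu = 3\) below is an arbitrary illustrative choice.

``` r
# Numerically minimise c(lambda) over (0, mu) and compare with the analytical mu / 2.
mu <- 3                                                       # illustrative value
c_of_lambda <- function(lambda) mu^2 / (lambda * (mu - lambda)) * exp(-1)
optimize(c_of_lambda, interval = c(1e-6, mu - 1e-6))$minimum  # approx mu / 2 = 1.5
```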

Q4

Show that when the optimal \(\lambda\) is used the expected number of iterations required to produce a single simulation of \(X\) is approximately 1.47 for all \(\mu.\)

We simply substitute the optimal choice of \(\lambda\) back into the expression for \(c\),

\[\begin{align*} c &= \frac{\mu^2}{\lambda (\mu - \lambda)} e^{-1} \\ &= \frac{\mu^2}{\frac{\mu}{2} \left(\mu - \frac{\mu}{2}\right)} e^{-1} \\ &= 4 e^{-1} \approx 1.47 \end{align*}\]

as required.
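As an optional empirical check, we can run the Q2 rejection sampler with \(\lambda = \mu/2\) and count the average number of proposals needed per accepted draw; the choices \(\mu = 3\) and 10,000 accepted draws below are arbitrary and illustrative.

``` r
# Empirical check: average number of proposals per acceptance should be close to 4/e.
mu <- 3; lambda <- mu / 2
c_bound <- mu^2 / (lambda * (mu - lambda)) * exp(-1)
n <- 1e4; proposals <- 0; accepted <- 0
while (accepted < n) {
  x <- rexp(1, rate = lambda)
  proposals <- proposals + 1
  if (runif(1) < (mu^2 * x * exp(-mu * x)) / (c_bound * lambda * exp(-lambda * x)))
    accepted <- accepted + 1
}
proposals / n   # should be close to 4 * exp(-1) = 1.4715...
```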

🏁🏁 Done, end of assignment! 🏁🏁