Slutsky's theorem

Handout
\begin{thm}[Slutsky]
    If
    \[
    X_n\xrightarrow{\;\mathcal{D}\;} X \text{ and } Y_n\xrightarrow{\;\mathcal{P}\;} a,
    \]
    then
    \[
        X_n Y_n\xrightarrow{\;\mathcal{D}\;}aX \text{ and } X_n+Y_n\xrightarrow{\mathcal{D}}a+X.
    \]
\end{thm}
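As a quick sanity check (not part of the handout), the theorem can be illustrated by simulation. In the Python sketch below, $X_n$ is a standardized mean of uniform variables (which converges in distribution to $n(0,1)$ by the CLT) and $Y_n$ is a mean of exponential variables with mean $a=2$ (which converges in probability to $a$ by the LLN); the product $X_nY_n$ should then behave approximately like $a$ times a standard normal. The sample sizes, distributions, and seed are arbitrary illustrative choices.
\begin{verbatim}
# Simulation sketch of Slutsky's theorem (illustration only).
# X_n: standardized mean of n uniforms      -> n(0,1) in distribution (CLT)
# Y_n: mean of n exponentials with mean a=2 -> a in probability (LLN)
# Then X_n * Y_n is approximately n(0, a^2) for large n.
import numpy as np

rng = np.random.default_rng(0)
n, a, reps = 500, 2.0, 20_000

u = rng.uniform(size=(reps, n))                       # mean 1/2, variance 1/12
x_n = (u.mean(axis=1) - 0.5) / np.sqrt(1 / (12 * n))  # -> n(0,1)
y_n = rng.exponential(scale=a, size=(reps, n)).mean(axis=1)  # -> a

prod = x_n * y_n
print(prod.mean(), prod.std())   # roughly 0 and a = 2
\end{verbatim}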

\begin{xmpl}{}
We know that if $X_n \sim b\left(n,p\right)$ then, by the law of large numbers,
$$\hat{p}_n := \frac{X_n}{n} \xrightarrow{\mathcal{P}} p,$$
and since the function
$$x\mapsto \sqrt{x(1-x)}$$
is continuous, the continuous mapping theorem gives
$$\sqrt{\hat{p}_n(1-\hat{p}_n)} \xrightarrow{\mathcal{P}} \sqrt{p(1-p)}.$$

We also know that $X_n$ can be written as a sum
$$X_n \stackrel{\mathcal{D}}{=} \sum\limits_{i=1}^n Y_i $$
where the $Y_i \sim b(1,p)$ are i.i.d.\ Bernoulli variables, and
$\hat{p}_n$ therefore has the same distribution as an average,
$$\hat{p}_n \stackrel{\mathcal{D}}{=} \frac{\sum\limits_{i=1}^n Y_i }{n},$$
so by the CLT
$$\frac{\hat{p}_n - \mathrm{E}[\hat{p}_n]}{\sqrt{\mathrm{V}[\hat{p}_n]}} \xrightarrow{\mathcal{D}} n\left(0,1\right).$$
But $\mathrm{E}[\hat{p}_n]=p$ and $\mathrm{V}[\hat{p}_n]=\frac{p(1-p)}{n}$, so combined with
$\sqrt{\hat{p}_n(1-\hat{p}_n)}\xrightarrow{\mathcal{P}}\sqrt{p(1-p)}$ from above, Slutsky's theorem lets us conclude
$$\frac{\hat{p}_n-p}{\sqrt{\hat{p}_n(1-\hat{p}_n)/n}} \xrightarrow{\mathcal{D}} n\left(0,1\right).$$
\end{xmpl}
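The conclusion of the example can be checked numerically. The Python sketch below (illustration only; $n$, $p$, the number of replications, and the seed are arbitrary choices) draws binomial variables and verifies that the studentized statistic $(\hat{p}_n-p)/\sqrt{\hat{p}_n(1-\hat{p}_n)/n}$ behaves approximately like a standard normal.
\begin{verbatim}
# Check that (p_hat - p) / sqrt(p_hat (1 - p_hat) / n) is approximately n(0,1).
# Illustration only; n, p, reps and the seed are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n, p, reps = 200, 0.3, 100_000

x = rng.binomial(n, p, size=reps)          # X_n ~ b(n, p)
p_hat = x / n                              # p_hat_n = X_n / n
z = (p_hat - p) / np.sqrt(p_hat * (1 - p_hat) / n)

print(z.mean(), z.var())                   # roughly 0 and 1
print(np.mean(np.abs(z) > 1.96))           # roughly 0.05
\end{verbatim}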


On assumptions:

1) When should we use the $t$-distribution?

$$\frac{\bar{X}_n-\mu}{S_n/\sqrt{n}} \sim t_{n-1}$$

This holds exactly if $X_1,\ldots,X_n \sim n\left(\mu,\sigma^2\right)$, i.i.d.


2) But if $n$ is ``large'' then this still holds as an approximation, based on combining the CLT and Slutsky's theorem:
$$
\frac{\bar{X}_n-\mu}{S_n/\sqrt{n}} \stackrel{.}{\sim} n\left(0,1\right)
$$

Here we just need the $X_i$ i.i.d.\ with finite $\sigma^2$ -- we do \textbf{not} need the original random variables to be Gaussian.
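To get a sense of how large $n$ needs to be before the normal approximation is adequate, one can compare the $t_{n-1}$ quantile with the corresponding $n(0,1)$ quantile. The SciPy sketch below (illustration only; the chosen sample sizes and the 97.5\% level are arbitrary) shows how quickly $t_{n-1,0.975}$ approaches $z_{0.975}\approx 1.96$.
\begin{verbatim}
# Compare t_{n-1, 0.975} with z_{0.975} for a few sample sizes (illustration only).
from scipy import stats

z = stats.norm.ppf(0.975)
for n in (5, 10, 30, 100, 1000):
    t = stats.t.ppf(0.975, df=n - 1)
    print(f"n = {n:5d}: t quantile = {t:.4f}, z quantile = {z:.4f}")
\end{verbatim}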
 
\nl Slutsky's theorem has a series of consequences. If $X_1,
X_2,\ldots$ are i.i.d.\ with $\mathrm{E}\left[X_1^2\right]<\infty$ (so
that $\mu=\mathrm{E}[X_1]$ and $\sigma^2=\mathrm{V}[X_1]<\infty$), then the mean
$\bar{X}_n:=\frac{1}{n}\sum_{i=1}^{n} X_i$ satisfies, by the CLT,
\[
    \frac{\bar{X}_n-\mu}{\sigma/\sqrt{n}}\xrightarrow{\;\mathcal{D}\;}n(0,1).
\]
If we further define the sample variance
\[
    \mathrm{S}_n^2:=\frac{1}{n-1}\sum_{i=1}^{n} (X_i-\bar{X}_n)^2,
\]
then $\mathrm{S}_n^2\xrightarrow{\;\mathcal{P}\;} \sigma^2$ and
hence $\mathrm{S}_n\xrightarrow{\;\mathcal{P}\;}\sigma$, so Slutsky's theorem
implies:
\begin{align*}
    \frac{\bar{X}_n-\mu}{\mathrm{S}_n/\sqrt{n}} &= \frac{\sqrt{n}\,\frac{\bar{X}_n-\mu}{\sigma}}{\mathrm{S}_n/\sigma}\\
    &=\underbrace{\frac{\sigma}{\mathrm{S}_n}}_{\xrightarrow{\mathcal{P}} 1}\cdot\underbrace{\sqrt{n}\,\frac{\bar{X}_n-\mu}{\sigma}}_{\xrightarrow{\mathcal{D}}\, n(0,1)}\xrightarrow{\;\mathcal{D}\;}n(0,1).
\end{align*}

Note that this implies that we can approximate probabilities of events
of the form
\[
    \mathrm{P}\left[\bar{X}_n-\kappa\frac{\mathrm{S}_n}{\sqrt{n}}\leq \mu\leq\bar{X}_n+\kappa\frac{\mathrm{S}_n}{\sqrt{n}}\right]=\mathrm{P}\left[-\kappa \leq\frac{\bar{X}_n-\mu}{\mathrm{S}_n/\sqrt{n}}\leq\kappa\right]
\]
by the corresponding $n(0,1)$ probabilities, i.e.
\[
    \mathrm{P}\left[\bar{X}_n-\kappa\frac{\mathrm{S}_n}{\sqrt{n}}\leq \mu\leq\bar{X}_n+\kappa\frac{\mathrm{S}_n}{\sqrt{n}}\right]\approx 1-\alpha
\]
where $\kappa=z_{1-\frac{\alpha}{2}}$. This is an \textbf{approximation}.
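A small Monte Carlo experiment can illustrate how good this approximation is. The Python sketch below (my own illustration, not from the handout; the exponential data, sample size, and number of replications are arbitrary choices) estimates the coverage of the interval $\bar{X}_n\pm z_{1-\frac{\alpha}{2}}\,\mathrm{S}_n/\sqrt{n}$ for clearly non-Gaussian data.
\begin{verbatim}
# Coverage check for the approximate interval X_bar +/- z * S_n / sqrt(n)
# with exponential data (mean mu = 1). Illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, mu, alpha, reps = 200, 1.0, 0.05, 50_000
kappa = stats.norm.ppf(1 - alpha / 2)          # z_{1 - alpha/2}

x = rng.exponential(scale=mu, size=(reps, n))
xbar = x.mean(axis=1)
s = x.std(axis=1, ddof=1)                      # S_n, 1/(n-1) normalization
half = kappa * s / np.sqrt(n)

print(np.mean((xbar - half <= mu) & (mu <= xbar + half)))   # roughly 1 - alpha
\end{verbatim}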


Finally,
if $X_i\sim n(\mu,\sigma^2)$ i.i.d.\ we already know that
\[
    T_n:=\frac{\bar{X}_n-\mu}{\mathrm{S}_n/\sqrt{n}}=\dfrac{\dfrac{\bar{X}_n-\mu}{\sigma/\sqrt{n}}}{\sqrt{\dfrac{\sum_{i=1}^n (X_i-\bar{X}_n)^2}{\sigma^2} \bigg/(n-1)}}
\]
where $\dfrac{\sum_{i=1}^n (X_i-\bar{X}_n)^2}{\sigma^2}\sim \chi_{n-1}^2$ independently of the numerator,
so that $T_n\sim t_{n-1}$ exactly, and therefore
\[
    P\left[\bar{X}_n-\kappa\frac{\mathrm{S}_n}{\sqrt{n}}\leq\mu\leq\bar{X}_n+\kappa\frac{\mathrm{S}_n}{\sqrt{n}}\right]=1-\alpha
\]

where $\kappa=t_{n-1,1-\frac{\alpha}{2}}$. This is \textbf{exact} but requires the assumption of normality of the data.
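For completeness, here is how the exact $t$-based interval could be computed in practice. The sketch below (illustration only; the simulated Gaussian data, $\mu$, $\sigma$, and the small sample size are arbitrary choices) uses the $t_{n-1,1-\frac{\alpha}{2}}$ quantile from SciPy.
\begin{verbatim}
# Exact interval under normality: X_bar +/- t_{n-1, 1-alpha/2} * S_n / sqrt(n).
# Illustration only; the data are simulated Gaussian just to have numbers.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=10.0, scale=3.0, size=20)        # small n, so t vs z matters

alpha = 0.05
kappa = stats.t.ppf(1 - alpha / 2, df=x.size - 1)   # t_{n-1, 1-alpha/2}
xbar, s, n = x.mean(), x.std(ddof=1), x.size

lo, hi = xbar - kappa * s / np.sqrt(n), xbar + kappa * s / np.sqrt(n)
print(f"exact {100 * (1 - alpha):.0f}% CI for mu: ({lo:.3f}, {hi:.3f})")
\end{verbatim}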



\begin{xmpl}
  $X_i\in\{0,1\}$ with $P[X_i=1]=p=1-P[X_i=0]$, $X_i$ i.i.d.,
  i.e.\ $X_i\sim b(1,p)$ i.i.d., and $Y_n:=\sum_{i=1}^n X_i\sim b(n,p)$.
  

We know that
  $\frac{\frac{1}{n}Y_n-\mu}{\sigma/\sqrt{n}}\xrightarrow{\mathcal{D}}n(0,1)$
  (CLT), where $\mu=\mathrm{E}[Y_n]/n=p$ and $\frac{\sigma^2}{n} =
  \mathrm{V}\left[\frac{Y_n}{n}\right] = \frac{1}{n^2}\,np(1-p)$, i.e.\ $\sigma^2=p(1-p)$. So if
  $\hat{p}_n=\frac{1}{n}Y_n$ then
    \[
        \frac{\hat{p}_n-p}{\sqrt{p(1-p)/n}}\xrightarrow{\mathcal{D}}n(0,1).
    \]
    We could use
        $\mathrm{P}\left[-z_{1-\frac{\alpha}{2}}\leq\frac{\hat{p}_n-p}{\sqrt{p(1-p)/n}}\leq
          z_{1-\frac{\alpha}{2}}\right]\approx 1-\alpha$ to obtain
        intervals of the form

    \[
        \mathrm{P}\left[f_1(\hat{p})\leq p \leq
                  f_2(\hat{p})\right]\approx 1-\alpha ,
    \]
    but since we know that $\hat{p}_n\xrightarrow{\mathcal{P}} p$
        we obtain, using Slutsky's theorem,
    \begin{equation}\label{eah1}
        \frac{\hat{p}-p}{\sqrt{\hat{p}(1-\hat{p})/n}}\xrightarrow{\mathcal{D}}n(0,1)
    \end{equation}
    [more exactly: $\hat{p}\xrightarrow{\mathcal{P}}p$ and
        $s\mapsto\frac{1}{\sqrt{s(1-s)}}$ is continuous
    \[
        \Rightarrow \frac{1}{\sqrt{\hat{p}(1-\hat{p})}}\xrightarrow{\mathcal{P}}\frac{1}{\sqrt{p(1-p)}}
    \]
    and \eqref{eah1} is therefore a consequence of Slutsky's theorem]

    i.e.\ we obtain:
    \[
        \mathrm{P}\left[\hat{p}-z_{1-\frac{\alpha}{2}}\sqrt{\hat{p}(1-\hat{p})/n}\leq p\leq \hat{p}+z_{1-\frac{\alpha}{2}}\sqrt{\hat{p}(1-\hat{p})/n}\right]\approx 1-\alpha
    \]
\end{xmpl}
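The resulting Wald-type interval for $p$ is easy to compute, and its approximate coverage can be checked by simulation. The Python sketch below (illustration only; $n$, $p$, $\alpha$, and the number of replications are arbitrary choices) does both.
\begin{verbatim}
# Wald interval p_hat +/- z_{1-alpha/2} * sqrt(p_hat (1 - p_hat) / n) and a
# Monte Carlo check that its coverage is roughly 1 - alpha. Illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, p, alpha, reps = 200, 0.3, 0.05, 100_000
z = stats.norm.ppf(1 - alpha / 2)

p_hat = rng.binomial(n, p, size=reps) / n
half = z * np.sqrt(p_hat * (1 - p_hat) / n)

print(np.mean((p_hat - half <= p) & (p <= p_hat + half)))   # roughly 0.95
\end{verbatim}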