Best linear unbiased estimators (BLUE)

Handout
Certain estimators can be derived from scratch using a definition of
optimality.


If $Y_1,\dots,Y_n$ are independent random variables, one can consider
estimators of the form
$$W = \displaystyle\sum^n_{i=1}a_iY_i$$
and choose the coefficients $(a^*_1,\dots,a^*_n)=:\underline{a}^*$
so that
\[E\sum a^*_i Y_i = \tau(\theta)\]
\[V\sum a^*_i Y_i = \min_{\underline{a}} V\sum a_iY_i,\]
where the minimum is taken over all $\underline{a}$ satisfying the unbiasedness condition $E\sum a_i Y_i = \tau(\theta)$.
\begin{xmpl}
    Let $Y_1,\dots,Y_n\sim n(\mu,\sigma^2)$ be i.i.d.\ and let $\tau(\theta) = \mu$.
    $$W = \sum a_i Y_i$$
    We require $EW = \mu$, that is
    $$\mu = E\sum a_i Y_i \overset{(***)}{=} \sum a_i \mu$$
    \begin{equation}\tag{*}
    \Rightarrow \sum a_i = 1
    \end{equation}
    Further,
    $$VW \overset{(**)}{=} \sum a_i^2 \sigma^2.$$
    We thus want to solve
    $$\min_{a_1,\dots,a_n} \sum a_i^2 \quad \text{subject to} \quad \sum a_i = 1.$$
    $$L = \sum a_i^2 + \lambda(\sum a_i - 1)$$
    $$0 = \frac{\partial}{\partial a_i} L = 2a_i + \lambda \Rightarrow a_i = \frac{-\lambda}{2}$$
    i.e.\ all the $a_i$ are equal, so $(*)$ gives $a_i =
        \frac{1}{n}$, and hence $\bar{Y}$ is the BLUE for $\mu$ when
        $Y_1,\dots,Y_n \sim n(\mu,\sigma^2)$.
\end{xmpl}
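The constrained minimization in the example can also be checked numerically. The following is only an illustrative Python sketch, not part of the derivation; it assumes \texttt{numpy} and \texttt{scipy} are available, and it minimizes $\sum a_i^2$ subject to $\sum a_i = 1$, which should recover the weights $a_i = 1/n$.
\begin{verbatim}
# Numerical check of the constrained minimization in the example:
# minimize sum(a_i^2) subject to sum(a_i) = 1; the solution should be a_i = 1/n.
import numpy as np
from scipy.optimize import minimize

n = 5
constraint = {"type": "eq", "fun": lambda a: np.sum(a) - 1.0}
result = minimize(lambda a: np.sum(a ** 2),
                  x0=np.random.rand(n),
                  constraints=[constraint])

print(result.x)           # approximately [0.2, 0.2, 0.2, 0.2, 0.2]
print(np.full(n, 1 / n))  # the BLUE weights a_i = 1/n derived above
\end{verbatim}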
\begin{note}
    We assumed independence and a common variance $\sigma^2$ in
        $(**)$, and identical means in $(***)$, but not
        normality; hence $\bar{Y}$ is BLUE for $\mu$ whenever
        $Y_1,\dots,Y_n$ are i.i.d.\ with expected value $\mu$ and a
        common finite variance $\sigma^2$.
\end{note}
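To illustrate the note, the following Monte Carlo sketch (again only illustrative; the exponential distribution, sample size and alternative weights are arbitrary choices) compares the variance of $\bar{Y}$ with that of another unbiased linear combination when the $Y_i$ are i.i.d.\ but not normal.
\begin{verbatim}
# Monte Carlo illustration: Y_i i.i.d. exponential (mean 1, variance 1), n = 10.
# Compare the variance of the BLUE weights a_i = 1/n with other weights summing to 1.
import numpy as np

rng = np.random.default_rng(0)
n, reps = 10, 100_000
Y = rng.exponential(scale=1.0, size=(reps, n))

a_bar = np.full(n, 1 / n)                 # BLUE weights
a_alt = np.arange(1, n + 1, dtype=float)
a_alt /= a_alt.sum()                      # weights sum to 1, but are unequal

print(np.var(Y @ a_bar))  # close to sigma^2 / n = 0.1
print(np.var(Y @ a_alt))  # larger: sigma^2 * sum(a_alt^2) is about 0.127
\end{verbatim}
Both estimators are unbiased for $\mu = 1$, but the equal weights give the smaller variance, as the derivation predicts.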