Several issues arise when trying to find the Minimum Variance Unbiased (MVU) estimator of a parameter.
- The Probability Density Function (PDF) of the data may not be known.
- Even when it is known in principle, the PDF may be difficult to model.
- Even in cases where the PDF is known, it can be difficult to derive the minimum variance estimator from it.
The usual approach in such situations is to use a sub-optimal estimator and impose the restriction of linearity on it.
- This estimator is unbiased.
- Its minimum variance can be found using only the first and second moments of the probability density function (PDF).
- It is of more practical use, since the complete PDF is never required.
- Its variance is the lowest among all linear unbiased estimators.
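As a small numerical sketch (example mine, not from the source), the last property can be checked directly: under colored noise with a known covariance, the BLUE weights achieve a lower variance than the naive equal weights of the sample mean, while both remain unbiased. All values below are hypothetical.

```python
import numpy as np

# Hypothetical colored noise: unequal variances across the three samples
C = np.diag([1.0, 4.0, 9.0])
ones = np.ones(3)
Cinv = np.linalg.inv(C)

# BLUE weights vs. naive sample-mean weights (both sum to 1, hence unbiased)
a_blue = Cinv @ ones / (ones @ Cinv @ ones)
a_mean = ones / 3.0

# Variance of a linear estimator a @ x is a @ C @ a
var_blue = a_blue @ C @ a_blue   # equals 1 / (ones @ Cinv @ ones)
var_mean = a_mean @ C @ a_mean
print(var_blue, var_mean)        # BLUE variance is the smaller of the two
```

The BLUE down-weights the noisier samples, which is exactly why its variance beats the equal-weight average.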
The following steps summarize the construction of the Best Linear Unbiased Estimator (BLUE):
- Define a linear estimator.
- Impose the constraint that it be unbiased.
- Minimize its variance.
- Determine the conditions under which the minimum variance is attained.
- Express the result in vector form.
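The steps above can be sketched for the simplest case, estimating a DC level A from data x (example and names mine): the linear estimator is theta_hat = a @ x, unbiasedness requires the weights a to sum to one, and minimizing the variance a @ C @ a under that constraint yields a = C⁻¹1 / (1ᵀC⁻¹1).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical known noise covariance (diagonal, i.e. uncorrelated samples)
N = 5
C = np.diag([1.0, 2.0, 3.0, 4.0, 5.0])
ones = np.ones(N)

# Minimizing a @ C @ a subject to a @ ones == 1 gives the BLUE weights:
Cinv = np.linalg.inv(C)
a = Cinv @ ones / (ones @ Cinv @ ones)

# Apply the estimator to one simulated data record x = A + noise
A_true = 3.0
x = A_true + rng.multivariate_normal(np.zeros(N), C)
A_blue = a @ x

print(a @ ones)                    # unbiasedness constraint (sums to 1)
print(1.0 / (ones @ Cinv @ ones)) # the minimum variance attained
```

Note that only the covariance C enters the construction, in line with the moment requirement above.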
The BLUE becomes the MVU estimator when the data is Gaussian, regardless of whether the parameter is a scalar or a vector.
Only two quantities are needed to compute the BLUE: the scaled mean and the covariance, the first and second moments of the data respectively.
Advantages and Disadvantages
If the data can be modeled as linear observations in noise, the Gauss-Markov theorem can be used to find the BLUE. The theorem also generalizes the BLUE result to the case where the observation matrix is not of full rank.
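Under the Gauss-Markov model the data is x = Hθ + w, with a known observation matrix H and zero-mean noise w of known covariance C, and the BLUE is θ̂ = (HᵀC⁻¹H)⁻¹HᵀC⁻¹x. A minimal numpy sketch, with all dimensions and values chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear observation model x = H @ theta + w
n, p = 50, 2
H = np.column_stack([np.ones(n), np.arange(n)])  # full-rank observation matrix
theta_true = np.array([2.0, 0.5])

# Zero-mean noise with a known, non-identity covariance
C = np.diag(1.0 + 0.1 * np.arange(n))
w = rng.multivariate_normal(np.zeros(n), C)
x = H @ theta_true + w

# Gauss-Markov / BLUE: theta_hat = (H' C^-1 H)^-1 H' C^-1 x
Cinv = np.linalg.inv(C)
theta_blue = np.linalg.solve(H.T @ Cinv @ H, H.T @ Cinv @ x)
cov_blue = np.linalg.inv(H.T @ Cinv @ H)  # covariance of the estimator
print(theta_blue)
```

Using `np.linalg.solve` rather than forming the inverse of HᵀC⁻¹H explicitly is the numerically preferable way to evaluate the formula.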
BLUE is applicable to amplitude estimation of known signals in noise. Note, however, that the noise need not be Gaussian in nature.
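As an illustration (example mine), the BLUE for the amplitude A of a known signal s observed as x = A s + w is Â = sᵀC⁻¹x / (sᵀC⁻¹s), and its derivation uses only the noise covariance C. Here the noise is deliberately non-Gaussian (uniform), so only its second moment C = (b²/3)I enters:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical known signal shape and the amplitude to be estimated
n = 8
s = np.cos(2 * np.pi * 0.1 * np.arange(n))
A_true = 1.5

# Uniform (non-Gaussian) noise on [-b, b]; its covariance is (b^2 / 3) * I
b = 0.5
C = (b**2 / 3.0) * np.eye(n)
w = rng.uniform(-b, b, size=n)
x = A_true * s + w

# BLUE amplitude estimate: A_hat = s' C^-1 x / (s' C^-1 s)
Cinv = np.linalg.inv(C)
A_hat = (s @ Cinv @ x) / (s @ Cinv @ s)
print(A_hat)
```

Because C is a scaled identity here, the estimate reduces to the matched correlation sᵀx / sᵀs, but the same formula handles correlated noise unchanged.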
The biggest disadvantage of the BLUE is that it is sub-optimal by construction, and it is sometimes not the right fit for the problem at hand.