- Simple Random Sampling and Sampling Distribution
- Sampling Error
- Stratified Random Sampling
- Time Series and Cross Sectional Data
- Central Limit Theorem
- Standard Error of the Sample Mean
- Parameter Estimation
- Point Estimates
- Confidence Interval Estimates
- Confidence Interval for a Population Mean, with a Known Population Variance
- Confidence Interval for a Population Mean, with an Unknown Population Variance
- Confidence Interval for a Population Mean, when the Distribution is Non-normal
- Student’s t Distribution
- How to Read Student’s t Table
- Biases in Sampling
Point Estimates
A point estimate is a single statistic that serves as the "best guess" for a parameter value (such as the population mean). Point estimates are calculated using formulas, such as the formulas for the sample mean or the sample standard deviation. These formulas are called estimators, and the values calculated using these estimators are called estimates. For example, the sample mean calculated from a sample is a point estimate of the population mean.
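As a minimal sketch, here is how point estimates might be computed in Python (the sample values are hypothetical, made up for illustration):

```python
import statistics

# Hypothetical sample of ten daily returns (%); made-up values for illustration
sample = [1.2, -0.5, 0.8, 2.1, -1.3, 0.4, 1.7, -0.2, 0.9, 0.6]

# The formula (estimator) is statistics.mean; the value it returns is the estimate
x_bar = statistics.mean(sample)   # point estimate of the population mean
s = statistics.stdev(sample)      # point estimate of the population standard deviation

print(f"Point estimate of the mean: {x_bar:.3f}")
print(f"Point estimate of the standard deviation: {s:.3f}")
```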
Properties of a Good Estimator
There are three main properties associated with a good estimator. They are:
Unbiasedness: An estimator is unbiased if its expected value equals the parameter it is estimating. In other words, the mean of the estimator's sampling distribution equals the parameter value. For example, the sample mean is unbiased because the mean of its sampling distribution equals the population mean (E(x̄) = μ).
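A short simulation can illustrate this property. The sketch below assumes a normal population with μ = 10 and σ = 2 (both made-up values); the average of many sample means lands close to μ:

```python
import random
import statistics

random.seed(42)
MU, SIGMA = 10.0, 2.0     # assumed population parameters
N, TRIALS = 30, 10_000    # sample size and number of repeated samples

# Draw many samples and record each sample mean
sample_means = [statistics.mean(random.gauss(MU, SIGMA) for _ in range(N))
                for _ in range(TRIALS)]

# The mean of the sampling distribution should be close to MU, i.e. E(x_bar) = mu
print(f"Average of {TRIALS} sample means: {statistics.mean(sample_means):.3f}")
```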
Efficiency: An unbiased estimator is most efficient when it has the smallest standard error among all unbiased estimators; that is, no other unbiased estimator has a sampling distribution with a lower variance.
For example, both the sample mean and the sample median are estimators of the centre of a set of data. When the data come from a distribution with thick tails, the sample median is more efficient. When the data come from a distribution with thin tails, such as the normal distribution, the sample mean is more efficient.
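One way to see this trade-off is to compare the variance of the two estimators across repeated samples. The sketch below assumes a standard normal population for the thin-tailed case and a Laplace (double-exponential) population for the thick-tailed case:

```python
import random
import statistics

random.seed(0)
N, TRIALS = 50, 5_000   # sample size and number of repeated samples

def laplace(n, b=1.0):
    # Heavy-tailed (Laplace) draws: exponential magnitude with a random sign
    return [random.choice([-1, 1]) * random.expovariate(1 / b) for _ in range(n)]

def normal(n):
    # Thin-tailed draws from the standard normal distribution
    return [random.gauss(0, 1) for _ in range(n)]

def spread(estimator, sampler):
    # Variance of an estimator's estimates across repeated samples
    return statistics.pvariance([estimator(sampler(N)) for _ in range(TRIALS)])

print(f"Normal:  var(mean) = {spread(statistics.mean, normal):.4f}, "
      f"var(median) = {spread(statistics.median, normal):.4f}")
print(f"Laplace: var(mean) = {spread(statistics.mean, laplace):.4f}, "
      f"var(median) = {spread(statistics.median, laplace):.4f}")
```

With these assumed populations, the sample mean shows the smaller variance in the normal case, while the sample median shows the smaller variance in the Laplace case.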
Consistency: A consistent estimator converges in probability to the parameter it estimates. In other words, as the sample size increases, the probability that the estimate is close to the value of the population parameter increases. Equivalently, as n approaches infinity, the variance of the estimator approaches 0.
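A brief simulation makes this concrete. Assuming a normal population with μ = 5 and σ = 3 (made-up values), the variance of the sample mean tracks σ²/n and shrinks as n grows:

```python
import random
import statistics

random.seed(1)
MU, SIGMA, TRIALS = 5.0, 3.0, 2_000   # assumed population parameters

# As n grows, the variance of the sample mean shrinks toward 0 (sigma^2 / n)
for n in (10, 100, 1_000):
    means = [statistics.mean(random.gauss(MU, SIGMA) for _ in range(n))
             for _ in range(TRIALS)]
    print(f"n = {n:5}: var(x_bar) = {statistics.pvariance(means):.4f} "
          f"(theory: {SIGMA ** 2 / n:.4f})")
```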