- Simple Random Sampling and Sampling Distribution
- Sampling Error
- Stratified Random Sampling
- Time Series and Cross Sectional Data
- Central Limit Theorem
- Standard Error of the Sample Mean
- Parameter Estimation
- Point Estimates
- Confidence Interval Estimates
- Confidence Interval for a Population mean, with a known Population Variance
- Confidence Interval for a Population mean, with an Unknown Population Variance
- Confidence Interval for a Population Mean, when the Distribution is Non-normal
- Student’s t Distribution
- How to Read Student’s t Table
- Biases in Sampling

# Central Limit Theorem

The Central Limit Theorem is a fundamental theorem of probability that describes the sampling distribution of the sample mean. According to the Central Limit Theorem, for simple random samples drawn from any population with a finite mean and variance, the sampling distribution of the sample mean becomes approximately normal as n grows large. Here n represents the size of each sample.

The sample mean will be approximately normally distributed regardless of the population distribution, i.e., regardless of the distribution of the parent population.

The Central Limit Theorem tells us what happens when we sum a large number of independent random variables, each of which contributes a small amount to the total.

The Central Limit Theorem has the following characteristics:

- The mean of the sample means is equal to the mean of the population (μ) from which the samples were drawn.
- The standard deviation of the distribution of sample means equals the population standard deviation (σ) divided by the square root of n, i.e., σ/√n. This is also called the standard error of the sample mean.
- For the sampling distribution to approximate a normal distribution, the sample size must be sufficiently large (n ≥ 30 is the common rule of thumb).
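These characteristics can be checked empirically with a short simulation. The sketch below (using NumPy; the variable names and the choice of an exponential population are illustrative assumptions, not from the text) draws many samples of size n from a skewed population and verifies that the mean of the sample means is close to μ and that their standard deviation is close to σ/√n:

```python
import numpy as np

rng = np.random.default_rng(42)

n = 30             # size of each sample (meets the n >= 30 rule of thumb)
num_samples = 10_000

# Skewed (exponential) parent population with mean mu = 1 and std sigma = 1;
# the CLT should still make the sample means look approximately normal.
mu, sigma = 1.0, 1.0
samples = rng.exponential(scale=mu, size=(num_samples, n))

# One mean per sample: this is an empirical sampling distribution of the mean.
sample_means = samples.mean(axis=1)

# Mean of the sample means should be close to the population mean mu.
empirical_mean = sample_means.mean()

# Std of the sample means should be close to sigma / sqrt(n),
# the standard error of the sample mean.
empirical_se = sample_means.std(ddof=1)
theoretical_se = sigma / np.sqrt(n)
```

Running this, `empirical_mean` lands very near 1.0 and `empirical_se` near σ/√30 ≈ 0.183, even though the parent distribution is strongly skewed.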
