Standard Error: Sampling Distribution
A sampling distribution, or finite-sample distribution, is the probability distribution of a given statistic based on a random sample. Sampling distributions are important in statistics because they provide a major simplification on the route to statistical inference. More specifically, they allow analytical considerations to be based on the sampling distribution of a statistic, rather than on the joint probability distribution of all the individual sample values.
The sampling distribution of a statistic is the distribution of that statistic, considered as a random variable, when derived from a random sample of size n. It may be considered as the distribution of the statistic for all possible samples of a given size drawn from the same population. The sampling distribution depends on the underlying distribution of the population, the statistic being considered, the sampling procedure employed, and the sample size used. There is often considerable interest in whether the sampling distribution can be approximated by an asymptotic distribution, which corresponds to the limiting case as n → ∞.

For example, consider a normal population with mean μ and variance σ². Assume we repeatedly take samples of a given size from this population and calculate the arithmetic mean for each sample; this statistic is called the sample mean. Each sample has its own average value, and the distribution of these averages is called the "sampling distribution of the sample mean". This distribution is normal since the underlying population is normal, although sampling distributions may also often be close to normal even when the population distribution is not (see the central limit theorem).

An alternative to the sample mean is the sample median. When calculated from the same population, it has a different sampling distribution from that of the mean and is generally not normal (though it may be close for large sample sizes).

The mean of a sample from a population having a normal distribution is an example of a simple statistic taken from one of the simplest statistical populations. For other statistics and other populations the formulas are more complicated, and often they do not exist in closed form. In such cases the sampling distributions may be approximated through Monte Carlo simulations, bootstrap methods, or asymptotic distribution theory.
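As a concrete illustration of the Monte Carlo approach just mentioned, the following sketch approximates the sampling distributions of the sample mean and the sample median from the same normal population. The choice of Python with NumPy, the population parameters, the sample size, and the seed are all illustrative assumptions, not prescribed by the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Population: standard normal. Draw many samples of size n and record
# the mean and the median of each one.
n, n_samples = 25, 100_000
samples = rng.normal(loc=0.0, scale=1.0, size=(n_samples, n))

sample_means = samples.mean(axis=1)
sample_medians = np.median(samples, axis=1)

# The two statistics have different sampling distributions: for a normal
# population the median is less efficient, so its distribution is wider.
print("SD of sample means:  ", sample_means.std(ddof=1))
print("SD of sample medians:", sample_medians.std(ddof=1))

# Theory: SD of the mean is 1/sqrt(n); asymptotically the median's is
# sqrt(pi/2)/sqrt(n), which the empirical values should roughly match.
print("theory:", 1 / np.sqrt(n), np.sqrt(np.pi / 2) / np.sqrt(n))
```

Because every replication is an independent sample from the same population, the collection of recorded statistics is exactly the Monte Carlo approximation of a sampling distribution described above.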
Standard error
The standard deviation of the sampling distribution of the statistic is referred to as the standard error of that quantity. For the case where the statistic is the sample mean, and samples are uncorrelated, the standard error is:

$$\mathrm{SE}_{\bar{x}} = \frac{\sigma}{\sqrt{n}}$$
where σ is the standard deviation of the population distribution of that quantity and n is the size (number of items) of the sample. An important implication of this formula is that the sample size must be quadrupled (multiplied by 4) to halve the standard error. When designing statistical studies where cost is a factor, this may have a role in understanding cost-benefit tradeoffs.
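A quick numerical check of this formula and of the quadrupling rule, sketched under illustrative assumptions (a hypothetical normal population with σ = 2, and arbitrary simulation sizes):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 2.0  # population standard deviation (assumed known here)

def se_of_mean(n, n_samples=200_000):
    """Empirical standard error of the mean for samples of size n."""
    means = rng.normal(0.0, sigma, size=(n_samples, n)).mean(axis=1)
    return means.std(ddof=1)

# Quadrupling n from 25 to 100 should halve the standard error.
for n in (25, 100):
    print(f"n={n:4d}  empirical SE={se_of_mean(n):.4f}  "
          f"theory sigma/sqrt(n)={sigma / np.sqrt(n):.4f}")
```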
Statistical inference
In the theory of statistical inference, the idea of a sufficient statistic provides the basis for choosing a statistic (as a function of the sample data points) in such a way that no information is lost by replacing the full probabilistic description of the sample with the sampling distribution of the selected statistic.

In frequentist inference, for example in the development of a statistical hypothesis test or a confidence interval, the availability of the sampling distribution of a statistic (or an approximation to it in the form of an asymptotic distribution) allows the ready formulation of such procedures, whereas developing procedures starting from the joint distribution of the sample would be less straightforward.

In Bayesian inference, when the sampling distribution of a statistic is available, one can consider replacing the final outcome of such procedures, specifically the conditional distributions of any unknown quantities given the sample data, by the conditional distributions of any unknown quantities given selected sample statistics. Such a procedure would involve the sampling distribution of the statistics. The results would be identical provided the statistics chosen are jointly sufficient statistics.
A sample is a subset of a population. Typically, the population is very large, making a census or a complete enumeration of all the values in the population impractical or impossible. The sample represents a subset of manageable size. Samples are collected and statistics are calculated from the samples so that one can make inferences or extrapolations from the sample to the population. This process of collecting information from a sample is referred to as sampling.

Sampling Distribution of the Mean
If I wanted to form a sampling distribution of the mean I would:

1. Sample repeatedly from the population.
2. Calculate the statistic of interest (the mean) for each sample.
3. Form a distribution based on the set of means I obtain from the samples.

The set of means I obtain will form a new distribution: a sampling distribution, in this case the sampling distribution of the mean. Take a look at the following demonstration for a visual representation.
In this example a small population of four values is represented. Every possible combination of values from the population is sampled to form a true sampling distribution of the mean. Note, however, that the sampling distribution of any real population would be far larger, which makes sampling distributions theoretical constructs rather than something that can be built in practice.
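The demonstration itself is not reproduced here, but its logic can be followed exactly for a hypothetical four-value population. The values and the sample size n = 2 below are illustrative choices, not taken from the original demonstration:

```python
from itertools import product
from statistics import mean

population = [2, 4, 6, 8]  # hypothetical four-value population
n = 2                      # sample size for the demonstration

# Enumerate every possible sample of size n (with replacement) and
# record its mean: this is the exact sampling distribution of the mean.
all_means = [mean(s) for s in product(population, repeat=n)]

print("population mean:           ", mean(population))   # 5
print("mean of the sampling dist.:", mean(all_means))    # also 5
print("number of possible samples:", len(all_means))     # 4**2 = 16
```

Because every possible sample is enumerated, this is the exact sampling distribution rather than an approximation, and its mean matches the population mean exactly.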
This demonstration illustrates Rule 1 of the Central Limit Theorem: the mean of the population and the mean of the sampling distribution of means will always have the same value. This rule is important to hypothesis testing because, even though any given sample will not be exactly like the population, on average the sample mean equals the population mean. Repeated experiments will yield sample means that cluster around the population mean, and their long-run average will match it exactly.

Rule 2 of the Central Limit Theorem states that the sampling distribution of the mean will be approximately normal regardless of the shape of the population distribution, provided the samples are large enough. Whether the population distribution is normal, positively or negatively skewed, unimodal or bimodal in shape, the sampling distribution of the mean will take on an approximately normal shape. In the following example we start out with a uniform distribution. The sampling distribution of the mean will still contain variability in the mean values we obtain from sample to sample, but it will have a normal shape even though the population distribution does not. Notice that because we are taking a sample of values from all parts of the population, the mean of the samples will be close to the center of the population distribution.
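Both rules can be checked by simulation. The sketch below draws repeated samples from a uniform population and inspects the resulting distribution of means; the uniform population, sample size, and number of replications are illustrative assumptions, and SciPy is used only as a convenient tool:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Uniform(0, 1) population: mean 0.5, very non-normal shape.
n, n_samples = 30, 50_000
means = rng.uniform(0.0, 1.0, size=(n_samples, n)).mean(axis=1)

# Rule 1: the mean of the sampling distribution matches the population mean.
print("mean of sample means:", means.mean())  # close to 0.5

# Rule 2: the sampling distribution is approximately normal even though the
# population is uniform; skewness and excess kurtosis should both be near 0.
print("skewness:       ", stats.skew(means))
print("excess kurtosis:", stats.kurtosis(means))
```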
t Distribution
Probability Density Function

The formula for the probability density function of the t distribution is

$$f(x) = \frac{\left(1 + \dfrac{x^2}{\nu}\right)^{-\frac{\nu+1}{2}}}{\sqrt{\nu}\,B\!\left(\dfrac{1}{2}, \dfrac{\nu}{2}\right)}$$

where B is the beta function and ν is a positive integer shape parameter. The formula for the beta function is

$$B(\alpha, \beta) = \int_0^1 t^{\alpha-1}(1-t)^{\beta-1}\,dt = \frac{\Gamma(\alpha)\,\Gamma(\beta)}{\Gamma(\alpha+\beta)}$$
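As a sanity check on the density formula above, the following sketch evaluates it directly (via log-gamma, for numerical stability) and compares it with SciPy's t density. SciPy is used here only as an independent reference, not because the original text mentions it:

```python
import math
from scipy import stats

def t_pdf(x, nu):
    """t probability density via the beta-function formula above."""
    # B(1/2, nu/2) computed through log-gamma to avoid overflow.
    beta = math.exp(math.lgamma(0.5) + math.lgamma(nu / 2)
                    - math.lgamma((nu + 1) / 2))
    return (1 + x * x / nu) ** (-(nu + 1) / 2) / (math.sqrt(nu) * beta)

# The hand-rolled density and the library density should agree.
x = 1.5
for nu in (1, 5, 30):
    print(nu, t_pdf(x, nu), stats.t.pdf(x, df=nu))
```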
In a testing context, the t distribution is treated as a "standardized distribution" (i.e., no location or scale parameters). However, in a distributional modeling context (as with other probability distributions), the t distribution itself can be transformed with a location parameter, μ, and a scale parameter, σ.
The following is the plot of the t probability density function for four different values of the shape parameter ν.
These plots all have a similar shape. The difference is in the heaviness of the tails. In fact, the t distribution with ν equal to 1 is a Cauchy distribution. The t distribution approaches a normal distribution as ν becomes large. The approximation is quite good for values of ν > 30.

Cumulative Distribution Function

The formula for the cumulative distribution function of the t distribution is complicated and is not included here. It is given in the Evans, Hastings, and Peacock book. The following are the plots of the t cumulative distribution function with the same values of ν as the pdf plots above.
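The ν > 30 rule of thumb is easy to verify numerically. The sketch below, standing in for the plots (which are not reproduced here), compares the t and standard normal cdfs at a fixed point:

```python
from scipy import stats

# As nu grows, the t cdf approaches the standard normal cdf; the match
# is already close once nu exceeds 30.
for nu in (1, 5, 30, 100):
    print(f"nu={nu:4d}  t cdf(1.96)={stats.t.cdf(1.96, df=nu):.4f}  "
          f"normal cdf(1.96)={stats.norm.cdf(1.96):.4f}")
```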
Percent Point Function

The formula for the percent point function of the t distribution does not exist in a simple closed form. It is computed numerically. The following are the plots of the t percent point function with the same values of ν as the pdf plots above.
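Likewise, the numerical computation of the percent point function can be illustrated with SciPy (again an illustrative tool choice, not one named by the text); round-tripping through the cdf confirms the inversion:

```python
from scipy import stats

# The percent point function (inverse cdf) is computed numerically.
q = stats.t.ppf(0.975, df=10)
print("97.5th percentile, nu=10:", q)                      # about 2.228
print("cdf at that point:       ", stats.t.cdf(q, df=10))  # 0.975
```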
Other Probability Functions

Since the t distribution is typically used to develop hypothesis tests and confidence intervals and rarely for modeling applications, we omit the formulas and plots for the hazard, cumulative hazard, survival, and inverse survival probability functions.

Common Statistics

Mean: 0 (undefined for ν equal to 1)
Median: 0
Mode: 0
Range: infinity in both directions
Standard deviation: $\sqrt{\nu/(\nu-2)}$ (undefined for ν equal to 1 or 2)
Coefficient of variation: undefined
Skewness: 0 (undefined for ν less than or equal to 3; however, the t distribution is symmetric in all cases)
Kurtosis: $3(\nu-2)/(\nu-4)$ (undefined for ν less than or equal to 4)

Parameter Estimation

Since the t distribution is typically used to develop hypothesis tests and confidence intervals and rarely for modeling applications, we omit any discussion of parameter estimation.

Comments

The t distribution is used in many cases for the critical regions for hypothesis tests and in determining confidence intervals. The most common example is testing whether data are consistent with the assumed process mean.

Software

Most general purpose statistical software programs, including Dataplot, support at least some of the probability functions for the t distribution.
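The tabulated moments can be cross-checked numerically. The sketch below compares SciPy's computed moments with the formulas above for a few values of ν where all moments exist (ν > 4); the specific ν values are arbitrary:

```python
import math
from scipy import stats

# Compare library moments against the tabulated formulas. SciPy reports
# excess kurtosis, so 3 is added back for the comparison.
for nu in (5, 10, 30):
    mean, var, skew, excess = stats.t.stats(df=nu, moments="mvsk")
    print(f"nu={nu:2d}  mean={float(mean):.1f}  "
          f"sd={math.sqrt(float(var)):.4f} "
          f"(formula {math.sqrt(nu / (nu - 2)):.4f})  "
          f"skewness={float(skew):.1f}  "
          f"kurtosis={float(excess) + 3:.4f} "
          f"(formula {3 * (nu - 2) / (nu - 4):.4f})")
```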