The standard deviation of a set of numbers is a measure of the dispersion of the numbers with respect to the mean.

Consider the following sets of numbers:

-2, -2, 2, 2 with a mean of 0

-3, -1, 0, 4 with a mean of 0

The two sets of numbers have the same mean, or average value, of 0. However, the numbers in the first set are clearly less dispersed about the central value of 0 than those in the second set. Therefore, we need an averaging indicator that can tell us the deviation, or spread, of the numbers about the mean. Suppose we find the average of the absolute values of the numbers:

$$\frac{|-2| + |-2| + |2| + |2|}{4} = 2 \qquad\qquad \frac{|-3| + |-1| + |0| + |4|}{4} = 2$$

We end up with the same value for both sets, with no information regarding the respective spreads. If instead we square each deviation from the mean, average the squares, and take the square root,

$$\sqrt{\frac{(-2)^2 + (-2)^2 + 2^2 + 2^2}{4}} = 2 \qquad\qquad \sqrt{\frac{(-3)^2 + (-1)^2 + 0^2 + 4^2}{4}} \approx 2.55$$

we have a reasonable distinction between the spreads.
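The comparison between the two approaches can be sketched in Python (the function names here are illustrative, not from any library):

```python
a = [-2, -2, 2, 2]
b = [-3, -1, 0, 4]

def mean_abs(xs):
    # Average of the absolute values of the numbers.
    return sum(abs(x) for x in xs) / len(xs)

def std_dev(xs):
    # Square root of the average squared deviation from the mean.
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

print(mean_abs(a), mean_abs(b))  # 2.0 2.0 -> no distinction between spreads
print(std_dev(a), std_dev(b))    # 2.0 vs ~2.55 -> clear distinction
```

The absolute-value method assigns both sets the same spread, while the squared-deviation method separates them, matching the computations above.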

What we have done for each set in the above computation is:

1. Taken the difference between an element in the set and the mean.
2. Squared the difference.
3. Repeated steps 1 and 2 for the rest of the elements in the set.
4. Summed all the squared differences.
5. Divided the sum by the number of elements in the set.
6. Computed the square root of the result in step 5.
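The steps above translate directly into code. A minimal sketch (the function name is illustrative):

```python
import math

def standard_deviation(values):
    mean = sum(values) / len(values)
    squared_diffs = []
    for x in values:
        diff = x - mean                   # difference from the mean
        squared_diffs.append(diff ** 2)   # square it; loop repeats for each element
    total = sum(squared_diffs)            # sum of all squared differences
    variance = total / len(values)        # divide by the number of elements
    return math.sqrt(variance)            # square root of the result

print(standard_deviation([-2, -2, 2, 2]))  # 2.0
```

Note that dividing by the number of elements gives the population standard deviation, which is the convention used throughout this discussion.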

This method of measure is called the *standard deviation*. In general,

$$\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N} (x_i - \mu)^2}$$

where $\mu$ is the mean and $N$ is the number of elements in the set.

If the mean of the set of numbers is zero,

$$\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N} x_i^2}$$

We call the standard deviation for this special case the **root mean square** value of the set. Notice that the results obtained from the absolute-value method and the standard deviation (or root mean square) method are the same if the magnitudes of all the elements in the set are the same.
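That final observation can be checked directly. In this sketch (function names are illustrative), the first set has all magnitudes equal to 2, so the two measures agree; for the second set they differ:

```python
def rms(values):
    # Root mean square: the standard deviation when the mean is zero.
    return (sum(x * x for x in values) / len(values)) ** 0.5

def mean_abs(values):
    # Average of the absolute values of the numbers.
    return sum(abs(x) for x in values) / len(values)

same_magnitude = [-2, -2, 2, 2]   # every element has magnitude 2
mixed = [-3, -1, 0, 4]

print(rms(same_magnitude), mean_abs(same_magnitude))  # 2.0 2.0 -> equal
print(rms(mixed), mean_abs(mixed))                    # ~2.55 vs 2.0 -> differ
```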