📚 Learning

Statistics Without Tears

Derek Rowntree

5-min read
More on Amazon
A pretty comprehensive guide to statistics for the non-technical reader. Rowntree focuses on the 'why' of quantitative methods so that the 'how' makes more sense.

Statistical Inquiry

Statistics has been developed as a way of making sense of collections of observations. It aims, in particular, to help us avoid jumping to conclusions, and it reminds us to be cautious about the extent to which we can generalise from our always-limited experience.

Likelihood/probability is central to the statistical view of the world.

Statistics helps us to look for reliable regularities and associations among things 'in general' and 'in the long run'. At the same time, however, it teaches us to be cautious about expecting these regularities to hold true of any specific individuals.

Chief concerns of statistics:

  1. Summarising what we have found out so that we and other people can be clear about what the facts are
  2. Using the summary to make comparisons with what may turn out to be true in situations elsewhere or even in the future

Statistical thinking is a way of recognising that our observations of the world can never be totally accurate; they are always somewhat uncertain.

Types of Statistics

  1. Descriptive: Methods used to summarise or describe a sample
  2. Inferential: Generalising from a sample to make estimates and inferences about a wider population

Reliability of the generalisation will depend on how well the sample represents the population.

Describing Samples

Variables

Variable: Any attribute or characteristic that helps us distinguish one individual from another.

  • Category Variable: Any variable that involves putting individuals into categories (also called Nominal Variable).
  • Ordinal Variable: Whenever one member of a sample has more of a certain characteristic than another, we can arrange them in order (e.g. Excellent-Acceptable-Poor).
  • Quantity Variable: Where what we are looking for is a numerical value, a quantity.
  • Discrete Variable: Where possible values are clearly separated from each other (e.g. 1 child, as opposed to 1.4 children)
  • Continuous Variable: Whatever two values you mention, it's always possible to imagine more values in between them (e.g. height)

Errors

Some error is inevitable in statistics.

Errors differ from one subject to another. In experimental science they are likely to be minute. In social science, they are much larger.

Summarising Data

Frequency Distribution: The frequency with which each value in the distribution was observed.

Central Tendency (Mean, Median, Mode)

The tendency of the observations to pile up around a particular value.

The more variable the values, the more dispersed they will be, so we are looking for a measure of dispersion.

Inter-quartile range: The 'mini-range' based on the difference in values between quartiles

Standard Deviation: The most common measure of dispersion. The greater the dispersion, the bigger the deviations and the bigger the standard ('average') deviation.

Calculating the Standard Deviation (data set: 111, 114, 117, 118, 120)

  1. Calculate the arithmetic mean (116)
  2. Calculate deviation of each data point (-5, -2, +1, +2, +4)
  3. Square each deviation, getting rid of minus signs (25, 4, 1, 4, 16)
  4. Calculate the variance which is the arithmetic mean of these numbers (10)
  5. The square root (√) of this number is the standard deviation (3.16)
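
The arithmetic in these steps can be reproduced in a few lines of Python. This is only a minimal sketch of the procedure above, using the same five values; it is not code from the book.

```python
data = [111, 114, 117, 118, 120]

mean = sum(data) / len(data)              # step 1: arithmetic mean -> 116.0
deviations = [x - mean for x in data]     # step 2: deviations -> [-5, -2, 1, 2, 4]
squared = [d ** 2 for d in deviations]    # step 3: squared deviations -> [25, 4, 1, 4, 16]
variance = sum(squared) / len(data)       # step 4: variance -> 10.0
std_dev = variance ** 0.5                 # step 5: standard deviation -> ~3.16

print(mean, variance, round(std_dev, 2))  # 116.0 10.0 3.16
```

Note that dividing by the number of values (as in step 4) gives the population form of the variance; dividing by one less than that number gives the usual sample estimate.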

Whatever the distribution, you'll find that the standard deviation never approaches anywhere near the range.

The Shape of a Distribution

A distribution can be positively or negatively skewed:

  • positively, if the skew (tail) is on the right (positive) side;
  • negatively, if the skew (tail) is on the left (negative) side.

In symmetrical distributions, all three measures of central tendency are to be found in the same place—at or near the centre. The measures move depending on whether there is a positive or negative skew.

The normal curve of distribution (the bell curve) is not the 'usual' curve; rather, 'norm' is being used in the sense of a pattern or standard (idealised, perfect) against which we can compare real-world distributions.

Proportions under the normal curve

z-values: The base-line of the distribution marked off in units of the standard deviation (above and below the mean).

Any value in a distribution can be converted into a z-value by subtracting the mean of the distribution and dividing the difference by the standard deviation.
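
As a sketch of this conversion (not code from the book), using the earlier data set's mean of 116 and standard deviation of roughly 3.16:

```python
def z_value(x, mean, std_dev):
    # Express x as a number of standard deviations above (+) or below (-) the mean.
    return (x - mean) / std_dev

print(round(z_value(120, 116, 3.16), 2))  # 1.27: about 1.27 standard deviations above the mean
```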

From Sample to Population

Statistics: Figures derived from the sample (e.g. mean, median, mode, range, standard deviation, inter-quartile range). Represented by Roman letters (e.g. sample mean = x̄)

Parameters: The true mean, mode etc. of the population. Represented by Greek letters (e.g. population mean = μ)

Sampling Variation: the variability from one sample to another.

Standard Error (SE): The standard deviation of a sampling distribution (e.g. of the sample-means). Influenced by three factors:

  1. The standard deviation within the sample.
  2. The size of the sample.
  3. The proportion of the population covered by the sample.

The smaller the standard error, the more confident we can be that our sample-mean is close to the population-mean.

Confidence Interval: The sample mean (x̄) ± a margin of error based on the SE (e.g. 50 ± 1.5 gives a range of 48.5 to 51.5, which is a 68% confidence interval)
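
A minimal sketch of these two ideas, assuming the common estimate of the standard error as the sample standard deviation divided by the square root of the sample size (the list above also mentions the fraction of the population covered, which this simple formula ignores). The figures match the example in the text and are otherwise hypothetical.

```python
import math

def standard_error(sample_sd, n):
    # Common estimate of the SE of the mean: s / sqrt(n) (no finite-population correction).
    return sample_sd / math.sqrt(n)

def confidence_interval(sample_mean, se, z=1.0):
    # z = 1.0 gives roughly a 68% interval; z = 1.96 gives roughly a 95% interval.
    return sample_mean - z * se, sample_mean + z * se

print(confidence_interval(50, 1.5))        # (48.5, 51.5): the 68% interval from the example
print(confidence_interval(50, 1.5, 1.96))  # approximately (47.06, 52.94): a ~95% interval
```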

Comparing Samples

Test of Significance: Asking whether the difference between samples is big enough to signify a real difference between populations.

Null Hypothesis: An assumption that attempts to nullify the difference between two sample-means by suggesting that it is of no statistical significance. Through the research process the null hypothesis is 'under assault'.

Alternative Hypothesis: When we reject the null hypothesis, we replace it with an alternative hypothesis. The most usual alternative hypothesis is simply that the two population-means are not the same.

We reject the null hypothesis when the difference between samples signifies a real difference between the populations.

'Significant' does not necessarily imply 'interesting' or 'important'. Note also that it is not the social or human value of the difference we are looking at. We are only concerned with how big it is in terms of the standard error of such differences (and thus how likely it is to be repeated in future comparisons).

The bigger the difference, the more confidently we can reject the null hypothesis.

In significance testing there are two opposite risks:

  • Type 1 error: to reject the null hypothesis when it is true
  • Type 2 error: to accept the null hypothesis when it is false

The emphasis is on reducing Type 1 errors.

Further Matters of Significance

Critical region: an area in the theoretical distribution where the alternative hypothesis will seem more acceptable than the null hypothesis.

z-test: the significance test discussed above.

t-test: used instead of the z-test when samples are smaller than about 30 (see the sketch below)

f-test: a single test whereby several samples can be compared all at once
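
As a sketch of comparing two small samples with a t-test, using scipy's ttest_ind; the samples here are made-up scores, not data from the book.

```python
from scipy import stats

# Two hypothetical small samples (e.g. test scores under two teaching methods).
sample_a = [62, 70, 68, 75, 66, 71, 69]
sample_b = [58, 64, 60, 67, 62, 59, 63]

# Independent two-sample t-test: is the difference between the sample means big
# enough, relative to its standard error, to signify a real population difference?
t_stat, p_value = stats.ttest_ind(sample_a, sample_b)

# A small p-value (conventionally below 0.05) puts the result in the critical
# region, so we would reject the null hypothesis of 'no real difference'.
print(round(t_stat, 2), round(p_value, 4))
```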

Analysing Relationships

Correlation concerns the strength of the relationship between the values of two variables.

Regression Analysis: determines the nature of that relationship and enables us to make predictions from it.

Correlation Coefficient (r): a numerical figure that is at its maximum when the correlation is strong and reduces towards a minimum as the correlation weakens.
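
A minimal sketch of computing the (Pearson) correlation coefficient by hand in Python; the paired values below are hypothetical, chosen to show a strong positive correlation.

```python
import math

def pearson_r(xs, ys):
    # Pearson correlation coefficient between two equal-length lists of values.
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical pairs: hours studied vs. test score.
hours = [1, 2, 3, 4, 5]
score = [52, 57, 63, 70, 74]
print(round(pearson_r(hours, score), 3))  # close to +1: a strong positive correlation
```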

How far we can trust a sample-r as an estimate of the population-r will depend on two factors:

  1. The size of the coefficient.
  2. The size of the sample.

Correlation (a mathematical relationship) can never prove a causal connection. What it does do is give support to an explanation you can justify on logical grounds.

Image credits: Statistics Without Tears by Derek Rowntree