Pursuing Data


Bootstrapping

Bat arm holding onto bar

This is what bootstrapping is like, yeah? In class we were told that the term comes from the idea of being stuck in quicksand and pulling yourself out by your own bootstraps. So bootstrapping means fixing the problem yourself: getting what you need to solve the issue from what you already have.

So what is the problem we are trying to solve? And what are the resources that we have to solve it?

The Problem to Solve

Umbrellas on crosswalk

Quite simply, the problem is this: how do we answer a question about a group of people if we can't ask every person?

Let's say that you want to know the proportion of black umbrellas used by the people of Vancouver. (Let's face it, it rains a LOT here!) You could go to the city centre one lunch hour and tally the colours of the umbrellas. If the count you get looks something like the picture to the right, you'd probably say a bit over 10%.

But doesn't this just mean that you know about the people carrying umbrellas at that particular time? How do you use this information to gain insight into the entire population?

The Available Resources

So if all you had was this picture, what resources do you have to be able to answer your question?

The answer is actually the picture itself! (Plus this very intriguing concept of using this information to bootstrap to estimates of the population.)

How to Bootstrap

So to bootstrap the above information, you could do something like this:

  1. Take a bucket and fill it with white balls (non-black umbrellas) and black balls (black umbrellas). (If you represented the bucket as a list of 1's and 0's it would look something like this:
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
  2. You would then choose a sample size, say 200, close your eyes, pick one of the balls from the bucket, and record its colour. Then you would put the ball back into the bucket, swirl it around, and pick again. You'd do that 200 times.
  3. For your 200 picks, you'd then find the proportion of black balls/umbrellas in that sample. (You could just count the number of black balls and divide by 200.)
  4. You would then repeat the above process something like 10,000 times and record all of the different proportions you found.
  5. You could then plot the frequency of all the proportions you found, calculate the average of all of the proportions, and cut off 2.5% from each tail of the frequency distribution to create a 95% confidence interval.
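The steps above can be sketched in a few lines of Python. This is a minimal version of the idea, assuming the umbrella tally is represented as the list of 1's (black) and 0's (non-black) shown in step 1; the exact numbers you get will vary from run to run.

```python
import random

random.seed(42)  # fix the seed so the run is reproducible

# Step 1: the "bucket" -- 3 black umbrellas out of 23 observed.
bucket = [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1,
          0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]

n_picks = 200        # step 2: balls drawn (with replacement) per sample
n_resamples = 10_000  # step 4: how many times to repeat the process

proportions = []
for _ in range(n_resamples):
    # Steps 2-3: draw 200 balls with replacement and record
    # the proportion of black ones in that sample.
    sample = random.choices(bucket, k=n_picks)
    proportions.append(sum(sample) / n_picks)

# Step 5: the average of the proportions, and a 95% confidence
# interval made by cutting 2.5% off each tail.
proportions.sort()
mean_prop = sum(proportions) / n_resamples
lower = proportions[int(0.025 * n_resamples)]
upper = proportions[int(0.975 * n_resamples) - 1]
print(f"estimate: {mean_prop:.3f}, 95% CI: ({lower:.3f}, {upper:.3f})")
```

Plotting the `proportions` list as a histogram (with something like matplotlib) gives the frequency distribution described in step 5.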

If you did that, the distribution of the proportions of black umbrellas would come out looking like a roughly bell-shaped histogram.

The proportion of black umbrellas for this process would come out somewhere around 13% and you could say that you were 95% confident that the actual value was between 8.5% and 18%.

Which would mean... You just did it!

From the sample that you took you could say that you estimated that 13% of all people in Vancouver carried black umbrellas and you were 95% confident that the actual value fell somewhere between 8.5% and 18%. Pretty neat, huh?

How to Use Bootstrapping

Obviously, if you actually did what was described above by hand it would take FOREVER. But with developments in computing/programming, this is actually calculated very quickly (it took me about 10 minutes to get the code right, including making the graph look pretty!).

The potential applications for bootstrapping aren't limited to estimating a single proportion or mean of a population. One of our class instructors suggests that bootstrapping can be used as a substitute for any statistical test that we would use in traditional statistics to talk about populations. This includes t-tests, F-tests, chi-squared tests: you name it, apparently we can do it with bootstrapping. Bootstrapping becomes particularly interesting once we recognize that all of those traditional statistical tests come with specific assumptions. If your data doesn't meet those assumptions, the conclusions you draw from those tests run the risk of being wrong. Apparently, with bootstrapped confidence intervals and hypothesis testing we can get around a lot of these issues.

I don't yet know enough to fully explain how all of this works (I've applied it to what would be done for t-testing), but I do find the idea fascinating.

Questions about Bootstrapping

But as usual, the more you know, the more you discover what you do not know. So here are some things that I don't know that I would be very interested in finding out:

  1. Are there any circumstances where we would consider the standard statistical tests to be better/faster than bootstrapping?

  2. How is the use of bootstrapping seen in academic/publication circles that expect reports of F/t statistics, p-values and other such things?

  3. So far we've been looking at pair-wise comparisons, but some of the tests I've enjoyed in traditional statistics are ANOVAs, which help answer questions like, "Is there any point in analyzing the pair-wise comparisons for this information?" I get that even if we do get a statistically significant result from something like a two-way ANOVA, we still have to do the pair-wise comparisons to confirm the interpretations. But from a bootstrapping perspective, do we just do all of the pair-wise comparisons and see what comes out? And how does this relate to things like the Bonferroni correction, which lowers the significance threshold for each individual test to reduce Type I errors?

  4. Are there any good resources that help to easily compare between what would have been done in traditional statistical tests and what we would do instead for bootstrapping? And any discussion about the limitations/benefits of each?

  5. Finally, when it comes to research, does the use of bootstrapping suggest that we can take smaller sample sizes than we would have with traditional statistical research? Or do the same considerations of wanting a realistic representation of the population still apply with bootstrapping?

So many new things to discover!