Let’s say you want to figure out whether a certain medication can lower systolic blood pressure, so you recruit 100 people, give 50 of them the medication, and give the other 50 a placebo.

The placebo looks and tastes like the medication but is completely harmless and ineffective - like a tiny capsule filled with water.

After six months of taking the medication or the placebo, you measure the blood pressure of each person in the study.

Now, blood pressure is measured in millimeters of mercury (mmHg), but we’ll keep it simple and just call it “units”.

You find that the mean blood pressure in the medication group is 130 units, and the mean blood pressure in the placebo group is 145 units.

At this point, you might use a statistical test, like an unpaired, or two-sample, t-test, to see if there’s a significant difference between the two groups’ means.
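As a minimal sketch of that step, here is how an unpaired t-test could be run in Python with scipy. The data below are simulated to match the study’s setup (50 people per group, means of 130 and 145 units); the standard deviation of 15 units is an assumption made purely for illustration.

```python
# Sketch of an unpaired (2-sample) t-test on simulated blood pressure data.
# Group sizes and means match the study; the 15-unit standard deviation
# is an assumed, illustrative value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
medication = rng.normal(loc=130, scale=15, size=50)  # medication group
placebo = rng.normal(loc=145, scale=15, size=50)     # placebo group

t_stat, p_value = stats.ttest_ind(medication, placebo)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

With a real 15-unit difference between groups this large, the test would almost always return a very small p-value.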

Typically, an unpaired t-test starts with two hypotheses.

The first hypothesis is called the null hypothesis, and it basically says there’s no difference in the means of the two groups.

For example, our null hypothesis would state that there’s no difference in the mean blood pressure for people who take the placebo compared to people who take the medication.

On the other hand, the alternative hypothesis for a t-test can be either one-sided or two-sided, and this has to be determined at the beginning of the study.

The alternative hypothesis for a one-sided t-test would state either that the medication lowers mean blood pressure compared to the placebo or that the medication raises mean blood pressure compared to the placebo.

The alternative hypothesis for a two-sided t-test would simply state that the mean blood pressure for the medication group is different from that of the placebo group, but it wouldn’t specify whether the medication raises or lowers the mean blood pressure.

Typically, researchers choose to use two-sided t-tests, since they usually don’t know how a treatment will affect the people in the study.
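The choice between one-sided and two-sided shows up directly in scipy’s `ttest_ind` through its `alternative` parameter. In the sketch below the data are simulated, with an assumed standard deviation of 15 units; `alternative="less"` tests whether the first group’s mean is lower than the second’s.

```python
# One-sided vs two-sided alternatives with scipy's ttest_ind.
# Simulated data; the 15-unit standard deviation is an assumption.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
medication = rng.normal(loc=130, scale=15, size=50)
placebo = rng.normal(loc=145, scale=15, size=50)

# Two-sided: is the medication mean different from the placebo mean?
_, p_two_sided = stats.ttest_ind(medication, placebo, alternative="two-sided")

# One-sided: is the medication mean lower than the placebo mean?
_, p_less = stats.ttest_ind(medication, placebo, alternative="less")

print(f"two-sided p = {p_two_sided:.4f}, one-sided p = {p_less:.4f}")
```

When the observed difference points in the hypothesized direction, the one-sided p-value is half the two-sided one, which is why the choice has to be made before looking at the data.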

One way to see if there’s a difference between mean blood pressures in the placebo group and the medication group is to make a histogram, which is a plot that shows how frequently each value occurs.

Here, the x-axis represents blood pressure and the y-axis represents the number of people with each blood pressure measurement, and the curve would probably look something like this.

This is called a normal distribution curve, and it’s shaped like a bell: the majority of people’s blood pressure measurements fall somewhere around the mean of 145, while fewer and fewer people have more extreme blood pressures, further away from the mean, in the “tails” of the curve.
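That bell shape can be seen numerically rather than graphically: simulate a large group of blood pressures from a normal distribution centered at 145 (the 15-unit standard deviation is again an assumed, illustrative value) and count how many fall into each bin.

```python
# Sketch: bin simulated blood pressures into a histogram to show the
# bell shape. The 15-unit standard deviation is an assumption.
import numpy as np

rng = np.random.default_rng(2)
pressures = rng.normal(loc=145, scale=15, size=10_000)

bins = np.arange(100, 191, 15)  # 15-unit-wide bins from 100 to 190
counts, edges = np.histogram(pressures, bins=bins)
for lo, hi, n in zip(edges[:-1], edges[1:], counts):
    print(f"{lo:.0f}-{hi:.0f} units: {n} people")
```

The two bins straddling the mean (130–145 and 145–160, roughly one standard deviation on either side) hold about two-thirds of the people, with the counts tapering off toward the tails.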

Now, if the null hypothesis were true, and there really were no difference in mean blood pressure between the medication and placebo groups, then we’d expect the mean blood pressure of the medication group to be exactly the same as the placebo group’s, so 145.

But there’s always some amount of natural variation between different groups of people, so we might expect the mean to vary at least a little.

For example, if the medication group just so happened to have a few more people with lower blood pressure, their average blood pressure might be a few units lower than 145.

But what if the mean in the medication group is much lower than 145, or much higher than 145? In other words, the mean might be somewhere in one of these tails.

How do we know if the difference in means that we see is due to natural variation between the groups, or if the difference is significant?

In statistics, the term “significant” means that the observed relationship between two variables is unlikely to be due to random chance alone, and it’s usually determined using a p-value.
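The decision rule itself can be sketched in a few lines: compare the p-value from the test to a cutoff, conventionally 0.05, chosen before the study. The p-value of 0.03 below is a made-up example number, not a result from this study.

```python
# Sketch of the significance decision rule.
alpha = 0.05    # conventional significance cutoff, chosen in advance
p_value = 0.03  # hypothetical p-value from a t-test

if p_value < alpha:
    decision = "reject the null hypothesis (significant difference)"
else:
    decision = "fail to reject the null hypothesis"
print(decision)
```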