  • Sorry, I think you need to brush up on statistics. The relevant measurement here is the variance (the spread, or variability) of the statistic, not the size of the statistic itself. Using the variance and the previous average of the deaths-per-capita statistic, you can calculate how likely it is that the current deaths per capita would land on its observed value if nothing had changed from past years. If that likelihood is sufficiently low (for most scientific fields, 5% or less), the result is declared significant, since it's different from what we would expect if nothing had changed, and we can say so with high (>95%) confidence. To learn more about this "predict the range the result should normally fall in, then go 'whoa, that's weird' when it doesn't" method, look up "null hypothesis", or even better, "statistical significance". A rough sketch of that calculation is below.
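    Here's a minimal sketch in Python of what that check looks like. Every number in it is made up purely to show the mechanics (a normal approximation and a 5% cutoff), not taken from any real dataset:

```python
# Null-hypothesis style check: is this year's rate surprisingly far from
# what past years would lead us to expect? All figures are invented.
import math
import statistics

past_rates = [0.00102, 0.00098, 0.00101, 0.00099, 0.00100, 0.00103, 0.00097]
current_rate = 0.00115  # hypothetical new observation

mean = statistics.mean(past_rates)
sd = statistics.stdev(past_rates)  # spread = square root of the variance

# How many standard deviations away from the past average is the new value?
z = (current_rate - mean) / sd

# Two-sided p-value under a normal assumption: the chance of landing at
# least this far from the mean if nothing had actually changed.
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"z = {z:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Significant: unlikely to be ordinary year-to-year noise.")
else:
    print("Not significant: within the range we'd expect anyway.")
```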

    To give a practical example: the number of deaths from car accidents is fairly low per capita, but since we have a very large amount of data available, the estimate has low variance and we can measure the rate very accurately. If you look at a graph of car deaths per capita over time, each year will only be something like 0.001%, but the variation between years will not be very large, because with so much data the little bits of randomness even out. We can then compare, for example, car deaths per capita on streets with crosswalks vs. without crosswalks, and even though both rates are a fraction of a percent, because both are measured so accurately we can make confident assessments from that data (a sketch of that kind of comparison follows).
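    Here's a sketch of that comparison with completely invented counts, using a standard two-proportion z-test (one common way to compare two small rates measured over huge samples; the comment above doesn't name a specific test):

```python
# Comparing two tiny rates measured over very large populations,
# e.g. (hypothetically) deaths per capita with vs. without crosswalks.
# All counts are made up for illustration.
import math

deaths_with, pop_with = 180, 20_000_000        # streets with crosswalks
deaths_without, pop_without = 260, 20_000_000  # streets without crosswalks

rate_with = deaths_with / pop_with
rate_without = deaths_without / pop_without

# Pooled rate under the null hypothesis that the two groups are the same.
pooled = (deaths_with + deaths_without) / (pop_with + pop_without)
se = math.sqrt(pooled * (1 - pooled) * (1 / pop_with + 1 / pop_without))

z = (rate_with - rate_without) / se
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"with: {rate_with:.6%}, without: {rate_without:.6%}")
print(f"z = {z:.2f}, p = {p_value:.4f}")
# Both rates are tiny fractions of a percent, but the huge sample sizes
# make the standard error tiny too, so a real difference still comes out
# as statistically significant.
```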