Statistical Thinking


Philosopher Karl Popper referred to science as the “art of systematic over-simplification.” Indeed, if science is discovery and knowledge creation, that certainly cannot take place through “systematic over-complication.” Knowledge can only be built on what is understood, and often the pathway there is through simplification.

It is often useful, therefore, to take complex concepts and methods and reduce them to their lowest form in order to fully grasp each component. Once these are understood, then it is easier to see how these parts work together as a whole and the utility or dis-utility thereof.

A Method to the Madness

The application of statistical methods can be complex, particularly with regard to multivariate techniques. There is no question that some of these methods are statistically intricate, and their use has multiplied exponentially with the power of personal computing.

We have noted in previous posts that the purpose of statistics is to measure and understand an unobservable population from a sample of data. We obviously first have to have accurate measurement, but we also need to be able to properly discern what the data are telling us.

Typically, we have a goal in mind with certain attributes we are trying to measure and/or evaluate. With respect to multivariate analyses, this would be known as our dependent variable. In a multivariate regression model, for example, we would observe the dependent variable and then try to determine and measure what factors influenced it.

Let’s set aside multivariate analysis and limit ourselves to a univariate example. This should sharpen our understanding with a healthy dose of simplicity.

Whether it’s conducting a fair lending analysis or assessing credit quality, there is always a variable of interest that we want to measure and, based on the data and results, draw conclusions. It is essential that we have accurate measurement, but we also need to be able to discern what the data are telling us. This requires an understanding of the methods, their appropriate application, and their limitations, not just the ability to run computations.

A Practical Example

To illustrate, let’s take a simple example of test scores for a student from an accelerated reading program. In this program, the students read a book and then take a short ten-question test to measure their comprehension and retention. 

As long as they pass the test, they receive points, but the points are weighted based on their scores. They receive full points if they make 100% but only partial points if they make less than 100%. They receive no points if they make less than 60%; but, in all cases, the test grade still counts in their average. A key point is that they must maintain an 85% average in each nine-week period. Let’s delve into the scores and see what conclusions we can draw.

What we want to do is assess the student’s ability and knowledge based on the scores. One of the first things to note is that, since each test has only ten questions, the scores can be volatile. In addition, the scores come in 10-point increments, while the target is 85; no single test can land exactly on 85, so to maintain an 85 average the student must score mostly 90’s and 100’s to offset any lower scores. Although the average can fluctuate as the first scores are added, it becomes progressively harder to move as scores accumulate.
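A quick sketch in Python, using made-up numbers purely for illustration (not the student’s actual record), shows why: the more scores already in the average, the less any single test can shift it.

```python
# Illustration with hypothetical numbers: a single score of 100 moves a
# running average of 83 less and less as more tests accumulate.
def new_average(old_avg, n_scores, new_score):
    """Average after adding one more score to n_scores existing scores."""
    return (old_avg * n_scores + new_score) / (n_scores + 1)

for n in (4, 9, 15):
    print(f"{n} tests averaging 83, then a 100 -> {new_average(83, n, 100):.1f}")
# 4 tests averaging 83, then a 100 -> 86.4
# 9 tests averaging 83, then a 100 -> 84.7
# 15 tests averaging 83, then a 100 -> 84.1
```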

Below are the 15 scores for the nine-week period. As shown, the student is below the necessary 85% with an average of 83.3. From the data, how should we assess the student’s ability? They failed to achieve the 85% average. But the data may suggest a slightly different picture than the average alone indicates.

Scores (highest to lowest): 100, 100, 90, 90, 90, 90, 90, 90, 80, 80, 80, 80, 70, 60, 60

First, we see that 12 of the 15 scores were 80 or above. The highest score is 100 (two of these) and the lowest is 60 (also two of these). The bottom three scores are 70, 60, and 60. The two 60’s are pulling the average below 85; if they were removed, the average would be roughly 87.
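One way to check the arithmetic is with a few lines of Python, using the 15 scores listed above:

```python
# The 15 scores from the example above.
scores = [100, 100, 90, 90, 90, 90, 90, 90, 80, 80, 80, 80, 70, 60, 60]

average = sum(scores) / len(scores)
print(f"Average of all 15 scores: {average:.1f}")  # 83.3

without_60s = [s for s in scores if s != 60]
adjusted = sum(without_60s) / len(without_60s)
print(f"Average without the two 60's: {adjusted:.1f}")  # 86.9 -- roughly 87
```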

Next, we see that the median score is 90. The median is the score in the middle: 50% of the observations are higher and 50% are lower. In addition, the mode, which is the most frequently occurring value, is also 90. This means that if you were to draw a score at random, the single most likely result would be a 90. The average, therefore, is 83.3%, just below the 85% cutoff; but the median and mode are both 90%, above the 85% cutoff.
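The median and mode are just as easy to verify; for example, Python’s standard-library statistics module returns all three summary measures at once:

```python
import statistics

scores = [100, 100, 90, 90, 90, 90, 90, 90, 80, 80, 80, 80, 70, 60, 60]

print(f"Mean:   {statistics.mean(scores):.1f}")  # 83.3 -- just below the 85 cutoff
print(f"Median: {statistics.median(scores)}")    # 90   -- above the cutoff
print(f"Mode:   {statistics.mode(scores)}")      # 90   -- above the cutoff
```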

The point here is that the average alone may not present a complete measure of this particular student’s ability. By itself, it does not describe the data, or the student’s performance, well. Simply reporting a standard computation does a poor job in this case and is, at best, incomplete.

In Conclusion

Although this is a simplistic example, the same principle applies to multivariate and more complex analyses. In fact, the complexities that can arise are only magnified with more sophisticated techniques. Such analyses and models must be fit to the data in order to truly evaluate, measure, and draw conclusions.

Today, computers run the computations. It is up to the researcher or analyst to glean what the data say, not just produce a calculation. Otherwise, at best, important information can be overlooked; at worst, erroneous conclusions can result.


How to cite this blog post (APA Style): 
Premier Insights. (2018, April 12). Statistical Thinking [Blog post]. Retrieved from https://www.premierinsights.com/statistical-thinking.

