Statistically speaking…

By: Dr. Magdalen Normandeau – Magdalen is a physics instructor and coordinator of Teaching & Learning Services at the University of New Brunswick. She is an expert at opening cans of worms and exploring rabbit holes.

Scholarship of Teaching and Learning is, in some ways, an unusual scholarly undertaking. With few exceptions, its practitioners are highly trained individuals… in some other field of inquiry! On the plus side, this variety of backgrounds leads to a variety of questions addressed and a variety of perspectives. On the minus side, many of us are unfamiliar with the tools and methodologies that would be appropriate for the investigations we have in mind. This is particularly problematic when it comes to the use of statistics.

[Figure: Word cloud formed from a list of all statistical tests reported in CJSoTL articles to date. Words appearing more often in the list are in larger font; the colours have no meaning.]

Let’s have a look at what has been done for CJSoTL papers. Since its inception in 2010, the Canadian Journal for the Scholarship of Teaching & Learning has published 130 articles. Ninety-three have been categorized as Research Papers and four as Research Notes. Of these 97 articles, 54 include some quantitative aspect. In some cases, these are counts associated with codes or percentages associated with responses on a survey. In others, means and standard deviations are given. Thirty-three of these papers also make some use of statistical tests. The word cloud in this post was made from a list of all the statistical tests used in these papers.

While percentages and averages are familiar to all who teach, anything beyond that is probably unfamiliar to a fairly large fraction of SoTL readers. Given that the point of writing research papers is to communicate with our readers in such a way that they can judge the validity of our analysis and conclusions, I would argue that we need to do a better job than we currently do of explaining and justifying our analysis. I am not suggesting that we write entire statistics textbooks into our papers; we do not need to go into the gory details of the calculations. But we should clearly state the purpose of any test we use and the conditions under which it is valid, and we need to convince our readers that those conditions hold for our data.

For instance, a t-test, like many parametric tests, assumes that the data are normally distributed. So if I use a t-test, I should state this necessary condition clearly, and I should convince my readers that my data are at least approximately normal, perhaps by presenting a plot of my data [1]. Notice how this serves two purposes: 1) my readers are better able to judge my analysis, and 2) I contribute to increasing the data analysis expertise of some of my readers.
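To make this concrete, here is a minimal sketch in Python (using NumPy, SciPy, and Matplotlib) of the kind of check described above: plotting a histogram and a normal Q-Q plot of one group's data before running a t-test. The data and variable names are invented for illustration, not taken from any CJSoTL paper.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# Hypothetical data: final exam scores for two sections of a course.
rng = np.random.default_rng(42)
section_a = rng.normal(loc=72, scale=8, size=35)  # placeholder values
section_b = rng.normal(loc=76, scale=8, size=33)

# Visual check of the normality assumption for one group:
fig, (ax_hist, ax_qq) = plt.subplots(1, 2, figsize=(8, 3))
ax_hist.hist(section_a, bins=10)
ax_hist.set_title("Section A scores")
stats.probplot(section_a, dist="norm", plot=ax_qq)  # normal Q-Q plot
fig.tight_layout()
plt.show()

# If the plots look roughly normal, run an independent-samples t-test:
t_stat, p_value = stats.ttest_ind(section_a, section_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

A plot like this, included in the paper, lets readers see for themselves whether the normality assumption is reasonable.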

For those rolling their eyes because statistics are used frequently in their disciplinary research journals and this level of justification is rarely included, I will point out that misuse of statistics is pervasive. It is not a recent phenomenon, and it is certainly not confined to SoTL. In my 1997 copy of Primer of Biostatistics by Stanton A. Glantz, a figure shows the results of four reviews of statistical methods used in the general medical literature between 1950 and 1976: in all four cases about half the papers examined used incorrect statistical methods. Half. Rather alarming! And this predates the days of one-click analysis software…

Because so many people are making these errors (misuse of elementary statistical techniques), there is little peer pressure on academic investigators to use statistical techniques carefully. In fact, one rarely hears a word of criticism. Quite the contrary, some investigators fear that their colleagues – and especially reviewers – will view a correct analysis as unnecessarily theoretical and complicated.

– Stanton A. Glantz, in Primer of Biostatistics, 4th edition

More recently, in March 2016, the American Statistical Association became sufficiently alarmed by the misuse of statistics that they issued a statement on statistical significance and p-values. And Greenland et al. (2016) published an essay titled “Statistical tests, P values, confidence intervals, and power: a guide to misinterpretations.”

So, dear SoTL enthusiasts, let’s take the first step in the right direction by carefully communicating what we have done to our data and why, and why it is justified. If you are squirming a little because you are uncomfortable with statistics but, for instance, you want to compare data sets, then instead of trying to fake your way through it, why not talk to your colleagues? There are undoubtedly people teaching stats courses at your institution, so track down a friendly statistician, and ask for advice and guidance. It may be the start of a beautiful – and significant – collaboration (with a large effect size)!

[1] I could go one step further and also state the skewness and kurtosis of my data. A step further still would be to report the results of a Kolmogorov-Smirnov or Shapiro-Wilk test of normality.
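For readers curious what those extra steps might look like in practice, here is a minimal sketch, again in Python with SciPy and on invented data, that reports skewness, excess kurtosis, and a Shapiro-Wilk test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
scores = rng.normal(loc=70, scale=10, size=40)  # hypothetical sample

# For a normal distribution, both values below are 0.
print(f"skewness = {stats.skew(scores):.2f}")
print(f"excess kurtosis = {stats.kurtosis(scores):.2f}")  # Fisher definition

# Shapiro-Wilk test: a small p-value suggests the data are not normal.
w_stat, p_value = stats.shapiro(scores)
print(f"Shapiro-Wilk: W = {w_stat:.3f}, p = {p_value:.3f}")
```

(A Kolmogorov-Smirnov test is also available via stats.kstest, though for small samples the Shapiro-Wilk test is commonly preferred as a normality check.)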


2 Responses to Statistically speaking…

  1. Neil Haave says:

    Well said, Magdalen!

  2. Janice Miller-Young says:

    Thanks for posting this, Magdalen! I am NOT rolling my eyes. I like your point that clearly communicating your choices not only improves the paper and thus colleagues’ ability to judge your analysis, but also that it plays a role in educating the audience. Of course, this is true of any choice of analysis method(s)!!
    Here’s a great resource when it comes to quantitative methods: http://ca.wiley.com/WileyCDA/WileyTitle/productCd-111883867X,subjectCd-EDZ0.html
