March 2014
Metrics are useful, but…
I am at the San Francisco Airport waiting to board a red-eye back to Boston after two days at Google for a gathering of colleagues from a few other Higher Ed organizations and K-12 school districts. I love visiting Google. Unfortunately, I cannot openly discuss everything that was part of the meeting.
I have been thinking a lot about metrics. Thanks to technology, we have a tremendous amount of data. Some of it is clean and some is not; the balance of good to bad data depends a lot on the institution and its commitment to collecting the data and keeping it clean. Many institutions use metrics for comparison purposes, either to show trends within the institution (the faculty/administrative staff ratio over the past ten years, say) or to compare themselves with their peers. I am sure that a lot of thought goes into these metrics, but some of them tend to fall in the category of “they are doing it, so we must”.
Recently, I was thinking aloud about a metric, “acquisitions dollars spent per student (or per student + faculty)”, that is reported by libraries so we can compare ourselves with each other, study trends, and plan for the future. Why is this a relevant metric? Is it the case that the college that spends the most acquisitions dollars per student is somehow superior in its commitment to supporting students? I am not so sure. I have asked around and have not gotten a satisfactory answer. A further breakdown between electronic content and the rest, tied to the circulation habits of students and faculty, would be a better metric. I do understand that the circulation metric is itself problematic, in that a book that is used in the library but not checked out is not counted. Perhaps a metric that combines circulation with some sampling of reshelving frequency would be a more useful indicator; a rough sketch of what that might look like follows.
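To make this concrete, here is a minimal sketch of such a blended metric. Every number below is made up, and the extrapolation from a reshelving sample to annual in-library use is deliberately naive; a real version would have to worry about sampling bias and seasonal variation.

```python
# Hypothetical blended "uses per student" metric: checkouts plus in-library
# use estimated from a reshelving sample. All figures are illustrative.
checkouts = 12_000            # annual circulation count
sample_days = 20              # days on which reshelved items were tallied
reshelved_in_sample = 1_400   # items reshelved during those sample days
open_days = 300               # days the library was open that year
students = 2_400

# Naive extrapolation: average reshelved per sampled day, scaled to the year.
estimated_in_library_use = reshelved_in_sample / sample_days * open_days
total_use = checkouts + estimated_in_library_use

print(f"Estimated uses per student: {total_use / students:.1f}")
```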
On the technology side, we tend to compare how many public lab computers each of us has as a function of the number of students. Whereas this was important in the late 1990s and early 2000s, is it useful any longer? With the prevalence of laptops and technologies that let one access academic software with ease, either on one’s own laptop or through remote access, why is having a lot of public computers a valid measure? We, for one, are gathering data on what exactly our students use these public machines for (it turns out, mostly reading email and browsing the web) to inform the most appropriate way to manage them.
Sports TV abuses data in ways I could talk about for days. “Fourth down and 2. The percentage of times this team has gotten a first down under this condition is 30%.” Anyone with a basic knowledge of statistics will ask: how often has this team actually been in this situation, and is that enough for the statement to be statistically meaningful? In other words, what is the confidence level of such a statement? No one cares, because when numbers exist, one comes up with stats that make the broadcast interesting.
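A quick calculation shows why the sample size matters. The numbers here are my own hypothetical (say the team has faced fourth-and-2 ten times all season), and I am using the standard Wilson score interval for a binomial proportion:

```python
import math

def wilson_interval(successes, trials, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / trials
                                     + z**2 / (4 * trials**2))
    return center - margin, center + margin

# Hypothetical: 3 conversions in 10 fourth-and-2 attempts, i.e. "30%".
low, high = wilson_interval(3, 10)
print(f"95% CI: {low:.0%} to {high:.0%}")  # roughly 11% to 60%
```

With only ten attempts, that “30%” could plausibly be anywhere from about 11% to 60%, which is to say the stat tells the viewer almost nothing.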
“How many followers do you have on Twitter?” is another one of those metrics that, by itself, is meaningless. Active interaction between you and your followers is what matters; just counting followers is a meaningless exercise. As you all have heard by now, the number of registrants in a MOOC is far less important than finding out how many active students stick around until the end. This does not mean that only those who receive a certificate are active learners; there are many who are still active but don’t necessarily bother to take the tests toward a certificate.
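Even “stick around until the end” has to be defined with care. Here is a toy sketch of one possible definition, counting anyone with activity in the final week as active, rather than only certificate earners; the data and the cutoff are entirely hypothetical:

```python
# Hypothetical log: each student's last week with any course activity.
last_active_week = {"s1": 12, "s2": 2, "s3": 12, "s4": 7, "s5": 12}
course_length_weeks = 12

registered = len(last_active_week)
# Count as "active to the end" anyone with activity in the final week,
# whether or not they sat the tests for a certificate.
active_to_end = sum(1 for w in last_active_week.values()
                    if w == course_length_weeks)

print(f"{active_to_end} of {registered} registrants "
      f"({active_to_end / registered:.0%}) were active through the final week")
```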
All of this points to asking the right questions up front about what useful information we can gather from the data at our disposal, and only then constructing the metric. Otherwise, a flawed metric will continue to provide a level of comfort that is equally flawed.