EDUCAUSE Core Data Survey

We just finished and submitted the EDUCAUSE Core Data Survey, and I thought this would be a better forum for discussion than the narrow one on the EDUCAUSE CIO listserv. Today is the deadline, and we made it.

First off, I hope what I say below will help improve the survey and bring more clarity to those of us providing the input as well as benefiting from such an important dataset. As a CIO, I totally get that users are not shy when things don’t work and are deeply silent when things do. In this case, I am a user! And a dedicated one: we have submitted the surveys every year that I can remember, though to be honest, their value is diminishing for reasons I will touch on below. The dedication also comes from a deep-rooted fear that not filling out the survey will get us kicked out of the Consortium of Liberal Arts Colleges (CLAC), since filling out Module 1 is a membership requirement. CLAC membership is one that I treasure tremendously!

It is clunky and not easy

EDUCAUSE is the premier Higher Ed technology organization, and for a while now I have complained about the website performance. Logging in takes anywhere from 30 seconds (on a good day) to a full minute. And I know that it is not just me! I was glad to hear at the annual conference that, under the new leadership, this is on the radar for improvement.

I look under “Connect & Contribute” and do not find a link to the survey. I then search (because this is how the entire world operates now, right?) for “Core Data Service” and “Core Data Survey”, and all I get are some publications and no link to the current survey. Then, like all old folks, I go to my email to find the link. I would characterize this as clunky.

Then, I submitted Module 1. I came back to the landing page and it showed the module as in progress. I refreshed it a few times and it still said “In Progress”. So I clicked on “Continue with the module” (or the equivalent), and it complained that “You have already taken the survey”. I wrote to EDUCAUSE staff about it, and this was attributed to the last-minute rush. About 45 minutes later, it showed as Submitted. EDUCAUSE and Qualtrics (which I believe administers the survey) can do better than this, I would think!

Yeah, yeah, yeah – I should stop complaining and push my directors to help finish on day 1 to avoid the last-minute rush.

One size fits all

We know that trying to level the playing field has definite merits for comparison purposes, but it simply does not work. There is no single “Higher Ed”. ERP vendors tried to force us all to behave the same way administratively and failed miserably! This is precisely why the utility of the survey is diminishing.

We are a small liberal arts college where library and technology services have been merged for a long time. Distinguishing who is doing purely technical work, or even predominantly technical work, is much harder than one would think. We face the same dilemma when we fill out the Oberlin Group survey for library staff. The College administration centralized the functional IT staff, who now report to Library and Technology Services, and recently the Registrar’s office joined us. Though these changes make the staffing questions a little easier to answer, we still have to make some reasonable guesses about budget allocations. How do we know that our peers in similar circumstances are making similar assumptions?

And I was surprised to find this year that “Reference desk and staff (library/IT staff in a merged organization)” is included under “Information Technology (IT) Support Services” staff. First off, I am not sure what the implications of this are for longitudinal comparisons; second, I am not sure how the staff who manage the reference desk would feel about this classification.

I hope EDUCAUSE can figure out a common set of questions that applies to all of us and is useful, and then add sections specific to each type of institution. I would love to see a combined EDUCAUSE/CLAC survey where the second part is specific to our needs.

Data usage

There have been some improvements in the way the data is delivered, and I was excited to hear some talk of even better ways for us to interact with the data in the near future. Frankly, the most relevant data for us is the CLAC extract of the CDS data: CLAC members get a slice containing only the data from CLAC schools that completed the survey.

Metrics such as the number of desktop staff per faculty member or student are so institution-dependent that comparisons across such a wide spectrum have far less meaning for us. “$906 is the central IT median spent per institutional FTE (students, faculty, and staff)” sounds good, but what exactly does it mean, and how does one use it? This number will be totally different for a large university than for a small college, and I bet it also depends on an institution’s location. I get that anyone interested in such slicing and dicing can do so, but what I would like to ask is this: is there a survey model that simplifies data collection and analysis in a way that is most helpful and relevant to those spending all this time filling it out?

Too many surveys

Frankly, there are just way too many surveys that we have to fill out every year, and I have tweeted that it sometimes feels like we each need a Chief Survey (Response) Officer. I am surprised this is not an EDUCAUSE Top 10 Issue for CIOs. We just finished Casey Green’s Campus Computing Survey, and now this. We have already received a request to provide data for the Oberlin Group salary survey. Last spring we did a MISO survey. Each of these takes time to collect data for and complete, because we want to be true to the process and not just provide some data for the sake of providing it. Then, when the results come, we want to use them in meaningful ways, and as I have pointed out, that is becoming harder and harder to do.

A request

In addition to the few things I have written above, I would love for the next round of survey creators to think about a baseline model. Regardless of how many entities and services we support, there is a baseline of resources needed, and that baseline has far less variance than the components that clearly scale with the number of entities and services supported. For example, supporting an ERP requires a certain number of staff and other resources, and for certain modules this may be roughly the same up to a certain number of faculty, staff, and students. Say, supporting up to 5,000 users may require the same number of staff and computing resources, scaling up afterwards in blocks as users increase. But the important thing is that below this threshold the requirement stays flat; it does not scale down. For smaller institutions, I feel that these kinds of baseline benchmarks are far more important; a rough sketch of what I mean follows below.
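To make the idea concrete, here is a minimal sketch of what such a baseline model might look like. Every number in it (a 4-person baseline, a 5,000-user threshold, one additional staff member per 2,500 extra users) is a made-up placeholder of my own, not CDS data:

```python
def erp_support_staff(users, baseline_staff=4, baseline_users=5000,
                      staff_per_block=1, block_size=2500):
    """Hypothetical baseline staffing model: flat below a threshold,
    then stepping up in blocks. All parameter values are illustrative
    placeholders, not survey data."""
    if users <= baseline_users:
        # Below the baseline, the need does not scale down.
        return baseline_staff
    # Above it, add one block of staff per block of additional users.
    extra_blocks = -(-(users - baseline_users) // block_size)  # ceiling division
    return baseline_staff + extra_blocks * staff_per_block

# A 2,000-user college needs the same baseline as a 5,000-user one:
print(erp_support_staff(2_000))   # 4
print(erp_support_staff(5_000))   # 4
print(erp_support_staff(12_000))  # 4 + ceil(7000/2500) = 7
```

The point is the flat left side of that curve: a 2,000-user college and a 5,000-user college would be benchmarked against the same baseline, rather than against a per-FTE ratio that implicitly assumes everything scales down.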

I would like to thank all of the EDUCAUSE staff who work on this and request that you don’t take this personally! I know how hard a task this is.
