Jan
2018
Artificial Intelligence in Higher Ed
For a place considered to have a high level of natural intelligence, it may seem odd to even think about Artificial Intelligence (AI) in Higher Ed. However, given that Higher Ed has had a lot to do with the creation of AI, we might as well consider whether there is a way we can benefit from it. There has been a recent explosion of activity in this area, and many articles on it can be found with a quick Google search. “The Most Exciting Artificial Intelligence Applications in Media” and “Top 10 Hot Artificial Intelligence (AI) Technologies” are a couple of interesting (totally randomly selected) readings.
I also encourage you to read “The Great AI Paradox” by Brian Bergstein. This article discusses the distinction between “true” AI and “computational statistics,” and how some argue that machines are pretty far away from having “true intelligence.” Let us set aside these differences and explore whether we can take advantage of what is currently being touted as AI.
I should mention that I took two courses in 1984, one on AI and the other on Natural Language Processing. The first was a jam-packed course and, frankly, I was totally confused by what was presented in it. Since AI was in its very early stages, we discussed designing computer systems that could distinguish a cube from other simple, regular polyhedrons such as a pyramid. With meager computational power, training a computer to recognize these objects was a huge challenge. However, I had a better experience in the Natural Language Processing (NLP) course, where I wrote a final paper that I still remember. It was about designing a system that would be fed research articles from the same research group as a training set; when new results were provided, it would churn out the next publishable paper! It was based on my humble observation that the introduction and background sections of scientific journal articles tend to follow patterns that could be learned, making the next one easy to write. My instructor gave me an A but asked me not to talk about the idea with my postdoctoral mentor or other faculty at Hunter College!
So, there is an idea to start with.
Disclaimer – the ideas I discuss below all have serious implications in terms of invasion of privacy; data associated with human subjects requires appropriate vetting and may require permission from the Institutional Review Board or other governing bodies. I also want to assure you that we at Wellesley are not doing anything that is discussed below, unless I specifically note it.
Because of the enormous amount of data at our disposal, it is possible to conceive of what we could potentially do. For example, one idea I have heard is to use network connectivity data for early intervention. Feed the network connectivity data for students into a system that, over a period of time, learns each student’s pattern of connectivity; when it detects outliers, information is passed on to the appropriate staff for possible examination and intervention. If a student who is registered for courses is not in the classroom (by virtue of the fact that his/her smartphone is not connected to the classroom access point), but is connected to the dorm’s access point or other locations, and this becomes a pattern, would it be useful for advisors to know and intervene early? Such things have potential application in very large institutions where it is extremely hard to keep track of such information otherwise. Unfortunately, when you collect such data, you also know a lot more about the individual than you need to, and that can potentially be misused. And if the data is compromised, it is a whole other story. So, it is essential to design such systems carefully to address these issues.
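To make the outlier idea concrete, here is a minimal sketch of what “learning a pattern of connectivity and flagging deviations” could look like. All of the data and thresholds are hypothetical and invented for illustration: it simply flags students whose classroom access-point connection count this week falls far below their own historical average.

```python
from statistics import mean, stdev

def flag_outliers(weekly_counts, current, threshold=2.0):
    """Flag students whose classroom-AP connection count this week falls
    more than `threshold` standard deviations below their own historical
    average. Returns student IDs for a human to review -- not a verdict."""
    flagged = []
    for student, history in weekly_counts.items():
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and (mu - current[student]) / sigma > threshold:
            flagged.append(student)
    return flagged

# Hypothetical data: past weekly classroom connection counts per student.
history = {
    "s1": [14, 15, 13, 16, 14, 15],
    "s2": [12, 13, 12, 14, 13, 12],
}
this_week = {"s1": 14, "s2": 2}   # s2 barely appeared in classrooms
print(flag_outliers(history, this_week))   # only s2 deviates from her own norm
```

Note that the system compares each student to her own baseline rather than to other students, which is one small way to reduce the risk of unfair comparisons; the real design questions (consent, retention of the data, who sees the flags) are the hard part.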
In the same fashion, one could build a system that learns from past data to signal potential issues with student retention. Are there indicators, similar to those in past cases of students leaving the College, that can be used to tell us when something similar is brewing, so that we can provide the necessary support to make a student successful? As simple as this may sound, I want to repeat that an ill-constructed system can potentially lead us down the wrong path.
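One very simple way such a signal could be computed is a nearest-centroid comparison: represent each past student as a small feature vector and ask whether a current student looks more like the students who left or those who stayed. The features and numbers below are entirely hypothetical, and a real system would need far more care (and validation) than this sketch.

```python
def centroid(rows):
    """Component-wise average of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def risk_score(student, left, stayed):
    """Positive score means the student is closer to the centroid of past
    students who left than to the centroid of those who stayed."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return dist(student, centroid(stayed)) - dist(student, centroid(left))

# Hypothetical features: [GPA drop, unexcused absences, advisor visits]
left   = [[1.2, 8, 0], [0.9, 6, 1]]   # past students who left
stayed = [[0.1, 1, 3], [0.2, 2, 2]]   # past students who stayed
print(risk_score([1.0, 7, 0], left, stayed) > 0)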
Another idea that has surfaced at Wellesley, not necessarily in the context of AI, is the development of a course recommendation system. The basic idea is to use a model like Netflix’s: a system that would analyze, or in the case of AI, learn from, past registration data to make all sorts of recommendations – “Based on the types of courses you have taken in the last two semesters, here are some courses you may be interested in,” or “Several students from the Midwest (like you), who were also psychology majors, took these courses, so you might want to explore them.” I know that the first objection from our faculty will be that this is not in the spirit of exploration in a liberal arts environment! However, such recommendations can be very powerful in assisting students in other contexts.
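At its simplest, a Netflix-style recommender can be built on co-occurrence: courses that frequently appear together on past transcripts are suggested to students who have taken one of the pair. The course names and registration data below are made up for illustration; real recommenders use far richer models, but the core idea is this counting.

```python
from collections import Counter
from itertools import combinations

def build_cooccurrence(enrollments):
    """Count how often each pair of courses appears on the same transcript."""
    pairs = Counter()
    for courses in enrollments:
        for a, b in combinations(sorted(set(courses)), 2):
            pairs[(a, b)] += 1
            pairs[(b, a)] += 1
    return pairs

def recommend(taken, pairs, k=2):
    """Suggest the k courses most often co-taken with the student's courses."""
    scores = Counter()
    for course in taken:
        for (a, b), n in pairs.items():
            if a == course and b not in taken:
                scores[b] += n
    return [c for c, _ in scores.most_common(k)]

# Hypothetical past registration data (one list of courses per student).
past = [["PSYC101", "STAT218", "CS111"],
        ["PSYC101", "STAT218", "PHIL103"],
        ["CS111", "STAT218", "MATH205"]]
print(recommend(["PSYC101"], build_cooccurrence(past)))
```

Because the model only echoes what past students did, it will happily reinforce existing patterns – which is exactly the “spirit of exploration” objection in algorithmic form.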
Adaptive learning is another area where AI can play a role. This is an open area with a multitude of possibilities. If a system learns about a student through online assessments, as well as learning his/her writing style using natural language processing tools, it can potentially evaluate progress against learning goals using rubrics and guide the student with more directed assessments, as well as suggestions for additional reading. Some of this is already happening in K-12, in certain Higher Ed institutions, and in professional education; ALEKS from McGraw Hill is one such system. Just a reminder – a lot of data is collected here, and it alone should never be used to judge a student; but inevitably it will be, and it will affect some students!
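The core loop of an adaptive system is small: score each topic against a rubric, then direct the student toward the topics with the largest gaps. Here is a minimal sketch of that step; the topic names, scores, and the 0.8 mastery threshold are all assumptions made up for illustration.

```python
def next_focus(scores, mastery_threshold=0.8):
    """Return the topics scoring below the mastery threshold, weakest
    first, so the system can direct further assessments or readings
    toward them rather than re-testing what is already mastered."""
    gaps = {topic: s for topic, s in scores.items() if s < mastery_threshold}
    return sorted(gaps, key=gaps.get)

# Hypothetical rubric scores (0.0-1.0) from a student's online assessments.
scores = {"thesis statements": 0.55,
          "citation style": 0.90,
          "paragraph structure": 0.70}
print(next_focus(scores))   # weakest topics first; mastered ones omitted
```

Everything interesting in a real adaptive system lives in how those scores are produced (and how fairly) – this sketch only shows what is done with them.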
As I mentioned earlier, the delineation between true AI, computational statistics, and predictive analytics is becoming harder to gauge. As a result, some of the applications that would tremendously benefit administrative areas could be classified as any one of these. For example: fraud detection in financial matters (or plagiarism detection on the academic side); automation in processing expenses (whereby systems learn how to interpret receipts and can enter them automatically); and generation of FAQs (based on the types of questions that come into a helpdesk queue and the answers provided, AI systems can automatically generate FAQs for future resolutions). We can also analyze the data we already capture, or could capture, to look for patterns and discern whether we are at risk of a hacking attack or are in the middle of a compromise. Most of these are modeled on outliers from normal behavior.
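The “outliers from normal behavior” idea applies almost unchanged on the security side. As a hedged sketch with invented host names and counts: keep a per-host baseline of failed-login counts and flag any host whose count today spikes far above its own norm.

```python
from statistics import mean, stdev

def suspicious_hosts(baseline, today, sigmas=3.0):
    """Flag hosts whose failed-login count today exceeds their historical
    mean by more than `sigmas` standard deviations. The max(sd, 1.0)
    floor keeps a very quiet host from triggering on trivial noise."""
    flagged = []
    for host, history in baseline.items():
        mu, sd = mean(history), stdev(history)
        if today.get(host, 0) > mu + sigmas * max(sd, 1.0):
            flagged.append(host)
    return flagged

# Hypothetical daily failed-login counts over the past week.
baseline = {"web01": [3, 4, 2, 5, 3], "vpn01": [10, 12, 9, 11, 10]}
today = {"web01": 4, "vpn01": 240}    # sudden spike against the VPN host
print(suspicious_hosts(baseline, today))
```

Whether the spike is a brute-force attempt or a misconfigured client, the point of the model is only to surface it for a human to investigate.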
I, like many of you, am already a beneficiary of AI systems. Google Assistant on my phone has vastly helped me in the past few years. It reminds me about my meetings, when I should leave for my flight (based on traffic patterns), how many more minutes of exercise I have left to meet my daily goal, and so on. It brings together an enormous amount of data from my emails, calendar, web viewing habits, tweets, how much walking I do, and possibly other things I am not even aware of, to learn about me and assist me. Some of it I don’t necessarily need, but other things are of enormous help. I am just waiting for the day it tells me which color shirt and pants I should wear to work!
Whereas some institutions are experimenting with new and emerging tools in this area, a better strategy would be to identify one or two problems we face today that we don’t have a clear approach to solving, and see whether AI offers a way to assist us.
I have to go because my Google Assistant is reminding me that I have a meeting to go to that will take me 5 minutes to get to…