Posts Tagged ‘IBM’

Artificial Intelligence

I was at the NERCOMP Annual Conference last week. There were some really interesting presentations, but I have to say that the opening keynote by Gerard Senehi was less than ideal for starting a conference. Danah Boyd, on the other hand, was fantastic, talking about how even the younger members of our society care about privacy, contrary to the myth that they don't.

One talk that I particularly liked and want to follow up on had to do with open educational resources. The PowerPoint presentation is available along with the abstract, so please review it. Though some of the panelists are from institutions that are very different from ours, I feel there is something here for us to learn and to share with our community.

Artificial Intelligence has been in the news recently and, frankly, defining it in clear terms is something I am not capable of. It has morphed over the years thanks to advances in computing. Is it possible for machines to emulate the way we humans think? This is a loaded question, as you can imagine.

Theoretically speaking, an artificial intelligence system must pass the Turing test. The test grew out of a party game in which an interrogator questions a man and a woman, hidden from view, and tries to guess which is which. The man tries to trick the interrogator into guessing wrong, while the woman answers truthfully to help the interrogator get it right. Turing proposed replacing one of the players with a machine, so that the interrogator must now guess which is the human and which is the machine. If the interrogator fails to guess correctly more than half the time, the machine is declared to have passed the test (that is, it has enough intelligence on its own to fool the interrogator).
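To make that pass/fail criterion concrete, here is a minimal sketch in Python of how such a test might be scored. The judge here is a stand-in that just guesses at random; the function names and the simulation itself are my own illustration, not anything from Turing's paper.

```python
import random

def run_imitation_game(judge, num_rounds=100):
    """Score a simplified Turing test: each round, the judge is shown two
    hidden parties (one human, one machine) and must guess which slot,
    'A' or 'B', holds the machine. The machine passes if the judge does
    no better than chance (correct half the time or less)."""
    correct = 0
    for _ in range(num_rounds):
        machine_slot = random.choice(["A", "B"])  # hide the machine
        if judge() == machine_slot:               # judge returns "A" or "B"
            correct += 1
    accuracy = correct / num_rounds
    return accuracy, accuracy <= 0.5  # passed if at or below chance

# A judge who can only guess at random; against such a judge,
# any machine passes about half the time.
accuracy, passed = run_imitation_game(lambda: random.choice(["A", "B"]))
print(f"judge accuracy: {accuracy:.0%}, machine passed: {passed}")
```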

There are a lot more underlying details to this, of course. Because of the availability of massive amounts of data and computing power, even "brute force" computing can be mistaken for intelligence.
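To illustrate, here is a toy sketch of a "chatbot" that has no understanding at all; it simply returns the canned reply whose stored prompt shares the most words with the input. The tiny corpus and function names are invented for illustration, but scale this to millions of stored exchanges and brute-force lookup can start to look like intelligence.

```python
# A deliberately dumb "chatbot": no model of language, just lookup.
CORPUS = {
    "how are you today": "I'm doing well, thank you for asking!",
    "what is the weather like": "I hear it is lovely outside.",
    "tell me about yourself": "I'm just a simple lookup table.",
}

def word_overlap(a: str, b: str) -> int:
    """Count how many words two strings share."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def reply(user_input: str) -> str:
    # Pick the stored prompt with the most words in common with the input.
    best_prompt = max(CORPUS, key=lambda p: word_overlap(p, user_input))
    return CORPUS[best_prompt]

print(reply("How are you?"))  # -> "I'm doing well, thank you for asking!"
```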


Independence Day

Hope you enjoyed July 4th! As many of you know, I was born in Sri Lanka and lived there for 16 years as an Indian at heart. I moved to India and spent 6 years going to college. I then moved here 35 years ago, studied, and settled happily. (OK, now you know how old I am!) All three countries celebrate their independence days in grand fashion. By the way, I feel bad for the occupiers (in all three countries I have lived in, the British were the occupiers) because they don't get Independence Day as a holiday. In Sri Lanka and India, when I was growing up, the Independence Day celebrations were a big deal. In the '60s and the '70s there were still many who had directly experienced the struggle for independence and remembered the sacrifices of their own and others, and the celebrations were grand. As time passes, the reasons for celebration change, naturally.

Some of us will be making the long trek to Middlebury College next week for a gathering of staff from six colleges: Amherst, Brandeis, Middlebury, Wellesley, Wesleyan and Williams. About 10-11 years ago, the leaders of IT from Wesleyan, Brandeis and Williams (WBW) decided to bring staff from the three organizations together for an informal gathering and exchange of ideas. It was a lot of fun and productive. Some of us have since moved on to other institutions and wanted to restart this as an expanded group. I will write about the meeting next week.

Based on what I heard at the CLAC annual conference last week and the agenda for next week's gathering, I see that many of us are really worried about our dependence on a variety of things. We seek an independence and flexibility that may or may not exist, and that may come at great costs we don't have the luxury to fund.

There used to be a time, not too long ago, when hardware costs were what technology leaders in higher ed worried about. Whether it was acquiring mainframes from IBM or DEC, or subsequently minicomputers in the late '80s, these were major investments that exclusively served the business processes of the institutions. The early '90s saw major disruptions when the personal computer revolution came about, and institutions were caught off guard on many fronts. Many institutions were not budgetarily prepared to fund computers for faculty and staff, but most importantly, there were not enough staff to support them. The sad truth is that these "personal computers" were developed for "personal use"; the internet changed the game, and the transition was ugly. Dissatisfaction among faculty and staff was at an all-time high, and the leadership of many IT organizations changed.

After 10 or 15 years, we have forgotten all of this because hardware is cheap and has been commoditized. These two factors have given us the independence and flexibility we want. If a vendor does not provide what I want, I have many others to turn to. Besides, I can swap hardware A for hardware B with such ease these days that there is very little incremental cost in doing so. Virtualization is another huge game changer, making it incredibly easy to divide a single physical server into multiple virtual servers. So, in general, we are satisfied with the hardware landscape.
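For the curious, here is what that division looks like from the administrator's side: a minimal sketch using the libvirt Python bindings (assuming they are installed and a local QEMU/KVM hypervisor is running) to enumerate the virtual servers carved out of one physical machine.

```python
# Sketch: list the virtual servers running on one physical host.
# Assumes the libvirt Python bindings and a local QEMU/KVM hypervisor.
import libvirt

conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
try:
    for domain in conn.listAllDomains():
        state = "running" if domain.isActive() else "stopped"
        # maxMemory() reports each virtual machine's memory ceiling in KiB.
        mem_gib = domain.maxMemory() / (1024 * 1024)
        print(f"{domain.name():20s} {state:8s} {mem_gib:.1f} GiB")
finally:
    conn.close()
```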

The other major area, and one that is vexing for us, is software. You have heard so much from me on this that I will keep it brief. Basically, it is a huge mess for certain software and there seems to be no relief in sight. At the CLAC meeting, when someone suggested a possible collaboration amongst colleges to run an ERP (e.g. Banner) jointly as a way to reduce costs, a seasoned CIO quipped, "I don't see that happening in my lifetime." I am a bit more optimistic than that. Most of our frustration stems from the fact that, despite costing us a boatload of money, these systems don't do what we want them to do. I do understand the struggle that software vendors face in trying to satisfy thousands of institutions, each with different expectations, but when I pay annually 4-6 times a student's full tuition, I expect a lot more, and I don't necessarily care how the vendors accomplish it.

This is precisely why the fiercely independent persona in me loves open source. Of course, it has its own "dependence" – the worldwide community that develops the software – and the fear of the unknown. What if the community stops caring and stops developing it? Indeed, that is a real issue, but think about it: it is "open" for a reason. If the software stops being supported, you have access to everything you put into it and you can chart your own course at that point. This is next to impossible in the case of commercial software. Moving from one ERP system to another is so prohibitively costly that, with very few bold exceptions, no one dares to even think about it. Did I say I would keep it brief? OK, I will stop.

In a similar vein, the journal publishers tie our hands big time. We don't have much leverage to influence the rise in costs. Open access policies and open access journals are beginning to provide us with some much-needed flexibility and independence.

The two items I discussed above are at the institutional level. Due to increased awareness and knowledge of technologies, every member of the community also seeks technology independence in some sense. The whole BYOD (bring your own device) trend, and the variance in the operating systems and software that everyone wants to use, pose new issues. We want to encourage these to a large extent, because they are essential for creativity and inquiry; however, we also need to think about certain boundaries.

Democracies are successful because independence is combined with certain constraints, such as laws, rules and regulations; otherwise there could be total anarchy. Not everyone may agree with all the rules and regulations, but the expectation is that all citizens follow them for the collective good. In the same way, we as an organization, in consultation with advisory committees such as the Advisory Committee on Library and Technology Policies and the president's cabinet, create boundaries and try our best to enforce them, all for the collective good of everyone. Of course, not everyone always agrees with what we do. We try our best to accommodate as many variations as possible and to explain why we do what we do. I prefer to meet with those who feel that our decisions are adversely affecting their ability to conduct research or business.

As someone said, "It is hard to hate someone in person." And I would add, "especially if the person makes sense," even if we disagree.

Big Data is a Big Deal

Many of us are saddened by the Boston Marathon bombings and are relieved that the ordeal has come to an end. Or has it? I think each of us will take our own time to reflect on the events, digest both the reliable information and the misinformation being directed at us from all directions, and derive our own conclusions. As I wrote in my last post, various technologies played important roles in identifying the suspects and eventually capturing one of them. The events brought to light several important things: the explosion of technologies, how law enforcement relied on distributed technologies (video recordings from sources other than law enforcement), social media and crowd-searching (crowdsourced searching), and thermal imaging.

Frankly, what got lost in all of these discussions is how every one of these items is far more complicated than the positive aspects that helped us in the end. And most importantly, what led to the surviving suspect was an actual curious human being, not the technology. Quite obviously, every step of the way there were pitfalls: privacy, security, misuse of captured information, and the dangers of subjectivity arising from crowdsourcing the search, which carries a high probability of implicating the wrong people. And the massive data that was helpful in cases like this and others is "Big Data".


“It is not what it used to be” – Oh yeah, for sure!

The summer is winding down and we are all getting ready for the start of yet another academic year. As always, there are noticeable changes that will affect our faculty and students when they get back. Some will see them as welcome changes whereas others will see them as annoyances. In a few cases, the changes we have to make are out of our control, but the users don’t necessarily care. We are getting ready to communicate these changes.

I attended a day-long meeting last Monday in Northampton organized by NERCOMP. The program committee, which plans the annual conference, met in the morning, and I was assigned to one of the most boring tracks: Policy, Regulations and Security. We were a fun group, so it was OK. One of our tasks was also to come up with the theme and suggestions for keynote speakers for the upcoming conference. Because of the topics I suggested, I may have earned the nickname "The Disruptive Technologist". No, I was not disrupting the proceedings; the title came from the topics I was suggesting.

During the subsequent discussions, I heard more than once that "working in higher ed is not what it used to be." I have a feeling that some of what I was proposing prompted the others to say this to me. I was stressing the urgency for us to realize that we cannot get too comfortable with what we do; instead, we need to be extremely agile and constantly develop in new areas. I totally agree with the statement. In fact, who wouldn't? More importantly, isn't this true everywhere we look? And hasn't it always been this way, just accelerating more recently?
