I spent a significant part of December and a couple of weeks in January in India. Some of it was a true “vacation,” but for a couple of weeks, I worked remotely from a relative’s apartment. During my previous trip, I had purchased a phone specifically for use in India, preloaded with Google Pay and other essential apps. However, in the rush of last-minute packing, I accidentally left the phone behind, leading to some unexpected challenges.
Long story short, I remembered that my mobile plan allows me two numbers, one of which was installed as an eSIM on the primary phone I use in the U.S. While having the number was helpful, enabling it for transactions like Google Pay required additional verification steps, including a visit to the bank. In the meantime, I experimented with an app called MyPursu, designed to let non-resident Indians make payments without a local phone number. It's a brilliant concept, but unfortunately it didn't work reliably in every situation. Eventually, I set up my own phone for transactions, and it worked seamlessly, making it the only device I needed.
AI and Institutional Policies
Beyond personal tech challenges, AI remains a central focus area for me. I am actively involved in AI initiatives, particularly as a member of the AI Working Group formed by the Provost, where we are developing guidelines for faculty regarding AI’s role in teaching and research. Additionally, I am chairing a parallel group focused on the responsible use of AI for administrative staff. As AI becomes increasingly integrated into software applications, it is essential to critically evaluate how these tools process and protect institutional data before they are widely adopted.
Emerging AI Models and Their Implications
One AI model that has garnered attention is DeepSeek. According to Wikipedia, DeepSeek makes its generative AI algorithms and training details open-source, promoting accessibility and innovation. However, its API version in China applies content restrictions in compliance with local regulations, limiting responses on sensitive topics. While DeepSeek claims to be more energy-efficient than its competitors, concerns about privacy and regulatory limitations persist.
Meanwhile, Google Gemini is stepping up its privacy assurances for Google Workspace for Education customers. When users access the free version of Gemini at gemini.google.com, a banner confirms that their data will not be used to train generative AI models. This is a reassuring step for institutions handling sensitive information.
AI’s Next Frontier: Agentic AI
Another emerging topic in AI is the rise of agentic AI. Unlike generative AI, which focuses on content creation, agentic AI takes action based on user prompts. For example, instead of merely providing search results, an agentic AI could navigate Amazon, locate a specific product, and proceed through the ordering process, stopping just before final payment confirmation. This level of automation is not far from reality, and I am eager to see Google NotebookLM implement such capabilities.
Final Thoughts
The pace of AI advancement is both exhilarating and daunting. While we continue to embrace new efficiencies, ethical considerations surrounding privacy, security, and control must remain at the forefront. Balancing innovation with responsible adoption will be key as we navigate this rapidly evolving landscape. Thoughtful regulation, transparent AI models, and a commitment to ethical AI development will determine how effectively these technologies serve society in the long run.
* Polished by ChatGPT