Reflection Post 6: Mental Models and AI
For this week’s reflection blog post, I want to focus on a post from Maha Bali’s blog, which reflects on an article exploring AI and its broader societal and technological implications. Maha Bali is a professor at the American University in Cairo whose work focuses on digital learning, social justice, and critical approaches to educational technology. The article is called Rethinking Your Mental Model in the Age of GenAI. Reading the article alongside Maha’s blog made me reflect on the assumptions I make about my own abilities when working with AI.
As a fourth-year UVic student, I have found AI tools deeply embedded in my academic life. I use AI to brainstorm ideas, edit grammar, and clarify concepts, approaching it as a helpful tool that can strengthen my learning. The article challenges this by introducing a triadic model of human-AI collaboration, suggesting that our mental models (how we think AI works) directly impact the quality of our collaboration.
What stood out to me was the idea that we frequently misunderstand AI by anthropomorphizing it: we treat it as if it thinks or knows like a human. I recognize this in myself, especially in more emotional or empathetic conversations. When AI gives a confident response to my queries, I tend to trust it easily, even when I know that it is pattern-based and doesn’t truly understand the human connection side of what I am asking. This made me realize that while I constantly use and rely on AI in my academic pursuits, I also need to question it throughout these processes.
Maha Bali’s reflection helped me further understand my role in questioning AI and why critical engagement matters. It made me see that using AI effectively is not about efficiency but about maintaining my intellectual responsibility. If I outsource too much of my thinking, I risk weakening the skills I have spent the past four years developing. However, I don’t think fully rejecting AI is realistic. Instead, I need to shift my mental model: treating AI as a collaborator whose outputs I question and reflect on rather than simply accept.

Moving forward, this blog post has made me want to be more intentional with the questions I ask myself when using AI. Specifically, how and why am I using it? A lot of the time AI can become second nature because it feels efficient, but I want to be more aware of whether it is actually necessary. Do I need to use AI for this work, or am I just trying to get something done with minimal effort? My main takeaway is that I want to reclaim agency in my own learning; in a time when thinking itself is increasingly shared with AI, I want to learn for myself.
I found this TED Talk particularly helpful in reflecting on how I can keep my critical thinking intact whilst using AI, and I suggest that readers watch it alongside the article to reflect for themselves. The presenter, Advait Sarkar, is a researcher at Microsoft and a lecturer affiliated with universities such as Cambridge, whose work focuses on how AI affects human thinking, creativity, and knowledge work.
Thanks for reading my final reflection post!