Over the past two years, educators, AI experts, and university leaders have come together in countless webinars, seminars, workshops, panel discussions, and conferences to explore the potential effects of AI on education. A large portion of these efforts has focused on the risks to academic and research integrity and how we might navigate them. Like many enthusiasts of digital technology and pedagogy, I joined numerous discussions and events to understand these issues better. It was heartening to see the genuine care we collectively hold for learning and teaching. Through these discussions, we’ve indeed addressed the elephant in the room. But have we truly taken the next steps? I’m not convinced!
It’s now clear that AI is here to stay and will only become smarter and more deeply integrated into our daily lives, including learning and teaching. We can no longer afford to remain in denial; it’s essential that lecturers are equipped with the knowledge and skills to embrace AI in the classroom. Targeted, tailored professional learning is crucial to helping lecturers align AI with their pedagogical goals. Without a clear alignment between technology and pedagogy, adoption rates among lecturers will remain low. So, where should we focus our efforts to help educators integrate AI effectively? Improving AI competencies is, in my view, the top priority.
In a recent report published by EDUCAUSE on AI literacy in teaching and learning, Georgieva and colleagues highlighted four key areas to help educators build essential AI knowledge and skills in higher education – technical understanding, evaluative skills, practical application, and ethical considerations. I think these areas are essential for using AI effectively and responsibly in teaching and learning. In this post, I’ll expand on these competencies and share my thoughts.
Technical understanding
Empirical evidence strongly supports the need for teachers to have sufficient technological knowledge to integrate technology into the classroom effectively. The well-known TPACK model by Mishra and Koehler highlights that teachers should understand the fundamentals of a technology, in this case AI: how it works and how to use it meaningfully in their teaching. This understanding includes techniques such as prompt engineering, which is essential for using AI tools like ChatGPT effectively. It doesn't mean that every lecturer needs to become a software developer or AI engineer; rather, they need a solid grasp of the principles behind AI, how it operates, and its potential applications across academic contexts. With this foundation, lecturers can evaluate the potential of AI tools, recognising when and how to use them in diverse learning contexts. This would likely move lecturers from passive to informed users of AI.
Evaluative skills
Equally important is the ability to critically evaluate AI tools and their potential use for learning, teaching, and research. AI technologies are rapidly evolving, with new tools emerging almost daily. These tools span a variety of categories, including text-to-text, text-to-speech, text-to-image, text-to-video, image-to-image, and many more. Lecturers need the skills to assess these tools against key criteria such as accuracy, bias, transparency, ethics, and relevance to the curriculum. This ability is critical for aligning tools with pedagogical needs, which in turn is essential for effective technology adoption by lecturers. Universities, therefore, must support lecturers in building the knowledge and skills needed to evaluate the diverse AI tools available. Given the countless new and existing AI tools, it's unrealistic for any institution to provide training on each individual tool. Instead, focusing on general evaluative criteria and teaching lecturers how to apply these criteria across tools offers a more sustainable approach to professional learning and AI integration.
Practical application
The hands-on aspect of AI is essential for turning knowledge into practice. This means knowing how to integrate AI tools into the curriculum, whether for supporting personalised learning, providing feedback, or enhancing classroom interactions. In other words, lecturers should have the technological pedagogical knowledge outlined in the TPACK model for their subject areas. This could include crafting effective prompts for various teaching needs, such as lesson planning, content creation, and assessment design. As lecturers strengthen their AI knowledge and skills, they can explore more advanced applications like creating personalised learning pathways, implementing automated feedback systems, and even incorporating virtual tutors. These uses can free up time for targeted preparation and support for students who need it, ultimately providing more engaging, inclusive, and scaffolded learning experiences.
Ethical considerations
Last but not least are ethical considerations: an essential, non-negotiable competency for any educator working with AI. With AI's power comes the responsibility to consider its implications for student privacy, data security, equity, and fairness. Lecturers must be vigilant about potential biases embedded in AI algorithms, which could disadvantage certain student groups if left unchecked. Ethical competency also means using AI tools transparently, prioritising student agency and consent. Universities should support lecturers by keeping them informed about these ethical considerations and helping them develop guidelines and templates for students to use AI responsibly in their learning. This approach is crucial to addressing concerns about AI's potential impact on academic integrity, a priority for all of us. By promoting ethical AI use among both lecturers and students, we can reduce the need for strict policing of AI use while enhancing the overall teaching and learning experience.