Can we design out cheating?

Image generated by AI

In an article titled "Cheating has become normal," Beth McMurtrie highlighted the growing prevalence of academic misconduct in U.S. higher education. The statistics were alarming: in 2024, 65% of students admitted to some form of cheating, a staggering 30% increase compared to 2019. While these figures were based on self-reported data from a single university and don't necessarily reflect the broader higher education landscape, they spotlight a significant issue. Cheating has become a recurring topic of concern among educators, especially since the emergence of Generative AI tools like ChatGPT in 2022. In response, many educators have called for a redesign of assessment. But this raises the question: can we really design out cheating?

Over the past year or so, I've had countless conversations with colleagues about rethinking how we assess students. Many of us advocate for authentic assessment as a way to reduce cheating, particularly the misuse of Generative AI. The reasoning is straightforward: authentic assessments are more meaningful and engaging for students, making them less inclined to cheat. When students see clear value in their learning, they're more motivated to put in the effort. Another argument is that some forms of assessment, such as class presentations and collaborative group projects, are inherently harder for AI tools to handle, creating a natural layer of assessment security. I've been part of these discussions, both within my educational design team and with academic colleagues at the University of Wollongong. But were we right? Or were we overly optimistic?

A compelling counterpoint comes from Tim Fawns and colleagues in their article "Authentic assessment: From panacea to criticality." While they acknowledge the benefits of authentic assessment, they challenge the notion that it can prevent or significantly reduce cheating. I agree with their argument: good assessment design is crucial for fostering learning and encouraging academic integrity. When assessments promote autonomy, competence, and connection, students are generally less motivated to cheat. But does this happen automatically with every authentic assessment? Probably not. In fact, as Cath Ellis and colleagues have shown, contract cheating providers can produce passable responses even to authentic assessment tasks. So, does this mean authentic assessments have lost their edge in combating cheating? Not entirely. Fawns and colleagues argue that authenticity alone isn't enough; it's part of a more complex equation.

Cheating in education is not a new phenomenon. To tackle it effectively, we need to ask: why do students cheat? Research suggests it's often less about individual morality and more about the situational pressures students face. Who, then, is to blame? Academic integrity policies typically place the blame directly on students. However, as Phillip Dawson and colleagues argue in their paper "Validity matters more than cheating," the issue is far more complex. They pose critical questions about who or what should be blamed: students, poorly designed assessments, contract cheating providers, or a society that prioritises high grades and credentials over genuine learning. Considering these factors, it becomes clear that simply adopting a new assessment strategy, such as authentic assessment, may not address the root causes of cheating.

So, what's the solution? While there's no silver bullet, I find Dawson and colleagues' focus on assessment validity particularly compelling. They argue that an assessment's validity, its ability to provide reliable evidence of what students know and can do, should take precedence over concerns about cheating. Importantly, they emphasise that no single assessment method can address all validity issues. Instead, they advocate for programmatic assessment: using multiple types of assessment across a course, involving multiple assessors, and gathering a range of evidence about student capabilities over time.

This approach aligns with guidance from the Australian Tertiary Education Quality Standards Agency (TEQSA) for designing assessments in the age of AI. By employing diverse methods and spreading assessments throughout a program, educators can enhance both validity and security. While this may not be a groundbreaking idea, I think it’s a practical and necessary response to today’s challenges.

So, based on what I've highlighted above and other considerations, can we design out cheating? I'm not sure I've provided an answer. What are your thoughts?
