Academic Integrity and a New Assessment Culture in the Age of AI: an interview with Edward Palmer

(Generative) AI has turned the examination system on its head. The fear of students trying to cheat still dominates the discussion. But at the University of Adelaide in Australia, the focus has shifted towards academic integrity and student support. In this interview, Edward Palmer, Professor and Director of the Unit of Digital Education and Training in the School of Education, explains the impact of AI on assessment and how attitudes to assessment need to change. 

Question 1: What made you start doing research on technology in learning and assessment?

I started looking for ways to make learning more efficient and to demonstrate authentic learning by providing simulations, however basic, of systems, especially those where I could illustrate core concepts visually. Examples of this include simulating the quantum mechanical concept of superposition using real experimental data, and using images and video to enhance the authenticity of medical scenarios. As a young academic, I had no idea whether my ideas were effective or educationally sound, so I began evaluating learning outcomes and student attitudes to validate my work. This soon became my main research area.

Question 2: In your opinion, what impact has artificial intelligence had on examinations so far? What will change?

AI has probably encouraged people to return to the safe haven of invigilated exams or monitored assessments. There is a genuine concern that current assessment approaches are compromised by AI, especially at the Fail/Pass boundary, where AI has demonstrated its ability to meet minimum standards. Potentially this means students could pass large components of their degrees without genuinely understanding concepts. As students become better versed in the use of AI, this becomes a larger problem, as its use in skilled hands is challenging, if not impossible, to detect.

This is such a big issue due to a lack of progress in designing assessment tasks. We have used essays, laboratory reports and multiple choice questions for so long that, as a sector, we haven't considered in sufficient depth what students should be learning and how best to ensure and measure that. We have a very efficient system, but we now find that the aspects we were measuring can be provided by software and have to think of other approaches. Examinations themselves aren't likely to be problematic, but if our formative and summative assessment tasks leading up to exams aren't robust, we run the risk of students not learning very much from them if they overuse AI. We need to reconsider how we assess in the lead-up to these key examinations. Oral examinations are potentially valuable, but there are significant resource implications in using this method for all tasks.

Question 3: How have you responded to AI at the University of Adelaide? (e.g. policies, involvement of students…) What solutions were discussed to prevent cheating? What have you implemented?

We aim to build a positive learning and research culture and ensure students understand the goals of completing learning activities and assessments. Our policies reflect this, and we have only had to make minor revisions to ensure students are aware of the risks of overusing AI. We have carried out surveys of staff and students to take a snapshot of attitudes and practices around AI, and we have a large community of practice discussing AI and its use. We have also created two new positions focused on AI, one of which is predominantly student-focused. We are conscious of the role students play in developing our future with AI and involve them in our discussions. Students also support us as Academic Integrity ambassadors, visiting classes to discuss these core issues. Overall, we have taken a positive approach focused on learning and on improving assessments and communications around AI, rather than focusing on cheating alone.

Question 4: Why should the current discourse focus more on academic integrity?

Integrity in any profession is important, and students need to understand the benefits of acting with professional integrity at all times. This extends to their studies, so academic integrity is a cornerstone of their development. I suspect that most breaches of academic integrity occur under pressing deadlines and in high-pressure situations. Recognising these stressors and being able to make good decisions even when affected by them is a key skill. We need to think about how best to support students under these circumstances, especially when AI provides easy, but ultimately ineffective, means of meeting learning outcomes.

“If we accept that assessment is partially broken then we need to remedy this.”
Edward Palmer, Professor and Director of the Unit of Digital Education and Training, University of Adelaide (Australia)

Question 5: What do you think is necessary to change assessment culture?

If we accept that assessment is partially broken, then we need to remedy this. That means communicating the issues clearly to staff and encouraging them to critically evaluate their tasks, their learning outcomes and the robustness of those tasks in the face of AI. This can't be done by individuals alone; it requires the full cooperation of all staff involved in education and instructional design. In the short term, this means investing in people by providing them with genuine assistance in developing new assessments, rather than simply adding more to the academic workload. So culture must be driven from the top and supported at all levels. This won't be cost-free, so leadership must make efforts to ensure that staff understand the magnitude of the issues AI is introducing for assessment.

We are potentially at a key decision point. On the face of it, the cheapest, safest option is to ensure exams are used as 'gates' to progression and that students are denied access to technologies within those spaces. I suspect this strategy would lead to disaster, with increasing failure rates and reduced retention, because the technologies available to students leading up to exams will give the illusion of learning. Instead, I see many of our existing summative assessments being used in a more formative way to help support learning. This leads to the need for more educators in the classroom, online or physically, to closely monitor student engagement and outcomes from activities, identify misconceptions and ensure that students are given every opportunity to use AI in a meaningful way that does not undermine the process of learning.


Edward Palmer

Prof. Edward Palmer is the Director of the Unit of Digital Education and Training in the School of Education at the University of Adelaide, where he is Acting Deputy Head of School.

Edward’s research is concerned with technology in education, training and the assessment of learning. His work traverses a range of themes, currently focusing on the use of virtual and extended realities for situational awareness and training, and on the impact of Artificial Intelligence on assessment. He is particularly interested in story-driven, personalised approaches to learning.
