As part of his ‘AI Reflections’ blog series, our Executive Director – Education Services and Innovation, Liam Sammon, reflects on one of the biggest questions facing education today – whether AI should be embraced or restrained. Drawing on his experience across post-16 education and his leadership of assessor services at The Skills Network, Liam shares a thoughtful perspective on the realities of AI in assessment, the challenges around authenticity, and how the sector can move forward responsibly.
Is AI that big a thing?
Over 15 years ago, when electronic whiteboards were the big technology thing, I remember visiting a college that proudly showed me all of its classrooms fitted with new whiteboards. However, none of them were switched on.
AI is definitely on; it’s the most transformative piece of technology that I’ve experienced in my 25-plus years in post-16 education. There is very little in education that AI can’t assist with: teaching/coaching, teaching and learning resources, assessment, learner/learning management systems, learner support, admissions and enrolments… the list goes on.
However, this blog can’t go on and on, so I’m going to focus on just one topic: AI in online assessment. This is important to The Skills Network, given that we assess circa 250,000 learner submissions every year for ourselves and our partners. It is also important to me personally, as I’m responsible for assessor services (circa 200 assessors) and EQUAL Sigma, our AI-powered assessment tool.
To embrace or restrain AI in assessment?
When it comes to AI, the education sector appears to be divided between those who want to restrain AI (detection) and those who want to embrace it (adaptation). Though, as with most important topics in education, the debate is more nuanced than this.
For assessment, in the detection camp are the regulators: Ofqual (the Office of Qualifications and Examinations Regulation) in England, and the Joint Council for Qualifications (JCQ), whose interpretation of Ofqual’s Condition A8 becomes guidance for Centres.
JCQ’s AI Use in Assessments guidance (Revision 2, April 2025) states: “If AI misuse is detected or suspected by the Centre and the declaration of authentication has been signed by the student, the case must be reported to the relevant awarding organisation.” AI misuse that is judged as malpractice is subject to the same sanctions as ‘making a false declaration of authenticity’ and ‘plagiarism’, which can include being barred from taking qualifications for several years.
No one would dispute the importance of valid assessment and the authenticity of work, but let me present two Scenarios, one Analogue and one AI. In the Analogue Scenario, a learner reads around the topic in a textbook and draws on it to write their answer in their own words. In the AI Scenario, the learner asks an LLM about the topic and draws on its response to write their answer.
The first would be considered valid (subject to plagiarism checks against the textbook). The second is potential malpractice that will at least warrant further investigation, and the judgement comes down to a) how much the learner has changed the LLM response and used “their own words”, and b) whether they are taking credit for AI-generated work, and whether they have properly referenced the use of AI rather than “misused” it.
But is the AI Scenario ‘bad’ learning, and is it a less valid reflection of the learner’s understanding of the topic under assessment than the Analogue Scenario? Critically, it all comes down to intent. Let’s assume that in the second case the learner’s intent was to improve their learning (as in the first); but you can easily see how this could stray into malpractice, and we can’t see into the hearts and minds of learners.
The Skills Network, like many others, uses online tools to detect AI. But again, it all comes down to intentions, and no detection technology can reliably look into a learner’s intent, let alone fully eliminate false positives and negatives. Assessment practice is evolving to supplement online AI detection tools, for example by identifying a lack of personal voice (see later) or over-simplified, overly balanced arguments in learners’ work; but LLMs are getting smarter, or rather learners are getting smarter at avoiding these detection methods, and these practices still don’t get to the heart of the matter: learner intent.
This is where practice needs to evolve and where online learning can play its part beyond AI detection tools. It is something I expand on later, covering the developments we’re planning for our Learner Management Solution, EQUAL, and how we develop our online courses.
So how could assessment practice adapt to AI?
In the embrace-AI camp are those educators who believe assessment needs to adapt to the use of AI rather than focus solely on detection.
One of the arguments put forward by the embrace-AI camp is to change assessment strategies and move up Bloom’s taxonomy. This is mainly being led by the Higher Education (HE) sector, where the use of AI is recognised and seen as an aid to developing skills, particularly critical thinking, including critical thinking applied to the use of LLMs as a ‘thought partner’. This is fine for HE; but what about Level 3 and below, which covers most vocational qualifications? Moving up Bloom’s taxonomy would make the qualifications more demanding, meeting neither learner nor employer needs and ultimately moving the qualification up the Level scale.
Another approach is to build AI skills into the qualification specification and the assessment objectives. This is more fitting for vocational qualifications, given the focus on preparing for work and the increasing use of AI in the workplace; but there are challenges in adding this content. Do you replace existing content in the qualification with AI-specific skills content, or simply add it, making the qualification more demanding? Building AI into the curriculum is something we are planning for, given the Department for Education’s (DfE) ambitions for digital/AI literacy to be built into the curriculum by 2028, as published in “Every Child Achieving & Thriving”, and I cover this later.
Another approach, which is part detection and part assessment strategy, is to build personal and work experience into assessment; this loosely comes under the term ‘authentic assessment’. The most AI-proof assessment strategy is direct observation (it is near impossible for a learner to use AI to fake this, and in an online context it can be done via real-time video). However, one of the principles of good assessment is manageability, and at The Skills Network we pride ourselves on delivering asynchronous learning to those who face barriers to traditional learning methods: those who work shifts, have caring responsibilities or transport difficulties, or have Learning Difficulties and/or Disabilities (LLDD) barriers to learning, and so on. Adding to the assessment burden would just exclude these groups further.
We need ‘authentic assessment’ methods that don’t exclude these groups. In research by NCFE and The Open University on how resilient different types of assessment questions are to AI (Developing Robust Assessment in the Light of Generative AI Developments, 2024), which covered Level 3 qualifications, researchers found that reflection on work practice was a fairly robust assessment strategy against AI, and this can be applied within an online learning context.
What is The Skills Network doing about AI in assessment?
For us, compliance is non-negotiable – fact – and we’re compliant now, as evidenced by the numerous awarding organisation (AO) external quality assurance (EQA) visits we’ve had; but this is an evolving area and we want to stay a step ahead, whilst maintaining the balance between good use and misuse of AI. Below are some of the new things we are doing, or planning to do, with regard to AI in assessment.
- Firstly, and most importantly, The Skills Network supports the guidance of learners. We’ve produced a suite of AI literacy online courses to help Centres guide their learners in the appropriate use of AI in learning: not just to ensure they stay on the right side of malpractice rules, but also to inform them of the shortcomings of AI (hallucinations and bias, to name a few) and how to check for these. We’ve also produced courses on the use of AI in employment, as the challenges of AI in assessment extend to the workplace as well. You can find out more about our courses here: AI Literacy & Employability Short Courses.
- We are looking at developments in EQUAL (our ed-tech platform) to help both detect AI and allow learners to use AI appropriately. This includes building a section in EQUAL where learners can upload the work they have done using AI, so they can be transparent about its use; we can then use this both to guide learners and to train assessors on detection and appropriate use of AI.
- We will begin to build AI literacy/skills exercises into our courses regardless of the subject, preparing for, and aligned to, the DfE’s ambitions for digital/AI literacy to be built into the curriculum from 2028, in the same way that we build in other key skills such as Maths and English.
- We will strengthen ‘authentic assessment’ within our formative assessment to help with AI detection, by extending personal/professional reflection and context within questions, particularly our ‘Stop and Think’ exercises. These can then be called upon by assessors to compare against summative assessment.
- To help further with ‘authentic assessment’, we are planning to develop the learner intent section in EQUAL further. This section asks learners a series of questions about why they have chosen their course, their ambitions and goals, and so on. We are prototyping a new EQUAL feature called EQUAL Intent – an AI-driven tool that engages with learners to gather richer, more complete information on their intent. This information can be used by assessors for AI detection, e.g. through language and personal/professional experience.
- We are currently building a new AI-driven product called EQUAL Assure. This is a tool built to support the work of internal quality assurance (IQA) teams and will include features to help with the detection of AI misuse. I will be talking about the work on EQUAL Assure in later blogs.
- EQUAL Sigma – our AI-driven assessment tool. We will soon launch V4 of EQUAL Sigma, which produces learner feedback even closer to that of an experienced human assessor. Critically, it is compliant with a part of AI-in-assessment regulation I’ve not touched on above, namely the use of AI in marking and assessment judgements: only human assessors mark in EQUAL Sigma, so it is fully compliant with Ofqual regulations and JCQ guidance. EQUAL Sigma V4 will give assessors more time back to do what AI can’t do, including AI detection, and we will be issuing new assessor guidance on the back of the launch of EQUAL Sigma V4 in May. EQUAL Sigma will be renamed EQUAL Assess to complement EQUAL Assure once it is launched.
AI is transforming education. If you want to contribute to the debate or comment on this blog, speak to Liam directly by contacting him via LinkedIn.