Welcome to EQUAL Sigma – The Skills Network’s AI Assessment Tool

15th September 2025

Welcome to EQUAL Sigma – The Skills Network’s AI-driven assessment tool built to support our 200+ assessors and partners, sitting within our proprietary platform – EQUAL.

It has taken over 18 months to develop Sigma, combining assessment expertise with engineering and AI expertise – and the summing of this expertise continues as Sigma is applied, hence the choice of name. Unlike some other AI assessment tools, Sigma has been built at the qualification unit level, not just at the question level, making it of maximum use to assessors who provide summative feedback to learners at the unit level.

This is just one of the ways we’ve put the assessor at the centre of development. The journey didn’t start with AI, but with mapping out the work of The Skills Network’s own assessors task by task, plotting both the cognitive and practical processes involved.

Learner Centred Design

Good assessment starts with understanding the learner: their background (prior achievement, initial assessment, etc.), their goals (career and personal), their ambitions for the course and their support needs. We’ve built Sigma with this learner-centred approach in mind, bringing into the AI engine information on the learner’s intent – study reason, career and personal benefit, long-term goal, self-assessed strengths and improvement areas – combined with other background information on the learner. For GDPR compliance, this data is anonymised before it is input into the LLM and re-identified when the output is returned to EQUAL.

Through the refinement of prompts, this has created AI-generated feedback that is personalised and more learner-centred.
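The anonymise/re-identify round trip described above can be sketched as follows. This is a minimal illustration of the general technique, not Sigma’s actual implementation – the function names, token format and field names are all assumptions.

```python
import uuid

def anonymise(record: dict, pii_fields: list[str]) -> tuple[dict, dict]:
    """Replace PII fields with opaque tokens before the record is sent
    to the LLM; return the redacted record plus a token -> value map."""
    mapping = {}
    redacted = dict(record)
    for field in pii_fields:
        if field in redacted:
            token = f"<{field}:{uuid.uuid4().hex[:8]}>"
            mapping[token] = redacted[field]
            redacted[field] = token
    return redacted, mapping

def reidentify(llm_output: str, mapping: dict) -> str:
    """Swap the tokens back for the original values once the LLM's
    feedback is returned to the platform."""
    for token, value in mapping.items():
        llm_output = llm_output.replace(token, value)
    return llm_output

# Illustrative usage: the LLM only ever sees the token, never the name.
record = {"name": "Sam", "study_reason": "career change"}
redacted, mapping = anonymise(record, ["name"])
feedback = f"Well done {redacted['name']}, your goals are clear."
print(reidentify(feedback, mapping))  # "Well done Sam, your goals are clear."
```

The key design point is that the token-to-value map never leaves the platform, so personal data is not exposed to the LLM at any stage.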

Tailored Feedback According to Learner Journey

An assessor will tailor their feedback according to where the learner is on their journey, setting a path at the beginning and placing greater focus on summative feedback and next steps at the end. Sigma uses different prompts depending on where the learner is on their journey, mimicking a human assessor.
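The idea of stage-dependent prompting can be sketched like this. The stage names, thresholds and prompt text are illustrative assumptions, not Sigma’s actual prompts.

```python
# A hypothetical sketch: choose a different system prompt depending on
# how far through the course the learner is.
STAGE_PROMPTS = {
    "start": "Set out a clear path for the learner over the course ahead.",
    "middle": "Focus on progress made against earlier targets.",
    "end": "Emphasise summative feedback and concrete next steps.",
}

def select_prompt(units_completed: int, units_total: int) -> str:
    """Pick the prompt that matches the learner's stage in the journey."""
    progress = units_completed / units_total
    if progress < 0.25:
        return STAGE_PROMPTS["start"]
    if progress < 0.75:
        return STAGE_PROMPTS["middle"]
    return STAGE_PROMPTS["end"]
```

In practice the stage signal could come from any progress measure the platform tracks; the point is simply that the prompt, and therefore the tone and focus of the feedback, changes with the learner’s position in the course.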

Question Level Assessment – Unit Level Feedback

An assessor starts by reading the learner’s submission and taking notes question by question, looking for examples of strengths and key areas for development on which to base their summative feedback and help formulate next steps. We’ve replicated this process in Sigma. Sigma reviews the learner’s input question by question, assessing each question against a series of prompts we have built to replicate an assessor’s cognitive process. Sigma generates feedback as it works through the questions, storing it for use later in generating unit-level feedback. Assessors can also add ‘micro-notes’ to each question as they mark, and these notes support the AI in generating its feedback.

For Ofqual compliance, we have deliberately told Sigma not to mark questions. Instead, AI feedback is triggered by the assessor marking each question, ensuring that the assessment is a human judgement, as per Ofqual guidance to awarding organisations. As a matter of best practice, we still ask assessors to write their own question-level feedback where learners have not passed a question.

Assessing Beyond the Qualification Specification

When marking and producing feedback, an experienced assessor will look beyond the assessment and learning objectives of a qualification specification, staying mindful of wider criteria such as Ofsted’s focus on British values or SPaG (spelling, punctuation and grammar). We have built this into Sigma, so it refers to these areas in its feedback and identifies and corrects misunderstandings in areas such as SPaG.

Supportive and Constructive Feedback

We refined our prompts again and again until the feedback mirrored as closely as possible the guidance we give to our assessors: supportive and constructive. We tested Sigma across multiple LLMs, assessing the output against several measures, including the language and tone of the AI-generated feedback, and chose the LLM that we felt best matched our guidance and culture.

Human Validation

Sigma takes all this information (question-level assessment and feedback, learner intent, etc.) and produces unit-level draft feedback. Just as we require an assessor to mark at the beginning of the process, we’ve deliberately built in ‘friction’ at the end of the process to ensure that an assessor validates the draft feedback before submission for IQA moderation.

Why Sigma?

So why Sigma? In these times of funding constraints and additional burdens placed on educators, Sigma gives assessors time to focus on what they are best at. Whilst AI is impressive, it is yet to match high-quality human assessment: assessors bring emotional intelligence and empathy, a wealth of industry experience and knowledge, and the ability to develop targeted interventions where learners are struggling.

What Next?

As an education technology business, we see this as a journey, not a destination, and we will continue to develop Sigma – its prompts, inputs, outputs, workflow architecture, speed, quality and more. If you want to know more or be part of the conversation on AI in Assessment, check out our LinkedIn here.


Written by Liam Sammon
Executive Director for Education Services at The Skills Network
