Artificial Intelligence (AI) is transforming education at an unprecedented pace, but what does meaningful AI transformation actually look like in practice? We sat down with Liam Sammon, Executive Director for Innovation & Education Services at The Skills Network, to reflect on two years of leading AI product development and business-wide transformation. From building EQUAL Sigma, our AI-powered assessment feedback tool, to launching a human-centred AI transformation programme across the organisation, Liam shares honest insights on what’s worked, what hasn’t, and what education leaders need to understand about AI in assessment, safeguarding, skills development and culture. This interview marks the start of his new “AI Reflections” blog series.
You’ve been leading AI product development and transformation at The Skills Network for just over two years now. Why did you decide to write this blog series, and why now?
As a learning organisation, we deliver and promote lifelong learning, and as I've just passed my two-year anniversary, now felt like a good time to sit down and reflect on our AI transformation journey – what's worked, what hasn't and what we've learnt along the way – and to share those learning experiences with our partners and the education community at large.
Two years is a significant period in AI – how much has changed since you started this role?
A lot! And on many levels!
When I started, I, like most people, had only one AI tool on my desktop: ChatGPT. Now I have four AI stacks and numerous AI-assisted tools. One of my blogs will focus on how the difficult thing now, for anyone leading AI transformation and product development, is no longer whether to do it, but what to do it with. There's a thin line between choice and confusion.
That's just using AI. As a training provider, we're also adapting to AI like the rest of the sector. AI is transforming education, but the direction of travel is still uncertain. As someone with a background in assessment who is responsible for our assessor services, I'm constantly thinking about how to adapt to AI and use it appropriately, and I'm so pleased with our recent 16–19 resources that guide learners on how to use AI appropriately and productively. One of my blogs will reflect on AI in education, in particular AI in assessment.
Then there's skills development – as an employer, we're exploring what skills we need now and in the future, just as other employers are. In leading our business-wide AI transformation programme, I believe it's important to put biology rather than technology at the centre, because it's about people and culture – something I focus on in another blog.
Let’s start with your first major project – EQUAL Sigma. Can you tell us what it is and why you built it?
Our starting point for AI product development is the question "what are the pain points that AI might relieve?" As an Ofsted Good provider with over 200 assessors, we know the challenges of managing assessors: recruitment, capacity management (peaks and troughs) and standardisation of assessment. That's why we developed EQUAL Sigma – our AI-driven assessment feedback tool.
It produces AI-drafted learner feedback, but critically, to ensure compliance, it doesn't mark the learner's work. It's a 'human in the loop' product: the process is triggered by an assessor marking, and it ends with an assessor editing and validating the learner feedback.
EQUAL Sigma took over 18 months to develop, with many ups and downs. Today, however, we are developing new AI products in a fraction of that time, which is something I will focus on in the blog series.
You mentioned EQUAL Sigma took 18 months to develop with "many ups and downs," while you now develop new AI products "at a fraction of the time." What has changed that allows you to move so much faster and minimise 'the downs'?
When we started developing EQUAL Sigma, we didn't have tools like Claude Console and Claude Code. The development process then was closer to the traditional engineer-to-educator approach: back and forth with development and testing. With the tools now at our disposal, coders and non-coders can all work in the same environment; things are far more transparent and can therefore be developed more quickly and efficiently.
About six months ago, you took on a broader mandate – leading AI transformation across the entire business. What does that actually mean in practice?
As an edtech company, we have a positive attitude towards AI, but that doesn't mean we're blind to its limitations or the ethical concerns; on balance, we believe it can be a force for good.
Rather than limiting the benefits of AI to a few teams or areas, such as AI products, we want the whole business to benefit. That's why, over six months ago, we started an AI business transformation programme, which I lead. It's based on Kotter's 8-Step change management approach: building a sense of urgency, rapid piloting, and identifying and developing 'champions', while remaining human-centred and not being led by the technology.
Working with all the teams across the business I’ve been amazed at the way they have all creatively and positively considered AI and we’re now at the stage of gathering project proposals to consider for progression and investment.
This hasn't been without its challenges, including what I call 'the four human responses to AI' – something I will focus on in the blog series.
Can you share a moment when you saw someone’s perspective on AI completely shift?
I ran a series of workshops back in September 2025, mainly showcasing custom GPTs, and it was great to see how quickly people picked them up. I think some people thought this was 'magic' that only magicians could perform; but when they saw how easy it was to develop simple tools, and the impact those tools would have on their day-to-day work, it was transformative – within hours they were developing tools of their own.
What do you think education leaders need to understand about AI that they might not currently?
There are many dimensions to that question. From a safeguarding and ethical dimension, educators need to be aware of the guardrails they need to put on learner-facing AI-assisted products. We're prototyping some learner-facing AI-assisted products ourselves, and this is a major consideration in the development process; I would say safeguarding testing is likely to take longer than any other part.
Then there is the skills dimension – young people and adult learners are entering, or already living in, a world changing rapidly because of AI. That's why we've built a series of resources to support young people and adults as they adjust to the world of AI, and you can find out more about these resources here.
Then there is the innovation/edtech dimension – educators now have at their fingertips tools that can turn ideas into prototypes and proofs of concept before they even speak to coders or engineers. It's not quite that simple, of course – or rather, taking a working concept into production at scale is not that simple – and this change in product development approach is something I explore further in one of the blogs.
To continue the conversation or to speak to Liam directly, contact him via LinkedIn.