No Longer Lost in Translation
I’ve been working in or with edtech for over 15 years, and I’ve worked with some amazing engineers and coders, but at times I felt like I was speaking a different language – like someone in a strange country with only a phrase book, spending time going back and forth over wireframes, user stories, RSDs and the like.
AI and prompt engineering have revolutionised the world of development and given educators direct access to edtech development through the new programming language of natural language – which for me is English. Equal Sigma currently uses 64 pages of prompts (different prompts for different circumstances, all tied together by a workflow engine), and most of this is written in English, with a bit of code.
This gives us – educators and engineers alike – a collective understanding of what the programme is doing and, crucially, lets us refine it in its raw form, English, as we learn from feedback.
One word can make a lot of difference
Anyone who has done prompt engineering knows the difference one word can make – whether it’s added, omitted, or chosen over an available synonym.
Try it yourself: put the three prompts below into an AI chat and see the difference you get – there’s a short code sketch after the list if you’d rather run the comparison programmatically.
- Finish the sentence “Mary”
- Finish the sentence “Mary had”
- Finish the sentence “Mary had a”
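Here is a minimal sketch of that comparison using the OpenAI Python SDK – the model name and setup are illustrative assumptions, not what Equal Sigma itself uses:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

prompts = [
    'Finish the sentence "Mary"',
    'Finish the sentence "Mary had"',
    'Finish the sentence "Mary had a"',
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice; any chat model will do
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"PROMPT:   {prompt}")
    print(f"RESPONSE: {response.choices[0].message.content}\n")
```

Even with prompts this short, the completions drift noticeably as each word is added or removed – which is exactly the sensitivity we manage across Equal Sigma’s prompt library.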
As we continuously review and refine the prompts in Equal Sigma, we can begin to see which wording makes little difference and which omissions notably change the quality of the output.
How AI can help AI be better
We started developing the original prompts over 18 months ago, and that’s a long time in the world of AI. Having run Equal Sigma for nearly six months and across thousands of learner unit submissions, we now have a good understanding of what’s working and what could be improved.
This is where AI can help us improve AI. Tools like Anthropic Console (for Claude) and the OpenAI Platform allow us to rapidly test prompt changes, with no coding experience or knowledge needed. We can instantly compare the results of one prompt with another or the impact of changing/adding variables. This is vital as we develop V4 of Equal Sigma, moving ever closer to the consistency and quality of an experienced human assessor. We plan to launch V4 around Easter 2026, and we will soon announce the improvements this will bring.
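To make that kind of side-by-side testing concrete, here is a minimal sketch of comparing two variants of a prompt template on the same submission – the template text, variable names and model below are illustrative assumptions, not Equal Sigma’s actual prompts:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

# Two variants of an assessment prompt: variant B adds a qualification-level variable.
TEMPLATE_A = "Give brief assessor feedback on this answer:\n\n{answer}"
TEMPLATE_B = "Give brief assessor feedback on this {qualification_level} answer:\n\n{answer}"

submission = {
    "answer": "Safeguarding means protecting learners from harm and abuse.",
    "qualification_level": "Level 2",
}

def run(template: str) -> str:
    """Fill the template with the submission's variables and call the model."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": template.format(**submission)}],
    )
    return response.choices[0].message.content

for name, template in [("A", TEMPLATE_A), ("B", TEMPLATE_B)]:
    print(f"--- Variant {name} ---\n{run(template)}\n")
```

Tools like Anthropic Console and the OpenAI Platform essentially do this for you in a point-and-click interface, which is why no coding knowledge is needed to take part.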
The challenge with these tools is that they create almost unlimited opportunities to test variations of prompt scripts; apply that across the circa 250,000 submissions TSN assesses each year and you quickly have a lot of data to review.
This is why we’ve created an AI tool to evaluate the impact of changes on output – a sort of AI IQA. It is based on the work of TSN’s IQA team, codifying their expertise into an AI scoring matrix and dashboard.
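To give a flavour of the approach – purely as an illustration, as the criteria, field names and model below are assumptions rather than TSN’s actual scoring matrix – an “AI scoring” step of this kind can be sketched as an LLM-as-judge rubric scorer:

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

# Illustrative rubric only – the real matrix is derived from TSN's IQA team.
RUBRIC = {
    "accuracy": "Does the feedback correctly reflect the assessment criteria?",
    "specificity": "Does it reference the learner's actual answer?",
    "tone": "Is it constructive and encouraging?",
}

def score_feedback(learner_answer: str, ai_feedback: str) -> dict:
    """Ask a model to score one piece of AI feedback against the rubric."""
    prompt = (
        "Score the assessor feedback below against each criterion from 1 (poor) to 5 (excellent). "
        "Return JSON with one integer per criterion.\n\n"
        f"Criteria: {json.dumps(RUBRIC)}\n\n"
        f"Learner answer: {learner_answer}\n\n"
        f"Assessor feedback: {ai_feedback}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        response_format={"type": "json_object"},  # request strict JSON back
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(response.choices[0].message.content)
```

Scores like these can then be aggregated into a dashboard, so a prompt change can be judged across hundreds of submissions rather than by eyeballing outputs one at a time.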
The AI IQA tool has wider potential uses – for example as a quality ‘flagging’ system for sampling – and we are working on a commercial version, due to launch in early summer 2026, that will include additional features to support IQA teams.
How human intelligence can make AI even better still
Better still than AI helping AI is our assessors’ feedback. We’ve added a thumbs-up/thumbs-down button so assessors can rate the AI feedback they receive, which will help us review that feedback en masse. In addition, our own quality team reviews output through the IQA process, with ongoing testing and review by our Assessor Services team.
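As a rough illustration of why even that simple signal is useful at scale, a few lines like the ones below (the field names are assumptions, not our actual schema) are enough to surface approval rates per prompt version:

```python
from collections import defaultdict

# Illustrative records only – field names are assumptions, not Equal Sigma's schema.
ratings = [
    {"prompt_version": "v3.2", "thumbs_up": True},
    {"prompt_version": "v3.2", "thumbs_up": False},
    {"prompt_version": "v3.3", "thumbs_up": True},
    {"prompt_version": "v3.3", "thumbs_up": True},
]

totals = defaultdict(lambda: {"up": 0, "all": 0})
for r in ratings:
    totals[r["prompt_version"]]["all"] += 1
    totals[r["prompt_version"]]["up"] += int(r["thumbs_up"])

for version, t in sorted(totals.items()):
    print(f"{version}: {t['up'] / t['all']:.0%} thumbs-up from {t['all']} ratings")
```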
English – The new prototyping language for educators
We have a brilliant and patient Engineering team – patient with my frequent end-of-meeting, Columbo-style interjections of “Just one more thing…”. Unlike Columbo’s, though, my interjections don’t always produce results, and the Engineering team are busy people.
Equal Sigma started as an HTML model that let us manually upload data and test the results from prompts across different LLMs to create a pre-production MVP. At the time we didn’t have access to AI HTML/app production tools such as Gemini’s Canvas. Tools like Canvas allow educators to create small prototypes, or coded snippets of larger project ideas, in the early stages of development – all through prompts written in English – while producing code that engineers and coders can review or reuse later.
Join the conversation
AI is rapidly changing the world of education and the work of educators. Prompt engineering and prototyping with AI are exciting and challenging fields of discovery, and we’d really like to hear your thoughts. If you want to join the conversation, please contact Liam Sammon, Executive Director for Education Services & Innovation, here – Liam Sammon | LinkedIn
If you want to find out more about the courses and resources available from TSN, please click here: Learning Resources – The Skills Network
Liam Sammon, Executive Director for Education Services.
