Cambridge sets six ethical AI principles for English education
Cambridge University Press & Assessment has set out six principles for the ethical use of artificial intelligence in English language education, responding to growing concerns about fairness, transparency, and the human element in English learning and assessment.
The new research highlights a human-centred approach to deploying AI and stresses the need for rigorous standards, while addressing issues such as data privacy, bias, transparency, and environmental sustainability. The publication follows a recent YouGov poll indicating that the British public's top concerns about AI in English proficiency tests are an increased risk of cheating and a failure to properly assess language skills, with 39% of respondents citing each concern.
Human-centred approach
At the heart of Cambridge's guidelines is the acknowledgement that human involvement remains essential in language learning and testing, even as AI becomes more prominent in classrooms and examinations. The organisation emphasises that AI should not replace the uniquely human experience of using and acquiring language, and that there must always be accountability and opportunities for human oversight.
Dr Nick Saville, Director of Thought Leadership at Cambridge University Press & Assessment and co-author of the research paper, said:
"The rapid adoption of AI in English language learning and assessment can provide significant benefits for learners, teachers and institutions around the world, but it's critical that it's delivered ethically. Despite the huge benefits AI can bring, without an ethical framework in place, it risks losing credibility and people's trust. The six principles we have defined will help deliver effective AI-based language learning and assessment solutions. By focussing on keeping a human in the loop and maintaining robust standards, we can carve out a future where teachers and learners feel safe and empowered to use new technology to reach their potential."
Six key principles
Cambridge's paper sets out six main principles to guide the responsible use of AI in English language education and assessment:
1. Matching human examiner standards: AI systems must be able to assess language skills with the same accuracy and reliability as experienced human examiners. The paper calls for test providers to collect evidence demonstrating that AI-generated scores meet these professional standards, ensuring results are trusted by learners, teachers, and stakeholders.
2. Fairness through inclusive data: The research points to the need for AI systems to be trained on diverse and representative data sets, with ongoing efforts to monitor and eliminate bias. "Fairness isn't optional - it's foundational," states the guideline, making clear that inclusiveness is a baseline requirement for ethical AI.
3. Data privacy and consent: The principles insist on clear communication regarding what personal data is collected, how it is stored, and its intended use. Alongside these assurances, robust encryption, secure storage, and safeguards against hacking are described as non-negotiable for all parties involved.
4. Transparency and explainability: All stakeholders should be made aware when AI systems are involved in assessment, and be able to understand the reasons behind AI-generated results. The research urges AI solutions to be deployed with strong oversight and governance, and for providers to clearly articulate the frameworks ensuring assessment integrity.
5. Preserving the human aspect of learning: The guideline cautions that, while AI may enhance learning and assessment, it cannot replace human experience and judgment. The principle of 'keeping a human in the loop' is intended to give learners confidence and preserve quality control.
6. Environmental sustainability: Recognising the significant energy requirements of AI systems, Cambridge encourages all stakeholders to consider the real-world environmental costs when implementing AI solutions in education.
Responsible adoption
Reflecting on the sector's responsibility to learners, Francesca Woodward, Global Managing Director, English, at Cambridge University Press & Assessment, said:
"To maintain high standards in learning and assessment, we must consistently put learners first. AI offers a world of possibilities, but with that comes a responsibility to make sure solutions are ethical, high-quality, and accessible. The use of AI in education lacks consistent regulation, which means we, as a sector, have a responsibility to champion innovation with integrity. We've defined these principles to provide a research-based framework that we encourage others to choose to adopt."
The publication calls on test providers not only to develop but also to regularly update their AI systems, ensuring continual alignment with ethical standards. It highlights that transparency about how AI is used and how decisions are made is paramount if the public's trust in assessment is to be maintained.
Cambridge also emphasises that ethical frameworks are needed urgently as artificial intelligence is increasingly integrated into education. The six principles set out in the paper are intended to act as a benchmark for the development and adoption of AI tools in English language assessment and teaching.