UK releases new reports to enhance AI cyber security measures
The British government has released a new collection of research reports aimed at strengthening cyber security for artificial intelligence (AI). The reports, which draw on insights from both the public and private sectors, include substantial contributions from the London-based startup Mindgard. The initiative comes in direct response to a recent cyberattack on the Ministry of Defence attributed to Chinese actors, and aims to give business leaders and government officials detailed recommendations for cyber security governance.
The Department for Science, Innovation and Technology (DSIT) commissioned the reports, tasking Mindgard with an in-depth study of the emerging cyber security risks associated with AI. Mindgard, a spinout from Lancaster University, is the only startup to have contributed to this significant body of work. Their report, titled "Cyber Security for AI Recommendations", proposes 45 distinct measures to mitigate AI-related cyber security vulnerabilities.
One key strand of Mindgard's contribution is its technical recommendations. These cover modifications to the software, hardware, data, and network access of AI systems, as well as adjustments to the AI models themselves, including changes to training methodologies, pre-processing techniques, and model architecture, all designed to bolster defences against cyber attacks targeting AI systems. A concrete example of such a training-methodology change is adversarial training, sketched below.
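The sketch assumes a generic PyTorch image classifier with inputs scaled to [0, 1], trained on a mix of clean batches and batches perturbed with the fast gradient sign method (FGSM); the model, data loader, and epsilon value are illustrative assumptions, not details taken from the report.

```python
# Minimal sketch of adversarial training for a generic PyTorch
# classifier. Illustrative only; not taken from Mindgard's report.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x: torch.Tensor, y: torch.Tensor, epsilon: float = 0.03) -> torch.Tensor:
    """Craft adversarial examples by stepping in the direction of the
    sign of the loss gradient, then clamping to the valid input range."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def adversarial_training_epoch(model, loader, optimizer) -> None:
    """Train on clean and FGSM-perturbed versions of each batch so the
    model learns to resist small evasion perturbations."""
    model.train()
    for x, y in loader:
        x_adv = fgsm_perturb(model, x, y)
        optimizer.zero_grad()  # clears gradients left over from crafting x_adv
        loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```

Hardening of this kind typically trades a little clean accuracy for robustness to small evasion attacks, which is why it sits alongside, rather than replaces, the broader organisational measures described next.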
In addition, Mindgard offers general recommendations that frame a broader organisational approach to mitigating AI cyber security risks. These cover security hygiene practices, company policies, governance frameworks, and various security measures. Notable examples include managing legal and regulatory requirements related to AI, engaging stakeholders, developing organisational AI programmes, implementing controls to limit unwanted model behaviour (a simple illustration follows this paragraph), and thoroughly documenting AI project requirements. The recommendations also advocate conducting red teaming and risk analysis exercises.
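As one illustration of a runtime control that limits unwanted model behaviour, the sketch below gates a model's output behind two release checks. The deny-list, confidence threshold, and Decision structure are hypothetical choices made for this example; the report does not prescribe specific controls.

```python
# Minimal sketch of a runtime output control. The deny-list terms and
# confidence threshold are hypothetical, not drawn from the report.
from dataclasses import dataclass

DENY_TERMS = {"password", "api key"}  # placeholder list of disallowed content
MIN_CONFIDENCE = 0.7                  # placeholder release threshold

@dataclass
class Decision:
    allowed: bool
    reason: str

def guard_output(model_output: str, confidence: float) -> Decision:
    """Refuse low-confidence answers and answers containing denied terms,
    recording the reason so the decision can be audited later."""
    if confidence < MIN_CONFIDENCE:
        return Decision(False, "confidence below release threshold")
    lowered = model_output.lower()
    if any(term in lowered for term in DENY_TERMS):
        return Decision(False, "output matched deny-list")
    return Decision(True, "passed all checks")

# Example: a low-confidence answer is blocked before release.
print(guard_output("The capital of France is Paris.", confidence=0.45))
```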
Other significant contributors to the government's report include Grant Thornton UK LLP, Manchester Metropolitan University, and IFF Research. Their combined work identified several key areas for improvement, particularly legal and regulatory compliance, stakeholder engagement, and documentation. The research also highlighted 23 distinct security vulnerabilities in AI systems, primarily attributed to adversarial machine learning techniques used in past cyber attacks.
Beyond its contribution to the government report, Mindgard offers a platform designed to manage AI security risks, including protection against threats such as data poisoning (illustrated below) and model theft. The platform's modules address outbound risks, external attacks on internal models, and ecosystem vulnerabilities.
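Data poisoning is straightforward to demonstrate in miniature: flipping even a modest fraction of training labels measurably degrades a classifier. The synthetic dataset, logistic-regression model, and flip rates below are illustrative assumptions and say nothing about how Mindgard's platform actually detects such attacks.

```python
# Minimal sketch of label-flipping data poisoning on a synthetic
# binary classification task. Illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def accuracy_after_poisoning(flip_fraction: float) -> float:
    """Flip a random fraction of training labels, retrain, and report
    accuracy on the untouched test set."""
    rng = np.random.default_rng(0)
    y_poisoned = y_tr.copy()
    n_flips = int(flip_fraction * len(y_tr))
    idx = rng.choice(len(y_tr), size=n_flips, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # invert the binary labels
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    return clf.score(X_te, y_te)

for frac in (0.0, 0.1, 0.3):
    print(f"label flip rate {frac:.0%}: test accuracy {accuracy_after_poisoning(frac):.3f}")
```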
Dr. Peter Garraghan, CEO and CTO of Mindgard and a professor at Lancaster University, remarked on the significance of the research work: "Research has always been fundamental to Mindgard's work and mission. Directing that research towards initiatives that strengthen cybersecurity and address the weaknesses of proprietary AI on a national level is a responsibility and a privilege."
The publication of these reports and the accompanying draft Code of Practice on cyber security governance underscore the British government's proactive stance in fortifying AI cyber security. This new guidance aims to equip directors and business leaders with the necessary tools and knowledge to safeguard AI systems against an increasingly sophisticated array of cyber threats.