CFOtech UK - Technology news for CFOs & financial decision-makers
Story image

ETSI sets global baseline for AI cyber security with new standard

Today

ETSI has published a new technical specification intended to improve the cybersecurity of Artificial Intelligence (AI) systems in response to increasing digital threats.

The document, titled 'ETSI TS 104 223 - Securing Artificial Intelligence (SAI); Baseline Cyber Security Requirements for AI Models and Systems', sets out a series of requirements designed to protect end users and provide actionable guidance for AI security.

The specification takes a lifecycle approach to AI security, set out in 13 core principles which expand into a total of 72 trackable requirements across five distinct lifecycle phases. These are aimed at enhancing the security practices of all actors involved in AI system development and deployment.

Benefits of this approach include the establishment of transparent, high-level security principles, as well as practical provisions for protecting AI. The requirements are intended for a wide range of stakeholders in the AI supply chain, such as developers, vendors, integrators, and operators. The goal is to offer a foundation for defending AI systems amid rapidly evolving cyber threats.

According to ETSI, AI technology involves unique security challenges not present in traditional software. These include risks such as data poisoning, model obfuscation, and indirect prompt injection, as well as issues related to complex data management practices. The new specification responds by fusing established cybersecurity principles with current AI security research and new guidance developed specifically for these threats.

The specification was prepared by the ETSI Technical Committee on Securing Artificial Intelligence (SAI), which is composed of participants from international organisations, government agencies, and cybersecurity experts. ETSI stated that this collaborative, interdisciplinary development process makes the requirements globally relevant and suitable for practical deployment in diverse contexts.

In addition to the main requirements document, ETSI has pledged to release an implementation guide aimed at helping Small and Medium-sized Enterprises (SMEs) and other stakeholders. This supplementary guide will feature case studies covering various deployment environments to support organisations in meeting the security baseline specified by TS 104 223.

Scott Cadzow, Chair of ETSI's Technical Committee for Securing Artificial Intelligence, commented: "In an era where cyber threats are growing in both volume and sophistication and negatively impacting organisations of every kind, it is vital that the design, development, deployment, and operation and maintenance of AI models is protected from malicious and unwanted inference. Security must be a core requirement, not just in the development phase, but throughout the lifecycle of the system. This new specification will help do just that - not only in Europe, but around the world."

"This publication is a global first in setting a clear baseline for securing AI and sets TC SAI on the path to giving trust in the security of AI for all its stakeholders."

ETSI's focus on supporting end users with accessible guidance marks an attempt to raise standards in AI security while facilitating actual implementation by organisations of all sizes. The new specification and its supporting guide aim to serve as reference points for the international AI industry amid ongoing concerns over digital safety and trust.

Follow us on:
Follow us on LinkedIn Follow us on X
Share on:
Share on LinkedIn Share on X