“Cybersecurity of AI in the AI Act”. A Summary by the European Commission’s Joint Research Centre
Very helpful report by the European Commission’s Joint Research Centre on “Cybersecurity of AI in the AI Act”. A summary:
AI security is covered in Article 15 of the draft EU AI Act, together with accuracy and robustness, as well as in Recital 51. For the current text and draft versions of the AI Act, see: https://lnkd.in/dEZ-pdQ9
The AI Act will require high-risk AI systems to be designed with security in mind, aiming for appropriate levels of accuracy, robustness, safety, and cybersecurity throughout their lifecycle. Compliance involves using state-of-the-art (SOTA) measures based on the specific market segment or application.
AI systems will need to incorporate technical solutions to mitigate AI-specific vulnerabilities, including preventing, detecting, responding to, resolving, and controlling attacks related to data and model poisoning, adversarial examples, confidentiality breaches, and model flaws that could lead to harmful decisions.
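As a concrete illustration of one vulnerability on this list, the short sketch below (my example, not the report's) crafts an adversarial example with a one-step FGSM-style perturbation in PyTorch; the toy model, input, and attack budget are all hypothetical assumptions.

```python
# Minimal FGSM-style adversarial example sketch (illustrative assumption,
# not from the JRC report). The toy classifier and data are placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a deployed classifier in a high-risk AI system.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 4, requires_grad=True)  # benign input
y = torch.tensor([1])                      # its assumed true label

# Fast Gradient Sign Method: one gradient step on the *input*, in the
# direction that increases the loss, bounded by a small budget epsilon.
loss = loss_fn(model(x), y)
loss.backward()
epsilon = 0.25  # attack budget (hypothetical)
x_adv = (x + epsilon * x.grad.sign()).detach()

# On a real model, the perturbed input often flips the prediction even
# though it stays close to the original; on this toy model it may not.
print("benign prediction:     ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```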
In alignment with the European Commission’s new standardization strategy, the AI Act emphasizes in Recital 61 the importance of standards in ensuring compliance with the regulation.
Standards on AI-specific cybersecurity are beginning to be developed at the international level but are not yet available; the most notable is ISO/IEC 27090 on AI-specific mitigations and controls. Work at the European level is just beginning.
The Guiding Principles to address the cybersecurity requirements of the AI Act are:
1. The focus of the AI Act is on AI systems:
“An AI system, formally defined in Article 3(1), should be understood as a software that includes one or several AI models as key components alongside other types of internal components such as interfaces, sensors, databases, network communication components, computing units, pre-processing software, or monitoring systems.” (report, p. 9)
2. Compliance with the AI Act necessarily requires a security risk assessment:
A security risk assessment needs to take into account the AI system’s internal structure and intended application context.
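Purely as an illustration (the report does not prescribe a format), such an assessment could enumerate AI-specific threats per internal component, weighting impact by the intended application context; every class, field, and entry in the sketch below is a hypothetical placeholder.

```python
# Hypothetical sketch of a per-component security risk record for an AI
# system; structure and values are assumptions, not the report's method.
from dataclasses import dataclass, field

@dataclass
class Threat:
    name: str        # e.g. "data poisoning"
    likelihood: str  # "low" / "medium" / "high"
    impact: str      # severity in the intended application context

@dataclass
class ComponentRisk:
    component: str   # one internal component of the AI system
    threats: list[Threat] = field(default_factory=list)

assessment = [
    ComponentRisk("training data pipeline",
                  [Threat("data poisoning", "medium", "high")]),
    ComponentRisk("deployed model",
                  [Threat("adversarial examples", "high", "high"),
                   Threat("model extraction", "medium", "medium")]),
]

for c in assessment:
    for t in c.threats:
        print(f"{c.component}: {t.name} "
              f"(likelihood={t.likelihood}, impact={t.impact})")
```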
3. Securing AI systems requires an integrated and continuous approach using proven practices and AI-specific controls:
Cybersecurity for AI systems should incorporate both existing controls for software systems and specific controls tailored to individual AI models at various system levels throughout the system’s lifecycle, guided by security-in-depth and security-by-design principles.
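A minimal sketch of what this layered approach could look like at inference time, combining a conventional software control (input validation) with an AI-specific control (rejecting low-confidence predictions as a crude out-of-distribution proxy); the model, threshold, and function names are illustrative assumptions, not the report's design.

```python
# Defence-in-depth sketch around model inference (assumptions throughout).
import numpy as np

EXPECTED_DIM = 4
CONFIDENCE_THRESHOLD = 0.7  # hypothetical policy value

def model(x: np.ndarray) -> np.ndarray:
    """Toy stand-in classifier returning class probabilities."""
    logits = x @ np.array([[0.5, -0.2], [0.1, 0.3], [-0.4, 0.2], [0.2, 0.1]])
    e = np.exp(logits - logits.max())
    return e / e.sum()

def guarded_predict(x: np.ndarray) -> int | None:
    # Layer 1: classical software control -- validate shape and values.
    if x.shape != (EXPECTED_DIM,) or not np.isfinite(x).all():
        raise ValueError("rejected: malformed input")
    # Layer 2: AI-specific control -- refuse low-confidence predictions,
    # a simple proxy for inputs outside the training distribution.
    probs = model(x)
    if probs.max() < CONFIDENCE_THRESHOLD:
        return None  # escalate to a fallback or human review
    return int(probs.argmax())

# This input yields ~0.68 top-class probability, so it is rejected (None).
print(guarded_predict(np.array([1.0, 0.5, -0.2, 0.3])))
```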
4. There are limits to the state of the art for securing AI models:
As newer and more complex models become more widespread, the difficulty of securing AI systems is expected to increase, and several technological challenges remain open in securing machine learning models.
by Henrik Junklewitz, Ronan Hamon, Antoine-Alexandre André, Tatjana Evas, Josep Soler Garrido, and Ignacio Sanchez Martin 👏