How the EU is starting to define the requirements for General Purpose AI systems (GPAI) such as ChatGPT within the EU AI Act
TL;DR: The European Union's AI Act aims to mitigate potential harm caused by AI, but a major open issue in the negotiations is how the regulation will handle General Purpose AI (GPAI): large AI models, such as ChatGPT or Stable Diffusion, that can be applied to a wide range of tasks. On March 14, 2023, the EU lawmakers leading work on the AI Act circulated a first draft on this topic, proposing significant obligations for providers of these models and clarifying responsibilities across the AI value chain.
For general reference as you read through the rest of this article:
Based on the proposed legislation, a GPAI is defined as "an AI system that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of tasks."
AI systems developed for a limited set of applications that cannot be adapted for a wide range of tasks are not classified as General Purpose AI systems (GPAI).
None of these provisions has been approved yet; they exist only in draft form and have been introduced for discussion following the explosion of ChatGPT and other LLMs.
New Provision for GPAI Requirements: If a large language model qualifies as a GPAI under the proposed legislation, the following requirements could apply to it:
Comply with some of the requirements initially meant for AI solutions more likely to cause significant harm, regardless of the distribution channel and whether the system is provided standalone or embedded in a larger system.
Align the design, testing, and analysis of the GPAI solutions with the risk management requirements of the regulation to protect people’s safety, fundamental rights, and EU values, including by documenting non-mitigable risks.
Follow appropriate data governance measures for the datasets feeding these large language models, such as assessing their relevance, suitability, and potential biases, and identifying possible shortcomings and related mitigation measures.
Undergo external audits throughout their lifecycle testing their performance, predictability, interpretability, corrigibility, safety, and cybersecurity in line with the AI Act’s strictest requirements.
Jointly develop cost-effective guidance and capabilities with international partners to measure and benchmark the compliance aspects of AI systems, and in particular GPAI.
Subject AI models that generate text from human prompts, where the output could be mistaken for authentic human-made content, to the same data governance and transparency obligations as high-risk systems, unless a person assumes legal responsibility for the text.
Register the GPAI model on the EU database.
Comply with the same quality management and technical documentation requirements as high-risk AI providers, and follow the same conformity assessment procedure.
New Provision for Responsibilities: GPAI providers would be expected to be aware of the following:
Any third party, such as a GPAI's distributor, importer, or deployer, would be considered a provider of a high-risk system (with the related obligations) if it substantially modifies an AI system, including a General Purpose AI one.
The provider of the original GPAI system will have to assist the new provider (third party), notably by providing the necessary technical documentation, relevant capabilities, and technical access to comply with the AI regulation without compromising commercially sensitive information.
A new annexe was introduced listing examples of information that the GPAI providers should grant to the downstream operators concerning specific obligations of the AI Act like risk management, data governance, transparency, human oversight, quality management, accuracy, robustness and cybersecurity.
To help the downstream economic operator comply with the risk management requirements of the AI rulebook, the annexe asks the GPAI provider to share information on the system’s capabilities and limitations, use instructions, performance testing results, and risk mitigation measures.
In addition, as regards monitoring and oversight, an EU body would monitor how GPAI models are being used and their providers' self-governance practices.
Bertuzzi, L. (2023, March 14). Leading EU lawmakers propose obligations for General Purpose AI. EURACTIV.com. https://www.euractiv.com/section/digital/news/leading-eu-lawmakers-propose-obligations-for-general-purpose-ai/