The rapid adoption of artificial-intelligence-powered systems such as ChatGPT (which gained more than one million users within weeks of its November 2022 launch and has since been used by more than 100 million people worldwide) has made clear that the question of whether, and how, so-called “general purpose artificial intelligence” (GPAI) should be regulated is not hypothetical, gratuitous, or premature. The reality of AI’s rapid adoption, in the absence of adequate or effective regulation, is at the heart of the debate around the Artificial Intelligence Act: the EU’s flagship proposal, in development for nearly two years, to regulate AI according to its potential to cause harm.
Similar debates are taking place elsewhere, including in Canada, as lawmakers struggle to update legislative frameworks that have been outpaced by the proliferation of unregulated technologies such as artificial intelligence. While AI technologies have delivered many awe-inspiring medical and scientific benefits, many have also been shown to have profound impacts on personal autonomy, privacy, society, and democratic freedoms.
Given that EU regulation will likely become the de facto global standard for general-purpose AI, much as the GDPR did for privacy, an international group of leading researchers and institutions from across domains has published a policy brief setting out considerations to guide the regulation of “General Purpose AI” in the EU’s AI Act. The recommendations are valuable for lawmakers, lawyers, insurers, academics, system designers, and privacy practitioners in all sectors and countries.
A coherent approach to addressing AI harms globally is essential to ensure the laws and regulations governing the design, production, sale, and use of AI are as consistent and future-proof as possible.
The policy guidance for the EU AI Act, which will set the regulatory tone for addressing AI harms, offers thoughtful recommendations applicable to regulating artificial intelligence globally. It argues the following:
- GPAI is an expansive category. For the EU AI Act to be future-proof, it must apply across a spectrum of technologies, rather than be narrowly scoped to chatbots and large language models (LLMs). The definition used in the Council of the EU’s general approach for trilogue negotiations provides a good model.
- GPAI models carry inherent risks and have already caused demonstrated, wide-ranging harms. While these risks can be carried over to a wide range of downstream actors and applications, they cannot be effectively mitigated at the application layer.
- GPAI must be regulated throughout the product cycle, not just at the application layer, in order to account for the range of stakeholders involved. The original development stage is crucial, and the companies developing these models must be accountable for the data they use and the design choices they make. Without regulation at the development layer, the current structure of the AI supply chain effectively enables actors developing these models to profit from a distant downstream application while evading any corresponding responsibility.
- Developers of GPAI should not be able to relinquish responsibility through a standard legal disclaimer. Such an approach creates a dangerous loophole that lets the original developers of GPAI (often well-resourced large companies) off the hook, instead placing sole responsibility on downstream actors that lack the resources, access, and ability to mitigate all risks.
- Regulation should avoid endorsing narrow methods of evaluation and scrutiny for GPAI that could result in a superficial checkbox exercise. Standardized documentation practices and other approaches to evaluating GPAI models, particularly generative AI models, across many kinds of harm remain an active and hotly contested area of research, and should be subject to wide consultation, including with civil society, researchers, and other non-industry participants.
The Privacy and Access Council of Canada stands with the Distributed AI Research Institute, the Mozilla Foundation, the AI Now Institute, AlgorithmWatch, and internationally recognized experts in computer science, law and policy, and the social sciences, who agree that General Purpose AI carries serious risks and harmful unintended consequences, and must not be exempted under the EU AI Act or equivalent legislation in Canada or elsewhere.
You can read the full brief and the list of signatories at https://ainowinstitute.org/wp-content/uploads/2023/04/GPAI-Policy-Brief.pdf