
Privacy and Access Council of Canada

The voice for privacy and access


Five considerations to guide the regulation of “General Purpose AI”

14/Apr/2023

The rapid adoption of artificial-intelligence-powered systems including ChatGPT — which gained more than one million users within weeks of its launch in November 2022 and has since been used by more than 100 million people worldwide — has made clear that the question of whether (and how) so-called “general purpose artificial intelligence” (GPAI) should be regulated is not hypothetical, gratuitous, or premature. The reality of AI’s rapid adoption, in circumstances that lack adequate or effective regulation, is at the centre of the debate around the Artificial Intelligence Act, the EU’s flagship proposal to regulate AI according to its potential to cause harm, which has been under negotiation for nearly two years.

Similar debates are taking place elsewhere, including in Canada, as lawmakers struggle to update a legislative framework that has been outpaced by the proliferation of unregulated technologies such as artificial intelligence. While AI technologies have delivered awe-inspiring medical and scientific benefits, many have also been shown to have profound impacts on personal autonomy, privacy, society, and democratic freedoms.

Given that EU regulation will likely become the de facto global standard for general purpose AI in much the same way as the GDPR did for privacy, an international group of leading researchers and institutions from across domains has published a policy brief setting out considerations to guide the regulation of “General Purpose AI” in the EU’s AI Act. The recommendations are valuable for lawmakers, lawyers, insurers, academics, system designers, and privacy practitioners in all sectors and countries.

A coherent approach to addressing AI harms globally is essential to ensure the laws and regulations governing the design, production, sale, and use of AI are as consistent and future-proof as possible. 

The policy guidance for the EU AI Act, which will set the regulatory tone for addressing AI harms, offers thoughtful recommendations applicable to regulating artificial intelligence globally. It argues the following:

  1. GPAI is an expansive category. For the EU AI Act to be future-proof, it must apply across a spectrum of technologies, rather than be narrowly scoped to chatbots and large language models (LLMs). The definition used in the Council of the EU’s general approach for trilogue negotiations provides a good model.
  2. GPAI models carry inherent risks and have caused demonstrated and wide-ranging harms. While these risks can be carried over to a wide range of downstream actors and applications, they cannot be effectively mitigated at the application layer.
  3. GPAI must be regulated throughout the product cycle, not just at the application layer, in order to account for the range of stakeholders involved. The original development stage is crucial, and the companies developing these models must be accountable for the data they use and the design choices they make. Without regulation at the development layer, the current structure of the AI supply chain effectively enables actors developing these models to profit from a distant downstream application while evading any corresponding responsibility.
  4. Developers of GPAI should not be able to relinquish responsibility using a standard legal disclaimer. Such an approach creates a dangerous loophole that lets the original developers of GPAI (often well-resourced large companies) off the hook, instead placing sole responsibility with downstream actors that lack the resources, access, and ability to mitigate all risks.
  5. Regulation should avoid endorsing narrow methods of evaluation and scrutiny for GPAI that could result in a superficial checkbox exercise. Standardized documentation practices and other approaches for evaluating GPAI models, particularly generative AI models, across many kinds of harm remain an active and hotly contested area of research, and should be subject to wide consultation, including with civil society, researchers, and other non-industry participants.

The Privacy and Access Council of Canada stands with the Distributed AI Research Institute, the Mozilla Foundation, the AI Now Institute, AlgorithmWatch, and internationally recognized experts in computer science, law and policy, and the social sciences, who agree that General Purpose AI carries serious risks and harmful unintended consequences, and must not be exempted under the EU AI Act or equivalent legislation in Canada or elsewhere.

You can read the full brief and the list of signatories at https://ainowinstitute.org/wp-content/uploads/2023/04/GPAI-Policy-Brief.pdf

Filed Under: AI, Democracy, Legislation, Privacy, Technology


PACC is the voice for privacy and access.

PACC is Independent  •  Non-profit  •  Non-partisan  •  Non-government

PACC is dedicated to the development and promotion of the access-to-information, information privacy, and data governance profession across the private, non-profit and public sectors.

PACC is the certifying body for access and privacy professionals, and engages in outreach efforts to advance awareness about access, privacy, and data protection.


© 2023 · Privacy and Access Council of Canada · Maintained by SLIcore Design.