Amendments Submitted Regarding Canada's Proposed Artificial Intelligence Legislation
Published October 30, 2023
The Digital Charter Implementation Act, 2022 (Bill C-27) continues its progress through Parliament, having been referred to the Standing Committee on Industry and Technology (INDU) after passing second reading in the House of Commons on April 24, 2023. More recently, on October 3, 2023, the Minister of Innovation, Science and Industry (Minister) provided a series of proposed amendments to Bill C-27 (C-27 Amendments) for INDU's review and consideration.
If Bill C-27 passes, it will result in the enactment of three new statutes: (a) the Consumer Privacy Protection Act (CPPA), which will replace the current federal privacy legislation, the Personal Information Protection and Electronic Documents Act (PIPEDA); (b) the Personal Information and Data Protection Tribunal Act (PIDPTA), which will establish an adjudicative tribunal; and (c) the Artificial Intelligence and Data Act (AIDA), which will govern the responsible deployment of Artificial Intelligence (AI) technologies in Canada.
Canada is among the first jurisdictions to propose legislation governing AI. The European Union is also advancing AI legislation (the EU AI Act),[1] and the United States is moving to regulate AI, although currently through a Presidential Executive Order rather than legislation.[2] AIDA is a novel piece of legislation which, if enacted, will impose significant obligations on organizations and individuals using and creating AI systems, including what it refers to as "high-impact systems". This article gives a brief overview of AIDA, with a focus on the C-27 Amendments that relate to it.
Last fall, BD&P published an article commenting on Bill C-27's potential impact on Alberta's provincial privacy legislation (here). This article is the first in a series explaining each new statute proposed under Bill C-27, as well as the C-27 Amendments affecting those proposed statutes.
Who will AIDA affect?
AIDA, as proposed, will cover most aspects of AI development and use in the course of international or interprovincial trade and commerce. The regulated activities are broad and include designing, developing or making available for use an AI system (or managing its operations), as well as processing or making available data relating to human activities for use in AI systems.[3]
What are the general obligations under AIDA?
Some of the key features of AIDA can be described through the following proposed requirements:
- Assessment and Record Keeping: Those responsible for an AI system must assess whether it is a "high-impact system" and keep records of that assessment.[4]
- Risk Mitigation: Parties responsible for high-impact systems must identify, assess, and mitigate risks of harm or bias, maintain records of these measures, and ensure ongoing compliance and effectiveness.[5]
- Transparency: High-impact system operators must publish, on a publicly accessible website, clear descriptions of the system's purpose, the content it generates, the decisions it makes, the mitigation measures in place, and other prescribed information.[6]
- Reporting: If the use of a high-impact system results, or is likely to result, in material harm, the responsible party must promptly notify the Minister.[7]
- Anonymized Data: Those responsible for AI systems that carry out certain regulated activities using anonymized data must establish measures and record-keeping practices for anonymizing and managing this information.[8]
What are the fines for violations and other offences?
One of the most significant implications of AIDA for businesses is the scale of the proposed fines. Non-compliance with the key obligations imposed on persons responsible for regulated activities[9] could result in a fine of up to $10 million or 3% of the entity's gross global revenues, whichever is greater.[10] Fines would increase to either the greater of $20 million or 4% of the entity's gross global revenues, or the greater of $25 million or 5% of the entity's gross global revenues,[11] if the organization commits certain offences under AIDA.[12]
The proposed C-27 Amendments clarify who is responsible for what
In the current draft of AIDA, certain obligations are unclear. For example, the statute does not define what constitutes a "high-impact system", yet it would require each entity to determine for itself whether any of its systems qualify. The proposed C-27 Amendments provide much-needed clarity, with respect to high-impact systems and other matters, by:
- defining the classes of systems that would be considered "high-impact systems";
- specifying distinct obligations for general-purpose AI systems, such as ChatGPT;
- clearly differentiating the roles and obligations of actors in the AI value chain;
- strengthening and clarifying the role of the proposed AI and Data Commissioner; and
- aligning certain definitions with the EU AI Act and the AI approaches of other advanced economies.
The classes of high-impact systems
Perhaps the most illuminating of the C-27 Amendments is the proposed classification of "high-impact systems", which identifies specific classes of systems whose use could significantly affect individuals, including their health and safety.
The proposed list includes seven areas of high impact:[13]
- Employment: AI used in employment decisions, such as hiring, pay, promotions, training, transfers, and terminations.
- Service Determination: AI used in deciding whether to offer services to an individual, determining the type and cost of services, and prioritizing service delivery.
- Biometric Information: AI processing biometric data for individual identification or behaviour analysis, excluding consent-based authentication.
- Content Moderation: AI used to moderate online content on platforms such as search engines and social media, and to prioritize its presentation.
- Healthcare and Emergency Services: AI used in healthcare or emergency services, excluding certain medical device-related uses.
- Legal Decisions: AI used by courts or administrative bodies to make determinations in cases involving individuals.
- Law Enforcement Assistance: AI aiding peace officers in performing law enforcement powers and duties.
While the C-27 Amendments have only recently been introduced and are still being considered, monitoring their progress will be important to help Canadian companies—particularly in the technology sector—understand what new AI governance measures they will need to implement in the near future.
Stay tuned for future updates on Bill C-27 and the C-27 Amendments. If you have any questions about the proposed amendments, please reach out to any member of our Intellectual Property group.
[1] https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
[2] https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
[3] AIDA s. 5(1)
[4] AIDA ss. 7, 10(1)
[5] AIDA ss. 8-9
[6] AIDA ss. 11(1), 11(2)
[7] AIDA s. 12. Under AIDA s. 5(1), “harm” is defined as (a) physical or psychological harm to an individual; (b) damage to an individual’s property; or (c) economic loss to an individual.
[8] AIDA ss. 5, 6
[9] AIDA ss. 6-12
[10] AIDA s. 30
[11] AIDA s. 40
[12] AIDA ss. 38, 39
[13] Minister of Innovation, Science and Industry, proposed amendments to Bill C-27 (October 3, 2023): MinisterOfInnovationScienceAndIndustry-2023-10-03-e.pdf (ourcommons.ca)