Australia
In early 2024, the Australian Government released an interim response following a consultation in 2023. The interim response outlines proposed next steps for AI reform and focuses on the following key areas:
- preventing harms from occurring through testing, transparency and accountability;
- clarifying and strengthening laws to safeguard citizens;
- working internationally to support the safe development and deployment of AI; and
- maximizing the benefits of AI.
As part of the initiatives proposed for the first of these areas, the Government committed to developing an AI Safety Standard and implementing risk-based guardrails for industry, which were released on September 5, 2024, as part of its measures to regulate AI.
There are 10 voluntary guardrails (forming part of the Voluntary AI Safety Standard) currently in effect, which apply to all Australian organizations and to organizations throughout the AI supply chain. A further 10 mandatory guardrails, applicable to AI in high-risk settings, have been released in draft form and are subject to further consultation.
Brazil
Brazil has proposed a comprehensive AI Bill, which is currently being debated (as of July 2024) by the Brazilian legislature.
The Bill’s primary aim is to grant individuals significant rights and to place specific obligations on companies that develop or use AI technology (AI suppliers or operators). To achieve this, the Bill establishes a new regulatory body to enforce the law and takes a risk-based approach, classifying AI systems into different risk categories. It also introduces a protective system of civil liability for providers and operators of AI systems, along with a reporting obligation for significant security incidents.
Canada
The Government of Canada tabled the Artificial Intelligence and Data Act (AIDA) in June 2022 as part of Bill C-27, the Digital Charter Implementation Act, 2022. Following the second reading in the House of Commons in April 2023, Bill C-27 is currently being studied by the Standing Committee on Industry and Technology.
The AIDA proposes the following approach:
- Building on existing Canadian consumer protection and human rights law, AIDA would ensure that high-impact AI systems meet the same expectations with respect to safety and human rights to which Canadians are accustomed. Regulations defining which systems would be considered high-impact, as well as specific requirements, would be developed in consultation with a broad range of stakeholders to ensure that they are effective at protecting the interests of the Canadian public, while avoiding imposing an undue burden on the Canadian AI ecosystem.
- The Minister of Innovation, Science, and Industry would be empowered to administer and enforce the Act, ensuring that policy and enforcement move together as the technology evolves. An office headed by a new AI and Data Commissioner would be created as a centre of expertise in support of both regulatory development and administration of the Act. The commissioner's functions would evolve gradually from education and assistance alone to include compliance and enforcement, once the Act has come into force and the ecosystem has adjusted.
- New criminal law provisions would prohibit reckless and malicious uses of AI that cause serious harm to Canadians and their interests.
The Government of Canada launched the Canadian Artificial Intelligence Safety Institute (CAISI) on November 12, 2024, to leverage Canada’s world-leading AI research ecosystem and talent base to advance the understanding of risks associated with advanced AI systems and to drive the development of measures to address those risks. CAISI will conduct research under two streams: applied and investigator-led research, and government-directed projects.
China
China does not have a comprehensive AI Act but does have a number of regulations that focus on subsets of AI.
China’s Interim Measures for the Management of Generative Artificial Intelligence Services (the "AI Measures") came into effect on August 15, 2023. The AI Measures are formulated to "promote a healthy development and regulated application of generative artificial intelligence, safeguard national security and social public interests, and protect the lawful rights and interests of citizens, legal persons and other organizations."
The Measures include:
- Lawful use
- Data labeling rules
- Data training
- Content moderation
- Reporting mechanism
The AI Measures apply to companies when they provide generative AI services to the public within China, regardless of where they are incorporated.
The Deep Synthesis Provisions, which came into force on January 10, 2023, are designed to implement the Outline for the Construction of a Rule of Law-governed Society (2020-2025) formulated by the Chinese central government. The goal is to improve the standardized management of new technologies such as algorithmic recommendations and deep fakes, define the "bottom line" and "red line" for deep synthesis services, and maintain a healthy cyberspace ecosystem.
Effective as of December 1, 2023, the Ethics Review Measures aim to address the social and ethical challenges arising from the development of science and technology, boost innovation, and improve the regulatory frameworks and legal landscape for ethical review.
The measures set out detailed requirements for the procedures and standards of ethics reviews in the areas of science and technology. Activities that are subject to these reviews include scientific and technological activities (e.g., AI development) which involve humans or experimental animals, or may otherwise pose ethical challenges related to life and health, the environment, public order and sustainable development.
European Union
The European Union’s comprehensive AI Act was passed by the European Parliament on March 13, 2024. The Act entered into force on August 1, 2024, and will be effective from August 2, 2026, except for the specific provisions listed in Article 113.
The AI Act is designed to ensure AI developed and used in the EU is trustworthy, with safeguards to protect people's fundamental rights.
Specifically, the AI Act:
- addresses risks specifically created by AI applications
- prohibits AI practices that pose unacceptable risks
- determines a list of high-risk applications
- sets clear requirements for AI systems for high-risk applications
- defines specific obligations for deployers and providers of high-risk AI applications
- requires a conformity assessment before a given AI system is put into service or placed on the market
- puts enforcement in place after a given AI system is placed on the market
- establishes a governance structure at European and national level
Additionally, the European Commission signed the Council of Europe Framework Convention on Artificial Intelligence in September 2024 on behalf of the European Union. The Convention is the first legally binding international instrument on AI and is fully compatible with Union law in general, and the EU AI Act in particular. It provides for a common approach to ensure that activities within the lifecycle of AI systems are compatible with human rights, democracy and the rule of law, while enabling innovation and trust.
India
While there are currently no specific laws or legislation in India regarding AI regulation, various frameworks are being formulated to guide the regulation of AI, including:
- The National Strategy for Artificial Intelligence (June 2018), which aims to establish a strong basis for future regulation of AI in India.
- The Principles for Responsible AI (February 2021), which serve as India’s roadmap for the creation of an ethical, responsible AI ecosystem across sectors.
- The Operationalizing Principles for Responsible AI (August 2021), which emphasize the need for regulatory and policy interventions, capacity building, and incentivizing ethics by design with regard to AI.
Japan
While Japan currently has no law specifically directed at regulating AI, the Japanese government published new AI Guidelines for Business Version 1.0 in April 2024. The guidelines are not legally binding but are expected to support and induce voluntary efforts by developers, providers and business users of AI systems through compliance with generally recognized AI principles and a risk-based approach.
Additionally, a government working group has proposed a regulatory law for AI, entitled the Basic Act on the Advancement of Responsible AI, which would adopt a hard law approach to regulate certain generative AI foundation models. Under the proposed AI Bill, the government would designate the AI systems and developers that are subject to regulation, impose obligations on them with respect to the vetting, operation, and output of the systems, and require periodic reports concerning such AI systems.
Switzerland
The Federal Council has tasked the Federal Department of the Environment, Transport, Energy and Communications (DETEC) with submitting a report by the end of 2024 to identify possible approaches to regulating AI in Switzerland. This analysis is meant to serve as a basis for the Federal Council to issue a specific mandate for drafting an AI regulation in 2025.
In collaboration with the University of Zurich, the Digital Society Initiative published a position paper in 2021, which outlines the approaches that should be taken to the legal coverage of algorithmic systems* in Switzerland, the issues that require particular attention, and how Switzerland should position itself in the context of European regulatory trends.
*The term “algorithmic systems” is used in the position paper instead of AI.
United Kingdom
While the U.K. does not have any AI-specific legislation, the U.K. government issued a white paper on its domestic AI regulation in March 2023.
The white paper indicates a clear intention from the U.K. government to create a proportionate and pro-innovation regulatory framework, focusing on the context in which AI is deployed rather than the technology itself.
At the heart of the U.K.'s framework are five guiding principles that govern the responsible development and use of AI across all sectors of the economy. These principles are:
- Safety, security, and robustness: ensuring AI systems are reliable and secure.
- Appropriate transparency and explainability: making sure AI operations are transparent and can be easily understood by users.
- Fairness: ensuring AI does not contribute to unfair bias or discrimination.
- Accountability and governance: holding AI systems and their operators accountable for their actions.
- Contestability and redress: providing mechanisms for challenging AI decisions and seeking redress.
Note: this information was published before the change in U.K. government following the 2024 general election. It is unclear whether the new government will proceed with this framework.
United States
Federal:
The U.S. lacks a unified AI regulation but has proposed or established numerous guidelines and frameworks to govern the AI sector on a federal level.
The Algorithmic Accountability Act of 2022 aims to hold organizations accountable for their use of algorithms and other automated systems that are involved in making critical decisions that affect the lives of individuals in the U.S. Among other requirements, the Act would mandate covered entities to conduct impact assessments of the automated systems they use and sell in accordance with regulations that would be set forth by the Federal Trade Commission. The Act is currently before the Senate Committee on Commerce, Science, and Transportation.
Currently before the U.S. House of Representatives, the No AI Fake Replicas & Unauthorized Duplications Act (“No AI Fraud Act”) is bipartisan legislation establishing safeguards to protect against generative AI abuses that stem from the unauthorized copying of a person’s individuality and result in deepfakes, voice clones, and non-consensual impersonations.
Currently before the U.S. Senate, the Nurture Originals, Foster Art, and Keep Entertainment Safe Act of 2024 (“NO FAKES Act”) legislation would create an enforceable new federal intellectual property right allowing victims of nonconsensual deepfakes and voice clones to have them quickly taken down and recover damages.
Note: this legislation was introduced before the change in administration following the 2024 U.S. election. It is unclear whether the new Congress will proceed with passing this legislation.
State:
The state-level legislation included below is specific to protecting performers and artists from the misuse of AI:
California
The state of California passed two AI bills on September 17, 2024, both of which will take effect January 1, 2025. Assembly Bill No. 1836 (Use of likeness: digital replica) protects against the unauthorized exploitation of digital replicas of deceased personalities, making it illegal to produce, distribute, or otherwise make available the digital replica of a deceased personality’s voice or likeness in an expressive audiovisual work or sound recording without prior consent. Assembly Bill No. 2602 (Contracts against public policy: personal or professional services: digital replicas) protects individuals (including performers) against vague, unfair, and unethical contractual terms that may, deceptively and without the performer’s full awareness, permit the unregulated production, use and distribution of digital replicas of their likeness.
Assembly Bill No. 2013 (Generative artificial intelligence: training data transparency), which was passed on September 24, 2024, and will take effect January 1, 2026, requires AI developers to disclose whether they are training their models on copyrighted work.
Tennessee
The state of Tennessee, already one of only three states where name, photograph and likeness are considered a property right rather than a right of publicity, became the first state in the U.S. to enact legislation designed to protect songwriters, performers and other music industry professionals against the potential dangers of AI. The state’s ELVIS Act (the Ensuring Likeness, Voice, and Image Security Act), which came into effect July 1, 2024, adds vocal likeness to that list. The law also creates a new civil action under which people can be held liable if they publish or perform an individual's voice without permission, or use technology to produce an artist's name, photograph, voice or likeness without proper authorization.