Artificial intelligence: A fractured regulatory environment

Artificial intelligence (AI) is advancing at a rapid pace. Generative AI tools such as ChatGPT have made headlines for their accessibility and potential for technological disruption.

Countries around the world have started assessing and implementing regulatory frameworks to manage the risks posed by AI. This update provides a snapshot of the current AI regulatory landscape in Australia, the European Union and the United Kingdom.

Australian perspective: From a principles-based to a risk management approach

To date, Australia has taken a soft-law, principles-based approach to regulating AI. The Australian Government's AI Ethics Framework, published in November 2019, sets out eight principles for ensuring AI is used in a way that is safe, secure and reliable. These voluntary principles may be applied by businesses or government when designing, developing, integrating and using AI.

While there are currently no laws or regulations specific to AI in Australia (although existing laws can and do apply), the Australian Government is continuing to consider specific AI regulation as part of its strategy to position Australia as a leader in digital economy regulation.

In June 2023, the Australian Government released its discussion paper Safe and Responsible AI in Australia for public consultation. This built on the recent Rapid Research Report on Generative AI delivered by the National Science and Technology Council.

The discussion paper seeks feedback on the steps Australia can take to mitigate the potential risks of AI and support safe and responsible AI practices. It sets out 20 questions for stakeholders to consider, designed to assist the Australian Government in supporting responsible AI practices and increasing public trust and confidence in the development and use of AI in Australia.

The consultation period for submissions on the discussion paper closes on 26 July 2023.

Overseas perspective: A mixed approach

Globally, jurisdictions differ in how they approach the regulation and governance of AI. A complex question is whether prescriptive legislative intervention or principles-based regulation is the more appropriate approach, or whether it is more effective to recalibrate existing regulatory systems.

European Union

In 2021, the European Commission released a proposed regulation for AI, the Artificial Intelligence Act (AI Act). The proposed AI Act would introduce a European Union (EU)-wide framework to regulate the development, deployment and use of AI systems, with the aim of ensuring the safe and responsible use of AI.

While this approach has the advantage of consolidating AI-specific rules under one law, it remains to be seen whether it is the most appropriate response. Attempting to regulate technology whose capabilities are not yet fully understood poses a considerable challenge. There may also be different implications for different use cases in different sectors, including competing needs across sectors, that a single piece of legislation cannot easily accommodate.

The AI Act is likely to become the international standard for AI regulation, much as the General Data Protection Regulation (GDPR) has become for data protection.

United Kingdom

The United Kingdom (UK) has expressed its intention to regulate AI by adopting a flexible, principles-based and pro-innovation approach. In March 2023, the UK Government published a policy paper outlining its regulatory framework approach.

The UK seeks to establish a framework underpinned by five principles:

  1. Safety, security and robustness
  2. Appropriate transparency and explainability
  3. Fairness
  4. Accountability and governance
  5. Contestability and redress.

Existing regulators will be tasked with interpreting and applying the framework to AI within their regulatory remits. Adopting a regulator-led approach is a key difference from the EU approach and may create more space and flexibility to resolve competing needs and use cases.

Important considerations for businesses adopting AI technology

Until AI-specific regulation is introduced in Australia and internationally, existing laws must not be overlooked when designing, developing and using AI in Australia. These include:

  • the Privacy Act 1988 (Cth), which regulates the collection and handling of personal information
  • the Australian Consumer Law (ACL) and, in particular, the ACL's misleading and deceptive conduct regime, which can apply where organisations make representations about the collection and use of personal information that do not accurately reflect its use in AI systems
  • Commonwealth, state and territory anti-discrimination laws, which in principle apply to the use of AI
  • Commonwealth, state and territory laws governing the use of surveillance devices
  • state and territory defamation and criminal laws.

Businesses should also consider reviewing their internal processes and governance in anticipation of the introduction of AI regulatory frameworks domestically and internationally. As this area of law evolves, businesses should position themselves to be adaptable and responsive to existing laws and to future AI laws.

Further information

Lander & Rogers is watching the AI space with interest and will continue releasing updates as legal developments arise.

For more information on the legal aspects of artificial intelligence, compliance with current regulations and how to be AI regulation-ready, or how to draft an AI governance policy specific to your business, please contact our team of experienced practitioners.

Thank you to Annabelle Gray, Lawyer; Lilli Borozan, Graduate; and Joseph Lau, Graduate for their valuable contribution to this article.

All information on this site is of a general nature only and is not intended to be relied upon as, nor to be a substitute for, specific legal professional advice. No responsibility for the loss occasioned to any person acting on or refraining from action as a result of any material published can be accepted.

Key contacts

Keely O'Dowd

Senior Associate

Kenneth Leung

Lawyer