
Artificial intelligence regulation under the watchful eye of the Digital Platform Regulators' Forum


The Digital Platform Regulators' Forum (DP-Reg) has made a joint submission to government acknowledging the potential for artificial intelligence (AI) to enhance Australia's digital economy, but also to compromise consumer protection, competition, privacy and online safety, as well as the work of Australia's regulatory bodies.

The submission comes in response to the Department of Industry, Science and Resources' call for contributions to its *Safe and responsible AI in Australia* discussion paper, which weighs the benefits of AI against the need to mitigate the risks posed by fast-moving technology.

What is the Digital Platform Regulators' Forum?

DP-Reg is an information-sharing and collaboration initiative between Australian regulators, aimed at ensuring Australia's digital economy is safe, trustworthy, fair, innovative and competitive.

DP-Reg members include the Australian Competition and Consumer Commission (ACCC), the Office of the Australian Information Commissioner (OAIC), the Australian Communications and Media Authority (ACMA) and the eSafety Commissioner (eSafety).

Released on 11 September 2023, DP-Reg's joint submission covers the regulators' intersecting areas of competition, consumer protection, privacy, online safety and data, and considers both the impact of AI under existing laws and the need for future reform.

Explore the key takeaways from the submission below.

The work of regulators

DP-Reg's submission acknowledges AI's potential to compromise the work of the regulators themselves. For example, the technology could be used to generate fake submissions to consultations that do not reflect the genuine concerns or opinions of Australian citizens and businesses, with the potential, and perhaps the intent, to distort consultation processes and outcomes.

Consumer protection and competition

While new AI-based or AI-integrated products and services have the potential to benefit consumers, the technology may also exacerbate online risks when deployed for nefarious purposes such as scams, fake reviews and harmful applications, eroding consumer confidence in the digital economy and distorting competition.

Amplifying the threat of scams

While AI could be used to detect scams, it also has the potential to increase the volume and sophistication of scams.

The newly created National Anti-Scam Centre (NASC) builds on the ACCC's Scamwatch service and will support the enforcement activities of the ACMA and the Australian Securities and Investments Commission (ASIC), representing another example of regulators working together.

There is also support for an economy-wide ban on unfair trading practices to more adequately address online harms, such as persuading consumers to purchase goods online on the strength of fake reviews, or to download a fake software update because it looks and sounds authoritative.

According to the submission, DP-Reg participants believe the application of the Australian Consumer Law (ACL) to the design and supply of digital products could be more clearly set out to protect the interests of consumers.

Eroding competition

While open-source training data for general large language models (LLMs) is available through digital libraries, companies with control over valuable or unique IP-protected data may decide to close access, or charge for it, creating a barrier to entry and further entrenching existing market power.

Furthermore, the use of collusive algorithms by two or more firms may result in anti-competitive behaviour, such as setting prices or determining bids.

In both instances, existing laws may not be sufficient, potentially allowing competing algorithms that learn simultaneously to distort the market by setting higher prices and maximising individual profits at the expense of consumers.
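To make the mechanism concrete, the sketch below simulates two pricing algorithms learning simultaneously in a repeated market, in the spirit of academic work on algorithmic collusion (for example, Calvano et al., 2020). It is not drawn from the DP-Reg submission: the price grid, the winner-takes-the-sale demand rule and the learning parameters are all illustrative assumptions, and whether supra-competitive prices actually emerge is highly sensitive to tuning.

```python
import random

# Hypothetical sketch: two Q-learning pricing agents in a repeated
# market. Each agent conditions on its rival's previous price, which
# is the ingredient that can allow tacit coordination to emerge.
# All numbers below are illustrative assumptions, not market data.

PRICES = [2, 4, 6, 8, 10]                 # hypothetical price levels
EPSILON, ALPHA, GAMMA = 0.05, 0.15, 0.95  # explore / learn / discount rates

def new_q_table():
    # Q[state][action], where the state is the rival's previous price.
    return {s: {a: 0.0 for a in PRICES} for s in PRICES}

def choose(q_table, state):
    if random.random() < EPSILON:         # occasionally explore
        return random.choice(PRICES)
    return max(q_table[state], key=q_table[state].get)

def profits(p0, p1):
    # Simplistic demand: the cheaper firm wins the sale; a tie splits it.
    if p0 < p1:
        return p0, 0.0
    if p1 < p0:
        return 0.0, p1
    return p0 / 2, p1 / 2

q = [new_q_table(), new_q_table()]
last = [random.choice(PRICES), random.choice(PRICES)]

for _ in range(200_000):
    a0 = choose(q[0], last[1])            # each firm reacts to the
    a1 = choose(q[1], last[0])            # rival's last observed price
    r0, r1 = profits(a0, a1)
    for i, (action, reward, state, next_state) in enumerate(
        [(a0, r0, last[1], a1), (a1, r1, last[0], a0)]
    ):
        best_next = max(q[i][next_state].values())
        q[i][state][action] += ALPHA * (
            reward + GAMMA * best_next - q[i][state][action]
        )
    last = [a0, a1]

for i in (0, 1):
    greedy = {s: max(q[i][s], key=q[i][s].get) for s in PRICES}
    print(f"Firm {i}'s learned price response, by rival's last price: {greedy}")
```

The point is not the specific outcome but the structure: neither firm is instructed to collude, yet because each learns from the other's behaviour, coordinated higher pricing can arise without any agreement of the kind existing competition laws readily capture.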

Disinformation/misinformation

AI can be used to create and widely distribute seemingly credible disinformation and misinformation at speed and at very low cost. With Australians turning to AI for 'advice' or answers, there is significant potential for the technology to put false information in front of consumers without them ever realising.

Recommender systems

User engagement is a key metric in the digital economy, and many platforms are designed to maximise audience size and engagement using recommender systems. However, these systems can accelerate the spread of disinformation: controversial stories spread faster and attract more interactions than was possible under less sophisticated, non-AI algorithms.
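As a simplified illustration of the incentive described above (hypothetical, and not any platform's actual system), the sketch below ranks a feed purely by predicted engagement, so a controversial but inaccurate story rises to the top; a second variant shows one possible safeguard, blending in an accuracy signal such as a fact-checking score. All fields and numbers are invented.

```python
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    predicted_engagement: float  # model's estimate of clicks/shares (0..1)
    accuracy_signal: float       # e.g. a fact-checking score (0..1)

def rank_engagement_only(stories: list[Story]) -> list[Story]:
    # Accuracy plays no role, so outrage-driven content rises.
    return sorted(stories, key=lambda s: s.predicted_engagement, reverse=True)

def rank_with_safeguard(stories: list[Story], weight: float = 0.5) -> list[Story]:
    # One possible mitigation: blend engagement with the accuracy signal.
    return sorted(
        stories,
        key=lambda s: (1 - weight) * s.predicted_engagement
        + weight * s.accuracy_signal,
        reverse=True,
    )

feed = [
    Story("Measured policy analysis", predicted_engagement=0.2, accuracy_signal=0.9),
    Story("Outrageous false claim", predicted_engagement=0.9, accuracy_signal=0.1),
]
print([s.title for s in rank_engagement_only(feed)])  # false claim ranks first
print([s.title for s in rank_with_safeguard(feed)])   # analysis ranks first
```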

Safeguards

While recommender systems are commonly used in the online news sector, operators of such technologies can be held to account through broadcasting codes requiring factual content in news and current affairs programs.

AI can also play a role in detecting and moderating disinformation and misinformation to offset or curtail the risks associated with AI-powered recommender systems.

The voluntary Australian Code of Practice on Disinformation and Misinformation, administered by the Digital Industry Group Inc. (DIGI), already requires signatories to provide safeguards against harms arising from disinformation and misinformation. The Australian government is also consulting on new regulatory powers to address the issue.

Privacy

Privacy laws already apply where AI uses personal information. However, AI-based information-handling systems can be complex and opaque, making it difficult for individuals to understand and assess how their information is collected and used.

AI outputs may also contain inaccurate information about an individual, and the long data-retention periods the technology requires can expose individuals to harm arising from data breaches.

These risks call for strong and effective privacy protections to ensure AI meets consumer expectations and garners trust.

Regulation

Privacy in Australia is governed by the Privacy Act 1988 (Cth), which is principles-based, with the Australian Privacy Principles (APPs) applying in a technology-neutral way.

The advantage of the APPs for AI is the flexibility they afford to take a risk-based approach to protecting privacy, which can scale and adapt to different businesses and allows the Privacy Act to complement other legislation. However, specific rules can be made via codes, providing greater clarity on obligations where warranted.

The Privacy Act Review Report has recommended various reforms, including measures to enhance transparency and individual self-management, which would help mitigate the potential privacy risks of AI.

Online safety

AI can be used to create realistic synthetic material, including:

  • child sexual exploitation and abuse material;
  • deepfakes depicting individuals in sexually explicit contexts without their consent, or in activity they never engaged in; and
  • high volumes of content used to bully, abuse or manipulate someone, including for child grooming or to 'pile on' a victim.

The Online Safety Act 2021 (Cth) provides the eSafety Commissioner with regulatory functions to mitigate these risks. Under the Act, eSafety can require social media services, messaging services, apps and websites to report on the reasonable steps they have taken to comply with the Government's Basic Online Safety Expectations (BOSE), enhancing transparency, accountability and, ultimately, safety.

eSafety's four complaints-based investigation schemes capture AI-generated images, text, audio and other content falling within certain 'high-risk' categories, including:

  • class 1 material (including child sexual exploitation and pro-terror material);
  • the non-consensual sharing of intimate images;
  • cyberbullying aimed at children; and
  • cyber abuse aimed at adults.

These schemes provide support to complainants, including by assisting with the removal of content. eSafety also conducts horizon scanning to identify tech trends and emerging online safety issues, and promotes a Safety by Design (SbD) initiative that encourages industry to anticipate potential harms and implement risk-mitigation measures, supported by free risk assessment tools.

The DP-Reg submission recommends that SbD be applied to all AI products and services from the earliest stages of design and throughout their lifecycle.

The DP-Reg submission comes ahead of an imminent review of the Online Safety Act, which will provide an opportunity to further consider AI-related issues and how they might be specifically addressed by governments and regulators.

For assistance engaging in the digital economy or for more information on the findings and recommendations from the DP-Reg submission and how they could impact your business, please contact a member of Lander & Rogers' Digital Economy team.

All information on this site is of a general nature only and is not intended to be relied upon as, nor to be a substitute for, specific legal professional advice. No responsibility for the loss occasioned to any person acting on or refraining from action as a result of any material published can be accepted.