
CyberSight 360: Cyber risk and AI: the good, the bad and the ugly

Artificial intelligence (AI) dominated global headlines in 2024 as generative AI capabilities continued to grow at a rapid pace. Adoption of the technology will no doubt increase in 2025 and beyond as it continues to evolve and advance, with Agentic AI emerging as a highly adaptable, autonomous agent that will open up new possibilities for automation and tap into uncharted territory in personalisation.

At the global level, the AI arms race is already heating up between the United States and China as both countries invest heavily to build advanced AI infrastructure, cultivate AI talent and compete to make the next breakthrough. At the local level, we are seeing businesses increasingly adopt and incorporate AI technology within their system architecture and work processes to improve efficiency and productivity. Even lawyers, who are traditionally slow to adopt new technology, have started to incorporate generative AI into their work, which recently prompted the Supreme Court of New South Wales to issue Practice Note SC GEN 23 to provide direction on where the use of generative AI is acceptable.

In both its current and future form(s), AI certainly has a huge role to play within the cyber security context as well. However, as with every new technology, the good often brings with it the bad, with malicious actors leveraging the same technology to perpetrate crime and widespread disruption.

Focusing our lens on AI within the cyber security context: how can it be used, what challenges does it bring, and why is robust regulation key to the safe development and use of AI?

Malicious use of AI

AI has opened up a number of opportunities for bad actors to launch cyber attacks using novel methods that are becoming harder to detect. Threat actors of every level of sophistication, and with widely varying resources, have used AI to some degree. This is the result of the increasing democratisation of cyber attacks through "franchise models" that provide even amateur bad actors, who have limited resources or experience, with the malicious tools they need to launch their own cyber attacks, including generative AI tools that can be purchased on the dark web.

Threat actors have maliciously utilised AI to perpetrate cyber crime by:

  1. leveraging AI to enhance the scale and effectiveness of cyber attacks; and
  2. attacking the AI technology itself, for example through data poisoning, whereby the training data is corrupted or compromised, which leads to biased, inaccurate or malicious outputs.
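
By way of a simple, hypothetical illustration of the second point, the Python sketch below shows how "label flipping", one basic form of data poisoning, can bias a toy spam classifier towards letting spam through. The synthetic data, the poisoning rate and the choice of model are assumptions made purely for illustration, not a description of any real incident.

```python
# Illustrative sketch only: "label flipping" is one simple form of data poisoning.
# The synthetic data, poisoning rate and model choice are assumptions for this example.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "email feature" data: two clusters standing in for ham vs spam.
X_ham = rng.normal(loc=-1.0, scale=1.0, size=(500, 5))
X_spam = rng.normal(loc=+1.0, scale=1.0, size=(500, 5))
X = np.vstack([X_ham, X_spam])
y = np.array([0] * 500 + [1] * 500)  # 0 = ham, 1 = spam

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Clean model: trained on uncorrupted labels.
clean = LogisticRegression().fit(X_train, y_train)

# Poisoned model: an attacker relabels 40% of the spam in the training set as ham,
# biasing the classifier towards waving spam through.
y_poisoned = y_train.copy()
spam_idx = np.where(y_train == 1)[0]
flip = rng.choice(spam_idx, size=int(0.4 * len(spam_idx)), replace=False)
y_poisoned[flip] = 0
poisoned = LogisticRegression().fit(X_train, y_poisoned)

print("accuracy with clean training data:   ", clean.score(X_test, y_test))
print("accuracy with poisoned training data:", poisoned.score(X_test, y_test))
```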

In particular, bad actors have leveraged AI by:

  • using natural language processing to create convincing phishing emails and to conduct social engineering attacks at scale. AI provides the tools to automate cyber attacks, scan attack surfaces, and even tailor phishing emails to a particular culture or context to make them more convincing. Traditionally, spelling and grammatical errors have been prominent red flags in potential phishing emails, but the use of generative AI in phishing attacks eliminates such signs and makes detection even harder. In one recent example [1], security researchers who tested WormGPT found it capable of producing a phishing email with flawless spelling and grammar
  • using AI-generated synthetic media to create fake images, video or audio recordings to impersonate a legitimate person and thereby convince victims to either transfer money or volunteer sensitive information (i.e. deepfake attacks)
  • automating code generation, which enables rapid creation of new malware variants
  • analysing exfiltrated data more quickly and in a more targeted fashion, in order to undertake triple extortion or perpetrate further attacks with the information found.

While the advanced use of AI by bad actors is likely to be limited to those with access to significant resources, quality training data and deep expertise in both AI and cyber, the cyber criminal community can be collegiate, commercial and innovative. Using a subscription model no different from the Ransomware-as-a-Service model, the more sophisticated bad actors with the resources and capability to invest in AI technology can lower the barriers to entry by offering subscriptions that let amateurs use their AI tools or products to launch AI-powered attacks. For example, it was reported that a bad actor was offering to sell WormGPT under a subscription model ranging from approximately US$112 to US$5,621.

As AI continues to evolve and become more powerful, so will its malicious application. This is unlikely to be prevented even with strict regulation, particularly as it becomes easier for bad actors to gain access to a number of malicious resources on the dark web. However, AI-powered cyber defence systems can play an important role in levelling the playing field.

AI-powered cyber defence: uplifting cyber resilience

While bad actors are able to leverage AI for malicious purposes, cyber security defenders can also utilise AI-driven cyber security tools to improve threat detection and identify new attack vectors.

There are a number of use cases where AI can play an integral part in a defensive strategy against cyber attacks:

  • One of the top advantages of AI is its ability to quickly sift through large datasets. AI can enhance threat intelligence by analysing large datasets in real time, further training the AI model and providing predictive insights that enable cyber security defenders to anticipate attacks and mitigate them proactively.
  • AI can improve incident response times by automating threat detection, containment, analysis and mitigation.
  • AI can also help automate the patch management process and enable faster identification of critical vulnerabilities, reducing the time gap between release of a patch and deployment.

There is much potential in integrating AI into cyber security tools and using AI to power cyber defences. Agentic AI enables automated threat detection that autonomously identifies malware, phishing attempts and network intrusions by analysing real-time data and recognising unusual patterns of behaviour. Critically, Agentic AI can engage in predictive analysis by identifying trends and vulnerabilities that could be exploited in future attacks. This proactive approach will be key in ensuring that security teams implement countermeasures ahead of any potential cyber attack.
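
As a simple, hypothetical illustration of the unusual-pattern-recognition idea described above, the Python sketch below trains an unsupervised anomaly detector on synthetic "normal" network-flow features and then flags flows that deviate from that baseline. The use of scikit-learn's IsolationForest and the invented features are assumptions made for illustration; a production system would rely on real telemetry and far richer signals.

```python
# Minimal sketch of anomaly-based threat detection, as described above.
# The synthetic "network flow" features and the choice of IsolationForest
# are assumptions for illustration, not a production design.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline traffic: bytes transferred, duration (s), destination-port entropy.
normal = np.column_stack([
    rng.normal(5_000, 1_500, 2_000),   # bytes
    rng.normal(2.0, 0.5, 2_000),       # duration
    rng.normal(1.0, 0.2, 2_000),       # port entropy
])

# Learn what "normal" looks like; treat rare outliers as suspicious.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A handful of new flows, e.g. large exfiltration-like transfers.
suspicious = np.array([
    [250_000, 45.0, 3.5],
    [180_000, 30.0, 3.0],
])

print(detector.predict(suspicious))  # -1 flags an anomaly, 1 looks normal
```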

That said, it is likely that in the near future, only state actors and large tech AI organisations will have the ability to further develop and harness the full potential of AI in advancing AI-powered cyber defences. This is because advancing AI-powered cyber defences requires significant resources, infrastructure, expertise and time.

Governments may need to consider how the benefits of advancements in the development of AI-powered cyber defences can be shared and distributed as a public good, particularly to small businesses that cannot afford large budgets for cyber uplift and sophisticated cyber security tools.

A strong commitment to harnessing and developing AI-powered cyber defences would go a long way in tipping the balance against mounting cyber threats.

Ethical, legal and regulatory challenges: robust regulation needed

Despite its many benefits, the use of AI in cyber security also presents ethical, legal and regulatory challenges. As is the case with any form of innovation, measures must be put in place to ensure there are appropriate checks and balances governing the development and use of AI.

If only state actors and large tech AI organisations have the ability to advance AI-powered cyber defences in the short to medium term, the concentration of power over such a significant piece of technology in the hands of a few raises ethical concerns over potential misuse by authorities or large tech AI companies.

We have already seen OpenAI request that the Trump Administration help shield AI companies from a growing number of proposed state regulations if they voluntarily share their models with the federal government. The premise is that the hundreds of AI-related bills currently proposed across the US risk impeding the country's technological progress at a time of renewed competition from China, particularly following recent news surrounding the DeepSeek platform. The request was made in the context of the administration calling for public input in drafting a new policy to ensure US dominance in AI.

The Trump Administration has generally indicated that it will take a deregulatory approach to AI technology, and to date, there has been no federal legislation governing the AI sector. Privacy and intellectual property concerns have long been raised in relation to AI technology and the use of training data, but OpenAI has called for:

  • the US government to take steps to support AI infrastructure investments and requested copyright reform, arguing that America’s fair use doctrine is critical to maintaining AI leadership; and
  • AI companies to get access to government-held data, which could include health-care information, on the basis that such information would help “boost AI development”.

If the US government accepts OpenAI's submissions, this would largely place the US government and US-based AI companies above the law, with no checks and balances in place and little protection for the datasets those companies want to access for training purposes in developing their AI projects.

However, fundamental guiding standards and regulations are critical to innovation and the development and use of AI. They enable AI developers to build with confidence, and provide the public with trust in the security of the technology, particularly as emerging forms of independent, autonomous AI (which require minimal human supervision) increasingly raise questions about reliability, accountability and data security.

In Australia, we currently have the first iteration of the Voluntary AI Safety Standard which consists of 10 voluntary AI guardrails and seeks to support and promote best-practice governance to help more businesses adopt AI in a safe and responsible way. As the name suggests, it is “voluntary”, although it was proposed in September 2024 that mandatory guardrails be introduced, which largely mirror the voluntary standards. One of the guardrails requires that AI developers have appropriate data governance, privacy and cyber security measures in place to appropriately protect AI systems.

While these standards and guardrails are a start, they represent the minimum requirement to regulate the development and use of a powerful technology with significant potential. Clarity on AI-related risks, the legal boundaries of AI liability, and legal consequences for any potential misuse of AI technology is needed now, before more advanced forms of AI develop. Regulators must develop clear and comprehensive guidelines that define AI-related risks and standardise practices across the industry. This includes setting benchmarks for AI system performance, transparency, and accountability, to ensure AI technologies are deployed responsibly and ethically. Otherwise, we risk descending into a lawless sphere, with the unfettered development and use of AI benefiting and playing into the agendas of bad actors.

This article appears in the 2025 edition of CyberSight 360: A legal perspective on cyber security and insurance


[1] Deloitte Threat Report: How threat actors are leveraging artificial intelligence (AI) technology to conduct sophisticated attacks

All information on this site is of a general nature only and is not intended to be relied upon as, nor to be a substitute for, specific legal professional advice. No responsibility for the loss occasioned to any person acting on or refraining from action as a result of any material published can be accepted.

Key contacts

Jack Boydell

Lawyer

Jeffrey Chung

Lawyer

Rebekah Maxton

Lawyer