Supreme Court spotlights importance of privacy and disclosure in AI guidelines

In response to the growing use of generative AI in legal practice, the Supreme Court of Victoria has published guidelines for litigants and lawyers using AI tools to assist the litigation process.

The Court recognises that AI tools drive efficiencies for litigants and lawyers by streamlining information management, document preparation and simple legal tasks, delivering commensurate savings in the time and cost of litigation.

The guidelines highlight that, as with all legal technology, lawyers need to understand how AI tools function and be mindful of protecting client confidentiality and personal information when using a particular tool. Lawyers should also be transparent about their use of AI and always exercise care and professional independence when reviewing the output of an AI tool.

About the guidelines

The guidelines highlight the importance of:

  • properly understanding how AI tools work and their limitations
  • ensuring the privacy and security of the use of data
  • disclosing the use of AI to the Court and other parties where appropriate.

The Court provides examples of how these principles apply to different types of AI tools, including Technology Assisted Review, specialised legal databases, and generative AI and Large Language Models. The guidelines also caution that not all AI tools are created equal, and warn litigants and practitioners that general-purpose AI tools may yield inaccurate, incomplete, incorrect, inapplicable or biased output.

The Court has also clarified that AI is not presently used by judges when drafting decisions, as AI does not engage in a reasoning process, nor in a process specific to the circumstances before the Court.

Key takeaways for lawyers and litigants

The use of AI tools has the potential to drive greater efficiency in the delivery of legal services, and to save litigants time and money. The Supreme Court's guidelines highlight the importance of selecting reliable AI tools, recognising the challenges of employing these tools to assist with court proceedings, and approaching these issues with transparency and candour in interactions with practitioners, litigants and the court.

Responsible use of AI at Lander & Rogers

To leverage the benefits of AI while maintaining the highest ethical and professional standards, Lander & Rogers has implemented a comprehensive policy on responsible AI use that outlines our commitment to client confidentiality, privacy, transparency, accuracy and accountability. Our people are encouraged to experiment with AI's capabilities while ensuring that the data used is handled with the utmost care and integrity.

We have licensed Microsoft Copilot* as our dedicated generative AI tool, which provides a secure environment while meeting privacy and data security obligations.

Earlier this year we launched the AI Lab by Lander & Rogers, which brings together dedicated experts focused on the intersection of innovation, technology, ethics and law, and on the application of artificial intelligence for the benefit of our people, clients, community and the environment. For more details, please visit our AI Lab webpage.

*An initial version of this article was drafted with the assistance of Copilot.
