Risk vs Reward – The Use of Artificial Intelligence in Litigation

Written by Jessica Woods

As the use of artificial intelligence becomes increasingly commonplace in society, we are continually reminded of the significant risks that arise when information obtained from such sources is relied upon without verification.

We are now seeing this more and more in Australia. The recent case of May v Costaras [2025] NSWCA 178 has set the tone not only for unrepresented litigants, but for the broader legal community, when it comes to using AI in litigated matters.

Lily Costaras became involved in litigation after the breakdown of her relationship with Michael May. Costaras and May owned a property together in Maryborough, Queensland, which became the subject of the litigation, in which May sought to establish that Costaras held her legal interest in the property on trust solely for him. The primary judge found in favour of Costaras. May then appealed to the New South Wales Court of Appeal.

Throughout the course of the appeal, it became readily apparent that Costaras had used generative AI in the preparation of her submissions. Costaras, an unrepresented litigant, fell victim to a “hallucinated authority” produced by the AI she had used to assist in preparing for the appeal.

Bell CJ commented that there was no personal criticism of Costaras, “who was self-represented and doing her best to defend her interests”1.

The submissions prepared by Costaras through the use of AI:

  1. included expressions which, on their face, did not make sense (for example, the expression “he went to rescind counter restitution of the earlier joint endeavours”2);
  2. introduced concepts which were inappropriate or irrelevant to the issues in dispute, including reference to the case of Wheatley v Bell [1982] 2 NSWLR 544 which “had nothing remotely to do with the issues in the present case”3;
  3. referred to “hallucinated authorities”; and
  4. included a list of authorities which were either inaccurate or of little relevance to the case itself.

The Court referred to the observations of Dame Victoria Sharp, delivering the reasons of the Court in Ayinde v The London Borough of Haringey [2025] EWHC 1383 (Ayinde). While noting that it is generally accepted that artificial intelligence “is likely to have a continuing and important role in the conduct of litigation in the future”, her Ladyship stated:

Those who use artificial intelligence to conduct legal research notwithstanding these risks have a professional duty therefore to check the accuracy of such research by reference to authoritative sources, before using it in the course of their professional work (to advise clients or before a court, for example).4

Luckily for Costaras, the use of AI did not ultimately affect her case, and May’s appeal was dismissed.

Takeaways

This judgment further confirms both the risks and the opportunities presented by AI. Importantly, it underscores the need to verify any information provided by AI before use, to ensure that the information is real, accurate and relevant.

According to a whitepaper from Gallagher Bassett, “The Carrier Perspective: 2025 Claims Insights”, 72% of global insurers are using AI to automate routine tasks5. As many of our insurer clients look to utilise AI to drive efficiencies within their businesses, consideration must be given to the verification and oversight of any AI-generated documentation.


Should you wish to discuss any of the above, please contact Jessica Woods on 03 9947 4516 or any member of the Ligeti Partners team on 03 9947 4500.

  1. May v Costaras [2025] NSWCA 178 at [2]
  2. Ibid at [6]
  3. Ibid at [8]
  4. Ayinde v The London Borough of Haringey [2025] EWHC 1383 at [5]-[9]
  5. Gallagher Bassett, Generative AI – The Carrier Perspective: 2025 Claims Insights

Ligeti Partners Contacts

Jessica Woods

Principal Lawyer

Melbourne