MELBOURNE, Australia — A senior Australian lawyer has formally apologized to a judge after admitting that court submissions filed in a murder case contained fabricated quotations and citations to legal decisions that do not exist, errors generated by artificial intelligence. The incident, which unfolded in the Supreme Court of Victoria, has drawn renewed attention to the risks of improper use of AI tools in court systems around the world.
The mistake adds to a growing list of AI-related failures in judicial proceedings worldwide, in which unverified output from generative technology has made its way into official legal filings, raising concerns about professional standards and the integrity of court processes.
Defense lawyer Rishi Nathwani, who holds the prestigious title of King’s Counsel, took full responsibility for submitting the flawed material in a case involving a teenager charged with murder. According to court documents reviewed by The Associated Press on Friday, Nathwani addressed the issue directly before Justice James Elliott during a hearing on Wednesday. Speaking on behalf of the defense team, he told the court that the lawyers involved were “deeply sorry and embarrassed” by what had occurred.
In the filings, the defense team conceded that several of the legal citations in its submissions “do not exist” and that portions of the document relied on “fictitious quotes.” The lawyers told the court that they had verified the accuracy of some of the initial case references but incorrectly assumed the remaining AI-generated citations would also be correct. As a result, those additional references were not independently checked before the document was filed.
The erroneous submissions were also provided to prosecutor Daniel Porceddu. Court records indicate that the prosecution did not catch the inaccuracies either, as the cited cases and quotations were not independently reviewed before they were presented to the court.
In addressing the issue, Justice Elliott pointed out that the Supreme Court of Victoria had issued formal guidance last year outlining how lawyers should responsibly use artificial intelligence in legal work. He emphasized that while AI tools may assist with research or drafting, they cannot replace professional judgment or verification. Elliott stressed that reliance on AI-generated material without thorough checking falls short of acceptable legal practice. “It is not acceptable for artificial intelligence to be used unless the product of that use is independently and thoroughly verified,” the judge said, according to the court documents.
The court did not disclose which generative AI system was used by the defense team, and the documents provide no further technical detail about how the fabricated citations were produced.
The Australian case echoes a comparable incident in the United States in 2023, when a federal judge imposed $5,000 fines on two lawyers and their law firm after they blamed ChatGPT for submitting fictional legal research in an aviation injury lawsuit. That case, widely cited in legal circles, became an early warning about the dangers of relying on generative AI without adequate oversight.
Together, the incidents highlight the growing challenge courts face as artificial intelligence becomes more common in legal practice, prompting judges and regulators to clarify expectations around accountability, verification, and ethical use of emerging technologies.