Firms regard GenAI violations as disciplinary matters in the wake of a recent court order requiring claimants’ counsel to personally pay costs for citing a fictitious case
[SINGAPORE] Law firms here are treating breaches of generative artificial intelligence (GenAI) policies as potentially sackable offences, following a landmark High Court case that ordered a lawyer to personally pay S$800 in costs for citing a hallucinated case.
Several firms approached by The Business Times said that violations of their AI guidelines would be treated as serious disciplinary matters, with consequences ranging from restricted tool access to dismissal.
Stephanie Yuen Thio, joint managing partner at TSMP Law, said the key to the firm’s AI policy is that lawyers must continue to meet their professional responsibilities.
“AI, like any other software we use, is only a tool; and it is our responsibility to ensure that the final work product is accurate and appropriate,” she said.
Lawyers who violate this policy will be in breach of employment terms with “attendant consequences”, she added.
At Withers KhattarWong, non-compliance with its AI policy may result in disciplinary action, ranging from retraining and restricted tool access to formal sanctions under its human resources and compliance procedures.
“The principle is clear,” said Chenthil Kumarasingam, the firm’s regional division leader for dispute resolution in Asia. “Lawyers remain personally accountable for their work, whether AI is used or not.”
Similarly, Drew & Napier will treat AI violations “as any other transgression and dealt with according to firm policies, which can result in disciplinary action”, said chief technology officer Rakesh Kirpalani.
Fictitious citation
The firms’ responses come after a Sep 29 High Court judgment where Assistant Registrar Tan Yu Qing found that counsel for the claimants had acted “improperly, unreasonably and negligently” by citing a fictitious authority generated by a GenAI tool in written submissions.
The claimants’ counsel, Lalwani Anil Mangan from DL Law Corporation, initially characterised the error as “clerical” or “typographical” in correspondence with the court. He admitted the case “did not exist” only after questioning during a hearing on Jul 22, 2025.
This is likely one of the first cases in Singapore in which an AI hallucination was cited as a legal precedent.
Lalwani later revealed that a junior lawyer had used an AI app to generate the citation, which he failed to verify before filing the submissions on behalf of his clients.
Assistant Registrar Tan ordered Lalwani to personally pay the defendant S$800 in costs – separate from the usual costs of the application – to compensate for the unnecessary time and expense incurred due to his improper conduct.
On the issue of supervisory responsibility highlighted by the case, the firms’ shared view is that the lead lawyer bears full responsibility for all work output, even when AI-assisted work is first prepared by a junior.
At Withers KhattarWong, the firm adopts a two-tier oversight system: junior lawyers may use GenAI tools for efficiency, but senior lawyers are responsible for validating outputs and ensuring compliance before anything is filed or shared externally.
The firm is also running training modules on prompt engineering, hallucinations and the risks of fabricated citations, emphasising that lawyers must never assume outputs are correct without independent validation, said Kumarasingam.
TSMP holds weekly team meetings and monthly firm-wide meetings to share experiences using AI and problems that have surfaced.
“Legal AI is still in the early stages of development and adoption, so I think responsible law firm leadership requires a hands-on approach to implementation,” said Yuen Thio.
At Drew & Napier, lawyers have been expressly instructed to verify all GenAI output and explain the positions they take in their work, said Kirpalani. They are also expected to disclose AI use when asked.
The requirement to file bundles of authorities – cases that lawyers intend to rely on in their submissions – provides an important opportunity to verify all citations, he added.
Safeguards and checks
Allen & Gledhill has custom-built its own AI tool, A&GEL, with safeguards to mitigate the risks of hallucinations, said Stanley Lai, head of the intellectual property practice and co-head of the cybersecurity and data protection practice.
During development, the firm identified more than 100 potential AI use cases and selected those with the greatest impact and highest likelihood of success, while optimising outputs to be easily verified by lawyers.
“Knowing when and how to utilise AI is a skill in itself,” said Lai. “We expect our lawyers to understand that while AI can augment their existing workflows, it cannot replace the nuanced judgment, ethical reasoning and interpersonal skills that define effective legal practice.”
Said Yuen Thio: “The bottom line is this: Legal AI tools should be used in the same way we would use the work product of a good intern – we need to check the work and ensure that our professional obligations to our clients are fulfilled.”
“Relying on an AI work product without checking is asking for trouble,” she added.