July 29, 2025

Law in the AI Age – Can you take legal advice from a Gen AI Platform…and who is liable?

This article is part 1 of our new series about law in the AI age, exploring the question of liability in the context of online discussions with AI platforms.

Background

We live in the emerging age of generative AI large language models (LLMs), such as ChatGPT, DeepSeek and Google AI. It is a world in which increasingly sensitive ‘advice’ is sought from, and provided by, these AI tools, and this extends to counsel about legal problems.

The scale and breadth of the knowledge repository available from generative AI platforms is beyond debate, and it is tempting for non-lawyers to look to them for immediate help with their legal problems in an effort to avoid engaging a lawyer and paying professional fees. Yet users need to be aware that the outputs received from LLMs are not legal advice, and should not, and cannot, be relied upon as such.

The erroneous outputs or ‘hallucinations’ of generative AI are well known and much discussed. In addition, the Terms of Use for LLMs like ChatGPT exclude essentially all liability for their outputs and disclaim any reliance on them. Unlike qualified lawyers, the owners of the platforms are not regulated by jurisdiction-based law societies and are not required to hold professional indemnity insurance, in effect removing the safety net the legal profession provides.

This article looks at the liability of LLM providers, and in particular OpenAI as the operator of ChatGPT, for wrong advice.

Hallucinations and Errors

It is becoming a kind of modern folklore that LLM outputs are notoriously wrong or “hallucinogenic”. Whilst this will probably change over time, OpenAI CEO and co-founder Sam Altman recently indicated that he was surprised that users trusted ChatGPT so much, stating: “People have a very high degree of trust in ChatGPT, which is interesting, because AI hallucinates.”[1]

The footer of the ChatGPT web-based user interface even carries the warning: ‘ChatGPT can make mistakes. Check important info.’

ChatGPT is known to make up references to sections of legislation that do not exist, as well as to cite false cases as judicial authority, complete with case names and numbers, as though they had truly been heard in court. For those not experienced in the legal profession, it can be challenging to cross-check these references and verify their accuracy. AI identifies and matches patterns, and the more training data available, the more patterns it typically identifies. However, unlike humans, LLMs cannot distinguish specific patterns from more general rules. While AI can generate novel outputs and solutions by combining existing knowledge in unique ways, it cannot ‘think’ or analyse in the way humans do.

According to a study conducted by the University of Melbourne and KPMG[2] involving 48,000 people in 47 countries between November 2024 and January 2025, 66% of people rely on AI output without evaluating its accuracy, and 56% of participants reported making mistakes in their work due to their use of AI.

Interestingly, in another study conducted by Express Legal Funding[3], while 34% of Americans say they generally trust the outputs of ChatGPT, legal and medical advice were the least trusted use cases. This suggests that while ChatGPT is often used as an advisor for everyday matters, a human professional is still preferred when it comes to the more ‘high stakes’ spheres of law and medicine.

Disclaimers under ChatGPT Terms of Use

The ChatGPT Terms of Use expressly warn users against relying on the outputs from their prompts.

By accessing the AI service, the user agrees to the following disclaimers:

  • ‘Output may not always be accurate. You should not rely on Output from our Services as a sole source of truth or factual information, or as a substitute for professional advice.
  • You must evaluate Output for accuracy and appropriateness for your use case, including using human review as appropriate, before using or sharing Output from the Services.
  • You must not use any Output relating to a person for any purpose that could have a legal or material impact on that person, such as making credit, educational, employment, housing, insurance, legal, medical, or other important decisions about them.’

Further, the user ‘accept[s] and agree[s] that any use of outputs from our service is at your sole risk and you will not rely on output as a sole source of truth or factual information, or as a substitute for professional advice.’ This is particularly relevant to legal advice, and further reinforces that OpenAI itself cautions that generative AI outputs should not be considered a substitute for professional advice from a qualified human in the user’s jurisdiction.

Breaking this down into plain English, the disclaimers mean that the user agrees that:

  • The Outputs may not be accurate;
  • The Outputs should not be relied upon without being fact-checked by a human;
  • The Outputs are not professional advice;
  • The Outputs must be evaluated for appropriateness for the user’s circumstances; and
  • The Outputs must not be used where they may have a legal or material impact, or where important decisions are required.

Liability Exclusion under ChatGPT Terms of Use

With respect to OpenAI’s liability for claims arising from ChatGPT outputs, the Terms of Use provide that:

‘Our aggregate liability under these terms will not exceed the greater of the amount you paid for the service that gave rise to the claim during the 12 months before the liability arose or one hundred dollars ($100).’

This significantly limits the recourse users may have against OpenAI. For example, if a user relied on an erroneous statement of claim drafted with the help of ChatGPT, OpenAI’s liability would be capped at $100 for a free-tier user, or at the fees paid in the preceding 12 months for a paying subscriber, if greater.

In addition, liability for consequential or exemplary damages is excluded:

‘Neither we nor any of our affiliates or licensors will be liable for any indirect, incidental, special, consequential, or exemplary damages, including damages for loss of profits, goodwill, use, or data or other losses, even if we have been advised of the possibility of such damages.’

In any event, because of the disclaimers above, it is likely that the user will not be able to bring a claim against OpenAI at all.

Foreign Jurisdiction

As to where the “advice” is given from: ChatGPT, for example, operates under OpenAI’s Terms of Use, which are governed by California law and specify the federal or state courts of San Francisco as the forum for any claims.

Any claim would therefore have to be taken up in San Francisco under Californian law. For most international users, this is costly and impractical, particularly when the Terms of Use significantly limit the liability of OpenAI.

Professional Indemnity Insurance

Under Australian law, including s 210 of the Legal Profession Uniform Law (NSW), law practices are required to hold professional indemnity insurance, to a prescribed minimum level of cover, for claims arising from negligence, errors and omissions in the course of providing legal services. This requirement provides a safety net where legal advice leads to financial loss for a client, or where there is a breach of contract or professional negligence. The insurance covers legal costs, settlements and damages arising from claims made against the practitioner.

The Legal Profession Uniform Law (which applies in NSW, Victoria and Western Australia) requires this insurance to be in place as a condition of holding a practising certificate. LLMs like ChatGPT are software-based tools developed and operated by corporate entities, such as OpenAI LLC. Obviously the LLMs themselves cannot take out professional indemnity insurance, and their corporate owners, such as OpenAI LLC, are not required to hold it.

Further, as LLM outputs are generated without human legal oversight, they are often disclaimed as not constituting legal advice under the service terms of the relevant platform. Accordingly, LLMs cannot engage in insurable professional conduct within the meaning of standard indemnity policies, nor can they trigger the regulatory requirements that would necessitate such cover.

Concluding Remarks

It can be tempting to substitute legal advice from a trained professional with a free and immediate Q&A session with an LLM chatbot. However, users who rely on these predictively generated outputs instead of consulting a qualified human lawyer need to be aware of the risks they are taking on. In particular:

  • The generative AI platforms cannot think or analyse the information they process, and cannot recognise when they are wrong;
  • The Terms of Use disclaim responsibility for the outputs and require independent human fact checking;
  • Essentially all liability is excluded by the owner of the LLM;
  • The company behind the LLM may not be in the jurisdiction in which the law applies; and
  • There is no professional indemnity insurance safety net for errors or hallucinations.

Do you need business legal advice you can trust? Please contact Edwards + Co via our contact details below. Our qualified, human team provides legal solutions for Australian businesses.

[1] https://www.windowscentral.com/software-apps/sam-altman-says-ai-will-be-smarter-than-his-kids

[2] Gillespie, N., Lockey, S., Ward, T., Macdade, A., & Hassed, G. (2025). Trust, attitudes and use of artificial intelligence: A global study 2025. The University of Melbourne and KPMG. DOI 10.26188/28822919.

[3] https://legalfundingjournal.com/34-of-americans-trust-chatgpt-over-human-experts-but-not-for-legal-or-medical-advice/

