Legal privilege and AI under English law: two big issues

Graeme Johnston / 8 March 2026

 

This article suggests some ways of approaching two big issues which arise under English law and procedure when well-established requirements of legal professional privilege are applied to the use of AI tools. The two issues concern:

  1. The requirement of confidentiality 
  2. The requirement of legal advice provided by a lawyer 

 

The article doesn’t seek to provide definitive answers, and I’ve written it in a deliberately informal style without detailed citations, as I want it to be more about principles than authority. I have focused on English law: one US case is cited, but it’s important to bear in mind that US principles of privilege are not identical to the English ones, and neither is the regulatory context. In particular, US notions of “unauthorized practice of law” are generally wider than English “reserved activities”: provision of legal advice is not as such restricted to lawyers under English law, and the reserved activities are much more narrowly drawn (e.g. conduct of litigation).

Definitions

 

Some informal definitions first, just to clarify what we’re talking about here.

  • “Privilege” is the legal right to insist on keeping information to yourself and not having it used against you, where a court or other authority could otherwise compel disclosure, or permit seizure or use. This article focuses specifically on “legal professional privilege” (LPP) – a modern name for a sub-category of privilege which itself contains two main types. (n1)
  • One type is legal advice privilege (LAP). This covers information generated for the dominant purpose of seeking legal advice from a qualified lawyer. It extends to instructions given to the lawyer and also to the advice generated by the lawyer and their preparatory work. The rationale is that, for the rule of law to be meaningful, people need to be able to obtain legal advice, and for them to be able to do this effectively in practice requires them to be confident that the information generated for that purpose won’t be used against them.
  • The other type is litigation privilege (LP). This covers information prepared for the dominant purpose of handling actual, or reasonably contemplated, litigation or other adversarial legal proceedings. Its practical importance is that it (i) goes beyond legal advice and includes, for example, communications with witnesses and (ii) despite the word “professional” in LPP, applies regardless of whether a lawyer is involved in the adversarial matter. Litigants “in person” (self-represented parties) are therefore protected by LP, not just parties with legal representation. LP has been justified as giving people a safe space to prepare their case thoroughly, confident that their preparations won’t be used against them. Just as LAP can be justified in terms of the rule of law, LP can be justified as flowing from the right of access to justice.

 

The general concept was recognised by the English courts many centuries ago, but the details have evolved considerably. Some significant points have only been resolved by the courts in the last few years, and other important issues remain unresolved.

 

On, then, to the two specific issues mentioned at the start.

 

1. The confidentiality issue

It’s clear that confidentiality is one of the requirements of privilege. If you publish your legal advice on Reddit, you lose privilege. And if you use part of your hitherto-confidential legal advice in court to justify what you did on a particular issue, you’re considered to have waived privilege in the totality of the relevant advice. But there hasn’t yet been any detailed analysis in English case law of what level of confidentiality is required in the AI tool context.

 

Mood music

Some important background is that established legal research database providers managed, early on, to implant the idea of a huge gulf between their AI tools and the offerings of general-purpose AI companies in two major respects:

  • Reliability
  • Confidentiality

 

There’s an emphasis on reliability, for example, in the official England and Wales guidance for judges (October 2025 edition), which states that “public AI chatbots do not provide answers from authoritative databases.” The phrase “public AI chatbots” is not defined but in context presumably refers to general-purpose AI tools like ChatGPT, Claude and Google Gemini, implicitly contrasted with the specialist ones offered by companies with their own proprietary legal case databases. This impression of what is meant by “public AI chatbots” is reinforced by the fact that the original December 2023 version of the same guidance referred to “ChatGPT, Google Bard and Bing Chat” as “publicly available examples” of “generative AI chatbots” – though product names have been removed from the 2025 edition. Whatever the position on reliability at the time of the 2023 and 2025 editions, it seems wise to keep an open mind on this as AI develops beyond just “chatbots”. See, for example, this March 2026 piece by a Canadian law professor describing how he used Claude Cowork and the free-to-use CanLII database (similar to BAILII and Find Case Law in the UK) with better results than the Lexis AI research tool.

 

Some background on confidentiality

As to confidentiality, something which I suggest we need to think very carefully about is whether we are imposing notions of confidentiality in the AI context which are equivalent to those expected elsewhere. For example, it seems generally to be assumed that using a cloud service to store legal documents, or communicating by email and other digital means, doesn’t remove the confidentiality required for privilege. That is so even though lawyers generally don’t use “zero knowledge” technology and it’s well known that intelligence services and other state actors have ways to gain access to commonly used cloud services. And even outside such contexts, it is impossible entirely to exclude risks of information security problems. 

 

The pragmatic assumption, the edges of which have not been fully tested in case law, seems to be that the law of privilege expects some attempt to protect confidentiality but doesn’t look too hard at how rigorous it really is. Other situations which sometimes arise, but which are not suggested (so far as I’ve ever heard) to remove privilege, involve lawyers who:

  • Work on a confidential legal matter on their laptop on the train with the screen visible to be read by other passengers
  • Speak identifiably about such a matter on the train in words which can be overheard by other passengers
  • Use a software application which is not approved by their organisation, protected by a password but without multi-factor authentication turned on
  • Handle some legal work on a shared home computer on which their emails may be read by family members, again in breach of the organisation’s IT policy
  • Work in an organisation with deficient information security which predictably leads to a loss of confidential information
  • Carelessly send client information to a counterparty (there have been several reported cases about this over the years)
  • Give another law firm access to unredacted client files, without client consent, as part of a possible acquisition (a disciplinary case from 2018)

 

It can of course be said that a difference between these examples and the use of AI tools is that the examples are individual “accidents” (or at least presented as such), whereas the use of AI may be seen as a more deliberate, ongoing practice. But in reality, some of the examples are already likely to involve ongoing habits or practices: a poor system. And the courts have, so far as I can determine, not shown any enthusiasm to strip away privilege outside cases where a party clearly intended to waive confidentiality. The general approach is quite forgiving.

 

Also: while the lawyers in such cases may be exposed to professional discipline and possible civil liability, this is not the same as saying confidentiality and therefore privilege have been lost.

 

AI tools and confidentiality: judicial statements so far

So far as AI tools specifically are concerned, the official England and Wales guidance for judges (October 2025 edition) isn’t about privilege specifically, but has this to say about confidentiality:

 

“Do not enter any information into a public AI chatbot that is not already in the public domain. Do not enter information which is private or confidential. Any information that you input into a public AI chatbot should be seen as being published to all the world.

The current publicly available AI chatbots remember every question that you ask them, as well as any other information you put into them. That information is then available to be used to respond to queries from other users. As a result, anything you type into it could become publicly known.

You should disable the chat history in public AI chatbots if this option is available, as it should prevent your data from being used to train the chatbot and after 30 days, the conversations will be permanently deleted. This option is currently available in ChatGPT and Google Gemini but not in some other chatbots. Even with history turned off, though, it should be assumed that data entered is being disclosed.”

 

While the drift is apparent, and it’s no doubt justified to err on the conservative side, the text is unsatisfactory:

(i) What is meant by “public AI chatbot” or “publicly available AI chatbots”? Presumably it means the free versions of ChatGPT, Copilot, Claude, Gemini and so on, but what about the various paid tiers? Presumably it also doesn’t mean the AI offerings of providers like Thomson Reuters and Lexis, though these are also “publicly available” in the sense that anyone can subscribe to them, just as anyone can subscribe to ChatGPT. The only major commercial difference is the absence of a free tier. The language of “public” and “publicly available” isn’t ideal for capturing that distinction.

(ii) The notion that “public AI chatbots” should be seen as publishing any information “to all the world” or “into the public domain” is at best highly simplified.

(iii) What sort of information is contemplated by the third paragraph, in which information is not so confidential as to bar the use of such a tool, yet confidential enough that training should be turned off?

 

Also unsatisfactory is this passage in a November 2025 decision of the Upper Tribunal chastising some lawyers for AI-hallucinated case citations. In passing, it had this to say about privilege:

 

“We also observe that to put client letters and decision letters from the Home Office into an open source AI tool, such as ChatGPT, is to place this information on the internet in the public domain, and thus to breach client confidentiality and waive legal privilege, and thus any regulated legal professional or firm that does so would, in addition to needing to bring this to the attention of their regulator, be advised to consult with the Information Commissioner’s Office. Closed source AI tools which do not place information in the public domain, such as Microsoft Copilot, are available for tasks such as summarising without these risks.”

 

The distinction between ChatGPT and Copilot is not as simple as represented there, given the tiers which each product has, ranging from free to enterprise. The reference to “public domain” is also simplistic. I imagine that the October 2025 guidance may have played a part in this: Copilot is, if anything, more on the “public AI tools” side of the line in that it has a free tier. Perhaps the tribunal meant the enterprise version, or perhaps the tribunal members just aren’t very clear in their own minds about any of this. The latter interpretation is supported by the fact that the tribunal badly misuses the term “open source”. Moreover, no software product is bullet-proof on confidentiality, as illustrated by this February 2026 security problem with Copilot.

 

A non-binary situation

The point is: simple binaries along the lines of “ChatGPT bad, Copilot good” (or more abstract versions of that) are not a satisfactory way to approach this topic.

 

To address confidentiality satisfactorily, I would suggest that it’s useful to start by at least acknowledging some of the real-world variants which we already see, such as the following (a short illustrative sketch of the dimensions involved appears after the list):

  1. Free tool with training and data retention left on.
  2. Free tool with training and data retention turned off.
  3. Paid subscription on “consumer” tier of a major AI tool.
  4. Paid subscription on “business” tier of such a tool.
  5. Paid subscription on “enterprise” tier of such a tool.
  6. Private tool entirely controlled by the lawyer or their organisation, which does not share data with any external tool. 
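
For those who find it helpful to see the dimensions separated out, the sketch below models, in Python, the kind of attributes on which these variants differ. All field names and example values are invented for illustration – they are not taken from any provider’s actual terms – and the only point is that several independent dimensions are in play, which is why a single public/private binary doesn’t capture reality.

```python
# Hypothetical illustration: independent dimensions on which AI tool variants
# differ for confidentiality purposes. Field names and example values are
# invented, not taken from any provider's actual terms.

from dataclasses import dataclass
from typing import Optional


@dataclass
class AIToolVariant:
    tier: str                           # e.g. "free", "consumer", "business", "enterprise", "private"
    inputs_used_for_training: bool      # are user inputs used to train future models?
    retention_days: Optional[int]       # how long inputs are retained (None = indefinitely)
    contractual_confidentiality: bool   # does the provider give confidentiality undertakings?


# Two invented examples: the dimensions vary independently, so tools do not
# sort neatly into "public" and "private" categories.
free_defaults = AIToolVariant("free", inputs_used_for_training=True,
                              retention_days=None,
                              contractual_confidentiality=False)
enterprise_tier = AIToolVariant("enterprise", inputs_used_for_training=False,
                                retention_days=30,
                                contractual_confidentiality=True)
```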

 

There is a lot of nuance in the precise contractual and technical confidentiality, privacy and security assurances offered across these tiers and by different providers. And it will no doubt continue to develop over time. I’m not suggesting that there need be a detailed examination of terms in each case: quite the opposite, in fact. Clearly, a workable legal principle needs to be broad enough to be sustainable over time. But this list is offered to illustrate that the sort of binaries seen in the examples above are unsatisfactory.

 

As a practical matter for anyone using AI tools to do legal work, it is clearly wise to avoid variant 1 in the list just offered, as there is already widespread concern about it (including in the judicial quotations above), and it is free and easy to turn off training. Various statements by lawyers arguing for a hard line against litigants who fail to do so are in my view unattractive, but the uncertainties about what will actually happen to the data, and the unpredictability of what the courts will ultimately decide, make it clear that there is risk here which you should avoid, or at least mitigate, if you can.

 

Privilege and the unwise 

But if the case arises in which someone is already in scenario 1, however unwisely, and privilege is challenged on this basis, how should the court approach it? And what should the answer be as one moves through scenarios 2, 3 and so on?

 

In considering this, I would suggest that attention ought to be given to the following issues, in addition to the points already mentioned:

  1. How memorisation works in such tools, and the feasibility of extraction attacks taking into account the countermeasures. This needn’t go into great detail and shouldn’t focus on any particular tool, as that would quickly become out of date, but there should be some judicial engagement with these issues.
  2. As part of that, consideration should be given to what extraction attacks actually involve (a schematic illustration follows this list) and whether the possibility of that sort of determined act is really the sort of thing that should be regarded as removing confidentiality for the purpose of privilege.
  3. How the risk really compares to other common non-AI situations, such as those listed above, where confidentiality is already at some risk.
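
To make the second of those points concrete, here is a deliberately simplified sketch, in Python, of the shape of an extraction attack. The query_model function and the stub “model” below are hypothetical stand-ins for whatever interface a given tool exposes – not any real product’s API – and real attacks, like real provider countermeasures, are considerably more involved. What it illustrates is that the attacker must already know or guess part of the confidential text, and must make many targeted queries in the hope of a verbatim completion.

```python
# Schematic sketch of an "extraction attack" against a model suspected of
# having memorised confidential text. query_model is a hypothetical stand-in
# for a real tool's prompt-in, completion-out interface.

from typing import Callable


def extraction_probe(
    query_model: Callable[[str], str],  # hypothetical prompt -> completion
    known_prefix: str,      # text the attacker already knows or guesses
    target_fragment: str,   # the confidential text being probed for
    attempts: int = 100,
) -> bool:
    """Return True if any completion reproduces the target fragment verbatim,
    i.e. if it appears to have been memorised and is recoverable by a
    determined attacker making repeated queries."""
    for _ in range(attempts):
        if target_fragment in query_model(known_prefix):
            return True
    return False


# Toy demonstration with a stub "model" that has memorised a single phrase.
if __name__ == "__main__":
    def stub_model(prompt: str) -> str:
        memorised = {"Dear Ms Smith, further to": " your letter of claim dated 1 May"}
        return memorised.get(prompt, "I'm not sure how to continue that.")

    print(extraction_probe(stub_model, "Dear Ms Smith, further to",
                           "your letter of claim"))  # prints: True
```

Whatever the technical details, the feature that matters for privilege is that this is a determined, targeted act by someone who already holds partial information – quite different from material being readily readable by the public at large, a contrast which is central to the approach suggested below.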

 

A suggested approach

I would also suggest that, in the absence of a demonstration that a particular tool really does publish someone’s confidential information to the world, in a way which just about anyone can easily read in plain text, the best approach might be to regard the confidentiality requirement of privilege as maintained. The burden, in other words, should be on the party arguing for loss of confidentiality to show that the material has become readily available to the public at large, not just to a determined attacker targeting a particular party.

 

Otherwise, we risk a situation in which the courts give the protection of privilege in this area to the better resourced or more savvy, but not to people who just didn’t understand the risks. Would that really be justice, irrespective of whether the lack of sophistication is on the part of the layperson or their lawyer?

 

Another, more pragmatic, consideration is that unless a rather easy-to-apply principle is adopted, the courts can look forward to detailed submissions and expert evidence as to the precise differences between particular tools. Of course, sometimes that sort of legal uncertainty just has to be accepted, but this is not, I would suggest, such a case.

 

Given the tendency to conflate regulatory scope, misconduct and privilege issues, I want to emphasise that none of this seeks to justify the lawyer, or anyone, who uses AI or other tools inappropriately. For lawyers in particular, there may still be regulatory or civil liability consequences for problematic conduct here, but that’s different from the question of privilege. The fact that a lawyer does something foolish, here as in other contexts, should not strip their client of privilege. And neither should the fact, I would suggest, that a lay client doesn’t fully understand the terms and conditions of their chosen AI tool.

 

2. Legal advice from a lawyer

 

The following points assume sufficient confidentiality. A major difference between the two types of LPP is that LAP requires “legal advice by a qualified lawyer” whereas LP does not.

 

Assuming that LP is unavailable, several arguments might be made in favour of LAP where a party to a legal proceeding, or their lawyer, has interacted with an AI tool.

 

Two weak arguments

 

Let’s start off with a couple of arguments which seem weak to me.

  1. AI as “lawyer.” Under English law it does not seem seriously arguable at the moment that the notion of “qualified lawyer” should be widened to include technology tools; or, indeed, human advisers other than officially qualified lawyers. The UK Supreme Court decided in 2013, by a 5:2 majority, that the “qualified lawyer” requirement remained part of LAP. That was a case involving tax advice by accountants which, if given by a lawyer, would have qualified as “legal advice.” And, importantly, the dissent emphasises that the accountants were regulated professional advisers (see para 148): this was not a case of unregulated humans, let alone tools. It seems unlikely that the courts will re-open that decision any time soon, and also unlikely that the government will legislate to do so. Although the Law Commission has raised the question of legal personality for AI in a 2025 discussion paper, it does not propose that AI should in fact have legal personality, and does not address the issue of LAP.
  2. Cloud documents analogy. Lawyers these days commonly store emails and documents in a cloud document management system operated by a third-party supplier, relying on the latter’s contractual undertakings, and security and governance measures, to maintain confidentiality. An AI tool is no different from that in principle, assuming comparable confidentiality measures. However, this point does not in itself lead to privilege, as a US district judge recently noted in a February 2026 decision:
The Court is aware that some commentators have argued that whether Claude is an attorney is irrelevant because a user’s AI inputs, rather than being communications, are more akin to the use of other Internet based software, such as cloud-based word processing applications. But the use of such applications is not intrinsically privileged in any case, and the argument that Claude is like any other form of software only cuts against the invocation of privilege because all “[r]ecognized privileges” require, among other things, “a trusting human relationship,” such as, in the attorney-client context, a relationship “with a licensed professional who owes fiduciary duties and is subject to discipline.” 

 

Three better arguments

 

Some points which I think may more reasonably be argued are these.

  1. Preparation for obtaining legal advice. If a layperson uses an AI tool for the dominant purpose of preparing to obtain legal advice from a qualified lawyer, then it would seem arguable that LAP applies to such preparatory work on ordinary English law LAP principles, just as it would to notes written manually for that purpose. In the US decision mentioned above, the party claiming privilege contended that their interactions with Claude had been “for the express purpose of talking to counsel.” It is not clear to me what the evidential basis was for that submission (other than the party’s self-serving assertion), and the decision does not answer head-on whether it would have been sufficient if factually established. But it rejects the party’s privilege claim on the grounds that (i) the interaction had not been requested by their lawyer and (ii) an intention to “share” the communications with the lawyer was insufficient to found a claim to privilege. Had English law applied, these points would simply have been factual considerations, and the key question would have been about the dominant purpose of using the tool. It is at least conceivable that in some cases a party could argue that their interaction with the AI tool was part of their preparation for instructing a lawyer effectively, assuming that there is a factual basis for this.
  2. Digesting legal advice. If a layperson uses AI to understand the meaning and implications of legal advice received from a qualified lawyer, then it may also be argued that this falls sufficiently within traditional LAP notions, as is already regarded as being the case when summaries of legal advice are shared confidentially within an organisation (n2). Some line will have to be drawn between summarisation / comprehension of the advice and expansion upon it. In principle, the line here is no different from what it should be outside the AI context, but the risks may be different in practice as AI tools may “helpfully” offer to explore implications and related issues, and the ease and lack of extra cost in accepting such help will make it tempting to do so.
  3. The supervision principle. A traditional law firm operating model in England involves modest numbers of trainees, paralegals and other staff not qualified as lawyers, who execute legal work, in comparison to the number of qualified lawyers in the firm. It has long been accepted that the work of such non-qualified individuals falls within the protection of LAP if they are properly supervised by a qualified lawyer. But what counts as proper supervision? Some questions already arise in relation to organisations with very high numbers of paralegals and other non-qualified staff in proportion to the qualified lawyers supervising them. To what extent are process maps, playbooks, quality assurance sampling and other such techniques acceptable for the purpose of privilege, as opposed to the sort of artisanal supervision traditionally encountered in law firms? The regulatory implications of such operations are already being explored in the Mazur case (which, at the time of writing, has been argued in the Court of Appeal but not yet decided), though the key legal questions there are not quite the same as those under established privilege law. The use of AI tools, perhaps in combination with increasingly cross-functional human teams, raises further questions as to how to interpret “supervision” – indeed, supervision may not be the right term or concept. Broader business notions of quality assurance and risk management ought to be relevant here: even if the lawyerish comfort zone prefers analogy, the reality is that this area of law has already developed incrementally over decades in ways which make it markedly different from what a lawyer of 1976, 1926, 1826 or 1726 would recognise.

 

The real importance of these three arguments

On the face of it, these three arguments will only arise in rather narrow scenarios, in that they all still involve human qualified lawyers. But their real importance may lie not so much in the short term as in the longer term, as business models can be envisaged (and are to some extent already emerging) which benefit from one or more of these points by blending small amounts of human lawyer time with AI-generated content. This would offer the inherent value of the human lawyers’ input plus an ability to argue for privilege. It is not possible to say at the moment where any lines would be drawn, but they would likely be influenced (judges being human) by perceptions of how beneficial such business models are, or are likely to be.

 

A fourth argument: procedural justice and equality of arms

In addition to these three points, I would suggest a fourth argument which I have not seen explored elsewhere but which would, if accepted, apply even if there is no qualified lawyer involved. This would involve drawing on the ideals of rule of law and due process which underlie privilege, and focusing on the point about access to justice in particular, including the ability of people to defend themselves fairly in civil, criminal or regulatory matters.

My suggestion here is that the courts may consider that the principle of “equality of arms”, both under the ECHR and in domestic procedural contexts such as the Civil Procedure Rules, militates against ordering disclosure of, or allowing reliance on, someone’s “conversations” with an AI tool about their legal issues. For example, in the English civil courts the days are long gone in which wide-ranging disclosure was required of all relevant documents, including those which might merely lead to a “train of enquiry.” The modern approach to disclosure is influenced by notions such as proportionality and equality of arms. If faced with a case between a sophisticated party which could afford, and did obtain, legal advice before the dispute came into contemplation (i.e. assuming that is an LAP not an LP issue) and an unsophisticated party which could not afford legal advice and used an AI tool instead, is it just to order the latter to disclose what they said, and were told, about their legal position, warts and all, when the former has an absolute right not to do so?

 

Notes:

  1. LPP is also used to include “joint” and “common interest” privilege. Without getting too technical about these, they are essentially secondary concepts extending LAP or LP protections to multi-party situations. Other uses of the word “privilege” such as “without prejudice privilege” (the right to refuse to disclose the content of settlement proposals or discussions) or the privilege against self-incrimination raise separate issues, not considered in this note.
  2. There can be some issues about the scope of the “client” within organisations, a topic which has caused some difficulty in English law in recent years, but that point is not unique to AI tools and I don’t propose to discuss it further here.