Dispute Resolution

Dominic Holden
April 2026

The Civil Justice Council (CJC), which advises the Lord Chancellor, the Judiciary and the CPR Committee on civil matters, has produced an interim report and consultation on the “Use of AI for Preparing Court Documents”. (To read the full report, please follow this link: https://www.judiciary.uk/wp-content/uploads/2026/02/Interim-Report-and-Consultation-Use-of-AI-for-Preparing-Court-Documents-2.pdf )

The purpose of this consultation was to consider whether rules are needed to govern the use of AI by legal representatives for the preparation of court documents. The CJC has sought views on the issues, including the proposed way forward, and we present below a summary of our response and conclusions.

Executive summary

The CJC’s consultation on the Use of AI for Preparing Court Documents marks an important moment in the evolution of civil justice. AI has the potential to improve efficiency, reduce cost, and enhance access to justice, but its use in litigation must be carefully governed to ensure that evidence remains reliable and professional responsibility is preserved.

Our response supports a targeted, proportionate framework: one that enables the legitimate and beneficial use of AI, while reinforcing human control, decision making and accountability, and safeguarding the integrity of the court process.

Our overarching position

We support the responsible use of AI in civil litigation, particularly where it delivers efficiency and cost savings that benefit clients, improves access to justice and assists the judicial process. However, these advantages must be balanced against well‑documented risks, including hallucinations, opacity in decision‑making, and potential erosion of evidential authenticity.

Our core conclusions are:

  • Human responsibility must remain central. Where documents are submitted to court, professional accountability should never be displaced by technology.
  • Generative AI should not be used to create or re‑shape trial witness evidence. The provenance and authenticity of factual evidence must be preserved.
  • Transparency is essential where AI materially contributes to expert opinion, to ensure fairness and effective cross‑examination.
  • Administrative and benign uses of AI should not be over‑regulated, as this would add cost, complexity, and satellite disputes, without improving judicial oversight.

This approach is consistent with the objectives of PD57AC, existing duties under PD32, and the judiciary’s repeated emphasis on verification, accuracy, and human oversight.

Statements of case and advocacy documents

We agree that no new procedural rules are required for statements of case, skeleton arguments, or other advocacy documents merely because AI has been used in their preparation.

Legal representatives already owe stringent duties to the court in respect of accuracy, propriety, and candour. Those duties apply regardless of the drafting tools used. Requiring routine AI‑use disclosures would be disproportionate and would risk creating delay and unnecessary disputes (particularly as AI functionality becomes embedded in standard drafting software).

Our position is that disclosure of AI use should only be considered where AI has been used to generate substantive evidential content, not where it has been used for routine drafting, research assistance, or administrative purposes.

Disclosure and document review

Disclosure remains one of the most expensive stages of litigation, particularly in data‑heavy disputes. AI‑assisted review and Technology Assisted Review (TAR) are now well‑established tools for managing that cost.

We do not support a general requirement to declare AI use in disclosure lists or statements. Disputes in this area typically concern the scope and quality of the search, not the mere use of technology. These issues are already addressed through case management, the Disclosure Review Document and the Disclosure Certificate.

However, we do support greater transparency around how TAR is deployed, including a requirement to declare the recall threshold used to conclude first‑tier review, so that the opposing party and the court can assess whether the technology has been applied appropriately.

Used with proper oversight, AI has an important role in making disclosure more efficient and more proportionate, and its use should be encouraged rather than discouraged.

Witness statements: preserving authenticity

We strongly support the Civil Justice Council’s differentiated approach to witness evidence.

For non‑trial witness statements, existing professional obligations are sufficient, and no additional AI‑specific declarations are necessary.

For trial witness statements governed by PD57AC, we support a clear rule requiring confirmation that AI has not been used to generate, alter, embellish, or re‑phrase the witness’s evidence. This is essential to preserve the witness’s own words and to ensure the court can rely on the evidence before it.

This requirement should sit alongside the statement of truth and apply equally to legally represented parties and litigants in person, with appropriate steps taken to ensure unrepresented parties are made aware of the obligation.

Translation

We support the use of AI by certified human translators, provided the translator takes responsibility for accuracy by signing a statement of verification. We also support the use of identified machine‑translation tools, so long as the process is transparent and other parties are able to check translations themselves if required. The key safeguard is human accountability, not blanket prohibition.

Expert evidence and AI transparency

Expert evidence raises distinct issues. Where AI is used to inform or generate expert opinion (beyond administrative tasks such as transcription), transparent disclosure is essential.

We support amending expert statements of truth to require experts to identify and explain any substantive use of AI in their analysis. This protects the integrity of the expert process, ensures a level playing field, and allows AI‑related assumptions or limitations to be explored in cross‑examination.

At the same time, legitimate AI‑assisted analysis should not be stifled where it enhances accuracy or efficiency.

Defining AI and avoiding over‑regulation

We agree that the term “artificial intelligence” is sufficiently clear for procedural rules, but it can be refined to refer to generative AI or AI capable of producing substantive content, expressly excluding administrative uses such as spelling, grammar, formatting, transcription, and accessibility.

We support a clear distinction between:

  • administrative tools; and
  • AI that generates substantive factual or opinion content, with stricter controls applied where AI affects evidence rather than legal analysis.

We do not support routine requirements to name specific AI tools used. This risks shifting focus away from professional responsibility, creates practical difficulties where parties do not have access to the same tools, and could force unnecessary disclosure of prompts or workflows. Where disclosure is required, the focus should be on how AI was used and what role it played, not on product branding.

Court permissions and case management

We do not consider it necessary to introduce a new rule requiring court permission for AI use. The court already has ample powers under CPR 3.1 and CPR 32.1 to control evidence, manage cases, and address any concerns about methodology or fairness as they arise.

Conclusion

AI has the potential to make civil litigation faster, fairer, and more accessible. But its use must be principled, proportionate, and anchored in human responsibility.

Our response supports:

  • clear prohibitions where AI risks corrupting evidence;
  • targeted transparency where AI materially shapes opinion evidence; and
  • regulatory restraint where AI is used administratively or as a drafting aid.

This balanced approach protects the integrity of the justice system while allowing innovation to deliver real benefits for courts, parties, and the public.

For advice on using AI in litigation safely and effectively, please contact Dominic Holden.