Is Your Law Firm's AI Contract Review Tool High-Risk? How to Check
AI contract review tools may be classified as high-risk under the EU AI Act. Learn how the SRA expects law firms to manage AI, how to classify your tools, and what documentation you need.
AI Contract Review Has Transformed Legal Work
AI-powered contract review has become one of the most widely adopted legal technology tools in UK law firms. From high-street practices to regional firms across the East Midlands, solicitors are using AI to analyse contracts faster, flag problematic clauses, and reduce the manual effort involved in due diligence exercises. The efficiency gains are genuine and significant.
But with the EU AI Act's obligations for high-risk AI systems applying from 2 August 2026, law firms need to ask a critical question: does your AI contract review tool qualify as a high-risk AI system? And if it does, what are you required to do about it?
AI Contract Review Under the EU AI Act
The EU AI Act does not explicitly list "contract review" as a high-risk application. However, the Act's high-risk classification is not limited to a fixed list. It applies based on the function and impact of the AI system, not just its label.
The Act identifies AI systems used in the "administration of justice and democratic processes" as high-risk (Annex III, Area 8). More broadly, any AI system that is a "safety component" of a product or service covered by EU harmonised legislation, or that falls within one of the specific high-risk categories in Annex III, must comply with the Act's requirements.
For contract review specifically, the classification depends on what the AI system does and how its output is used. There are two broad scenarios.
Scenario 1: The AI assists but does not determine. If the AI tool flags clauses for a solicitor's review and the solicitor exercises independent professional judgement over every flagged item, the tool is functioning as an assistive technology. In this scenario, the AI is less likely to be classified as high-risk under the Act, though it still requires transparency and appropriate professional oversight.
Scenario 2: The AI materially influences legal outcomes. If the AI tool's analysis is relied upon to make decisions about contract acceptability, risk allocation, or legal exposure, and the human review is minimal or pro forma, the tool is effectively making or substantially contributing to decisions that affect people's legal rights. This is far more likely to constitute a high-risk AI system, particularly where the contracts involve EU parties or are governed by EU law.
The distinction hinges on the degree of human oversight and the materiality of the AI's contribution to the final legal output. Many firms believe they are in Scenario 1 when they are actually in Scenario 2.
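One way to test which scenario your firm is actually in is to look at the evidence: how often do fee earners depart from, amend, or spend meaningful time on the AI's recommendations? Below is a minimal sketch of that kind of self-audit, assuming a firm keeps per-clause review records; the field names and thresholds are illustrative assumptions, not regulatory tests.

```python
from dataclasses import dataclass

@dataclass
class ClauseReview:
    """One AI-flagged clause and what the reviewing solicitor did with it.

    Field names are illustrative assumptions, not drawn from any
    specific contract review tool."""
    clause_id: str
    ai_recommendation: str   # e.g. "accept", "amend", "escalate"
    reviewer_action: str     # what the solicitor actually decided
    review_seconds: int      # time spent on the clause

def oversight_signals(reviews: list[ClauseReview]) -> dict:
    """Rough indicators of whether human review is substantive.

    Near-total agreement with the AI combined with very short review
    times points towards pro forma review (Scenario 2) rather than
    independent judgement. The thresholds below are placeholders."""
    if not reviews:
        raise ValueError("no review records to assess")
    total = len(reviews)
    agreement_rate = sum(r.ai_recommendation == r.reviewer_action for r in reviews) / total
    quick_review_rate = sum(r.review_seconds < 30 for r in reviews) / total
    return {
        "agreement_rate": agreement_rate,
        "sub_30_second_review_rate": quick_review_rate,
        "looks_pro_forma": agreement_rate > 0.95 and quick_review_rate > 0.8,
    }
```

If the numbers look like Scenario 2, the honest classification is Scenario 2, whatever the firm's written procedures say.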
What the SRA Expects
The Solicitors Regulation Authority has issued clear guidance on the use of AI in legal practice. While the SRA does not regulate AI systems directly, it regulates solicitors and the firms that employ them, and its expectations are directly relevant to how law firms deploy AI tools.
The SRA's core position is that a solicitor's professional duties are non-delegable. Whether advice is generated by a human, an AI, or a combination of both, the solicitor bears full responsibility for its accuracy and appropriateness. This means that deploying an AI contract review tool does not reduce a firm's professional obligations. If anything, it increases the need for robust quality assurance.
The SRA expects firms to:
- Understand the tools they use. Firms must have sufficient understanding of how their AI tools work to ensure they are being used appropriately. This does not require solicitors to understand the technical details of neural networks, but it does require an understanding of the tool's capabilities, limitations, and known failure modes.
- Maintain competence. Solicitors using AI tools must remain competent in the underlying legal area. AI cannot substitute for legal knowledge, and firms must ensure that fee earners can identify when an AI tool has produced an incorrect or incomplete analysis.
- Supervise outputs. All AI-generated work product must be reviewed by a qualified person before it reaches the client. The review must be meaningful, not a box-ticking exercise.
- Protect client confidentiality. Firms must ensure that client data processed by AI tools is handled in compliance with data protection requirements and professional confidentiality obligations. This is particularly important where cloud-based AI tools process data outside the UK.
How to Classify Your Contract Review Tool
To determine whether your firm's AI contract review tool is high-risk under the EU AI Act, work through the following assessment. A short code sketch after Step 5 shows how the answers might be recorded and combined.
Step 1: Identify the AI system's function. What exactly does the tool do? Does it flag clauses, summarise terms, compare against precedent, assess risk, or recommend changes? Document each distinct function.
Step 2: Assess the output's influence. For each function, determine how the AI output is used in practice. Is it genuinely reviewed and assessed by a solicitor before any action is taken, or is the AI's output effectively adopted as the firm's position? Be honest in this assessment. If fee earners routinely accept the AI's analysis without substantive review, the system is functionally making decisions.
Step 3: Determine EU nexus. Does the contract review work involve EU parties, EU-governed agreements, or outcomes that affect persons in the EU? If so, the EU AI Act's jurisdiction is likely engaged.
Step 4: Check against Annex III. Review the specific high-risk categories in the Act's Annex III. Pay particular attention to categories relating to access to essential services, administration of justice, and employment (if the AI reviews employment contracts). If your tool's function falls within these categories, it is high-risk.
Step 5: Apply the "significant impact" test. Even if the tool does not fall neatly into an Annex III category, consider whether its output has a significant impact on natural persons. The European Commission has the power to add new high-risk categories, and guidance is expected to evolve. A precautionary approach is advisable.
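The five steps lend themselves to a structured decision record. Here is a minimal sketch of how the answers might be captured and combined; the field names and output wording are illustrative assumptions, and the result is a working classification, not legal advice.

```python
from dataclasses import dataclass

@dataclass
class ToolAssessment:
    """Answers gathered while working through Steps 1 to 5.

    Illustrative structure only; this is not an official form from
    the EU AI Act or the European Commission."""
    functions: list[str]                  # Step 1: what the tool does
    output_substantively_reviewed: bool   # Step 2: genuine solicitor review?
    eu_nexus: bool                        # Step 3: EU parties, EU law, EU persons
    annex_iii_matches: list[str]          # Step 4: matching Annex III categories
    significant_impact_on_persons: bool   # Step 5: precautionary test

def classify(a: ToolAssessment) -> str:
    """Apply the steps in order, taking the precautionary view at Step 5."""
    if not a.eu_nexus:
        return "outside EU AI Act scope (keep under review)"
    if a.annex_iii_matches:
        return "high-risk (Annex III match: " + ", ".join(a.annex_iii_matches) + ")"
    if not a.output_substantively_reviewed or a.significant_impact_on_persons:
        return "treat as potentially high-risk (precautionary approach)"
    return "likely limited-risk; transparency and oversight duties still apply"

# Example: a tool whose output is rarely second-guessed by fee earners
assessment = ToolAssessment(
    functions=["flag clauses", "assess risk", "recommend changes"],
    output_substantively_reviewed=False,
    eu_nexus=True,
    annex_iii_matches=[],
    significant_impact_on_persons=True,
)
print(classify(assessment))  # treat as potentially high-risk (precautionary approach)
```

Whatever form the record takes, the point is that each step produces a documented answer you can show a regulator, not a verbal assurance.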
What to Document
Whether or not your contract review tool is ultimately classified as high-risk, the documentation process itself is valuable. For firms whose tools are within scope, the EU AI Act requires the following; a sketch after the list shows how the logging items might be structured in practice.
A risk management record covering the AI system's intended purpose, foreseeable risks, risk mitigation measures, and residual risks. This must be maintained throughout the system's lifecycle, not just at deployment.
Data governance documentation covering the data the system processes, data quality measures, and data protection safeguards. For contract review tools, this includes client data handling protocols.
Technical documentation from the AI provider describing the system's architecture, training methodology, performance metrics, and known limitations. If your vendor cannot supply this, your firm should reconsider using the tool for high-risk applications.
Human oversight procedures documenting who reviews the AI's output, what qualifications they hold, what authority they have to override the AI, and how overrides are recorded.
Usage logs recording when the system was used, by whom, for what purpose, and what output it generated. These logs must be retained for a period appropriate to the system's risk level.
Incident records documenting any instances where the AI system produced inaccurate, biased, or otherwise problematic outputs, and the corrective actions taken.
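To make the last two items concrete, here is a minimal sketch of what a usage-log entry and an incident record might capture. The schema and field names are our illustrative assumptions; the Act prescribes what must be recorded and retained, not a particular data structure.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class UsageLogEntry:
    """One use of the contract review tool. Field names are illustrative."""
    timestamp: datetime      # when the system was used
    user: str                # who used it
    matter_ref: str          # what it was used for
    output_summary: str      # what output it generated
    reviewed_by: str         # qualified reviewer, per the oversight procedure

@dataclass
class IncidentRecord:
    """A problematic output and the corrective action taken."""
    timestamp: datetime
    description: str         # inaccurate, biased, or otherwise problematic output
    affected_matter: str
    corrective_action: str
    closed: bool = False

# Example entry, as it might be written by a case management integration
entry = UsageLogEntry(
    timestamp=datetime.now(timezone.utc),
    user="fee-earner-042",
    matter_ref="M-2026-0137",
    output_summary="Flagged 3 indemnity clauses; suggested 2 amendments",
    reviewed_by="supervising-solicitor-007",
)
```

Whether these records live in a database, a case management system, or a spreadsheet matters less than that every documentation item above maps to concrete, retrievable fields.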
Getting Started
The classification and documentation process may seem daunting, but it does not need to be. At Twisthand Intelligence, we have developed a structured approach to AI compliance auditing specifically for legal practices. We help firms identify which tools are within scope, classify them correctly, and build the documentation framework required by the EU AI Act.
For firms that need to implement new AI tools or modify existing ones to meet compliance requirements, our compliant implementation service ensures that AI adoption and regulatory compliance go hand in hand. Our legal sector page provides additional detail on the specific challenges facing law firms.
The firms that address this now, whilst there is still time before the August 2026 deadline, will be in the strongest position. Those that wait risk discovering their most productive tools are non-compliant just as the enforcement regime takes effect.
Assess Your Legal AI Tools
Our free compliance assessment helps law firms identify which AI tools need attention before the EU AI Act deadline.
Start Your Free Assessment →