
AI Compliance for Financial Advisers: What You Need to Know Before August 2026

Financial advisers face dual AI compliance pressures from the FCA and the EU AI Act. Understand the high-risk classifications, credit scoring rules, and practical steps to prepare before August 2026.

A Dual Compliance Landscape

Financial advisers in the UK are operating in an increasingly complex regulatory environment when it comes to artificial intelligence. On one side, the Financial Conduct Authority has made clear that its existing principles (treating customers fairly, ensuring suitability of advice, and maintaining adequate systems and controls) apply fully to AI-driven processes. On the other, the EU AI Act introduces a parallel set of obligations for any AI system whose output is used in the EU or affects people located there.

For many financial advisory firms, particularly those in the East Midlands serving clients with cross-border interests, this creates a dual compliance requirement that must be addressed before the EU AI Act's high-risk provisions take full effect in August 2026.

What the FCA Expects

The FCA has not introduced AI-specific regulations, but it has been unambiguous in its messaging: firms cannot use AI as a reason to lower their compliance standards. In its 2025 feedback statement on AI in financial services, the regulator emphasised several key expectations.

Explainability. Firms must be able to explain to clients and to the FCA how AI-driven recommendations are reached. A "black box" model that produces investment recommendations without any interpretable rationale does not meet the FCA's suitability requirements. If a client asks why a particular fund was recommended, "the AI suggested it" is not an acceptable answer.
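
To make that concrete, here is a minimal sketch of one way to generate an interpretable rationale, assuming a scikit-learn decision tree. The features, training data, and fund labels are hypothetical illustrations, not real products or a recommended model.

```python
# A minimal sketch of producing a client-readable rationale from an
# interpretable model. Assumes scikit-learn; features and fund labels
# are hypothetical.
from sklearn.tree import DecisionTreeClassifier, export_text

FEATURES = ["risk_tolerance", "horizon_years", "liquidity_need"]

# Toy training data standing in for past suitability decisions.
X = [[2, 5, 1], [8, 20, 0], [5, 10, 0], [1, 3, 1], [9, 25, 0], [4, 8, 1]]
y = ["cautious_fund", "equity_fund", "balanced_fund",
     "cautious_fund", "equity_fund", "balanced_fund"]

model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The full rule set is inspectable and can be filed with the suitability record.
print(export_text(model, feature_names=FEATURES))

# Per-client rationale: which thresholds drove this recommendation.
client = [[3, 6, 1]]
tree = model.tree_
for node in model.decision_path(client).indices:
    if tree.children_left[node] != tree.children_right[node]:  # skip leaves
        feat, thresh = FEATURES[tree.feature[node]], tree.threshold[node]
        went_left = client[0][tree.feature[node]] <= thresh
        print(f"{feat} {'<=' if went_left else '>'} {thresh:.1f}")
print("recommendation:", model.predict(client)[0])
```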

Accountability. Under the Senior Managers and Certification Regime (SM&CR), a named individual within the firm must take responsibility for AI-driven outcomes. This person must understand, at a sufficient level, how the AI systems the firm uses arrive at their outputs. The technology does not shift accountability away from the firm or its leadership.

Fairness and bias. AI systems used in client-facing decisions must not produce systematically unfair outcomes. The FCA has flagged particular concern about AI systems that could discriminate against protected groups in credit decisions, insurance pricing, or access to financial products. Firms must actively test for and monitor bias in their AI systems.
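
As a starting point, a bias check can be as simple as comparing favourable-outcome rates across groups on a recurring schedule. The sketch below does exactly that in plain Python; the groups, data, and 10% tolerance are illustrative assumptions, and a real monitoring programme would use the firm's chosen fairness metrics and protected characteristics.

```python
# A minimal sketch of a recurring bias check: compare favourable-outcome
# rates across groups and flag disparities beyond a tolerance.
from collections import defaultdict

def outcome_rates(decisions):
    """decisions: iterable of (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def flag_disparity(decisions, tolerance=0.10):
    rates = outcome_rates(decisions)
    spread = max(rates.values()) - min(rates.values())
    return rates, spread, spread > tolerance

# Hypothetical sample of (group, approved) outcomes.
sample = [("group_a", True), ("group_a", True), ("group_a", False),
          ("group_b", True), ("group_b", False), ("group_b", False)]
rates, spread, flagged = flag_disparity(sample)
print(rates, f"spread={spread:.2f}", "REVIEW" if flagged else "ok")
```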

Data protection. AI systems in financial services process vast quantities of personal data. The FCA expects firms to comply fully with UK GDPR, including the rules on solely automated decision-making under Article 22 and the associated right to meaningful information about the logic involved.

High-Risk AI Uses in Financial Services

The EU AI Act's Annex III explicitly identifies several AI applications common in financial advisory firms as high-risk. Understanding these classifications is essential for determining your compliance obligations.

Credit scoring and creditworthiness assessment. Any AI system used to evaluate a natural person's creditworthiness or credit score is classified as high-risk. This includes tools used by financial advisers to assess whether clients qualify for certain products, even where the adviser is not the direct lender. If your AI-powered platform generates a risk score that influences product recommendations, this classification likely applies.

Insurance pricing and risk assessment. AI systems used to set insurance premiums or assess insurance risk for natural persons fall within the high-risk category. Advisers using comparison tools or platforms with embedded AI pricing engines should assess whether these tools are within scope.

Fraud detection with adverse decisions. AI systems used to detect fraud that can result in adverse actions against individuals, such as freezing accounts, rejecting transactions, or flagging clients, are high-risk. While fraud prevention is critical, the AI systems that drive these decisions must meet the Act's requirements for accuracy, transparency, and human oversight.

Investment risk profiling. Where AI tools are used to profile a client's risk tolerance and automatically generate investment recommendations, these may constitute high-risk systems, particularly where the output materially determines the advice given. The classification depends on the degree to which the AI output influences the final recommendation without meaningful human intervention.

Credit Scoring and Fraud Detection: The Details

Credit scoring deserves particular attention because it is one of the most common AI applications in financial services and carries some of the most specific requirements under the EU AI Act.

High-risk AI systems used for credit scoring must implement a documented risk management process covering the system's entire lifecycle. The training data used by the system must be relevant, representative, and as free from errors as possible. The system must produce outputs that are interpretable by the human overseeing the decision, and there must be a mechanism for the affected individual to challenge the decision.
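
One way to satisfy the interpretability and challenge requirements in practice is to attach reason codes and a challenge route to every automated score. The following is a sketch of one possible shape, not a prescribed format; all field names are hypothetical.

```python
# A minimal sketch of an interpretable credit-scoring output: every
# automated score carries human-readable reason codes and a route for
# the affected individual to challenge it.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class CreditDecision:
    client_id: str
    score: int
    outcome: str                 # e.g. "approve", "refer", "decline"
    reason_codes: list[str]      # interpretable drivers of the score
    model_version: str
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())
    challenge: str | None = None  # set when the client contests the decision

    def raise_challenge(self, grounds: str) -> None:
        # Routes the decision back to a human reviewer with the client's grounds.
        self.challenge = grounds

d = CreditDecision("C-1042", 412, "refer",
                   ["high revolving utilisation", "short credit history"],
                   model_version="scorecard-2.3")
d.raise_challenge("Utilisation figure reflects a closed account.")
print(d)
```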

For fraud detection, the challenge is balancing the speed required for effective fraud prevention with the human oversight requirements of the Act. The regulation recognises that real-time decisions may be necessary but still requires that humans can review and override the AI's decisions after the fact, and that the system's logic is documented and auditable.
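
In code, that balance often takes the form of an append-only audit trail: the model acts in real time, but every decision is recorded with the inputs that drove it and can be overridden by a named reviewer after the fact. A minimal sketch, with hypothetical fields and an in-memory list standing in for a durable store:

```python
# Real-time fraud decisions logged with their inputs and score, plus a
# human override recorded against the original entry.
import json, time

AUDIT_LOG = []  # in practice an append-only store, not an in-memory list

def record_decision(txn_id, action, score, features):
    entry = {"txn_id": txn_id, "action": action, "score": score,
             "features": features, "ts": time.time(), "override": None}
    AUDIT_LOG.append(entry)
    return entry

def human_override(txn_id, reviewer, new_action, note):
    for entry in AUDIT_LOG:
        if entry["txn_id"] == txn_id:
            entry["override"] = {"reviewer": reviewer, "action": new_action,
                                 "note": note, "ts": time.time()}
            return entry
    raise KeyError(txn_id)

record_decision("T-881", "block", 0.93, {"amount": 4800, "new_payee": True})
human_override("T-881", "analyst-07", "release",
               "Client confirmed payment by phone.")
print(json.dumps(AUDIT_LOG, indent=2))
```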

Many financial advisers rely on third-party platforms for these capabilities. It is important to understand that while the provider of the AI system bears the primary compliance obligations, the deployer (your firm) also has responsibilities. These include using the system in accordance with the provider's instructions, monitoring the system's operation, and reporting serious incidents.

Practical Steps to Prepare

With the August 2026 deadline approaching, financial advisory firms should take the following steps.

Audit your AI footprint. Identify every AI system in your technology stack. This includes obvious tools like robo-advisers and less obvious ones like the AI features embedded in your back-office platforms, CRM systems, and compliance monitoring tools. Our AI compliance audit service is designed specifically for this purpose.
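
A simple structured register is usually enough to start. The sketch below shows one possible shape for an inventory record; the vendors and systems listed are hypothetical examples, not a recommended stack.

```python
# A minimal sketch of an AI system inventory: one structured record per
# tool, including the easy-to-miss embedded features.
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    vendor: str
    function: str            # what the system actually does
    client_facing: bool
    decision_influence: str  # "none", "advisory", "determinative"

inventory = [
    AISystem("Robo-adviser platform", "VendorX",
             "risk profiling and portfolio recommendation", True, "determinative"),
    AISystem("CRM lead scoring", "VendorY",
             "ranks prospects for follow-up", False, "advisory"),
    AISystem("Compliance monitor", "VendorZ",
             "flags communications for review", False, "advisory"),
]

for s in inventory:
    print(f"{s.name}: {s.function} "
          f"(client-facing={s.client_facing}, influence={s.decision_influence})")
```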

Classify each system. Determine whether each AI tool falls into the high-risk category based on its function and the nature of the decisions it influences. Pay particular attention to any system that affects a client's access to financial products or the terms on which products are offered.
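
A rough keyword triage over your inventory can surface the systems that need proper legal classification first. The sketch below is a screening aid only, not a determination of legal status; the trigger phrases are simplified from the categories discussed above.

```python
# A rough triage sketch, not legal advice: flag inventoried systems whose
# function description matches a simplified Annex III trigger.
HIGH_RISK_TRIGGERS = {
    "creditworthiness": "Annex III: credit scoring of natural persons",
    "credit score": "Annex III: credit scoring of natural persons",
    "insurance pricing": "Annex III: insurance risk assessment and pricing",
    "fraud": "potentially high-risk where adverse action against individuals",
    "risk profiling": "potentially high-risk where output drives advice",
}

def triage(function_description: str) -> list[str]:
    desc = function_description.lower()
    return [label for kw, label in HIGH_RISK_TRIGGERS.items() if kw in desc]

print(triage("AI credit score used in product eligibility checks"))
print(triage("Chatbot answering office-hours questions"))  # -> [] (likely out of scope)
```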

Assess your EU exposure. Review your client base to determine the extent of your EU-connected business. Even a small number of EU-connected clients can bring your AI systems within the Act's scope. Our financial services compliance page details the specific scenarios that trigger EU AI Act obligations.

Request vendor documentation. Contact the providers of your AI tools and request their compliance documentation. Under the Act, providers of high-risk AI systems must supply deployers with sufficient information to understand the system's capabilities and limitations. If a vendor cannot provide this documentation, that is a significant red flag.

Establish human oversight protocols. Review your current processes to ensure that AI-driven recommendations are subject to meaningful human review. This means more than a cursory sign-off. The human overseeing the system must have the authority and competence to override the AI's output and must understand the basis on which the output was generated.
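
One way to make that review meaningful is to gate release of the recommendation on a named reviewer recording their decision against the model's stated basis. A minimal sketch, with hypothetical roles and fields:

```python
# A human-in-the-loop gate: an AI recommendation is not released until a
# named, authorised reviewer records that they considered the basis and
# either confirms or overrides the output.
class ReviewGate:
    def __init__(self, recommendation, rationale):
        self.recommendation = recommendation
        self.rationale = rationale  # interpretable basis from the model
        self.released = False
        self.review = None

    def sign_off(self, reviewer, confirms, final=None, note=""):
        self.review = {"reviewer": reviewer, "confirms": confirms, "note": note}
        if not confirms:
            self.recommendation = final  # reviewer overrides the AI output
        self.released = True

gate = ReviewGate("balanced_fund", ["horizon 8y", "medium risk tolerance"])
gate.sign_off("adviser-jsmith", confirms=False, final="cautious_fund",
              note="Client's liquidity need changed since fact-find.")
print(gate.recommendation, gate.review)
```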

Build your documentation. Begin compiling the technical documentation, usage logs, and incident reporting procedures required by the Act. Starting this process now, rather than six months from now, gives your firm time to address gaps without rushing.
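
Usage logs are the easiest place to start, because they can be added without changing how the AI tools themselves work. A minimal sketch of one JSON line per AI invocation; the field names are illustrative assumptions about what a later reconstruction would need:

```python
# Structured usage logging: one JSON line per AI invocation, capturing
# enough context to reconstruct the decision later.
import json, logging, time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("ai_usage")

def log_ai_use(system, model_version, purpose, input_summary,
               output_summary, operator):
    log.info(json.dumps({
        "ts": time.time(), "system": system, "model_version": model_version,
        "purpose": purpose, "input": input_summary,
        "output": output_summary, "operator": operator,
    }))

log_ai_use("robo-adviser", "2.3.1", "suitability screening",
           {"client_id": "C-1042", "risk_band": 3},
           {"recommendation": "balanced_fund"}, "adviser-jsmith")
```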

The Cost of Inaction

The penalties under the EU AI Act are substantial: up to €35 million or 7% of global annual turnover, whichever is higher, for the most serious violations. But the regulatory risk extends beyond the EU framework. The FCA has its own enforcement powers, and a firm that deploys AI carelessly risks regulatory action, client complaints, and professional indemnity claims regardless of the EU Act.

More practically, financial advisory firms that fail to prepare will find themselves unable to serve EU-connected clients or forced to withdraw AI tools that have become integral to their operations, both of which carry significant commercial costs.

Twisthand Intelligence works with financial advisory firms across the East Midlands to build compliant AI implementations that satisfy both FCA expectations and EU AI Act requirements. We help firms continue benefiting from AI-driven efficiency whilst managing the regulatory risks that come with it.

Understand Your AI Compliance Position

Take our free assessment to identify which of your AI systems are high-risk and what you need to do before August 2026.

Start Your Free Assessment →