FINRA 2026 AI Governance Requirements: What Advisors Need to Know
Spencer Gauta
June 26, 2025

FINRA's 2026 Oversight Report includes a new section that should get every financial advisor's attention: AI governance and oversight.
For the first time, FINRA is explicitly calling out artificial intelligence as a compliance risk area and making it clear that firms using AI tools need formal policies, vendor due diligence, and data handling protocols.
If you're using AI in your practice (or planning to), here's what FINRA expects and what you need to do to stay compliant.
What FINRA's 2026 Report Actually Says
FINRA's 2026 Oversight Report doesn't ban AI. It doesn't even suggest advisors avoid it. What it does is put firms on notice: if you're using AI, you need to manage the risks.
The report highlights three specific concerns:
1. Vendor Oversight
"Member firms are increasingly relying on third-party AI-powered tools for client-facing and operational functions. FINRA examiners will assess whether firms have conducted adequate due diligence on AI vendors, including evaluating data security practices, model transparency, and potential conflicts of interest."
What this means: You can't just sign up for an AI tool and start using it with clients. You need to document:
- What data the tool accesses
- Where that data is stored and for how long
- What security certifications the vendor holds
- Whether the vendor uses client data to train models
- How the vendor handles data deletion requests
If you can't answer these questions during an audit, you have a problem.
2. Data Retention and Privacy
"AI tools that process or store customer information must align with existing data privacy regulations, including Regulation S-P and state-level privacy laws. Firms must have policies governing how AI-generated data is retained, accessed, and ultimately disposed of."
What this means: If your AI tool stores client data, that storage must comply with your firm's existing data retention policy. If your policy says client records are deleted after 7 years, but your AI vendor keeps transcripts indefinitely, you're out of compliance.
FINRA also notes that advisors must be able to demonstrate data destruction, not just assume it's happening. If your vendor's terms of service say "data may be retained for up to 3 years," you need to either accept that risk or find a different tool.
3. Shadow AI
"FINRA has observed increased use of consumer-facing AI tools (e.g., ChatGPT, Gemini) by registered representatives for client communications and data analysis. Firms must implement policies that address the use of unapproved AI tools and ensure representatives understand the risks of exposing client information to third-party platforms."
What this means: Advisors are using ChatGPT. They're pasting client transcripts into Claude. They're asking Gemini to draft emails. FINRA knows this, and they're putting firms on notice to control it.
If your compliance manual doesn't address AI, you're behind. And if your reps are using consumer AI tools without approval, FINRA considers that a supervision failure.
What FINRA Will Look for in Exams
Based on the 2026 report and recent sweep exam priorities, here's what you can expect FINRA examiners to ask:
| Exam Question | What They're Really Asking |
|---|---|
| "What AI tools does your firm use?" | Do you even know what your reps are using? |
| "Where is client data from these tools stored?" | Can you demonstrate data control? |
| "What due diligence did you conduct before adopting this tool?" | Do you have documentation? |
| "How does your AI tool comply with Reg S-P?" | Do you understand the privacy implications? |
| "What happens to the data if the vendor goes out of business?" | Have you thought about third-party risk? |
| "Do you have an AI acceptable use policy?" | Have you communicated expectations to reps? |
If you can't answer these questions with specific documentation, FINRA will cite it as a deficiency.
How to Build FINRA-Ready AI Governance
FINRA doesn't require perfection. They require reasonable policies and documented oversight. Here's a practical framework:
Step 1: Create an AI Acceptable Use Policy
Your policy should address:
- Approved tools: List which AI tools are approved for use (if any).
- Prohibited tools: Explicitly ban consumer AI tools (ChatGPT, Claude, Gemini) for client work unless approved.
- Data handling: Specify what client data can and cannot be input into AI tools.
- Supervision: Require reps to disclose AI tool usage and get compliance approval before adoption.
- Training: Provide annual training on AI risks and acceptable use.
Template language:
"Representatives may not input client names, account numbers, financial data, or personally identifiable information into unapproved AI tools. Use of consumer AI platforms (e.g., ChatGPT, Google Gemini) for client-related work is prohibited unless explicitly approved by the Chief Compliance Officer."
Step 2: Conduct Vendor Due Diligence
Before adopting any AI tool, document your evaluation. Key questions to answer:
- What data does the tool access? (Recordings, transcripts, emails, calendar events?)
- Where is data stored? (Cloud provider, geographic location, encryption standards?)
- How long is data retained? (Real-time processing vs. indefinite storage?)
- Is data used for model training? (Some vendors use client data to improve their AI, often buried in terms of service.)
- What security certifications does the vendor hold? (SOC 2 Type II, ISO 27001, etc.)
- What happens to data if the vendor is acquired or shuts down? (Do you get a data export? Is it automatically deleted?)
Save your vendor's responses. If they can't or won't answer, that's a red flag.
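One lightweight way to keep this documentation auditable is to capture each vendor's answers in a structured record and flag unanswered questions before approval. The sketch below is illustrative, assuming a hypothetical `VendorDueDiligence` structure; the field names are not a FINRA-mandated schema, just one way to mirror the checklist above.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical record for documenting AI-vendor due diligence answers.
@dataclass
class VendorDueDiligence:
    vendor: str
    data_accessed: str = ""        # recordings, transcripts, emails, calendar?
    storage_location: str = ""     # cloud provider, region, encryption standard
    retention_period: str = ""     # e.g. "zero-retention", "up to 3 years"
    trains_on_client_data: Optional[bool] = None  # per the vendor's terms
    certifications: list = field(default_factory=list)  # SOC 2 Type II, ISO 27001
    wind_down_terms: str = ""      # data export / deletion if vendor shuts down

    def gaps(self):
        """Return the due diligence questions still unanswered."""
        missing = []
        if not self.data_accessed:
            missing.append("data accessed")
        if not self.storage_location:
            missing.append("storage location")
        if not self.retention_period:
            missing.append("retention period")
        if self.trains_on_client_data is None:
            missing.append("model training use")
        if not self.certifications:
            missing.append("security certifications")
        if not self.wind_down_terms:
            missing.append("wind-down terms")
        return missing

# A partially completed review surfaces exactly what still needs answering.
record = VendorDueDiligence(vendor="ExampleAI", retention_period="zero-retention")
print(record.gaps())
```

If `gaps()` comes back non-empty, the review isn't done; that output is also the list of follow-up questions to send the vendor.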
Step 3: Align AI with Your Data Retention Policy
Your firm's data retention policy likely specifies how long client records are kept (typically 6-7 years for FINRA-regulated records, per Rule 4511).
If your AI tool stores client data longer than your policy allows, you have three options:
- Update your policy to reflect the AI tool's retention (not recommended, this weakens your privacy posture).
- Negotiate with the vendor for custom retention settings (many won't accommodate this for small firms).
- Choose a zero-retention tool that destroys data after processing (the cleanest solution).
FINRA doesn't mandate zero-retention. But they do mandate consistency between your policy and your practice.
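The alignment check itself is simple enough to express as a rule: a vendor that keeps client data longer than your policy allows (or indefinitely) is out of step. This is a minimal sketch of that rule; the 7-year figure is illustrative of one firm's policy, not a universal requirement.

```python
from typing import Optional

# Illustrative: this firm's record-retention policy, in years.
# (FINRA Rule 4511 sets retention minimums; your policy may differ.)
FIRM_RETENTION_YEARS = 7

def retention_conflict(vendor_retention_years: Optional[float]) -> bool:
    """True if the vendor keeps data longer than the firm's policy allows.
    None means the vendor retains data indefinitely."""
    if vendor_retention_years is None:
        return True  # indefinite retention always conflicts with a fixed policy
    return vendor_retention_years > FIRM_RETENTION_YEARS

print(retention_conflict(0))     # zero-retention tool: no conflict
print(retention_conflict(10))    # vendor keeps data past the 7-year policy: conflict
print(retention_conflict(None))  # indefinite retention: conflict
```

The point isn't the code; it's that the comparison should be this mechanical. If you can't state your vendor's retention period as a number, you can't run the check, and neither can an examiner.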
Step 4: Implement Training and Monitoring
Your reps need to understand:
- What AI tools are approved
- What data they can input
- What the penalties are for using unapproved tools
Pro tip: Include AI governance in your annual compliance training. Make reps attest that they've read and understand the policy. That documentation matters in exams.
Also consider monitoring:
- Email for signs of AI tool usage (e.g., "As generated by ChatGPT...")
- Browser activity if your compliance stack includes web filtering
- Expense reports for unapproved software subscriptions
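The email check in particular can start as a simple keyword scan. The sketch below is one hedged example of that idea; the tool names and phrases are illustrative, not an exhaustive or FINRA-specified list, and a real surveillance program would feed this into your existing review workflow.

```python
import re

# Illustrative watch list of consumer AI tools and tell-tale phrases.
AI_TOOL_PATTERN = re.compile(
    r"\b(chatgpt|openai|gemini|claude|as generated by)\b",
    re.IGNORECASE,
)

def flag_ai_mentions(email_body: str):
    """Return the AI-related phrases found in an email body, in order."""
    return [m.group(0) for m in AI_TOOL_PATTERN.finditer(email_body)]

sample = "Hi Jane, as generated by ChatGPT, here are your action items..."
print(flag_ai_mentions(sample))
```

A hit doesn't prove a violation, but it gives compliance a concrete lead to review against the acceptable use policy.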
Real-World Compliance Scenarios
Scenario 1: The ChatGPT User
An advisor pastes a client meeting transcript into ChatGPT and asks it to generate follow-up action items. The advisor finds this helpful and does it after every meeting.
The compliance problem:
- ChatGPT stores the conversation (unless the user has opted into data controls, which most haven't).
- Client PII is now on OpenAI's servers, retained on OpenAI's terms rather than the firm's.
- This violates Reg S-P (client information wasn't adequately safeguarded).
- The firm didn't approve or supervise the tool's use.
FINRA's likely finding: Failure to supervise. Failure to implement adequate data security policies.
Scenario 2: The "SOC 2 Compliant" Tool
An advisor signs up for an AI meeting assistant that advertises "SOC 2 Type II certified" security. The advisor assumes this means the tool is compliant and starts using it for client meetings.
Six months later, during a FINRA exam, the examiner asks: "How long does this vendor retain client data?"
The advisor doesn't know. The vendor's terms say "data is retained for up to 3 years for quality assurance." The firm's data retention policy says 7 years.
The compliance problem:
- No vendor due diligence was documented.
- The advisor didn't verify that the tool aligns with the firm's retention policy.
- SOC 2 Type II attests to a vendor's security controls over a review period; it says nothing about how long data is retained. The advisor misunderstood the certification.
FINRA's likely finding: Inadequate vendor due diligence. Policies not enforced.
Scenario 3: The Zero-Retention Tool
An advisor adopts an AI meeting assistant that uses zero-retention architecture. The tool processes meetings, syncs notes to the CRM, and permanently destroys the recording and transcript within minutes.
During a FINRA exam, the examiner asks: "Where is client data from these meetings stored?"
The advisor responds: "The processed notes are in our CRM, which we control. The AI vendor doesn't store any client data. It's destroyed after processing. Here's the vendor's deletion certification and their architecture documentation."
The compliance outcome: No findings. The firm demonstrated reasonable due diligence and implemented a tool that aligns with data minimization best practices.
The Bottom Line: FINRA Wants to See Thoughtfulness
FINRA isn't trying to ban AI. They're trying to prevent reckless adoption.
The regulators understand that AI improves productivity. They also understand that most advisors aren't cybersecurity experts. What they want to see is:
- Awareness of the risks
- Documented policies
- Supervision of how reps use AI tools
- Vendor due diligence
If you can show those four things, you're in good shape, even if your AI governance program isn't perfect.
The firms that will struggle are the ones that:
- Have no AI policy at all
- Can't name the AI tools their reps are using
- Can't explain where client data is stored
- Assume "SOC 2" means "compliant"
What to Do This Week
If you don't have an AI governance policy yet, start here:
- Audit your current AI usage. Ask every advisor: What AI tools are you using? (You might be surprised by the answers.)
- Draft a simple AI acceptable use policy. (We've published a free template below.)
- Document vendor due diligence for any approved tools. Even a one-page summary is better than nothing.
- Add AI governance to your next compliance meeting agenda.
FINRA's 2026 Oversight Report is a signal. The firms that respond now will be ahead of the curve. The ones that wait will be scrambling when the exam requests start coming.
Free Resource
AI Acceptable Use Policy Template
A ready-to-customize policy template for RIAs adopting AI tools. Covers approved tools, data handling, supervision requirements, and staff training.
Download Free
Ready to try AI Secretary?
Start your 14-day free trial. No credit card required.
Start Your Free Trial