
Shadow AI in Financial Advisory: What Your Compliance Officer Needs to Know

Spencer Gauta

June 26, 2025


It's Tuesday afternoon. Your associate advisor is drafting a financial plan. He copies the client's goals from the CRM, pastes them into ChatGPT, and asks it to generate a summary for the plan document.

Across the hall, another advisor just finished a client meeting. She uploads the meeting recording to a free transcription tool she found on Reddit. It emails her a transcript in minutes.

Your newest hire is struggling with a complex estate planning question. He screenshots the client's balance sheet, uploads it to Claude, and asks, "What's the most tax-efficient way to structure this?"

None of these tools were approved by your compliance department. None were reviewed by IT. None appear in your vendor management records.

But they're all happening, right now, at financial advisory firms across the country.

This is shadow AI.

What Is Shadow AI?

Shadow AI refers to the use of unapproved, unvetted AI tools by employees, especially tools that process or store sensitive client data.

It includes:

  • General-purpose AI chatbots (ChatGPT, Claude, Gemini, Perplexity)
  • Free transcription services (Otter.ai free tier, Rev.ai, Trint)
  • Browser extensions that add "AI superpowers" to Gmail, LinkedIn, or CRMs
  • Document analysis tools (uploading client PDFs to Claude or ChatGPT for summarization)
  • Email drafting assistants (AI Gmail plugins, Superhuman AI, etc.)

These tools are powerful. They save time. They improve quality. And they're incredibly easy to adopt: no IT approval needed, no procurement process, no training required.

But for financial advisors operating under SEC and FINRA oversight, shadow AI is a ticking compliance bomb.

Why Shadow AI Is Spreading in Advisory Firms

Advisors aren't using unapproved AI tools to be reckless. They're using them because:

1. AI solves real problems

Post-meeting admin takes hours. Drafting financial plans is tedious. Searching through CRM records is slow. AI makes all of this faster and easier.

When there's no approved tool to solve the problem, advisors improvise.

2. Approved tools are slow to adopt

Many firms have a 6- to 12-month procurement cycle. By the time compliance approves a vendor, the tool may be outdated, or the advisor has already found a workaround.

3. Advisors don't realize it's a problem

Most advisors don't think of ChatGPT as a "third-party vendor" subject to due diligence. They think of it as a search engine or a word processor. They don't realize that pasting client data into ChatGPT is the same as uploading it to OpenAI's servers, where it might be stored, used for training, or exposed in a breach.

4. There's no technical barrier

Unlike installing software (which IT can block), using web-based AI tools requires nothing more than a browser. Advisors can sign up, start using it, and never tell anyone.

The Compliance Risks of Shadow AI

Shadow AI creates three major compliance failures:

1. Unauthorized Data Disclosure

When an advisor pastes client information into ChatGPT, that data is transmitted to OpenAI. Depending on the user's settings, OpenAI may:

  • Store the conversation indefinitely
  • Use the data to train future models (unless the user has opted out)
  • Retain the data in backups even after the user "deletes" it

Under Regulation S-P, financial advisors are required to protect client information and provide notice before sharing it with third parties. Shadow AI violates both requirements. Advisors are sharing data without authorization and without client notice.

2. Failure to Supervise

FINRA Rule 3110 requires firms to supervise their representatives' activities, including their use of technology. If a firm doesn't know what tools its advisors are using, it can't supervise them.

Good intentions don't help here. It doesn't matter that the advisor meant well or that the tool "didn't cause any harm." If the firm failed to supervise, FINRA can cite it as a deficiency.

3. Data Breach Exposure

Every system that stores client data is a potential breach point. In recent years, OpenAI, Google, and Microsoft have all disclosed security incidents in which user data was exposed or mishandled.

If an advisor uses one of these platforms with client data and a breach occurs, the advisory firm is responsible, even though the breach happened on someone else's infrastructure.

Worse, many firms won't even know they're exposed until after the breach is disclosed publicly.

Real-World Shadow AI Scenarios

Scenario 1: ChatGPT for Meeting Summaries

What happened: An advisor has 8 client meetings in a day. After each meeting, she copies her handwritten notes into ChatGPT and asks it to format them as meeting summaries. The summaries include client names, account balances, investment recommendations, and estate planning details.

The compliance problem:

  • Client PII was uploaded to OpenAI's servers
  • The advisor never checked OpenAI's data retention policy (by default, conversations are retained and may be used for model training unless the user opts out via data controls)
  • The firm has no record of this activity and can't demonstrate supervision
  • If OpenAI is subpoenaed, those conversations could be disclosed

FINRA's likely finding: Failure to supervise. Unauthorized disclosure of client information. Inadequate cybersecurity policies.

Scenario 2: Free Transcription Tool

What happened: An advisor discovers a free transcription service advertised on YouTube. He uploads 6 months of client meeting recordings to get transcripts, saving himself hours of manual note-taking.

The service's terms of service (which he didn't read) state: "Uploaded audio may be retained for up to 90 days and used to improve transcription accuracy."

The compliance problem:

  • 6 months of client meetings (including Social Security numbers, account details, health information, estate plans) are now stored on a third-party server
  • The vendor is using this data to train their AI model
  • The advisor has no way to verify deletion or request data removal
  • The firm doesn't even know this vendor exists

FINRA's likely finding: Unauthorized third-party service provider. Failure to conduct vendor due diligence. Data retention policy not followed.

Scenario 3: Document Upload for Analysis

What happened: An advisor is preparing for a complex tax planning conversation. She uploads the client's prior-year tax return (PDF) to Claude and asks, "What tax optimization strategies should we consider?"

Claude provides a thoughtful response. The advisor incorporates it into her recommendations.

The compliance problem:

  • The client's tax return (which includes SSN, income details, investment holdings) was uploaded to Anthropic's servers
  • Even if Claude doesn't store the file permanently, it was processed on Anthropic's infrastructure
  • The advisor has no way to verify what happened to the data after the conversation
  • The client never consented to sharing their tax return with a third party

SEC's likely finding: Violation of Regulation S-P (inadequate safeguards). Failure to provide required privacy notices.

How to Detect Shadow AI in Your Firm

Shadow AI is hard to detect because it's happening outside your approved systems. Here's how to uncover it:

1. Anonymous Survey

Send an anonymous survey to all advisors and support staff asking:

  • What AI tools do you use for work? (List examples: ChatGPT, Claude, Gemini, Otter.ai, Grammarly, etc.)
  • What tasks do you use them for?
  • Do you ever input client names, account details, or personal information?

Frame it as a fact-finding mission, not a witch hunt. You'll get more honest answers, and you'll likely be surprised by what you learn.

2. Browser History Audit (Firm Devices Only)

If advisors use firm-issued laptops, your IT team can run a scan for:

  • Visits to openai.com, claude.ai, gemini.google.com
  • Uploads to transcription or AI services
  • Installation of browser extensions with "AI" in the name

This is more invasive, but it's justified if you have reason to believe shadow AI is widespread.
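As a rough illustration, here's a minimal Python sketch of the kind of scan IT might run against a copy of Chrome's History database on a firm laptop. The domain list is illustrative, not exhaustive, and the script assumes Chrome's standard `urls` table layout:

```python
import shutil
import sqlite3
import tempfile
from pathlib import Path

# Illustrative list of consumer AI service domains; extend for your firm.
AI_DOMAINS = [
    "openai.com", "chatgpt.com", "claude.ai",
    "gemini.google.com", "otter.ai", "perplexity.ai",
]

def scan_chrome_history(history_path: str) -> list[str]:
    """Return visited URLs that match known AI service domains.

    Chrome locks its History database while the browser is running,
    so we work from a temporary copy.
    """
    with tempfile.TemporaryDirectory() as tmp:
        copy = Path(tmp) / "History"
        shutil.copy(history_path, copy)
        conn = sqlite3.connect(str(copy))
        try:
            rows = conn.execute("SELECT url FROM urls").fetchall()
        finally:
            conn.close()
    return [url for (url,) in rows if any(d in url for d in AI_DOMAINS)]
```

On Windows, the History file typically lives under the user's Chrome profile directory; check your MDM or endpoint tooling for the exact path before running anything like this, and clear the audit with HR and legal first.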

3. Review Expense Reports

Look for subscriptions to AI tools:

  • ChatGPT Plus ($20/month)
  • Claude Pro ($20/month)
  • Otter.ai Business ($16/month)
  • Superhuman ($30/month)

If advisors are expensing these, they're using them.

4. Network Traffic Analysis

Your IT team can monitor outbound network traffic for connections to known AI service domains. This won't catch personal device usage, but it will catch firm network activity.
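For instance, if your network resolver logs DNS queries, a short script can summarize which internal clients are querying AI service domains. The dnsmasq-style log format below is an assumption; adapt the regex to whatever your resolver actually emits:

```python
import re
from collections import Counter

# Illustrative domain list; extend for your environment.
AI_DOMAINS = ("openai.com", "chatgpt.com", "claude.ai", "gemini.google.com", "otter.ai")

def summarize_dns_log(lines):
    """Count DNS queries to known AI domains, grouped by (client, domain).

    Assumes dnsmasq-style lines such as:
    'Jun 26 14:02:11 dnsmasq[1234]: query[A] chat.openai.com from 10.0.0.15'
    """
    hits = Counter()
    pattern = re.compile(r"query\[\w+\]\s+(\S+)\s+from\s+(\S+)")
    for line in lines:
        m = pattern.search(line)
        if not m:
            continue
        domain, client = m.groups()
        if domain.endswith(AI_DOMAINS):  # str.endswith accepts a tuple
            hits[(client, domain)] += 1
    return hits
```

The output tells you which workstations to follow up on, not who is at fault; a hit could be personal browsing on a lunch break rather than client work.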

5. Spot-Check CRM Records

Randomly pull 10-20 recent client meeting notes. Look for language patterns like:

  • "Here is a summary of the meeting..." (ChatGPT's default phrasing)
  • "Certainly! Let me break that down..." (common AI assistant phrasing)
  • Unusually polished or formal tone compared to the advisor's typical style

If you see AI-generated language, follow up with the advisor to ask what tool they used.
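A simple keyword pass can help prioritize which notes to pull for manual review. This sketch assumes notes are available as plain text keyed by a CRM record ID; the phrase list is illustrative and will produce false positives:

```python
# Phrases that commonly appear in AI-generated text; tune for your advisors' styles.
AI_TELLTALES = [
    "here is a summary",
    "certainly!",
    "let me break that down",
    "i hope this helps",
    "as an ai",
]

def flag_notes(notes: dict[str, str]) -> list[str]:
    """Return IDs of meeting notes containing common AI-assistant phrasing.

    A hit is a prompt for a follow-up conversation, not proof of shadow AI.
    """
    flagged = []
    for note_id, text in notes.items():
        lowered = text.lower()
        if any(phrase in lowered for phrase in AI_TELLTALES):
            flagged.append(note_id)
    return flagged
```

Treat the flagged list as a conversation starter with the advisor, exactly as described above, rather than as evidence of a violation.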

How to Stop Shadow AI Without Killing Productivity

The worst response to shadow AI is to ban all AI tools. That won't stop usage. It will just drive it further underground.

The better approach: Provide approved alternatives.

Step 1: Adopt a Compliant AI Meeting Assistant

Give advisors a tool that solves their biggest pain point (meeting notes and CRM updates) in a compliant way.

Requirements:

  • Built for financial advisors (not a consumer tool repurposed)
  • SOC 2 Type II certified (or better: zero-retention architecture)
  • Integrates with your existing CRM
  • Approved by your compliance department

When advisors have a tool that works and is approved, they're far less likely to use shadow AI.

Step 2: Create an AI Acceptable Use Policy

Your policy should:

  • List approved tools
  • Explicitly ban consumer AI tools (ChatGPT, Claude, etc.) for client work
  • Define what data can and cannot be input into AI tools
  • Require advisors to disclose any AI tools they want to use

Sample language:

Representatives may not input client names, account numbers, financial data, or personally identifiable information into unapproved AI tools. Use of consumer AI platforms (e.g., ChatGPT, Claude, Gemini) for client-related work is prohibited unless explicitly approved by the Chief Compliance Officer.

Step 3: Train Advisors on the Risks

Most advisors don't realize that ChatGPT stores conversations or that uploading a document to Claude means it's processed on Anthropic's servers.

Include AI risk awareness in your annual compliance training:

  • What shadow AI is and why it's a problem
  • Real-world examples of data breaches involving AI tools
  • How to get approval for new tools
  • What happens if the policy is violated

Step 4: Offer a Path to Approval

If an advisor finds a useful AI tool, don't just say no. Give them a process:

  • Submit a vendor evaluation request
  • Compliance reviews security, data handling, terms of service
  • If approved, the tool is added to the approved list for the whole firm

This turns motivated advisors into advocates for better tools, instead of secret shadow AI users.

Step 5: Monitor and Audit

Even with approved tools and clear policies, some advisors will test the boundaries. Implement spot checks:

  • Quarterly review of browser activity (on firm devices)
  • Annual attestation that advisors have followed the AI policy
  • Random audit of meeting notes for AI-generated language

What to Do If You've Already Had Shadow AI Usage

If you discover that advisors have been using unapproved AI tools with client data, here's how to respond:

1. Don't Panic

Shadow AI is widespread. You're not the first firm to deal with this. The key is to address it systematically.

2. Assess the Exposure

For each tool that was used:

  • What data was input?
  • How many clients were affected?
  • What is the tool's data retention policy?
  • Can you request data deletion?

3. Request Deletion (If Possible)

Some platforms (OpenAI, Anthropic) allow users to request deletion of conversation history. If your advisor has an account, have them submit a deletion request immediately.

Document the request and any confirmation you receive.

4. Update Your Policies

If shadow AI happened because you didn't have a policy, write one. If you had a policy but it wasn't enforced, communicate it clearly and add consequences for violations.

5. Decide Whether to Disclose

This is a legal and compliance judgment call. Factors to consider:

  • Was client PII exposed?
  • What is the likelihood of harm?
  • Do state breach notification laws require disclosure?
  • Would disclosure be required under SEC/FINRA rules?

Consult with your compliance consultant or legal counsel before making this decision.

The Opportunity in Shadow AI

Shadow AI isn't just a risk. It's a signal.

When advisors seek out unapproved tools, it means they have real needs your firm isn't meeting. They're trying to be more productive, deliver better service, and reduce admin burden.

The firms that respond by providing compliant alternatives will win. They'll be more productive, more efficient, and better positioned to scale.

The firms that just say "no" will continue to struggle with shadow AI, and with retention, as frustrated advisors leave for firms that embrace technology.

The Bottom Line: You Can't Ignore This

Shadow AI is not a hypothetical risk. It's happening now, at your firm, whether you know it or not.

FINRA has already flagged AI governance as a 2026 exam priority. The SEC has issued guidance on cybersecurity and vendor management. State regulators are watching.

The firms that get ahead of this will:

  • Adopt approved AI tools that meet compliance standards
  • Educate advisors on the risks of shadow AI
  • Provide clear policies and paths to approval
  • Monitor and audit to ensure policies are followed

The firms that ignore it will end up explaining to regulators why client data ended up on ChatGPT's servers, and why nobody noticed.


Free Resource

AI Acceptable Use Policy Template

Stop shadow AI before it starts. This ready-to-customize policy template covers approved tools, prohibited uses, data handling, and staff training requirements.

Download Free

Ready to try AI Secretary?

Start your 14-day free trial. No credit card required.

Start Your Free Trial