FINRA and SEC are reviewing specific AI use cases

Artificial intelligence tools, from large language models (LLMs) to AI-driven marketing to prospective clients, are proving extremely useful for financial advisors. But they also pose new risks that are under close scrutiny by regulators such as FINRA and the US Securities and Exchange Commission.

In recent weeks, authorities have addressed emerging risks such as “AI washing,” in which a company overstates its use of AI; hallucinations that can occur when using models such as ChatGPT; and ethical concerns about using data to personalize marketing to individuals.

“We need to understand not only how these models work, but also the opportunities and risks presented by each of the available models, particularly in generative AI,” said Brad Ahrens, senior vice president of advanced analytics at FINRA, speaking September 27 at the self-regulatory organization's Advertising Regulation Conference.

Preventing AI woes through regulation

A week before the FINRA conference, US Securities and Exchange Commission Chairman Gary Gensler issued a somewhat humorous warning, comparing overreliance on AI to the film “Her,” in which an AI assistant named Samantha forms a romantic bond with a user who relies on her too much.

“Regulators and market participants, I think, need to think about what it means to have potentially 8,316 heartbroken financial institutions relying on an AI model or data aggregator,” Gensler said in his Sept. 19 “Office Hours” video. “Let’s do our best to keep this grief out of our capital markets.”

However, problems with AI tools are not always the most obvious cases, such as the SEC charging two advisory firms in March with misleading the public by exaggerating their use of AI, the well-known hallucinations in large language models, or so-called AI “deepfakes.”

Officials are also weighing how companies use AI dictation services in meetings, and whether a company's employees are sufficiently aware that such data can flow back into the public models behind OpenAI's ChatGPT or Anthropic's Claude.

“If you use openai.com or anthropic.com, you should be concerned about your employees using that because there is a possibility that data could flow back into the models,” Ahrens said. “It’s happening.”

Another area where FINRA is seeing more AI use cases, according to Ahrens, is firms using chatbot-style AI to answer or summarize questions, such as how to handle a dispute when an investment property has multiple tenants.

“A lot of companies use an LLM there, and they can just put in a prompt or question that says, 'What do I do if there's a tenant dispute?' And it will take them directly to the content of the manual using something called retrieval-augmented generation,” he said.
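
As a rough illustration of the pattern Ahrens describes, the sketch below shows a minimal retrieval-augmented generation flow in Python: it retrieves the most relevant manual passage for a question and folds it into the prompt sent to a model. The manual excerpts, the keyword-overlap retriever and the call_llm() stub are hypothetical stand-ins, not any firm's actual system, which would typically use embeddings and a vendor LLM API.

```python
# Minimal retrieval-augmented generation (RAG) sketch, for illustration only.
# The manual sections, the keyword-overlap retriever, and the call_llm() stub
# are hypothetical; a production system would use embeddings and a real LLM API.

MANUAL = {
    "tenant-disputes": "If multiple tenants in a property disagree, document the "
                       "complaint, notify the property manager, and follow the "
                       "escalation steps in the operations manual.",
    "maintenance": "Route maintenance requests through the ticketing system and "
                   "confirm completion with the tenant in writing.",
}

def retrieve(question: str, k: int = 1) -> list[str]:
    """Score manual sections by simple keyword overlap and return the top k."""
    q_words = set(question.lower().split())
    scored = sorted(
        MANUAL.values(),
        key=lambda text: len(q_words & set(text.lower().split())),
        reverse=True,
    )
    return scored[:k]

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g., a vendor API)."""
    return f"[LLM answer grounded in the retrieved text]\n{prompt}"

def answer_from_manual(question: str) -> str:
    """Fold the retrieved manual text into the prompt so the answer cites the manual."""
    context = "\n".join(retrieve(question))
    prompt = f"Using only this manual excerpt:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)

print(answer_from_manual("What do I do if there's a tenant dispute?"))
```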

FINRA didn't necessarily take a position on this type of AI use, but officials stressed the importance of human monitoring and compliance procedures on the back end.

“How do you make sure this generative AI works the way it’s supposed to? In other words, how do you monitor its use?” said Philip Shaikun, vice president and deputy general counsel in FINRA’s Office of General Counsel. “You want to be sure that you have certain types of procedures in place, keeping a human in the loop and doing spot checks.”
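
To make Shaikun's point concrete, a spot-check procedure can be as simple as randomly sampling logged chatbot exchanges for human compliance review. The sketch below is illustrative only; the Exchange log format and the 5% sample rate are assumptions, not FINRA requirements.

```python
# Illustrative spot-check sketch: randomly sample logged gen-AI responses for
# human compliance review. The log format and 5% sample rate are assumptions.
import random
from dataclasses import dataclass

@dataclass
class Exchange:
    question: str
    answer: str
    flagged_for_review: bool = False

def spot_check(log: list[Exchange], sample_rate: float = 0.05, seed: int = 7) -> list[Exchange]:
    """Mark a random sample of exchanges so a human reviewer can verify them."""
    rng = random.Random(seed)
    sample = rng.sample(log, max(1, int(len(log) * sample_rate)))
    for exchange in sample:
        exchange.flagged_for_review = True
    return sample

# Example: flag roughly 5% of the day's chatbot exchanges for manual review.
log = [Exchange(f"question {i}", f"answer {i}") for i in range(200)]
for item in spot_check(log):
    print("review:", item.question)
```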

That also applies to technology vendors, which both FINRA and the SEC have strongly emphasized companies need to monitor closely. Amy Sochard, vice president of FINRA's advertising regulation division, said companies need to double check with their existing vendors, who may have shifted their technology to AI since the contract was first signed.

For example, “if you didn't update the vendor contract because it was a two-year contract, and you notice that they're now using some kind of generative AI,” she said, “you need to know about that.”

AI could open the door to exploitative advertising

FINRA updated its Rule 2210 guidance in May to incorporate the use of chatbots and AI in communications with investors and the public. While Ahrens said FINRA doesn't “see a lot of customer-facing use cases” due to the increased risks, it is seeing more “hyper-personalization of ads.”

That raises challenges around the use of AI and machine-learning tools that track a customer and their behavior, based on their so-called “digital footprint,” for marketing purposes.

“It opens the door to exploitative advertising tactics where advertisers may know more about a digital user than the person knows they are giving up. And this is where we want to start talking about ethics,” said Rachael Chudoba, a senior planning and research strategist at McCann Worldgroup, a global advertising network.

Chudoba, who spoke at the FINRA conference, said companies need to ensure their teams are not only informed about the ethical concerns surrounding AI, but also understand how those concerns apply to the various tools and how the tools themselves work.

“For example: Is your bias and ethics training up to date to include generative AI situations? Are your teams comfortable accessing and learning about AI legal and policy guidance?” she said. “And are you educating your teams on the systems you want them to use so they feel like the tools are explainable and transparent to them?”

FINRA relies on AI for sentiment analysis

FINRA itself is also experimenting with AI tools and behavioral data analysis. For example, it uses AI to gauge the sentiment of public comment letters submitted during a proposed rulemaking, rather than requiring a person to read thousands of letters.

“It's not just the simple sentiment you're used to – like happy or sad – it's more about finding out: who wrote it, where did it come from? Then we group those. And then we actually go deeper into the sentiment,” Ahrens said. “We have a number of use cases that are already in progress.”
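
FINRA has not published its pipeline, but the kind of workflow Ahrens describes, grouping letters by who wrote them and attaching a rough sentiment label, can be sketched as follows. The word lists, the "source" field and the scoring rule here are illustrative assumptions only, not FINRA's actual system.

```python
# Illustrative sketch of grouping comment letters by source and scoring a rough
# sentiment, loosely mirroring the workflow described above. The word lists,
# the "source" field, and the scoring rule are assumptions, not FINRA's system.
from collections import defaultdict

POSITIVE = {"support", "agree", "benefit", "welcome"}
NEGATIVE = {"oppose", "concern", "burden", "object"}

def score_sentiment(text: str) -> str:
    """Return a coarse label based on overlap with small positive/negative word lists."""
    words = set(text.lower().split())
    balance = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if balance > 0 else "negative" if balance < 0 else "neutral"

def group_letters(letters: list[dict]) -> dict[str, list[str]]:
    """Group letters by who wrote them, then attach a rough sentiment label to each."""
    groups: dict[str, list[str]] = defaultdict(list)
    for letter in letters:
        groups[letter["source"]].append(score_sentiment(letter["text"]))
    return groups

letters = [
    {"source": "broker-dealer", "text": "We support the proposal and welcome clarity."},
    {"source": "investor advocate", "text": "We object to the burden this creates."},
]
print(dict(group_letters(letters)))
```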