Artificial intelligence in the securities industry

Artificial intelligence (AI), although originally conceived in its simplest form in the mid-20th century, has enjoyed a resurgence in the past few years and a particularly rapid uptick in popularity over the past few months. This can be attributed to a number of factors, including cheaper and more accessible computing power and cloud storage; more advanced algorithms; larger volumes of available data; greater investment in AI companies; and the availability of open-source AI technologies.

In the common lexicon, “AI” has become a general term for any computer system that solves problems by emulating the rational thought processes and decision-making capabilities of humans. Within this definition are a multitude of specialized (and often overlapping) AI architectures and applications, such as machine learning, natural language processing, and robotic process automation.

As many companies, including firms in the securities industry, race to implement AI-based tools into their service offerings and backend operations, it’s worth grappling with both the potential benefits and drawbacks of such technology.

In 2020, the Financial Industry Regulatory Authority (FINRA) issued the report “Artificial Intelligence (AI) in the Securities Industry,” in which it examined applications of AI by broker-dealer firms, specifically in the areas of communications with customers, investment processes, and operational functions.

The report found that broker-dealers were employing tools such as virtual assistants to help address basic client inquiries and AI apps to automatically categorize and redirect customer emails to the appropriate recipient. Some firms also indicated that they were using AI-based tools to help build more comprehensive client profiles by aggregating data regarding clients’ assets, spending patterns, debt balances, electronic communications, etc. to aid in suggesting investment products. The report also found that AI was being used via predictive models to help forecast the price movements of specific investment products.

Along with the above uses, firms were also employing AI to mitigate their risk exposure and enhance their regulatory compliance in the form of AI systems that performed surveillance, monitored for financial crime indicators, tracked updates to regulatory requirements, aided in credit scoring, anticipated cybersecurity threats, and automated labor-intensive administrative duties.

The use of algorithms to generate tailored investment advice is perhaps the most intriguing potential application of AI in the securities industry, and also the one that warrants the most caution. Many Registered Investment Advisor (RIA) firms already employ what are commonly known as “robo-advisors,” automated platforms that can provide investment advice and help retail investors manage their assets. These robo-advisors vary in the functions they perform, with some operating independently of human advisors and others working in tandem with them.

In May 2023, JPMorgan caused a stir in the fintech world when it filed a trademark application with the U.S. Patent and Trademark Office for “IndexGPT,” which the company describes as an AI-powered product that will implement “Generative Pre-trained Transformer models” (the same type of powerful language processing frameworks used by OpenAI’s popular chatbot ChatGPT) for “analyzing and selecting securities tailored to customer needs” (USPTO Serial #97931538). If successful, this platform could prove to be one of the most innovative and sophisticated financial advisory AIs to date.

But with great computing power comes great responsibility. The use of AI-based tools is certainly not without its risks and challenges, such as the lack of transparency and limited “explainability” of how algorithms reach their conclusions; consumer privacy risks due to the collection and centralization of vast amounts of data; and the proliferation of historical, systemic, and human biases.

Regardless of how sophisticated an algorithm or model might be, an AI system can only be as good as the data it receives — quantity, completeness, relevancy, accuracy, and timeliness can all affect a system’s output. But even when a system is trained on quality data and is designed to be “bias-free,” algorithms can still sometimes skew results in unexpected ways.

A 2019 study found that an algorithm delivering ads for jobs in STEM fields favored showing the ad to men over women, even though the algorithm had been specifically designed to be gender neutral. The likely reason is that the algorithm was also designed to optimize cost efficiency. Advertisers place greater value on reaching women as a demographic group, since women have traditionally controlled household expenses and account for a larger share of consumer spending than men, so impressions shown to women tended to cost more. By seeking out cheaper impressions, the ostensibly neutral algorithm ended up delivering the ad to more men. (See Lambrecht & Tucker, 2019, “Algorithmic Bias? An Empirical Study of Apparent Gender-Based Discrimination in the Display of STEM Career Ads.”)
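To make the mechanism concrete, the sketch below shows how a cost-optimizing ad allocator with no notion of gender in its objective can still skew delivery toward the cheaper audience. It is a minimal, hypothetical illustration: the budget, per-impression costs, and allocation rule are assumptions chosen for this example, not details of the platform examined in the study.

```python
# Purely illustrative: the budget, per-impression costs, and allocation rule
# are assumptions for this sketch, not details of the platform in the study.

BUDGET = 1000.0                        # total ad spend
COST_PER_IMPRESSION = {                # assumed: impressions shown to women cost more
    "women": 0.55,
    "men": 0.35,
}

def allocate_impressions(budget, costs):
    """Maximize reach by giving each group a budget share inversely
    proportional to its per-impression cost (cheaper audience -> more ads)."""
    inverse = {group: 1.0 / cost for group, cost in costs.items()}
    total_inverse = sum(inverse.values())
    delivered = {}
    for group, cost in costs.items():
        share = budget * inverse[group] / total_inverse   # dollars allotted to the group
        delivered[group] = int(share / cost)              # impressions that share buys
    return delivered

if __name__ == "__main__":
    print(allocate_impressions(BUDGET, COST_PER_IMPRESSION))
    # -> roughly {'women': 707, 'men': 1746}: targeting is gender-neutral, yet men
    #    see the ad about 2.5 times as often because their impressions cost less.
```

The specific allocation rule does not matter; any optimizer rewarded purely for inexpensive reach will, absent an explicit fairness constraint, drift toward the lower-cost demographic.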

This example draws an important parallel to the securities industry, one especially pertinent for RIAs and broker-dealers, who are bound by obligations such as fiduciary duty, duty of care, duty of loyalty, best execution, and best interest. When a firm or advisor utilizes an AI-based tool, it is still responsible for adhering to the appropriate fiduciary standards. If an algorithm’s design leads it to prioritize the advisor’s or firm’s interests over those of the investor, or if it results in the proliferation of other biases, then the advisor could be held responsible for violating their fiduciary duty.

On April 6, 2023, the Investor Advisory Committee (IAC) for the Securities and Exchange Commission (SEC) submitted a letter to SEC Chair Gary Gensler on the “Establishment of an Ethical Artificial Intelligence Framework for Investment Advisors.”

The letter urged the SEC to focus on three primary tenets with regard to AI:

(1) Equity — Firms should consider the context of the data that is both being used to train AI models and that is being produced by these models, with an eye to identifying any implicit biases. The IAC suggests that firms seek multidisciplinary guidance from experts to assist with this.

(2) Consistent and persistent testing — Firms should test AI tools in their developmental stages as well as monitor and re-test them after implementation in order to ensure the algorithms are performing accurately and without bias. The IAC suggests using either an internal governance team (separate from the AI creators) or external auditors. (A simple illustration of one such check appears after this list.)

(3) Governance and oversight — Firms should create governance and risk management policies pertaining to the use of AI that will help ensure that investors’ interests are prioritized over those of the firm or individual advisors. The IAC also urges the SEC to create clearer guidance and best practices on the topic of AI.
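To illustrate what the re-testing described in item (2) might look like in practice, the following is a minimal, hypothetical sketch of a post-deployment bias check that compares how often a model recommends a product to each demographic group. The 80% (“four-fifths”) threshold is a common rule of thumb borrowed from disparate-impact analysis and is used here only as an example; it is not an SEC or IAC requirement.

```python
# Hypothetical post-deployment bias check: compare recommendation rates across
# demographic groups and flag large gaps. Threshold is illustrative only.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, recommended) pairs, recommended True/False."""
    counts = defaultdict(lambda: [0, 0])          # group -> [recommended, total]
    for group, recommended in decisions:
        counts[group][0] += int(recommended)
        counts[group][1] += 1
    return {group: rec / total for group, (rec, total) in counts.items()}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold` times the
    highest group's rate (the 'four-fifths' rule of thumb)."""
    rates = selection_rates(decisions)
    highest = max(rates.values())
    flags = {group: (rate / highest) < threshold for group, rate in rates.items()}
    return flags, rates

if __name__ == "__main__":
    sample = ([("A", True)] * 80 + [("A", False)] * 20 +
              [("B", True)] * 50 + [("B", False)] * 50)
    flags, rates = disparate_impact_flags(sample)
    print(rates)   # {'A': 0.8, 'B': 0.5}
    print(flags)   # {'A': False, 'B': True} -> group B falls below 80% of group A's rate
```

In practice, a check of this kind would be run on fresh production data at regular intervals, with the results reviewed by the independent governance team or external auditors the IAC describes.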

While the securities industry is likely only beginning to test the waters of artificial intelligence, the IAC’s letter summed up the mindset with which those in the securities space should approach their use of AI tools:

“The IAC believes that the SEC has ample authority under the Investment Advisers Act of 1940 to oversee and monitor the investment adviser industry’s use of technology to provide investment advice to investors…advisers have an affirmative duty of care, loyalty, honesty and utmost good faith to act in the best interests of investors when providing investment advice…The use of technology by advisers does not change the fiduciary nature of advice or the regulatory environment in which they operate.”

Jason Wallen, a summer associate at the firm, assisted in the research and preparation of this article.

Roger E. Barton is a regular contributing columnist on securities regulation and litigation for Reuters Legal News and Westlaw Today.
