AI Is Here. Fiduciaries Must Remain Diligent

Experts discuss how the benefits of artificial intelligence for 401(k) plan administration and management also come with risks to be questioned and considered.

Artificial intelligence is playing an increasingly important role in employer-sponsored retirement plans, used by everyone from asset managers to recordkeepers to financial wellness providers. But with that evolution also come risks, from bad inputs to cybersecurity concerns.

When operating under the Employee Retirement Income Security Act, it is important to have the same processes and evaluations in place as for any other plan design or investment decision, according to Michael Abbott, a partner in Foley & Lardner LLP who works with ERISA plan fiduciary clients.


“We are still in an environment where going through the procedural prudence and process matters,” Abbott says. “Just relying on an AI-generated output is probably not going to get you where you need to be in terms of satisfying ERISA requirements.”

In a post concerning the use of AI by 401(k) fiduciary and investment committees, Abbott and colleague Aaron Tantleff, also a partner in Foley & Lardner, laid out a variety of ways AI is being used in financial services.

Those include personalizing messages to plan participants and prospective customers (Vanguard’s use of Persado); assisting financial advisers (Morgan Stanley’s Debrief); and automating investing with digital robo-advisers (Charles Schwab’s Intelligent Portfolios).

Tantleff, who focuses specifically on AI implementation in the financial sector, says it is important to know what data and information the AI is using, so that any bias or errors in the material it produces can be accounted for.

“Are we using training data, validation data? What am I putting in here, and what is the purpose of it?” he asks. “I, as a human, can create a selection bias in terms of what is being put into the AI. … That is always a risk, so there must be controls to it.”

Tantleff notes that, unlike an algorithm created to run a fixed process, AI can go off in many different directions, producing results that can be hard to trace back to their source. The inputs, then, must be well understood, and checks and balances must exist on the results of AI-produced or AI-backed material. He also notes that systems using AI may come from a third-party provider; in those cases, it is important to ask questions that bring the provider into the conversation and require it to detail its process.
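
To make the input question concrete, here is a minimal Python sketch of the kind of selection-bias check Tantleff describes. It is not drawn from the Foley & Lardner post; the record fields, function name and tolerance threshold are all hypothetical, and a real review would use the plan's own data and criteria.

```python
# A minimal, hypothetical sketch: compare how a field (e.g., an age band)
# is distributed in the data fed to an AI tool versus the full participant
# population, and flag groups that are over- or under-represented.
from collections import Counter

def check_representation(full_population, training_sample, field, tolerance=0.10):
    """Flag possible selection bias by comparing group shares
    in the training sample against the full population."""
    def shares(records):
        counts = Counter(r[field] for r in records)
        total = sum(counts.values())
        return {k: v / total for k, v in counts.items()}

    pop_shares = shares(full_population)
    sample_shares = shares(training_sample)
    flags = []
    for group, pop_share in pop_shares.items():
        sample_share = sample_shares.get(group, 0.0)
        if abs(sample_share - pop_share) > tolerance:
            flags.append((group, pop_share, sample_share))
    return flags

# Toy data: the population is 60/40 across age bands, but the
# training sample skews 85/15 -- the check flags both groups.
population = [{"age_band": "under_40"}] * 60 + [{"age_band": "40_plus"}] * 40
sample = [{"age_band": "under_40"}] * 85 + [{"age_band": "40_plus"}] * 15

for group, pop_share, sample_share in check_representation(population, sample, "age_band"):
    print(f"possible selection bias in {group}: "
          f"population {pop_share:.0%}, training data {sample_share:.0%}")
```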

To that end, Abbott and Tantleff’s blog post lists 14 questions a plan committee can ask about the use of AI. Those questions range from the extent to which AI is being used in 3(38) investment decisions to whether a recordkeeper gives a company or participant the option to opt out of an AI-driven offering.

Vast Amounts of Data

The post also includes a section warning of the cybersecurity risks AI can introduce, noting: “Vast amounts of sensitive participant data fuel these systems, making them prime targets for malicious actors seeking to exploit weaknesses in security protocols. A data breach or cyberattack could not only compromise the integrity of the retirement plan but also expose fiduciaries to legal and regulatory repercussions.”

Lisa Crossley, executive director and CEO of the National Society of Compliance Professionals, agrees that AI used for investing and financial services has to be vetted as rigorously as any other priority process.

“What happens if investment recommendations based on AI rely on factors that are incorrect?” she asks. “It has to have its own governance structure, its own compliance, its own risk assessment.”

In an annual cybersecurity benchmarking survey, the NSCP and the ACA Group collected responses from asset managers, investment advisers and private markets firms. Crossley says they were interested to find that 38% of respondents do not yet identify AI as a cybersecurity risk, while a larger share (49%) are considering using AI to combat cybersecurity concerns.

She notes that the organization is delving further into the topic of AI use and concerns among its audience and will discuss initial results in October at NSCP’s national conference. For people and organizations focused on compliance-related issues, Crossley says, considering AI’s uses and the risks that emerge from them will clearly be a burgeoning area of study.

For now, she says, the society representing compliance professionals is advocating for humans to backstop AI-driven processes and procedures, with policies and procedures similar to those used to combat cybersecurity threats themselves.

“You have to have the same governance structures and protocols that you do for cybersecurity,” she says. “You can’t just trust the AI.”
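
As an illustration of that human backstop, here is a minimal Python sketch. The workflow, statuses and reviewer roles are hypothetical, not NSCP policy; the point is simply that no AI output acts on its own before an identifiable person signs off.

```python
# A minimal, hypothetical sketch of a human-in-the-loop gate:
# every AI recommendation starts as pending and records who
# approved or rejected it, and when.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AIRecommendation:
    summary: str
    source_model: str
    status: str = "pending_review"   # no AI output is acted on automatically
    reviewed_by: str | None = None
    reviewed_at: str | None = None

def human_review(rec: AIRecommendation, reviewer: str, approve: bool) -> AIRecommendation:
    """Record an explicit human decision before any AI output is acted on."""
    rec.status = "approved" if approve else "rejected"
    rec.reviewed_by = reviewer
    rec.reviewed_at = datetime.now(timezone.utc).isoformat()
    return rec

rec = AIRecommendation("Rebalance target-date glide path", source_model="vendor-model-v2")
rec = human_review(rec, reviewer="compliance.officer", approve=False)
print(rec.status, rec.reviewed_by)  # rejected compliance.officer
```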

Avoiding Biases

Foley & Lardner’s Abbott notes that, when operating as a plan fiduciary, one must be especially diligent in ensuring that an AI process is not introducing bias, not only to protect the plan and its participants, but also to guard against potential lawsuits.

“I’m concerned about this in the ERISA space,” he says. “We have active plaintiffs’ bar [lawyers] who are looking for weak spots. … They may say, ‘How could you totally rely on [an AI process] for an outcome that you were a fiduciary on?’ We can’t just put a rubber stamp on it.”

As with other processes of plan design and management, understanding the starting point and documenting the steps along the way is the best form of protection, he says.
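
A minimal sketch of that documentation habit appears below. The file name, fields and example entries are hypothetical, not a prescribed format; the idea is an append-only record of what fed the process, what the AI produced and who checked it.

```python
# A minimal, hypothetical sketch: an append-only JSON Lines log of each
# fiduciary step involving an AI tool -- inputs, output, and human check.
import json
from datetime import datetime, timezone

def log_fiduciary_step(log_path, step, inputs, ai_output, human_check):
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "step": step,                # e.g., "quarterly lineup review"
        "inputs": inputs,            # what data fed the process
        "ai_output": ai_output,      # what the tool produced
        "human_check": human_check,  # who verified it, and how
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_fiduciary_step(
    "fiduciary_audit.jsonl",
    step="quarterly lineup review",
    inputs="vendor AI screen of plan fund lineup, Q2 data",
    ai_output="flagged two funds for expense-ratio drift",
    human_check="committee cross-checked the flags independently; documented in minutes",
)
```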

Some of the other questions for committees recommended by Foley & Lardner are:

  • What is the risk of misinformation or a biased output?
  • Who is liable if the AI’s advice leads to poor investment decisions?
  • How does one evaluate the quality and accuracy of the content it produces? If the AI generates investment advice or market analysis for a 401(k) plan, how do fiduciaries ensure the information is reliable and compliant with regulations?
  • Should the committee seek independent professional advice regarding what AI can provide as a resource to satisfy fiduciary obligations under ERISA?

“If I’m on a committee and I’m a plan fiduciary, I need to be asking these professionals that I’m working [with]: ‘How is AI figuring into what you are telling me?’” Abbott says. “I need to know how you came to do what you did and what went into it.”

Correction: This story adjusted a quote for accuracy.
