Legal Alert

Treasury Issues RFI on Use of AI in Financial Services

by Michael R. Guerrero, Peter D. Hardy, Kaley N. Schafer, and Nathaniel B. Botwinick
June 27, 2024

Summary

After several years of monitoring and reporting on artificial intelligence (AI) in financial services, the U.S. Department of the Treasury (Treasury) has embarked on initial rulemaking efforts and issued a request for information (RFI). Treasury is seeking comments from a broad array of financial services stakeholders on their use of AI, as well as the risks and opportunities the technology presents.

The Upshot

  • The RFI highlights the opportunities AI may provide, such as increased access to financial services and products. It also highlights the risks AI presents, including challenges to compliance with fair lending and anti-discrimination laws; the potential for unfair, deceptive, or abusive acts or practices (UDAAP); Bank Secrecy Act (BSA), anti-money laundering (AML), and anti-fraud compliance; data privacy; and risks arising from third-party relationships.
  • Treasury supports responsible innovation and competition in the financial services industry. However, the agency seeks to maintain stability, market integrity, and equitable access to financial services; protect critical financial sector infrastructure; and combat illicit finance and national security threats.
  • The issuance of this RFI further evidences federal regulators’ interest in monitoring the risks of AI in the financial services industry, particularly risks relating to consumer protection and regulatory compliance.

The Bottom Line

As AI continues to evolve, federal regulatory agencies will need to keep pace with how these technologies are used and provide guidance so that financial services providers can remain compliant. Financial institutions and impacted entities are encouraged to submit comments in response to the RFI. Ballard Spahr’s Artificial Intelligence team continues to monitor developments in AI.

Treasury issued an RFI seeking comments from “all parties that may have a perspective as to implications of AI in the financial sector” on the uses, opportunities, and risks of AI in the financial services sector.

The RFI indicates that Treasury continues to monitor the development, application, and impact of AI in the financial services sector. Treasury is the latest in a long list of federal regulators to initiate rulemaking efforts regarding AI. To date, the following federal regulators have initiated some form of pre-rulemaking/rulemaking activity: the Commodity Futures Trading Commission (CFTC); the Securities and Exchange Commission (SEC); the Office of the Comptroller of the Currency (OCC); the Board of Governors of the Federal Reserve System; the Federal Deposit Insurance Corporation (FDIC); the Consumer Financial Protection Bureau (CFPB); and the National Credit Union Administration (NCUA). This is in addition to regulatory guidance issued by these agencies over the past few years and rulemaking efforts at the state level.

This RFI builds upon Treasury’s past work in monitoring and reporting on AI. In 2022, Treasury published a report that examined the use of AI by credit providers and identified data privacy and fair lending risks. Earlier this year, Treasury published a subsequent report describing the agency’s efforts to identify and mitigate cybersecurity, fraud, and other risks. Most recently, Treasury issued its 2024 National Strategy for Combating Terrorist and Other Illicit Financing, which noted that AI innovations have the potential to strengthen AML and countering the financing of terrorism (CFT) compliance.

Treasury is seeking comments from all parties with a perspective on AI, including “financial institutions.” The term is broadly defined to include any company that facilitates or provides financial products or services and is regulated by a federal or state financial regulator. This includes banks, credit unions, insurance companies, non-bank financial companies, financial technology companies, asset managers, broker-dealers, investment advisors, other securities and derivatives markets participants or intermediaries, money transmitters, as well as any “impacted entity.”

Treasury is interested in understanding the use of AI by financial institutions of different sizes and complexity. Additionally, Treasury is interested in the unique challenges smaller financial institutions may face in accessing and using AI.

The RFI adopts the following definition of “AI”:

A machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action. See 15 U.S.C. 9401(3).

Treasury further interprets this definition to cover a wide range of models and tools that use data, patterns, and other informational inputs to generate outputs, including statistical relationships, forecasts, content, and recommendations for a given set of objectives. AI includes, but is not limited to, advancements in existing AI and emerging technologies such as deep learning neural networks (including generative AI and large language models).

Treasury is interested in the use of AI in connection with:

  • Product and Service Offerings: the use of AI to assist in decisions related to offering products and services, as well as financial forecasting products and pattern recognition tools;
  • Risk Management: the use of AI to manage various types of risk, including credit risk, market risk, operational risk, cyber risk, fraud and illicit finance risk, compliance risk, reputational risk, interest rate risk, liquidity risk, model risk, counterparty risk, and legal risk, as well as the use of AI for treasury management or asset-liability management;
  • Capital Markets: the use of AI to identify investment opportunities, allocate capital, execute trades, and provide financial advisory services;
  • Internal Operations: the use of AI to manage internal operations (such as payroll, HR functions, training, performance management, communications, cybersecurity, and software development);
  • Customer Management: the use of AI for complaint handling, investor relations, website management, claims management, or other external-facing functions;
  • Regulatory Compliance: the use of AI to manage regulatory requirements, including capital and liquidity management, regulatory reporting and disclosure requirements, BSA requirements, consumer and investor protection requirements, and license management; and
  • Marketing: the use of AI for marketing purposes.

Treasury identified the following risk categories associated with the use of AI:

  • Explainability and Bias: Financial institutions may not understand where the data used to train AI models and tools was acquired or how an algorithm was developed, which creates explainability concerns. That lack of transparency can obscure model bias that negatively affects impacted entities, and AI models can perpetuate discrimination by relying on data that reflects historical biases. Financial regulators have provided guidance on risk management principles in connection with model development, validation, monitoring, outcome analysis, and governance and controls.
  • Consumer Protection and Data Privacy: AI may complicate financial institutions’ efforts to ensure compliance with fair lending and anti-discrimination laws, as well as laws prohibiting UDAAPs. AI may create or exacerbate issues related to data accuracy that may lead to violations of law. Compliance with existing data privacy laws may be more difficult as AI models develop and more readily and accurately identify owners of previously anonymized data. The RFI notes concerns over the use of alternative data in credit decisions and the use of AI predictive analytics.
  • Third-Party Risks: Many financial institutions rely on third-party providers, which may themselves use AI. As AI complexity increases, third-party risk management becomes more complicated. The RFI references recent guidance from the federal banking agencies that provides a principles-based approach to managing third-party risks.

The RFI highlights that financial institutions currently manage AI-related risks through existing risk management frameworks; however, these frameworks may be inadequate to address emerging AI technologies. The RFI also highlights fraud and illicit finance risks related to AI, and asks how financial institutions are addressing these risks, what challenges organizations face in adopting AI to counter illicit finance, and how financial institutions use AI to comply with applicable AML/CFT requirements.

As AI continues to evolve, federal regulatory agencies will need to keep pace with how these technologies are used and provide guidance so that financial services providers can remain compliant. Financial institutions and impacted entities are encouraged to submit comments on the RFI to help inform this process. Ballard Spahr’s Artificial Intelligence team continues to monitor developments in AI.


Copyright © 2024 by Ballard Spahr LLP.
www.ballardspahr.com
(No claim to original U.S. government material.)

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, including electronic, mechanical, photocopying, recording, or otherwise, without prior written permission of the author and publisher.

This alert is a periodic publication of Ballard Spahr LLP and is intended to notify recipients of new developments in the law. It should not be construed as legal advice or legal opinion on any specific facts or circumstances. The contents are intended for general informational purposes only, and you are urged to consult your own attorney concerning your situation and specific legal questions you have.