As the financial industry rushes to embrace artificial intelligence and other fast-evolving technologies, U.S. federal financial regulators are adopting standards for automated valuation models that are designed to guard against conflicts of interest and discrimination in these tools.

Six federal agencies, including the U.S. Federal Reserve Board, the U.S. Treasury and the U.S. Consumer Financial Protection Bureau, are setting standards for the models used by lenders and secondary market issuers in valuing the real estate that secures residential mortgages.

“[Automated valuation models] that rely on artificial intelligence, machine learning and other technologies are developing rapidly,” the regulators wrote in the notice detailing the standards. Given this ongoing evolution, regulators are taking a principles-based approach to the standards, rather than setting prescriptive rules.

Financial firms that engage in transactions involving mortgages secured by residential real estate will be required to adopt policies, procedures and controls designed to avoid conflicts of interest, protect against manipulation, comply with anti-discrimination laws, and ensure a high level of confidence in the results produced by these models.

The standards are intended to inspire confidence in the credibility and integrity of the valuations these models produce.

“As with models more generally, there are increasing concerns about the potential for [automated models] to produce property estimates that reflect discriminatory bias, such as by replicating systemic inaccuracies and historical patterns of discrimination,” the regulators’ filing said.

While automated models have the potential to reduce bias, given the reduced role for human discretion, biases could also be embedded in the models themselves, and, because these models can process a large volume of valuations, any embedded bias could amplify the harm.

“Models could discriminate because of the data used or other aspects of a model’s development, design, implementation or use,” the filing noted, adding: “Attention to data is particularly important to ensure that [automated models] do not rely on data that incorporate potential bias and create discrimination risks.”

The final rule takes effect at the start of the first calendar quarter that begins 12 months after its publication in the Federal Register.