For decades, banks and those of us involved with regulatory compliance have objected to the subjectivity that makes managing compliance difficult.
The Community Reinvestment Act (CRA) is a good example. Although the CRA moved from the 12-assessment-factor evaluation to a more data-driven approach in 1996, a component of subjectivity remains. The regulation still calls for a bank’s lending penetration throughout its assessment area to be “reasonable”.
A similar situation exists with redlining analysis, where qualitative factors often come into play and introduce a degree of subjectivity. From UDAAP to BSA, measurement sometimes depends on who is measuring, so a review is subject to differing interpretations.
In the early 1990s, the FDIC published Side by Side: A Guide to Fair Lending. At the time, this publication was the most comprehensive resource available for evaluating fair lending, providing step-by-step guidance on conducting manual file reviews to test for disparate treatment of applicants.
By the late 1990s, however, the agencies began using statistical techniques more frequently to evaluate fair lending. File reviews remain part of regulatory fair lending examinations, but the statistical approach is increasingly common today, especially at larger institutions with significant loan volume.
Many institutions today fear a fair lending review that uses statistical methods, or a finding of discrimination based on such an approach. Most do not understand the methodology and thus do not know how to refute an alleged finding of discrimination.
What is all too often missed is the opportunity that a statistical approach to evaluating fair lending compliance provides. The agencies’ use of statistics actually benefits institutions that understand how to use and capitalize on it.
A statistical approach greatly reduces subjectivity in an evaluation. Interpretation, for example, becomes straightforward: either the data show a statistically significant disparity or evidence of discriminatory preferences, or they do not.
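As a concrete illustration of how such a determination can be made, the sketch below applies a standard two-proportion z-test to compare denial rates between two applicant groups. The applicant counts, function name, and threshold interpretation are all hypothetical; an actual regulatory analysis would typically use regression models that also control for legitimate underwriting factors.

```python
import math

def denial_rate_disparity(denied_a, total_a, denied_b, total_b):
    """Two-proportion z-test comparing denial rates of two applicant groups.

    Returns (rate_a, rate_b, z, p_value), where p_value is the two-sided
    p-value under the normal approximation.
    """
    rate_a = denied_a / total_a
    rate_b = denied_b / total_b
    # Pooled denial rate under the null hypothesis of no disparity.
    pooled = (denied_a + denied_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (rate_a - rate_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return rate_a, rate_b, z, p_value

# Hypothetical counts: 120 of 400 protected-class applicants denied
# versus 150 of 800 control-group applicants denied.
rate_a, rate_b, z, p = denial_rate_disparity(120, 400, 150, 800)
print(f"denial rates: {rate_a:.1%} vs {rate_b:.1%}, z = {z:.2f}, p = {p:.4g}")
```

A small p-value (conventionally below 0.05) indicates a statistically significant difference in denial rates, which would warrant further review rather than being proof of discrimination by itself.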
Lenders can also evaluate their own practices by the same metrics, thereby identifying areas that need further review or policy changes. By doing so, an institution gains an accurate sense of what its practices may suggest and can address any areas of concern proactively.
By analyzing their data, lenders can determine whether policy guidelines are being followed, and that analysis can be useful in a regulatory review to help expedite the exam process. This benefits both the institution and the agency conducting the review.
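As a minimal sketch of that kind of self-check, the snippet below flags approved loans that fall outside assumed policy guidelines (a minimum credit score and a maximum debt-to-income ratio). The records, field names, and thresholds are all hypothetical, not drawn from any real loan system or regulation.

```python
# Hypothetical loan records; field names are illustrative only.
loans = [
    {"id": 1, "approved": True,  "credit_score": 710, "dti": 0.34},
    {"id": 2, "approved": True,  "credit_score": 605, "dti": 0.48},
    {"id": 3, "approved": False, "credit_score": 580, "dti": 0.52},
    {"id": 4, "approved": True,  "credit_score": 655, "dti": 0.41},
]

# Assumed policy guidelines: minimum score 620, maximum 45% debt-to-income.
MIN_SCORE, MAX_DTI = 620, 0.45

# Approved loans outside the guidelines are policy exceptions to review.
exceptions = [
    loan["id"]
    for loan in loans
    if loan["approved"]
    and (loan["credit_score"] < MIN_SCORE or loan["dti"] > MAX_DTI)
]
print("policy exceptions:", exceptions)
```

Running checks like this across the full portfolio, and comparing exception rates across protected and non-protected classes, is one way an institution can document policy adherence before an examiner asks.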
Applying a statistical approach to electronic data also allows an institution to analyze all, or at least the vast majority, of its lending activity. Achieving that coverage through file reviews alone would be difficult.
Now fast forward to 2018, and all lenders that report HMDA data potentially face the prospect of a statistical fair lending review. The data necessary to conduct such a review will be reported along with the government monitoring information (GMI) that identifies protected and non-protected classes.
This data will be available to the agencies as well as to the public, including advocacy groups. As we pointed out in a previous article, this has the potential to significantly alter the fair lending landscape, and many institutions may find themselves unprepared.
Despite the risk that the new 2018 HMDA requirements bring, the statistical approach to fair lending gives lenders the opportunity to lower their risk and improve profitability by reducing compliance costs. But this opportunity must be recognized and embraced.
Premier Insights, Inc. conducts dozens of analyses each week for institutions. Our work includes routine and periodic analysis as well as assisting lenders who are facing regulatory and enforcement actions.
If you would like to discuss this issue further or would like more information, please contact us. We would be happy to speak with you or assist you in any way possible.