Artificial Intelligence and its Unintended Learning Patterns

by C/A Staff

As the technology matures, many financial institutions have been adopting artificial intelligence and machine learning algorithms to determine an applicant’s creditworthiness, which allows for a faster, more streamlined process for handling credit applications. It also helps reduce human error and enables a leaner operation. However, it is important to understand the biases an algorithm can develop: algorithmic bias can lead to violations of the Equal Credit Opportunity Act (“ECOA”) and the Fair Housing Act (“FHA”).

As you may know, both ECOA and FHA recognize discrimination under either of two theories: (1) disparate treatment and (2) disparate impact. To avoid disparate impact claims, it is crucial to monitor the input data to ensure that the artificial intelligence (“AI”) algorithm is not using data that will result in a disparate impact. For instance, an algorithm might determine that people who graduated from certain prestigious schools are less likely to default on a loan. That may sound like a useful metric, but if those schools’ admissions practices were biased on one of the prohibited bases, the algorithm can adversely affect the corresponding protected class as well.
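
By way of illustration, the sketch below shows one simple screening test a compliance team might run on model outcomes: comparing approval rates across groups and flagging any group whose rate falls below roughly four-fifths of the most-favored group’s rate. The data, column names, and 0.80 threshold are illustrative assumptions only, not a legal standard.

```python
# Illustrative sketch: compare model approval rates across demographic groups,
# using the "four-fifths" rule of thumb as a rough screening threshold.
# The data, column names, and 0.80 cutoff are assumptions for demonstration
# only -- they are not legal advice or a regulatory standard.
import pandas as pd

# Hypothetical scored applications: one row per applicant.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

rates = decisions.groupby("group")["approved"].mean()
reference_rate = rates.max()  # most-favored group's approval rate

for group, rate in rates.items():
    ratio = rate / reference_rate
    flag = "REVIEW" if ratio < 0.80 else "ok"
    print(f"group={group} approval_rate={rate:.2f} impact_ratio={ratio:.2f} [{flag}]")
```

A screen like this does not establish or refute disparate impact on its own, but it can tell the compliance team where to look more closely.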

It may also be a good idea to document the input attributes to show “business necessity” and why each was preferred over an alternative policy or practice, to mitigate the risks associated with disparate impact. Documenting and monitoring the input attributes is important because an AI algorithm can pick up on irrelevant patterns with unintended consequences. In the early days of AI training, researchers created a detection system designed to help the military distinguish pictures of camouflaged tanks from pictures of trees alone. After training on 50 photos from each set, the algorithm classified the remaining 100 pictures accurately. When the Pentagon tested the algorithm in the field, however, the results were disastrous. It turned out that the algorithm had developed a “brightness bias”: the photos of the camouflaged tanks were taken on cloudy days, whereas the pictures of the trees were taken in clear, sunny weather.[1]
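
A “brightness bias” of this kind can often be surfaced before deployment by auditing which input attributes actually drive the model’s predictions. The sketch below uses permutation importance on synthetic data in which an incidental attribute happens to track the outcome; the feature names and data are invented for illustration, and the sketch assumes a scikit-learn model.

```python
# Illustrative sketch: audit which attributes a trained model actually relies on,
# using permutation importance. Feature names and synthetic data are assumptions
# for demonstration; in practice this would be run against the real scoring model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Hypothetical attributes: two legitimate credit signals and one incidental
# artifact ("photo_brightness") that happens to track the label in the training data.
income = rng.normal(50, 15, n)
debt_ratio = rng.uniform(0, 1, n)
default = (debt_ratio + rng.normal(0, 0.2, n) > 0.8).astype(int)
photo_brightness = default + rng.normal(0, 0.1, n)  # spurious, label-leaking signal

X = np.column_stack([income, debt_ratio, photo_brightness])
feature_names = ["income", "debt_ratio", "photo_brightness"]

X_train, X_test, y_train, y_test = train_test_split(X, default, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# If an "irrelevant" artifact dominates the ranking, that is a red flag to investigate.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```

If an attribute with no plausible business justification ranks at the top, that is a signal to investigate before the model reaches production.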

Similarly, an algorithm might find that those who shop at a specific franchise present a higher credit risk. If that franchise’s locations are disproportionately concentrated in minority communities, the model could adversely affect minority applicants, leading to disparate impact claims. Oftentimes, it is difficult to keep track of how the algorithm is adapting and developing. To avoid unnecessary disparate impact claims, it is a good idea to build tooling that monitors what data the algorithm is using to learn and to influence its decisions. By monitoring the types of data the algorithm applies in its decision-making, you will find it much easier to adjust its learning methods and remain compliant with the fair lending laws.
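
One way such monitoring might look in practice is a “proxy screen”: testing how well a candidate attribute predicts a protected characteristic. The column names, synthetic data, and informal AUC reading below are assumptions for demonstration, not a compliance standard.

```python
# Illustrative sketch: screen a candidate input attribute for "proxy" behavior by
# testing how well it predicts a protected characteristic. Column names and the
# synthetic data are assumptions; the AUC reading is a discussion point, not a rule.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000

# Hypothetical data: membership in a protected class, and an attribute
# ("shops_at_franchise_x") whose distribution differs sharply by group.
protected_class = rng.integers(0, 2, n)
shops_at_franchise_x = (rng.uniform(size=n) < np.where(protected_class == 1, 0.7, 0.2)).astype(int)

X = shops_at_franchise_x.reshape(-1, 1)
X_train, X_test, y_train, y_test = train_test_split(X, protected_class, random_state=1)

probe = LogisticRegression().fit(X_train, y_train)
auc = roc_auc_score(y_test, probe.predict_proba(X_test)[:, 1])

# An AUC near 0.5 means the attribute carries little information about the protected
# characteristic; well above 0.5 suggests it may function as a proxy and warrants review.
print(f"proxy-screen AUC: {auc:.2f}")
```

An attribute that screens as a likely proxy is not automatically off-limits, but it should trigger the same business-necessity documentation discussed above.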



[1] Yudkowsky, Eliezer. “Artificial Intelligence as a Positive and Negative Factor in Global Risk,” p. 15.