UDAP Regulation and Fintech Lending
A new breed of lender, often called fintech lenders, has made significant headway into the consumer credit market in recent years. Fintech lenders first emerged in 2006 and held just five percent of the U.S. market for consumer loans as recently as 2013. But, according to TransUnion, by 2018 fintech lenders “issued 38 percent of all U.S. personal loans.”
Fintech lenders claim to use Big Data and machine-learning techniques both to improve credit underwriting and to market their products to prospective borrowers. For example, Lenddo claims to increase loan approval rates by fifteen percent and decrease default rates by twelve percent, all while making credit decisions in less than three minutes. Although each lender uses its own proprietary blend of data, fintech lenders are widely thought to combine traditional data, such as the inputs used for FICO scores, with “alternative” data, such as social media data and rent and utility repayment history. Fintech lenders also use non-traditional data analytics, such as machine-learning algorithms, which can sift through vast quantities of data to decide which data points are relevant to credit underwriting decisions and how much weight to assign to each one. In other words, fintech lenders use data with unclear correlations to creditworthiness and rely on algorithms (rather than humans) to decide how to evaluate that data.
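For readers unfamiliar with how such models assign weights, the following toy sketch illustrates the basic idea in simplified form. The feature names, weights, and applicant values are entirely hypothetical and do not reflect any actual lender's model; real systems learn their weights from large datasets rather than having them hand-specified.

```python
import math

def toy_credit_score(features, weights, bias=0.0):
    """Weighted sum of data points passed through a logistic function,
    mimicking how an algorithm assigns weight to each piece of data and
    produces a probability-like score in (0, 1)."""
    z = bias + sum(weights[k] * features.get(k, 0.0) for k in weights)
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical blend of traditional and "alternative" data.
weights = {
    "fico_normalized": 2.0,         # traditional input (scaled 0-1)
    "on_time_rent_rate": 1.0,       # alternative: rent repayment history
    "utility_delinquencies": -1.5,  # alternative: utility repayment history
}

applicant = {
    "fico_normalized": 0.7,
    "on_time_rent_rate": 0.9,
    "utility_delinquencies": 1.0,
}

score = toy_credit_score(applicant, weights)
print(round(score, 3))  # prints 0.69
```

A lender would then compare the score to an approval threshold; the regulatory questions discussed below arise when the chosen features or their learned weights are poorly correlated with actual creditworthiness.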
Fintech lending holds great promise relative to traditional lending models. Some research has found that fintech lenders can increase credit access by offering credit in areas where traditional lenders are less likely to operate and by lowering the cost of credit through improved underwriting. But regulators and scholars have repeatedly expressed concern that fintech lenders may discriminate against prospective borrowers. And regulators have begun to take action. For example, the New York State Legislature recently approved legislation to prevent lenders from using social media data to make creditworthiness determinations due to fears that, among other things, this data will disadvantage individuals in low-income communities.
In a series of articles on this topic, I have considered the effectiveness of existing fair lending laws in promoting the promise and reducing the threats posed by fintech lenders’ use of Big Data and machine learning techniques. In Preventing Predation & Encouraging Innovation in Fintech Lending, published by the Consumer Finance Law Quarterly Report, I analyze the ability of various regulators to prevent unlawful discrimination while increasing credit access and affordability through their authority to prevent unfair and deceptive acts and practices. I argue that this authority is sufficiently flexible to prevent predatory business practices while allowing fintech lenders the opportunity and time to iterate and improve their products.
An unfair act or practice is one that [1.] “causes or is likely to cause substantial injury to consumers [2.] which is not reasonably avoidable by consumers themselves and [3.] not outweighed by countervailing benefits to consumers or to competition.” Regulators could apply this test in a way that encourages innovation in fintech lending while preventing predation. For example, “diminished access” to credit can constitute a substantial injury. But how should a regulator measure whether a prospective borrower has suffered reduced credit access where, for instance, a fintech lender has used data unrelated to creditworthiness in its lending decision, or where that lender’s algorithm gave too much weight to negative information about a prospective borrower? Is the appropriate comparison to the lending criteria used by traditional lenders? Or to the hypothetical lending standards of fintech lenders using unbiased data and better-constructed credit-scoring algorithms? If the latter, regulators might successfully use their unfairness authority to push fintech lenders to vet their data more rigorously or to create better credit-scoring algorithms.
Regulators might also use their unfairness authority to push fintech lenders to design more transparent algorithms, because greater transparency in algorithmic decision-making allows consumers to take affirmative steps to improve their algorithmic credit scores. While some doubt that greater transparency is technically possible or normatively desirable, some fintech lenders claim to have succeeded in making their algorithms sufficiently transparent to aid consumers (and satisfy regulators).
Thus, I argue in Preventing Predation & Encouraging Innovation in Fintech Lending that “Regulators’ unfairness authority is an important regulatory stick for preventing unlawful discrimination” and promoting the promise of fintech lending. Please read the full article for a more nuanced analysis of these issues.
- Matthew Bruckner, School of Law, Howard University.