In New Lending Algorithms, Both Promise and Risk
January 21, 2015

New online lenders argue that today's credit scores will soon be obsolete. These innovators exaggerate the pace of change: traditional credit scores will remain important for years to come. But in the long term, they might be right to project that people's credit opportunities will be shaped by a wider range of factors.
Today, credit scores rely on a person's credit history. By contrast, lending start-ups assemble data from "diverse sources, including household buying habits, bill-paying records and social network connections." They use this mosaic to try to predict whether a person is likely to repay his or her debts. For example, according to a recent New York Times report, big data lenders have discovered:
- Those who use proper capitalization on online forms tend to be more reliable in repaying their loans.
- Those who are employed as firemen, police officers, and teachers appear to be among the most reliable payers, even if their salaries are lower.
- A single bankruptcy, which can be disastrous to traditional credit scores, may not strongly predict that an individual will default on future loans.
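Signals like those above typically feed a statistical scoring model. The sketch below is a minimal, hypothetical illustration of that idea: the feature names and weights are invented for this example (no real lender's model is disclosed in the article), and a production system would learn its weights from repayment data rather than hard-code them.

```python
import math

# Hypothetical feature weights for a toy repayment model -- illustrative
# only, not drawn from any actual lender's underwriting system.
WEIGHTS = {
    "proper_capitalization": 0.8,   # careful form-filling correlates with repayment
    "stable_occupation": 1.2,       # e.g. firefighter, police officer, teacher
    "past_bankruptcy": -0.3,        # only a weak negative signal, per the report
}
BIAS = -0.5

def repayment_score(applicant: dict) -> float:
    """Logistic score in (0, 1): estimated likelihood of repayment."""
    z = BIAS + sum(WEIGHTS[f] * applicant.get(f, 0) for f in WEIGHTS)
    return 1 / (1 + math.exp(-z))

careful_teacher = {"proper_capitalization": 1, "stable_occupation": 1, "past_bankruptcy": 0}
sloppy_filer = {"proper_capitalization": 0, "stable_occupation": 0, "past_bankruptcy": 1}

print(round(repayment_score(careful_teacher), 2))  # prints 0.82
print(round(repayment_score(sloppy_filer), 2))     # prints 0.31
```

Note that a single bankruptcy barely moves this toy score, mirroring the report's finding that it may not strongly predict future default.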
These kinds of correlations might be consistently predictive for broad populations over time, or they might not: their usefulness is mostly unproven. Traditional credit history scores, like those provided by Fair Isaac (FICO), are well studied by companies and regulators. But new scoring methods, which may rely heavily on Facebook or LinkedIn data, have yet to show that they can provide the same predictive punch. They are most commonly used in subprime markets, such as payday lending. (For more analysis, take a look at our recent report, Knowing the Score.)
However, as the volume of data about us and the sophistication of computing grows, these new models could take off. “The potential is there to save millions of people billions of dollars,” argues Rajeev V. Date, a venture investor and former banker.
Fair lending laws, which prohibit discrimination against a loan applicant on the basis of race, sex, and other factors, were designed for the world of FICO, and it’s not yet clear how they will apply to new scores. When a computer is set loose to learn from a large, diverse pile of data, it may discriminate against a protected class, even if its designers didn’t mean for it to do so. (To better understand why, see this piece on how data mining can have a disparate impact.) Even “enthusiasts” of big data underwriting acknowledge this antidiscrimination pitfall, writes Steve Lohr of the New York Times.
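The mechanism behind that pitfall can be shown with a toy example. In the sketch below, the data, the "zip_code_risk" feature, and the group compositions are all invented for illustration: the model never sees group membership, yet approval rates diverge because the feature it does see is correlated with group membership.

```python
# Hypothetical illustration of disparate impact via a proxy feature.
# The model scores only on "zip_code_risk"; the protected attribute
# ("group") is never an input. All numbers here are invented.

def approve(applicant: dict, threshold: float = 0.5) -> bool:
    score = 0.9 if applicant["zip_code_risk"] == "low" else 0.3
    return score >= threshold

# Group membership correlates with the proxy in this made-up population.
group_a = ([{"group": "A", "zip_code_risk": "low"}] * 8
           + [{"group": "A", "zip_code_risk": "high"}] * 2)
group_b = ([{"group": "B", "zip_code_risk": "low"}] * 3
           + [{"group": "B", "zip_code_risk": "high"}] * 7)

rate_a = sum(approve(x) for x in group_a) / len(group_a)
rate_b = sum(approve(x) for x in group_b) / len(group_b)
print(rate_a, rate_b)  # prints 0.8 0.3
```

Even though no protected attribute enters the model, Group B's approval rate is far lower, which is exactly the kind of outcome fair lending regulators would scrutinize.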