An Interview With Tyler Haupert, Author of "The Racial Landscape of Fintech Mortgage Lending"
Oct 17, 2023
Article: The Racial Landscape of Fintech Mortgage Lending
Author: Tyler Haupert
DOI: https://doi.org/10.1080/10511482.2020.1825010
Published Online: 09 Nov 2020
Below is a transcript of our conversation with Tyler Haupert (TH) regarding his recent paper, “The Racial Landscape of Fintech Mortgage Lending,” moderated by Claudia Aiken (CA).
CA:
So, Tyler, how did you become interested in researching the relationship between fintech and equitable homeownership? What did the earliest phases of your research look like?
TH:
I've always been—just due to my professional background, which was in permanent, affordable housing development—I've always been interested in housing. My master’s and PhD education was very focused on issues of access to housing, be that homelessness or affordable housing or access to mortgage products. The housing world has always been my bread and butter.
During my PhD, I started getting exposed to folks who were looking into the rental housing world and the impact of technology on rental housing. These are, you know, people looking at “prop tech” or platform technologies distributing rental opportunities or landlord surveillance. This whole world was really bubbling up and creating a lot of research.
It got me thinking, like, okay, seems like people are already taking that baton and running with it in the rental housing world. And I didn't see any research in homeownership… When you're a homeowner, you don't have a landlord, right? There's not this whole world of, “how are landlords using technology?” But you do have to typically get a loan to own a home. I realized that the fintech lenders were, from my perspective, filling that nexus between homeownership and tech. And I didn't see much other existing research on the topic.
As a PhD student, you have to do all your own work. I spent many, many months looking at the websites of mortgage lenders to decide, based on some criteria that I had found in the precedent literature, if they were fintech or not. I remember having a call with a researcher who had categorized the lenders already in this way. And he was like, “yeah, my [research assistants] really had a tough time,” and I was like, “your RAs? I'm doing all this stuff on my own in the library!” So yeah, it was definitely a labor of love early on during the PhD.
CA:
Proponents of fintech might argue in favor of it on the grounds that it eliminates racial discrimination by using a (supposedly) objective algorithm. But your research really challenges that, of course. Can you talk briefly about how an “objective” method can lead to such a biased outcome?
TH:
I would say a couple of things. The objectivity attributed to the fintech lenders, I think, comes from two main sources. One is the assumption that an older or more traditional style of discrimination in homeownership came from a loan officer harboring a racial attitude, right, be it explicit racial animus towards Black and Brown communities or something more implicit. They harbored a stereotype. Fintech would take that person-to-person opportunity to act upon those racial attitudes away, out of the process. And so that's one way that fintech was assumed to potentially bring more objectivity in the process.
And the other way was that, you know, fintech brings this advanced mathematical analysis to the process, and so a lot of people thought, “Okay, these older banks have their way of underwriting loans. But the fintech lenders are using these big data sources, and they're using machine learning. And whatever the old lenders used in their underwriting, whatever the traditional metrics that they used to predict risk, fintech lenders might not be prone to patterns of bias because they're using advanced mathematics.” And this is something that occurs across many emerging technologies, not just in homeownership, but in tons, tons of sectors, right? There's this assumption that if you're using advanced mathematical tools, especially machine learning tools or artificial intelligence tools, you're going to be removing this human element, and you're going to be more objective because of the math.
In terms of the second part of your question—“how could this actually become a more biased system than the old system, if it's supposedly more objective?”—I think it's certainly true that that racial animus element of the process is removed. But we're still seeing results that display racial disparities at the same or greater rates than traditional lenders… Because it's all about the data that's fed into these systems. The objectivity we attribute to these sorts of advanced technologies is a bit of a false attribution, because they're only as good as the data they're analyzing, and that data comes from human beings, right? It's going to reflect the existing cleavages in our society.
Whether they're analyzing traditional things like credit score, income, and assets, or whether they're incorporating new ‘big data’ metrics like credit card spending or social media data, all of these things reflect the society they're generated from, right? And so the math, the machine learning algorithm itself, is not discriminating, but it's reflecting existing patterns of disparity in our society, and thereby recreating the same patterns that we've come to know from traditional lenders.
CA:
That's—yeah, that makes perfect sense.
TH:
One other relevant thing is that these fintech lenders operate in a much less stringent regulatory environment than traditional lenders. Regulators don't know what's under the hood of these underwriters with fintech lending. They literally are not required to subject their data sources or their underwriting processes to regulatory scrutiny.
CA:
This is a perfect segue to my next question: in your conclusion, you actually talk about the relationship between subprime lending and the 2008 financial crisis. And then you go on to talk about the challenges with regulatory oversight and lack of transparency that make holding these fintech lenders accountable really challenging. Can you talk a little bit more about that, and also how you think public officials could take action?
TH:
I should say this: I think that the idea that the machine learning algorithms these fintech lenders are using are intentionally trying to discriminate or have any aspect that tries even to determine the racial identity of the applicant is not true, or I have no evidence to believe that. [Yet] we are seeing [discriminatory] patterns. And the regulatory environment is a place that I point to in my paper. I think that this sort of environment came about because we had the housing foreclosure crisis in 2008, and new regulatory entities were created to tighten the screws. We had a lax regulatory environment before 2008, with too many subprime loans, too many loans without adequate documentation, etc., etc., so the CFPB (Consumer Financial Protection Bureau) was created.
The Consumer Financial Protection Bureau, for the first few years, actually tightened regulatory standards for all lenders. But as the economy began to recover, and as the government realized we needed the housing market to heat up again to help us get out of this foreclosure crisis, the mission of the CFPB changed. It pivoted from its initial [mission], as created in the Dodd-Frank Act, to tighten the regulatory environment for home lending, to being responsible for [encouraging] emerging technologies in lending. The viewpoint shifted from one of increased regulatory scrutiny to one of promoting efficiency and innovation. And that's still the environment that we're in today. If you go to the CFPB’s website, if you read their documents, their mission is all about helping to give financial entities more freedom to be innovative.
That has led to an environment where the data being used by fintech lenders, and the processes they're using to analyze those data, are subject only to optional scrutiny. Lenders don't have to hand them over, and essentially none of them have. They're not regulated in the same way that a bank, or even a mortgage lending institution that doesn't take deposits, is regulated.
CA:
Now to pivot a little bit, if you were speaking to your colleagues in housing research, what would you say are avenues for further research to shed light on how to address this?
TH:
I think that there are a few avenues that would really help us collectively understand how these fintech lenders are operating. One, there’s simply a need for more qualitative research. There's a long history of quantitative research on mortgage lending discrimination that goes back a few decades. That sort of research is only as good as the data available to it, right? But we don't have data on exactly how these lenders are underwriting loans. We don't have data on which metrics they're using to underwrite loans, or what ‘big data’ their machine learning algorithms are fed.
Quantitative researchers like myself are essentially pushing the limits right now of what we can discover given current access to data. More qualitative research needs to be done actually talking to fintech lenders. These aren't people who are intentionally going out and discriminating. Their tools are reproducing patterns, and I think that the more researchers can understand—via conversations, interviews, and surveys—the ways that the lenders are thinking about the technology, the better.
The second [avenue for future research depends on] access to credit scores. I only know of one study, which is an excellent study, in which Bartlett and colleagues from the business and law schools at UC Berkeley were actually able to get their hands on credit scores and analyze the importance of credit scoring to fintech lending, that is, whether or not the data used by fintech lenders reflects the credit histories of their applicants. The more that people who have access to credit scores can investigate fintech lending, and the more that people can have access to credit score data, the better. It's very expensive to get that data; there's a huge barrier there. But that would make a big difference.
I would say, thirdly, public policy and legal scholars can continue to investigate and scrutinize the regulatory environment surrounding fintech lenders. There's a lot of really great legal scholarship about regulating technology, and that can continue to be strengthened and bolstered, I think, to help us better understand the environment that these lenders are subject to.
CA:
You challenge the ethics and efficacy of fintech at a time when issues of technology and automation are very much in the public spotlight. Proponents of ChatGPT advocate for the use of AI in writing and the arts on similar grounds, maybe, to save costs, increase efficiency, and so on. So, though the two technologies obviously differ, do you think AI might enter the housing landscape anytime soon? And what do you think that might look like in terms of equitable homeownership, for example?
TH:
Yeah, it's an interesting question, and it's one that might be, you know, a bit above my pay grade. But I’ll say a couple of things. Machine learning, which is the technology used by fintech lenders, is considered a type of AI. But it's a predictive AI, right? A machine learning algorithm can sift through millions and millions of data points really quickly and make a prediction based on those data points (in the case of fintech lending, a prediction of the level of default risk on a loan that this applicant brings to the table). But the type of AI that's really making the news these days, things like ChatGPT, is generative AI: AI that can look at all those millions of data points and not just come up with a prediction, but actually create something.
If we think about generative AI entering the lending landscape (gosh, it's so speculative!), I would actually think about it more on the side of the borrowers than the lenders. You could imagine a world where there's some sort of ChatGPT-style program that a consumer could use and say, “I want this house, and I have this much money. And here are my credentials. How can I tailor my application to get a loan from this or that lender?” In an ideal world, consumers could actually use this technology to tailor their profiles to more successfully access housing… That's probably the best-case scenario.
I would say that we have a history of technology starting as this grassroots, open source, idealistic thing. Then eventually, corporations co-opt it. I'm not so optimistic that consumers will benefit from this new style of AI.
CA:
A last question. You give different policy recommendations around fintech regarding increased data transparency, stricter anti-discrimination regulations, and so forth. Is there anything the average person could do to further these goals and protect themselves or protect others?
TH:
It’s tough, because so much of this is regulated at the federal level. And so, to the extent that the average person is interested, they can vote in a way they think matches their viewpoint on the regulation of technology. That's one avenue.
I would also say, I don't mean to demonize an organization like the CFPB in this process; they have public hearings where not-for-profits will submit statements. Individuals who are interested can also attend those public hearings. And I do think that the regulators want to get this right, and there is room for the public, more than I expected going into this, to make comments.
I wish there were a way I could say the public could sort of protect themselves from being subject to these disparities in lending outcomes, but I think that to live in our modern world, you're going to leave a digital footprint behind every time you make a credit card transaction, every time you visit a website and click “I accept these cookies,” whatever it is. You're opting into this world where every single one of those data points is subject to being analyzed by something like a fintech lender, and it's tough to envision a world where people protect themselves from that.
The one thing that comes to mind is that there have been communities who have pushed back on technology companies using their data. [One example is] Google's Sidewalk Labs project in Toronto, which communities pushed back and protested against, and ended up stalling and eventually stopping that project from having control over all sorts of data collected in public spaces. When technology really gets in people's faces and they see that risk, there can be grassroots campaigns.
CA:
Thank you so much, Tyler. Is there anything else that you would want to say?
TH:
When you asked the question about ChatGPT, it got me thinking… What fintech can teach us that we might be able to apply to ChatGPT is that it's dangerous to put blind faith in, to attribute objectivity to, technology just because it's using advanced mathematical tools.
I think we can learn as a public that when a new technology comes out, if it seems too good to be true, that just might be right. It's essentially going to reflect the aspects of our society that we already find unethical or inequitable, because all of it is fed data that reflects the world we live in.
Hopefully we, as academics and regulators and citizens, can take a normative stance on technology and try to change it proactively. But in the meantime, let’s not assume that any new tech is going to be objective and fair just because it doesn’t directly involve human beings with implicit bias.