Redlined by Algorithm

A new rule proposed by Trump’s Department of Housing and Urban Development could allow landlords and real-estate brokers to get away with discrimination by blaming it on computer modeling.

Artificial intelligence is slowly creeping into the gears of everyday life—making our homes smarter, our consumption habits more predictable for companies, our airports more tightly secured. AI is also helping to mechanize our ugliest social prejudices. The Trump administration’s Department of Housing and Urban Development (HUD) is now poised to boost AI as a tool to automate discrimination in the housing market, allowing algorithms to exclude and segregate on a landlord or mortgage lender’s behalf.

HUD recently issued a proposal to roll back a core principle in the Fair Housing Act—Title VIII of the Civil Rights Act of 1968—making it harder to prove that people have been deliberately excluded from housing based on race, gender, or another protected class. In its rulemaking proposal, HUD sought to dramatically weaken the scope of the “disparate impact” framework, a legal theory that has, since the late 1960s, enabled victims of discrimination to bring claims based on the discriminatory effect, rather than the internal motivations, of the defendant. Disparate impact has been a major legal avenue for holding mortgage lenders, real-estate companies, and other housing providers to account for reinforcing and expanding residential segregation and the marginalization of low-income communities of color. Under the current framework, which was affirmed by a keystone Obama-era HUD rule, a housing provider that systematically rejects applications from black prospective tenants but not white ones could be held liable for discrimination even without direct proof that the exclusion was motivated by racial animus.

The new proposed HUD rule would heighten the bar for proving discrimination, requiring that plaintiffs show evidence that “the practice or policy is arbitrary, artificial, and unnecessary”; that there is a “robust causal link” between the challenged practice and the discriminatory effect; and that members of a protected group under federal law have suffered direct harm.

According to Paul Goodman, technology equity director at the Greenlining Institute, a housing justice organization, algorithmic tools are already widely used for risk assessment by companies that finance housing. “Off the top of my head, I can’t think of a lender that doesn’t use some kind of algorithmic type of analysis as it is,” Goodman said.

A recent study on mortgage discrimination found that although financial technology–based platforms tended to discriminate less than face-to-face lenders overall, there was still a significant disparate impact: among online mortgage applicants, black and Latinx borrowers paid over 5 basis points more in interest than non-minority borrowers with similar financial backgrounds.

Much of the HUD rulemaking document is devoted to justifying the use of such algorithmic models, which civil rights advocates see as a kind of get-out-of-jail-free card. The proposal outlines a point-by-point legal defense for AI that essentially guides landlords and companies to wheedle their way out of allegations of discrimination.

First, HUD explains how defendants can show that the inputs they feed into their algorithms are not “protected characteristics” under the Fair Housing Act, which means a housing provider can use purportedly neutral data in its assessments to pursue a “valid objective,” like evaluating the financial risk of a mortgage applicant. Second, the defendant can argue that the algorithm was created by a third party, absolving the defendant of any resulting discrimination. Third, HUD suggests that the defendant can avoid liability by getting “a qualified expert” to vouch that the model is not the cause of a disparate impact—which critics say gives corporations a blank check to validate their practices through experts-for-hire.

Echoing the Silicon Valley rationale that virtually any regulation is a threat to “innovation,” HUD has argued that it was necessary to minimize restrictions on such technologies “so employers and other regulated entities are able to make the practical business choices and profit-related decisions that sustain a vibrant and dynamic free-enterprise system.” The proposal’s central assumption is that entrepreneurialism will suffer if companies fear being sued over algorithms that impose higher costs on black residents or shut out immigrant families from an all-white neighborhood.

In more than 45,750 comments submitted in response to HUD’s proposal since August, technology watchdog groups, housing advocates, and civil rights organizations have argued that the proposed rule would enable a mortgage lender or housing agency to hide discrimination behind algorithmic models and effectively preempt many civil rights lawsuits.

“The proposed rule makes it virtually impossible to successfully bring a charge of disparate impact,” said Ed Gramlich, senior policy advisor with the National Low Income Housing Coalition. “It opens up the door for all these excuses that are provided through the algorithm defense.”

The Electronic Frontier Foundation (EFF) challenged the rationale behind the AI exemption—that using seemingly neutral data inputs is enough to avoid discrimination—noting that “the point of sophisticated machine-learning models is that they can learn how combinations of different inputs might predict something that any individual variable might not predict on its own.” A landlord wary of renting to black or Latinx tenants might seek to exclude prospective tenants from certain zip codes in non-white neighborhoods, for example.
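EFF’s warning about proxy variables can be made concrete with a toy sketch. The short program below is purely illustrative: it uses synthetic data and the scikit-learn library (an assumed dependency), and it is not drawn from any lender’s actual system. It shows how a model that never sees race can still penalize applicants from a zip code that stands in for a historically redlined neighborhood.

```python
# Toy illustration with synthetic data (not any real lender's model):
# a classifier trained only on "neutral" inputs can still reproduce racial
# disparities when one of those inputs, such as zip code, is correlated
# with race.
import random

from sklearn.linear_model import LogisticRegression

random.seed(0)

def make_applicants(n):
    """Generate synthetic applicants whose historical approvals are skewed
    against zip group 1, a stand-in for a historically redlined neighborhood."""
    rows = []
    for _ in range(n):
        zip_group = random.randint(0, 1)           # 0 or 1: neighborhood proxy
        income = random.gauss(55 if zip_group == 0 else 45, 10)  # $k per year
        penalty = 0 if zip_group == 0 else -8      # baked-in historical bias
        approved = 1 if income + penalty + random.gauss(0, 5) > 50 else 0
        rows.append((zip_group, income, approved))
    return rows

train = make_applicants(5000)
X = [[zip_group, income] for zip_group, income, _ in train]  # no protected class here
y = [approved for _, _, approved in train]

model = LogisticRegression(max_iter=1000).fit(X, y)

# Two applicants with identical finances, differing only by neighborhood.
same_income = 50.0
p_group0 = model.predict_proba([[0, same_income]])[0][1]
p_group1 = model.predict_proba([[1, same_income]])[0][1]
print(f"approval probability, zip group 0: {p_group0:.2f}")
print(f"approval probability, zip group 1: {p_group1:.2f}")
# The gap persists even though race never appears anywhere in the model's inputs.
```

Under the proposed defenses, a lender could point out that neither input is a protected characteristic, even when one of them is doing the work of one.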

“[I]f this rule gets implemented,” said Goodman of Greenlining, “it’s actually going to create an enormous incentive for [others to follow]. . . . It’s going to drive people toward these algorithmic tools, and I think we’ll end up in a market place where everyone is taking advantage of this loophole.”



AI is often celebrated as a way to combat discrimination by enabling decision making through a seemingly more objective process, detached from human passions. At the same time, a growing body of research has revealed the potential harms of machine learning and algorithms when they are applied in ways that, intentionally or not, intensify existing hierarchies and social barriers. ProPublica has revealed critical blind spots in machine-learning applications in the criminal legal system. When courts relied on predictive software to score defendants’ “risk” of re-offending, the algorithm was found to replicate patterns of historical bias, leading to black defendants being rated as higher risk, and sentenced more harshly, than white counterparts with comparable records and backgrounds.

In many arenas of business and civic life, predictive analytics and machine learning have faced criticism for reflecting the institutionalized race and gender hierarchies embedded in (largely white and male-dominated) Silicon Valley, and U.S. society as a whole. Seemingly neutral technologies have been shown to have an unintentional discriminatory impact. Facial recognition programs, for example, are relatively ineffective at identifying darker-skinned people’s faces—prompting some cities to ban the use of facial recognition technology by their local police forces for fear of exacerbating the existing disparate impact of policing in black communities. In recent months, tech giants like Amazon have been publicly denounced for partnering with federal agencies and marketing facial recognition software to Immigration and Customs Enforcement (ICE). Facial recognition is already a common technique in ICE operations that mine driver’s license databases.

The proposed HUD rule goes beyond tolerating discriminatory technological practices; it provides legal justification and protection for their use.

Saira Hussain, staff attorney with EFF, said that the HUD rule proposal would allow AI users to preempt a case before any evidence is presented simply by showing that the alleged discriminatory practice involved an algorithm. “HUD is putting forward this complete defense that would prevent plaintiffs from even reaching the phase of discovery,” Hussain said, “and it would just kill a lawsuit outright, if a defendant is relying on an algorithmic tool [or] machine learning model in order to make housing decisions.”

Another concern for civil rights groups is a provision indicating that nothing in HUD’s regulations “requires or encourages the collection of data with respect to protected classes.” Interpreted broadly, that could amount to a blanket exemption from internal data collection for companies that use algorithmic models. EFF warns that this total lack of oversight would allow corporations to lock their algorithms in a “black box” by claiming their analytical methods are trade secrets. “If they’re not keeping track of the data,” Hussain said, “then how can we study that?”



Public anxieties about AI’s social ramifications often center on how automation and robots will revolutionize our workplaces, or the creepy ubiquity of voice assistants like Alexa. But HUD’s proposal shows that for many of the working poor, AI can exert control in more insidious ways. When we’re apartment hunting or applying for a loan, a “predictive” analysis of financial data can do the dirty work of redlining neighborhoods and reinforcing existing segregation patterns, without directly implicating the lender or real estate agent.

Some activists fear that HUD’s disparate impact rule is a step in a wider campaign to use technology to subvert a pre–Information Age civil rights regime. Although public backlash against the use of AI by ICE and local police has recently prompted some lawmakers to try to check the use of AI in law enforcement, HUD’s proposal might reflect a subsurface effort by the famously anti-regulation Trump administration to carve out exemptions for new technologies from basic civil rights standards.

Nonetheless, many AI skeptics also believe that algorithms could play an important role in countering discrimination, as long as the models are designed to make decisions that undercut or circumvent existing implicit biases. Making AI a weapon against discrimination requires rigorous research, transparency, and public oversight, as well as a commitment to correcting the endemic discrimination that technology threatens to amplify. In today’s housing market, however, the underlying rot of segregation and structural racism is being buttressed by brand new tools.

“Really cheap powerful computing is great,” Goodman said, “but it also allows us to be racist faster and more efficiently than ever before.”


Michelle Chen is a contributing editor to Dissent and co-host of its Belabored podcast.

