The AI Dragnet

The U.S. government is activating a suite of algorithmic surveillance tools, developed in concert with major tech companies, to monitor and criminalize immigrants’ speech.

This article was published in partnership with The Intercept.

Rita Murad, a twenty-one-year-old Palestinian citizen of Israel and student at the Technion-Israel Institute of Technology, was arrested by Israeli authorities in November 2023 after sharing three Instagram stories on the morning of October 7. The images included a picture of a bulldozer breaking through the border fence in Gaza and a quote: “Do you support decolonization as an abstract academic theory? Or as a tangible event?” She was suspended from the university and faced up to five years in prison.

In recent years, Israeli security officials have boasted of a “ChatGPT-like” arsenal used to monitor social media users for supporting or inciting terrorism. It was unleashed in full force after Hamas’s bloody attack on October 7, as right-wing activists and politicians pressed police forces to arrest hundreds of Palestinians within Israel and East Jerusalem for social media–related offenses. Many had engaged in relatively low-level political speech, like posting verses from the Quran on WhatsApp or sharing images from Gaza on their Instagram stories.

When the New York Times covered Murad’s saga last year, the journalist Jesse Barron wrote that, in the United States, “There is certainly no way to charge people with a crime for their reaction to a terrorist attack. In Israel, the situation is completely different.”

Soon, that may no longer be the case.

Hundreds of students with various legal statuses have been threatened with deportation on similar grounds in the United States this year. Recent high-profile cases have targeted those associated with student-led dissent against the Israeli military’s policies in Gaza. There is Mahmoud Khalil, a green card holder married to a U.S. citizen, taken from his Columbia University residence and sent to a detention center in Louisiana. There is Rümeysa Öztürk, a Turkish doctoral student at Tufts who was disappeared from the streets of Somerville, Massachusetts, by plainclothes officers, allegedly for co-authoring an op-ed calling on university administrators to heed student protesters’ demands. And there is Mohsen Mahdawi, a Columbia philosophy student arrested by ICE agents outside the U.S. Citizenship and Immigration Services office where he was scheduled for his naturalization interview.

In some instances, the State Department has relied on informants, blacklists, and technology as simple as a screenshot. But the United States is in the process of activating a suite of algorithmic surveillance tools Israeli authorities have also used to monitor and criminalize online speech.

In March, Secretary of State Marco Rubio announced the State Department was launching an AI-powered “Catch and Revoke” initiative to accelerate the cancellation of student visas. Algorithms would collect data from social media profiles, news outlets, and doxing sites to enforce the January 20 executive order targeting foreign nationals who threaten to “overthrow or replace the culture on which our constitutional Republic stands.” The arsenal was built in concert with American tech companies over the past two decades and has already been deployed, in part, within the U.S. immigration system.

Rubio’s Catch and Revoke initiative emerges from long-standing collaborations between tech companies and increasingly right-wing governments eager for their wares. The AI industry’s business model hinges on unfettered access to troves of data, which makes less-than-democratic contexts, where state surveillance is unconstrained by judicial, legislative, or public oversight, particularly lucrative proving grounds for new products. The effects of these technologies have been most punitive at the borders of the United States and the European Union, in places like the migrant detention centers of Texas and Greece. But now the inevitable is happening: they are becoming popular domestic policing tools.

Israel was one early test site. As Israeli authorities expanded their surveillance powers to clamp down on rising rates of Palestinian terrorism in the early 2010s, U.S. technology firms flocked to the region. In exchange for surveillance systems, first digital and then automated, Israel’s security apparatus offered CEOs troves of the information economy’s most prized commodity: data. IBM and Microsoft provided software used to monitor West Bank border crossings. Palantir offered predictive policing algorithms to Israeli security forces. Amazon and Google supplied cloud computing infrastructure and AI systems. The result was a surveillance and policing dragnet that could entangle innocent people alongside those who posed credible security threats. Increasingly right-wing ruling coalitions allowed it to operate with less and less restraint.

With time, and in partnership with many of the same companies, the U.S. security state scaled up its own surveillance capacities.



Not long ago, Silicon Valley preached a mantra of globalization and integration. It was antithetical to the far right’s nationalistic agenda, but it was good for business in an economy that hinged on the skilled and unskilled labor of foreigners. So when Trump signed an executive order in January 2017 banning immigration from seven Muslim-majority countries and subjecting those approved for visas to extra screening, tech executives and their employees dissented.

Google co-founder Sergey Brin, an immigrant from the Soviet Union, joined demonstrations at the San Francisco airport to protest Trump’s travel ban. Mark Zuckerberg invoked his great-grandparents, Jewish immigrants from Europe, as grounds for his opposition to the policy. Sam Altman also called on industry leaders to take a stand. “The precedent of invalidating already-issued visas and green cards should be extremely troubling for immigrants of any country,” he wrote on his personal blog. “We must object, or our inaction will send a message that the administration can continue to take away our rights.”

Many tech workers spent the first Trump presidency protesting the more sinister applications of a data-driven economy. Over the following year, Microsoft, Google, and Amazon employees staged walkouts and circulated petitions demanding an end to contracts with the national security state. The pressure yielded image-restoration campaigns: Google dropped its bid for a $10 billion Defense Department contract, and Microsoft promised its software and services would not be used to separate families at the border.

But the so-called tech resistance belied an inconvenient truth: Silicon Valley firms supplied the software and computing infrastructure that enabled Trump’s policies. Companies like Babel Street and Palantir entered into contracts with ICE in 2015 and became the bread and butter of the agency’s surveillance capacities, mining personal data from thousands of sources, converting it into searchable databases, and mapping connections between individuals and organizations. By 2017, conglomerates like Amazon, Microsoft, and Google were becoming essential too, supplying the cloud services that host vast stores of citizens’ and residents’ personal information.

Even as some firms pledged to steer clear of contracts with the U.S. security state, they continued working abroad, especially in Israel and Palestine. Investigative reporting over the last year has brought more recent exchanges to light. Deals between U.S. companies and the Israeli military ramped up after October 7, according to leaked documents from Google and Microsoft. Intelligence agencies relied on Microsoft Azure and Amazon Web Services to host surveillance data and used Google’s Gemini and OpenAI’s ChatGPT to comb through and operationalize much of it; the tools often played direct roles in operations—from arrest raids to airstrikes—across the region.

These contracts gave U.S. technology conglomerates the chance to refine military and homeland security systems abroad until Trump’s re-election signaled they could do so with little pushback at home. OpenAI changed its terms of use last year to allow militaries and security forces to deploy its systems for “national security purposes.” Google did the same this February, removing from its “public ethos policy” the language saying it wouldn’t use its AI for weapons and surveillance. Meta also announced that U.S. contractors could use its AI models for “national security” purposes.

Technology firms are committed to churning out high-risk products at a rapid pace, which is why privacy experts warn these tools can turbocharge the U.S. surveillance state at a time when constitutional protections are eroding.

“It’s going to give the government the impression that certain forms of surveillance are now worth deploying when before they would have been too resource intensive,” Ben Wizner, director of the ACLU’s Speech, Privacy, and Technology Project, offered over the phone last week. “Now that you have large language models, you know, the government may say why not store thousands of hours of conversations just to run an AI tool through them and decide who you don’t want in your country.”

The parts are all in place. According to recent reports, Palantir is building ICE an “ImmigrationOS” that can generate reports on immigrants and visa holders—including what they look like, where they live, and where they travel—and monitor their location in real time. ICE will use the database, combined with a trove of other AI tools, to surveil immigrants’ social media accounts and to track down and detain “antisemites” and “terrorists,” according to a recent announcement by the State Department. “We need to get better at treating this like a business,” acting ICE Director Todd Lyons said in a speech at the 2025 Border Security Expo in Phoenix earlier this month, “like [Amazon] Prime, but with human beings.”

It is important to remember that many of the proprietary technologies private companies are offering the U.S. surveillance state are flawed. Content moderation algorithms deployed by Meta often flag innocuous content as incendiary, especially Arabic-language posts. OpenAI’s large language models are notorious for generating hallucinatory statements and mistranslating phrases from foreign languages into English. Stories of error abound in recent raids and arrests, from ICE officials mistaking Mahmoud Khalil for a student visa holder to citizens, lawful residents, and tourists with no criminal record being rounded up and deported to El Salvador.

But where AI falters technically, it delivers ideologically. We see this in Israel and Palestine, as well as in other contexts marked by relatively unchecked government surveillance. The algorithms embraced by Israel’s security forces remain rudimentary, but officials have used them to justify increasingly draconian policies. The Haifa-based human rights organization Adalah says hundreds of Palestinians with no criminal record or affiliation with militant groups are being held behind bars because right-wing activists and politicians pressed police forces to search their phones and social media pages and label what they said, shared, or liked online as “incitement to terrorism” or “support of terrorism.”

Now we hear similar stories in American cities, where First Amendment protections and due process are disintegrating. The effects were nicely distilled by Ranjani Srinivasan, an Indian PhD student at Columbia who self-deported after ICE officials showed up at her door and cancelled her legal status. From refuge in Canada, she told the New York Times she was fearful of the expanded U.S. algorithmic arsenal. “I’m fearful that even the most low-level political speech or just doing what we all do—like shout into the abyss that is social media—can turn into this dystopian nightmare,” Srinivasan said, “where somebody is calling you a terrorist sympathizer and making you, literally, fear for your life and your safety.”

It is frightening to think that all this happened in Trump’s first 100 days in office. But corporate CEOs have been working with militaries and security agencies to cement this status quo for years now. The visible human cost of these exchanges may spawn the opposition needed to head off more repression. But for now, the groundwork is laid for the U.S. surveillance state to finally operate at scale.


Sophia Goodfriend is an anthropologist who writes about automated warfare in Israel and Palestine. She is currently a postdoctoral fellow with the Belfer Center’s Middle East Initiative at the Harvard Kennedy School.