The American Immigration Council does not endorse or oppose candidates for elected office. We aim to provide analysis regarding the implications of the election on the U.S. immigration system.

On April 30, the Department of Homeland Security (DHS) released the updated 2024 inventory of unclassified and non-sensitive AI use cases within the department. The public data revealed something powerful: artificial intelligence isn’t just a future possibility in immigration enforcement—it’s already here. In fact, the inventory listed 105 active DHS AI use cases deployed by major immigration agencies, with wide-ranging applications that impact asylum screenings, border surveillance, fraud detection, and other key DHS functions.

The department has released its AI use case inventory publicly since 2022, but the 2024 inventory provides the most comprehensive disclosure so far. The 2023 inventory listed only 39 use cases by immigration agencies within DHS. In 2024, however, the Office of Management and Budget (OMB) issued new guidance requiring agencies to disclose more of their AI use cases.

Of the 105 AI applications deployed by immigration agencies in the 2024 inventory:

  • Customs and Border Protection (CBP) leads with 59 AI use cases.
  • Immigration and Customs Enforcement (ICE) follows with 23.
  • U.S. Citizenship and Immigration Services (USCIS) uses AI in 18 cases.
  • DHS headquarters has 5 more general AI applications that apply across all agencies.

The inventory organizes use cases into topic areas. While there are many use cases for internal agency support or government services, the majority of these AI use cases, 61%, are tied to “Law and Justice.” They include tools for:

  • Biometric identification
    Example: CBP’s Unified Processing/Mobile Intake system uses facial recognition to match individuals against the agency’s photo repositories. Integrated with the Traveler Verification Service, the system helps agents quickly identify people with prior apprehensions or security flags to expedite processing at the border.
  • Screening
    Example: CBP uses a tool called Babel to collect and analyze open-source and social media content related to specific travelers. Using AI for translation, image recognition, and text detection, Babel helps analysts identify potential threats or people who may need additional inspection—supplementing the manual review process and sometimes reducing the need for additional screening.
  • Assistance for investigation
    Example: ICE’s Homeland Security Investigations unit uses an AI tool called Email Analytics for Investigative Data to generate leads. The system uses natural language processing and pattern detection to analyze large volumes of email, video, and audio data, identifying communications that may be related to criminal activities. Based on these patterns, ICE identifies certain individuals or networks for further actions by investigators and analysts.

Among CBP’s 59 cases, 71% relate to Law and Justice, and for ICE, 65% of its AI projects fall into the same topic area. The share of cases that fall under Law and Justice is the lowest for USCIS at 39%.

The DHS inventory also flags AI use cases that rely on facial recognition and face capture technologies. Sixteen of the immigration-related use cases involve these technologies, most of them deployed by CBP and ICE.

According to the DHS inventory, 27 out of 105 use cases are labeled as “rights-impacting.” These are cases that the OMB, under the Biden administration, identified as impacting an individual’s rights, liberty, privacy, access to equal opportunity, or ability to apply for government benefits and services.

Last month, however, the OMB under the Trump administration released two new memos addressing the use of AI. Some lawyers noted that the minimum risk standards for "rights-impacting" uses were missing from the two new memos.

CBP has the highest number of rights-impacting cases (14), followed by USCIS (7) and ICE (5). In addition, 28 cases are identified as "too new to assess," meaning these tools will need to be evaluated for whether they are "rights-impacting" before the end of their initiation stage.

Many immigration attorneys already navigate complex and opaque processes. Now, they must also contend with the increasing use of AI in decision-making—often behind the scenes. For example, if an AI-powered system flags an asylum application as potentially “fraudulent,” how will that factor into the decision-making process? How is that disclosed to the applicants? How can applicants appeal an AI-powered decision?

There are many unanswered questions about how AI is used by DHS in immigration enforcement. One thing that is clear, however, is that AI is already shaping immigration outcomes. The question is whether we’re prepared to hold those systems—and the agencies deploying them—accountable.

This post is the first in a series that explores how DHS is integrating AI into immigration enforcement. Future posts will explore how AI is shaping decision-making in immigration enforcement, the use of surveillance tools at the border, the risks of bias, the potential for greater efficiency, and, ultimately, how we might build more transparent and accountable systems.
