When masked federal agents stopped two teenagers riding their bikes near an Illinois high school last fall, the encounter followed a now familiar script. The agents demanded proof of citizenship. One of the teenagers, who said he was 16 and a U.S. citizen, told the agents he had a school ID, but did not have it on him.
According to a lawsuit filed by the State of Illinois and the City of Chicago, one agent then asked another, “Can you do facial?” The other agent pointed a cell phone at the teenager, appearing to take a photo of his face.
That moment, captured not by body camera footage but by sworn allegations, has become emblematic of a shift now under legal scrutiny as mobile facial recognition expands into everyday encounters with children far from the border and outside the controlled settings DHS has traditionally used to justify biometric identification.
The lawsuit accuses the Department of Homeland Security (DHS), its component agencies Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP), and senior Trump administration officials of operating an unlawful interior enforcement regime built around coercive stops and the routine use of mobile biometric tools.
Among its most serious allegations is that DHS agents have used facial recognition on minors who are U.S. citizens without consent, individualized suspicion, or meaningful public limits on retention and sharing.
At the center of the case is DHS’s use of Mobile Fortify, a field-deployed application that scans fingerprints and performs facial recognition, then compares collected data against multiple DHS databases, including CBP’s Traveler Verification Service, Border Patrol systems, and the Office of Biometric Identity Management’s Automated Biometric Identification System.
The complaint alleges DHS launched Mobile Fortify around June 2025 and has used it in the field more than 100,000 times since launch.
Unlike CBP’s traveler entry-exit facial recognition program, in which U.S. citizens can decline participation and consenting citizens’ photos are retained only until identity verification is complete, Mobile Fortify is not restricted to ports of entry and carries no meaningful limits on when, where, or from whom biometrics may be taken.
The lawsuit cites a DHS Privacy Threshold Analysis stating that ICE agents may use Mobile Fortify when they “encounter an individual or associates of that individual,” and that agents “do not know an individual’s citizenship at the time of initial encounter” and use Mobile Fortify to determine or verify identity.
The same passage, as quoted in the complaint, authorizes collection in identifiable form “regardless of citizenship or immigration status,” acknowledging that a photo captured could be of a U.S. citizen or lawful permanent resident.
The lawsuit further alleges DHS retains biometric data collected through Mobile Fortify, regardless of citizenship or age, for up to fifteen years.
If that claim is borne out, children subjected to brief street encounters could carry a biometric record into adulthood without being charged, arrested, or even suspected of wrongdoing.
What makes the use of facial recognition on children especially fraught is not merely the technology itself, but how sharply current practice collides with DHS’s own written guardrails and the oversight warnings DHS has received about transparency and accountability.
DHS policy recognizes age as protected but offers children no affirmative safeguards.
In September 2023, DHS issued Directive 026-11, a department-wide policy governing the use of face recognition and face capture technologies.
The directive emphasized that DHS “does not collect, use, disseminate, or retain” facial recognition information solely based on protected characteristics, including age, and it frames these systems as inherently privacy sensitive.
But the directive does not establish heightened thresholds for using facial recognition on minors, does not require parental notice or consent, does not mandate shortened retention periods for juvenile data, and does not impose a child-specific proportionality analysis.
That omission becomes decisive once facial recognition migrates from controlled checkpoints into coercive street encounters, where a teenager cannot meaningfully opt out and a child confronted by armed agents cannot meaningfully consent.
Directive 026-11 also draws a bright line around enforcement. It clearly states that facial recognition used for identification “may not be used as the sole basis for law or civil enforcement related actions,” and potential matches must be manually reviewed by human examiners before action is taken.
That safeguard reflects DHS’s recognition that facial recognition is probabilistic and context-dependent, and it is meant to prevent automation-driven enforcement.
In street-level encounters involving minors, however, a facial scan can function as de facto identity confirmation in real time, shaping how agents question, detain, or release a child, even if no arrest follows.
The directive’s current status has also become part of the controversy. The Privacy and Civil Liberties Oversight Board’s 2025 staff report examining the Transportation Security Administration’s (TSA) facial recognition use found that Directive 026-11 was no longer available on DHS’s website.
DHS was unable to confirm to PCLOB whether it remained official policy, prompting PCLOB to recommend that DHS restore it and publicly affirm it as controlling policy, or reissue an analogous directive.
The Illinois and Chicago lawsuit goes further, alleging DHS rescinded Directive 026-11 sometime on or before February 14, 2025 – not long after Donald Trump took the oath of office.
DHS often points to testing and evaluation regimes to defend facial recognition, but the PCLOB report underscores how difficult it can be for the public to assess whether government facial recognition systems have “sufficient accuracy” and acceptably minimal demographic differential performance without meaningful transparency about the algorithms, datasets, and deployments.
PCLOB also details how TSA’s program, despite broad operational deployment, was repeatedly described as a “pilot,” and how TSA had not yet produced a comprehensive Privacy Impact Assessment (PIA) describing the full system even after reaching initial operational capability, raising concerns about public notice and oversight.
Those transparency problems apply with even greater force to mobile enforcement uses.
The lawsuit alleges there is no meaningful restriction on when or where Mobile Fortify can be used, and it describes a regime in which biometric capture is routine during street encounters, including with minors. Those are precisely the deployments where context, coercion, and error risk are most acute.
The PCLOB’s point was straightforward: if DHS cannot state whether a policy is in force, then oversight agencies, operators, and the public cannot know what is allowed and what is not.
For minors, that uncertainty has a human cost. If children’s facial images are being captured during street encounters and retained for years, the public still lacks the basic documentation needed to understand how those images are stored, shared, audited, or removed.
The Illinois lawsuit situates biometric scanning within a broader enforcement architecture defined by neighborhood-scale targeting and stops near sensitive locations, including places where children gather.
The complaint describes teenagers scanned near schools and children left crying after parents were detained – everyday moments transformed into biometric events, with consequences that may follow a child for years.
DHS Directive 026-11 and the PCLOB report together establish that the department understands facial recognition is privacy sensitive, recognizes age as a protected class, forbids enforcement based solely on algorithmic results, and relies on policy clarity as a precondition for oversight.
But what they do not explain is why those principles appear to fall away at the point where ICE encounters children in the field.
Until DHS publishes child-specific safeguards, retention rules, system-specific performance data, and workable limits on sharing and redress, the legal and ethical questions surrounding facial recognition on minors will continue to be answered behind closed doors.
There is no publicly available PIA that evaluates field use with a child-focused risk analysis. There is no published retention schedule for probe photos captured on the street. Memoranda of understanding governing biometric data sharing with state, local, or private partners are often withheld or heavily redacted.
As a result, it is often impossible to trace how a child’s facial image moves from a brief encounter into DHS systems, vendor platforms, or partner databases, or whether it can ever be removed.
The consequences, however, are already playing out in public on sidewalks, near schools, and in the lives of children who may never know where their biometric identities now reside.