A public safety disaster and an audacious jailbreak have police in New Orleans considering broader use of facial recognition technology.
The Times-Picayune reports that the New Orleans Police Department (NOPD) has asked the city council to sponsor a rewrite of a 2022 ordinance limiting its use of facial recognition. The request comes after ten prisoners escaped from Orleans Parish Prison last month. Eight have been recaptured, while two – Antoine Massey and Derrick Groves – remain at large.
NOPD says the escape, combined with the New Year’s Day ramming attack on Bourbon Street, is proof enough that cops need better surveillance and tracking tools.
New Orleans has been a focal point for the debate about police use of facial recognition. It is home to Project NOLA, a privately run nonprofit that manages a crime camera network across the city. Its subsidized HD surveillance cameras are wired into a central crime monitoring hub hosted at the University of New Orleans, which feeds footage to law enforcement in real time. Its facial recognition cameras are primarily situated in commercial areas.
In April, NOPD Superintendent Anne Kirkpatrick ordered a pause on the automated alerts officers had been receiving from Project NOLA, pending a review of whether the practice complies with the city ordinance. The move was reportedly prompted in part by reports that police were using live facial recognition.
Current policy allows officers, “after exhausting other options, to ask Louisiana State Police to run facial recognition searches over specified violent crimes, including shootings, carjackings and rapes.” Requests must go through the Louisiana State Analytical and Fusion Exchange. At the moment, facial recognition is not “heavily used” by NOPD; monitors reported just 19 NOPD requests to the state fusion center for all of 2023.
There is public support for loosening the rules. The NOLA Coalition, a group of more than 600 businesses and nonprofits formed in 2022, has called for council members to allow “constitutional use of such advanced policing technology,” so that police can respond immediately to security events.
There is also pushback from civil rights groups, such as the ACLU of Louisiana, which argues that Project NOLA’s system is dangerous.
City camera technology not useful for facial recognition: Project NOLA founder
Bryan Lagarde, the former New Orleans police officer who founded Project NOLA, says the city’s own cameras have poor resolution and limited range. “The city’s cameras are not capable of facial recognition. They would have to replace a very large number of cameras, possibly all of the cameras they have.” Project NOLA cameras, by contrast, are “able to clearly see faces 700 feet away, regardless of lighting conditions,” he says.
There is evidence to support the police’s claim that Project NOLA’s surveillance capabilities are helpful. Its cameras helped retrace the planning of the January 1 ramming attack, and aided in the arrest of Kendell Myles, one of the Orleans Parish Prison escapees.
Lagarde says he believes a proposed new ordinance would “free up NOPD to tap the Project NOLA network without concern” as needed.
The superintendent’s decision to suspend alerts is still in effect, but is reportedly under review.
Virginia standards on police use of AI seven months overdue
Meanwhile, debates about facial recognition and AI systems for law enforcement continue to simmer, at varying levels, across America. In Virginia, according to VPM News, Governor Glenn Youngkin and state Attorney General Jason Miyares are seven months behind a deadline set by the governor to outline standards for Virginia State Police’s use of artificial intelligence.
Youngkin’s executive order, issued with some fanfare, directed the state public safety and homeland security secretary to develop rules for AI use “applicable to all executive branch law enforcement agencies and personnel.” The original deadline of October 18, 2024 is now in the distant rearview.
Youngkin leaves office in January 2026, and there is currently no news on how long the delay could last. A spokesperson tells VPM News that “the Administration is working with its AI Task Force and stakeholders to finalize standards that address all key issues.”
Virginia State Police have a 15-year contract with Tech5, worth an estimated $54 million, to improve the force’s fingerprint collection capabilities. Some AI rules are in place, but the state lacks comprehensive legislation. Lawmakers banned facial recognition in 2021, but two years later passed a bill allowing law enforcement to use it with some guardrails. AI from Chinese firm DeepSeek is banned on state devices. However, it is fine to contract with Dataminr, a social media surveillance company that partners with X; the firm recently helped Los Angeles police track protesters demonstrating against the war in Gaza.
Dallas cops not using Clearview as much as some might have hoped
In some cities, police are explicitly pushing for more use of facial recognition – and wondering why uptake isn’t meeting expectations. Dallas police rolled out Clearview AI’s facial recognition system in October 2024; according to the Dallas Observer, the department has so far generated leads in 34 cases out of 94 approved officer requests.
Apparently, that’s not enough for some. The tech is only to be used for violent offenses or in the case of imminent public safety threats. But there is encouragement that use will pick up as officers come to realize how effective it is.
Such enthusiasm could be misplaced, given Clearview’s rocky record on privacy. The firm recently settled a long-running data privacy lawsuit at an estimated cost of $51.75 million. Complainants say the company’s practice of scraping social media accounts for face biometrics violates data privacy law.
According to government records, since 2020, U.S. Immigration and Customs Enforcement has paid millions in contracts to Clearview AI. A recent article in Mother Jones alleges that the company was always intended to monitor immigration and border control, saying it “doled out free trials to hook users, urging cops to ‘run wild’ with searches. They did.”
Facial recognition more dangerous under current government: Massachusetts ACLU
In Massachusetts, state police have fielded at least 45 requests to use facial recognition to conduct searches for criminal suspects. The Gloucester Daily Times says that’s a more than 200 percent increase over the same time period in the previous year, when the agency had 14 requests.
Critics at the American Civil Liberties Union (ACLU) of Massachusetts are calling on policymakers to set restrictions on its use. The organization’s director of technology and justice programs, Kade Crockford, says that AI and facial recognition for surveillance is “always dangerous, but particularly so in an environment like today’s, with a federal government hell-bent on attacking our most basic rights, including our First Amendment rights to freely associate and speak out on issues that matter to us.”
A proposal from state lawmakers would limit and regulate law enforcement’s acquisition, possession, and use of biometric surveillance technology, and require officers to have a warrant based on probable cause before conducting a facial recognition search.