Policing in the AI Era: Balancing security, privacy, and public trust

Law enforcement is relying more heavily on video evidence, both footage submitted by community members and data from public and privately owned security cameras. As departments wade through thousands of hours of video, they are increasingly turning to AI-trained video analytics solutions to decipher that data more effectively. This article reviews the historic and emerging uses of this technology in the data-driven policing toolkit and addresses concerns around it.

The technological evolution of data-driven policing

Aggregating data from multiple sources to more effectively allocate police resources is not a novel concept; CompStat launched in the New York City Police Department more than 30 years ago. Its core practices of aggregating data and generating visualizations to inform predictive policing have been duplicated across more than 100 law enforcement agencies in the United States and continue in the Real Time Crime Centers of contemporary policing. Data-driven policing can help short-staffed agencies and foster greater community trust and engagement through informed outreach programs and data dashboards.

Historically, CompStat and similar programs relied on data from previous crimes to allocate resources and predict future crimes. New York City led American cities in investing aggressively in closed-circuit camera surveillance with its Domain Awareness System more than a decade ago. Video analytics were a key factor in quickly identifying two suspects in the aftermath of the Boston Marathon bombing in 2013. Contemporary technology such as body-worn cameras, license plate readers, gunshot detectors, internet-connected private security cameras (shared voluntarily with law enforcement departments), and facial recognition technology allows departments to track criminal activity in near-real time. A surge of post-pandemic American Rescue Plan Act (ARPA) dollars has funded license plate readers, video analytics tools, and security camera investments that serve as force multipliers, helping short-staffed law enforcement agencies monitor their communities.

Most recently, in the aftermath of the murder of UnitedHealthcare CEO Brian Thompson in late 2024, video analytics and AI facial recognition technology were used to help track a suspect's path within New York City and widely distribute security camera footage of the suspect's face. The suspect fled the crime scene but could not escape detection by the network of thousands of cameras monitoring New York City.

How AI tools make hours of video content functional

The volume of video captured across these interconnected tools would never be manageable for real-time human review. AI applies behavior analysis in complex environments to flag anomalies for human review, finding the needle in the digital haystack. Algorithms are either trained to detect a specific behavior (such as a person entering a secure area or a suspicious item being left behind) or built around a learning algorithm that adjusts based on past behavior. A learning algorithm would, for example, be able to distinguish the movement of a bag blowing across a parking lot from that of a human being in the same parking lot.
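As a rough illustration of the rule-based flavor of this workflow, the hypothetical sketch below flags detections for human review, assuming an upstream object detector has already labeled what appears in each frame. The detector output format, the secure-zone coordinates, and the labels are all assumptions for illustration, not a description of any vendor's product.

```python
# Hypothetical sketch: flag video detections for human review.
# Assumes an upstream object detector has produced, per frame, a list of
# detections with a class label and a normalized bounding-box center.

from dataclasses import dataclass

@dataclass
class Detection:
    frame: int   # frame index in the video
    label: str   # e.g. "person", "bag", "vehicle"
    x: float     # bounding-box center, normalized 0-1
    y: float

# Example restricted zone in normalized coordinates (an assumption for illustration).
SECURE_ZONE = {"x_min": 0.6, "x_max": 1.0, "y_min": 0.0, "y_max": 0.4}

def in_zone(d: Detection, zone: dict) -> bool:
    return zone["x_min"] <= d.x <= zone["x_max"] and zone["y_min"] <= d.y <= zone["y_max"]

def flag_for_review(detections: list[Detection]) -> list[str]:
    """Return human-readable alerts: a person inside the secure zone is flagged,
    while inanimate objects (such as a wind-blown bag) are ignored."""
    alerts = []
    for d in detections:
        if d.label == "person" and in_zone(d, SECURE_ZONE):
            alerts.append(f"frame {d.frame}: person entered secure area")
    return alerts

if __name__ == "__main__":
    sample = [
        Detection(frame=10, label="bag", x=0.7, y=0.2),      # ignored: not a person
        Detection(frame=42, label="person", x=0.75, y=0.3),  # flagged for review
    ]
    for alert in flag_for_review(sample):
        print(alert)
```

In practice, a learning-based system would replace the hard-coded zone and label rules with models trained on past footage, but the output is the same: a short list of flagged moments rather than hours of raw video.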

Video analytics companies can also review body-worn camera footage to recognize instances of police professionalism and enhance the citizen experience. Law enforcement agencies generally review only a small fraction of body-worn camera footage due to resource constraints, but analytics tools that automatically detect critical events (such as use of force, apprehensions, and de-escalation attempts) can identify areas for enhanced training and surface how citizens experience police encounters.
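To make that triage workflow concrete, here is a minimal, hypothetical sketch of sorting body-worn camera clips by keyword, assuming each clip already has a machine-generated transcript. The event categories and keyword lists are illustrative assumptions, not any agency's or vendor's actual criteria.

```python
# Hypothetical triage of body-worn camera clips by transcript keywords.
# Assumes each clip has a machine-generated transcript; keyword lists are illustrative.

CRITICAL_EVENT_KEYWORDS = {
    "use_of_force": ["stop resisting", "shots fired", "taser"],
    "apprehension": ["you're under arrest", "hands behind your back"],
    "de_escalation": ["let's talk", "take a deep breath", "i hear you"],
}

def tag_clip(transcript: str) -> list[str]:
    """Return the event categories whose keywords appear in the transcript."""
    text = transcript.lower()
    return [
        category
        for category, keywords in CRITICAL_EVENT_KEYWORDS.items()
        if any(keyword in text for keyword in keywords)
    ]

# Usage: any clip that receives a tag is routed to a supervisor's review queue.
clips = {
    "clip_001": "Sir, let's talk this through. Take a deep breath.",
    "clip_002": "Routine traffic stop, nothing further to report.",
}
review_queue = {clip_id: tags for clip_id, tags in
                ((cid, tag_clip(t)) for cid, t in clips.items()) if tags}
print(review_queue)  # {'clip_001': ['de_escalation']}
```

A production system would likely combine transcript analysis with video and telemetry signals, but the point stands: automated tagging lets supervisors review the small subset of footage most likely to matter.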

Controversy around facial recognition technology

Facial recognition technology and its use by law enforcement agencies has been a hotly debated topic since the aftermath of the death of George Floyd in 2020. The technology has been scrutinized over alleged inappropriate use by law enforcement agencies and concerns that it has lower accuracy in recognizing black and brown faces. The State of Maryland adopted legislation regulating law enforcement use of facial recognition technology this past year, and the Maryland State Police adopted a model policy around the technology in accordance with the new state law. Policies and legislation commonly ensure that facial recognition technology is not used alone to establish probable cause or to surveil constitutionally protected activities. There are also concerns about protecting the identity of minors. The New York City Police Department has a public-facing Q&A page on its website explaining how facial recognition technology is used and how often.

Addressing staffing shortages through more effective policing methods

Video analytics and other data-driven technologies can help departments address criminal activity more efficiently, especially when an agency is short-staffed. Seattle invested heavily in police surveillance equipment in three high-activity corridors last year and installed license plate readers on all police department vehicles. This is concurrent with the police department indicating that it will not respond to tripped alarm calls without additional verification (video, audio, or an eyewitness, for example). The Seattle Police Department has struggled with a nearly 30% vacancy rate since the COVID-19 pandemic and must prioritize which calls to respond to, as responding to all of them in a timely fashion is not attainable. The intersection of technology in policing and staffing shortages has opened the door to discussing when calling 911 is appropriate and whether 24/7 alternative response programs might be more appropriate than traditional law enforcement responses.

In summary, law enforcement agencies can use video analytics and other emerging technologies to respond to crime in a more resource-conscious and efficient manner. These efficiencies must be balanced against the public's desire for transparency and collaboration in how these technologies are deployed, how privacy is protected, and how data is secured. Policy measures can be implemented to ensure that agencies use the most current technologies. Lastly, efforts should be made to train AI algorithms on unbiased data sets to prevent perpetuating harm against marginalized communities.
