AI’s potential to transform public safety in Georgia is moving from theory to practice, as state lawmakers consider how emerging technology might aid police, firefighters, and emergency responders. From predicting emergency hot spots to trimming 911 hold times, AI is increasingly stepping in to do some heavy lifting. But as enthusiasm grows, so do concerns over bias, data accuracy, and the risk of losing the “human touch” in critical, often life-or-death situations.
AI in Action: Firefighting with a High-Tech Edge
Imagine firefighters entering a burning warehouse, armed not only with hoses and helmets but with live-streamed data. With AI, they could see hazard zones and temperature spikes, directing rescue efforts with precision. John Chiaramonte, president of consulting services at Mission Critical Partners, envisions exactly this kind of tech-assisted mission, with real-time visuals and pre-programmed safety data fed into goggles or visors to boost firefighters’ capacity for high-risk rescues.
These AI-enabled tools wouldn’t be limited to hazardous-materials tracking or temperature mapping. AI could also help first responders find the quickest, safest escape routes or locate survivors faster. Yet for many, such high-tech solutions evoke scenes straight out of science fiction, sparking excitement as well as calls for careful oversight.
Faster, Smarter 911 Call Management
AI’s impact may be even more immediate for emergency dispatchers facing a flood of 911 calls. Brad Dispensa, a security specialist at Amazon Web Services (AWS), explains how AI-driven call management can filter out non-emergencies before they ever reach a human dispatcher. From neighborhood noise complaints to calls meant for other city services, AI can screen non-urgent requests, giving dispatchers time to focus on emergencies.
And for citizens dealing with long wait times, this technology could be a game-changer. In Los Angeles County, AI integration helped reduce 911 hold times from nearly an hour to under four minutes. By efficiently routing calls, AI allows the public to reach help faster—a crucial benefit when every second counts.
But not all emergencies are straightforward, and some worry that AI could fail to recognize nuanced, life-threatening situations. If an algorithm misinterprets a caller’s tone or urgency, it could potentially cost lives. Balancing efficiency with careful human oversight is paramount.
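To make the triage idea concrete, here is a minimal sketch of confidence-gated call routing in Python. The keyword heuristic and the threshold are hypothetical stand-ins for a trained speech model; the point is the design choice of sending anything ambiguous straight to a human dispatcher.

```python
# Illustrative sketch of confidence-gated 911 call triage. The keyword
# heuristic below is a hypothetical stand-in for a trained model.

NON_EMERGENCY_KEYWORDS = {"noise", "parking", "pothole", "trash"}
CONFIDENCE_THRESHOLD = 0.9  # below this, always escalate to a human

def classify_transcript(transcript: str) -> tuple[str, float]:
    """Return a (label, confidence) pair for a call transcript."""
    words = set(transcript.lower().split())
    hits = words & NON_EMERGENCY_KEYWORDS
    if hits:
        # Confidence grows with keyword evidence but is capped below 1.0.
        return "non_emergency", min(0.5 + 0.2 * len(hits), 0.95)
    return "possible_emergency", 0.99

def route_call(transcript: str) -> str:
    label, confidence = classify_transcript(transcript)
    # Anything urgent or ambiguous goes straight to a human dispatcher.
    if label == "possible_emergency" or confidence < CONFIDENCE_THRESHOLD:
        return "human_dispatcher"
    return "city_services_queue"

print(route_call("loud noise and a parking dispute next door"))  # city_services_queue
print(route_call("my father collapsed and is not breathing"))    # human_dispatcher
```

The threshold here is deliberately conservative: a false “non-emergency” label is far costlier than an unnecessary human review, which is exactly the trade-off critics of automated triage are pointing to.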
AI in Public Defense: Reducing Data Overload
LA County’s public defenders are using AI to handle the avalanche of data that accompanies each new case. Currently, defenders have to sift through records from dozens of agencies, searching for evidence and relevant details to help build a defense. With AI pulling together case information from various sources, lawyers can quickly access critical data, speeding up case prep and allowing more time for strategy and representation.
Dispensa notes that these AI tools aren’t just black boxes; they include transparency features that show the original sources of data, aiming to prevent any accidental misuse of incorrect information. The goal is to streamline, not sideline, the role of the public defender by letting AI take care of tedious, repetitive tasks so humans can focus on judgment calls and legal strategy.
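As a rough illustration of what that transparency could look like, the Python sketch below merges records from hypothetical agency feeds while keeping a pointer from every extracted fact back to the document it came from. The agency names, field names, and Fact structure are assumptions for illustration, not any vendor’s actual schema.

```python
# Sketch of source-attributed record aggregation. Every fact the tool
# surfaces carries the agency and document ID it was pulled from, so a
# defender can always check the original record.

from dataclasses import dataclass

@dataclass
class Fact:
    text: str
    source_agency: str
    document_id: str  # lets the defender verify the underlying record

def aggregate_case_file(case_id, agency_records):
    """Flatten per-agency record feeds into one list of traceable facts."""
    facts = []
    for agency, records in agency_records.items():
        for record in records:
            if record["case_id"] == case_id:
                facts.append(Fact(record["summary"], agency, record["doc_id"]))
    return facts

records = {  # invented sample data
    "sheriff": [{"case_id": "24-001", "doc_id": "S-77", "summary": "Arrest report filed"}],
    "crime_lab": [{"case_id": "24-001", "doc_id": "L-12", "summary": "DNA results pending"}],
}

for fact in aggregate_case_file("24-001", records):
    print(f"{fact.text}  [{fact.source_agency}:{fact.document_id}]")
```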
For defenders, this AI-assisted data sorting means a clearer focus on their clients. Yet it raises questions about the limits of AI in justice, especially if sensitive or biased information gets processed incorrectly. If past errors are absorbed into the data an AI system learns from, those mistakes can be carried forward, potentially affecting future cases.
Bias and the “Garbage In, Garbage Out” Problem
While the efficiency gains are clear, AI’s use in public safety isn’t without risk. When it comes to predicting crime, for example, some lawmakers fear that AI systems could perpetuate existing biases. This concept, known as “garbage in, garbage out,” points to how flawed or biased data can lead to flawed outcomes. An AI that relies on arrest records to identify high-crime areas might inadvertently amplify historical biases, over-policing certain communities while ignoring others.
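A toy simulation makes the feedback loop easy to see. In the Python sketch below, two districts have identical underlying crime; the only difference is a slight skew in the historical arrest counts. Because patrols are surged to whichever district the data flags as the “hot spot,” and more patrols produce more recorded arrests, the skew widens every year. All numbers are invented for illustration.

```python
# Toy "garbage in, garbage out" feedback loop. Both districts have the
# same true crime rate; only the recorded history differs slightly.

arrests = {"district_a": 110, "district_b": 100}  # slightly skewed history
ARREST_RATE = 0.5  # recorded arrests per patrol, equal in both districts

for year in range(1, 6):
    hot_spot = max(arrests, key=arrests.get)  # the model's "high-crime" pick
    for district in arrests:
        patrols = 60 if district == hot_spot else 40  # surge to the hot spot
        # More patrols mean more recorded arrests, regardless of actual crime.
        arrests[district] += patrols * ARREST_RATE
    share = arrests["district_a"] / sum(arrests.values())
    print(f"year {year}: district_a arrest share = {share:.3f}")
```

The model never sees the true crime rate, only its own arrest records, so the initial skew compounds into exactly the kind of feedback loop critics warn about.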
Earlier this year, a group of U.S. lawmakers voiced these concerns in a letter to Attorney General Merrick Garland. They argued that predictive policing software could unfairly target minority neighborhoods, creating a feedback loop of unjust policing practices. For AI in public safety to be equitable, Georgia legislators are being urged to ensure diverse and accurate data inputs and establish safeguards to prevent bias.
The public’s trust in AI is fragile, especially when there’s a risk that software might misinterpret or amplify societal biases. Many advocates say human oversight is necessary to ensure AI doesn’t just reinforce the biases it’s supposed to overcome.
Keeping the “Human Touch” in Public Safety
Even AI advocates agree: in areas like emergency response or public defense, there’s no substitute for human empathy. AI may streamline processes, but it can’t replicate the judgment or nuanced understanding that comes from a human being.
Chiaramonte emphasizes that AI in public safety should be used as a support tool, not a replacement for human decision-making. “We don’t want to replace the human,” he says. “AI should highlight issues, suggest solutions, but let the human make the final call.” He adds that this approach might slightly slow down processes, but it’s a worthwhile trade-off for preserving public trust and reducing the risk of automation errors.
This point resonates with many who fear that too much reliance on AI could erode trust in public institutions. When someone calls 911, they’re not just asking for help—they’re reaching out to another human. For many, knowing that a real person is on the other end, ready to make a thoughtful decision, remains an essential part of feeling safe and cared for.