Oklahoma City police are testing a new way to write reports using AI chatbots.
These chatbots, like Axon’s Draft One, create first drafts of police reports by analyzing audio from body cameras. The AI is used only for minor incidents that do not lead to arrests or felonies.
How AI Chatbots Help Police Write Reports Quickly
A body camera captured every detail during a search by Police Sgt. Matt Gilmore and his K-9, Gunner, as they looked for suspects for almost an hour. Ordinarily, Sgt. Gilmore would use his laptop to write a report, taking 30 to 45 minutes. But this time, he let artificial intelligence create the first draft.
The AI analyzes audio, including radio chatter, captured by the body camera and produces a report in just eight seconds.
“It was a better report than I could have ever written, and it was 100% accurate. It flowed better,” said Gilmore. The report even included a detail he forgot – another officer’s mention of the car color the suspects had run from.
Law enforcement is no stranger to AI. Existing applications include gunshot detection and predictive tools that forecast where crimes might occur.
But many of these applications have raised privacy concerns and prompted lawmakers to propose safeguards. Despite these challenges, AI's potential to streamline report writing appears promising.
Rick Smith, the founder and CEO of Axon, highlighted the enthusiasm for their new AI product, Draft One, by noting, “They become police officers because they want to do police work, and spending half their day doing data entry is just a tedious part of the job that they hate.”
Experts Are Worried About AI Reports in Court
However, a critical concern with AI-generated reports is the potential for false information. AI models, including those used by Axon, can sometimes hallucinate or create inaccurate details.
AI hallucination occurs when a large language model (LLM) makes false claims that are not grounded in real events or data. In chatbots, this can manifest as fabricated stories, names, quotes, dates, and other factual details.
Legal scholar Andrew Ferguson raised a related concern, arguing that automation could decrease the accuracy and thoroughness of police officers' documentation.
Community activists also raise concerns about AI’s role in policing. Aurelius Francisco, a local activist, fears that the technology could exacerbate existing biases and negatively impact marginalized communities.
In Detroit, a man was wrongly arrested and held overnight due to a false positive identification by an AI-powered facial recognition system. This system incorrectly identified him as a suspect.
This case highlights a critical flaw in some AI facial recognition systems, which have been shown to struggle with accurately distinguishing individuals with darker skin tones, leading to misidentifications and wrongful arrests.
Despite these concerns, some officers find the technology beneficial. It helps them focus on their primary duties rather than spending time on paperwork. They also appreciate the AI’s ability to produce clear and accurate reports quickly.