
Some US police officers are using AI chatbots to write police reports, despite concerns about racial bias

US police use technology that records audio from their body cameras to write their incident reports in eight seconds.


Some police departments in the United States are experimenting with using artificial intelligence (AI) chatbots to create initial drafts of their incident reports.

Technology using the same generative AI model as ChatGPT pulls audio and radio traffic from a police body camera microphone and can spit out a report in eight seconds.

“It was a better report than I could have ever written, and it was 100 percent accurate. It flowed better,” said Matt Gilmore, an Oklahoma City Police sergeant.

The new tool could be part of a growing AI toolkit that US police already use, which includes algorithms that read license plates, recognize suspects' faces or register gunshots.

Few guidelines for using AI reports

Rick Smith, CEO and founder of Axon, the company behind AI product Draft One, said AI has the potential to eliminate the paperwork that police have to do, leaving them more time to do their actual work.

But as with other AI technologies used by police, there are concerns, Smith acknowledged.

He said they mostly come from prosecutors who want to make sure police officers know the contents of their report in case they have to testify in criminal proceedings about what they saw at the crime scene.

“You never want to call an officer to the stand who says, 'The AI wrote that, not me,'” Smith said.

The introduction of AI-generated police reports is so new that there is little to no guidance on their use.

In Oklahoma City, the tool was shown to local prosecutors, who urged caution before using it in high-risk criminal cases.

But there are also examples from other cities in the US where officials can use the technology in any case and at their own discretion.

Concerns about racial bias in AI

Legal scholar Andrew Ferguson would like to see a greater public debate about the benefits and potential harms of this technology before it is introduced.

For one thing, the large language models behind AI chatbots tend to invent false information, a problem known as hallucination, which could add convincing and barely detectable untruths to a police report.

“I fear that the automation and ease of use of the technology could cause police officers to be less careful in their writing,” said Ferguson, a law professor at American University who is working on what is expected to be the first law journal article on the emerging technology.

Ferguson said a police report is important to determine whether an officer's suspicions “justify the deprivation of a person's liberty.” Sometimes it's the only statement a judge sees, especially in minor offenses.


Human-generated police reports also have flaws, Ferguson said, but it is an open question which of the two is more reliable.

Concerns that racial prejudice and bias in society could be woven into AI technology are just part of what Oklahoma City community activist Aurelius Francisco finds “deeply disturbing” about the new tool, he told AP.

He said automating these reports would “ease the ability of police to harass, surveil and inflict violence on community members. While this makes the police's job easier, it makes the lives of Black and brown people more difficult.”