Police have to explain why report claims officer turned into a frog

Published 1 day ago
Source: metro.co.uk
People have been using ChatGPT for almost everything since its release in 2022, and now law enforcement has embraced it. However, it hasn’t all gone smoothly, with the police department in Heber City, Utah, forced to explain why a report declared that an officer had somehow shapeshifted into a frog. (Picture: Dan Kitwood/Getty Images)
This fairytale-esque ending appears to have stemmed from some unrelated background chatter after the department began testing two pieces of AI software – Draft One and Code Four. Code Four was created by George Cheng and Dylan Nguyen, both 19-year-old MIT dropouts. The software creates police reports from body camera footage with the aim of reducing paperwork and allowing officers to spend more time in the field. (Picture: Matteo Della Torre/NurPhoto via Getty Images)
Police sergeant Rick Keel told FOX 13 News: ‘The body cam software and the AI report writing software picked up on the movie that was playing in the background, which happened to be The Princess and the Frog,’ referring to Disney’s 2009 musical comedy. ‘That’s when we learned the importance of correcting these AI-generated reports.’ (Picture: Getty Images)
But even a simple mock traffic stop meant to demonstrate what the tool is capable of didn’t go to plan and required numerous corrections, according to Fox 13. The AI generated a timestamped report from the mock traffic stop; the tool works in both English and Spanish and can also track tone and sentiment. Despite the drawbacks, Keel told the outlet that the tool is saving him ‘six to eight hours weekly now’, adding: ‘I’m not the most tech-savvy person, so it’s very user-friendly.’ (Picture: LIONEL BONAVENTURE/AFP via Getty Images)
Draft One was first announced by police tech company Axon – the same firm behind the Taser, a popular electroshock weapon – last year. The software uses OpenAI’s GPT large language models to generate entire police reports from body camera audio. But experts have warned that incorrect details could fall through the cracks in these important documents. Speaking to AP last year, American University law professor Andrew Ferguson said: ‘I am concerned that automation and the ease of the technology would cause police officers to be sort of less careful with their writing.’ (Picture: Getty)
Experts are concerned mistakes could slip into official records long before anyone spots them. An officer turning into a frog is easy to catch because it’s absurd, but the risk is less funny when AI mishears a street name, misunderstands a chaotic scene or incorrectly notes a detail that could later be used in court. Critics also argue that the tool could introduce deniability, making officers less accountable when mistakes slip through. The Heber City police department has yet to decide whether it will keep using Draft One. (Picture: Getty)