Artificial intelligence promises to increase efficiency in the workplace, but AI is also streamlining another activity: cyberattacks.
Cyberattacks on local government networks surged 42% in 2025, according to a Motorola Solutions analysis, a spike the company attributed to the proliferation of AI.
“In the old days, attackers had to use their noggins to create and develop systems of attack,” said David Utzke, a cybersecurity expert. “With artificial intelligence technology, it’s being done for them.”
Utzke, a former cybercrimes technologist for the U.S. Treasury and current CEO and chief technology officer of MyKey Technologies, said some of the largest AI developers neglected to install guardrails that would prevent users from exploiting network vulnerabilities.
At the same time, Utzke said, governments often grant staff access to extensive data archives beyond the scope of their immediate roles — meaning a single compromised account can open numerous additional doors for an attacker.
“That’s one of the biggest problems in government,” Utzke said.
Multifactor authentication processes used by governments are also vulnerable, according to Utzke. MFA typically works by texting users a one-time code or routing verification through an authenticator app, but “if your system is compromised, [the attacker has] access to all of that,” Utzke said.
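The codes behind app-based MFA are standardized: an authenticator app derives each six-digit code from a shared secret and the current time. The sketch below follows the published standards (RFC 4226 HOTP and RFC 6238 TOTP) and illustrates Utzke's point — the code proves possession of the secret and the clock, so an attacker who has compromised the device holding that secret can generate or read the codes too.

```python
import hashlib
import hmac
import struct
import time


def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password for a given counter."""
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)


def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based OTP: HOTP over the current 30-second window."""
    return hotp(secret, int(time.time()) // step, digits)
```

Anyone — user or attacker — holding `secret` at the same moment computes the same `totp()` value, which is why a compromised endpoint defeats this form of MFA.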
CAPTCHA, the image- and puzzle-based test meant to weed out bots, is also largely compromised because AI can now read the images, according to Utzke.
“All of these things are broken, which is making ID [identification] one of the biggest vulnerabilities in getting into systems these days,” he said.
Utzke pointed to the recent cyberattack on medical technology company Stryker, which he said was made possible by stolen employee credentials.
Utzke said the most impactful guardrail local governments can implement is a zero-trust architecture, which limits the scope of employee access to files and data.
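Conceptually, zero trust replaces broad, standing access with deny-by-default checks on every request: identity is re-verified and the resource must fall within the requester's narrowly scoped role. A minimal sketch of that policy logic (the role names, resources, and helper are invented for illustration):

```python
from dataclasses import dataclass

# Hypothetical role-to-resource map: each role is granted only the
# narrow set of datasets its duties require (least privilege).
ROLE_SCOPES = {
    "permits_clerk": {"permit_records"},
    "payroll_admin": {"payroll_db"},
}


@dataclass(frozen=True)
class AccessRequest:
    user: str
    role: str
    resource: str
    mfa_verified: bool = False


def is_allowed(req: AccessRequest) -> bool:
    """Deny by default: the request must carry fresh identity proof,
    and the resource must sit inside the requester's role scope."""
    if not req.mfa_verified:
        return False
    return req.resource in ROLE_SCOPES.get(req.role, set())
```

Under this model, a single stolen credential only unlocks the handful of resources scoped to that one role, rather than the "numerous additional doors" Utzke describes.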
More robust cybersecurity measures are in the works to combat the new threats, according to Utzke. But right now, too many people have too much access to too much data, and exploiting that has perhaps never been easier, he said.
“They don’t even have to type anything,” Utzke said of cyberattackers, who can use voice commands available through AI tools like ChatGPT. “They don’t have to even touch a keyboard anymore. You can be mobile, and that’s the hard part of catching these folks.”