This is the Virginia Scope daily newsletter covering Virginia politics from top to bottom. Please consider becoming the ultimate political insider by supporting non-partisan, independent news as a paid subscriber to this newsletter today.
Have a tip? You can reply to this email or reach out to me directly at Brandon@virginiascope.com.

Virginia legislation calls for human oversight of AI-based court decisions
By Stacy Watkins, VCU Capital News Service
RICHMOND, Va. — Virginia lawmakers want to regulate the use of artificial intelligence-based tools in the criminal justice system.
Del. Cliff Hayes Jr., D-Chesapeake, introduced House Bill 1642, which reinforces human oversight in the criminal justice system while allowing AI to play a supporting role.
AI-generated recommendations cannot be the sole basis for key decisions related to pre-trial detention or release, prosecution, adjudication, sentencing, probation, parole, correctional supervision, or rehabilitation. Any use of AI in those decisions can be subject to a legal challenge or objection, according to the bill.
Hayes has worked in technology management for three decades and has witnessed the rapid advancement of AI. Dependence on AI has accelerated recently, he said.
“AI definitely offers great benefits,” Hayes said. “But there’s another side to that coin. In some cases, we know AI, when it’s not accurate, can be extremely damaging and harmful.”
Hayes questioned whether the government should be in this “sandbox” experimenting with people's court cases, which could have a significant effect on their livelihood.
“I think we need to continue to have human oversight in those cases, qualified human oversight,” Hayes said. “The people who today are qualified to make those judgments, those decisions, should be the same individuals to make those determinations, and not rely 100% on AI.”
At least 26 states allow law enforcement to run facial recognition searches against driver’s license and identification databases, according to data from the Center on Privacy & Technology at Georgetown Law. Virginia law currently allows for use of facial recognition technology.
Sixteen states allow the FBI to use the technology to find suspects in a “virtual lineup,” according to the data. Over 117 million American adults are included in these face recognition networks.
Black and Asian people are more likely to be misidentified than white people, at a rate ranging from 10 to 100 times higher, according to a study done by the National Institute of Standards and Technology.
“We’re a system that disproportionately incarcerates people of color, especially Black men,” said Steven Keener, assistant professor of criminology at Christopher Newport University, and director of the university’s Center for Crime, Equity, and Justice Research and Policy.
The goal of AI software is to reduce bias and racism in the system, according to Keener. But research has found many of these AI tools and algorithms are biased, he said. The data going into the software to create AI tools and algorithms could be biased, which impacts the data output used to make important decisions such as who is eligible for bail.
“What data set are you using to build the algorithm that determines who is safe and who is unsafe?” Keener said.
AI systems are not yet capable enough to make such tough decisions by themselves, according to Sanmay Das, a computer science professor at Virginia Tech, and associate director of AI for social impact at the Sangani Center for Artificial Intelligence and Data Analytics.
“I think the key point over there is accountability, right?” Das said.
Although humans may frequently use machines to help make decisions, ultimately a human is accountable, Das said.
“If you did not have human oversight, it’s really easy to blame the machine, or the algorithm,” he said.
AI cannot replace human bureaucracies, even though some tools can be helpful, Das said.
Errors made at the speed and scale at which AI operates, in decisions that involve thousands of people, could be catastrophic, according to Das.
“I think AI tools can be enormously helpful in many of these kinds of domains,” Das said. “But, I think that we’re going to need to deal with this challenge that people may be tempted to use them and apply them at really grand scales in order to save human time, to save human effort.”
The bill passed both chambers with just one dissenting vote. The governor has until March 24 to review, amend, sign or veto the legislation.
This bill doesn’t do much, if anything. By its terms, objections to the use of AI in criminal proceedings derive from other sources of law, and ultimately, judges make these calls, with or without the assistance of AI. I guess saying that the AI’s recommendation can’t be the sole basis for a decision is substantive on some level, but who’s doing that now? There’s not some magic 8-ball that decides who gets a bond and who doesn’t.
I also think it’s pretty sloppy reporting to cite a 2019 study on bias in facial recognition technology without disclosing in the article that the study was published over five years ago.