


Fourth of four parts
Elaine Herzberg was walking a bicycle across the street one night in Tempe, Arizona, when an Uber vehicle crashed into her and killed her — one of more than 36,000 traffic deaths recorded in 2018.
What made her death different was that the Uber vehicle was part of the company’s self-driving experiment. Herzberg became the first known pedestrian killed by an AI-powered robot car.
It was seen as a watershed moment, comparable to the first known automobile crash death in the late 1800s, and it made concrete what until then had been largely hypothetical questions about killer robots.
Five years on, AI has gone mainstream, with applications in everything from medicine to the military. That has produced intense hand-wringing in some quarters about the pace of change and the danger of dystopian, movie-style runaway AI, with leading tech figures estimating there is a significant chance the technology will eradicate humanity.
At the same time, AI is already at work in doctors’ offices, helping with patient diagnosis and monitoring. Studies have found AI can match or beat dermatologists at diagnosing skin cancer. And a new app hit the market this year that uses AI to help people with diabetes predict their glucose response to foods.
In short, AI is already saving countless lives, tipping the balance sheet clearly to the plus side.
“We’re far, far in the positive,” said Geoff Livingston, founder of Generative Buzz, which helps companies use AI.
Take traffic, where driver assistance systems that keep vehicles in their lane, warn of an impending collision and, in some cases, brake automatically are already in use in millions of vehicles. Once most vehicles on the road have them, the systems could save nearly 21,000 lives a year in the U.S. and prevent nearly 1.7 million injuries, according to the National Safety Council.
The benefits may be even bigger in medicine, where AI isn’t so much replacing doctors as assisting them in decision-making — what is sometimes called “intelligent automation.”
In their 2021 book by that name, Pascal Bornet and his fellow researchers said intelligent drones are delivering blood supplies in Rwanda, and intelligent automation, or IA, applications are diagnosing burns and other skin wounds from smartphone photos of patients in countries with doctor shortages.
Counting traffic safety applications as well, Mr. Bornet calculated that intelligent automation could prevent 10% to 30% of early deaths and extend healthy life expectancy. With some 60 million deaths worldwide each year, that works out to between 6 million and 18 million early deaths that could be prevented annually.
Then there are the smaller enhancements.
AI personal trainers can improve home workouts. AI can be used in food safety, flagging harmful bacteria. Scientists say it can make farming more efficient, reducing food waste. And the U.N. says AI has a role to play in combating climate change by providing earlier warnings of looming weather-related disasters and helping to cut greenhouse gas emissions.
Of course, AI is being used on the other side of the equation, too.
Israel is reportedly using AI to select retaliation targets in Gaza after Hamas’s murderous terror attack in October. Habsora, Hebrew for “the Gospel,” can produce far more targets than human analysts could generate on their own. It’s a fascinating high-tech response to Hamas’s initial low-tech assault, in which terrorists used paragliders to cross the border.
Farther north, the Russia-Ukraine war has turned into something of an AI arms race, with autonomous Ukrainian drones striking Russian targets. Russia, meanwhile, uses AI to try to win the propaganda battle, and Ukraine uses AI in its response.
Coming up with an exact scorecard of deaths caused versus lives saved is impossible, experts said. That’s at least in part because so much AI use is hidden.
“Frankly, I haven’t a clue how one would do such a tally with any confidence,” said one researcher.
But several agreed with Mr. Livingston that the positive side is winning right now. So why the lingering reticence?
Experts said scary science fiction scenarios have something to do with it. Clashes between AI-powered armies and underdog humans are a staple of the genre, though even less apocalyptic versions pose uneasy questions about human-machine interactions.
Big names in tech have fueled the fears with dire predictions.
Elon Musk, the world’s richest man, has been on a bit of a doom tour in recent years, warning of the possibility of “civilization destruction” from AI. And 42% of CEOs at a Yale CEO Summit in June said AI could eradicate humanity within five to 10 years, according to data shared with CNN.
An incident in May brought those concerns home.
Col. Tucker ‘Cinco’ Hamilton, the Air Force’s chief of AI test and operations, was delivering a presentation in London on future combat capabilities when he mentioned a simulated test in which an AI-enabled drone was asked to destroy missile sites. The AI was told that a human had final go/no-go authority, but it was also instructed that destroying the missile sites was the priority.
After the human blocked several strikes, the AI in the simulation got fed up.
“So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” Col. Hamilton said.
Fear and outrage ensued, with some outlets seemingly not caring that the colonel said it was a simulation.
And, it turns out, the Air Force says it wasn’t even a real simulation, but rather a “thought experiment” that Col. Hamilton was trying out on the audience.
The colonel, in a follow-up piece for the Royal Aeronautical Society in London, took the blame for the snafu, and said the story took off because people were primed by pop culture to expect “doom and gloom.”
“It is not something we can ignore, nor is it something that should terrify us. It is the next step in developing systems that support our progress as a species. It is just software code – which we must develop ethically and deliberately,” he wrote.
He gave an example of the Air Force using AI to help aircraft fly in formation. If the AI ever suggests a flight maneuver that is too aggressive, the software automatically cuts out the AI.
That approach, he said, ensures the safe and responsible development of AI-powered autonomy and keeps the human operator as the preeminent control authority.
Lauren Kahn, a senior research analyst at Georgetown’s Center for Security and Emerging Technology, said that when she heard about Col. Hamilton’s presentation she wasn’t shocked, but rather relieved.
“While it seems very scary, I thought this would be a good thing if they were testing it,” she said.
The goal with AI tools, she said, should be to give them increasing autonomy within set parameters and boundaries.
“You want something that the human is able to understand how it operates sufficiently that they can rely on it,” she said. “But at the same time, you don’t want the human to be involved in every step. Otherwise, that defeats the purpose.”
She also said the extreme scenarios are less of a threat than “the very boring real harms it can cause today,” things like bias in algorithms or misplaced reliance.
“I’m worried about, say, if using an algorithm makes mishaps more likely because a human isn’t paying attention,” she said.
That brings us back to Herzberg’s death in 2018.
The National Transportation Safety Board’s review found that the autonomous driving system detected Herzberg 5.6 seconds before the crash but failed to identify her as a pedestrian and couldn’t predict her path. Only at the last moment did it determine that a crash was imminent, and it relied on the human operator to take control.
But Rafaela Vasquez, the 44-year-old woman behind the wheel, had spent much of the Volvo’s ride looking down at her cell phone, where she was streaming a television show, reportedly the talent show “The Voice,” in violation of the company’s rules.
A camera in the SUV showed she was looking down for most of the six seconds before the crash, looking up only a second before hitting Herzberg. She began turning the steering wheel just two-hundredths of a second before impact, and the Volvo plowed into Herzberg at 39 miles an hour.
In a plea deal, Vasquez was convicted of one count of endangerment, the Arizona version of culpable negligence, and sentenced to three years of probation.
NTSB Vice Chairman Bruce Landsberg said there was blame to go around, but was particularly struck by the driver’s complacency in trusting the AI. Vasquez spent more than a third of the time on the trip looking at her phone, and in the three minutes before the crash glanced at the phone 23 times.
“Why would someone do this? The report shows she had made this exact same trip 73 times successfully. Automation complacency!” Mr. Landsberg said.
Put another way, the problem wasn’t the technology but the misplaced reliance people put on it.
Mr. Livingston, the AI marketing expert, said that’s the more realistic danger lurking in AI right now.
“The caveat isn’t that the AI will turn on humans, it’s humans using AI on other humans,” he said.
• Stephen Dinan can be reached at sdinan@washingtontimes.com.