
Why autonomous cars won’t be autonomous

New rules for “self-driving cars” in California highlight a glaring misconception about how AI works.

An educated public understands that autonomous vehicles are amazing, but also that they still cannot fully take control of a passenger vehicle with no human behind the wheel.

It’s only a matter of time before today’s “safety drivers” — the humans who sit idle behind the wheel in case the artificial intelligence fails — get out of the car and leave the driving to the machines, right?

Whoa, slow your roll there, self-driving car narrative.

California just approved licenses that allow self-driving cars to operate with no human driver behind the wheel, or even with no human in the vehicle at all (after dropping off a passenger, or for deliveries), with one caveat: The self-driving car companies must monitor the vehicles and be able to take over driving remotely.

This rule matches those in other parts of the country where no-driver autonomous vehicles have been allowed. There’s no human in the car, or at least not in the driver’s seat, but remote monitoring and remote control make that possible, safe and legal.

I imagine NASA-like control rooms filled with operators, screens and traffic reports, where maybe a few dozen people monitor a few hundred vehicles and take control when those vehicles malfunction, freeze up or confront complex driving scenarios.

It’s also possible that remote monitoring could be done in call center-like cubicle farms.

Here are the autonomous vehicle companies I’m aware of that are public about building, or have already built, such control rooms:

  • Udelv
  • Nissan
  • Waymo
  • Zoox
  • Phantom Auto
  • Starsky Robotics

Which of the other autonomous vehicle companies will have such control rooms? All of them.

Come to think of it, it’s not just the autonomous cars that need human help when the AI isn’t intelligent enough to handle unexpected situations.

The world of AI: It’s made out of people

To compensate for the inadequacy of AI, companies often resort to behind-the-scenes human intervention.

The idea is that human supervisors make sure the AI functions well while also playing a teaching role. When the AI fails, the human intervention guides tweaks to the software. The explicit goal of this iterative process is that eventually the AI will be able to function without supervision.

Remember Facebook M? That was a virtual assistant that lived on Facebook Messenger. The idea behind the project was to provide a virtual assistant that could interact with users as a human assistant might. The “product” was machine automation. But the reality was a phalanx of human helpers behind the scenes supervising and intervening. Those people were also there to train the AI on its failures, so that it could eventually operate independently. Facebook imagined a gradual phaseout of human involvement until a point of total automation.

That point never arrived.

The problem with this approach seems to be that once humans are inserted into the process, the expected self-obsolescence never happens.

In the case of Facebook M, Facebook expected the AI to evolve beyond human help; instead, it had to cancel the project entirely.

Facebook is quiet about the initiative, but it probably figured out what I’m telling you here and now: AI that requires human help now will probably require human help indefinitely.

Many other AI companies and services function like this, where the value proposition is AI but the reality is AI plus human helpers behind the scenes.

In the world of AI-based services, vast armies of humans toil away to compensate for the inability of today’s AI to function as we want it to.

Who’s doing this work? Well, you are, for starters.

Google has for nine years used its reCAPTCHA system to authenticate users, who are asked to prove they’re human.

That proof involves a mixture of actual and fake tests, where humans recognise things that computers cannot.

At first, Google used reCAPTCHA to help computers perform optical character recognition (OCR) on books and back issues of The New York Times.

Later, it helped Google’s AI to read street addresses in Google Street View.

Four years ago, Google turned reCAPTCHA into a system for training AI.

Most of this training is for recognising objects in photographs — the kinds of objects that might be useful for self-driving cars or Street View. One common scenario is that a photograph that includes street signs is divided into squares, and users are asked to “prove they’re human” by clicking on every square that contains a street sign. What’s really happening here is that Google’s AI is being trained to know exactly which parts of the visual clutter are street signs (which must be read and taken into account while driving) and which parts are just visual noise that a navigation system can ignore.
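To make that concrete, here’s a minimal sketch of how such clicks could be aggregated into training labels. It’s purely illustrative: the 4x4 grid, the 70 per cent vote threshold and the function names are my assumptions, not a description of Google’s actual pipeline.

```python
# Illustrative only: turning crowd clicks on a CAPTCHA-style image grid into
# weak labels for a "street sign" classifier. The grid size, threshold and
# names are assumptions for the example, not Google's real system.
from collections import Counter
from typing import Dict, Iterable, Set

GRID_CELLS = 16        # assume the photo is cut into a 4x4 grid of squares
VOTE_THRESHOLD = 0.7   # label a square "street sign" if 70% of users clicked it

def label_cells(clicks_per_user: Iterable[Set[int]]) -> Dict[int, str]:
    """Aggregate many users' clicks into one label per grid square."""
    users = list(clicks_per_user)
    votes = Counter(cell for clicks in users for cell in clicks)
    labels = {}
    for cell in range(GRID_CELLS):
        share = votes[cell] / len(users) if users else 0.0
        labels[cell] = "street sign" if share >= VOTE_THRESHOLD else "background"
    return labels

# Three users solve the same challenge; squares 5 and 6 contain the sign.
print(label_cells([{5, 6}, {5, 6, 10}, {5, 6}]))
```

The point is that every “prove you’re human” click doubles as a free, labelled training example.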

But you’re an amateur (and unwitting) AI helper. Professional AI trainers and helpers all over the world spend their workdays identifying and labelling virtual objects or real-world objects in photographs. They test and analyse and recommend changes in algorithms.

The law of unintended consequences is a towering factor in the development of any complex AI system. Here’s an oversimplified example.

Let’s say you programmed an AI robot to make a hamburger, wrap it in paper, place it in a bag, then give it to the customer, the latter task defined as putting the bag within two feet of the customer in a way that enables the customer to grasp it.

Let’s say then that in one scenario, the customer is on the other side of the room and the AI robot launches the bag at high speed at the customer. You’ll note that the AI would have performed exactly as programmed, but not at all as a human would have. Additional rules are required to make it behave in a civilised way.
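For illustration only, here’s how that gap between “as programmed” and “as intended” might look in code. Every rule, number and name below is invented for the example; it isn’t anyone’s real robot software.

```python
# Toy example: a literal task specification can be satisfied in ways no human
# would accept. All rules and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class Delivery:
    distance_ft: float   # how far the bag ends up from the customer
    speed_mph: float     # how fast the bag is moving when it arrives
    graspable: bool      # can the customer take hold of it?

def naive_spec(d: Delivery) -> bool:
    # The original rule: bag within two feet of the customer, graspable.
    return d.distance_ft <= 2.0 and d.graspable

def patched_spec(d: Delivery) -> bool:
    # One "course correction": also require a gentle hand-off.
    return naive_spec(d) and d.speed_mph <= 3.0

hand_off = Delivery(distance_ft=1.0, speed_mph=1.0, graspable=True)
thrown = Delivery(distance_ft=1.5, speed_mph=40.0, graspable=True)

for name, d in [("hand-off", hand_off), ("thrown", thrown)]:
    print(name, "naive:", naive_spec(d), "patched:", patched_spec(d))
# The thrown bag satisfies the naive spec but fails the patched one -- the kind
# of rule a human employee never needs to be told.
```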

This is a vivid and absurd example, but the training of AI seems to involve an endless series of these types of course corrections because, by human definition, AI isn’t actually intelligent.

What I mean is: A person who behaved like AI would be considered an idiot. Or we might say an employee who threw a bag of hamburgers at a customer lacked common sense. In AI, common sense doesn’t exist.

Humans are required to painstakingly programme a common-sense response into every conceivable event.

Why AI will continue to need human help

I’m doubtful that self-driving car companies will be able to move beyond the remote control-room scenario. People will supervise and remote-control autonomous cars for the foreseeable future.

One reason is to protect the cars from vandalism, which could become a real problem. Incidents of people attacking or deliberately crashing into self-driving cars are reportedly on the rise.

I also believe that passengers will be able to press a button and talk to someone in the control room, for whatever reason.

One reason might be a nervous passenger: “Uh, control room, I’m going to take a nap now. Can you keep an eye on my car?”

But the biggest reason is that the world is big and complex. Weird, unanticipated things happen. Lightning strikes. Birds fly into cameras. Kids shine laser pointers at sensors. Should any of these things happen, self-driving cars can’t be allowed to freak out and perform randomly. They’re too dangerous.

I believe it will be decades before we can trust AI to be able to handle every conceivable event when human lives are at stake.

Why we believe artificial intelligence is artificial and intelligent

One phenomenon that feeds the illusion of AI supercompetence is called the Eliza effect, which emerged from an MIT study in 1966. Test subjects using the Eliza chatbot reported that they perceived empathy on the part of the computer.

Nowadays, the Eliza effect makes people feel that AI is generally competent, when in fact it’s only narrowly so. When we hear about a supercomputer winning at chess, we think, “If they can beat smart humans at chess, they must be smarter than smart humans.” But this is the illusion. In fact, chess-playing computers are “smarter” than people at one thing, whereas people are smarter than that chess-optimised computer at a million things.

Self-driving cars don’t do one thing. They do a million things. That’s easy for humans, hard for AI.

We don’t overestimate AI — AI is in fact amazing and groundbreaking and will transform human life, mostly for the better.

What we consistently do is underestimate human intelligence, which will remain vastly superior to computers at human-centric tasks for the remainder of our lifetimes, at least.

The evolution of self-driving cars is a perfect illustration of why the belief that machines will function on their own in complex ways is mistaken.

They’ll function. But with our constant help.

AI needs humans to back it up when it isn’t intelligent enough to do its job.

 

 

Originally published on Computerworld. Reprinted with permission from IDG.net. Story copyright 2024 International Data Group. All rights reserved.