Insight

Can we trust smart technology?

Automation is convenient and seemingly brilliant, but does that mean we should be handing over all responsibility to the machines?

Did we learn nothing from Stanley Kubrick and Arthur C. Clarke’s 1968 sci-fi epic, 2001: A Space Odyssey?

In the film, astronauts on a mission to Jupiter discover that HAL 9000, the artificial intelligence computer that controls and automates every function on the spacecraft, has started glitching. The astronauts get worried, HAL gets paranoid and – *spoiler alert* – kills nearly everyone on the ship.

The moral of the story is that when lives depend on fully automated systems, it’s a good idea to keep an eye on them.

How do you use something that’s fully automatic, anyway? What is the responsibility of the user? Can we just hand over control to the bots? Recent events in the news suggest that when it comes to using our automatic products and features, some people are doing it wrong.

Petnet is a $149 cloud-controlled smart feeder for dogs and cats that automatically dispenses pet food on a schedule. The feeder connects to the Internet over your home Wi-Fi network, and you control it with an iOS app, which can even dispense treats manually.

Petnet also ties into a pet food delivery service, which can be automated through Amazon’s Dash programme. You don’t have to order pet food; the feeder will do it for you. Crucially, Petnet monitors your pet’s food and water consumption to make sure it doesn’t overeat.

Sounds awesome, right?

But when a Google-provisioned service that the Petnet cloud depends on went down, some 10 percent of Petnet feeders stopped working properly for about 10 hours. Although the company claims automated feeding schedules were unaffected, users lost the ability to feed manually or change schedules. Some pets went hungry.
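The outage illustrates a basic design principle: a device should make its critical decisions locally and treat the cloud as an optional update channel, so a remote failure degrades features rather than disabling the device. Here is a minimal sketch of that pattern – all class and method names are hypothetical, not Petnet’s actual software:

```python
# Hypothetical sketch of a cloud-connected feeder with a local failsafe.
# None of these names correspond to Petnet's real firmware or API.
class SmartFeeder:
    def __init__(self, schedule_hours, cloud_available=lambda: True):
        # Cache the feeding schedule on the device itself, so feeding
        # continues even if the cloud service goes down.
        self.local_schedule = list(schedule_hours)
        self.cloud_available = cloud_available

    def sync_schedule(self, cloud_schedule):
        # Pull updates only when the cloud is reachable; otherwise keep
        # the last known-good local copy. Returns True if synced.
        if self.cloud_available():
            self.local_schedule = list(cloud_schedule)
            return True
        return False

    def should_feed(self, now_hour):
        # The feed/don't-feed decision is purely local: no network
        # round-trip sits on the critical path.
        return now_hour in self.local_schedule


# Simulate a cloud outage: the sync fails, but feeding still works
# from the cached schedule.
feeder = SmartFeeder(schedule_hours=[8, 18], cloud_available=lambda: False)
feeder.sync_schedule([7, 12, 19])  # returns False: cloud is down
print(feeder.should_feed(8))       # True  – cached schedule still applies
print(feeder.should_feed(12))      # False – the update never arrived
```

In this design, losing the cloud costs you remote control and schedule changes – roughly what Petnet’s customers experienced – but never the scheduled feeding itself.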

Petnet sent an email to customers, advising them to “please ensure that your pets have been fed manually.” But many customers rely on Petnet to feed their pets while they’re away on holiday.

This is just one example of how we’re hurtling towards a future where many of our appliances, equipment, vehicles, gadgets and services are completely automated, controlled remotely by artificial intelligence or locally by algorithms.

It’s important for us, as consumers and users, to learn how to safely incorporate these technologies into our lives. What these events tell us is that the right way to use automation is to treat it like the convenience it is, and not a replacement for human awareness, monitoring and judgment.

Pets can’t be left in the care of a cloud service entirely. Just as in the days before automatic pet feeders, a human guardian who cares must be available to check on, feed, or pet-sit any pet when we go on holiday. We can’t turn pets’ lives and well-being completely over to an app.

Climate control, and by extension smoke and carbon monoxide detection, can’t be left entirely in the hands of the machines. The elderly, or others who might not be able to take care of themselves, can’t be placed in the hands of algorithms, because sometimes algorithms go haywire.

What we need is a set of cultural norms that make it clear to people that automating important things doesn’t and can’t replace a human being paying attention.

Because, in the words of HAL 9000, any harm that results from turning our loved ones’ safety over to the machines “can only be attributable to human error.”

Words of wisdom from HAL.


Originally published on IDG News Service. Reprinted with permission from IDG.net. Story copyright 2017 International Data Group. All rights reserved.