June 22, 2018

Reno, Nevada

Steven J. Klearman, Attorney

Tesla’s AutoPilot and Human Behavior: A Risky Mix?


The overall safety of Tesla’s Autopilot system is under scrutiny as the company faces yet another high-profile crash involving its driver-assist technology. In Utah, the driver of a $100,000 Tesla Model S crashed into the back of a stopped fire truck while the car was in Autopilot, a semi-autonomous driving mode designed to take over control of the car.

The driver admitted to looking at her phone and taking her hands off the steering wheel while the car traveled 60 miles per hour, behavior the automotive industry considers unacceptable but that may be unwittingly encouraged by the new technology.

Currently, Autopilot, Tesla’s marketing term for an array of driving capabilities in the self-driving category, is promoted as being able to do everything from matching speed to traffic patterns to self-parking after exiting a freeway.

This semi-autonomous mode, which drivers can turn on and off, manages most of the driving and cues the driver to intervene when it encounters unfamiliar situations.

Even so, according to Tesla CEO Elon Musk, drivers must stay engaged and keep their hands on the steering wheel.

In spite of this company line, videos posted online show drivers seated in the back seat of moving Teslas, a shocking breach of trust that Musk has not addressed.

Both the National Highway Traffic Safety Administration and the National Transportation Safety Board are looking into the Utah crash. The investigations come on the heels of a deadly Tesla crash in Mountain View, Calif., in which the company’s car hit a center median after Autopilot repeatedly alerted the driver to take over control of the vehicle.

The driver’s family says the system had acted erratically in the past along the same stretch of road.

So which is it: failed technology or fallible humans?

Bryan Reimer, a research scientist at the Massachusetts Institute of Technology, suggests it may be a little of both: “These systems are designed not only for ideal environments like good weather and clear lane markings, but also for rational behavior, and humans are predictably irrational. … The question is, does the automation work so well and require so little of you that in essence a new safety problem is created while solving for another?”
