News reports were everywhere that an autonomous taxi operated by a company called Cruise was driving through San Francisco with no headlights. The local constabulary tried to stop the vehicle and were a bit thrown that there was no driver. Then the car moved beyond an intersection and pulled over, further bemusing the officers.
The company says the headlights being off was down to human error, and that the car had stopped at the light and then moved to a safe stopping spot by design. This raises the question of how people, including police officers, will interact with robot vehicles.
For Cruise’s part, they have a video showing law enforcement and others how to approach one of their vehicles (see the second video, below). You have to wonder how many patrol cops have seen it, though. We don’t think we’d get away with saying, “We mentioned our automatic defense system in our YouTube video.”
Honestly, we aren’t sure that in an emergency we’d want to dig through a list of autonomous vehicle companies to find the right number to call. At the very least, you’d expect the number to be prominently displayed on the vehicle. Why the lights didn’t turn on automatically is an entirely different question.
We can’t imagine that regulations won’t be forthcoming as autonomous vehicles catch on. Just as fire departments have access to Knox boxes so they can let themselves into buildings, we’re pretty sure a failsafe code that stops a vehicle dead and unlocks its doors, regardless of brand, would be a good idea. Sure, a hacker could use it for bad purposes, but they can also break into Knox boxes. You’d just have to make certain the stop code’s security was robust.
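What might “robust” mean for a stop code? A static magic number painted on a card would be trivial to replay, so any real scheme would presumably involve something like a signed, time-limited command. Here’s a minimal sketch of that idea in Python; everything in it (the vehicle ID, the shared key, the 30-second freshness window) is hypothetical and just illustrates the concept, not anything Cruise or any regulator actually uses.

```python
# Illustrative sketch only: a brand-agnostic emergency-stop command hardened
# with an HMAC signature and a freshness window instead of a static code.
import hmac
import hashlib
import time

SHARED_SECRET = b"provisioned-by-regulator"  # hypothetical per-vehicle key
MAX_AGE_SECONDS = 30  # stale commands are rejected, limiting replay attacks


def make_stop_token(vehicle_id: str, timestamp: float,
                    secret: bytes = SHARED_SECRET) -> str:
    """Sign the vehicle ID and timestamp so the command can't be forged."""
    message = f"{vehicle_id}:{timestamp:.0f}".encode()
    return hmac.new(secret, message, hashlib.sha256).hexdigest()


def verify_stop_command(vehicle_id: str, timestamp: float, token: str,
                        secret: bytes = SHARED_SECRET) -> bool:
    """Accept the stop only if the signature matches and the command is fresh."""
    if abs(time.time() - timestamp) > MAX_AGE_SECONDS:
        return False
    expected = make_stop_token(vehicle_id, timestamp, secret)
    return hmac.compare_digest(expected, token)


if __name__ == "__main__":
    now = time.time()
    token = make_stop_token("CRUISE-0042", now)
    print(verify_stop_command("CRUISE-0042", now, token))     # True
    print(verify_stop_command("CRUISE-0042", now, "forged"))  # False
```

The real engineering problems (key distribution to every police department, revocation, what happens when the radio link is jammed) are exactly the kind of thing regulation would have to sort out.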
What do you think? What happens when a robot car gets pulled over? What happens when a taxi passenger has a heart attack? We’ve talked about the issues surrounding self-driving anomalies before. Some of the questions have no easy answers.