Self-driving cars promise to revolutionize driving by removing human error from the equation altogether: no more drunk or tired driving, significant reductions in traffic, and even the possibility of being productive on the commute to work. But what are the consequences of relying on algorithms and hardware to accomplish this vision? Software can be hacked or tricked, and electrical components can be damaged. Can we really argue that it is safer to relinquish control to a computer than to operate a motor vehicle ourselves? Ultimately, this question cannot be answered with confidence until we conduct far more testing. Data analysis is key to understanding how these vehicles will perform, and specifically how they will anticipate and react to the very human error they exist to eliminate. But "the verdict isn't out yet" is hardly a satisfying answer, and for this reason I would argue that despite concerns about "fooling" self-driving cars, this technology is safer than human drivers.
The article "Slight Street Sign Modifications Can Completely Fool Machine Learning Algorithms" details how researchers tricked a computer vision algorithm into misinterpreting street signs. They achieved these results by training their classifier program on public road sign data and then adding new entries of modified street signs paired with labels of their own choosing. Essentially, the computer is "taught" how to analyze a specific image and, after numerous trial runs, eventually learns to recognize recurring elements in specific street signs and match them with a specific designation, or classifier. The article mainly serves to explore how these machines could be manipulated, but only briefly touches upon a key safety feature that would prevent real-world trickery: redundancy. Cross-referencing the GPS locations of signs with data from past drivers could ensure that a sign misread by the computer vision algorithm is still handled correctly.
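The redundancy idea can be sketched in a few lines. This is a hypothetical illustration, not code from the article; the function names, the coordinate-grid scheme, and the sign map are all my own assumptions about how such a cross-check might work:

```python
# Hypothetical sketch: cross-check a vision classifier's output against
# a GPS map of sign types recorded by past vehicles at the same spot.

KNOWN_SIGNS = {
    # (lat, lon) snapped to a coarse grid -> sign type seen here before
    (47.6205, -122.3493): "STOP",
    (47.6210, -122.3490): "SPEED_LIMIT_45",
}

def grid_cell(lat, lon, precision=4):
    """Snap GPS coordinates to a coarse grid cell for map lookup."""
    return (round(lat, precision), round(lon, precision))

def resolve_sign(vision_label, lat, lon):
    """Trust the camera only when the map agrees or has no record.

    If a sticker-modified STOP sign is misread as a speed limit sign,
    but past drivers recorded a STOP sign at this location, fall back
    to the map entry instead of the camera's guess.
    """
    map_label = KNOWN_SIGNS.get(grid_cell(lat, lon))
    if map_label is None:
        return vision_label   # no prior data: trust the camera
    if map_label == vision_label:
        return vision_label   # both sources agree
    return map_label          # conflict: prefer the historical record

print(resolve_sign("SPEED_LIMIT_45", 47.6205, -122.3493))  # prints STOP
```

Even a crude check like this would defeat the attack described in the article, because the adversary can alter the physical sign but not the accumulated history of what other vehicles saw at that location.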
The article "The Long, Winding Road for Driverless Cars" focuses less on the safety ramifications of self-driving vehicles and more on how likely it is that we will see fully autonomous cars in the near future. The author suggests that marketing the self-driving abilities of current vehicles (such as Teslas) as "autopilot" might be misleading, since these systems still require an attentive human behind the wheel. She also presents a higher hurdle: to replace human drivers, self-driving vehicles cannot merely be "better" than human drivers; they must be near perfect. While these are all valid concerns, they should ultimately benefit consumers. Mistrust of new technology means that companies and regulatory authorities will run rigorous trials to ensure these vehicles are ready for the road and to maintain consumer confidence. We have already accepted many aspects of car automation (automatic braking when an object is detected, hands-free parallel parking, and lane detection) to make our lives easier, and perhaps in the near future self-driving cars will be fully tested and ready for mass deployment.