A prolific cybersecurity research firm says it has managed to make Tesla's self-driving feature deviate off course by sticking three small stickers on the road surface.
Keen Lab, a two-time honoree of Tesla's "bug bounty" hall of fame program, said it found two ways to trick Autopilot's lane recognition by altering the physical road surface.
The first attempt to confuse Autopilot placed blurring patches on the left-lane line, an approach the team said was too difficult for someone to actually deploy in the real world and too easy for Tesla's computer to recognize.
“It is difficult for an attacker to deploy some unobtrusive markings in the physical world to disable the lane recognition function of a moving Tesla vehicle,” Keen said.
The researchers said they suspected Tesla also handled this situation well because it has already added many "abnormal lanes" to Autopilot's training set. This gives Tesla vehicles a good sense of lane direction even in poor lighting or inclement weather, they said.
Undeterred by the low plausibility of the first idea, Keen then set out to make Tesla's Autopilot mistakenly detect a traffic lane where none actually existed.
The researchers painted three tiny squares in the traffic lane to mimic merge striping, causing the car to veer into oncoming traffic in the left lane.
“Misleading the autopilot vehicle to the wrong direction [of traffic] with some patches made by a malicious attacker is sometimes more dangerous than making it fail to recognize the lane,” Keen said.
“If the vehicle knows that the fake lane is pointing to the reverse lane, it should ignore this fake lane and then it could avoid a traffic accident.”
Tesla, for its part, dismissed the findings: "This is not a real-world concern given that a driver can easily override Autopilot at any time by using the steering wheel or brakes and should be prepared to do so at all times."