Humans manage to drive in an acceptable fashion using just two eyes and two ears to sense the world around them. Autonomous vehicles are kitted out with sensor packages altogether more complex. They typically rely on radar, LiDAR, ultrasonic sensors, and cameras all working in concert to detect the road conditions ahead.
While humans are pretty wily and difficult to fool, our robot driving friends are less robust. Some researchers are concerned that LiDAR sensors could be spoofed, hiding obstacles and tricking driverless cars into crashes, or worse.
Where Did It Go?
LiDAR is so named because it is a light-based equivalent of radar technology. Unlike radar, though, it’s still typically treated as an acronym rather than a word in its own right. The technology sends out laser pulses and captures the light reflected back from the environment. Pulses returning from objects further away take longer to arrive back at the sensor, allowing it to determine the range of objects around it. LiDAR is typically considered the gold-standard sensor for autonomous driving, thanks to its higher accuracy and reliability compared to radar for object detection in automotive environments. Plus, it offers highly detailed depth data which is simply not available from a regular 2D camera.
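As a rough illustration of the time-of-flight principle (the numbers below are illustrative, not taken from any particular sensor), the range to an object falls straight out of the round-trip time of the pulse:

```python
# Rough illustration of LiDAR time-of-flight ranging; values are illustrative only.
SPEED_OF_LIGHT_M_PER_S = 299_792_458

def range_from_echo(time_of_flight_s: float) -> float:
    """Distance to a reflecting object, from the round-trip time of one laser pulse."""
    # The pulse travels out to the object and back, so halve the total path.
    return SPEED_OF_LIGHT_M_PER_S * time_of_flight_s / 2

# An echo arriving about 67 nanoseconds after the pulse fired sits roughly 10 m away.
print(f"{range_from_echo(67e-9):.2f} m")  # -> 10.04 m
```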
A new research paper has demonstrated an adversarial method of tricking LiDAR sensors. The method uses a laser to selectively hide certain objects from being “seen” by the LiDAR sensor. The paper calls this a “Physical Removal Attack,” or PRA.
The theory of the attack relies on the way LiDAR sensors work. These sensors typically prioritize stronger reflections over weaker ones, meaning a powerful signal sent by an attacker will be favored over a weaker reflection from the environment. LiDAR sensors, and the autonomous driving frameworks that sit atop them, also typically discard detections below a certain minimum distance from the sensor, usually somewhere on the order of 50 mm to 1000 mm away.
The attack works by firing infrared laser pulses that mimic the real echoes the LiDAR device is expecting to receive. The pulses are synchronized to the firing time of the victim LiDAR sensor, in order to control where the sensor perceives the spoofed points to be. When the attacker fires bright laser pulses at the sensor to imitate echoes, the sensor will typically ignore the weaker real echoes returning from an object in its field of view. That alone may be enough to hide the obstacle from the LiDAR sensor, but it would seem to create a spoofed object very close to the sensor. However, since many LiDAR sensors discard excessively close echo returns, the sensor will likely throw those points away entirely. If the sensor doesn’t discard the data, the filtering software running on its point cloud output may do so itself. The resulting effect is that the LiDAR shows no valid point cloud data in an area where it should be picking up an obstacle.
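To see why the combination of those two behaviors erases a point entirely, here’s a toy sketch in Python. The echo model, the intensities, and the 0.5 m cutoff are all made-up values for illustration, not the sensor’s logic or the paper’s code:

```python
# Toy model of why a spoofed bright echo can make a real obstacle vanish:
# the stronger spoofed return wins, then falls under the minimum-range cutoff
# and is thrown away, leaving no point at all.

MIN_RANGE_M = 0.5  # hypothetical near-range cutoff, in the 50 mm to 1000 mm ballpark

def resolve_return(echoes):
    """echoes: list of (range_m, intensity) pairs; returns one range in meters, or None."""
    if not echoes:
        return None
    range_m, _ = max(echoes, key=lambda e: e[1])        # strongest echo is kept
    return None if range_m < MIN_RANGE_M else range_m   # too-close returns are dropped

real_echo = (15.0, 0.3)    # genuine reflection from an obstacle 15 m away
spoofed_echo = (0.2, 5.0)  # attacker's bright pulse, timed to appear very close

print(resolve_return([real_echo]))                 # 15.0 -> obstacle detected
print(resolve_return([real_echo, spoofed_echo]))   # None -> obstacle "removed"
```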
The attack requires some knowledge, but is surprisingly practical to achieve. One need only do some research on the types of LiDAR used on autonomous vehicles to whip up a suitable spoofing apparatus. The attack works even if the attacker fires the false echoes towards the LiDAR from an angle, such as from the side of the road.
This has dangerous implications for autonomous driving systems relying on LiDAR sensor data. This technique could allow an adversary to hide obstacles from an autonomous car. Pedestrians at a crosswalk could be hidden from LiDAR, as could stopped cars at a traffic light. If the autonomous car does not “see” an obstacle ahead, it may go ahead and drive through – or into – it. With this technique, it’s harder to hide closer objects than those that are farther away. However, hiding an object even for a few seconds might leave an autonomous vehicle with too little time to stop when it finally detects a hidden obstacle.
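Some back-of-the-envelope math (ours, not the paper’s) shows why even a brief blackout matters at highway speed:

```python
# Back-of-the-envelope stopping distance at highway speed; all figures are assumptions.
speed_m_per_s = 100 * 1000 / 3600   # 100 km/h is roughly 27.8 m/s
reaction_time_s = 0.5               # assumed detection-to-braking latency
deceleration_m_per_s2 = 7.0         # assumed firm braking on dry pavement

reaction_distance = speed_m_per_s * reaction_time_s
braking_distance = speed_m_per_s ** 2 / (2 * deceleration_m_per_s2)
print(f"total stopping distance ~ {reaction_distance + braking_distance:.0f} m")
# ~14 m + ~55 m, or about 69 m -- ground the car covers in roughly 2.5 seconds,
# so hiding an obstacle for even a few seconds can erase most of the stopping margin.
```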
Outside of erasing objects from a LiDAR’s view, other spoofing attacks are possible too. Earlier work by researchers has involved tricking LiDAR sensors into seeing phantom objects. This is remarkably simple to achieve – one need only transmit laser pulses towards a victim LiDAR that indicate a wall or other obstacle ahead.
The research team notes that there are some defenses against this technique. The attack tends to carve out an angular slice of missing points from the LiDAR’s reported point cloud, and detecting such a gap can indicate that a removal attack may be taking place. Alternatively, there are methods that compare the shadows present in the point cloud against those expected to be cast by the objects detected (or not detected) in it.
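As a very rough sketch of what a gap-based check could look like, here’s one way to flag a suspiciously wide empty slice in a 2D scan. The bin width and threshold are arbitrary placeholder values, and a real defense would be considerably more involved:

```python
import math

# Minimal sketch of a gap check (not the paper's implementation): bin returns by
# azimuth and flag any contiguous run of empty bins wide enough to look suspicious.
BIN_WIDTH_DEG = 1.0        # placeholder bin size
SUSPICIOUS_GAP_DEG = 8.0   # placeholder threshold for an alarming empty slice

def find_angular_gaps(points):
    """points: list of (x, y) returns in the sensor frame; returns (start_deg, end_deg) gaps."""
    n_bins = int(360 / BIN_WIDTH_DEG)
    occupied = [False] * n_bins
    for x, y in points:
        azimuth = math.degrees(math.atan2(y, x)) % 360
        occupied[int(azimuth / BIN_WIDTH_DEG)] = True

    gaps, start = [], None
    # A trailing sentinel closes any gap that runs to the end of the sweep;
    # wrap-around at 0 degrees is ignored here for brevity.
    for i, hit in enumerate(occupied + [True]):
        if not hit and start is None:
            start = i
        elif hit and start is not None:
            if (i - start) * BIN_WIDTH_DEG >= SUSPICIOUS_GAP_DEG:
                gaps.append((start * BIN_WIDTH_DEG, i * BIN_WIDTH_DEG))
            start = None
    return gaps
```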
Overall, protecting against spoofing attacks could become important as self-driving cars become more mainstream. At the same time, it’s important to contemplate what is and isn’t realistic to defend against. For example, human drivers are susceptible to crashing when their cars are hit with eggs or rocks thrown from an overpass. Automakers didn’t engineer advanced anti-rock lasers and super-wipers to clear egg smears. Instead, laws are enforced to discourage these attacks. It may simply be a matter of extending similar enforcement to bad actors running around with complicated laser gear on the side of the highway. In all likelihood, a certain amount of both approaches will be necessary.