Automotive engineers have already developed semi-autonomous vehicles. Fully autonomous vehicles are not far from reality. According to recent research, autonomous driving (AD) could generate revenues of $300 billion to $400 billion by 2035.
The self-driving car is not only a showcase of advanced technology; it is also the subject of controversy. There are valid concerns about security, faulty technology, hacking and the potential loss of driving jobs. On the other hand, AD can lead to safer travel, greater convenience, and more productivity or free time. Instead of wasting hours in traffic, the “drivers” of the future could spend their commute working, reading or watching a TV series.
A critical component of autonomous vehicles is sensor technology – heterogeneous sensors, to be precise. Artificial intelligence (AI) and machine learning (ML) models are trained on sensor data to observe and respond to the environment. Using these algorithms, a vehicle relies on its sensors to find the ideal route, decide where to drive, detect nearby objects, pedestrians or other vehicles to avoid collisions, and react to unexpected scenarios.
There have been two main efforts in developing self-driving cars.
1. Using cameras and computer vision to drive
2. Employing sensor fusion (i.e., using heterogeneous sensors to make the car see, hear, and feel its surroundings)
Some engineers maintain that AD can succeed with vehicle cameras and computer vision alone. Most, however, have determined that sensor fusion is the safer and more reliable choice.
There are four main sensor technologies used in autonomous vehicles:
- Cameras
- LIDAR
- Radar
- Sonar
Thanks to sensor fusion technology and rapidly improving AI, autonomous vehicles have begun to gain recognition as a real future possibility. It is predicted that by 2030, around 12% of vehicle registrations worldwide will be autonomous vehicles.
When it comes to the loss of driving jobs, similar concerns were raised when computers were introduced, and we know that this technology has created millions of jobs around the world. Autonomous vehicles are also likely to increase the need for skills-based jobs in the automotive industry.
Let's explore the sensor technologies that enable autonomous driving.
Cameras
Cameras are already used in vehicles for reverse parking, reversing assistance, adaptive cruise control and lane departure warnings. Autonomous vehicles use high-resolution color cameras to get a 360-degree view of their environment. Images can be collected as multidimensional data, from different angles, and/or as video segments. Different image and video capture methods are currently being tested, along with AI technology, to ensure that reliable decisions can be made on the road for safe driving. These are resource-intensive tasks.
These cameras show great potential, especially when paired with advanced AI and ML. High-resolution cameras can reliably detect and recognize objects, track the movement of other vehicles, determine a route, and visualize the environment in 3D. They come closest to human eyes, allowing a vehicle to drive much as it would with a human behind the wheel.
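To give a flavor of the software side, here is a minimal sketch that runs a pretrained object detector from the open-source torchvision library on a single camera frame. Production AD stacks use proprietary real-time networks; the model choice, input file name and confidence threshold below are illustrative assumptions only.

```python
# Minimal sketch: object detection on one camera frame with a pretrained
# torchvision model. This stands in for the proprietary real-time networks
# that production AD systems actually use.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

frame = Image.open("camera_frame.jpg")  # hypothetical input image
with torch.no_grad():
    detections = model([to_tensor(frame)])[0]

# Keep only confident detections; the 0.8 threshold is an arbitrary choice.
for box, label, score in zip(detections["boxes"], detections["labels"],
                             detections["scores"]):
    if score > 0.8:
        print(f"class {label.item()} at {box.tolist()} (score {score:.2f})")
```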
But there are disadvantages. For example, a camera's visibility depends on environmental conditions. Since the camera is a passive sensor, it is unreliable in low visibility conditions. Infrared cameras may be an option, but these images must be interpreted by AI and ML, which are still in development.
Two types of camera sensors are used for AD: mono and stereo cameras. A mono camera has a single lens and image sensor. It can only capture two-dimensional images, which are sufficient for recognizing objects, people and traffic signs. However, 2D images are not useful for determining the depth or distance of objects; doing so would require highly complex ML algorithms with questionable results.
A stereo camera has two lenses and two image sensors. Two images are taken simultaneously from different angles. After processing the images, the camera can determine the depth or distance of an object, making it the best choice for AD – except for visibility issues in low light.
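The depth calculation behind a stereo camera is simple geometry: depth equals the focal length times the baseline between the lenses, divided by the disparity (the pixel shift of the same point between the two images). A short sketch with made-up calibration values:

```python
def stereo_depth(focal_length_px: float, baseline_m: float,
                 disparity_px: float) -> float:
    """Depth from a calibrated stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid match")
    return focal_length_px * baseline_m / disparity_px

# Made-up calibration: a 700 px focal length and 12 cm baseline.
# A 35 px disparity then corresponds to a depth of 2.4 m.
print(stereo_depth(700.0, 0.12, 35.0))
```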
Some developers are combining mono cameras with distance-measuring technologies such as LIDAR or radar, using sensor fusion to accurately assess traffic conditions.
Cameras certainly play an important role in AD. However, they will need help.
LIDAR
LIDAR is one of the most prominent technologies enabling autonomous vehicles. It is an imaging technology that has been used for geospatial sensing since the 1980s.
Two types of LIDAR sensors can be used for AD. The first is the mechanically rotating LIDAR system, typically mounted on a vehicle's roof; these systems are expensive and sensitive to vibrations. Solid-state LIDAR, which requires no rotation, is the other option and the preferred choice for self-driving cars.
A LIDAR sensor is an active sensor. It works on the principle of time of flight, emitting thousands of infrared laser beams into its surroundings and detecting the reflected pulses using a photodetector. The LIDAR system measures the time elapsed between the emission of the laser beam and its detection by the photodetector.
Because the laser travels at the speed of light, the distance is calculated from the time elapsed between emission and detection. The reflected pulses are recorded as a point cloud (i.e., a set of points in space representing 3D objects), building a three-dimensional picture of the surroundings.
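The arithmetic is simple enough to show directly. Below is a minimal sketch of the time-of-flight calculation and the conversion of a single return into a 3D point; the timing and beam angles are illustrative values, not taken from any real sensor:

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def lidar_range(round_trip_s: float) -> float:
    """Time of flight: the pulse travels out and back, so halve the trip."""
    return SPEED_OF_LIGHT * round_trip_s / 2.0

def to_point(range_m: float, azimuth_rad: float, elevation_rad: float):
    """Convert one return (range plus beam angles) into an (x, y, z) point."""
    x = range_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = range_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = range_m * math.sin(elevation_rad)
    return (x, y, z)

# A 200-nanosecond round trip corresponds to roughly 30 m:
r = lidar_range(200e-9)
print(r)  # ~29.98 m
print(to_point(r, math.radians(45), math.radians(2)))
```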
These LIDAR systems are highly accurate and can detect extremely small objects. However, like visible-light cameras, LIDAR is unreliable in poor visibility conditions, as the reflection of laser pulses can be affected by the weather. Another disadvantage is cost, which runs into the thousands of dollars.
But LIDAR still holds promise for AD as new developments are tried and tested.
Radar
Radar sensors are already used in many vehicles for adaptive cruise control, driver assistance, collision avoidance and automatic braking. Typically, a 24 GHz radar handles short-range detection, up to about 30 meters, and is an economical option for collision avoidance and parking assistance. A 77 GHz radar handles long-range detection, up to about 250 meters, and is used for object detection, adaptive cruise control and assisted braking.
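Automotive radars generally measure distance with frequency-modulated continuous-wave (FMCW) chirps; that detail is an assumption here, since the modulation scheme is not specified above. A rough sketch of the range calculation, with invented chirp parameters:

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def fmcw_range(beat_hz: float, chirp_s: float, bandwidth_hz: float) -> float:
    """Range from an FMCW beat frequency: R = c * f_b * T_c / (2 * B)."""
    return SPEED_OF_LIGHT * beat_hz * chirp_s / (2.0 * bandwidth_hz)

# Invented 77 GHz long-range parameters: a 40-microsecond chirp sweeping
# 300 MHz. A measured 1 MHz beat frequency then maps to about 20 m.
print(fmcw_range(1e6, 40e-6, 300e6))
```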
Radar is excellent for detecting metallic objects. It can be used with cameras to accurately monitor the movement of surrounding vehicles and detect possible obstructions.
Radar on its own has limited autonomous driving capabilities because it is unable to classify objects: radar data can detect an object but cannot recognize it. At best, low-resolution radar can support mono cameras paired with LIDAR, or stereo cameras, in low-visibility situations.
Sonar
Sonar technologies are also being tested for AD. Passive sonar listens for sounds emitted by surrounding objects and estimates their distance. Active sonar emits sound waves and detects their echoes, estimating the distance to nearby objects based on the time-of-flight principle.
Sonar can operate in low visibility, but it has more disadvantages than advantages for autonomous vehicles. The speed of sound limits real-time sonar operation for safe AD. Additionally, sonar can give false positives. Lastly, it can detect large objects at close range, but cannot recognize or classify them. Sonar is only useful for avoiding collisions in unexpected conditions.
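The speed-of-sound limitation is easy to quantify. The quick sketch below compares the echo round-trip time for sonar and LIDAR to an object 30 meters away; at highway speed, a car covers several meters in the time a sonar echo takes to return:

```python
SPEED_OF_SOUND = 343.0          # meters per second in air at about 20 C
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def round_trip_s(distance_m: float, speed_m_s: float) -> float:
    """Time for a pulse to reach an object and echo back."""
    return 2.0 * distance_m / speed_m_s

print(round_trip_s(30.0, SPEED_OF_SOUND))  # ~0.175 s for sonar
print(round_trip_s(30.0, SPEED_OF_LIGHT))  # ~0.2 microseconds for LIDAR
```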
Inertial sensors
Inertial sensors such as accelerometers and gyroscopes are highly useful for enabling autonomous driving. They track a vehicle's movement and orientation, and they can signal the vehicle to steady itself on uneven roads or to take action to avoid a potential accident.
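As a rough illustration, the toy sketch below integrates simplified 2D accelerometer and gyroscope readings into a position and heading estimate. Real systems also correct for sensor bias, noise and gravity, which this version ignores:

```python
import math

def dead_reckon(samples, dt: float):
    """Integrate 2D IMU samples of (forward_accel m/s^2, yaw_rate rad/s).

    Returns the final (x, y, heading). Integration error grows over time,
    which is why inertial data is normally fused with GPS and other sensors.
    """
    x = y = heading = speed = 0.0
    for accel, yaw_rate in samples:
        speed += accel * dt
        heading += yaw_rate * dt
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
    return x, y, heading

# One second of gentle acceleration while turning slightly
# (made-up data, sampled at 100 Hz):
samples = [(1.0, 0.05)] * 100
print(dead_reckon(samples, dt=0.01))
```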
GPS
A self-correcting GPS is a vital requirement for self-driving. Using a satellite-based triangulation technique, GPS allows a vehicle to accurately locate itself in three-dimensional space.
Sometimes GPS signals are not available or are interfered with due to obstacles or spoofing. In these cases, autonomous vehicles must rely on a local cellular network and data from inertial sensors to accurately track the car's position.
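At its core, satellite positioning means solving for the point whose distances to several satellites match the measured ranges. The toy least-squares sketch below uses fabricated satellite positions and ignores the receiver clock bias that a real GPS solution must also estimate:

```python
import numpy as np

def trilaterate(sat_positions, ranges, iterations: int = 10):
    """Gauss-Newton solve for a 3D position from satellite ranges."""
    pos = np.zeros(3)  # initial guess at the origin
    for _ in range(iterations):
        diffs = pos - sat_positions            # vectors from each satellite
        dists = np.linalg.norm(diffs, axis=1)  # predicted ranges
        residuals = dists - ranges
        jacobian = diffs / dists[:, None]      # d(range)/d(position)
        step, *_ = np.linalg.lstsq(jacobian, residuals, rcond=None)
        pos -= step
    return pos

# Fabricated satellite positions and the exact ranges to a known point:
sats = np.array([[15e6, 0, 20e6], [-15e6, 5e6, 20e6],
                 [0, -15e6, 20e6], [5e6, 15e6, 18e6]], dtype=float)
truth = np.array([1_000.0, 2_000.0, 0.0])
ranges = np.linalg.norm(sats - truth, axis=1)
print(trilaterate(sats, ranges))  # recovers roughly (1000, 2000, 0)
```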
Conclusion
Autonomous vehicles typically use multiple heterogeneous sensors. One advantage of using many sensors is backup – if one sensor fails, another can compensate. Fully autonomous vehicles will require a sensor fusion technique, using data from different sensors to interpret the environment.
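As a taste of what fusion means numerically, the sketch below blends two noisy range estimates, say from radar and LIDAR, weighting each by the inverse of its variance. This is a simplification of what a Kalman filter does at every time step; all numbers are invented:

```python
def fuse(estimate_a: float, var_a: float,
         estimate_b: float, var_b: float):
    """Variance-weighted fusion of two independent measurements.

    The fused variance is smaller than either input's, which is the
    statistical payoff of carrying redundant heterogeneous sensors.
    """
    weight_a = var_b / (var_a + var_b)
    fused = weight_a * estimate_a + (1.0 - weight_a) * estimate_b
    fused_var = (var_a * var_b) / (var_a + var_b)
    return fused, fused_var

# A noisy radar range (25.3 m, sigma 0.5 m) fused with a precise LIDAR
# range (24.9 m, sigma 0.1 m) lands close to the LIDAR estimate:
print(fuse(25.3, 0.5**2, 24.9, 0.1**2))
```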
Currently, multiple approaches are being tested in AD development. One relies on stereo cameras alone to fully enable autonomous driving. Another uses mono cameras to provide a 360-degree view, incorporating LIDAR or radar technology to measure distance. A third approach uses stereo cameras with radar sensors.
Cameras will likely remain essential for AD, as they classify and recognize objects effectively. Fused with camera data, radar and LIDAR technologies can help create a weather-resistant autonomous driving solution, adding a 3D element that ensures a better understanding of the driving environment.
Sonar or ultrasonic sensors will also play a key role as they are weather resistant and reasonably cost-effective, providing an effective solution for avoiding collisions and dealing with emergencies. Self-driving cars will ultimately depend on some combination of all these technologies.