Figure 1: Image of the Lexus RX450h Google driverless car
Driving a car is an art, which is perhaps why most governments make you wait until you are 18 (or 16 in some countries) before you can get a driver's license. Not only must you be able to operate the car, you must also have a strong sense of safety. Yet all the rules and regulations imposed to ensure safety cannot eliminate accidents entirely. At the same time, small details such as speed, gear shifting and fuel consumption affect the environment around you in the form of gas emissions. When more fuel is burned than necessary, combustion is less complete and gases such as carbon monoxide are emitted (not to mention that you also get lower mileage); this happens at very low and very high speeds. Driving a car in ideal conditions therefore involves a delicate balance of factors and requires sound understanding and experience. But what if a team of brilliant engineers developed a way to automate this complicated process?
Automation
Fifteen years ago, if you needed to withdraw money, you had to travel all the way to a bank and stand in line with a withdrawal slip. Today the same task is handled, even in remote locations, by machines (ATMs, Automated Teller Machines) without any complication. In industry, most heavy machinery is automated, totally or partially, allowing faster and more precise production. Automation is a growing trend of the current era and a true engineering marvel, bringing together branches such as computer science, electronics and mechanical engineering. Electronic sensors observe the environment and send their readings to microcontrollers, which are programmed to compute the desired response and drive the actuators that actually carry out the action.
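The sensor-to-controller-to-actuator chain described above can be pictured as a simple control loop. The sketch below is a hypothetical illustration only; the function names and values are invented for this example and are not taken from any real automation system.

```python
# Minimal sketch of the sense -> decide -> act loop described above.
# All names and values here are hypothetical and only illustrate the idea.

def read_sensor():
    """Pretend to sample an electronic sensor (e.g. a temperature probe)."""
    return 82.0  # measured value, arbitrary units

def decide(measurement, setpoint=75.0):
    """Microcontroller logic: compare the measurement with the desired value."""
    error = setpoint - measurement
    return error  # positive -> push the system up, negative -> push it down

def actuate(command):
    """Send the correcting command to an actuator (motor, valve, heater...)."""
    print(f"actuator command: {command:+.1f}")

# One pass of the loop; a real controller repeats this continuously.
actuate(decide(read_sensor()))
```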
Google Project
We know Google as the most popular search engine on the World Wide Web, but the company is much more than that: it pioneers innovations in technology that are ahead of their time. Google's driverless car is a Google project to develop the technology needed to make a car drive itself. The software Google uses to automate cars is known as "Google Chauffeur". Google does not produce a separate car; it installs the necessary equipment on a regular car. The project is currently led by Google engineer Sebastian Thrun (also director of the Stanford Artificial Intelligence Laboratory and co-inventor of Google Street View) under the Google X umbrella.
Figure 2: Google driverless car technology installed on a Toyota Prius and an Audi TT
Google equipped a fleet of ten cars with its technology: three Lexus RX450h, an Audi TT and six Toyota Prius. The tests were carried out with experienced drivers in the driver's seat and Google engineers in the passenger seat. The cars covered great distances across the United States, in varied terrain and traffic densities. Speed limits are stored in the control system's memory, and the car has a manual override that hands control back to the driver in the event of a malfunction. In August 2012, Google announced that it had completed 500,000 km of road testing. As of December 2013, four US states had passed laws allowing the use of self-driving cars: California, Florida, Nevada and Michigan.
How should it work?
The aim of the project is to duplicate the actions of the ideal driver. So let's first note the variables at play when a human drives a car.
· The most important sense is vision: the first data we receive is what we see around us, and we control the car's acceleration and deceleration according to that visual data.
· This data travels from the eyes to the brain via the optic nerve. The brain examines it and decides whether any action is necessary and, if so, which action.
· The action, or stimulus data, is sent to the hands and feet, which operate the steering wheel, accelerator, brakes and clutch.
· Once the action is applied, the eyes observe the result and send it back to the brain. The brain checks whether what we see is what we intended and sends corrective data back to the limbs; this loop is called "feedback".
Here we observe that the primary sense is vision, or more generally any means by which we are aware of the observable environment around us. For example, suppose we are driving at 40 km/h and see a person crossing the road. If the distance between the car and the person is small, say 10 m, we brake hard or swerve sharply to avoid hitting them. If the person is 100 m away, we brake lightly and reduce speed so that they can finish crossing.
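To make the example concrete: at 40 km/h the car covers roughly 11 metres every second, so a pedestrian 10 m ahead leaves less than a second to react, while 100 m leaves about nine seconds. The sketch below illustrates that arithmetic; the thresholds and function name are assumptions chosen for illustration, not values from Google's system.

```python
# Hypothetical illustration of the distance/speed reasoning above.

def braking_action(distance_m: float, speed_kmh: float) -> str:
    """Choose a braking action from the time left before reaching an obstacle."""
    speed_ms = speed_kmh / 3.6            # convert km/h to m/s
    time_to_obstacle = distance_m / speed_ms
    if time_to_obstacle < 2.0:            # assumed threshold: almost no time left
        return "brake hard / swerve"
    elif time_to_obstacle < 10.0:         # assumed threshold: comfortable margin
        return "brake gently and slow down"
    return "maintain speed and keep watching"

print(braking_action(10, 40))    # ~0.9 s -> brake hard / swerve
print(braking_action(100, 40))   # ~9.0 s -> brake gently and slow down
```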
How does it work?
Apply this principle to the design of an electronic control system and the result is an autonomous car. Although it sounds simple, the interaction of software and hardware in a system as large as a car is quite sophisticated, and the accuracy and dynamic range required are high. These were the achievements of the team behind "Stanley", the robotic vehicle that won the $2 million 2005 DARPA Grand Challenge. Let's now take a look at how the various sensors and controllers achieve this.
The main device that monitors the environment is the laser range finder (a Velodyne 64-beam LIDAR, for Light Detection and Ranging). The laser generates a detailed 3D image of everything it observes around the car, and this measured 3D environment is then compared against high-resolution maps of the real world. These laser rangefinders are similar to those found in laser scanners, but with longer range and greater accuracy. The laser needs a 360° view of the surroundings, free of optical obstacles such as windshields and mirrors, so the ideal place for it is the roof of the car.
Figure 3: Image of the Velodyne 64 Beam LIDAR system
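To give a sense of how a spinning laser builds a 3D picture, the sketch below converts a single laser return (a measured range at a known azimuth and elevation) into an x, y, z point around the sensor. It is a simplified illustration of the geometry under assumed angles, not Velodyne's actual processing code.

```python
import math

def lidar_return_to_point(range_m, azimuth_deg, elevation_deg):
    """Convert one laser return into Cartesian coordinates around the sensor.

    range_m: distance measured from the time of flight of the laser pulse
    azimuth_deg: horizontal angle of the spinning head (0-360 degrees)
    elevation_deg: fixed vertical angle of one of the 64 beams
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return x, y, z

# One return at 25 m, straight ahead, from a beam tilted 2 degrees downwards.
print(lidar_return_to_point(25.0, 0.0, -2.0))
```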
The car is also equipped with four radars designed to watch far enough away (beyond laser range) that fast oncoming traffic can be detected. They are especially useful on highways, where fast-moving traffic is prevalent and long-range awareness is essential.
A forward-facing camera is positioned near the rearview mirror. Its objective is to detect traffic signals: the data received from the camera is processed to produce the appropriate output depending on whether the detected light is red, yellow or green.
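In software terms, the camera output feeds a simple mapping from the detected light state to a driving command. The sketch below is an assumed, greatly simplified version of that mapping, offered only as an illustration and not as Google Chauffeur code.

```python
# Hypothetical mapping from a detected traffic-light colour to a command.

def traffic_light_command(colour: str) -> str:
    actions = {
        "red": "stop before the intersection",
        "yellow": "prepare to stop (or clear the intersection if too close)",
        "green": "proceed if the intersection is clear",
    }
    return actions.get(colour, "no light detected: rely on other sensors")

print(traffic_light_command("red"))
```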
A GPS (Global Positioning System) receiver determines the car's latitude and longitude, which are used to place it on a satellite map. GPS is primarily used to follow a route predetermined by the user; the route data guides the vehicle along the path needed to reach the prescribed destination.
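Knowing latitude and longitude, the navigation logic can compute how far the car is from the next waypoint on its route. The standard haversine formula below does this on a spherical-Earth model; the coordinates are made-up example values, and this is an illustration rather than the project's own routing code.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Distance from an assumed current fix to an assumed next waypoint.
print(f"{haversine_m(37.4220, -122.0841, 37.4275, -122.1697):.0f} m")
```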
An inertial measurement unit measures the inertial forces acting on the vehicle, and odometers on the wheels measure their rotational speed (RPM). The same data can also be used to estimate engine load (brake horsepower, BHP). Together, these sensors monitor the vehicle's speed and movements.
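The odometer reading converts directly into road speed once the wheel radius is known. The short sketch below shows that conversion; the wheel radius used is an assumed example value.

```python
import math

def speed_kmh_from_rpm(wheel_rpm: float, wheel_radius_m: float = 0.32) -> float:
    """Vehicle speed implied by the rotational speed of the wheels."""
    metres_per_minute = wheel_rpm * 2 * math.pi * wheel_radius_m
    return metres_per_minute * 60 / 1000  # 60 min/h, 1000 m/km

# A wheel of ~0.32 m radius turning at 500 RPM -> roughly 60 km/h.
print(f"{speed_kmh_from_rpm(500):.1f} km/h")
```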
Figure 4: Representational image of a typical driverless car
These sensors are responsible for collecting data on the vehicle's state and surroundings. Analysing this data and producing an appropriate response is the task of artificial intelligence, the field in which human intelligence is transferred to machines or software through computer science and electronics. The field is multidisciplinary, drawing on computer science, neuroscience, psychology, linguistics and philosophy, and it allows a device or program to make decisions based on its inputs. Based on inputs from the hardware sensors and Google Maps, the AI unit therefore determines the following (a simplified sketch follows the list):
· How fast to accelerate the vehicle.
· When to slow down or stop.
· How to steer the vehicle.
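A highly simplified way to picture that decision layer is a single function that takes fused sensor readings and returns one of those high-level commands. The sketch below is a hypothetical illustration only; the argument names and thresholds are assumptions, and the real software weighs far more information (maps, predicted trajectories of other road users, traffic law) than this.

```python
# Hypothetical, greatly simplified decision layer for illustration only.

def decide(obstacle_distance_m: float, speed_kmh: float,
           speed_limit_kmh: float, light: str) -> str:
    """Pick one high-level command from a few fused sensor readings."""
    if light == "red" or obstacle_distance_m < 15:
        return "slow down / stop"
    if speed_kmh < speed_limit_kmh - 5 and obstacle_distance_m > 50:
        return "accelerate"
    return "hold speed and steer along the planned path"

print(decide(obstacle_distance_m=80, speed_kmh=40, speed_limit_kmh=60, light="green"))
```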
The objective of the artificial intelligence unit is to take the passenger to their destination safely and legally, following traffic rules and regulations. Google's cars have already passed these tests, although there was one incident in which one of them was involved in an accident; the company states that the car was being driven manually at the time. Is this the future of road travel? Or is Google chasing an unreachable goal? Whatever market Google may have in store, the technology has certainly lived up to the innovation and brand of one of the biggest technology companies in the world.