Getting to Know Self-Driving Cars

In this blog, we'll learn some basics of self-driving cars and how they function. We'll also compare the top companies in this space.

Driverless cars, also known as Self-Driving Cars (SDCs), require little to no human intervention to sense the environment and navigate. These cars use sensors to perceive their surroundings.

Why Self-Driving Cars?

According to McKinsey & Company, the widespread use of robotic cars in the US could save up to $180 billion annually in healthcare and automotive maintenance alone, based on a realistic estimate of a 90% reduction in crash rates. Driverless cars also have the potential to reduce risky and dangerous driver behaviour.

Full automation also offers more personal freedom. These vehicles enhance independence for seniors and people with disabilities, and could reduce the cost of personal transportation. They also have the potential to cut fuel use and carbon emissions, having a positive impact on the environment.

The foundational technology for SDCs is deep neural networks. Neural networks enable the cars to learn how to drive by imitating human driving behaviour.
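As a rough illustration of that imitation-learning idea, here is a minimal sketch of a small convolutional network that regresses a steering angle from a front-camera frame, loosely in the spirit of end-to-end systems such as NVIDIA's PilotNet. The input shape, layer sizes, and placeholder data are assumptions for illustration, not any production model.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_steering_model(input_shape=(66, 200, 3)):
    # A small CNN that maps a camera frame to a single steering angle.
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Rescaling(1.0 / 255.0),                   # normalise pixel values
        layers.Conv2D(24, 5, strides=2, activation="relu"),
        layers.Conv2D(36, 5, strides=2, activation="relu"),
        layers.Conv2D(48, 5, strides=2, activation="relu"),
        layers.Conv2D(64, 3, activation="relu"),
        layers.Flatten(),
        layers.Dense(100, activation="relu"),
        layers.Dense(50, activation="relu"),
        layers.Dense(1),                                 # predicted steering angle
    ])
    model.compile(optimizer="adam", loss="mse")          # regress against human angles
    return model

# Placeholder data standing in for logged camera frames and the human
# driver's steering angles (the "behaviour" the network imitates).
frames = np.random.randint(0, 256, size=(32, 66, 200, 3)).astype("float32")
angles = np.random.uniform(-1.0, 1.0, size=(32, 1)).astype("float32")

model = build_steering_model()
model.fit(frames, angles, epochs=1, batch_size=8)
```

In practice the training set would be millions of logged frames paired with the human driver's actual steering, throttle, and brake inputs, but the regression setup stays the same.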

5 core components of Self-Driving Cars

  1. Computer vision: This lets the car “see” and make sense of what the world around it looks like.
  2. Sensor fusion: Data from various sensors such as RADAR, LIDAR, and LASER is combined to gain a deeper understanding of the surroundings (a minimal fusion sketch follows this list).
  3. Localization: Once the car knows what the world around it looks like, localization tells it where it is in that world.
  4. Path Planning: Path planning builds the trajectory to be executed; it charts the course of the car's travel.
  5. Control: This handles turning the steering wheel, changing the car's gears, and applying the brakes.
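To make sensor fusion a little more concrete, here is a minimal sketch of a one-dimensional Kalman filter that fuses noisy distance readings from a radar and a lidar into a single estimate of the gap to the vehicle ahead. The class, sensor variances, and readings are illustrative assumptions, not any company's actual pipeline.

```python
class Simple1DKalman:
    """Toy 1-D Kalman filter: fuses noisy range readings from two sensors."""

    def __init__(self, initial_estimate=0.0, initial_variance=1.0, process_variance=0.1):
        self.x = initial_estimate   # current estimate of the distance (metres)
        self.p = initial_variance   # uncertainty of that estimate
        self.q = process_variance   # how much the true distance may drift per step

    def predict(self):
        # No motion model here, so prediction simply grows the uncertainty.
        self.p += self.q

    def update(self, measurement, sensor_variance):
        # The Kalman gain weighs the new reading against the current estimate.
        k = self.p / (self.p + sensor_variance)
        self.x += k * (measurement - self.x)
        self.p *= (1.0 - k)


# Hypothetical noisy readings of the same gap, in metres.
radar_readings = [25.4, 25.1, 24.8, 24.6]   # radar: noisier (variance ~0.5)
lidar_readings = [25.0, 24.9, 24.7, 24.5]   # lidar: more precise (variance ~0.1)

kf = Simple1DKalman(initial_estimate=25.0)
for radar, lidar in zip(radar_readings, lidar_readings):
    kf.predict()
    kf.update(radar, sensor_variance=0.5)
    kf.update(lidar, sensor_variance=0.1)
    print(f"fused distance estimate: {kf.x:.2f} m (uncertainty {kf.p:.3f})")
```

The idea is simply that each sensor's reading is weighted by how much we trust it, so the fused estimate ends up more accurate than either sensor alone.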

Levels of Autonomy

There are different levels of autonomy in Self-Driving Cars. The Society of Automotive Engineers' (SAE) autonomy scale defines the following levels of autonomous capability:

Level 0 – Manual Cars

In manual cars, the driver controls both the steering and the speed of the car. The vehicle itself doesn’t take any action.

Level 1 – Driver Support

In Level 1 autonomy, the driver retains most of the control of the car: monitoring the surrounding environment, accelerating, braking, and steering. The vehicle provides a single support feature; for example, if it gets too close to another vehicle, it can apply the brakes automatically.

Level 2 – Partial Automation

Here, a few basic tasks are taken off the driver's hands while the vehicle is partially automated: the vehicle can take over both the steering and the acceleration. The driver still monitors the critical safety functions and environmental factors.

Level 3 – Conditional Automation

In Conditional Automation, the vehicle performs all environmental monitoring using its sensors. The vehicle drives in autonomous mode in certain situations, but the driver must be ready to take over when the vehicle reaches the limits of its control.

Level 4 – High Automation

At this level, the vehicle controls the steering, brakes, and acceleration. It also monitors the vehicle itself, as well as pedestrians, roads, and highways. The driver takes over only in situations the vehicle cannot handle, such as crowded city streets.

Level 5 – Complete Automation

At this level, no human driver is required. All the critical tasks, such as steering, braking, and acceleration, are controlled by the vehicle, which monitors the environment and identifies and reacts to unique driving conditions such as traffic jams.
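For readers who think in code, the levels above can be captured as a small enum. The names and the helper function below are just illustrative shorthand, not an official SAE API.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """The SAE autonomy levels described above."""
    MANUAL = 0                  # driver does everything
    DRIVER_SUPPORT = 1          # a single assist feature, e.g. automatic braking
    PARTIAL_AUTOMATION = 2      # vehicle handles steering and speed, driver monitors
    CONDITIONAL_AUTOMATION = 3  # vehicle drives itself in certain situations
    HIGH_AUTOMATION = 4         # vehicle handles almost everything on its own
    COMPLETE_AUTOMATION = 5     # no human driver required

def driver_must_monitor(level: SAELevel) -> bool:
    """Up to Level 2, the human is still responsible for watching the road."""
    return level <= SAELevel.PARTIAL_AUTOMATION

for level in SAELevel:
    print(f"Level {level.value} ({level.name}): driver must monitor = {driver_must_monitor(level)}")
```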

How is Tesla different from Google’s Waymo?

Tesla and Waymo are the top companies working in the SDC space. Although both are trying to collect and process enough data to create a car that can drive itself, they're approaching the problem in very different ways.

Tesla collects real-world data from the hundreds of thousands of cars it has on the road with Autopilot, its current semi-autonomous system. Waymo, on the other hand, uses powerful computer simulations and feeds what it learns from them into a smaller real-world fleet.

Tesla and Waymo are collecting data at different scales, and they're also collecting different data. Waymo uses three different types of LIDAR sensors, five radar sensors, and eight cameras, while Tesla uses eight cameras, 12 ultrasonic sensors, and one forward-facing radar. Tesla doesn't use LIDAR; Elon Musk doesn't consider it necessary.

For Tesla, the challenge lies in processing the data, training against it, and having the vehicles learn effectively from it. Waymo, in its simulations, re-creates full computer models of the cities it's testing in and sends 25,000 “virtual self-driving cars” through them each day. This creates a tight feedback loop in which multiple variations of a scenario can be run, and the results are then downloaded back into Waymo's test cars.