Most companies working on self-driving cars are using lidar. Tesla, however, has rejected lidar technology, calling it "stupid." The company's Model 3 and Model Y systems, which aim for fully autonomous driving, are camera-based, while the Model S and Model X relied on radar. However, the company has declared that all future models will be camera-based, an approach dubbed "Tesla Vision." So how did Tesla build its self-driving capabilities with cameras, while other self-driving programs such as Uber's and Toyota's use lidar?
Lidar (Light Detection and Ranging) is a technology that measures a target object's distance, direction, and velocity, as well as the temperature and concentration of atmospheric substances, and is used for weather observation, terrain mapping, and airplane landing guidance, among other things. In the field of autonomous driving, it is used as a key sensor for creating three-dimensional images.
The main difference between LIDAR and the cameras in Tesla's self-driving cars is how they perceive and interpret their surroundings.
LIDAR is a remote sensing technology that uses a laser sensor to create a 3D map of a car's surroundings. By emitting a laser beam that bounces off objects and back to the sensor, a LIDAR unit can accurately measure an object's distance, position, and shape.
The camera system in Tesla's autonomous vehicles is a computer vision setup that uses multiple cameras installed on the vehicle to capture visual data about the environment. The captured images are then processed by machine learning algorithms to detect and identify objects. In other words, it reverse-engineers human vision.
A key advantage of LIDAR is that it can accurately detect and measure objects even in low light or bad weather. LIDAR also provides a high level of detail and precision; it specializes in responding to small objects that cameras alone might miss, like debris in the road.
On the other hand, cameras are relatively inexpensive and smaller than LIDAR sensors. Cameras are better able to detect and identify visual cues such as road signs and traffic lights.
LIDAR measures distance by sending out a laser pulse and timing how long it takes to return, and it is extremely accurate, resolving distances down to the millimeter. Computer vision, on the other hand, is a branch of artificial intelligence that trains computers to understand what they perceive visually.
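The time-of-flight principle behind lidar ranging can be sketched in a few lines. This is a simplified illustration of the math only; real sensors also account for pulse shape, timing jitter, and atmospheric effects:

```python
# Time-of-flight ranging: distance = (speed of light x round-trip time) / 2.
# The division by two accounts for the pulse traveling out and back.
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_seconds: float) -> float:
    """Convert a laser pulse's round-trip time into a one-way distance in meters."""
    return C * round_trip_seconds / 2.0

# A pulse that returns after roughly 667 nanoseconds hit an object about 100 m away.
print(round(tof_distance(667e-9), 1))
```

Because light travels about 30 cm per nanosecond, even a timing resolution of a few picoseconds is enough for millimeter-level distance accuracy, which is why lidar ranging is so precise.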
Tesla removed radar from the Model 3 and Model Y in 2021, followed by the Model S and Model X in 2022, marking the beginning of the transition to Tesla Vision.
Tesla claims that compared to its radar-equipped vehicles, the Model 3 and Model Y with Tesla Vision have maintained or improved their active safety ratings in the U.S. and Europe. They even performed better in pedestrian automatic emergency braking (AEB).
Tesla's cameras are placed in appropriate locations around the car, including the front, sides, and rear, to provide a comprehensive view of its surroundings. This allows the car to detect and react to potential hazards from multiple angles and avoid collisions.
Tesla's self-driving cars are able to drive without LIDAR thanks to the advanced computer vision technology and machine learning algorithms used in the Autopilot system. The Autopilot system consists of cameras, radar, and ultrasonic sensors that work together to recognize and interpret the vehicle's surroundings. Strategically placed cameras recognize things like other vehicles, pedestrians, and obstacles.
The camera feeds are then processed by deep learning algorithms that identify and track objects and predict their movements. The results are used to plan a safe path for the vehicle, which is the key to Tesla's self-driving cars.
At AI Day 2022, Tesla unveiled improvements to its Full Self-Driving (FSD) platform. On that day, Tesla demonstrated the improvements to FSD in person with a case study involving oddly positioned cars at an intersection.
In the past, Tesla Vision recognized cars waiting to cross an intersection as parked. To counteract scenarios like this, Tesla created a tool to identify incorrectly predicted objects, which involves correcting labels and classifying video clips into an evaluation set.
To do this, Tesla built an evaluation set of 126 test videos and trained the model with data mined from other videos. As a result, FSD no longer predicts crossing vehicles as parked, but correctly identifies them as waiting to cross.
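The mining step described above can be sketched as a simple filter (hypothetical structure; the source does not describe Tesla's internal tooling): compare the model's predicted label with the corrected label, and keep only the clips where they disagree for the evaluation set.

```python
# Hypothetical sketch of mining mispredicted clips into an evaluation set.
# Each clip is (clip_id, predicted_label, corrected_label); disagreements
# become evaluation cases, paired with their corrected ground-truth label.
def build_eval_set(clips):
    return [
        (clip_id, corrected)
        for clip_id, predicted, corrected in clips
        if predicted != corrected
    ]

clips = [
    ("clip_001", "parked", "waiting_to_cross"),            # misprediction: kept
    ("clip_002", "waiting_to_cross", "waiting_to_cross"),  # correct: skipped
]
print(build_eval_set(clips))
```

Retraining against a set built this way directly targets the failure mode, which is why the corrected model stops confusing waiting vehicles with parked ones.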
There are a lot of variables on the road. Fully autonomous driving requires reacting to situations like traffic on curvy roads or tight parking lots. There may even be a large vehicle, such as a bus, in front of you. Tesla has built its solution based on a vast amount of data, and the solution to a problem in one vehicle is built to be implemented in all of Tesla's vehicles that support FSD without changing the architecture of the model itself.
Tesla knew it was imperative to improve FSD's neural networks for upcoming Level 4-5 autonomous driving, so the company developed its own supercomputer, called Dojo.
Dojo features a system tray that connects six training tiles within and between cabinets, allowing for uniform communication. Interface processors can feed data to the training tiles. It also has full-bandwidth memory and high-speed Ethernet for communication.
Each system tray delivers 54 petaflops of compute and has a total memory bandwidth of 800 GB per second. Two of these assemblies are placed in a single cabinet, and seven ExaPODs will be housed in Tesla's Palo Alto facility.
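Taking the figures above at face value, the aggregate capacity can be tallied with back-of-the-envelope arithmetic. Note that the cabinets-per-ExaPOD figure is an assumption not stated above (Tesla has described an ExaPOD as roughly 1.1 exaflops, consistent with ten cabinets):

```python
# Back-of-the-envelope Dojo capacity from the figures quoted in the text.
PFLOPS_PER_TRAY = 54       # stated: 54 petaflops per system tray
TRAYS_PER_CABINET = 2      # stated: two tray assemblies per cabinet
CABINETS_PER_EXAPOD = 10   # assumption, not stated in the text
EXAPODS = 7                # stated: seven ExaPODs planned for Palo Alto

pflops_per_cabinet = PFLOPS_PER_TRAY * TRAYS_PER_CABINET       # 108 petaflops
pflops_per_exapod = pflops_per_cabinet * CABINETS_PER_EXAPOD   # 1,080 petaflops (~1.1 exaflops)
total_exaflops = pflops_per_exapod * EXAPODS / 1000            # total across seven ExaPODs

print(pflops_per_exapod, total_exaflops)
```

Under these assumptions the planned installation works out to several exaflops of training compute, which is the scale Tesla argues is needed to iterate quickly on FSD's neural networks.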
Tesla says Dojo will allow it to train models faster and at a lower cost.
Lidar is a technology used by many companies to implement self-driving features. However, Tesla opted for a camera-based approach. Tesla's CEO, Elon Musk, has emphasized that cameras, not lidar, are the key to perfecting self-driving cars.
They have argued that this approach is more cost-effective and suitable for autonomous driving.
In fact, camera-based systems are generally cheaper than LIDAR-based systems. LIDAR sensors are expensive and require specialized skills and equipment to install and calibrate. Cameras, on the other hand, are readily available and easier to maintain than LIDAR sensors, which require frequent replacement.
Because cameras are smaller and less bulky than LIDAR sensors, they can simplify car design and reduce weight. This can improve a car's fuel economy and range, making it more competitive in the marketplace.
Using camera vision can also reduce manufacturing costs, making Tesla's self-driving cars more affordable for consumers.
Everything we see on the road is visual information, and while LIDAR specializes in detecting stationary objects like signs, recognizing moving objects is a different story. A plastic bag darting across the highway is no big deal, but LIDAR can't see it that way: it registers an obstacle, cars brake suddenly, and the road becomes a mess. This situation has been an ongoing issue since self-driving cars were introduced.
Tesla has stated that their cameras can detect what an object is, unlike LIDAR. When an object comes into view, the camera can first make a judgment about what it is, and then the car can react to the situation accordingly.
The biggest problem with LIDAR is its lack of adaptability. Most are implemented in a way that relies heavily on maps, and very few have been tested on real-world roads - and even if they have, it's only on large, highly mapped roads. But the average person doesn't drive on those roads every day, so they're not very practical.
Because people have driven more than a billion miles in Tesla cars, the company has accumulated a huge amount of unpredictable road data. Since Tesla Vision learns and improves from this data, Tesla argues the approach is much more meaningful than LIDAR.
Unfortunately, Tesla's innovation has been criticized as premature. Elon Musk claimed that putting eight cameras on the car instead of LIDAR was enough, but some engineers thought differently: relying solely on cameras would leave the car vulnerable to errors from something as simple as raindrops or bright sunlight covering a lens, a problem that could directly lead to a crash.
Tesla's Autopilot is rated "Level 2" because it requires active driver supervision and doesn't make the car autonomous. Level 3 advanced driver assistance systems, by contrast, allow the car to take control of driving under certain conditions.