“The throttle you step on now is all the capital you need to brag about later”
A few days ago, Tesla CEO Elon Musk posted on Twitter about the current FSD Beta, revealing that FSD Beta V9.0 is basically ready and will be significantly improved over the existing system. The most important part of Musk’s post is the last sentence: pure vision, no radar.
A netizen immediately asked Musk: does this mean the only radar sensor currently fitted to Tesla cars will be eliminated, or will it be kept as a backup? Musk’s answer was firm: it will be removed.
As we all know, Musk has long been opposed to radar, insisting that a pure vision approach is far more advanced. The outside world, meanwhile, has been critical of pure vision, with many considering it a regression.
So is purely visual mode really a regression? Why is Musk so opposed to radar?
1 Simulating the human eye with a camera
Pure vision challenges the human brain
To answer this question, we first need to understand what pure vision mode actually is.
A core requirement of self-driving is that the vehicle can sense and identify the road, surrounding vehicles, buildings, and every other factor that may affect driving safety; the system then makes judgments based on this data in order to control the car.
To sense this information, the traditional approach is to use radar. Radar, short for “radio detection and ranging,” is an electronic device that uses electromagnetic waves to detect targets. It emits electromagnetic waves toward a target and receives the echoes, thereby obtaining information such as the target’s distance, rate of change of distance (radial velocity), bearing, and altitude relative to the emission point.
Put simply: if my car has a radar and emits an electromagnetic wave forward, and there is a car ahead of me, the wave bounces back when it hits that car; from the echo, the distance between the two cars can be inferred.
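The ranging principle described above can be sketched in a few lines of code. This is our simplified illustration of the general physics, not any carmaker’s actual implementation: the distance follows from the round-trip time of the echo, since the wave travels to the target and back at the speed of light.

```python
# Simplified sketch of radar ranging (illustrative only):
# distance = (speed of light * round-trip time) / 2,
# divided by two because the pulse travels out AND back.

C = 299_792_458.0  # speed of light in m/s


def radar_range(round_trip_time_s: float) -> float:
    """Estimate the distance to a target from the echo's round-trip time."""
    return C * round_trip_time_s / 2.0


# An echo returning after 1 microsecond implies a target roughly 150 m away.
print(round(radar_range(1e-6), 1))  # 149.9
```

Real automotive radars also exploit the Doppler shift of the echo to measure radial velocity directly, which is why radar is valued for judging closing speed as well as distance.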
Pure vision mode, by contrast, uses cameras. The cameras capture images of the surroundings, and the system computes information about nearby vehicles, roads, and so on from those images. The most intuitive analogy is the human eye: after our eyes see what is in front of us, the brain judges the scene, braking when the gap is small and accelerating to follow when it is large. Pure vision uses cameras and the system’s algorithms to imitate what people already do.
Of course, people have only two eyes, while a car can be fitted with many cameras. With enough cameras, 360° coverage around the vehicle is possible, and the system’s algorithms can then compute the road conditions around the vehicle in real time as it drives, enabling autonomous driving.
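To make the camera-as-eye analogy concrete, here is a minimal sketch of how distance can be estimated from a single image using the classic pinhole camera model. This is our simplified illustration, not Tesla’s actual algorithm (which uses neural networks over multiple cameras and frames): if an object’s real-world size is known, its distance can be inferred from how large it appears in the image.

```python
# Simplified pinhole-camera distance estimate (illustrative only):
# an object of known real height appears smaller the farther away it is,
# so distance = focal_length_in_pixels * real_height / apparent_pixel_height.


def estimate_distance(focal_length_px: float,
                      real_height_m: float,
                      pixel_height: float) -> float:
    """Estimate distance (in meters) to an object of known height."""
    return focal_length_px * real_height_m / pixel_height


# A car ~1.5 m tall that appears 50 px tall through a lens with a
# focal length of 1000 px is roughly 30 m away.
print(estimate_distance(1000.0, 1.5, 50.0))  # 30.0
```

The hard part in practice is everything this sketch assumes away: detecting the object, knowing its true size, and staying accurate in rain, glare, or darkness. That gap is exactly what the debate over pure vision is about.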
At first glance both methods seem viable, but one reason many people oppose pure vision is that it may not be as safe as radar in unexpected and extreme situations: a car, pedestrian, or animal appearing suddenly, extreme weather, and so on. Even humans can fail to react in time, and doing better than humans is a huge challenge for a machine.
2 Tesla’s pure vision FSD
Just how good is it now?
So what has Tesla’s pure vision FSD actually achieved? The pure vision, radar-free FSD Beta 9.0 that Musk mentioned has not yet been officially released, so the latest version that can be experienced is FSD Beta V8.2, the most recent pure vision release Tesla has officially shipped.
Many overseas users have already tried this version, and judging from their review videos, this pure vision release already seems quite complete.
First, at an intersection with a green light, it automatically slows down, identifies the traffic lights, and then drives through.
When it meets a red light at an intersection, it follows the car in front, waits until the light turns green, and then drives through.
When turning to merge onto a main road with no traffic light, it can identify passing vehicles in advance and wait until it is safe before making the turn.
On internal roads such as shopping-mall parking lots, it can also automatically identify passing vehicles and avoid them.
As you can see, for basic driving the pure vision mode of FSD Beta V8.2 can largely meet everyday needs, but some special road conditions are still handled badly. For example, after a left turn with cars parked in the left lane, FSD follows those parked cars by default and ends up stopped in the middle of the intersection, which is very dangerous. The adjacent lane is empty, yet FSD does not change lanes, so the driver has to intervene manually.
In general, FSD Beta V8.2’s pure vision mode can only be called basically usable; slightly more complex situations are not handled well. Musk has said the upcoming version 9.0 will be a very big step forward, but how big can only be judged once it actually launches.
3 Tesla is not the only one using pure vision
In fact, for now, a vision-plus-radar approach is more mature than pure vision, so why is Musk bent on pure vision and abandoning radar?
The first reason is cost: the radar, lidar, and ultrasonic sensors used in autonomous driving are still generally expensive. At the end of last year, Huawei released a 96-line automotive-grade lidar; its next goal is to quickly develop a 100-line lidar, and it plans to eventually reduce the cost of lidar to $200 or even $100. Everyone knows lidar is good; the problem is that it is too expensive.
Tesla recently obtained a new patent for “estimating object properties using visual image data”. The patent description mentions that as the number and type of sensors increase, the complexity and cost of the system increases because each additional sensor increases the input bandwidth requirements of the autonomous driving system. Therefore, an optimal configuration of sensors on the vehicle needs to be found. This configuration should limit the total number of sensors without limiting the amount and type of data captured to accurately describe the surrounding environment and safely control the vehicle.
Besides Tesla, two other pure vision autonomous driving efforts are Mobileye’s SuperVision and Baidu’s Apollo Lite.
SuperVision is a comprehensive intelligent driving system built by Mobileye. In terms of sensors, the only difference from Tesla Autopilot HW2.0 is that Tesla’s front trinocular setup becomes a front dual-camera setup (not a traditional stereo “binocular” arrangement), with one camera providing a narrow field of view at long range and the other a wide field of view at short range. In short, by removing one camera it can still see both wide and far. Mobileye thus achieves 360° coverage with 7 cameras.
In addition, Mobileye fits 4 surround-view cameras in the left and right rear-view mirrors and at the front and rear, for 360° imaging and automatic-parking perception. Last year, on the eve of the Beijing Auto Show, Lynk & Co unveiled its first all-electric model, the Zero Concept, equipped with Mobileye’s SuperVision system.
In addition, Baidu’s Apollo Lite also uses pure vision technology.
In terms of the front trinocular and rear cameras, Apollo Lite is no different from Tesla Autopilot, but for side perception Apollo Lite adds two surround-view cameras on the left and right, on top of Tesla’s two side-perception cameras, to complement coverage.
At present, both autonomous driving and assisted driving are growing rapidly, and I believe the real arrival of autonomous driving is not far off. On the way to that goal, whether the approach is pure vision or radar is not so important: as long as the technology is mature enough, it is a good product.
For consumers, it is also better to see products taking different approaches, since that means more choice. And with competition, prices will keep falling, which is good for the average consumer.