Stanford Research Shows How Tesla Vehicles Can See Through Fog


In a new video, Dr. Know-It-All on YouTube explains that he wanted to know whether Tesla cars could see through fog. He cited a Stanford research article that shows how such a system can see through fog. The article, published last year, noted that optical imaging techniques such as light detection and ranging (LiDAR) are essential tools in remote sensing, robotic vision, and autonomous driving. Personally, this has me a bit curious, since Tesla doesn’t use LiDAR and will soon no longer use radar, instead relying on “pure vision,” as Elon Musk recently shared on Twitter.

In his video, Dr. Know-It-All noted that, as he recapped the research paper, he didn’t think Tesla was using all of the research from it. “The research is very specialized and it needs a laser system and also some sort of cascade diode sensor.” He explained that those things are LiDAR-based rather than vision-based, but thinks there could be a passive, AI-driven way of utilizing the research to help create self-driving ability in extreme conditions such as fog, heavy rain, snow, and even dust, for those who live in dusty environments.

“The amazing part about this research is that basically they can see through a one inch or a two and a half centimeter piece of foam. I mean, it’s basically something that you literally can’t look through,” he said.

He suggested that Tesla vehicles are not going to be able to do that, but that they may be able to look through something like fog.

Recap of the Research Paper

In the video, Dr. Know-It-All goes over a few quotes from the research paper and shares his thoughts. Below are the quotes from the paper.

“We introduce a technique that co-designs single-photon avalanche diodes, ultra-fast pulsed lasers, and a new inverse method to capture 3D shape through scattering media.”

“Our technique, confocal diffuse tomography, may be of considerable value to the aforementioned applications.”

“Current LiDAR systems fail in adverse conditions where clouds, fog, dust, rain, or murky water induce scattering. This limitation is a critical roadblock for 3D sensing and navigation systems, hindering robust and safe operation.”

“Here we introduce a technique for noninvasive 3D imaging through scattering media: confocal diffuse tomography (CDT). We apply this technique to a complex and challenging macroscopic imaging regime, modeling and inverting the scattering of photons that travel through a thick diffuser approximately equal to 6 transport mean free paths, propagate through free space to a hidden object, and scatter back again through the diffuser.”

“Our insight is that a hardware design specifically patterned after confocal scanning systems (such as commercial LiDARs), combining emerging single-photon-sensitive, picosecond-accurate detectors, and newly developed signal processing transforms, allow for an efficient approximate solution to this challenging inverse problem.”

In essence, he noted that the researchers send out a pulsed laser at the picosecond level; 1 picosecond equals 0.000000000001 (10⁻¹²) seconds, so it is an extremely short laser pulse. Both the sensor and the laser beam are confocal, which means they are at the same place in space. Kind of like pizza with pineapples being in my belly.
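To get a feel for why picosecond timing matters, here is a minimal sketch (my own illustration, not code from the paper) of how a detector’s timing-bin width maps to a depth “slice,” using the round-trip time of flight:

```python
# Light travels ~0.3 mm per picosecond, so each timing bin of the
# detector corresponds to a thin slice of round-trip depth.
C = 299_792_458  # speed of light, m/s

def depth_resolution_mm(bin_width_ps: float) -> float:
    """Depth slice thickness for a given timing-bin width."""
    bin_width_s = bin_width_ps * 1e-12
    # Divide by 2 because the pulse travels out to the object and back.
    return C * bin_width_s / 2 * 1000  # millimetres

print(depth_resolution_mm(1.0))  # ~0.15 mm per 1 ps bin
```

In other words, a detector accurate to a single picosecond can distinguish depth differences of roughly 0.15 mm, which is what makes slicing a scene by photon arrival time feasible at all.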

Next, the pulse goes out and hits the piece of foam. Most of the photons scatter off its surface, but some eventually make it through and hit the object on the other side. Some of those photons scatter backward, and a few of them pass back through the foam to reach the detector.

“What they’re very cleverly doing is slicing this up, and that’s tomography. If you’ve ever had a CAT scan, that’s [computed axial] tomography, and tomography basically means writing with slices. So basically you take these slices at a really fast speed and then you reconstruct those to create the actual image you’re looking for.”
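The slicing idea can be illustrated with a toy time-of-flight example: photon arrival times are binned into a histogram, and the dominant bin maps to a depth via the round-trip travel time. All numbers below are invented for illustration and have nothing to do with the paper’s actual measurements:

```python
import numpy as np

# Toy sketch: simulate photon arrival times clustered around a true
# round-trip time, bin them into "slices," and read off the depth.
rng = np.random.default_rng(0)
arrival_ps = rng.normal(2050, 20, size=1000)  # photon arrivals, picoseconds

bin_width_ps = 100
edges = np.arange(0, 4000 + bin_width_ps, bin_width_ps)
hist, _ = np.histogram(arrival_ps, bins=edges)

# The peak bin gives the dominant round-trip time, hence the depth.
peak_bin = int(np.argmax(hist))
c = 3e8  # approximate speed of light, m/s
depth_m = c * (edges[peak_bin] * 1e-12) / 2
```

A round-trip time near 2,000 ps corresponds to an object roughly 0.3 m away; the reconstruction step in the paper is far more sophisticated, but the slice-then-reconstruct intuition is the same.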

“CDT enables noninvasive 3D imaging through thick scattering media, a problem which requires modeling and inverting diffusive scattering and free-space propagation of light to a hidden object and back. The approach operates with low computational complexity at relatively long range for large, meter-sized imaging volumes.”

He explained that the researchers are describing how they create this image using a very, very short pulsed laser with a cascading, ultra-sensitive diode, which is, in essence, a very finely tuned, super-sensitive camera. The quote below is where Dr. Know-It-All points out where Tesla might use this:

“We introduce an efficient approximation to this model, which takes advantage of our confocal acquisition procedure, where the illumination source and detector share an optical path, and measurements are captured by illuminating and imaging a grid of points on the surface of the scattering medium.”

“This approximation results in a simplified convolutional image formation model.”

He pointed out that the researchers are talking about a convolutional image formation model. He has done videos on convolutional neural networks, and noted that viewers who have seen them may see how this is all syncing up.
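The “convolutional image formation model” in the quote can be sketched in a toy 1D form: the measurement is the hidden signal blurred (convolved) with a scattering kernel. The kernel shape and sizes below are made up purely for illustration, not taken from the paper:

```python
import numpy as np

def blur(hidden: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Forward model: circular convolution computed via the FFT."""
    return np.real(np.fft.ifft(np.fft.fft(hidden) * np.fft.fft(kernel)))

hidden = np.zeros(64)
hidden[20] = 1.0                        # a point-like hidden object
kernel = np.exp(-np.arange(64) / 4.0)   # toy diffusion/scattering kernel
kernel /= kernel.sum()                  # normalize so energy is preserved

measurement = blur(hidden, kernel)      # what the sensor would record
```

Because the blur is a convolution, it becomes a simple per-frequency multiplication after an FFT, which is exactly what makes the inverse problem tractable.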

“We seek to recover the hidden object albedo ρ. In this case, a closed-form solution exists using the Wiener deconvolution filter and a confocal inverse filter A⁻¹ used in non-line-of-sight imaging.”

“Notably, the computational complexity of this method is O(N³ log N) for an N×N×N measurement volume, where the most costly step is taking the 3D Fast Fourier Transform.”
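As a rough sketch of the Wiener deconvolution step the paper names, here it is in 1D for brevity (the real method operates on a 3D volume, which is where the O(N³ log N) cost of the 3D FFT comes from); the kernel, sizes, and SNR value are all assumptions for illustration:

```python
import numpy as np

def wiener_deconvolve(measurement: np.ndarray, kernel: np.ndarray,
                      snr: float = 100.0) -> np.ndarray:
    """Closed-form recovery of a signal blurred by a known kernel."""
    H = np.fft.fft(kernel, len(measurement))
    M = np.fft.fft(measurement)
    # Wiener filter: conj(H) / (|H|^2 + 1/SNR); the 1/SNR term keeps
    # the division stable where the kernel kills a frequency.
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft(W * M))

rng = np.random.default_rng(1)
hidden = np.zeros(64); hidden[20] = 1.0          # point-like hidden object
kernel = np.exp(-np.arange(64) / 4.0); kernel /= kernel.sum()
blurred = np.real(np.fft.ifft(np.fft.fft(hidden) * np.fft.fft(kernel)))
noisy = blurred + 1e-4 * rng.standard_normal(64)  # add sensor noise
recovered = wiener_deconvolve(noisy, kernel)      # peak re-emerges at 20
```

Everything here is FFTs and elementwise arithmetic, which is why the overall cost is dominated by the transform itself rather than by an expensive iterative solver.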

The N×N×N represents a three-dimensional measurement volume. Dr. Know-It-All noted that what the researchers are saying in the quotes above is that they have found a way to solve this problem in a timely manner. This is important, since Tesla is focusing on complete vision and doesn’t have LiDAR systems, lasers, or cascading diodes. However, Tesla can still use this research to improve its vision in extreme weather circumstances such as fog, dust, or heavy rain.

“What I’m thinking is that they are able to take a temporal tomography as opposed to a spatial one. So basically what they can do is they can slice their vision up. They can pretty much probably look at things just in front of them because that’s the direction of travel of the car.”

Another thing he noted is that in these extreme conditions the car will most likely be driving slowly, as a human would if they were operating the vehicle, and while it is driving slowly, the cameras could be slicing up all of the views and looking at what they can see.

“As it moves through space, effectively what you’re doing is moving the relationship between the car and let’s say there’s a deer in the road. That’s a really bad scenario. You’ve got the car here, you’ve got a deer here and you’ve got fog in between. What happens here is that the car moves and it’s changing its relative orientation and location relative to this deer. And what it can do by taking the different cameras is slice up the views and it can start to get a few photons at a time which describe that there is something there and it can start to resolve that object. And my prediction is that when we take a look at this version 9 of the Full Self Driving Beta that these cars will be able to see further through rain and fog and etc. than human beings can.”

Expanding on that thought, he noted that the system can hold onto the memory of all the pixels that come in slowly, and form an image of the object over time. Humans, by comparison, depend on being able to see it instantaneously.
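That accumulate-over-time idea can be shown with a toy simulation. The numbers here are entirely made up (an 8×8 scene, fog that randomly passes about 5% of pixels each frame) and are only meant to illustrate the intuition, not Tesla’s or the paper’s actual processing:

```python
import numpy as np

# Toy sketch: each frame, fog lets only a few pixels of a hidden shape
# through, but averaging many frames slowly builds up the full image,
# like a long telescope exposure.
rng = np.random.default_rng(2)

shape = np.zeros((8, 8))
shape[2:6, 3:5] = 1.0            # the hidden object (a deer, say)

n_frames, pass_rate = 500, 0.05  # fog passes ~5% of pixels per frame
accumulated = np.zeros_like(shape)
for _ in range(n_frames):
    mask = rng.random(shape.shape) < pass_rate  # pixels that got through
    accumulated += shape * mask

# Normalize by the expected number of hits to estimate the scene.
estimate = accumulated / (n_frames * pass_rate)
```

Any single frame is nearly useless, but the estimate converges on the hidden shape as frames accumulate, which is the long-exposure analogy he draws with the Hubble telescope below.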

Humans aren’t able to see through dense fog because our eyes don’t work like that. He noted that the cameras on Tesla vehicles can work like a telescope, and used the Hubble telescope as an example: it takes hours of exposures to create those beautiful space photos.

“What the Tesla can do, obviously in a more real-time fashion, is it can pull in the few pixels that are coming back. The few photons that are coming back to its detector and it can basically reconstruct a three-dimensional scene from very little data. Humans have driven about 30 million miles on Teslas — have driven through dust storms, have driven through fog, have driven through blizzards, have driven through heavy rain, etc. We’re providing that data to Tesla. It’s able to curate the data.”

He noted that through this you can train the entire AI system: as you’re driving, the cameras see the deer, and then when you see it, you brake. Essentially, Tesla would be doing pseudo-CDT, he explained. You can watch the full video here.