Nvidia uses AI to make it snow on streets that are always sunny

The sunny weather in California is ideal for training self-driving cars, but it does have its drawbacks. After all, if your autonomous vehicle has only ever driven in perfect visibility, what happens when it runs into a bit of rain or snow? Researchers at Nvidia might have a solution, publishing details this week of an AI framework that lets computers imagine what a sunny street looks like when it’s raining, snowing, or even pitch-black outside. That’s important information for self-driving cars, but the work could have many more applications besides.

The research is based on an AI method that’s particularly good at generating visual data: a generative adversarial network, or GAN. GANs work by combining two separate neural networks — one that generates the data and another that judges it, rejecting samples that don’t look accurate. In this way, the AI teaches itself to produce better and better results over time. This sort of program is common in the industry and has been used to create all sorts of imagery, from fake celebrity faces to new clothing designs to nightmarish cats.
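To make that adversarial setup concrete, here is a minimal, hypothetical sketch in PyTorch (not Nvidia’s actual model): a generator proposes samples, a discriminator scores them as real or fake, and each network is trained against the other. Toy numeric data stands in for images.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2

# Generator: maps random noise to candidate samples.
generator = nn.Sequential(
    nn.Linear(latent_dim, 64), nn.ReLU(),
    nn.Linear(64, data_dim),
)
# Discriminator: scores samples as real or generated.
discriminator = nn.Sequential(
    nn.Linear(data_dim, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, data_dim) + 3.0          # toy "real" data
    fake = generator(torch.randn(32, latent_dim))   # generated data

    # Train the discriminator to accept real samples and reject generated ones.
    d_loss = bce(discriminator(real), torch.ones(32, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator to produce samples the discriminator accepts as real.
    g_loss = bce(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```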

Nvidia’s research, though, has one big advantage over existing GANs: it learns with much less supervision. Generally, programs of this sort need labelled datasets to generate data. As Nvidia researcher Ming-Yu Liu explained to The Verge, this means that if you’re making a GAN that turns a daytime scene into a nighttime one, you’d need to feed it pairs of images of the same location taken during the day and at night. It would then study the difference between the two to generate new examples.

But Nvidia’s new program doesn’t need this prep work: it operates without labelled datasets, yet manages to produce results of similar quality. This could be a major advantage for AI researchers, as it frees up time they would otherwise have to spend sorting their training data.
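A hedged sketch of the difference: with paired data, a translator network can be trained directly against each image’s known counterpart; without pairs, one common workaround is a cycle-consistency constraint, where translating day to night and back should reproduce the original image. This illustrates the general unpaired idea rather than Nvidia’s specific method, and all of the names below are made up for illustration.

```python
import torch
import torch.nn as nn

def make_translator():
    # Tiny stand-in for an image-to-image network (real ones are convolutional).
    return nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 256))

day_to_night = make_translator()
night_to_day = make_translator()
l1 = nn.L1Loss()

day_batch = torch.randn(8, 256)    # unpaired daytime "images" (flattened toys)
night_batch = torch.randn(8, 256)  # unpaired nighttime "images"

# Paired setting (needs matched pairs): compare each translation with its known
# counterpart, e.g. l1(day_to_night(day_batch), matching_night_batch).

# Unpaired setting: no matching target exists, so require that a round trip
# through both translators reproduces the input; an adversarial (GAN) loss on
# realism would be added on top during actual training.
cycle_loss = l1(night_to_day(day_to_night(day_batch)), day_batch) + \
             l1(day_to_night(night_to_day(night_batch)), night_batch)
```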

“We are among the first to tackle the problem,” Ming-Yu told The Verge. “[And] there are many applications. For example, it rarely rains in California, but we’d like our self-driving cars to operate properly when it rains. We can use our method to translate sunny California driving sequences to rainy ones to train our self-driving cars.”

And the program isn’t limited to translating pictures of streets, of course. Ming-Yu and his colleagues also tested it on pictures of cats and dogs, turning images of one breed into another, and used it to change the expression of people’s faces in photographs. It’s similar to the technology used in face-changing apps like FaceApp, and, like other research in this area, it raises fears about AI being used to create fake imagery that could trick people online.

“This work can be used for image editing,” suggests Ming-Yu, although he adds that there are no concrete applications for the program just yet. “We’re making this research available to our product teams and customers. I can’t comment on the speed or extent of their adoption.”

You can read the research paper in full here, and the work is also being presented this week at the NIPS AI conference in Long Beach, California.

Comments

While this seems pretty interesting for vision-based stuff, one of the worst parts of snow for autonomous vehicles is the way it refracts or reflects the light beam, which this doesn’t appear to address.

Not to mention the road conditions won’t match the vision.

How are the cars going to address lack of traction on a snow-covered road? Snow alongside the road isn’t going to affect the physical driving of the car.

It would make way more sense to test self-driving cars in a place with actual snow, like say…the Motor City? (Not that it has snowed yet this year, but it’s bound to eventually..)

I don’t think they’re saying this can replace actual testing in those conditions, but instead that this allows the data they already capture to do "double duty". I believe Google/Waymo is testing in Detroit right now, I assume others will as well.

The comments here don’t seem to understand what the point is. The idea is creating vision training data for weather patterns that aren’t common to an area. If it doesn’t snow for 50 years, you still need that training data. The technology isn’t going to wait 50 years for the weather pattern to happen. Traction is a separate issue and not addressed visually.

I get that. However, when it snows, it doesn’t just snow neatly to each side of the road.

This is supplementary data; it’s not the only input the car is receiving, nor is it the only snow training the AI will get.

Other than making sure the sensors can still detect the road/signs with snow how does this data help? Are they able to trust the GPS to maintain position on the road when it isn’t visible?

Don’t dismiss the importance of recognizing road signs in inclement weather. On top of that, the AI needs to be able to recognize trees, people, parked cars, sidewalks, breakdown lanes, curbs, and all sorts of other things that don’t sit right in the roadway.

These are learning AIs, not just being programmed with if-thens. We want to feed in as much information as possible, and if we can double up on existing data, why wouldn’t we add that to the algorithm’s training?

This is what one of the so-called leaders in AI came up with? Oh, and why turn a deciduous tree into an evergreen?

What if snow, sludge, etc. obscures the vehicle’s outside sensors?
