A new dataset from Google shows the features on the surface of the Earth in near real time, the company announced Thursday. The tool, called Dynamic World, uses deep learning and satellite imagery to develop a high-resolution land cover map that shows which bits of land have features like trees, crops, or water.
Land cover maps usually take a long time to produce, and there are big gaps between the time images are taken and when the data is published. They also often don’t have a detailed breakdown of what’s on the ground in a particular area — a city would be classified as “built-up” (a designation for human-altered landscapes) even if there are big sections with parks, for example.
Dynamic World classifies the land cover type of every 1,100-square-foot section of the planet, Google said. For each section, it estimates how likely the land is to be covered by each of nine cover types: water, flooded vegetation, built-up areas, trees, crops, bare ground, grass, shrub / scrub, and snow / ice. Google detailed its system, developed with the World Resources Institute, in a paper published in Nature’s Scientific Data.
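The idea behind those per-section likelihoods can be sketched in a few lines of code. This is a toy illustration, not Google’s actual model or data pipeline: the class names come from the article, while the probability values and the `most_likely_cover` helper are invented for the example.

```python
# Toy sketch (not Google's pipeline): given per-section probabilities for
# the nine Dynamic World cover types, pick the most likely label.
# The nine class names come from the article; everything else is illustrative.

DYNAMIC_WORLD_CLASSES = [
    "water", "flooded_vegetation", "built_up", "trees", "crops",
    "bare_ground", "grass", "shrub_scrub", "snow_ice",
]

def most_likely_cover(probabilities):
    """Return (class_name, probability) for the highest-scoring cover type."""
    if len(probabilities) != len(DYNAMIC_WORLD_CLASSES):
        raise ValueError("expected one probability per cover type")
    best = max(range(len(probabilities)), key=probabilities.__getitem__)
    return DYNAMIC_WORLD_CLASSES[best], probabilities[best]

# A hypothetical dense-urban section: mostly built-up, with some grass.
section = [0.02, 0.01, 0.60, 0.05, 0.01, 0.03, 0.20, 0.05, 0.03]
label, p = most_likely_cover(section)
print(label, p)  # built_up 0.6
```

Keeping the full probability vector, rather than only the winning label, is what lets a map show mixed areas — a city block that is 60 percent likely built-up and 20 percent likely grass reads differently from one that is 95 percent built-up.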
The above screenshot of New York City, for example, shows that most of the area is built-up (red). But there are pockets of grass (green) and shrub / scrub (yellow) for the city’s major parks.
The Dynamic World model produces over 5,000 images a day, and the land cover data is continuously updated. That lets researchers and policymakers quickly see the impacts of events like fires or hurricanes and respond to those changes more effectively.
“If the world is to produce what is needed from land, protect the nature that remains and restore some of what has been lost, we need trusted, near real-time monitoring of every hectare of the planet,” Craig Hanson, vice president of food, forests, water, and the ocean at the World Resources Institute, said in Google’s announcement.