How Google Earth works

Google Earth begins with photography – an awful lot of it. Google mixes satellite imagery taken by its partners with more conventional aerial shots and knits it into one huge “virtual texture” that covers the globe. The photography used in this “base layer” varies: resolutions reach as high as 0.15m per pixel in the most detailed areas. England is now mapped at 0.5m per pixel, while Antarctica is captured at around a tenth of that fidelity.


The imagery is regularly updated as new providers or satellites come online, but it can be one to three years before an image is processed and added to the base layer. The clever bit is how this imagery is mapped onto the virtual globe on your PC. Mapping the Earth is no small feat. According to calculations by one of the engineers who built Google Earth, if you stored only one pixel of colour data for every square km of the earth’s surface, you’d still end up with a 2.4GB image measuring 40,000 x 20,000 pixels, and this would only resolve features 2km wide. No PC or graphics card could cope with something of this size and, when you bear in mind that Google Earth needs to resolve features of less than one metre, you begin to comprehend the scale of the storage, streaming and processing involved.
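To put numbers on that, here’s a back-of-the-envelope sketch of the same arithmetic. The 40,000km figure is the Earth’s equatorial circumference, and the three bytes per pixel assumes 24-bit colour; everything else follows from the article’s own example.

```python
# Rough storage arithmetic for a single equirectangular image of the
# Earth: width spans the ~40,000km circumference, height half of that.
# Assumes 3 bytes per pixel (24-bit colour); figures are illustrative.

def base_layer_size_gb(metres_per_pixel: float, bytes_per_pixel: int = 3) -> float:
    width_px = 40_000_000 / metres_per_pixel   # 40,000km circumference
    height_px = width_px / 2                   # pole-to-pole is half that
    return width_px * height_px * bytes_per_pixel / 1e9

# One pixel per square kilometre, as in the article: roughly 2.4GB.
print(f"{base_layer_size_gb(1000):,.1f} GB at 1km per pixel")
# At 1m per pixel the same image swells by a factor of a million,
# into petabyte territory, hence the need for clever streaming.
print(f"{base_layer_size_gb(1):,.0f} GB at 1m per pixel")
```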

So how does Google Earth manage it? The software takes the giant multi-terabyte base layer texture and uses a feature derived from mip-mapping to stream only the most relevant parts of it to your PC. Mip-mapping arose in the early days of 3D graphics as a way of saving bandwidth and processing power during real-time rendering. When you see a mip-mapped surface close up, you see the texture in all its high-resolution glory. However, as that surface recedes into the distance, the renderer swaps the texture for progressively less detailed variants, saving the graphics card from drawing detail you wouldn’t see properly anyway.
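As a sketch of the underlying idea (the function and constants here are hypothetical, not Google’s code), the level of detail a renderer picks is essentially the base-2 logarithm of how much the texture is being shrunk on screen:

```python
import math

def mip_level(texture_px: int, on_screen_px: float) -> int:
    """Pick the mip level whose resolution best matches the surface's
    apparent size. Level 0 is the full-resolution texture; each level
    up halves the resolution in each dimension."""
    max_level = int(math.log2(texture_px))
    if on_screen_px <= 0:
        return max_level                      # vanishingly small: tiniest mip
    level = int(math.log2(texture_px / on_screen_px))
    return min(max(level, 0), max_level)

# A 1,024-texel texture filling 1,024 pixels is drawn at full detail...
print(mip_level(1024, 1024))   # 0
# ...but shrunk to 64 pixels, level 4 (a 64-texel mip) is plenty.
print(mip_level(1024, 64))     # 4
```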

Google Earth does something similar, using a stack of mip-maps representing the surface of the earth as seen from different distances, and only drawing a narrow column of this stack at any time. Around the focal point of the window you get the highest-resolution texture when zoomed in, but outside that focal point the software intelligently falls back on lower-resolution mip-maps in a way that balances image quality and performance. As your viewpoint moves across the planet, the algorithm works out where you need the largest virtual textures and pages them from the server to the hard disk to system memory to texture memory. This ensures only the most useful levels of detail are sent to the graphics card at any time.
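A simplified sketch of that paging decision might look like the following. The tile addressing follows the common web-map quadtree convention, and the altitude thresholds are invented for illustration rather than taken from Google Earth itself.

```python
import math

def lod_for_altitude(altitude_m: float, max_lod: int = 20) -> int:
    """Each level of detail halves the ground distance a tile covers, so
    the level rises as the camera descends. Constants are illustrative."""
    level = int(math.log2(1e7 / max(altitude_m, 1.0)))
    return min(max(level, 0), max_lod)

def tiles_around(lat: float, lon: float, lod: int, radius: int = 1):
    """The block of tile addresses around the focal point at this LOD:
    the set a streamer would page from server to disk to RAM to GPU."""
    n = 2 ** lod                                  # tiles per axis
    x = int((lon + 180.0) / 360.0 * n)
    y = int((90.0 - lat) / 180.0 * n)
    return [(lod, (x + dx) % n, min(max(y + dy, 0), n - 1))
            for dy in range(-radius, radius + 1)
            for dx in range(-radius, radius + 1)]

# Hovering 100km above London: a middling LOD, and the 3x3 block of
# tiles around the viewpoint is what gets fetched at full resolution.
lod = lod_for_altitude(100_000)
print(lod, tiles_around(51.5, -0.13, lod))
```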


While this is happening, Google Earth overlays the vector graphics, text and bitmap graphics contained in the various data layers. These overlays are controlled by KML, an XML-based markup language with the same tag-based structure of nested elements and attributes. KML can specify icons and labels, locations and even viewpoints, and can tell the Google Earth app to overlay an image, fetch data from an online database or web page, or display a textured 3D object. These features allow Google to integrate data from its own search databases, but they also allow third parties to work with their own databases or online content. As KML isn’t a huge jump for anyone used to working with XML, it also enables advanced users to employ Google Earth as a basis for their own geographic information systems, apps or mashups. See http://earth.google.com/gallery/index.html for some examples.
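As a concrete illustration, here is a minimal hand-written KML file of the kind the format makes possible. The element names and structure are standard KML 2.2; the placemark itself, its coordinates and camera values are invented for the example.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <!-- Label and description shown when the icon is clicked -->
    <name>Big Ben</name>
    <description>An example placemark pinned to a coordinate.</description>
    <!-- Optional viewpoint: fly the camera here when opened -->
    <LookAt>
      <longitude>-0.1246</longitude>
      <latitude>51.5007</latitude>
      <range>500</range> <!-- camera distance in metres -->
      <tilt>45</tilt>
    </LookAt>
    <!-- The geometry itself: longitude,latitude,altitude -->
    <Point>
      <coordinates>-0.1246,51.5007,0</coordinates>
    </Point>
  </Placemark>
</kml>
```

Saved with a .kml extension, a file like this opens directly in Google Earth; zipped up along with any referenced images, it becomes a .kmz.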

