Neural network cities look like confusing places to live

There’s something delightfully impressionistic about Jasper van Loenen’s generative cities, as if the blur of buildings had been scraped together in oils, or collaged from a thousand magazine cutouts.


These strange environments are actually the result of a neural network that has been trained on images from Google Street View to learn the shapes and colours of urban surroundings, then to map its own versions onto a 3D map of virtual blocks.

“Reading about [neural networks], I found it interesting how they seem to teach the computer something without telling it what to do directly,” van Loenen told me. “It feels like they are forming their own view on a specific topic. I was wondering what it would think about a city when all you show it is an abstract representation of it.”


An artist and programmer, van Loenen pulled panoramic images from Google Street View, coupled with depth maps the company made using a laser scanner. “These are basically black-and-white images showing the silhouettes of all objects in the image, with objects that are close to the camera drawn in white, and silhouettes becoming darker as their objects are farther away,” he explained.
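To make that encoding concrete, here is a minimal sketch (hypothetical, not van Loenen’s code) of how such a depth map could be produced from per-pixel distances, with near objects drawn white and far objects fading to black:

```python
import numpy as np
from PIL import Image

def depth_to_image(depth, max_depth=50.0):
    """Convert per-pixel distances (metres) into a Street-View-style
    depth map: objects close to the camera white, farther ones darker."""
    d = np.clip(depth, 0.0, max_depth) / max_depth  # 0 = at camera, 1 = far
    grey = ((1.0 - d) * 255).astype(np.uint8)       # invert so near -> white
    return Image.fromarray(grey, mode="L")

# Invented toy scene: a wall 10 m away with a doorway jutting out at 3 m.
depth = np.full((256, 512), 10.0)
depth[100:200, 220:290] = 3.0
depth_to_image(depth).save("depth.png")
```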

Combining depth data with photographs, and selecting the pairs that matched up most cleanly, he fed these images into the pix2pix model. After about half a day of training, the network was able to generate its own creations. Van Loenen then used a separate program to generate a 3D “city” that represents buildings as cubes. He could explore this environment like a video game, with each frame fed to the network, which creates an image based on what it has learned about the relationship between depth maps and photos.
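The per-frame translation step could look something like the sketch below, assuming a pix2pix generator already trained and exported to TorchScript. The file name, the input size and the render_depth helper are all hypothetical stand-ins, since van Loenen’s actual pipeline isn’t published here:

```python
import numpy as np
import torch

# Hypothetical assumptions: a pix2pix generator traced to TorchScript,
# plus a render_depth(camera) helper that rasterises the cube "city"
# into an HxW depth map with values in [0, 1].
generator = torch.jit.load("pix2pix_streetview.pt").eval()

def frame_from_depth(depth_map):
    """Translate one depth-map frame into a photo-like image."""
    x = torch.from_numpy(depth_map).float()             # HxW in [0, 1]
    x = x.mul(2).sub(1)                                 # pix2pix expects [-1, 1]
    x = x.unsqueeze(0).unsqueeze(0).repeat(1, 3, 1, 1)  # 1x3xHxW
    with torch.no_grad():
        y = generator(x)                                # 1x3xHxW in [-1, 1]
    img = (y.squeeze(0).permute(1, 2, 0) + 1) * 127.5
    return img.clamp(0, 255).byte().numpy()             # HxWx3 uint8

# In the explorer's main loop (pseudo-usage):
#   depth = render_depth(camera)        # hypothetical rasteriser
#   show(frame_from_depth(depth))
```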

Welcome to glitchtown

The Street View data doesn’t tag separate regions, such as roads or people, so it’s up to the neural network to judge where these should be. The result is a hazy mesh of doorways and windows. Van Loenen also tried training the network on the Cityscapes dataset, which does tag different aspects of the environment – leading to a much less chaotic cityscape.

“This set gives a much cleaner result in terms of the 3D space it generates – buildings stand out much more clearly from their surroundings – but it was also missing some of the more interesting glitch aesthetics seen in the Street View version,” he said. “Also, I don’t like how much work went into manually tagging all these images. As I’m looking at what a computer ‘sees’, I don’t really want to use source material that needs so much editing by humans.”
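For contrast, a Cityscapes-style input is just a label map in which every class gets a flat colour, so the region boundaries the Street View version has to guess at are handed to the network directly. A toy sketch (the scene is invented; the palette values are the standard Cityscapes class colours):

```python
import numpy as np
from PIL import Image

# A few Cityscapes classes with their standard palette colours (RGB).
PALETTE = {
    "road":     (128, 64, 128),
    "building": (70, 70, 70),
    "sky":      (70, 130, 180),
}

# Invented toy scene: a band of sky over a building over a road.
label = np.empty((256, 512, 3), np.uint8)
label[:100]    = PALETTE["sky"]
label[100:200] = PALETTE["building"]
label[200:]    = PALETTE["road"]
Image.fromarray(label).save("labels.png")
```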


I asked van Loenen whether he could see these techniques being used by game developers, and he told me that there’s certainly potential for creating generative environments using real-world images, but that it currently stands as an aesthetic experiment. “I would love to get a Vive and get this to work in real-time in VR so you could actually explore the space, but I think I’ll need to find a way to improve the data first. The quality of Street View’s depth data is just too low.”

The data available – let alone the speed of computing – might not yet be good enough to explore van Loenen’s hallucinogenic streets in real-time, but the project remains a striking visual experience – like remembering a city you used to live in, a long time ago.

You can read more about Jasper van Loenen’s work on his website.

