Draw the cats of your nightmares using machine learning
Neural networks are an ever-reliable source of nightmares. Google’s DeepDream brought computer-generated phantasms into the public consciousness, and since then programmers and artists have played around with training AI systems to spawn everything from imaginary cities to beaches full of genitalia.
The latest experiment doing the online rounds is Christopher Hesse’s Image-to-Image Demo – a set of machine-learning trials that let users turn doodles into buildings, cats, shoes and handbags.
Hesse made the browser-based tool using a model called pix2pix, the same one behind Mario Klingemann’s experiment mapping French singer Françoise Hardy’s face onto Kellyanne Conway’s words. There’s a full account of Hesse’s methods here, but the basic idea is that the network is trained on large numbers of stock photos, and then judges from a sketched input what image it should generate.
The “edges2cats” demo, for example, was trained on roughly 2,000 pictures of cats, each paired with an automatically generated edge sketch of the same photo. Hesse notes that the outputs aren’t always… anatomically correct: “Some of the pictures look especially creepy, I think because it’s easier to notice when an animal looks wrong, especially around the eyes. The auto-detected edges are not very good and in many cases didn’t detect the cat’s eyes, making it a bit worse for training the image-translation model.”
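Those “auto-detected edges” come from running an edge-detection filter over each photo to produce the sketch half of every training pair. Hesse’s actual pipeline used a more sophisticated detector, but a plain Sobel gradient filter illustrates the idea; the function and the toy image below are illustrative, not taken from the project:

```python
import numpy as np

def sobel_edges(image, threshold=1.0):
    """Return a binary edge map from a 2-D grayscale array
    using a simple Sobel gradient filter."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)
    ky = kx.T  # vertical-gradient kernel is the transpose
    h, w = image.shape
    edges = np.zeros((h, w), dtype=bool)
    # Slide both kernels over the interior pixels; mark a pixel
    # as an edge when the gradient magnitude exceeds the threshold.
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = image[y - 1:y + 2, x - 1:x + 2]
            gx = np.sum(patch * kx)
            gy = np.sum(patch * ky)
            edges[y, x] = np.hypot(gx, gy) > threshold

    return edges

# A toy "photo": a bright square on a dark background.
photo = np.zeros((8, 8))
photo[2:6, 2:6] = 1.0
sketch = sobel_edges(photo)  # edge pixels trace the square's outline
```

In the real dataset, each (sketch, photo) pair teaches the network which photographic details plausibly fill in a given outline – which is why poorly detected edges around the eyes degrade the results Hesse describes.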
Making your own furry monstrosities is fun, but the most convincing demo is “facades” – which turns Mondrian-style blueprints of windows, balconies, columns and doors into plausible buildings. Less terrifying, yes, but potentially more useful for artists or game developers looking to build a believable city on the cheap.
You can play around with the demos in-browser here.