Microsoft’s AI-powered Sketch2Code builds websites and apps from drawings
Called Sketch2Code, the tool aims to “empower every developer and every organisation to do more with AI”, explained Tara Shankar Jana, a senior product manager at Microsoft AI. It was born out of the “intrinsic” problem of having to send a picture of a wireframe or app design, sketched on a whiteboard or paper, to a designer who then builds an HTML prototype.
To streamline this process, Microsoft developed a web-based application that cuts out the extra human step (in this case, the designer). Instead, photographs of the sketches are sent to AI servers running on Microsoft’s Azure cloud infrastructure.
Drag-and-drop website builders are nothing new. Plenty of companies offer services that turn custom designs into a digital workspace, but this is the first to use artificial intelligence to complete the design.
Sketch2Code’s AI works by running submitted images against a pre-built AI model that generates an HTML code base and, from that, a working app. At the centre of the system is something called a “custom vision object prediction model”, essentially an image-recognition model trained on datasets of hand-drawn design elements.
The model identifies basic HTML elements such as buttons, labels and text boxes, allowing it to predict when those elements are present in any given image. It can also recognise handwritten text within the boxes, enabling it to produce a fully formed app or webpage.
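The final step described above, turning detected elements into markup, can be pictured with a short sketch. This is not Microsoft’s actual implementation: the element kinds, the prediction format (a bounding box plus recognised handwriting), and the top-to-bottom layout rule are all assumptions chosen for illustration.

```python
# Hypothetical sketch: map object-detection predictions from a hand-drawn
# wireframe into a minimal HTML fragment. Field names and layout logic
# are assumptions, not the real Sketch2Code code.

def element_to_html(el):
    """Convert one detected wireframe element into an HTML snippet."""
    kind = el["kind"]
    text = el.get("text", "")  # handwriting recognised inside the box
    if kind == "button":
        return f"<button>{text}</button>"
    if kind == "label":
        return f"<label>{text}</label>"
    if kind == "textbox":
        return f'<input type="text" placeholder="{text}">'
    return f"<!-- unrecognised element: {kind} -->"

def render_page(elements):
    """Order elements top-to-bottom by bounding-box y and wrap in a page."""
    ordered = sorted(elements, key=lambda e: e["box"][1])
    body = "\n".join(element_to_html(e) for e in ordered)
    return f"<html>\n<body>\n{body}\n</body>\n</html>"

# Example predictions: box is (x, y, width, height) in image pixels.
detections = [
    {"kind": "button", "box": (40, 200, 80, 30), "text": "Submit"},
    {"kind": "label", "box": (40, 20, 120, 20), "text": "Name"},
    {"kind": "textbox", "box": (40, 60, 200, 30), "text": "Enter name"},
]

print(render_page(detections))
```

Sorting by the bounding box’s vertical position is the simplest way to recover a plausible reading order from free-form sketches; a real system would also need to handle columns and nesting.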
The tool is available to developers on GitHub and, according to Shankar Jana, the code it generates is not tied to HTML: it can be extended to XAML and the Universal Windows Platform.