The technology helping blind people to see

Earlier this week, Facebook updated its iOS app to offer voice descriptions of photographs uploaded by its users. It is a big step forward for accessibility, but Facebook is far from the only company looking to make the world more inclusive for the visually impaired.

In fact, rapid advances in artificial intelligence, machine vision and image-recognition technology are opening up the digital world to the blind and visually impaired – and helping them to interact with their surroundings.

Smart Liquids

One interesting example is Austrian start-up BLITAB, which has created the first tactile tablet for blind and visually impaired people, dubbed “the iPad for the blind”. As Kristina Tsvetanova, co-founder and CEO at BLITAB Technology, explains, the device looks similar to an e-reader but displays small physical bubbles instead of a screen, which means users can view whole pages of braille text at once, without any mechanical elements.

“It offers a completely new user experience for braille and non-braille readers via touch navigation, text-to-speech output and Perkins-style keyboard application. It also enables the direct conversion of any text file into braille and obtains information via NFC tags. BLITAB is not just a tablet, it is a platform for all existing and future software applications for blind readers,” she says.
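
BLITAB has not disclosed how its conversion pipeline works internally, but the basic idea of mapping plain text onto braille cells is straightforward to illustrate. The sketch below is a toy in Python, not BLITAB’s code: it converts lowercase letters into uncontracted (Grade 1) braille using the Unicode braille patterns block.

```python
# A minimal sketch of uncontracted (Grade 1) braille conversion: lowercase
# letters are mapped to Unicode braille cells (U+2800 block). This only
# illustrates the general idea of turning a text file into braille.

# Dot numbers for each letter in standard English braille.
LETTER_DOTS = {
    "a": (1,), "b": (1, 2), "c": (1, 4), "d": (1, 4, 5), "e": (1, 5),
    "f": (1, 2, 4), "g": (1, 2, 4, 5), "h": (1, 2, 5), "i": (2, 4),
    "j": (2, 4, 5), "k": (1, 3), "l": (1, 2, 3), "m": (1, 3, 4),
    "n": (1, 3, 4, 5), "o": (1, 3, 5), "p": (1, 2, 3, 4),
    "q": (1, 2, 3, 4, 5), "r": (1, 2, 3, 5), "s": (2, 3, 4),
    "t": (2, 3, 4, 5), "u": (1, 3, 6), "v": (1, 2, 3, 6),
    "w": (2, 4, 5, 6), "x": (1, 3, 4, 6), "y": (1, 3, 4, 5, 6),
    "z": (1, 3, 5, 6),
}

def to_braille(text: str) -> str:
    """Convert plain text to Unicode braille cells (letters and spaces only)."""
    cells = []
    for ch in text.lower():
        if ch in LETTER_DOTS:
            # Unicode braille: U+2800 plus a bitmask, with bit (n-1) set for dot n.
            mask = sum(1 << (dot - 1) for dot in LETTER_DOTS[ch])
            cells.append(chr(0x2800 + mask))
        elif ch == " ":
            cells.append("\u2800")  # blank braille cell
    return "".join(cells)

print(to_braille("blitab"))  # ⠃⠇⠊⠞⠁⠃
```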

Although coy about the exact details, Tsvetanova reveals that the device works by using innovative smart liquid technology that she claims offers “fundamentally new capabilities of re-forming materials” and that the tablet “rethinks traditional perceptions of actuation based on mechanics”.

A key limitation of existing braille displays is that they provide just one line of braille text – much less useful for reading books or long documents – and don’t allow users to “view” other tactile applications such as images or graphs. According to Tsvetanova, it is often prohibitively expensive to convert a textbook containing tabular data and pictorial information to braille, with costs reaching $40,000 (£28,000).

As well as reproducing 14 lines of braille text, BLITAB will also output images and graphics, combining the tactile experience with text-to-speech capabilities. This function directly addresses blind users who are not braille readers and who want to experience digital tactile images for the first time.

Last month, BLITAB demonstrated a new keyboard-free user interface with a touch screen that supports text-to-speech navigation. Tsvetanova says the company eventually plans to create a high-end model with more features, costing in the region of €2,500 (£2,025), and a lower-end device priced at €500 (£405) that focuses solely on the braille display.

Seeing AI

As well as smaller start-ups, some of the larger global players are also actively considering how they can move into this space. One example is the social networking giant Facebook, which is currently combining image-recognition technology with MemNets (Memory Networks) to develop groundbreaking new applications for visually impaired users.

MemNets add a form of short-term memory to the neural networks that power the company’s deep-learning systems, enabling them to comprehend language in ways more similar to humans. According to Facebook, a demo MemNets system trained to read and answer questions about a short synopsis of The Lord of the Rings has now been scaled up to process data sets exceeding 100,000 questions. In the future, the company also hopes to use the technology to allow people to ask questions about the content of photographs, so that visually impaired users are not left out when friends share photos.
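
Facebook has not published the internals of its MemNets system, but the core idea – a soft, attention-based lookup over a bank of memorised sentences – can be sketched in a few lines. The toy below uses random bag-of-words embeddings purely for illustration; a real memory network learns these embeddings from question–answer pairs, so the sentence it picks here is not guaranteed to be the right one.

```python
# A toy sketch (NumPy only) of the "soft memory lookup" behind memory networks:
# story sentences become memory slots, the question becomes a query, and
# attention weights select the most relevant memory. Embeddings are random
# stand-ins for weights a real MemNets model would learn.
import numpy as np

rng = np.random.default_rng(0)

story = [
    "frodo carries the ring",
    "sam follows frodo",
    "gandalf falls in moria",
]
question = "who carries the ring"

# Shared vocabulary and a random embedding matrix (stand-in for learned weights).
vocab = sorted({w for s in story + [question] for w in s.split()})
word_to_id = {w: i for i, w in enumerate(vocab)}
embed = rng.normal(size=(len(vocab), 16))

def encode(sentence: str) -> np.ndarray:
    """Bag-of-words sentence embedding: mean of its word vectors."""
    ids = [word_to_id[w] for w in sentence.split()]
    return embed[ids].mean(axis=0)

memories = np.stack([encode(s) for s in story])   # one memory slot per sentence
query = encode(question)

# Attention over memory slots: softmax of dot-product similarities.
scores = memories @ query
weights = np.exp(scores - scores.max())
weights /= weights.sum()

best = int(weights.argmax())
print(f"most attended sentence: '{story[best]}' (weight {weights[best]:.2f})")
```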

Elsewhere, another tech giant, Microsoft, is working on state-of-the-art visual assistive technology as part of its ongoing Seeing AI research initiative. The project, described in detail in a video released during last month’s Build conference in San Francisco, is fusing computer vision with natural language processing to describe a person’s immediate surroundings – as well as “read text, answer questions and even identify emotions on people’s faces”. Although it has not yet settled on a release date, the company sees Seeing AI eventually being used as a mobile phone app or even accessed via smart glasses from Pivothead.
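
As a rough illustration of the “read text” part of such a pipeline, the sketch below chains an off-the-shelf OCR engine with a speech synthesiser. Microsoft’s own models are not public, so the open-source Tesseract engine (via pytesseract) and the pyttsx3 speech library are stand-ins here, and the image path is hypothetical.

```python
# A minimal read-text-aloud sketch: extract printed text from a photo with
# Tesseract OCR, then speak it with a local text-to-speech engine.
import pytesseract
import pyttsx3
from PIL import Image

def read_aloud(image_path: str) -> None:
    """Extract any printed text from a photo and speak it."""
    text = pytesseract.image_to_string(Image.open(image_path)).strip()
    engine = pyttsx3.init()
    engine.say(text if text else "I could not find any text in this image")
    engine.runAndWait()

read_aloud("menu.jpg")  # e.g. a photo of a restaurant menu (hypothetical file)
```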

Neural Networks

Another early front runner is Aipoly, an iPhone app that employs artificial intelligence to identify items in real time and help blind and visually impaired people recognise the physical world through their smartphone. Starting with a “lightweight” app that can recognise 1,000 items, the company, founded at Singularity University at the NASA Ames Research Center in Silicon Valley, is now on track to release a version that can recognise 5,000 items.

Via the app, users can build a mental picture of a new area or room by scanning it and listening to a voice describe the objects around them. They can also use it to find light switches and plugs or to locate facilities in public bathrooms, and children with vision impairment have used it to learn to identify objects without the assistance of a supervisor.
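
Aipoly’s Teradeep-trained model is not publicly available, but the recognise-then-speak loop it describes can be approximated with off-the-shelf components. The sketch below uses a pretrained MobileNetV2 from torchvision as a stand-in classifier and pyttsx3 to voice the label; the frame path is hypothetical.

```python
# A hedged sketch of an Aipoly-style recognition loop: classify a camera frame
# with a pretrained convolutional network and speak the top label aloud.
import torch
import pyttsx3
from PIL import Image
from torchvision import models
from torchvision.models import MobileNet_V2_Weights

weights = MobileNet_V2_Weights.DEFAULT
model = models.mobilenet_v2(weights=weights).eval()
preprocess = weights.transforms()          # resizing/normalisation for this model
labels = weights.meta["categories"]        # the 1,000 ImageNet class names

def describe(frame_path: str) -> str:
    """Return the most likely label for a single camera frame."""
    frame = Image.open(frame_path).convert("RGB")
    batch = preprocess(frame).unsqueeze(0)  # shape (1, 3, H, W)
    with torch.no_grad():
        probs = model(batch).softmax(dim=1)[0]
    return labels[int(probs.argmax())]

label = describe("frame.jpg")              # hypothetical frame grabbed from the camera
engine = pyttsx3.init()
engine.say(f"I can see a {label}")
engine.runAndWait()
```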

“With time, Aipoly will be able to identify many more items and, in the future, also produce full sentences that outline the position and state of items. We are also considering integrating it into eyewear,” says Alberto Rizzoli, co-founder at Aipoly.

According to Rizzoli, the technology is based on a convolutional neural network trained with Teradeep’s deep-learning software on a dataset of 10 million images. A neural network is an architecture of processes inspired by the human brain and, in this case, the animal visual cortex, allowing computers to recognise images in ways loosely modelled on human vision. Neural networks have been under the spotlight for the past five years and have recently been used to produce art in the style of famous painters, to “imagine” and generate faces, and even to outperform humans at recognising simple sketches.
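
To make the idea of a convolutional network concrete, the toy PyTorch definition below stacks a couple of convolution and pooling layers in front of a linear classifier. It is a minimal illustration only, orders of magnitude smaller than the network Rizzoli describes.

```python
# A tiny convolutional network: stacked convolution + pooling layers learn
# visual features, and a final linear layer maps them to class scores.
import torch
from torch import nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # detect simple edges/colours
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), # combine them into larger parts
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                     # one value per feature map
        )
        self.classifier = nn.Linear(32, num_classes)     # feature scores -> class scores

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

scores = TinyConvNet()(torch.randn(1, 3, 224, 224))      # a random "image"
print(scores.shape)  # torch.Size([1, 1000])
```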

“Thanks to neural networks, Aipoly can understand the concept of objects, such as chairs, and identify all types of chairs without having to see them first. This breakthrough technology takes us one step closer to finding a replacement for guide dogs or assistants for people with visual impairment,” says Rizzoli.

“In a few years it might out-perform humans, who knows?”
