
Global Accessibility Awareness Day

Well, I was asked the other day if I could think of a relevant quote for Global Accessibility Awareness Day, which falls on May the 16th this year. I immediately thought of the following, which I would like to share, as I think it is a statement of real significance for people losing their eyesight who now have to rely more on technology for everyday tasks:

“Artificial Intelligence revolutionises image description for the visually impaired.”

When you think about it, the statement has been true for quite some time now: we have been using artificial intelligence for a considerable while without really thinking of it as artificial intelligence. In many ways, we have been forerunners, at the head of those using it on a regular basis.

Think back to the days of the KNFB Reader on a Nokia phone, or the brilliant image descriptions provided by TapTap See. These were quickly followed by the personal digital assistants “Siri”, “Okay Google” and of course “Alexa”, which were certainly beneficial to us, offering voice input and audio output. Then came the apps that many of us now use on a daily basis, such as “Seeing AI”, “Envision”, “Lookout” and “Be My AI”. The world is beginning to get a little friendlier for identifying the things we cannot see.

But artificial intelligence is much more than a vehicle for providing image descriptions: it can now convert text to speech and speech to text, as well as images to text and text to images. The discussion about AI could take us down numerous rabbit holes, as there is a lot more that could be said, but I want to focus on the potential benefits for visually impaired people.

As many of you might know by now, two of the foundational principles of Outlookers are cost-effective solutions and “try it before you buy it”. We want affordable technology that people can actually use, and we want people, wherever possible, to try it out before they buy it, so that it is something they can benefit from. Many of the solutions designed specifically for us are expensive and out of the reach of many, whereas things that are inclusively designed for the majority are less expensive and can often be afforded. The temptation would be to give some examples now, but I have already done that in a previous blog post, and I think my reasoning will become more apparent in what follows.

In many ways, the two biggest issues when losing your eyesight, depending on how much sight you have lost, are identifying what you cannot see and getting around independently without sighted assistance. It is more than just reading text or having the mobility skills to travel on your own. It is estimated that between 70 and 80% of communication is visual, which puts visually impaired people at a significant disadvantage. Not everything visual can be translated easily into audio, so we may have to rely on other senses too, but artificial intelligence may be the next major step in helping us gain the information we require. There have been some significant hardware and software developments recently that, although still in beta testing, show excellent signs of how they could benefit us once they have matured and become more reliable.

For a while now, we have had some brilliant specially designed products that have led the way in reading text and describing images. The Envision AI Glasses and the OrCam suite of products come to mind, but not everyone who could benefit from them is in a position to purchase them. The ARX headset, which uses bone-conducting headphones linked to an Android phone with a camera for taking photos and videos, is a more affordable alternative, and now that its second iteration links to Seeing AI and NaviLens, it offers a solution that not only reads text but also provides scene and image description on the move.

Two other important products, currently in beta testing in the US, are the Rabbit R1 and the Humane AI Pin. Without giving a full description of either, their significance is that they offer similar potential to use a camera for photos and videos on the move. They are rather like having a smart speaker to wear or carry in your pocket that you can ask questions of, using artificial intelligence that could help us with image and scene description and much more besides. Reviews are currently mixed and not too favourable, but it is the potential they might have for our use in the future that matters. They are not specially designed products, nor could one say that they have been inclusively designed, but both show promising signs of becoming valuable cost-effective solutions for getting around some of the daily barriers we face.

Wearable solutions, whether glasses or a pin, that can use artificial intelligence on the move could potentially be the next major leap forward for visually impaired people since the development of the smartphone some 15 years ago. I have always wanted a pair of sunglasses with a camera and speakers that could link to the apps I use regularly on my phone. This long-held wish came to fruition, of course, with the Envision AI Glasses, which use the Google Glass hardware and are like having a miniature computer on the right arm of your sunglasses. They link solely to the Envision app on your phone and are becoming quite remarkable in what they can do. However, because of their design, I could not see myself wearing them out and about on the street: they stand out as something quite special, and I would fear that one quick swipe across my face could see them disappear without my knowing where they had gone or who had taken them. As a specialist product, they are also quite expensive and follow a subscription model, which would make it difficult for me to justify their purchase.

A similar, much more affordable product called the Celeste Glasses is being developed in Canada. These also follow a subscription model, but after an initial payment of CA$100 and CA$50 per month thereafter, the glasses are yours for as long as you keep using them.

Another solution being beta-tested in the US and Canada uses a pair of Ray-Ban Smart Glasses linked to Meta AI. After purchasing the glasses, you can take pictures and short videos, and they can provide short image descriptions and read text. You can ask the AI further questions, much as with the Envision and Celeste Glasses, and as long as you have a good connection to your phone, you can use them quite effectively while out and about. Demonstrations of these glasses by visually impaired people in the US highlight how they can already be used, and I have committed myself to this path even though the software is not yet available in the UK. I still wish, however, for the flexibility to use the glasses with other apps on my phone, as this could offer greater value in the future.

If wearable AI technology can help us identify what we cannot see, then perhaps it might also help us identify where we want to go and assist our mobility in getting there. GPS software is now pretty good at telling us about points of interest around us and plotting routes from A to B, but it has always been the last 10 metres of the journey, locating the door or the house number so we arrive safely, that has been the problem. Perhaps the camera on a wearable could help without our having to get the phone out? Perhaps we might also be able to follow LiDAR-mapped indoor and outdoor routes safely without having to walk around with a phone held out in front of us, taking pictures at regular intervals.

The ways in which we might benefit from artificial intelligence are perhaps only in their infancy, and its potential value is much greater than my initial thought of image description.

David Quarmby (Chair of Trustees)
