One aspect of modern technological life that might help us keep some faith in humanity is the comprehensive assistive technology built into, or free to download for, mobile computing devices. Accessibility features, as they are loosely called, are a range of tools designed to support non-standard users of the technology. If you can’t see the screen very well you can magnify text and icons (1) or use high contrast (2). If you can’t see the screen at all you can have the content read back to you using a screen reader (3). There are options to support touch input (4, 5) and options to use devices hands-free (6). Finally, there are also some supports for deaf and hard of hearing (HoH) people, like the ability to switch to mono audio or to use visual or haptic alternatives to audio-based information.
In their mobile operating system, iOS, Apple do accessibility REALLY well and this is reflected in the numbers. In the 2018 WebAIM Survey of Low Vision Users there were over three times as many iOS users as Android users. That is almost the exact reverse of the general population (3 to 1 in favour of Android). For those with motor difficulties the gap was less significant, but iOS was still favoured.
So what are Apple doing right? Well obviously, first and foremost, the credit has to go to their developers and designers for producing such innovative and well implemented tools. But Google and other Android developers are also producing some great AT, often highlighting some noticeable gaps in iOS accessibility. Voice Access, EVA Facial Mouse and basic pointing device support are some examples, although these are gaps that will soon be filled if reports of features coming in iOS 13 are to be believed.
Rather than being just about the tools, it is as much, if not more, about awareness of those tools: where to find them and how they work. On every Apple mobile device you go to Settings > General > Accessibility and you will find Vision (1, 2, 3), Interaction (4, 5, 6) and Hearing settings. I’m deliberately not naming these settings here so that you can play a little game with yourself and see if you know what they are. I suspect most readers of this blog will get 6 from 6, which should help make my point. You can check your answers at the bottom of the post 🙂

This was always the problem with Android devices. Where Apple iOS accessibility is like a tool belt, Android accessibility is like a big bag. There is probably more in there but you have to find it first. This isn’t Google’s fault; they make great accessibility features. It’s more a result of the open nature of Android. Apple make their own hardware and iOS is designed specifically for that hardware. It’s much more locked down. Android is an open operating system and as such it depends on the hardware manufacturer how accessibility is implemented. This has been slowly improving in recent years, but Google’s move to bundle all their accessibility features into the Android Accessibility Suite last year meant a huge leap forward in Android accessibility.
What’s in Android Accessibility Suite?
Accessibility Menu
Use this large on-screen menu to control gestures, hardware buttons, navigation, and more. It is a similar idea to AssistiveTouch on iOS. If you are a Samsung Android user it is similar to (but, in my opinion, not as good as) the Assistant Menu already built in.
Select to Speak
Select something on your screen or point your camera at an image to hear text spoken. This is a great feature for people with low vision or a literacy difficulty. It will read the text on screen when required without being always on like a screen reader. A similar feature was built into Samsung devices before inexplicably disappearing with the last Android update. The “point your camera at an image to hear text spoken” claim had me intrigued. Optical Character Recognition like that found in Office Lens or SeeingAI, built into the regular camera, could be extremely useful. Unfortunately I have been unable to get this feature to work on my Samsung Galaxy A8. Even when selecting a headline in a newspaper I’m told “no text found at that location”.
Switch Access
Interact with your Android device using one or more switches or a keyboard instead of the touch screen. Switch Access on Android has always been the poor cousin of Switch Control on iOS but is improving all the time.
TalkBack Screen Reader
Get spoken, audible, and vibration feedback as you use your device. Google’s mobile screen reader has been around for a while and, like Switch Access, it is apparently improving, but I’ve yet to meet anybody who actually uses it full time.
So to summarise: as well as adding features that may have been missing on your particular “flavour” of Android, this suite standardises the accessibility experience and makes it more visible. Another exciting aspect of these features being bundled in this way is their availability for media boxes. Android is a hugely popular OS for TV and entertainment, but what is true of mobile device manufacturers is doubly so of Android box manufacturers, where it is still very much the Wild West. If you are in the market for an Android box and accessibility is important to you, make sure it’s running Android version 6 or later so you can install this suite and take advantage of these features.
Over the course of history there have always been single-named women who have influenced our lives and culture: Cleopatra, Maggie, Madonna, and now it’s the turn of Alexa! I have been curious and intrigued by the benefits of technological assistants with regard to my disability, so I was very excited when Enable Ireland gave me an opportunity to try out Alexa in the form of the Amazon Echo.
How easy is it to get the Echo up and running?
The initial setup of the Amazon Echo is very simple to carry out. You download the Amazon Alexa app to your smartphone (get used to downloading apps on your phone), the app searches for the device and connects to it through the device’s own Wi-Fi signal, you then connect the device to your home broadband, and hey presto, within a few minutes your Amazon Echo is up and running.
What can Alexa do on its own?
The initial benefits of the Amazon Echo for a person with a disability are very limited. You can ask Alexa what the weather will be like, what time it is, to set reminders, and some other quirky, less useful questions: “Alexa, tell me a joke”, “What’s the capital of Finland?”, or more randomly “Alexa, beatbox for me”.
Through the Alexa app you can enable other skills to assist you in your daily activities. If you are into music you can add your Spotify profile to Alexa; this is very simple to do if you can use a smartphone. Alexa will then play your playlists through its impressive speakers. This is very handy, even for someone who is not into music much, as it means I don’t need to listen to music through my basic phone speakers, nor do I have to call someone to change a CD in my stereo. It is great for podcasts as well, though as Alexa sometimes has difficulty understanding people you might be better off setting up a playlist through your Spotify app first if any of your favourite podcasts have quirky names like my favourite Arsenal podcast, Arsecast by Arseblog!
If you have a vision impairment, have difficulty holding a book, or you just like audiobooks, you can quickly add your Audible account too, tilt back in your chair and listen to your favourite book or a new release. Alexa can also update you with the latest news, traffic, and weather for your area.
If you have trouble with your memory because of a head injury, or you just have a head like a sieve as I do, the reminders and timers could be very useful. I normally add reminders to my phone as I can’t write them down, but just calling them out immediately is useful, as sometimes I go to add them to my phone and get distracted by Twitter and the like. The timers are useful if you’re cooking and the chicken needs just five minutes more.
What can Alexa do using IoT – the Internet of Things?
For someone with a physical disability this is where it really sparked my interest. I struggle with some aspects of technology and with physically controlling my environment, so I thought I would benefit from Alexa and some smart home devices.

Smart WeMo Plug
Firstly I decided to set up the lamp in my sitting room. In order to use Alexa to switch on your light you either need a smart plug or you need smart bulbs and a Wi-Fi hub. Enable Ireland had also provided me with a WeMo smart plug in this instance. The setup for the WeMo smart plug was very similar to the initial setup of the Amazon Echo: download the app, connect to the device’s own Wi-Fi, and connect the device to your home broadband.
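For the technically curious, under the hood the app is talking to the plug over the local network. WeMo plugs are widely reported to accept a UPnP/SOAP command to flip their state; the sketch below only builds such a request rather than sending it, and the port, path and service name are assumptions based on publicly documented behaviour, not an official Belkin API.

```python
# Hedged sketch: constructing (not sending) the local-network SOAP request
# commonly used to switch a WeMo smart plug on or off. Endpoint details
# are assumptions, not an official API.

SOAP_BODY = """<?xml version="1.0" encoding="utf-8"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
            s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <s:Body>
    <u:SetBinaryState xmlns:u="urn:Belkin:service:basicevent:1">
      <BinaryState>{state}</BinaryState>
    </u:SetBinaryState>
  </s:Body>
</s:Envelope>"""

def build_wemo_request(ip: str, turn_on: bool):
    """Return (url, headers, body) for switching a WeMo plug on or off."""
    url = f"http://{ip}:49153/upnp/control/basicevent1"
    headers = {
        "Content-Type": 'text/xml; charset="utf-8"',
        "SOAPACTION": '"urn:Belkin:service:basicevent:1#SetBinaryState"',
    }
    body = SOAP_BODY.format(state=1 if turn_on else 0)
    return url, headers, body
```

A home automation hub (or the Alexa skill) essentially does this for you every time you say “Alexa, turn on the lamp”.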
Once you have that done you can control the lamp directly from your smartphone if you want; to connect it to Alexa you need to go back to the Alexa app and pair Alexa with the WeMo smart plug from there.
Overall it is a very simple system and process, and once you have it up and running all you have to do is say “Alexa, turn on the lamp”. This was a complete success, and over the time I had the devices this is the one that proved most simple to use and most consistent. It was lovely if I was on my own for a little while coming toward evening: I could give that simple command and “Let there be light!”
The other devices I had to connect to the Echo were related to the TV. I use an Amazon Fire Stick to play games on my TV and also to watch Netflix. I knew from watching YouTube videos that you could pair your Amazon Echo with your Fire Stick and use Alexa to open Netflix and play your movies and shows.
Unfortunately this was not so easy to carry out. It seemed simple at first: get your Alexa device to scan your Wi-Fi for compatible devices and, when you see the Fire Stick, click connect. This is where I ran into some problems. In order to get Alexa to carry out these procedures I had to enable its TV skills through the app. I had done something similar to set up my Spotify account, so I wasn’t too worried at first. Frustratingly, when I went into the app to enable that TV skill the screen went blank and gave me no options to enable it. After numerous attempts and searches on the internet for a solution, I eventually contacted Amazon’s online support and, having gone through three advisors, found the solution: enabling it through my laptop and my Amazon account on the desktop site. Phew!
The result of all that is that I can come into the sitting room in the morning, with the TV turned off, and ask Alexa to open Netflix. If you know the name of the movie or show you want to watch, you can ask Alexa to open it directly. You can play, pause, fast forward or rewind whatever you are watching. This has been very helpful for me, as the remote for my Fire Stick is tiny and the buttons are incredibly difficult to press. If you are a movie buff and have difficulties using small remotes then this solution is probably worth all the hassle it took to set it up in the first place!
In the package from Enable Ireland there was also a Logitech Harmony Hub. At first, I had no idea what it was; I had never heard of it before. A bit of Googling revealed that it is a universal remote control. A bit of YouTubing revealed that it could be paired with Alexa to turn on and control a whole host of electronic devices including your TV, stereo system, or Sky box.
This is a complex setup. You set up the Harmony Hub much the same way as you do the other devices, so again that means downloading another app to connect it to your Wi-Fi (I hope you have enough space on your smartphone!). Once it is set up and ready to go, you need to use the Alexa app to enable the Harmony Hub skill so Alexa can communicate with the Harmony Hub. Now use the Harmony app to scan for smart devices that may be on your Wi-Fi already, like a smart TV. If you have something that is not smart, like my Sky box, you simply search in the app for the product and add it to your list of devices. Right, now that you have your devices listed and the Hub and Alexa can talk to one another, what can you tell them to do?
Using the Harmony app you can set up a range of “activities”. These are relatively easy to set up as you follow a step-by-step process through the app. Quite quickly I had it set up so that I could tell Alexa to turn on the TV, and it would turn on the TV and immediately set it to the Sky TV input. I also set it up so I could increase and decrease the volume of the TV and change the ordinary terrestrial channels. I have seen that you can change channels on your Sky box and set “favourite channels” to tune to quickly but, frustratingly, while I can do that through the Harmony app on my phone, I haven’t been able to do it using Alexa despite numerous and persistent attempts. Apparently it is possible if you set an “activity” for each individual channel, but life is too short!
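Conceptually, an “activity” is nothing more than a named sequence of device commands that the hub fires in order when triggered. The sketch below is not the Harmony API; the device and command names are hypothetical, purely to illustrate why “one activity per channel” gets tedious fast.

```python
# Conceptual sketch of a universal-remote "activity": a named, ordered list
# of (device, command) pairs. Device and command names are made up.

ACTIVITIES = {
    "Watch Sky": [
        ("TV", "PowerOn"),
        ("TV", "InputHDMI1"),
        ("Sky Box", "PowerOn"),
    ],
    # One activity per favourite channel quickly becomes unmanageable:
    "BBC One": [("Sky Box", "Channel101")],
}

def run_activity(name: str, send=print):
    """Fire each (device, command) pair in the named activity, in order.

    A real hub would translate each step into an IR or IP command;
    here `send` just receives the formatted step."""
    for device, command in ACTIVITIES.get(name, []):
        send(f"{device}: {command}")

run_activity("Watch Sky")
```

Saying “Alexa, turn on the TV” then simply maps the spoken phrase to the activity name and runs the whole sequence.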
If you are technically proficient enough and have a big enough budget, there are a whole host of other devices you could use with Alexa to smarten up your home, whether to control your heating or even to unlock your door!
Are there Privacy Issues?
There are some concerns about privacy and Alexa. Some of the stories surrounding this issue have, I’m sure, been exaggerated for headlines, but there is a basis to some of the concern too, with Amazon admitting that staff listen to people’s interactions with Alexa (I think they’ll get a laugh from some of my frustrated interactions, where Alexa was called everything under the sun while I tried in vain to control the Sky box).
I know from my experience with Alexa that there have been some strange happenings. During conversations in the same room as the Alexa, the blue light that indicates Alexa is listening has come on. On another occasion Alexa piped up with search results that were not asked for in the middle of a conversation. Nothing too sinister I’m sure, but something I’m personally not too comfortable with.
It’s up to you whether you’re willing to give up that sense of personal privacy in exchange for the benefits Alexa provides.
I was very excited to try out the Amazon Echo and Alexa. I felt this was my opportunity to finally make up my mind on whether to purchase one or not, a decision I had been debating for some time.
Alexa promises so much to help me with my physical disability, and overall in this aspect it did live up to expectation. It was frustrating that I couldn’t manage to set it up to operate my Sky box, but I was able to set it up to use most of the functions on my TV, and Alexa in conjunction with the WeMo plug gave the most satisfying and consistent function of switching my sitting room lamp on and off. If I were to purchase an Echo I would consider investing further in other devices that could do as the WeMo plug did.
The other aspects of the Echo were less beneficial to me, as they didn’t involve improving my access to my physical environment. That does not take away from the fact that they could be hugely beneficial for someone with a different disability, such as a sensory disability: reminders, timers, your Spotify, and your audiobooks through Alexa would simplify so many parts of a person’s life.
For someone with a high-level disability, or someone who has difficulty using a smartphone, the setup process of the Echo itself may be a little complex. The setup process for some of the “activities” on the Harmony Hub would take the most seasoned of smartphone users to the point where they just give up (i.e. me 🙂).
The initial cost of the Amazon Echo is very affordable. However, if someone with a disability wishes to use the Echo and Alexa to their full potential to make their life more independent, then they will need to spend a lot more. A quick Google suggested that a Wi-Fi plug similar to the WeMo plug costs about €22, while a Harmony Hub remote is available for approximately €120. So if you’re hoping to live in a completely smart home it’s going to be difficult if your sole source of income is your Disability Allowance.
All that being said, that decision I have been debating for some time, have I made it? Well, in a sense I have. I am fortunate to be able to use my mobile phone without much difficulty, so in the short term I think I will get a Harmony Hub, which will allow me to carry out most of what Alexa has been doing for me on this trial, but through my phone and without the worry of Amazon employees listening in on me. In the medium to long term I’m sure I’ll revisit Alexa or even the Google equivalent!
Dragon NS provides a means of voice-to-text production not only in word processing applications but also to control your computer operations. This, for me, is the main advantage over other voice-to-text programmes, which are often “in app”, such as the microphone in the Pages app.
Dragon NS versus other voice-to-text software – my take on it!
Dragon NaturallySpeaking for the PC is much more powerful than the built-in voice recognition software on Android or on the iPhone (Siri), i.e. fewer inaccuracies and greater time efficiency. It can dramatically cut down the time it takes to create emails, Word documents and other correspondence on your PC.
It Learns. Dragon NaturallySpeaking actually improves through use. It learns about how you speak, how you sound, what words you use and it creates a database called a voice profile. This voice profile matures over time and allows Dragon NaturallySpeaking to become very accurate with regular use.
Dragon NaturallySpeaking on the PC has “regional accent modelling”. This makes the program far more accurate than basic mobile device speech recognition which uses a generic accent model.
Dragon NaturallySpeaking adapts to your specific vocabulary. Siri and the Google Android speech recognition application do not do this; they run off a generic, limited vocabulary.
Amount Processed. Free speech software on your phone can only process 30-second chunks of speech. Dragon speech recognition on the PC is continuous for as long as you can talk and doesn’t need a continuous internet connection.
The not so good
Good flow of speech is important, even if just for short passages. Dragon NS writes everything you say, even filler sounds such as “mmmm” and “eh”. If a user tends to use these fillers in speech, it will type them. Continuously deleting them can be time consuming and frustrating, and training oneself not to use them can be very tricky.
The user needs to be very cognitively able to command the system with their voice, planning out the actions and remembering specific commands.
Fantastic software for the right client, especially if for any reason direct access is not an option. Even if a form of direct access is an option for the client, Dragon NS is still a nice option for long passages of text production. For the wrong client, this software would be more of a hindrance and a frustration than a help.
Motorised blinds and curtains have been around for many years, providing easy control of blinds and curtain rails. Control of these motorised devices was usually via a radio remote control, which made such devices of particular interest to people with mobility issues. The Internet of Things (IoT) has now extended internet connectivity to these everyday objects. Embedded with technology, these devices can communicate and interact over the internet, and they can be remotely monitored and controlled.
An example of this is Somfy motorization systems. These systems consist of a range of motorised blinds, curtains and roller shutters. The Somfy myLink™ is a device that turns your smartphone or tablet into a remote control for motorized products featuring Somfy Radio Technology. For voice control, Alexa now works with myLink!
Purchasing new blinds or curtain rails could work out to be quite expensive, and possibly wasteful if you already have good blinds in place. However, there are a number of options available to retrofit existing blinds that are also worth considering. They are able to transform your standard home blinds into smart electric blinds, and do so at an affordable price. Like the Somfy blinds, they also provide a way to raise, lower and choose an intermediate position of the blind.
The Brunt Blind Engine can motorize your existing blinds and connect to your smartphone, allowing remote control and scheduling of your blinds anywhere, anytime. It is designed to be compatible with most roll-type blinds available on the market, allowing blinds of all different shapes and sizes to be successfully fitted. The Blind Engine comes with two different gears designed to accommodate string cords and ball chains. With the Brunt app, you can raise and lower multiple blinds at the same time (there is no extra monthly charge for the Brunt application). You can use the Brunt Blind Engine with various voice recognition speakers. Cost online: $129.
AXIS Gear is an affordable and easy way to motorize your window shades. Gear is a smart device that lets you easily control and schedule when your shades open and close. Axis says the installation and setup of Gear takes minutes, and guarantees it will fit your shades or your money back.
Included are a solar panel and backup AA batteries. The App allows the creation of schedules and smart home integration.
SOMA Smart Shades are designed to fit your existing shades and curtains with a continuous cord. Continuous-cord shades have one looped string or beaded chain that allows you to raise and lower the bottom of the shades. Attach the device to your shades or blinds with a beaded chain or string, download the mobile app, follow the instructions and you’re ready to go. Automated schedules can be created and it is possible to control multiple windows from one mobile app. Both Android and iOS are supported. Smart Shades are solar powered with a built-in lithium battery. By installing the SOMA Connect, you can control your shades with your voice, as it works with Amazon Alexa and Apple HomeKit.
Microsoft has been making huge strides in the realm of accessibility with each successive update to Windows and has invested in updates to improve the user experience for people with disabilities. The improvements in its Ease of Access features include eye tracking, Narrator, low vision features, and reading and writing improvements.
Eye Control delivers exciting new updates and tools. For users who can’t use a mouse or keyboard to control their computer, Eye Control presents a convenient entry point to a Windows computer using eye-tracking technology. Having access to your computer via Eye Control gives individuals a way to communicate, the ability to stay in the workforce, and so much more!
What began as a hack project during a One Week Hackathon, has become a product concept for the Windows team. Microsoft has introduced Eye Control, which empowers people with disabilities to use a compatible eye tracker, such as a Tobii Eye Tracker, to operate an on-screen mouse, keyboard, and text-to-speech in Windows 10 using only their eyes.
Microsoft Learning Tools
Microsoft Learning Tools, the new Learning Tools capabilities within Microsoft Edge, are a set of features designed to make it easier for people with learning differences like dyslexia to read. In this update, a user can now simultaneously highlight and listen to text in web pages and PDF documents to aid reading and increase focus.
Now with the addition of the Immersive Reader functionality of Learning Tools you can photograph a document, export it to immersive reader and immediately use the tools to support your understanding of the text.
Narrator will include the ability to use artificial intelligence to generate descriptions for images that lack alternative text. For websites or apps that don’t have alt-text built in, this feature will provide descriptions of an image. Narrator will now also include the ability to send commands from a keyboard, touch or braille display and get feedback about what the command does without invoking the command. Also, there will be some Braille improvements – Narrator users can type and read using different braille translations. Users can now perform braille input for application shortcuts and modifier keys.
Desktop Magnifier is also getting an option to smooth fonts and images, along with mouse wheel scrolling to zoom in and out. It is now possible to use Magnifier with Narrator, so you can zoom in on text and have it read aloud.
This feature already allowed people to speak into their microphone and have Windows Speech Recognition convert their speech into text that appears on the screen. In the Windows 10 Update, a person can now use dictation to convert spoken words into text anywhere on their PC.
To start dictating, select a text field and press the Windows logo key + H to open the dictation toolbar. Then say whatever’s on your mind.
As well as dictating text, you can also use voice commands to do basic editing or to input punctuation. (English only)
If it’s hard to see what’s on the screen, you can apply a color filter. Color filters change the color palette on the screen and can help you distinguish between things that differ only by color.
To change your color filter, select Start > Settings > Ease of Access > Color & high contrast. Under Choose a filter, select a color filter from the menu. Try each filter to see which one suits you best.
You know a particular technology is fast approaching mainstream when every manufacturer seems to be developing add-ons to make their products work with it.
From Samsung’s SmartThings to August Smart Home Locks, 3rd-party developed skills are voice experiences that add to the capabilities of any Alexa-enabled device (such as the Echo). For example “Alexa, set the Living Room lights to warm white” or “Alexa, lock the front door.” These skills are available for free download. Skills are continuously being added to increase the capabilities available to the user.
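For developers, a skill is essentially a web endpoint that receives a JSON request naming an intent and returns JSON telling Alexa what to say (and what device action to trigger behind the scenes). The sketch below shows the general shape; the intent name `TurnOnLampIntent` is made up for illustration, and a real skill would be registered through the Alexa Skills Kit.

```python
# Hedged sketch of a custom Alexa skill handler. Alexa POSTs a JSON request
# naming an intent; the handler returns JSON in the standard skill response
# shape (version / response / outputSpeech). "TurnOnLampIntent" is hypothetical.

def handle_alexa_request(event: dict) -> dict:
    """Map an incoming Alexa intent to a spoken response."""
    intent = event.get("request", {}).get("intent", {}).get("name", "")
    if intent == "TurnOnLampIntent":
        speech = "Okay, turning on the lamp."
        # ...a real skill would call out to the smart plug here...
    else:
        speech = "Sorry, I don't know how to do that yet."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```

This request/response exchange is all that "enabling a skill" wires up: the spoken phrase becomes an intent name, and the skill's reply becomes Alexa's answer.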
The Amazon Echo is a smart speaker developed by Amazon. It is a tall, cylindrical speaker with a built-in microphone. The device connects to the voice-controlled intelligent personal assistant service Alexa, which answers to the name “Alexa”. The device is capable of voice interaction, music playback, making to-do lists, setting alarms, streaming podcasts, playing audiobooks, and providing weather, traffic, and other real-time information.
However, it can also control many smart devices using itself as a home automation hub.
The videos below give an example of using your voice with smart home products. https://youtu.be/V7WfxI3ecVI https://youtu.be/pH8fg1noIj0
The good: As far as price goes, the Amazon Echo comes in various forms; the Amazon Echo Dot costs £44.99, which seems affordable. All the Amazon skills that add to the capabilities of any Alexa-enabled device are free.
The not so good: It requires an internet connection to work. If your internet goes down, then so does your ability to control the devices around you.
The verdict: A good way to dip your toe in the Internet of Things waters, with more capabilities on the way.
Today, May 18th, is Global Accessibility Awareness Day, and to mark the occasion Apple have produced a series of 7 videos (also available with audio description) highlighting how their products are being used in innovative ways by people with disabilities. All the videos are available in a playlist here and I guarantee you, if you haven’t seen them and you are interested in accessibility and AT, it’ll be the best 15 minutes you have spent today! Okay, the cynical among you will point out this is self-promotion by Apple, a marketing exercise. Certainly on one level of course it is; they are a company and, like any company, their very existence depends on generating profit for their shareholders. These videos promote more than Apple however: they promote independence, creativity and inclusion through technology. Viewed in this light, these videos will illustrate to people with disabilities how far technology has moved on in recent years and make them aware of the potential benefits to their own lives. Hopefully the knock-on effect of this increased awareness will be increased demand. Demand these technologies people, it’s your right!
As far as a favourite video from this series goes, everyone will have their own. In terms of the technology on show, to me Todd “The Quadfather” below was possibly the most interesting.
This video showcases Apple’s HomeKit range of associated products and how they can be integrated with Siri.
My overall favourite video however is Patrick, musician, DJ and cooking enthusiast. Patrick’s video is an ode to independence and creativity. The technologies he illustrates are Logic Pro (digital audio workstation software) with VoiceOver (Apple’s inbuilt screen reader) and the object recognizer app TapTapSee which, although it has been around for several years now, is still an amazing use of technology. It’s Patrick’s personality that makes the video though; this guy is going places. I wouldn’t be surprised if he had his own prime time TV show this time next year.
There is of course some crossover between the different AT highlights of 2016 I have included here. An overall theme running through all the highlights this year is the mainstreaming of AT. Apple, Google and Microsoft have all made significant progress in the areas previously mentioned: natural language understanding and smart homes. This has led to easier access to computing devices and, through them, the ability to automate and remotely control devices and services that assist us with daily living tasks around the house. However, these developments are aimed at the mainstream market, with advantages to AT users being a welcome additional benefit. What I want to look at here are the features they are including in their mainstream products specifically aimed at people with disabilities, with the goal of making their products more inclusive. Apple have always been strong in this area and have led the way now for the last five years. 2016 saw them continue this fine work with new features such as Dwell within macOS and Touch Accommodations in iOS 10, as well as many other refinements of already existing features. Along with Siri, Apple have also brought Switch Control to Apple TV, either using a dedicated Bluetooth switch or through a connected iOS device in a method they are calling Platform Switching. Platform Switching, which also came out this year with iOS 10, “allows you to use a single device to operate any other devices you have synced with your iCloud account. So you can control your Mac directly from your iPhone or iPad, without having to set up your switches on each new device” (the devices need to be on the same Wi-Fi network). The video below from Apple really encapsulates how far they have come in this area and how important this approach is.
Not to be outdone, Microsoft bookended 2016 with some great features in the area of literacy support, an area they had perhaps neglected for a while. They more than made up for this last January with the announcement of Learning Tools for OneNote. I’m not going to go into detail about what Learning Tools offers as I have covered it in a previous post. All I’ll say is that it is free, it works with OneNote (also free and a great note-taking and organisation support in its own right) and is potentially all many students would need by way of literacy support (obviously some students may need additional supports). Then in the fourth quarter of the year they updated their OCR app Office Lens for iOS to provide the Immersive Reader (text to speech) directly within the app.
Finally, Google, who would probably have the weakest record of the big 3 in terms of providing inbuilt accessibility features (to be fair, they have always followed a different approach which proved to be equally effective), really hit a home run with their Voice Access solution, which was made available for beta testing this year. Again, I have discussed this in a previous post here where you can read about it in more detail. Having tested it, I can confirm that it gives complete voice access to all Android device features as well as any third-party apps I tested. Using a combination of direct voice commands (Open Gmail, Swipe left, Go Home etc.) and a system of numbering buttons and links, even obscure apps can be operated. The idea of using numbers for navigation, while not new, is extremely appropriate in this case: numbers are easily recognised regardless of voice quality or regional accent. Providing alternative access and supports to mainstream operating systems is the cornerstone of recent advances in AT. As the previous video from Apple showed, access to smartphones or computers gives access to a vast range of services and activities. For example, inbuilt accessibility features like Apple’s Switch Control or Google’s Voice Access open up a range of mainstream smart home and security devices and services to people with alternative access needs, where before they would have had to spend a lot more for a specialist solution that would probably have been inferior.
Speech Recognition has been around a long time by technology standards; however, up until about 2010 most of that time was spent languishing in Gartner's wonderfully named "Trough of Disillusionment". This was partly because the technology hadn't matured enough, and people were frustrated and disappointed when it didn't live up to expectations, a common phenomenon identified by the previously mentioned Hype Cycle. There are a couple of reasons why Speech Recognition took so long to mature. It's a notoriously difficult technical feat, requiring sophisticated AI and significant processing power to achieve consistently accurate results. The advances in processing power were easy enough to predict thanks to Moore's Law. Progress in the area of AI was a different story entirely. Speech Recognition relies first on pattern recognition, but that only takes it so far. To improve the accuracy of speech recognition, improvements in the broader area of natural language processing were needed. Thanks to the availability of massive amounts of data via the World Wide Web, much of it coming from services like YouTube, we have seen significant advances in recent years. However, there is also a human aspect to the slow uptake of speech-driven user interfaces: people just weren't ready to talk to computers. 2016 is the year that started to change.
Siri (Apple), who was first on the scene and is now five years old and getting smarter all the time, came to macOS and Apple TV this year. Cortana (Microsoft) started on Windows Phone, moved to the desktop with Windows 10, made her way onto Xbox One, Android and iOS, and is soon to be embodied in all manner of devices, according to reports. Unlike Siri, Cortana is a much more sociable personal digital assistant, willing to work and play with anyone. By this I mean Microsoft have made it much easier for Cortana to interact with other apps and services, and will be launching the Cortana Skills Kit early next year. As we've seen in the past, it's this kind of openness and interoperability that takes technologies in directions not envisaged, and it often leads to adaptation and adoption as personal AT. If there were a personal digital assistant of the year award, however, Amazon Echo and Alexa would get it for 2016. Like Microsoft, Amazon have made their Alexa service easy for developers to interact with, and many manufacturers of smart home products have jumped at the opportunity. It is the glowing reviews from all quarters, however, that make the Amazon Echo stand out (from a self-proclaimed New Yorker Luddite to the geeks at CNET). Last but not least we have Google. What Google's personal digital assistant lacks in personality (no name?) it makes up for with stunning natural language capabilities and an eerie knack of knowing what you want before you do. Called Google Now on smartphones (or just the Google app? I'm confused!), similar functionality, without some of the context relevance, is available through Voice Search in Chrome. They also offer voice typing in Google Docs, which this year was much improved with the addition of a range of editing commands. There is also the new Voice Access feature for Android, currently in beta testing, but more on that later.
In the hotly contested area of the smart home, Google also have a direct competitor to Amazon's Echo in their Google Home smart speaker. Google are a strong player in this area; my only difficulty (and it is an actual difficulty) is saying "OK Google": rather than rolling off the tip of my tongue, it kind of catches at the back, requiring me to use muscles normally reserved for sucking polo mints. Even though more often than not I mangle this trigger phrase, it always works, and that's impressive. So who is missing? There is one organisation, conspicuous by their absence, with the resources in terms of money, user data and technology, who are already positioned in that "personal" space. Facebook would rival Google in the amount of data they have at their disposal from a decade of video, audio and text, the raw materials for natural language processing. If we add to this what Facebook knows about each of its users (what they like, their family, friends and relationships, and all the things they like, their calendar, history, interests…) you get more than a Personal Digital Assistant; maybe Omnipersonal Digital Assistant would be more accurate. The video below, which was only released today (21/12/16), is of course meant as a joke (there are any number of things I could add here, but I'll leave it to the Guardian). All things considered, however, it's only a matter of time before we see something coming out of Facebook in this area, and it will probably take things to the next level (just don't expect it to be funny).
What does this all mean for AT? At the most basic level, Speech Recognition provides an alternative to the keyboard/mouse/touchscreen method of accessing a computer or mobile device, and the more robust and reliable it is, the more efficiently it can be used. It is now a viable alternative, and this will make a massive difference to the section of our community who can use their voice but, for any number of reasons, cannot use other access methods. Language translation can be accurately automated, even in real time, like the translation feature Skype launched this year. At the very least this kind of technology could provide real-time subtitling, but the potential is even greater. It's not just voice access that is benefiting from these advances, however; Personal Digital Assistants can be interacted with using text too. Speech Recognition is only a part of the broader area of Natural Language Processing. Advances in this area lead directly to fewer clicks and less menu navigation. Microsoft have used this to great effect in their new "Tell me what you want to do" feature in their Office range. Rather than looking through help files or searching through menus, you just type what tool you are looking for, in your own words, and it serves it right up!
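The "fewer clicks" point can be illustrated in a few lines of Python. This is not Microsoft's implementation, just a sketch of the general idea using fuzzy string matching over a made-up list of tool names:

```python
import difflib

# A made-up sample of tool names; the real Office feature covers hundreds.
TOOLS = ["Insert Table", "Track Changes", "Mail Merge", "Word Count", "Page Layout"]

def tell_me(query, tools=TOOLS):
    """Return tools ranked by how closely their names match a free-text request."""
    lowered = {t.lower(): t for t in tools}
    matches = difflib.get_close_matches(query.lower(), list(lowered), n=3, cutoff=0.4)
    return [lowered[m] for m in matches]

print(tell_me("track changes"))  # best match listed first
```

A real system layers natural language understanding on top, so "make my edits visible to others" would also find Track Changes, but even simple matching like this removes a trip through the menus.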
Natural Language Processing will also provide faster and more accurate results for web searches, because there is a better understanding of the actual content rather than a reliance on keywords. In a similar way, we are seeing this technology provide increased literacy supports, as the computer can better understand what you mean from what you type. Large blocks of text can be summarised, and alternative phrasing can be suggested to increase text clarity. Again, the new Editor feature in Microsoft Word is made possible by this level of natural language understanding.
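As a rough illustration of how software can "summarise" text, here is a toy extractive summariser, a stand-in for the far more sophisticated statistical models behind real products like Word's Editor: it scores each sentence by how often its words appear in the whole text and keeps the highest-scoring ones.

```python
import re
from collections import Counter

def summarise(text, n=1):
    """Naive extractive summary: keep the n sentences whose words occur most often."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    # Word frequencies across the whole text; len > 3 is a crude stop-word filter.
    freq = Counter(w for w in re.findall(r"[a-z']+", text.lower()) if len(w) > 3)
    def score(s):
        return sum(freq[w] for w in re.findall(r"[a-z']+", s.lower()))
    top = sorted(sentences, key=score, reverse=True)[:n]
    return [s for s in sentences if s in top]  # keep original order

text = "Assistive technology helps people. Assistive technology is improving. Cats sleep."
print(summarise(text))  # keeps the sentence built from the most frequent words
```

Frequency counting captures none of the meaning, which is exactly why the recent advances in genuine language understanding matter so much.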
As we approach the end of 2016, it's an appropriate time to look back and take stock of the year from an AT perspective. A lot happened in 2016, not all of it good. Socially, humanity seems to have regressed over the past year. Maybe this short-term, inward-looking, protectionist sentiment had been brewing for longer, but 2016 brought the opportunity to express it politically; you know the rest. While society steps and looks back, technology continues to leap and bound forward, and 2016 has seen massive progress in many areas, particularly those associated with Artificial Intelligence (AI) and Smart Homes. This is the first in a series of posts examining some technology trends of 2016 and how they affect the field of Assistive Technology. The links will become active as the posts are added. If I'm missing something, please add it to the comments section.
So although 2016 is unlikely to be looked on kindly by future historians (you know why), it has been a great year for Assistive Technology, though perhaps one of promise rather than realisation. One major technology trend of 2016 missing from this series of posts is Virtual (or Augmented) Reality. While VR was everywhere this year, with products coming from Sony, Samsung, Oculus and Microsoft, its usefulness beyond gaming is only beginning to be explored (particularly within Education).
So what are the goals for next year? Harnessing some of these innovations in a way that makes them accessible and usable by people with disabilities at an affordable price. If in 2017 we can start putting some of this tech into the hands of those who stand to benefit most from it, then next year will be even better.