Accessibility Checker for Word Tutorial

The Accessibility Checker feature has been part of Microsoft Office for the last few iterations of the software package. It provides a fast and easy way to check whether the content you are producing is accessible to users of assistive technology. By making accessibility accessible, Microsoft have left no room for excuses like “I didn’t know how…” or “I didn’t have time…”. You wouldn’t send a document full of misspellings to all your colleagues because you were in a hurry, would you? The one criticism that could have been leveled at Microsoft was that perhaps they didn’t provide enough support to new users of the tool. As I said above, it’s easy to use, but sometimes users need a little extra support, especially when you are introducing them to something that may be perceived as additional work.

Thankfully Microsoft have filled that gap with a six-part tutorial video which clearly explains why and how to get started using the Accessibility Checker. Part 1 is a short introduction (embedded below), followed by a video on each important accessibility practice: Alternative Text, Heading Styles, Hyperlinks, File Naming and Tables. Each video is accompanied by a short exercise to allow you to put your new skill into practice immediately. The whole tutorial can be completed in under 20 minutes. This tutorial should be a requirement for anybody producing documents for circulation to the public. Have a look at the introduction video below.

Global Accessibility Awareness Day – Apple’s “Accessibility – Designed for Everyone” Videos

Today, May 18th, is Global Accessibility Awareness Day, and to mark the occasion Apple have produced a series of seven videos (also available with audio description) highlighting how their products are being used in innovative ways by people with disabilities. All the videos are available in a playlist here, and I guarantee that if you haven’t seen them and you are interested in accessibility and AT, it’ll be the best 15 minutes you have spent today! Okay, the cynical among you will point out that this is self-promotion by Apple, a marketing exercise. On one level of course it is: they are a company, and like any company their very existence depends on generating profit for their shareholders. These videos promote more than Apple, however; they promote independence, creativity and inclusion through technology. Viewed in this light, these videos illustrate to people with disabilities how far technology has moved on in recent years and make them aware of the potential benefits to their own lives. Hopefully the knock-on effect of this increased awareness will be increased demand. Demand these technologies, people; it’s your right!

As far as a favorite video from this series goes, everyone will have their own. In terms of the technology on show, the video of Todd “The Quadfather” (below) was possibly the most interesting to me.

This video showcases the range of products compatible with Apple’s HomeKit and how they can be integrated with Siri.

My overall favorite video, however, is Patrick: musician, DJ and cooking enthusiast. Patrick’s video is an ode to independence and creativity. The technologies he illustrates are Logic Pro (digital audio workstation software) with VoiceOver (Apple’s inbuilt screen reader) and the object recognition app TapTapSee, which, although it has been around for several years now, is still an amazing use of technology. It’s Patrick’s personality that makes the video though; this guy is going places, and I wouldn’t be surprised if he had his own prime time TV show this time next year.

FlipMouse – Powerful, open and low-cost computer access solution

The FlipMouse (Finger- and Lip mouse) is a computer input device intended to offer an alternative for people with access difficulties that prevent them from using a regular mouse, keyboard or touchscreen. It is designed and supported by the Assistive Technology group at the UAS Technikum Wien (Department of Embedded Systems) and funded by the City of Vienna (ToRaDes project and AsTeRICS Academy project). The device itself consists of a low-force joystick (it requires minimal effort to operate) that can be controlled with the lips, a finger or a toe. The lips are probably the preferred access method, as the FlipMouse also allows sip-and-puff input.

Man using a mounted FlipMouse to access a laptop computer

Sip and puff is an access method which is not as common in Europe as it is in the US; however, it is an ideal way to increase the functionality of a joystick controlled by lip movement. See the above link to learn more about sip and puff, but to give a brief explanation: it uses a sensor that monitors the air pressure coming from a tube. A threshold can be set (depending on the user’s ability) for high pressure (puff) and low pressure (sip). Once this threshold is passed, it can act as an input signal such as a mouse click, switch input or key press, among other things. The FlipMouse also has two jack inputs for standard ability switches, as well as infrared in (for learning commands) and out (for controlling a TV or other environmental controls). All these features alone make the FlipMouse stand out against similar solutions; however, that’s not what makes it special.
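To make the sip-and-puff idea concrete, here is a minimal sketch of the threshold logic described above. It is illustrative only: the threshold values and the read_pressure() helper are my own hypothetical stand-ins, and the real FlipMouse firmware runs as C code on the Teensy LC rather than Python.

```python
# Illustrative sketch of sip/puff threshold detection. This is NOT the
# actual FlipMouse firmware; read_pressure() and all values are hypothetical.

SIP_THRESHOLD = 400   # readings below this count as low pressure (sip)
PUFF_THRESHOLD = 620  # readings above this count as high pressure (puff)
                      # readings in between fall in a neutral dead zone

def classify(reading):
    """Map a raw pressure reading to an input event, or None if the
    reading stays inside the neutral dead zone."""
    if reading < SIP_THRESHOLD:
        return "sip"    # could be mapped to e.g. a right click or switch input
    if reading > PUFF_THRESHOLD:
        return "puff"   # could be mapped to e.g. a left click or key press
    return None

# The firmware would poll the sensor in a loop, along these lines:
# while True:
#     event = classify(read_pressure())   # read_pressure(): sensor driver
#     if event is not None:
#         send_input_event(event)         # hypothetical output routine
```

Adjusting the two thresholds is how a device like this would be tuned to an individual user’s ability, as described above.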

Open Source

The FlipMouse is the first of a new kind of assistive technology (AT) solution, not because of what it does but because of how it’s made. It is completely Open Source, which means that everything you need to make this solution yourself is freely available. The source code for the GUI (Graphical User Interface) used to configure the device, the code for the microcontroller (Teensy LC), a bill of materials listing all the components and the design files for the enclosure are all available on their GitHub page. The quality of the documentation distinguishes it from previous Open Source AT devices. The IKEA-style assembly guide clearly outlines the steps required to put the device together, making the build not only as simple as some of the more advanced Lego kits available but also as enjoyable. That said, unlike Lego this project does require reasonable soldering skills and a steady hand; some parts are tricky enough to keep you interested. The process of constructing the device also gives much better insight into how it works, which will undoubtedly come in handy should you need to troubleshoot problems at a later date.

Although, as stated above, the AsTeRICS Academy provide a list of all components, a much better option in my opinion would be to purchase the construction kit, which contains everything you need to build your own FlipMouse, right down to the glue for the laser-cut enclosure, all neatly packed into a little box (pictured below). The kit costs €150 and all details are available from the FlipMouse page on the AsTeRICS Academy site. Next week I will post some video demonstrations of the device and look at the GUI, which allows you to program the FlipMouse as a computer input device, accessible game controller or remote control.

FlipMouse construction kit in box

I can’t overstate how important a development the FlipMouse could be to the future of Assistive Technology. Giving communities the ability to build and support complex AT solutions locally not only makes them more affordable but also strengthens the connection between those who have a greater requirement for technology in their daily lives and those with the creativity, passion and in-depth knowledge of emerging technologies: the makers. Here’s hoping the FlipMouse is the first of many projects to take this approach.

Tobii Tracker 4C

Tobii Gaming, the division of Swedish technology firm Tobii responsible for mainstream (and therefore low-cost) eye trackers, have released a new peripheral called the Tracker 4C (pictured below).

Contents of the Tracker 4C box: eye tracker, documentation, 2 magnetic mounts

Before getting into the details of this new device, I first want to highlight that although this eye tracker can be used as a computer access solution for someone with a disability (it already works with Optikey and Project IRIS), it is not being marketed as such. What this means in practice is that it may not provide the reliability of Tobii’s much costlier Assistive Technology (AT) eye trackers, such as the Tobii PC Eye Mini. So if eye tracking is your only means of communication or computer access and you have the funds, I would recommend spending that extra money. That said, many people don’t have the funds, or perhaps they have other more robust means of computer access and just want to use eye tracking for specific tasks like creating music or gaming. For those people the Tracker 4C is really good news, as it packs a lot into the €159 price tag and overcomes many of the weaknesses of its predecessor, the Tobii EyeX. The big improvement over the EyeX is the inclusion of the EyeChip. The EyeChip, previously only included in the much more expensive range of Tobii AT eye trackers, takes care of most of the data processing before sending it on to the computer. The result is that far less data is passed from the eye tracker to the computer (100 KB/s compared to 20 MB/s) and the CPU (Central Processing Unit) load is much lower (1% compared to 10%). This allows it to work over an older USB 2 connection and means most computers, even budget ones, should have no problem running this device (unlike the EyeX, which required a high-end PC).
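Some back-of-the-envelope arithmetic, using only the figures quoted above, shows just how much work the EyeChip takes off the host machine. The per-sample sizes are my own derivation, not official Tobii numbers:

```python
# Rough comparison derived from the quoted specs. The per-sample sizes
# below are back-of-the-envelope derivations, not official Tobii figures.

eyex_rate = 20 * 1024**2     # EyeX streams ~20 MB/s to the PC (70 Hz sampling)
tracker4c_rate = 100 * 1024  # Tracker 4C streams ~100 KB/s (90 Hz sampling)

print(f"Bandwidth reduction: ~{eyex_rate / tracker4c_rate:.0f}x")
# -> ~205x

print(f"EyeX: ~{eyex_rate / 70 / 1024:.0f} KB per sample")
# -> ~293 KB per sample: essentially raw sensor data, which the host
#    PC must then process itself (hence the 10% CPU load)

print(f"4C:   ~{tracker4c_rate / 90 / 1024:.1f} KB per sample")
# -> ~1.1 KB per sample: the EyeChip has already reduced each frame
#    to processed gaze data before it reaches the PC
```

That two-orders-of-magnitude reduction is what lets the Tracker 4C run over USB 2.0 on modest hardware.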

All this must have come at some compromise in performance, right? Wrong. The Tracker 4C actually beats the EyeX in almost every category. The frequency has risen from 70 Hz to 90 Hz, a slightly longer operating distance is possible (up to 0.95 m), and the maximum screen size has increased by 3 inches to 30 inches. This last stat could be the deciding factor that convinces Tobii PC Eye Mini users to buy the Tracker 4C as a secondary device, as the Mini only works with a maximum screen size of 19 inches. The Tracker 4C also offers head tracking, but as I haven’t tested the device I’m unsure how this works or whether it is something that could be utilised as AT. Watch this space: the Tracker 4C is on our shopping list and I’ll post an update as soon as we get to test whether it’s as impressive in real life as it seems on paper.

The table below compares specs for all of Tobii’s current range of consumer eye trackers. In areas where information was not available I have added a question mark and, where appropriate, a speculation. I am open to correction.

| Eye Tracker Model | Tobii Eye Tracker 4C (Gaming) | Tobii EyeX* (Gaming) | Tobii PC Eye Explore (Assistive Technology) | Tobii PC Eye Mini (Assistive Technology) |
| --- | --- | --- | --- | --- |
| Cost | €159 | €109 | €680 | €2000 |
| Size | 17 × 15 × 335 mm (0.66 × 0.6 × 13.1 in) | 20 × 15 × 318 mm (0.8 × 0.6 × 12.5 in) | 20 × 15 × 318 mm (0.8 × 0.6 × 12.5 in) | 170 × 18 × 13 mm (6.69 × 0.71 × 0.51 in) |
| Weight | 91 grams | 91 grams | 69 grams | 59 grams |
| Max Screen Size | 27 in (16:9 aspect ratio), 30 in (21:9 aspect ratio) | 27 in | 27 in | 19 in |
| Operating Distance | 50–95 cm / 20–37 in | 50–90 cm / 20–35 in | 45–80 cm / 18–32 in | 45–80 cm / 18–32 in |
| Track Box Dimensions | 40 × 30 cm at 75 cm (16 × 12 in at 29.5 in) | 40 × 30 cm at 75 cm (16 × 12 in at 29.5 in) | 48 × 39 cm (19 × 15 in) | >35 × 30 cm ellipse (>13.4 × 11.8 in) |
| Tobii EyeChip | Yes | No | No | Yes |
| Connectivity | USB 2.0 (integrated cord, USB 2.0 BC 1.2) | USB 3.0 (separate cord) | USB 3.0 | USB 2.0 |
| USB Cable Length | 80 cm | 180 cm | 180 cm | Short (extension needed in some situations) |
| Head Tracking | Yes (not powered by EyeChip) | No | No | No |
| OS Compatibility | Windows 7, 8.1 and 10 (64-bit only) | Windows 7, 8.1 and 10 (64-bit only) | Windows 7, 8.1 and 10 (64-bit only) | Windows 7, 8.1 and 10 |
| CPU Load | 1%* | 10% | 10% | ? (unconfirmed but likely similar to Tracker 4C) |
| Power Consumption | 1.5 W | 4.5 W | ? (unconfirmed but suspect same as EyeX) | 1.5 W |
| USB Data Transfer Rate | 100 KB/s | 20 MB/s | ? (unconfirmed but suspect same as EyeX) | ? (unconfirmed but likely similar to Tracker 4C) |
| Frequency | 90 Hz | 70 Hz | 55 Hz | 60 Hz |
| Illuminators | Near infrared (NIR 850 nm) only | Backlight-assisted near infrared (NIR 850 nm + red light, 650 nm) | ? (unconfirmed but suspect same as EyeX) | ? |
| Tracking Population | 97% | 95% | ? (unconfirmed but suspect same as EyeX) | ? |
| Additional Software | Tobii Eye Tracking Core Software | Tobii Eye Tracking Core Software | Gaze Point (mouse emulation software) | Windows Control |

* The specs given here are taken from those listed at https://help.tobii.com/hc/en-us/articles/212814329-What-s-the-difference-between-Tobii-Eye-Tracker-4C-and-Tobii-EyeX- (accessed 08/03/2017). Because the weight listed is 91 grams, I suspect these specs are for the first-generation Tobii EyeX (the more recent EyeX weighs 69 grams). The current EyeX specs are probably similar to the PC Eye Explore, but I cannot confirm this.

GazeSpeak & Microsoft’s ongoing efforts to support people with Motor Neuron Disease (ALS)

Last Friday (February 17th) New Scientist published an article about a new app in development at Microsoft called GazeSpeak. Due to be released over the coming months on iOS, GazeSpeak aims to facilitate communication between a person with MND (known as ALS in the US; I will use both terms interchangeably) and another individual, perhaps their partner, carer or friend. Developed by Microsoft intern Xiaoyi Zhang, GazeSpeak differs from traditional approaches in a number of ways. Before getting into the details, however, it’s worth looking at the background. GazeSpeak didn’t come out of nowhere; it’s actually one of the products of some heavyweight research into Augmentative and Alternative Communication (AAC) that has been taking place at Microsoft over the last few years. Since 2013, inspired by football legend and ALS sufferer Steve Gleason (read more here), Microsoft researchers and developers have brought the weight of their considerable collective intellect to bear on increasing the ease and efficiency of communication for people with MND.

Last year Microsoft Research published a paper called “AACrobat: Using Mobile Devices to Lower Communication Barriers and Provide Autonomy with Gaze-Based AAC” (abstract and PDF download at the previous link), which proposed a companion app to allow an AAC user’s communication partner to assist (in a non-intrusive way) in the communication process. Take a look at the video below for a more detailed explanation.

This is an entirely new approach to increasing the efficiency of AAC, and one that, I suggest, could only have come from a large mainstream tech organisation with over thirty years’ experience facilitating communication and collaboration.

Another Microsoft research paper published last year (with some of the same authors as the previous paper), called “Exploring the Design Space of AAC Awareness Displays”, looks at the importance of a communication partner’s “awareness of the subtle, social, and contextual cues that are necessary for people to naturally communicate in person”. Their research focused on creating a display that would allow the person with ALS to express things like humor, frustration and affection: emotions that are difficult to express with text alone. Yes, they proposed the use of emoji, a proven and effective way of overcoming a similar difficulty in remote or non-face-to-face interactions, but they went much further and also looked at solutions like avatars, skins and even coloured LED arrays. This, like the paper above, is an academic paper and as such not an easy read, but the ideas and solutions being proposed by these researchers are practical and will hopefully filter through to end users of future AAC solutions.

That brings us back to GazeSpeak, the first fruit of the Microsoft/Steve Gleason partnership to reach the general public. Like the AACrobat solution outlined above, GazeSpeak gives the communication partner a tool rather than focusing on tech for the person with MND. As the image below illustrates, the communication partner has GazeSpeak installed on their phone and, with the app running, holds the device up to the person with MND as if photographing them. Microsoft suggest a sticker with four grids of letters is placed on the back of the smartphone, facing the speaker. The app then tracks the person’s eyes: up, down, left or right, where each direction means the letter they are selecting is contained in the grid in that direction (see photo below).

Man looking right; another person holding up a smartphone with GazeSpeak installed

Similar to how the old T9 predictive text worked, GazeSpeak selects the appropriate letter from each group and predicts the word based on the most common English words. So the app is using AI in the form of machine vision to track the eyes, and also to make the word prediction. In the New Scientist article they mention that the user will be able to add their own commonly used words and people/place names, which one assumes would prioritize them within the prediction list. In the future perhaps some capacity for learning could be added to further increase efficiency. After using this system for a while the speaker may not even need to see the sticker with letters; they could write words from muscle memory. At that stage a simple QR code leading to the app download would allow them to communicate with complete strangers using just their eyes and no personal technology.
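To illustrate the T9-style idea, here is a minimal prediction sketch. The four letter groups, the tiny sample lexicon and the function names are all hypothetical stand-ins of my own; the real app uses its own letter grouping, a large frequency-ranked English dictionary and machine vision to detect the eye direction.

```python
# Illustrative sketch of GazeSpeak-style word prediction, similar in
# spirit to old T9 predictive text. The letter groups and the sample
# lexicon below are hypothetical, not taken from the actual app.

GROUPS = {
    "up":    set("abcdef"),
    "down":  set("ghijklm"),
    "left":  set("nopqrs"),
    "right": set("tuvwxyz"),
}

# word -> relative frequency (higher = more common)
LEXICON = {"hello": 90, "help": 95, "gem": 10, "hemp": 5}

def predict(directions):
    """Return candidate words matching a sequence of eye directions,
    most frequent first. A word matches if each of its letters sits in
    the group selected by the corresponding eye movement."""
    candidates = [
        word for word in LEXICON
        if len(word) == len(directions)
        and all(ch in GROUPS[d] for ch, d in zip(word, directions))
    ]
    return sorted(candidates, key=lambda w: -LEXICON[w])

# "help": h -> down, e -> up, l -> down, p -> left
print(predict(["down", "up", "down", "left"]))
# -> ['help', 'hemp']: both match the pattern, 'help' ranked first
```

Note how the same direction sequence matches both “help” and “hemp”; ranking the candidates by word frequency is what makes this kind of ambiguous input usable in practice.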

Create inclusive content with Office Mix and Sway

Here in Enable Ireland’s AT service we have been investigating using the Office Mix plugin for PowerPoint to create more engaging and accessible eLearning content. We are still at the early stages and haven’t done any thorough user testing yet, but so far it shows some real promise.

From the end user’s perspective it offers a number of advantages over the standard YouTube-style hosted video. Each slide is marked out, allowing the user to easily skip forward or back to different sections. So you can skip forward if you are comfortable with a particular area of the presentation or, more importantly, revisit parts that may not have been clear. The table of contents button makes this even easier by expanding thumbnail views of all the slides, which link directly to the relevant sections of the video. There is also the ability to speed up or slow down the narration. Apart from the obvious comic value, this is actually a very useful accessibility feature for people who may be looking at a presentation made in a language not native to them or by someone with a strong regional accent. On the flip side it’s also a good way to save time: the equivalent of speed reading.

From the content creator’s perspective it is extremely user friendly. Most of us are already familiar with PowerPoint, and these additional tools sit comfortably within that application. You can easily record your microphone or camera and add the result to a presentation you may have already created. Another feature is “Inking”, the ability to write on slides and highlight areas with different colour inks. You can also add live web pages, YouTube videos (although this feature did not work in my test), questions and polls. Finally, the analytics will give you very good insight into which areas of your presentation might need more clarification, as you can see if someone chooses to look at a slide a number of times. You can also see if slides were skipped or questions answered incorrectly.

Below is a nice post outlining some ways to create inclusive content using Office Mix and Sway, Microsoft’s other new(ish) web-based presentation platform. Below that is a much more detailed introduction to Office Mix using… yes, you guessed it, Office Mix.

How Office Mix and Sway can help with student inclusion – Gerald Haigh

The Big Life Fix

Just when we thought 2016 couldn’t get any better (in an AT sense), the BBC make a prime time TV show with a huge focus on the design and construction of bespoke AT solutions. Although it aired on the BBC in December, due to regional restrictions it’s not available on iPlayer to many on this side of the Irish Sea, so you may not have had the chance to see full episodes yet. The good news is that full episodes are beginning to make their way onto YouTube and are well worth a look. The general theme of The Big Life Fix is how technology has the power to improve lives. It is not just about what we call assistive technology; it is broader in scope, covering many different types of technology challenge with the goal of democratising and demystifying solutions. AT does play a big part in many of the challenges, however.

The first episode (a clip of which I’ve embedded below) introduces us to James, a young photographer who is having difficulty operating his SLR camera. The solution created for James features all the exciting technology and techniques being utilised every day by makers around the world: an Arduino microcontroller, 3D printing and App Inventor, as well as some good old-fashioned hardware hacking. The iterative nature of the design process is well illustrated, with James critically evaluating the initial prototype and providing insights which significantly change the direction of the design. The other AT-related challenge in this first episode features a graphic designer called Emma who, due to tremors which are a symptom of her Parkinson’s, is unable to draw or sign her name. After a number of prototypes and lots of research, a very clever solution is arrived at which seems to be extremely effective, leading to a rather emotional scene (have the hankies ready).

The Big Life Fix beautifully portrays both the potential of AT to improve quality of life and the personal satisfaction a maker might get from participating in a successful solution. I can see this show sowing the seeds for a strong and equitable future for assistive technology.

Finally, the icing on the cake is that all the solutions featured on the show are Open Source, with the source code, design files and build notes used to print, shape and operate them publicly available on GitHub. Nice work, BBC. Take a look at the clip below (UPDATE: full episodes are now on YouTube, though I’m not sure how long they will stay there).

Inbuilt Accessibility – AT in mainstream technology

There is of course some crossover between the different AT highlights of 2016 I have included here. An overall theme running through all of this year’s highlights is the mainstreaming of AT. Apple, Google and Microsoft have all made significant progress in the areas previously mentioned: natural language understanding and smart homes. This has led to easier access to computing devices and, through them, the ability to automate and remotely control devices and services that assist us with daily living tasks around the house. However, these developments are aimed at the mainstream market, with advantages to AT users being a welcome additional benefit. What I want to look at here are the features they are including in their mainstream products specifically aimed at people with disabilities, with the goal of making their products more inclusive.

Apple have always been strong in this area and have led the way for the last five years. 2016 saw them continue this fine work with new features such as Dwell within macOS and Touch Accommodations in iOS 10, as well as many other refinements of existing features. Along with Siri, Apple have also brought Switch Control to Apple TV, either using a dedicated Bluetooth switch or through a connected iOS device in a method they are calling Platform Switching. Platform Switching, which also came out this year with iOS 10, “allows you to use a single device to operate any other devices you have synced with your iCloud account. So you can control your Mac directly from your iPhone or iPad, without having to set up your switches on each new device” (the devices need to be on the same WiFi network). The video below from Apple really encapsulates how far they have come in this area and how important this approach is.

Not to be outdone, Microsoft bookended 2016 with some great features in the area of literacy support, an area they had perhaps neglected for a while. They more than made up for this last January with the announcement of Learning Tools for OneNote. I’m not going to go into details of what Learning Tools offers as I have covered it in a previous post. All I’ll say is that it is free, it works with OneNote (also free, and a great note-taking and organisation support in its own right) and is potentially all many students would need by way of literacy support (obviously some students may need additional supports). Then, in the fourth quarter of the year, they updated their OCR app Office Lens for iOS to provide the Immersive Reader (text to speech) directly within the app.

Finally, Google, who probably have the weakest record of the big three in terms of providing inbuilt accessibility features (to be fair, they have always followed a different approach, which proved to be equally effective), really hit a home run with their Voice Access solution, which was made available for beta testing this year. Again, I have discussed this in a previous post, where you can read about it in more detail. Having tested it, I can confirm that it gives complete voice access to all of an Android device’s features, as well as to any third-party apps I tested. Using a combination of direct voice commands (Open Gmail, Swipe left, Go Home etc.) and a system of numbering buttons and links, even obscure apps can be operated. The idea of using numbers for navigation, while not new, is extremely appropriate in this case: numbers are easily recognised regardless of voice quality or regional accent. Providing alternative access and supports to mainstream operating systems is the cornerstone of recent advances in AT. As the previous video from Apple showed, access to smartphones or computers gives access to a vast range of services and activities. For example, inbuilt accessibility features like Apple’s Switch Control or Google’s Voice Access open up a range of mainstream Smart Home and security devices and services to people with alternative access needs, where before they would have had to spend a lot more on a specialist solution that would probably have been inferior.

Dawn of the Personal Digital Assistants

Speech Recognition has been around a long time by technology standards; however, up until about 2010 most of that time was spent languishing in Gartner’s wonderfully named “Trough of Disillusionment”. This was partly because the technology hadn’t matured enough, and people were frustrated and disappointed when it didn’t live up to expectations, a common phenomenon identified by the previously alluded to Hype Cycle. There are a couple of reasons why Speech Recognition took so long to mature. It’s a notoriously difficult technical feat that requires sophisticated AI and significant processing power to achieve consistently accurate results. The advances in processing power were easy enough to predict thanks to Moore’s Law. Progress in the area of AI was a different story entirely. Speech Recognition relies first on pattern recognition, but that only takes it so far; to improve its accuracy, improvements in the broader area of natural language processing were needed. Thanks to the availability of massive amounts of data via the World Wide Web, much of it coming from services like YouTube, we have seen significant advances in recent years. However, there is also a human aspect to the slow uptake of speech-driven user interfaces: people just weren’t ready to talk to computers. 2016 is the year that started to change.

Siri (Apple), who was first on the scene and is now five years old and getting smarter all the time, came to macOS and Apple TV this year. Cortana (Microsoft), who started on Windows Phone and then moved to the desktop with Windows 10, made her way onto Xbox One, Android and iOS, and is soon to be embodied in all manner of devices according to reports. Unlike Siri, Cortana is a much more sociable personal digital assistant, willing to work and play with anyone. By this I mean Microsoft have made it much easier for Cortana to interact with other apps and services, and will be launching the Cortana Skills Kit early next year. As we’ve seen in the past, it’s this kind of openness and interoperability that takes technologies in directions not envisaged, and often leads to adaption and adoption as personal AT.

If there were a personal digital assistant of the year award, however, Amazon Echo and Alexa would get it for 2016. Like Microsoft, Amazon have made their Alexa service easy for developers to interact with, and many manufacturers of Smart Home products have jumped at the opportunity. It is the glowing reviews from all quarters, however, that make the Amazon Echo stand out (from a self-proclaimed New Yorker Luddite to the geeks at CNET).

Last but not least we have Google. What Google’s personal digital assistant lacks in personality (no name?) it makes up for with stunning natural language capabilities and an eerie knack of knowing what you want before you do. Called Google Now on smartphones (or just the Google App? I’m confused!), similar functionality, without some of the context relevance, is available through Voice Search in Chrome. They also offer voice to text in Google Docs, which this year has been much improved with the addition of a range of editing commands. There is also the new Voice Access feature for Android, currently in beta testing, but more on that later. In the hotly contested area of the Smart Home, Google also have a direct competitor to Amazon’s Echo in their Google Home smart speaker. Google are a strong player in this area; my only difficulty (and it is an actual difficulty) is saying “OK Google”. Rather than rolling off the tip of my tongue, it kind of catches at the back, requiring me to use muscles normally reserved for sucking Polo mints. Even though more often than not I mangle this trigger phrase, it always works, and that’s impressive.

So who is missing? There is one organisation conspicuous by their absence, with the resources (in terms of money, user data and technology) and already positioned in that “personal” space. Facebook would rival Google in the amount of data they have at their disposal from a decade of video, audio and text: the raw materials for natural language processing. If we add to this what Facebook knows about each of its users (what they like, their family, friends and relationships, calendar, history, interests) you get more than a Personal Digital Assistant; maybe Omnipersonal Digital Assistant would be more accurate. The video below, which was only released today (21/12/16), is of course meant as a joke (there are any number of things I could add here, but I’ll leave it to the Guardian). All things considered, however, it’s only a matter of time before we see something coming out of Facebook in this area, and it will probably take things to the next level (just don’t expect it to be funny).

What does all this mean for AT? At the most basic level, Speech Recognition provides an alternative to the keyboard/mouse/touchscreen method of accessing a computer or mobile device, and the more robust and reliable it is, the more efficiently it can be used. It is now a viable alternative, and this will make a massive difference to the section of our community who have the ability to use their voice but, for any number of reasons, cannot use other access methods. Language translation can be accurately automated, even in real time, like the translation feature Skype launched this year. At the very least this kind of technology could provide real-time subtitling, but the potential is even greater. It’s not just voice access that benefits from these advances, however; Personal Digital Assistants can also be interacted with using text. Speech Recognition is only a part of the broader area of Natural Language Processing. Advances in this area lead directly to fewer clicks and less menu navigation. Microsoft have used this to great effect in the new “Tell me what you want to do” feature in their Office range. Rather than looking through help files or searching through menus, you just type what tool you are looking for, in your own words, and it serves it right up!

Natural Language Processing will also provide faster and more accurate results for web searches, because there is a better understanding of actual content rather than a reliance on keywords. In a similar way we are seeing this technology provide increased literacy supports, as the computer can better understand what you mean from what you type. Large blocks of text can be summarised, and alternative phrasing can be suggested to increase text clarity. Again, the new Editor feature in Microsoft Word is made possible by this level of natural language understanding.

2016 – Technology Trends and Assistive Technology (AT) Highlights

As we approach the end of 2016 it’s an appropriate time to look back and take stock of the year from an AT perspective. A lot happened in 2016, not all of it good. Socially, humanity seems to have regressed over the past year. Maybe this short-term, inward-looking protectionist sentiment has been brewing longer, but 2016 brought the opportunity to express it politically; you know the rest. While society steps and looks back, technology continues to leap and bound forward, and 2016 has seen massive progress in many areas, particularly those associated with Artificial Intelligence (AI) and Smart Homes. This is the first in a series of posts examining some technology trends of 2016 and how they affect the field of Assistive Technology. The links below will become active as the posts are added. If I’m missing something, please add it to the comments section.

Dawn of the Personal Digital Assistants

Game Accessibility

Inbuilt Accessibility – AT in mainstream technology 

Software of the Year – The Grid 3

Open Source AT Hardware and Software

The Big Life Fix

So although 2016 is unlikely to be looked on kindly by future historians (you know why), it has been a great year for Assistive Technology, though perhaps one of promise rather than realisation. One major technology trend of 2016 missing from this series of posts is Virtual (or Augmented) Reality. While VR was everywhere this year, with products coming from Sony, Samsung, Oculus and Microsoft, its usefulness beyond gaming is only beginning to be explored (particularly within education).

So what are the goals for next year? Well, harnessing some of these innovations in ways that make them accessible and usable by people with disabilities at an affordable price. If in 2017 we can start putting some of this tech into the hands of those who stand to benefit most from its use, then next year will be even better.