Enable Ireland’s garden, ‘Beyond Boundaries’, was an award winner at Bloom in the Park this year. With a focus very much on Access for All, we wanted to see how we could make the garden more easily accessible to Bloom visitors with vision impairment. So we decided to make a tactile book featuring a small selection of the plants in the garden, printed using a 3D printer. Here are the results. We got a lot of really good feedback from visitors, and the book is now located in our Garden Centre in Sandymount, where customers can check it out for themselves.
What do you think of this idea? Have you used 3D printing to enhance access to other services or facilities? We’d love to learn from your experience!
Tactile book cover with map of Enable Ireland Bloom Garden
Last week we were visited in Enable Ireland, Sandymount, by two of the most experienced practitioners working in the area of assistive music technology. Dr Tim Anderson http://www.inclusivemusic.org.uk/ and Elin Skogdal (SKUG) dropped by to talk about the new eyegaze music software they have been developing and to share some tips with the musicians from Enable Ireland Adult Services. Tim Anderson has been developing accessible music systems for the last 25 years. E-Scape, which he developed, is the only MIDI composition and performance software designed from the ground up for users of alternative input methods (switch, joystick and now eyegaze). Tim also works as an accessible music consultant for schools and councils. Elin Skogdal is a musician and educator based at the SKUG Centre. She has been using Assistive Music Technology in music education since 2001 and was one of those responsible for establishing the SKUG Centre. The SKUG Centre is located in Tromsø, Northern Norway. SKUG stands for “Performing Music Together Without Borders”, and the aim of the Centre is to provide opportunities for people who can’t use conventional instruments to play and learn music. SKUG is part of the mainstream art school of Tromsø (Tromsø Kulturskole), which provides opportunities for SKUG students to collaborate with other music and dance students and teachers. SKUG have students at all levels and ages – from young children to university students. If you would like to know more about Elin’s work at SKUG, click here to read a blog post from Apollo Ensemble.
Following the visit and workshop they sent us some more detailed information about the exciting new eyegaze music software they are currently developing, Eye-Touch. We have included this in the paragraphs below. If you are interested in getting involved in their very user-led development process you can contact us here (comments below) and we will put you in touch with Tim and Elin.
‘Eye-touch’ (Funded by ‘NAV Hjelpemidler og tilrettelegging’ in 2017, and Stiftelsen Sophie’s Minde in 2018) is a software instrument being developed by the SKUG centre (Part of ‘Kulturskolen i Tromsø’), in collaboration with Dr. Tim Anderson, which enables people to learn and play music using only their eyes. It includes a built-in library of songs called ‘Play-screens’, with graphical buttons which play when you activate them.
Buttons are laid out on screen to suit the song and the player’s abilities, and can be of any size and colour, or show a picture. When you look at a button (using an eye-gaze tracking system such as Tobii or Rolltalk) it plays its musical content. You can also play buttons in other ways to utilise the screen’s attractive look: you can touch a touch-screen or smartboard, press switches or PC keys, or hit keys on a MIDI instrument.
The music within each button can either be musical notes played on a synthesised instrument, or an audio sample of any recorded sound, for example animal noises or sound effects. Sound samples can also be recordings of people’s voices speaking or singing words or phrases. So a child in a class group could play vocal phrases to lead the singing (‘call’), with the other children then answering by singing the ‘response’.
Pictured above, a pupil in Finland tries out a Play-screen with just three buttons, containing musical phrases plus a sound effect of a roaring bear (popular with young players!). She had been using the system for just a few minutes and was already successfully playing the song, which proved very enjoyable and motivating for her.
SKUG’s experience from their previous prototype system has led to the incorporation of some innovative playing features which distinguish it from other eyegaze music systems, and which have been shown to enable people to play who couldn’t otherwise. These features provide an easy entry level, and we have found that they enable new users to start playing immediately and gain motivation. These support features can also be changed or removed by teachers to suit each player’s abilities and, most importantly, can evolve as a player practises and improves. One feature is to put the buttons in a sequence which can only be played in the right order, so the player can ‘look over’ other buttons to get to the next ‘correct’ button.
Here are two examples: The Play-screen below has buttons each containing a single note, arranged as a keyboard with colouring matching the Figurenotes scheme. A player with enough ability could learn a melody and play it by moving between the buttons in the empty space below. But by putting the buttons into a sequence order, the player is able to learn and play the melody far more easily – they can look over buttons to get to the next ‘correct’ button (note) of the song, without playing the buttons in between.
As well as illustrating a general theme, the facility to add pictures gives us many more possibilities. The Play-screen below left has buttons which show pictures and play sounds and music relating to J.S. Bach’s life story. The buttons could be played freely, but in this case have been put into a sequence order to illustrate his life chronologically. As before, a player can move through the buttons to play them in order, even though they are close together. But we may want to make them even bigger, and make the player’s job even easier, by setting the screen to display only the ‘next’ button in the sequence (below right). So the other buttons are hidden, and the player only sees the button which is next to play, and can then move onto it.
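To make the sequence idea concrete, here is a rough sketch in Python of how a Play-screen with sequence mode and a ‘show next only’ option might behave. To be clear, Eye-Touch’s actual code isn’t public, so the class names and behaviour below are purely our own illustration of the features described, not Tim and Elin’s implementation.

```python
# Illustrative sketch only: Eye-Touch's internals aren't public, so the
# class and behaviour here are assumptions based on the features described.

class PlayScreen:
    """A set of buttons, optionally locked to a playing sequence."""

    def __init__(self, buttons, sequence_mode=False, show_next_only=False):
        self.buttons = buttons              # list of (label, sound) pairs
        self.sequence_mode = sequence_mode
        self.show_next_only = show_next_only
        self.position = 0                   # index of the next button due

    def visible_buttons(self):
        """In 'show next only' mode, hide everything but the next button."""
        if self.show_next_only:
            return [self.buttons[self.position][0]]
        return [label for label, _ in self.buttons]

    def activate(self, label):
        """Called when the player dwells on / touches a button.

        In sequence mode only the next 'correct' button plays, so the
        player can look over the others without triggering them.
        """
        if not self.sequence_mode:
            for name, sound in self.buttons:
                if name == label:
                    return sound
            return None
        next_label, sound = self.buttons[self.position]
        if label == next_label:
            # Advance, wrapping back to the start after the last note.
            self.position = (self.position + 1) % len(self.buttons)
            return sound
        return None  # looked over out of order: nothing plays


song = PlayScreen([("C", "c.wav"), ("E", "e.wav"), ("G", "g.wav")],
                  sequence_mode=True)
print(song.activate("G"))   # out of order -> None (no sound)
print(song.activate("C"))   # correct next button -> "c.wav"
```

A teacher could then relax the supports as the player improves, simply by switching `sequence_mode` off so any button plays freely.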
There is also an accompanying text to tell the story which, if desired, can be displayed on screen via a built-in ‘song-sheet’. Teachers can also make their own Play-screens by putting their own music into buttons – by either playing live on a MIDI keyboard, or recording their own sound samples. To further personalise a Play-screen for a pupil, people can also organise and edit all the visual aspects, including adding their own pictures.
The Eye-Touch software is also very easy to install and operate – we have found it quick and easy to install it on school pupils’ eye-gaze tablets, and it worked for them straight away.
In January 2018 the SKUG team started a project to further develop Eye-Touch to expand the ways of playing, the creating and editing facilities for teachers, and the range of songs provided in the library.
Through our Community Design Challenge, we’ve worked in partnership with Adult Expert AT Users and Product Design students on a variety of projects, all with the shared aim of finding innovative solutions to daily living challenges.
One of our recent projects involved the creation of an accessible banking solution for a woman with vision impairment:
Barclays Bank in the UK is leading the way in accessible banking. See how they’re doing it here.
Microsoft has been making huge strides in the realm of accessibility with each successive update to Windows, and has invested in updates to improve the user experience for people with disabilities. The improvements in their Ease of Access features include eye tracking, the Narrator, low vision features, and reading and writing improvements.
Eye Control delivers exciting new tools. For users who can’t use a mouse or keyboard to control their computer, Eye Control presents a convenient entry point to a Windows computer using eye-tracking technology. Having access to a computer via Eye Control gives individuals a way to communicate, the ability to stay in the workforce, and so much more!
What began as a hack project during a One Week Hackathon, has become a product concept for the Windows team. Microsoft has introduced Eye Control, which empowers people with disabilities to use a compatible eye tracker, such as a Tobii Eye Tracker, to operate an on-screen mouse, keyboard, and text-to-speech in Windows 10 using only their eyes.
Microsoft Learning Tools
Microsoft Learning Tools, now built into Microsoft Edge, are a set of features designed to make reading easier for people with learning differences like dyslexia. In this update, a user can now simultaneously highlight and listen to text in web pages and PDF documents, supporting reading and increasing focus.
Now, with the addition of the Immersive Reader functionality of Learning Tools, you can photograph a document, export it to Immersive Reader and immediately use the tools to support your understanding of the text.
Narrator will include the ability to use artificial intelligence to generate descriptions for images that lack alternative text. For websites or apps that don’t have alt text built in, this feature will provide descriptions of an image. Narrator will now also include the ability to send commands from a keyboard, touch or braille display and get feedback about what a command does without invoking it. There will also be some braille improvements: Narrator users can type and read using different braille translations, and can now perform braille input for application shortcuts and modifier keys.
Desktop Magnifier is also getting an option to smooth fonts and images, along with mouse wheel scrolling to zoom in and out. It is now possible to use Magnifier with Narrator, so you can zoom in on text and have it read aloud.
Windows Speech Recognition already allowed people to speak into their microphone and have their words converted into text on the screen. With the Windows 10 update, a person can now use dictation to convert spoken words into text anywhere on their PC.
To start dictating, select a text field and press the Windows logo key + H to open the dictation toolbar. Then say whatever’s on your mind.
As well as dictating text, you can also use voice commands to do basic editing or to input punctuation. (English only)
If it’s hard to see what’s on the screen, you can apply a color filter. Color filters change the color palette on the screen and can help you distinguish between things that differ only by color.
To change your color filter, select Start > Settings > Ease of Access > Color & high contrast. Under Choose a filter, select a color filter from the menu. Try each filter to see which one suits you best.
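If you are curious what a colour filter actually does under the hood, each screen pixel’s RGB value is passed through a transform. Windows’ own implementation isn’t published, but the idea can be sketched in a few lines of Python using the standard luma weights for grayscale and simple channel inversion:

```python
# A rough model of how a colour filter works: every pixel's RGB value is
# passed through a transform. (Windows' actual implementation isn't public;
# the matrix below uses the standard luma weights for grayscale.)

GRAYSCALE = [
    [0.299, 0.587, 0.114],
    [0.299, 0.587, 0.114],
    [0.299, 0.587, 0.114],
]

def apply_filter(pixel, matrix):
    """Apply a 3x3 colour matrix to one (r, g, b) pixel."""
    r, g, b = pixel
    return tuple(
        min(255, round(row[0] * r + row[1] * g + row[2] * b))
        for row in matrix
    )

def invert(pixel):
    """The 'Invert' filter is simply 255 minus each channel."""
    return tuple(255 - c for c in pixel)

red = (255, 0, 0)
green = (0, 255, 0)
# Pure red and pure green can be hard to tell apart for some users;
# after the grayscale filter they differ clearly in brightness.
print(apply_filter(red, GRAYSCALE))    # (76, 76, 76)
print(apply_filter(green, GRAYSCALE))  # (150, 150, 150)
```

This is why a filter can help when two interface elements differ only by hue: the transform turns that hue difference into a brightness difference.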
Tamas and Peter from route4u.org called in last week to tell us about their accessible route finding service. Based on OpenStreetMap, Route4u allows users to plan routes that are appropriate to their level and method of mobility. Available on iOS, Android and as a web app at route4u.org/maps, Route4u is the best accessible route planning solution I have seen. Where a service like Mobility Mojo gives detailed accessibility information on destinations (businesses, public buildings), Route4u concentrates more on the journey, making them complementary services. When first setting up the app you will be given the option to select either pram, active wheelchair, electronic wheelchair, handbike or walking (left screenshot below). You can further configure your settings later in the accessibility menu, selecting curb heights and maximum slopes etc. (right screenshot below).
Further configure your settings in Accessibility
You are first asked to select your mobility method
This is great, but so far nothing really groundbreaking; we have seen services like this before. Forward-thinking cities with deep pockets like London and Ontario have had similar accessibility features built into their public transport route planners for the last decade. That is a lot easier to achieve, however, because you are dealing with a finite number of route options. Where Route4u is breaking new ground is in facilitating this level of planning throughout an entire city. It does this by using the technology built into smartphones to provide crowdsourced data that constantly updates the maps. If you are using a wheelchair or scooter, the sensors on your smartphone can measure the level of vibration experienced on a journey. This data is sent back to Route4u, who use it to estimate the comfort experienced on that journey, giving other users access to even more information on which to base their route choice. The user doesn’t have to do anything; they are helping to improve the service simply by using it. Users can also more proactively improve the service by marking obstacles they encounter on their journey. An obstacle can be marked as temporary or permanent. Temporary obstacles like roadworks or those ubiquitous sandwich boards that litter our pavements will remain on the map, helping to inform the accessibility of the route, until another user confirms they have been removed and enters that information.
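Route4u haven’t published how they turn vibration data into a comfort estimate, but one plausible approach, sketched below in Python, is to rate a journey by the RMS (root mean square) of the accelerometer readings: the bumpier the ride, the lower the score. The 5 m/s² ceiling here is our own arbitrary assumption, purely for illustration.

```python
# Hedged sketch: Route4u's comfort algorithm isn't public, so this is just
# one plausible approach - rate a journey segment by the RMS of the
# smartphone accelerometer readings recorded while travelling over it.

import math

def comfort_score(accel_samples):
    """Map vertical-acceleration samples (m/s^2, gravity removed) to a
    0-100 comfort score: smooth ride -> high score."""
    if not accel_samples:
        return None
    rms = math.sqrt(sum(a * a for a in accel_samples) / len(accel_samples))
    # Assumption: anything above 5 m/s^2 RMS is maximally uncomfortable.
    return max(0, round(100 * (1 - min(rms, 5.0) / 5.0)))

smooth_pavement = [0.1, -0.2, 0.15, -0.1]
cobblestones = [2.5, -3.0, 2.8, -2.6]
print(comfort_score(smooth_pavement))  # 97
print(comfort_score(cobblestones))     # 45
```

Scores like these, aggregated from many users over the same stretch of pavement, are what would let the map distinguish a smooth route from a technically passable but punishing one.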
Example of obstacle added by user
If you connect Route4u to your Facebook account you get access to a points-based reward system. This allows you to compete with friends and have your own league table. In Budapest, where they are already well established, they have linked with sponsors who let you cash in points for more tangible rewards like a free breakfast or refreshment. These gamification features should help encourage users less inclined towards altruism to participate, and that is key: once established, Route4u relies on its users to keep information up to date. This type of service based on crowdsourced data is a proven model, particularly in the route planning sphere. It’s a bit of a catch-22, however, as a service needs to be useful first to attract users. It is early days for Route4u in Dublin, and Tamas and Peter acknowledge that a lot of work needs to be done before promoting the service here. Over the next few months their team will begin mapping Dublin city centre; this way, when they launch, there will be the foundation of an accessible route finding service which people can use, update and build upon. While Route4u has obvious benefits for end users with mobility difficulties, there is another beneficiary of the kind of data this service will generate. Tamas and Peter were also keen to point out how this information could be used by local authorities to identify where infrastructure improvements are most needed and where investment will yield the most return. In the long run this will help Dublin and her residents tackle the accessibility problem from both sides, making it a truly smart solution.
The Accessibility Checker feature has been part of Microsoft Office for the last few iterations of the software package. It provides a fast and easy way to check whether the content you are producing is accessible to users of assistive technology. By making accessibility accessible, Microsoft have left no room for excuses like “I didn’t know how…” or “I didn’t have time…”. You wouldn’t send a document full of misspellings to all your colleagues because you were in a hurry, would you? The one criticism that could have been levelled at Microsoft was that perhaps they didn’t provide enough support to new users of the tool. As I said above, it’s easy to use, but sometimes users need a little extra support, especially when you are introducing them to something that may be perceived as additional work. Thankfully Microsoft have filled that gap with a six-part tutorial video which clearly explains why and how to get started using the Accessibility Checker. Part 1 is a short introduction (embedded below), followed by a video on each important accessibility practice: Alternative Text, Heading Styles, Hyperlinks, File Naming and Tables. Each video is accompanied by a short exercise to allow you to put your new skill into practice immediately. The whole tutorial can be completed in under 20 minutes. This tutorial should be a requirement for anybody producing documents for circulation to the public. Have a look at the introduction video below.
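To give a flavour of the kind of check the Accessibility Checker performs, here is a toy version of its best-known rule, missing alternative text, written in Python against HTML rather than Office documents. The real checker works inside Office files and is far more thorough; this just shows the principle.

```python
# The idea behind the Accessibility Checker reduced to a toy example:
# scan a document (here HTML, for simplicity) and flag images that have
# no alternative text, or only an empty one, for screen-reader users.

from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            # An absent alt attribute and an empty alt="" both get flagged.
            if not attrs.get("alt"):
                self.issues.append(
                    f"Image '{attrs.get('src', '?')}' is missing alt text")

page = """
<img src="garden.jpg" alt="Tactile map of the Bloom garden">
<img src="chart.png">
"""
checker = AltTextChecker()
checker.feed(page)
print(checker.issues)  # ["Image 'chart.png' is missing alt text"]
```

The same pattern (walk the document, match each element against a rule, report failures) underlies all five practices the tutorial covers.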
Here in Enable Ireland AT service we have been investigating using the Office Mix plugin for PowerPoint to create more engaging and accessible eLearning content. While we are still at the early stages and haven’t done any thorough user testing yet, so far it shows some real promise.
From the end user perspective it offers a number of advantages over the standard YouTube-style hosted video. Each slide is marked out, allowing the user to easily skip forward or back to different sections. So you can skip forward if you are comfortable with a particular area of the presentation or, more importantly, revisit parts that may not have been clear. The table of contents button makes this even easier by expanding thumbnail views of all the slides, which directly link to the relevant sections of the video. There is also the ability to speed up or slow down the narration. Apart from the obvious comic value, this is actually a very useful accessibility feature for people who may be looking at a presentation made in a language not native to them, or by someone with a strong regional accent. On the flip side it’s also a good way to save time, the equivalent of speed reading.
From the content creator’s perspective it is extremely user friendly. Most of us are already familiar with PowerPoint, and these additional tools sit comfortably within that application. You can easily record your microphone or camera and add to a presentation you may have already created. Another feature is “Inking”, the ability to write on slides and highlight areas with different colour inks. You can also add live web pages, YouTube videos (although this feature did not work in my test), questions and polls. Finally, the analytics will give you a very good insight into what areas of your presentation might need more clarification, as you can see if someone chooses to look at a slide a number of times. You can also see if slides were skipped or questions answered incorrectly.
Below is a nice post outlining some ways to create inclusive content using Office Mix and Sway, Microsoft’s other new(ish) web-based presentation platform. Below that is a much more detailed introduction to Office Mix using… yes, you guessed it, Office Mix.
There is of course some crossover between the different AT highlights of 2016 I have included here. An overall theme running through all the highlights this year is the mainstreaming of AT. Apple, Google and Microsoft have all made significant progress in the areas previously mentioned: natural language understanding and smart homes. This has led to easier access to computing devices and, through them, the ability to automate and remotely control devices and services that assist us with daily living tasks around the house. However, these developments are aimed at the mainstream market, with advantages to AT users being a welcome additional benefit. What I want to look at here are the features they are including in their mainstream products specifically aimed at people with disabilities, with the goal of making their products more inclusive. Apple have always been strong in this area and have led the way now for the last five years. 2016 saw them continue this fine work with new features such as Dwell within macOS and Touch Accommodations in iOS 10, as well as many other refinements of already existing features. Along with Siri, Apple have also brought Switch Control to Apple TV, either using a dedicated Bluetooth switch or through a connected iOS device, in a method they are calling Platform Switching. Platform Switching, which also came out this year with iOS 10, “allows you to use a single device to operate any other devices you have synced with your iCloud account. So you can control your Mac directly from your iPhone or iPad, without having to set up your switches on each new device” (the devices need to be on the same WiFi network). The video below from Apple really encapsulates how far they have come in this area and how important this approach is.
Not to be outdone, Microsoft bookended 2016 with some great features in the area of literacy support, an area they had perhaps neglected for a while. They more than made up for this last January with the announcement of Learning Tools for OneNote. I’m not going to go into the details of what Learning Tools offers as I have covered it in a previous post. All I’ll say is that it is free, it works with OneNote (also free, and a great note taking and organisation support in its own right) and is potentially all many students would need by way of literacy support (obviously some students may need additional supports). Then in the fourth quarter of the year they updated their OCR app Office Lens for iOS to provide Immersive Reader (text-to-speech) directly within the app.
Finally, Google, who would probably have the weakest record of the big three in terms of providing inbuilt accessibility features (to be fair, they always followed a different approach which proved to be equally effective), really hit a home run with their Voice Access solution, which was made available for beta testing this year. Again, I have discussed this in a previous post here, where you can read about it in more detail. Having tested it I can confirm that it gives complete voice access to all Android device features as well as any third party apps I tested. Using a combination of direct voice commands (Open Gmail, Swipe left, Go Home etc.) and a system of numbering buttons and links, even obscure apps can be operated. The idea of using numbers for navigation, while not new, is extremely appropriate in this case: numbers are easily recognised regardless of voice quality or regional accent. Providing alternative access and supports to mainstream operating systems is the cornerstone of recent advances in AT. As the previous video from Apple showed, access to smartphones or computers gives access to a vast range of services and activities. For example, inbuilt accessibility features like Apple’s Switch Control or Google’s Voice Access open up a range of mainstream Smart Home and security devices and services to people with alternative access needs, where before they would have had to spend a lot more for a specialist solution that would probably have been inferior.
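For the curious, the numbering scheme is simple enough to sketch. The Python below is purely our own illustration of the principle, not Google’s implementation: every actionable control on screen gets a number, and an utterance is checked first against the direct commands, then against the numbers.

```python
# Illustration only - not Google's implementation. A 'number the controls'
# scheme: assign every actionable on-screen element a number, so a spoken
# number works regardless of accent or how obscure the button's label is.

def number_controls(controls):
    """Assign 1-based spoken numbers to the controls currently on screen."""
    return {str(i): c for i, c in enumerate(controls, start=1)}

def handle_utterance(utterance, numbered, commands):
    """Direct commands win; otherwise a bare number picks a control."""
    utterance = utterance.strip().lower()
    if utterance in commands:
        return commands[utterance]
    if utterance in numbered:
        return f"tap {numbered[utterance]}"
    return "not recognised"

screen = number_controls(["Compose", "Search", "Settings"])
commands = {"go home": "press home", "swipe left": "scroll left"}
print(handle_utterance("2", screen, commands))        # tap Search
print(handle_utterance("go home", screen, commands))  # press home
```

The appeal is clear: the recogniser only ever has to distinguish a handful of digits, which is far more robust than matching arbitrary button labels against varied voices.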
Speech Recognition has been around a long time by technology standards; however, up until about 2010, most of that time was spent languishing in Gartner’s wonderfully named “Trough of Disillusionment”. This was partly because the technology hadn’t matured enough, and people were frustrated and disappointed when it didn’t live up to expectations, a common phenomenon identified by the previously alluded to Hype Cycle. There are a couple of reasons why Speech Recognition took so long to mature. It’s a notoriously difficult technical feat that requires sophisticated AI and significant processing power to achieve consistently accurate results. The advances in processing power were easy enough to predict thanks to Moore’s Law. Progress in the area of AI was a different story entirely. Speech Recognition relies first on pattern recognition, but that only takes it so far. To improve the accuracy of speech recognition, improvements in the broader area of natural language processing were needed. Thanks to the availability of massive amounts of data via the World Wide Web, much of it coming from services like YouTube, we have seen significant advances in recent years. However, there is also a human aspect to the slow uptake of speech-driven user interfaces: people just weren’t ready to talk to computers. 2016 is the year that started to change.
Siri (Apple), who was first on the scene and is now five years old and getting smarter all the time, came to macOS and Apple TV this year. Cortana (Microsoft) started on Windows Phone, moved to the desktop with Windows 10, made her way onto Xbox One, Android and iOS, and is soon to be embodied in all manner of devices according to reports. Unlike Siri, Cortana is a much more sociable personal digital assistant, willing to work and play with anyone. By this I mean Microsoft have made it much easier for Cortana to interact with other apps and services, and they will be launching the Cortana Skills Kit early next year. As we’ve seen in the past, it’s this kind of openness and interoperability that takes technologies in directions not envisaged, and often leads to adaption and adoption as personal AT. If there was a personal digital assistant of the year award, however, Amazon Echo and Alexa would get it for 2016. Like Microsoft, Amazon have made their Alexa service easy for developers to interact with, and many manufacturers of Smart Home products have jumped at the opportunity. It is the glowing reviews from all quarters, however, that make the Amazon Echo stand out (from a self-proclaimed New Yorker Luddite to the geeks at CNET). Last but not least we have Google. What Google’s personal digital assistant lacks in personality (no name?) it makes up for with stunning natural language capabilities and an eerie knack of knowing what you want before you do. Called Google Now on smartphones (or just the Google App? I’m confused!), similar functionality, without some of the context relevance, is available through Voice Search in Chrome. They also offer voice to text in Google Docs, which this year has been much improved with the addition of a range of editing commands. There is also the new Voice Access feature for Android, currently in beta testing, but more on that later.
In the hotly contested area of the Smart Home, Google also have a direct competitor to Amazon’s Echo in their Google Home smart speaker. Google are a strong player in this area; my only difficulty (and it is an actual difficulty) is saying “OK Google”: rather than rolling off the tip of my tongue it kind of catches at the back, requiring me to use muscles normally reserved for sucking Polo mints. Even though more often than not I mangle this trigger phrase, it always works, and that’s impressive. So who is missing? There is one organisation conspicuous by their absence, with the resources in terms of money, user data and technology, who are already positioned in that “personal” space. Facebook would rival Google in the amount of data they have at their disposal from a decade of video, audio and text, the raw materials for natural language processing. If we add to this what Facebook knows about each of its users – what they like, their family, friends and relationships (all the things they like), calendar, history, interests – you get more than a Personal Digital Assistant; maybe Omnipersonal Digital Assistant would be more accurate. The video below, which was only released today (21/12/16), is of course meant as a joke (there are any number of things I could add here but I’ll leave it to the Guardian). All things considered, however, it’s only a matter of time before we see something coming out of Facebook in this area, and it will probably take things to the next level (just don’t expect it to be funny).
What does this all mean for AT? At the most basic level, Speech Recognition provides an alternative to the keyboard/mouse/touchscreen method of accessing a computer or mobile device, and the more robust and reliable it is, the more efficiently it can be used. It is now a viable alternative, and this will make a massive difference to the section of our community who can use their voice but, for any number of reasons, cannot use other access methods. Language translation can be accurately automated, even in real time, like the translation feature Skype launched this year. At the very least this kind of technology could provide real-time subtitling, but the potential is even greater. It’s not just voice access that is benefiting from these advances, however; Personal Digital Assistants can be interacted with using text also. Speech Recognition is only a part of the broader area of Natural Language Processing. Advances in this area lead directly to fewer clicks and less menu navigation. Microsoft have used this to great effect in their new “Tell me what you want to do” feature in their Office range. Rather than looking through help files or searching through menus, you just type what tool you are looking for, in your own words, and it serves it right up!
Natural Language Processing will also provide faster and more accurate results for web searches, because there is a better understanding of actual content rather than a reliance on keywords. In a similar way, we are seeing this technology working to provide increased literacy supports, as the computer will be able to better understand what you mean from what you type. Large blocks of text can be summarised, and alternative phrasing can be suggested to increase text clarity. Again, the new Editor feature in Microsoft Word is made possible by this level of natural language understanding.
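As a taste of how text summarisation can work at its very simplest, here is a crude frequency-based sketch in Python. Word’s Editor uses far more sophisticated natural language understanding than this; the sketch only shows the basic principle of scoring sentences by the words they contain.

```python
# A crude glimpse of the summarisation idea: score each sentence by how
# many of the text's most frequent words it contains, keep the top scorers.
# Real tools like Word's Editor are far more sophisticated than this.

import re
from collections import Counter

def summarise(text, keep=1):
    """Return the `keep` highest-scoring sentences, in original order."""
    sentences = [s.strip()
                 for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    scored = sorted(
        sentences,
        key=lambda s: sum(freq[w] for w in re.findall(r"[a-z']+", s.lower())),
        reverse=True,
    )
    top = scored[:keep]
    return " ".join(s for s in sentences if s in top)

text = ("Assistive technology helps people. "
        "Technology is everywhere. Cats sleep.")
print(summarise(text))  # Assistive technology helps people.
```

Because "technology" is the most frequent word, the sentences that use it score highest, which is a (very rough) proxy for being central to the text.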
As we approach the end of 2016 it’s an appropriate time to look back and take stock of the year from an AT perspective. A lot happened in 2016, not all of it good. Socially, humanity seems to have regressed over the past year. Maybe this short-term, inward-looking protectionist sentiment has been brewing longer, but 2016 brought the opportunity to express it politically; you know the rest. While society steps and looks back, technology continues to leap and bound forward, and 2016 has seen massive progress in many areas, particularly those associated with Artificial Intelligence (AI) and Smart Homes. This is the first in a series of posts examining some technology trends of 2016 and a look at how they affect the field of Assistive Technology. The links will become active as the posts are added. If I’m missing something please add it to the comments section.
So although 2016 is unlikely to be looked on kindly by future historians (you know why), it has been a great year for Assistive Technology, perhaps one of promise rather than realisation however. One major technology trend of 2016 missing from this series of posts is Virtual (or Augmented) Reality. While VR was everywhere this year, with products coming from Sony, Samsung, Oculus and Microsoft, its usefulness beyond gaming is only beginning to be explored (particularly within education).
So what are the goals for next year? Harnessing some of these innovations in a way that makes them accessible and usable by people with disabilities at an affordable price. If in 2017 we can start putting some of this tech into the hands of those who stand to benefit most from its use, then next year will be even better.