Mobile Device Accessibility: iOS and the Android Accessibility Suite

One aspect of modern technological life that might help us keep some faith in humanity is the range of comprehensive assistive technologies that are built into, or free to download for, mobile computing devices. Accessibility features, as they are loosely called, are tools designed to support non-standard users of the technology. If you can’t see the screen very well you can magnify text and icons (1) or use high contrast (2). If you can’t see the screen at all you can have the content read back to you using a screen reader (3). There are options to support touch input (4, 5) and options to use devices hands free (6). Finally, there are also supports for deaf and hard of hearing (HoH) people, like the ability to switch to mono audio, or visual and haptic alternatives to audio-based information.

With their mobile operating system iOS, Apple do accessibility REALLY well, and this is reflected in the numbers. In the 2018 WebAIM Survey of Low Vision users there were over three times as many iOS users as Android users. That is almost the exact reverse of the general population (roughly 3 to 1 in favour of Android). For those with motor difficulties the gap was less pronounced, but iOS was still favoured.

So what are Apple doing right? Well obviously, first and foremost, the credit has to go to their developers and designers for producing such innovative and well implemented tools. But Google and other Android developers are also producing some great AT, often highlighting some noticeable gaps in iOS accessibility. Voice Access, EVA Facial Mouse and basic pointing device support are some examples, although these are gaps that will soon be filled if reports of features coming to iOS 13 are to be believed.

Rather than being just about the tools, it is as much, if not more, about awareness of those tools: where to find them and how they work. On every Apple mobile device you go to Settings > General > Accessibility and you will find Vision (1, 2, 3), Interaction (4, 5, 6) and Hearing settings. I’m deliberately not naming these settings here so that you can play a little game with yourself and see if you know what they are. I suspect most readers of this blog will get 6 from 6, which should help make my point. You can check your answers at the bottom of the post 🙂

This was always the problem with Android devices. Where Apple iOS accessibility is like a tool belt, Android accessibility is like a big bag. There is probably more in there but you have to find it first. This isn’t Google’s fault; they make great accessibility features. It’s more a result of the open nature of Android. Apple make their own hardware and iOS is designed specifically for that hardware. It’s much more locked down. Android is an open operating system, and as such it depends on the hardware manufacturer how accessibility is implemented. This has been slowly improving in recent years, but Google’s move to bundle all their accessibility features into the Android Accessibility Suite last year meant a huge leap forward in Android accessibility.

What’s in Android Accessibility Suite?

Accessibility Menu

The Android Accessibility Suite menu: an onscreen menu with large colourful buttons for features like power, lock screen and volume.
The figure highlighted in the bottom corner launches whatever Accessibility Suite tool you have active. If you have more than one, a long press will allow you to switch between tools.

Use this large on-screen menu to control gestures, hardware buttons, navigation, and more. It’s a similar idea to Assistive Touch on iOS. If you are a Samsung Android user it is similar (but not as good, in my opinion) to the Assistant Menu already built in.

Select to Speak

The Select to Speak tool active on a webpage: a large red button to stop speech, an arrow at left to extend the menu, and a pause button.

Select something on your screen or point your camera at an image to hear text spoken. This is a great feature for people with low vision or a literacy difficulty. It will read the text on screen when required, without being always on like a screen reader. A similar feature was built into Samsung devices before inexplicably disappearing with the last Android update. The “point your camera at an image to hear text spoken” claim had me intrigued. Optical Character Recognition like that found in Office Lens or SeeingAI, built into the regular camera, could be extremely useful. Unfortunately I have been unable to get this feature to work on my Samsung Galaxy A8. Even when selecting a headline in a newspaper I’m told “no text found at that location”.

Switch Access

A cartoon hand activating a Blue2 switch; an Android phone home screen with the message icon highlighted.

Interact with your Android device using one or more switches or a keyboard instead of the touch screen. Switch Access on Android has always been the poor cousin to Switch Control on iOS but is improving all the time.

TalkBack Screen Reader

Get spoken, audible, and vibration feedback as you use your device. Google’s mobile screen reader has been around for a while and, like Switch Access, is apparently improving, but I’ve yet to meet anybody who actually uses it full time.

So to summarise: as well as adding features that may have been missing on your particular “flavour” of Android, this suite standardises the accessibility experience and makes it more visible. Another exciting aspect of these features being bundled in this way is their availability for media boxes. Android is a hugely popular OS for TV and entertainment, but what is true of mobile device manufacturers is doubly so of Android box manufacturers, where it is still very much the Wild West. If you are in the market for an Android box and accessibility is important, make sure it’s running Android version 6 or later so you can install this suite and take advantage of these features.

Could you name the Apple iOS features?

  1. Zoom
  2. Display Accommodations or Increase Contrast   
  3. VoiceOver
  4. Assistive Touch
  5. Touch Accommodations
  6. Switch Control

Alex Lucas – Switch Access, Ableton Live and Inclusive Music

Since the year 2000, Enable Ireland’s Assistive Technology (AT) training service has run a Foundations in AT (5 ECTS) course certified by the Technological University Dublin (TUD). Those of you reading this post will most likely be familiar with AT and what a broad and rapidly evolving area it is. While the overall direction AT has taken over the last decade is positive and exciting, it has also become a more challenging area to work in. As a result, the importance and value of the Foundations in AT course has also increased, and this is both reflected in, and a direct result of, the calibre of course participants we’ve had in recent years. The wealth of experience brought by participants each year helps the course evolve and develop, filling in gaps and offering new directions for technology to support people in areas beyond primary needs such as communication, access and daily living. Last month we began a new effort on our part to share with a wider audience some of the excellent work produced by Foundations in AT course participants, with Shaun Neary’s post Accessible Photography – Photo Editing with Adobe Lightroom & the Grid 3. This month we will look at another area of creativity: music.

Alex Lucas enrolled in the 2018 Foundations in AT course. As soon as we learned about his background and experience, we knew that his involvement in the course was an opportunity for us to learn more about accessible music technology and practice. Alex is an academic (a PhD research student at Queen’s University Belfast), a maker, a musician, a developer and a product designer. Before returning to studies, he had gained 10 years’ experience working in mainstream music technology with big-name companies like Focusrite and Novation. At Queen’s he is currently researching “Supporting the Sustained Use of Bespoke Assistive Music Technology” and is part of the research group Performance Without Barriers. He also works with Drake Music Northern Ireland.

We could be accused of having underutilised Alex, but our suggestion for his project was to produce a resource that would act as an introduction for people new to the area of accessible music technology. Alex chose to focus on the mainstream Digital Audio Workstation (DAW) application Ableton Live and switch input. As well as the project document (download link below), he released five really excellent tutorial videos on YouTube, the first of which is embedded here.

Alex kindly agreed to contribute to this post, so we asked him why he chose to focus on Ableton, to tell us a bit more about his work in inclusive music, and a little about the research he is currently undertaking at Queen’s. Over to you, Alex…

***

Why Ableton? 

There are many software applications available for computer-based music production. Ableton Live is arguably one of the most popular DAWs. When first released in 2001, Ableton Live set itself apart from other DAWs through a unique feature called Session View.

Session View is a mode of operation which can be thought of as a musical sketchbook providing composers with an intuitive way to create loop-based music; a feature which is particularly useful when creating electronic music. When combined with Ableton Live’s built-in virtual musical instruments and devices for creating and modifying musical ideas, we find ourselves with a rich toolset for composing music in inclusive settings.

How does this work with groups?

Music connects people; we see this often when conducting group-based inclusive music workshops, making work of this kind essential to Drake Music NI. There could be up to twelve participants of mixed abilities in a typical Drake workshop. As Access Music Tutors, we approach group workshops by first speaking to each participant in turn to identify their creative goals. One individual may have an interest in playing distorted synthesiser bass sounds, while another may prefer the softer sound of a real instrument such as a piano. Knowledge of an individual’s creative goals and their access requirements is used to select an appropriate device for the participant to use to control a virtual instrument within Ableton Live.

In addition to the access switches described in the videos mentioned above, Drake Music also uses commercially available assistive music technologies such as Soundbeam and Skoog, and mainstream MIDI controllers such as the Novation Launchpad. It’s possible to connect several of these devices to a single computer running Live.

Together, the group makes core musical decisions: genre, tempo, musical key. The workshop then proceeds in one of two ways: either we jam together, or we record each participant in turn, building up a composition gradually using overdubbing techniques.

OHMI – One-Handed Musical Instrument Trust

There are a handful of other organisations within the UK working towards providing inclusion in music. One notable organisation is the One-Handed Musical Instrument Trust (https://www.ohmi.org.uk/). Many traditional musical instruments are designed in such a way that they place a fundamental requirement on the musician: they must have two fully functional hands. This assumption results in the exclusion of some individuals from learning a traditional musical instrument. Furthermore, in some cases, accomplished musicians are not able to return to their instrument after losing the function of a hand due to illness or an accident. OHMI aims to address this shortcoming by running an annual competition which invites instrument designers to adapt traditional musical instruments to be played by one hand only. Many fantastic designs are submitted to OHMI each year. I’m particularly impressed by David Nabb’s Toggle-Key Saxophone (https://www.unk.edu/academics/music/_files/toggle-key-system.pdf), which retains all of the functionality of a standard saxophone while being playable with one hand.

Research

Whilst OHMI primarily focuses on the adaptation of traditional acoustic instruments for inclusion and accessibility, my research centres on the challenges faced by disabled musicians in the long-term use of custom-made digital musical instruments.

In partnership with a disabled musician named Eoin at Drake Music NI, we’ve been designing a digital musical instrument tailored to Eoin’s unique abilities. Eoin has a strong desire to play electric guitar, but as he cannot hold a guitar, due to its physical characteristics, he has been unable to do so up until this point.

Using a motion sensor and an access switch, coupled with a Raspberry Pi embedded computer, Eoin is now able to play rudimentary guitar sounds using the movements of his right arm. We’ve tested several prototypes and are now in the process of assembling the instrument for Eoin to use both during Drake music workshops and at home.
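To make the general idea concrete, here is a minimal sketch in Python of how a single access switch on a Raspberry Pi might trigger a guitar sample. To be clear, this is an illustration of the approach rather than the instrument built for Eoin: the gpiozero and pygame libraries, the pin number and the sample file are all assumptions.

```python
# Illustrative sketch only - not the actual instrument built for Eoin.
# An access switch wired to a Raspberry Pi GPIO pin triggers a
# pre-recorded guitar sample, standing in for the motion-sensor and
# switch inputs described above.
from signal import pause

import pygame
from gpiozero import Button

pygame.mixer.init()
# Hypothetical sample file of a strummed chord
strum = pygame.mixer.Sound("samples/open_e_chord.wav")

switch = Button(17)                         # switch wired to GPIO pin 17 (assumed)
switch.when_pressed = lambda: strum.play()  # each press "strums" the chord

pause()  # keep the program alive, waiting for switch presses
```

A real build would map the motion sensor to pitch or strumming gestures, but the core pattern (input event in, sound out) is the same.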

As a musician, Eoin is the primary user of the device; however, we’ve also been considering Eoin’s primary carer, his father Peter, as a secondary user. We’ve designed a high-level interface for Peter to use, hopefully allowing him to easily set up the device for Eoin to use at home. We’re particularly interested in the longevity of the device, and whether or not it’s viable for Eoin and Peter to use independently. Obsolescence can be a problem for assistive technology in general. Our current assumption is that obsolescence may also be an issue with custom-made accessible digital musical instruments, but we hope, through this research, to discover useful mitigation strategies.

***

We just want to thank Alex again for making this tutorial and contributing to this post. The full accompanying document can be viewed/downloaded here. It really is a valuable resource for an area with such potential that is poorly supported at present (certainly here in Ireland).

Keep an eye on Alex’s YouTube Channel or follow him on Twitter @alexaudio for more quality Accessible Music Technology information.


Learning Tools – Using technology to support learning and facilitate collaboration in education

Yesterday Microsoft Ireland hosted a half-day workshop for second-level students using technology for additional support within education. This workshop came about thanks to Tara O’Shea, Community Affairs Manager at Microsoft, and Stephen Howell, Academic Program Manager. Tara has been a huge supporter of Enable Ireland Assistive Technology Service over the last decade and has been the driving force behind many of the successful projects we have collaborated on. Stephen will be a very familiar face to anyone involved in the space where technology and education meet, not just in Ireland but internationally.

The goal of the workshop was to introduce some of the collaboration tools available to students using Office 365, additional supports available to students with maths or language difficulties, and alternative ways to produce and present content. Obviously, as Microsoft was hosting, there was an emphasis on their tools; nevertheless, Stephen was quite open about how similar features are available on other platforms. We (Enable Ireland AT) pride ourselves on providing independent recommendations; the best solution for the user is the solution they use best. The practice of schools forcing students down any particular route (Microsoft, Google or Apple) is restrictive and can cause difficulties if there are specific access or support needs. Microsoft and Google, though, offer more browser-based tools, which means users are free to use any device. I should also acknowledge that Microsoft have really upped their game in the areas of education and accessibility over the last few years.

Collaboration

Fostering collaboration is a cornerstone of modern education and promotes a vital real-world skill (teamwork) that will serve students throughout their lives. The screenshot below, from Facebook user Stephanie McKellop, illustrates a way that tools we may have considered more for remote collaboration can be used within a classroom or lecture hall.

Facebook screenshot from user Stephanie McKellop:
“I learned today that a group of students used a Google Doc to take lecture notes -- they all took notes simultaneously in a collective file. They would mark places they were confused or couldn’t follow the lecturer; other students would see and explain. At the end of the semester they have a massive document of notes, questions and explanations from peers.”

Although this example uses Google Docs, Microsoft OneNote could also be used in this way. In fact, there would be a number of advantages to using OneNote, such as the ability to incorporate ink annotations and drawings, audio and video, and whiteboard or printed text captured using Office Lens.

When it comes to collaboration, Microsoft Teams is at the centre. Teams is a unified communications platform; basically it’s like WhatsApp or Facebook Messenger but with tonnes of additional features. Through Teams you can not only instant message, make video/audio calls or share desktops, but you can also work on shared documents, whiteboards or mind maps. There are also plugins for many third-party apps and services, so if you already use a collaboration app or service there is probably an integration available. Stephen demonstrated how a tool like Teams could be used in a classroom session by setting up a class team and getting everyone to work on a short Sway presentation (we mentioned Sway in a previous post a couple of years ago; I don’t understand why everyone isn’t using it by now). Once everyone had completed their presentation, they posted a link to the class message stream and Stephen showed it on the large screen. Okay, this exercise could have been done without Teams, but using the service made it so much easier and, more importantly, everything was recorded for students to revisit in their own time.

Support

We have looked at Microsoft Learning Tools numerous times on this blog over the last few years (read this post if you want to know more about Learning Tools). Thankfully, since its introduction as a plugin for OneNote in 2016 it has gone from strength to strength. Features like Immersive Reader are now standalone apps and have also found their way into many other Office 365 apps like Word and Outlook. Some other apps Stephen introduced are listed below with a brief description. They are all free, so we encourage you to download and try them yourselves.

Microsoft Math: If you are familiar with the language-learning app Duolingo, this app takes a similar approach to teaching mathematics: short challenges with rewards and feedback. Gamifying maths.

Snip & Sketch: Lets you quickly and easily capture content from the web (pictures, text etc.), draw on and annotate it, and share it with other apps.

Microsoft Whiteboard: Provides a blank canvas where you can collaborate with others and share with the class.

Microsoft Translator: Useful for translations or transcriptions. Stephen also showed how it can be a great way to practise pronunciation when learning to speak a foreign language.

Factsheets on Dyslexia at Second Level

This week’s post was contributed by Wyn McCormack, co-author of the Factsheets on Dyslexia at Second Level. Wyn has been involved with the Dyslexia Association of Ireland for over 20 years and has designed and presented courses on dyslexia for parents, teachers and students. She has written extensively on the topic, including Lost for Words, a Practical Guide to Dyslexia at Second Level (3rd Ed. 2006) and Dyslexia, An Irish Perspective (3rd Ed. 2011), as well as being co-author of the Factsheets on Dyslexia at Second Level in 2013 (updated 2014, 2015, 2016). She has been a presenter for SESS, the Special Education Support Service. She is a former Guidance Counsellor and Special Educational Needs teacher. Her three sons have dyslexia.

*  *  *  *

In 2014 the Dyslexia Association of Ireland asked myself and Mary Ball, an educational psychologist, to write the Factsheets on Dyslexia at Second Level to celebrate their 40th anniversary. The key objective of the Factsheets was to give teachers clear and concise information on dyslexia, how it affects students, and how schools and teachers can help. With dyslexia affecting approximately one in ten people, there are many thousands of students with dyslexia in schools.

There are 18 Factsheets.  The majority were intended for teachers and schools and cover topics such as teaching literacy, numeracy, foreign languages, Maths and Assistive Technology.  Factsheet 16 is for parents on how they can help and Factsheet 17 is for students on study strategies.

I update the Factsheets annually in August and they are available for free download at www.dyslexiacourses.ie. After putting the work into writing them, I really wanted to get them widely used. In 2014 I had taken early retirement as a Guidance Counsellor and Special Education Teacher, so I set up Dyslexia Courses Ireland to offer schools, parents and students courses on dyslexia-friendly strategies and AT resources. I was then joined by Deirdre McElroy, a school colleague who had worked as a NEPS educational psychologist. The courses have been really well received. Since 2014 we have had just under 3,000 teachers, 540 parents and 480 students attend our courses. We run courses at central venues for teachers and also give presentations to the teaching staff within schools. At this stage we have been to schools in every county (outside of N. Ireland). In 2018, in the last week of August (the first week of the school year), we presented courses in 14 schools.

The course for students is a study skills workshop.  Students with dyslexia may experience difficulties with organisation, reading, memory and learning, note-taking, writing and spelling.  They may find it hard to show what they know in exams due to misreading questions and poorly structured answers.  The workshop covers strategies that help the student to achieve and which also target their specific difficulties.

A key element of the teacher courses is that, while we share ideas with the teachers, we ask them to recommend websites, apps and strategies that they are using in the classroom. As a result we have an extensive list of recommended websites, which the teachers have generously allowed us to share. We do this by sending out a newsletter twice a year to all schools, as well as to those who attended our courses. The recommendations have grown so much that, while we originally had one handout called Useful Websites/Apps covering keynotes, subject-specific resources, study skills, exam preparation, assistive technology and online tutorials, we have had to split it into one for teachers and one for students. Both are available under Downloads on the website.

While my favourite websites vary over time, some really helpful ones are as follows:

  • alison.com for online tutorials in Project Maths at Junior and Leaving Cert.
  • sparknotes.com and, in particular, their short videos of Shakespearian plays and the No Fear guides, where the Shakespearian words are on one side of the page with a modern English translation on the other.
  • studystack.com, with flashcards and games for when key facts have to be learnt.

Just in the last month, I was told about www.canva.com and www.spark.adobe.com, which allow infographics to be created.

The reason I am so involved is that my three sons are dyslexic and I realised much more needed to be done at second level. As I have travelled with them on their journey through education, I also realised there was a reason why I could never tell left from right, and that I share some of the dyslexic traits myself. These experiences have helped me appreciate the difficulties which many students with dyslexia face in school.

I hope the Factsheets contribute to greater awareness of dyslexia at second level and of all the ways that teachers and schools can support these students.

Wyn McCormack

Voice Banking – ModelTalker

Voice banking involves recording a list of sentences into a computer. When enough recordings have been captured, software chops them up into individual sounds: phonetic units. A synthetic voice can then be built out of these phonetic units; this is called concatenative speech synthesis. The number of sentences or statements needed to build a good-quality English-language synthetic voice using this process varies, but it is somewhere between 600 and 3,500. This will take at least 8 hours of constant recording. Most people break it up over a few weeks, which is recommended, as voice quality will deteriorate over the course of a long session. So 20 minutes to half an hour in the morning (when most people’s voices are clearer) would be a good approach. The more recordings made, the better the quality of the resulting voice.
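For the technically curious, the core idea of concatenative synthesis can be sketched in a few lines of Python. This is a deliberately naive illustration with hypothetical file names; a real engine selects units by phonetic context and smooths the joins rather than simply butt-splicing them:

```python
import wave

def concatenate_units(unit_paths, out_path):
    """Naive concatenative synthesis: join pre-cut phonetic unit
    recordings into one utterance. All units must share the same
    sample rate, channels and sample width - one reason banking
    services insist on a consistent recording set-up."""
    params, frames = None, []
    for path in unit_paths:
        with wave.open(path, "rb") as w:
            if params is None:
                params = w.getparams()
            frames.append(w.readframes(w.getnframes()))
    with wave.open(out_path, "wb") as out:
        out.setparams(params)
        for chunk in frames:
            out.writeframes(chunk)

# Hypothetical unit files chopped from banked recordings:
concatenate_units(["units/h-e.wav", "units/e-l.wav", "units/l-oh.wav"],
                  "hello.wav")
```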

There are a number of services offering voice banking and we have listed some that we are aware of below. The technology used varies from service to service and this post isn’t intended to be a guide to which service may be appropriate to a particular user. Our advice would be to investigate all options before making a decision as this process will be a considerable investment of time and in some cases money.

A person might choose to bank their voice for a number of reasons. The most common would be a diagnosis of a progressive illness like Motor Neuron Disease (MND/ALS) or similar that will result in the loss of speech. A voice is a very personal thing, and being able to keep this aspect of individuality and identity can be important. The MND Association have detailed information on voice banking on their website here. People unable to speak from birth can also take advantage of this technology. The VocalID service (although expensive) seems to offer good options in this regard. A family member could donate their voice by going through the voice banking process (or an appropriate donated voice could be chosen). This synthetic voice could then be modified with filters modelled on the user’s own vocalisations. The result is a unique and personal voice with some of the regional qualities (accent, pronunciation) that reflect their background and heritage. Irish AAC users have historically had little choice when it came to selecting a voice, most grudgingly accepting the upper-class BBC-newsreader English voice that was ubiquitous in communication devices. In Ireland, where accents can vary significantly over such small geographical areas, how you speak is perhaps even more tied to your identity than in other countries. Hopefully in the near future we will be hearing AAC users communicating in Cork, Limerick and Dublin accents!

ModelTalker

For research purposes I used the ModelTalker service to create a synthetic voice. I wanted to see how well it dealt with the Irish accent. The ModelTalker service is run out of the Nemours Speech Research Laboratory (SRL) in the Nemours Center for Pediatric Auditory and Speech Sciences (CPASS) at the Alfred I. duPont Hospital for Children in Wilmington, Delaware. It is not a commercial service, costing only a nominal $100 to download your voice once banked. They offer an Online Recorder that works directly in the Chrome browser, or you can download and install their MTVR app if you are using Windows. The only investment you need to make to begin banking your voice is a decent-quality USB headset. I used the Andrea NC-181 (about €35).

For the best quality they recommend you record about 1600 sentences, but they can build a voice from 800. As this was just an experiment, I recorded the minimum 800. At the beginning of each session you go through a sound check. Consistency is an important factor contributing to the overall quality of the finished voice; this is why you need to keep using the same computer and microphone throughout the whole process, ideally in the same location. When you begin, you will hear the first statement read out, and you then record the statement yourself. A colour code gives you feedback on whether the recording was acceptable or not: red means it wasn’t good enough to use, so you should try again; yellow means okay, could be better; and green means perfect, move on. I found the Irish accent resulted in a lot of yellow. Don’t let this worry you too much.

A nice feature for Irish people who want to engage in this process is the ability to record custom sentences. They recommend that you at least record your own name. So many names and places in Ireland are anglicised versions of Irish that it would be worthwhile spending a bit of time on these custom sentences. “Siobhán is from Drogheda”, for example, would be incomprehensible using most Text to Speech. At the end of each session you upload your completed sentences, which are added to your inventory (if you are using the browser-based recorder they are added as you go). When you feel you have enough completed you can request your voice. When the voice is ready you need to audition it; this process allows you to fine-tune how it sounds. I made a screen recording of this process and I will add it to this post when I have edited it down to a manageable length.
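I don’t know exactly what ModelTalker’s red/yellow/green check measures, but a toy version of that kind of feedback (checking that a take is neither clipped nor too quiet, based on the sample amplitudes of a 16-bit mono WAV file) might look something like this. All thresholds here are invented for illustration:

```python
import wave
import numpy as np

def rate_take(path, too_quiet=500, hot=20000, clipped=32700):
    """Toy red/yellow/green feedback for a recorded sentence
    (illustrative only - not ModelTalker's actual algorithm).
    Assumes a 16-bit mono WAV file."""
    with wave.open(path, "rb") as w:
        samples = np.frombuffer(w.readframes(w.getnframes()),
                                dtype=np.int16)
    peak = int(np.abs(samples.astype(np.int32)).max())
    if peak >= clipped or peak < too_quiet:
        return "red"     # clipped or barely audible: try again
    if peak > hot:
        return "yellow"  # usable, but could be better
    return "green"       # in the target range: move on

print(rate_take("take_001.wav"))  # hypothetical take from a session
```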

Click play below to hear a sample of my synthesized voice. Yes, unfortunately I do kind of sound like that 🙂

Speech synthesis is an area of technology that is progressing rapidly thanks to the interest of big multinationals like Google (listen to their DeepMind-powered WaveNet voices here) and Adobe (which caused a stir, and even concern in some quarters, with Project VoCo in 2016). Looking at those two examples, it’s not hard to imagine that a high-quality unique voice could be built from a short sample in the near future.

Voice Banking Services

More about Voice Banking

Good resource on Voice Banking from the MND Association: https://www.mndassociation.org/forprofessionals/aac-for-mnd/voice-banking/

Recent article from the Guardian about Voice Banking, focusing on VocalID: https://www.theguardian.com/news/2018/jan/23/voice-replacement-technology-adaptive-alternative-communication-vocalid

Sensory Pod – Thinking outside the box

It appeared in the Cosmo room as if out of nowhere. Looking like a section of the international space station (one of the newer parts), it immediately grabs the attention of anybody who enters the room. Enable Ireland Children’s Services have been trialling a Sensory Pod over the last few months and both staff and clients are enthusiastic about it. I had a quick chat with Robert Byrne, creator of the Sensory Pod, while he was making some minor modifications based on feedback from our therapists.

View of the Sensory Pod from the side, with the sliding door open and blue LED details on the end.

In a previous job Robert Byrne spent a lot of time visiting manufacturers in Asia, which is when he first came across the idea of a capsule hotel. Due to population density, space in some Asian cities is at a premium. A capsule hotel consists of rooms that are only the size of the bed they contain. You have enough head room to sit up in bed but not enough to stand. In this corner of the world, with our open spaces and high ceilings, the thought of a night in such accommodation might cause us to break into a claustrophobic sweat; Robert, however, only saw an opportunity. Through a family member, Robert had experience of autism. A common symptom reported by people with this form of neurodiversity is oversensitivity to stimuli: light, noise, touch and smells. It is this aspect of autism that can actually prevent some people from engaging in everyday activities such as work and education. Robert noticed how successful the capsule hotel room was at shielding its occupant from such outside stimuli, and realised it could be a very cost-effective way to provide a safe and comfortable space for schools and colleges.

He took the basic design of the capsule room and customised it to suit this new function. Along with his design team, he reinforced the plastic shell and mounted the pod in a steel frame, with an extra bed that can be pulled out alongside the Pod. This provides a comfortable area for a parent or caregiver to relax when the Pod is occupied. They added LED mood lighting, temperature control, audio and a 22” learning screen. The design is modular, allowing customisation to best suit individual clients’ needs; full details are on the Sensory Pod site.

Inside the Sensory Pod, with green mood lighting and a control console and mirror at the centre of the frame.

It’s all very well having a good idea, but it takes a particular type of person to see it through to a marketable product. The Sensory Pod have built an extensive portfolio manufacturing and designing sleep systems and safe spaces for some of the largest corporate companies across Europe and further afield. They played a key role in Dublin City University’s successful Autism Friendly Campus initiative. Students can apply for a smart card and book a time slot; using their card they can open the pod door and escape the hustle and bustle of campus life for an hour.

Beyond Boundaries: How Interactive and Immersive Media are being used to support people with autism

This is the first in a two-part post about Enable Ireland’s Immersive Media Beyond Boundaries Garden project. If you want to try the apps for yourself you can get them from Google Play here, or there are links and some more information on our website here. This first post (Part 1) will give a brief background to Virtual Reality and related technologies and look at some of the research into its potential in the area of autism. Part 2 will outline how we put our Beyond Boundaries and SecretGarden apps together, and how we hope to incorporate this technology into future training and use it to support clients of our service.

Background: VR, AR, Mixed Media, 360 Video?

Virtual Reality, usually referred to by the acronym VR, is one of those technologies that is perpetually “the next big thing”. If you grew up watching movies like Tron and The Lawnmower Man (giving away my age here), VR is probably filed away in your brain somewhere between hoverboards (that actually hover) and teleportation. When the concept of a technology has been part of popular culture so far in advance of the capability of its realisation, it can hinder rather than promote its development. The trajectory VR’s evolution has taken, however, is much closer to that of a technology like speech recognition than hoverboards. VR, as with speech recognition, saw a great deal of progress in the latter part of the 1980s. With both technologies this progress, although important, was almost nullified by the hype surrounding, and subsequent commercialisation of, a technology that clearly wasn’t ready for public consumption. The reality of what VR could offer at the time led to people becoming disillusioned with the technology.

Before I talk about how VR is being used in the area of autism it’s worth clarifying what exactly is meant by some of the terms that are being used. As an emerging technology there is still quite a lot of confusion around what is meant by Virtual Reality and associated technologies; Augmented Reality (AR), Mixed Reality, Immersive Media and 360 Video. First let’s look at the video below which explains what VR and AR are and how they differ.

So what is Mixed Reality? Well in short Mixed Reality is a combination of VR and AR, in theory offering the best of both. Mixed Reality is also closely associated with Microsoft and other Windows aligned hardware manufacturers. Have a look at the short video below.

360-degree video and photography are less interactive than the technologies discussed above. The viewer is also restricted in terms of movement: they can only view the scene from the position where the camera was placed. Movement can be simulated to some extent, however, through the use of hotspots or menus, allowing the viewer to navigate between different scenes. More traditional film techniques, like fading between scenes, can also be used, as in the video below. 360-degree video can be either flat or in stereo. Stereo (3D) video is captured with a camera that has two lenses about the same distance apart as a person’s eyes. Each eye then gets a slightly different view, which our brain stitches together as a 3D image.

Finally Immersive Media is frequently used as an umbrella term for all the technologies discussed above but would more correctly refer to the less interactive 360 Video and Photography.

Immersive Media and Autism

Since the early days of the technology, people have proposed that VR may offer potential as a therapeutic or training tool within the area of neurodiversity. Dorothy Strickland of North Carolina State University’s short paper “Two Case Studies Using Virtual Reality As A Learning Tool For Autistic Children” (Journal of Autism and Developmental Disorders, Vol. 26, No. 6, 1996) is generally accepted as the first documented use of VR as a tool to increase the capabilities of someone with a disability. In this early study (which you can read at the link above), VR was used as a means to teach children how to safely cross the street. While VR technology itself has clearly moved on, for the reasons outlined above its use in this area had (until recently) not, and there is still a great deal in this paper that is relevant today, particularly regarding the children’s acceptance of the headset (which would have been chunkier and more uncomfortable than today’s) and their understanding of the 3D world presented by it.

Stepping forward almost a quarter of a century, we are riding the peak of the second wave of commercial VR. Thanks largely to developments driven by the rapid evolution of mobile devices in the early years of this decade, VR is becoming more accessible and less disappointing than it was the first time around. With the new generation of headsets and their ability to render sharp and detailed 3D environments has come a renewed interest in the use of VR in the area of autism. At a recent CTD Institute webinar on this very subject (Virtual Reality and Assistive Technology), Jaclyn Wickham (@JacWickham), a teacher turned technologist and founder of AcclimateVR, outlined some of the reasons why VR could be an appropriate technology to provide training for some people on the autistic spectrum. These included the ability to create a safe and controlled environment where tasks can be practised and repeated; the emphasis the VR experience puts on the visual and auditory senses (with the ability to configure and control both, presumably); the scope for creating an individualised experience; and the many non-verbal interaction possibilities. Anecdotally this all makes complete sense, but we are in the early days and much of the research is still being conducted.

A leading researcher in this area is Dr Nigel Newbutt (@Newbutt), who in June of this year published a short but enlightening update on his progress working with children from Mendip School in the UK. Having seen him present at the Doctrid V conference in 2017, I can assure you that progress in this area is being made, but even he acknowledges more work is needed: “Our research suggests that head-mounted displays might be a suitable space in which to develop specific interventions and opportunities; to practice some skills people with autism might struggle with in the real world. We’re seeking further funding to address this important question – one that has eluded this field to date.” (Full interview here: From apps to robots and VR: How technology is helping treat autism)

The commercial offerings in the area of VR and Autism (Floreo and AcclimateVR) tend to concentrate on providing a virtual space where basic life skills can be practiced. Another use is as a form of exposure therapy where immersive video and audio of environments and situations are used as a means of preparing someone for the real life experience. You can see examples of both in action at the links above.

Within Enable Ireland AT service, our own VR journey was spurred on by a visit and demonstration from James Corbett (@JamesCorbett) of SimVirtua. James could be considered a real pioneer in this area; in fact he had met with us almost 10 years ago to show us some work he was doing with non-immersive virtual environments (without headsets) in schools. SimVirtua had worked on a mindfulness VR app called MindMyths, and it was this idea of providing a retreat or sanctuary using immersive video that inspired us when it came to working on the Bloom Beyond Boundaries Garden project.

In the second part of this post (coming soon) I’ll give some background to what we hoped to achieve with the Beyond Boundaries garden project and some technical information on how we put it together.

Tobii buys SmartBox – What might this mean for computer access and AAC?

Big news (in the AT world anyway) may have arrived in your mailbox early last week: it was announced that leading AAC and computer access manufacturer Tobii had purchased SmartBox AT (Sensory Software), developers of The Grid 3 and Look2Learn. As well as producing these very popular software titles, SmartBox were also a leading supplier of a range of AAC and computer access hardware, including their own GridPad and PowerPad ranges. Basically (in this part of the world at least) they were the two big guns in this area of AT, between them accounting for maybe 90% of the market. An analogy using soft drink companies would be that this is like Coca-Cola buying Pepsi.

Before examining what this takeover (or amalgamation?) means for their customers going forward, it is worth looking back at what each company has historically done well. This way we can hopefully sketch a more optimistic future for AT users than the one suggested by what might be considered a potential monopoly.

Sensory Software began life in 2000 in the spare bedroom of founder Paul Hawes. Paul had previously worked for AbilityNet and had 13 years’ experience working in the area of AT. Early software like GridKeys and The Grid had been very well received and the company continued to grow. In 2006 they set up Smartbox to concentrate on complete AAC systems, while sister company Sensory Software concentrated on developing software. In 2015 both arms of the company joined back together under the SmartBox label. By this time their main product, the Grid 3, had established itself as a firm favourite: with Speech and Language Therapists (SLTs) for the wide range of communication systems it supported, and with Occupational Therapists and AT professionals for its versatility in providing alternative input options for Windows and other software. Many companies would have been satisfied with providing the best product on the market, but there were a couple of other areas where SmartBox also excelled. They may not have been the first AT software developers to harness the potential resources of their end users (they may also have been; I would need to research that further), but they were certainly the most successful. They succeeded in creating a strong community around the Grid 2 and 3, with a significant proportion of the online grids available to download being user generated. Their training and support were also second to none. Regular, high-quality training events were offered throughout Ireland and the UK, and whether by email, phone or the chat feature on their website, their support was always top quality too. Their staff clearly knew their product inside out, responses were timely and they were always a pleasure to deal with.

Tobii have been around since 2001. The Swedish firm actually started with eyegaze: three entrepreneurs – John Elvesjö, Mårten Skogö and Henrik Eskilsson – recognised the potential of eye tracking as an input method for people with disabilities. In 2005 they released the MyTobii P10, the world’s first computer with built-in eye tracking (and I’ve no doubt there are still a few P10 devices in use). What stood out about the P10 was the build quality of the hardware; it was built like a tank. While Tobii could fairly be criticised for under-specifying their all-in-one devices in terms of processor and memory, the build quality of their hardware is always top class. Over the years Tobii have grown considerably, acquiring Viking Software AS (2007), Assistive Technology Inc. (2008) and DynaVox Systems LLC (2014). They have grown into a global brand with offices around the world. As mentioned above, Tobii’s main strength is that they make good hardware. In my opinion they make the best eye trackers and have consistently done so for the last 10 years. Their AAC software has also come on considerably since the DynaVox acquisition. While Communicator always seemed a pale imitation of the Grid (apologies if I’m being unfair, but this is certainly true in terms of its versatility and ease of use for computer access), it has steadily been improving. Their newer Snap + Core First AAC software has been a huge success, and for users just looking for a communication solution it would be an attractive option over the more expensive (although much fuller-featured) Grid 3. Alongside Snap + Core they have also brought out a “Pathways” companion app, designed to guide parents, caregivers and communication partners in best practices for engaging Snap + Core First users. It supports the achievement of communication goals through video examples, lesson plans, an interactive goals grid for tracking progress, and a suite of supporting digital and printable materials: a really useful resource which will help to empower parents and prove invaluable to those not lucky enough to have regular input from an SLT.

To sum things up: we had two great companies, both with outstanding products. I have recommended the combination of the Grid software and a Tobii eye tracker more times than I can remember. The hope is that Tobii can keep the Grid on track and incorporate the outstanding support and communication that were always an integral part of SmartBox’s operation. With the addition of their hardware expertise and recent research-driven progress in the area of AAC, there should be a lot to look forward to in the future.

If you are a Grid user and have any questions or concerns about this news then, true to form, the communication lines are open. There is some information at this link, and at the bottom of the page you can submit your question.

‘Eye-Touch’ – an eye-controlled musical instrument

Last week we were visited in Enable Ireland, Sandymount, by two of the most experienced practitioners working in the area of assistive music technology. Dr Tim Anderson (http://www.inclusivemusic.org.uk/) and Elin Skogdal (SKUG) dropped by to talk about the new eyegaze music software they have been developing and to share some tips with the musicians from Enable Ireland Adult Services. Tim Anderson has been developing accessible music systems for the last 25 years. E-Scape, which he developed, is the only MIDI composition and performance software designed from the ground up for users of alternative input methods (switch, joystick and now eyegaze). Tim also works as an accessible music consultant for schools and councils. Elin Skogdal is a musician and educator based at the SKUG Centre in Tromsø, Northern Norway. She has been using assistive music technology in music education since 2001 and was one of those responsible for establishing the SKUG Centre. SKUG stands for “Performing Music Together Without Borders”, and the aim of the Centre is to provide opportunities for people who can’t use conventional instruments to play and learn music. SKUG is part of the mainstream art school of Tromsø (Tromsø Kulturskole), which provides opportunities for SKUG students to collaborate with other music and dance students and teachers. SKUG have students at all levels and ages, from young children to university students. If you would like to know more about Elin’s work at SKUG, click here to read a blog post from Apollo Ensemble.

Following the visit and workshop, they sent us some more detailed information about the exciting new eyegaze music software they are currently developing, Eye-Touch. We have included this in the paragraphs below. If you are interested in getting involved in their very user-led development process, you can contact us here (comments below) and we will put you in touch with Tim and Elin.

‘Eye-touch’ (Funded by ‘NAV Hjelpemidler og tilrettelegging’ in 2017, and Stiftelsen Sophie’s Minde in 2018) is a software instrument being developed by the SKUG centre (Part of ‘Kulturskolen i Tromsø’), in collaboration with Dr. Tim Anderson, which enables people to learn and play music using only their eyes. It includes a built-in library of songs called ‘Play-screens’, with graphical buttons which play when you activate them.
Buttons are laid out on screen to suit the song and the player’s abilities, and can be of any size and colour, or show a picture. When you look at a button (using an eye-gaze tracking system such as Tobii or Rolltalk) it plays its musical content. You can also play buttons in other ways to utilise the screen’s attractive look: you can touch a touch-screen or smartboard, press switches or PC keys, or hit keys on a MIDI instrument.
The music within each button can either be musical notes played on a synthesised instrument, or an audio sample of any recorded sound, for example animal noises or sound effects. Sound samples can also be recordings of people’s voices speaking or singing words or phrases. So a child in a class group could play vocal phrases to lead the singing (‘call’), with the other children then answering by singing the ‘response’.


Pictured above, a pupil in Finland is trying out playing a screen with just three buttons, with musical phrases plus a sound effect of a roaring bear (popular with young players!). She has been using the system for just a few minutes, and was successfully playing the song, which proved very enjoyable and motivating for her.

SKUG’s experience from their previous prototype system has led to the incorporation of some innovative playing features which distinguish Eye-Touch from other eyegaze music systems and have been shown to enable people to play who otherwise couldn’t. These features provide an easy entry level, and we have found that they enable new users to start playing immediately and gain motivation. They can also be changed or removed by teachers to suit each player’s abilities and, most importantly, can evolve as a player practises and improves. One such feature is to have the buttons in a sequence which can only be played in the right order, so the player can ‘look over’ other buttons to get to the next ‘correct’ button.
Here are two examples. The Play-screen below has buttons, each containing a single note, arranged as a keyboard with colouring matching the Figurenotes scheme. A player with enough ability could learn a melody and play it by moving between the buttons in the empty space below. But by putting the buttons into a sequence order, the player is able to learn and play the melody far more easily: they can look over buttons to get to the next ‘correct’ button (note) of the song, without playing the buttons in between.

Screenshot from Eye-Touch.
As well as illustrating a general theme, the facility to add pictures gives us many more possibilities. The Play-screen below left has buttons which show pictures and play sounds and music relating to J.S. Bach’s life story. The buttons could be played freely, but in this case have been put into a sequence order to illustrate his life chronologically. As before, a player can move through the buttons to play them in order, even though they are close together. But we may want to make them even bigger, and make the player’s job even easier, by setting the screen to only display the ‘next’ button in the sequence (below right). The other buttons are hidden, and the player only sees the button which is next to play, and can then move onto it.

Play-screen featuring images representing the life of J.S. Bach; each picture plays some music from that period. The lesson can be split into stages to make it more accessible.
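The sequenced-buttons behaviour described above is easy to model in code. Here is a hypothetical Python sketch (not Eye-Touch’s actual implementation) of the core logic: only the next button in the song’s sequence responds to gaze, so every other button can safely be looked over, and optionally only that next button is shown:

```python
class SequencedPlayScreen:
    """Toy model of the sequenced Play-screen idea (illustrative
    only). Only the next button in the sequence responds to gaze;
    all others can safely be 'looked over'."""

    def __init__(self, sequence, show_only_next=False):
        self.sequence = sequence     # e.g. note names or picture buttons
        self.position = 0            # index of the next button to play
        self.show_only_next = show_only_next

    def visible_buttons(self):
        remaining = self.sequence[self.position:]
        return remaining[:1] if self.show_only_next else list(self.sequence)

    def on_gaze(self, button):
        """Called when the player's gaze dwells on a button."""
        if (self.position < len(self.sequence)
                and button == self.sequence[self.position]):
            self.position += 1
            return f"play {button}"  # a real app triggers MIDI/audio here
        return None                  # looked-over button: ignored

screen = SequencedPlayScreen(["C", "E", "G"])
for gaze in ["E", "C", "G", "E", "G"]:   # stray glances are ignored
    print(gaze, "->", screen.on_gaze(gaze))
```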

There is also an accompanying text to tell the story which, if desired, can be displayed on screen via a built-in ‘song-sheet’. Teachers can also make their own Play-screens by putting their own music into buttons, either by playing live on a MIDI keyboard or by recording their own sound samples. To further personalise a Play-screen for a pupil, people can also organise and edit all the visual aspects, including adding their own pictures.
The Eye-Touch software is also very easy to install and operate; we have found it quick and easy to install on school pupils’ eye-gaze tablets, and it worked for them straight away.
In January 2018 the SKUG team started a project to further develop Eye-Touch, expanding the ways of playing, the creating and editing facilities for teachers, and the range of songs provided in the library.

Route4U – Accessible route planning

Tamas and Peter from route4u.org called in last week to tell us about their accessible route-finding service. Based on OpenStreetMap, Route4u allows users to plan routes that are appropriate to their level and method of mobility. Available on iOS, Android and as a web app at route4u.org/maps, Route4u is the best accessible route-planning solution I have seen. Where a service like Mobility Mojo gives detailed accessibility information on destinations (businesses, public buildings), Route4u concentrates more on the journey, making them complementary services. When first setting up the app you will be asked to select either pram, active wheelchair, electronic wheelchair, handbike or walking (left screenshot below). You can further configure your settings later in the Accessibility menu, selecting curb heights, maximum slopes etc. (right screenshot below).

Accessibility settings screen, featuring settings like maximum slope or curb height.

Further configure your settings in Accessibility

The ‘select your vehicle’ screen (see text above).

You are first asked to select your mobility method
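Under the hood, matching routes to a mobility profile presumably comes down to filtering (or heavily penalising) path segments whose attributes exceed the user’s limits before running a shortest-path search. Here is a simplified sketch of that idea in Python; the field names and numbers are hypothetical, not Route4u’s actual data model:

```python
from dataclasses import dataclass

@dataclass
class MobilityProfile:
    max_slope_pct: float  # steepest gradient the user can manage (%)
    max_curb_cm: float    # highest curb the wheels can mount (cm)

@dataclass
class Segment:
    name: str
    slope_pct: float
    curb_cm: float

def passable(seg: Segment, profile: MobilityProfile) -> bool:
    """A planner would drop (or penalise) any segment that violates
    the profile before computing the best route."""
    return (seg.slope_pct <= profile.max_slope_pct
            and seg.curb_cm <= profile.max_curb_cm)

active_wheelchair = MobilityProfile(max_slope_pct=8.0, max_curb_cm=5.0)
segments = [Segment("quay ramp", slope_pct=6.0, curb_cm=0.0),
            Segment("side-street crossing", slope_pct=2.0, curb_cm=12.0)]

# Only the ramp survives; the 12 cm curb rules out the crossing.
print([s.name for s in segments if passable(s, active_wheelchair)])
```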

This is great, but so far nothing really groundbreaking; we have seen services like this before. Forward-thinking cities with deep pockets, like London and Ontario, have had similar accessibility features built into their public transport route planners for the last decade. That is a lot easier to achieve, however, because you are dealing with a finite number of route options. Where Route4u is breaking new ground is in facilitating this level of planning throughout an entire city. It does this by using the technology built into smartphones to provide crowdsourced data that constantly updates the maps. If you are using a wheelchair or scooter, the sensors on your smartphone can measure the level of vibration experienced on a journey. This data is sent back to Route4u, who use it to estimate the comfort experienced on that journey, giving other users access to even more information on which to base their route choice. The user doesn’t have to do anything; they are helping to improve the service simply by using it. Users can also more proactively improve the service by marking obstacles they encounter on their journey. An obstacle can be marked as temporary or permanent. Temporary obstacles, like road works or those ubiquitous sandwich boards that litter our pavements, will remain on the map, helping to inform the accessibility of the route, until another user confirms they have been removed and enters that information.
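As a rough illustration of how vibration data might translate into a comfort estimate, the sketch below scores a journey segment by the RMS of vertical accelerometer readings after removing the gravity component. The thresholds are invented for illustration; Route4u’s actual metric is not public as far as I know:

```python
import math

def comfort_estimate(accel_samples):
    """Toy comfort score for a journey segment (illustrative only,
    not Route4u's actual metric). Input: vertical accelerometer
    readings in m/s^2; subtract the mean (~gravity) and take the
    RMS - a rougher surface shakes the phone more."""
    mean = sum(accel_samples) / len(accel_samples)
    rms = math.sqrt(sum((a - mean) ** 2 for a in accel_samples)
                    / len(accel_samples))
    if rms < 0.3:
        return "smooth"
    if rms < 1.0:
        return "moderate"
    return "rough"

# Hypothetical readings captured while rolling along a pavement:
print(comfort_estimate([9.7, 9.9, 9.8, 10.4, 9.2, 10.6, 9.1, 9.8]))
```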

Example of an obstacle added by a user: a picture of a curb that may not be accessible to a wheelchair user.

Example of an obstacle added by a user: a picture of a gate which would not be accessible to a wheelchair user.

If you connect Route4u to your Facebook account you get access to a points-based reward system. This allows you to compete with friends and have your own league table. In Budapest, where they are already well established, they have linked with sponsors who allow you to cash in points for more tangible rewards, like a free breakfast or refreshment. These gamification features should help encourage users less inclined towards altruism to participate, and that is key: Route4u, once established, relies on its users to keep information up to date. This type of service based on crowdsourced data is a proven model, particularly in the route-planning sphere. It’s a bit of a catch-22, however, as a service needs to be useful first to attract users. It is early days for Route4u in Dublin, and Tamas and Peter acknowledge that a lot of work needs to be done before promoting the service here. Over the next few months their team will begin mapping Dublin city centre; this way, when they launch, there will be the foundation of an accessible route-finding service which people can use, update and build upon. While Route4u has obvious benefits for end users with mobility difficulties, there is another beneficiary of the kind of data this service will generate. Tamas and Peter were also keen to point out how this information could be used by local authorities to identify where infrastructure improvements are most needed and where investment will yield the most return. In the long run this will help Dublin and her residents tackle the accessibility problem from both sides, making it a truly smart solution.

Map of the area that has been mapped, showing blue, green and red routes.

Legend showing the levels of accessibility.