Beyond Boundaries: How Interactive and Immersive Media are being used to support people with autism

This is the first in a two-part post about Enable Ireland’s Immersive Media Beyond Boundaries Garden project. If you want to try the apps for yourself you can get them from Google Play here, or there are links and some more information on our website here. This first post (Part 1) will give a brief background to Virtual Reality and related technologies and look at some of the research into its potential in the area of autism. Part 2 will outline how we put our Beyond Boundaries and SecretGarden apps together, and how we hope to incorporate this technology into future training and use it to support clients of our service.

Background: VR, AR, Mixed Reality, 360 Video?

Virtual Reality, usually referred to by the acronym VR, is one of those technologies that is perpetually “the next big thing”. If you grew up watching movies like Tron and The Lawnmower Man (giving away my age here), VR is probably filed away in your brain somewhere between hoverboards (that actually hover) and teleportation. When the concept of a technology has been part of popular culture so far in advance of its realisation, it can hinder rather than promote its development. The trajectory VR’s evolution has taken, however, is much closer to that of a technology like Speech Recognition than to hoverboards. Like Speech Recognition, VR saw a great deal of progress in the latter part of the 1980s. In both cases that progress, although important, was almost nullified by the hype surrounding, and subsequent commercialisation of, a technology that clearly wasn’t ready for public consumption. The reality of what VR could offer at the time left people disillusioned with the technology.

Before I talk about how VR is being used in the area of autism, it’s worth clarifying what exactly is meant by some of the terms involved. As these are emerging technologies, there is still quite a lot of confusion about what is meant by Virtual Reality and its associated technologies: Augmented Reality (AR), Mixed Reality, Immersive Media and 360 Video. First, let’s look at the video below, which explains what VR and AR are and how they differ.

So what is Mixed Reality? In short, Mixed Reality is a combination of VR and AR, in theory offering the best of both. It is also closely associated with Microsoft and other Windows-aligned hardware manufacturers. Have a look at the short video below.

360 degree Video and Photography are less interactive than the technologies discussed above. The viewer is also restricted in terms of movement: they can only view the scene from the position where the camera was placed. Movement can be simulated to some extent, however, through the use of hotspots or menus that allow the viewer to navigate between different scenes. More traditional film techniques, like fading between scenes, can also be used, as in the video below. 360 degree video can be either flat (monoscopic) or stereo. Stereo, or 3D, video is captured with a camera that has two lenses roughly the same distance apart as a person’s eyes. Each eye then gets a slightly different view, which our brain stitches together into a 3D image.
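
To make the hotspot idea concrete, here is a minimal sketch in Python of how a 360 experience can be modelled as a graph of fixed camera positions linked by hotspots. The scene and hotspot names are made up for illustration and aren’t taken from any particular 360 player.

```python
# Each 360 scene is just a piece of media plus hotspots linking to
# other scenes; "moving" is really jumping between camera positions.
SCENES = {
    "gate":    {"media": "gate_360.jpg",    "hotspots": {"path ahead": "pond"}},
    "pond":    {"media": "pond_360.jpg",    "hotspots": {"back to gate": "gate",
                                                         "flower bed": "flowers"}},
    "flowers": {"media": "flowers_360.jpg", "hotspots": {"back to pond": "pond"}},
}

def follow_hotspot(current_scene, hotspot_label):
    """Return the name of the scene a hotspot leads to."""
    return SCENES[current_scene]["hotspots"][hotspot_label]

print(follow_hotspot("gate", "path ahead"))  # -> pond
```

A real player would render the media in a headset and trigger the jump when the viewer gazes at or taps the hotspot, but the underlying structure is as simple as this.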

Finally, Immersive Media is frequently used as an umbrella term for all the technologies discussed above, but it more correctly refers to the less interactive 360 Video and Photography.

Immersive Media and Autism

Since the early days of the technology, people have proposed that VR may have potential as a therapeutic or training tool within the area of neurodiversity. Dorothy Strickland of North Carolina State University’s short paper “Two Case Studies Using Virtual Reality As A Learning Tool For Autistic Children” (Journal of Autism and Developmental Disorders, Vol. 26, No. 6, 1996) is generally accepted as the first documented use of VR as a tool to increase the capabilities of someone with a disability. In this early study (which you can read at the link above), VR was used as a means of teaching the children how to safely cross the street. While VR technology itself has clearly moved on, for the reasons outlined above its use in this area had (until recently) not, and there is still a great deal about this paper that is relevant today, in particular regarding the children’s acceptance of the headset (which would have been chunkier and more uncomfortable than today’s) and their understanding of the 3D world it presented.

Step forward almost a quarter of a century and we are riding the peak of the second wave of commercial VR. Thanks largely to developments driven by the rapid evolution of mobile devices in the early years of this decade, VR is becoming more accessible and less disappointing than it was the first time around. With the new generation of headsets and their ability to render sharp and detailed 3D environments has come a renewed interest in the use of VR in the area of autism. At a recent CTD Institute webinar on this very subject (Virtual Reality and Assistive Technology), Jaclyn Wickham (@JacWickham), a teacher turned technologist and founder of AcclimateVR, outlined some of the reasons why VR could be an appropriate technology for training some people on the autistic spectrum. These included: the ability to create a safe and controlled environment where tasks can be practiced and repeated; the way VR puts the emphasis on the visual and auditory senses (presumably with the ability to configure and control both); the scope to create an individualised experience; and the many non-verbal interaction possibilities. Anecdotally this all makes complete sense, but these are early days and much of the research is still being conducted.

A leading researcher in this area is Dr Nigel Newbutt (@Newbutt), who in June of this year published a short but enlightening update on his progress working with children from Mendip School in the UK. Having seen him present at the Doctrid V conference in 2017, I can assure you that progress in this area is being made, but even he acknowledges more work is needed: “Our research suggests that head-mounted displays might be a suitable space in which to develop specific interventions and opportunities; to practice some skills people with autism might struggle with in the real world. We’re seeking further funding to address this important question – one that has eluded this field to date.” (Full interview here: From apps to robots and VR: How technology is helping treat autism)

The commercial offerings in the area of VR and autism (Floreo and AcclimateVR) tend to concentrate on providing a virtual space where basic life skills can be practiced. Another use is as a form of exposure therapy, where immersive video and audio of environments and situations are used to prepare someone for the real-life experience. You can see examples of both in action at the links above.

Within Enable Ireland AT service our own VR journey was spurred on by a visit and demonstration from James Corbett (@JamesCorbett) of SimVirtua. James could be considered a real pioneer in this area and had in fact met with us previously almost 10 years ago to show us some work he was doing with non-immersive virtual environments (without headsets) in schools. SimVirtua had worked on a Mindfulness VR app called MindMyths and it was this idea of providing a retreat or sanctuary using immersive video that inspired us when it came to working on the Bloom Beyond Boundaries Garden project.

In the second part of this post (coming soon) I’ll give some background to what we hoped to achieve with the Beyond Boundaries garden project and some technical information on how we put it together.

GazeSpeak & Microsoft’s ongoing efforts to support people with Motor Neuron Disease (ALS)

Last Friday (February 17th) New Scientist published an article about a new app in development at Microsoft called GazeSpeak. Due to be released over the coming months on iOS, GazeSpeak aims to facilitate communication between a person with MND (known as ALS in the US; I will use both terms interchangeably) and another individual, perhaps their partner, carer or friend. Developed by Microsoft intern Xiaoyi Zhang, GazeSpeak differs from traditional approaches in a number of ways. Before getting into the details, however, it’s worth looking at the background: GazeSpeak didn’t come from nowhere. It’s actually one of the products of some heavyweight research into Augmentative and Alternative Communication (AAC) that has been taking place at Microsoft over the last few years. Since 2013, inspired by football legend and ALS sufferer Steve Gleason (read more here), Microsoft researchers and developers have brought the weight of their considerable collective intellect to bear on increasing the ease and efficiency of communication for people with MND.

Last year Microsoft Research published a paper called “AACrobat: Using Mobile Devices to Lower Communication Barriers and Provide Autonomy with Gaze-Based AAC” (abstract and pdf download at previous link), which proposed a companion app that lets an AAC user’s communication partner assist (in a non-intrusive way) in the communication process. Take a look at the video below for a more detailed explanation.

This is an entirely new approach to increasing the efficiency of AAC, and one that, I suggest, could only have come from a large mainstream tech organisation with over thirty years’ experience facilitating communication and collaboration.

Another Microsoft research paper published last year (with some of the same authors as the previous paper), called “Exploring the Design Space of AAC Awareness Displays”, looks at the importance of a communication partner’s “awareness of the subtle, social, and contextual cues that are necessary for people to naturally communicate in person”. Their research focused on creating a display that would allow the person with ALS to express things like humor, frustration and affection: emotions that are difficult to convey with text alone. Yes, they proposed the use of Emoji, a proven and effective way of overcoming a similar difficulty in remote or non-face-to-face interactions, but they went much further and also looked at solutions like avatars, skins and even coloured LED arrays. This, like the paper above, is an academic work and as such not an easy read, but the ideas and solutions being proposed by these researchers are practical and will hopefully filter through to end users of future AAC solutions.

That brings us back to GazeSpeak, the first fruit of the Microsoft/Steve Gleason partnership to reach the general public. Like the AACrobat solution outlined above, GazeSpeak gives the communication partner a tool rather than focusing on tech for the person with MND. As the image below illustrates, the communication partner has GazeSpeak installed on their phone; with the app running, they hold their device up to the person with MND as if they were photographing them. They suggest that a sticker with four grids of letters is placed on the back of the smartphone, facing the speaker. The app then tracks the person’s eye movements: up, down, left or right. Each direction indicates that the letter they are selecting is contained in the grid lying in that direction (see photo below).

[Image: a man looking to the right while another person holds up a smartphone with GazeSpeak installed]

Similar to how the old T9 predictive text worked, GazeSpeak takes the candidate letters from each group the speaker looks towards and predicts the word based on the most common English words. So the app is using AI in the form of machine vision to track the eyes, and also to make the word prediction. In the New Scientist article they mention that the user would be able to add their own commonly used words and people/place names, which one assumes would prioritize them within the prediction list. In the future perhaps some capacity for learning could be added to further increase efficiency. After using this system for a while the speaker may not even need to see the sticker with the letters; they could write words from muscle memory. At that stage a simple QR code leading to the app download would allow them to communicate with complete strangers using just their eyes and no personal technology.
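
To illustrate the T9-style disambiguation described above, here is a minimal sketch in Python. To be clear, this is not Microsoft’s code: the letter groupings, the toy vocabulary and the function names are all illustrative assumptions.

```python
# A toy version of the grid-and-predict scheme described above.
# NOTE: the four letter groups below are an assumption for illustration;
# the real GazeSpeak sticker may group letters differently.
GROUPS = {
    "up":    set("abcdef"),
    "down":  set("ghijklm"),
    "left":  set("nopqrs"),
    "right": set("tuvwxyz"),
}

# Toy vocabulary in descending frequency order; a real system would use
# a large frequency-ranked dictionary plus the user's own words/names.
VOCABULARY = ["the", "and", "you", "tie", "that"]

def matches(word, directions):
    """True if each letter of the word sits in the grid the speaker
    looked towards for that position."""
    return len(word) == len(directions) and all(
        letter in GROUPS[direction]
        for letter, direction in zip(word, directions)
    )

def predict(directions):
    """Return all candidate words, most frequent first."""
    return [word for word in VOCABULARY if matches(word, directions)]

# Looking right, then down, then up could spell "the" or "tie";
# frequency ordering puts the likelier word first.
print(predict(["right", "down", "up"]))  # -> ['the', 'tie']
```

The point of the frequency ordering is that an ambiguous gaze sequence (here, “the” versus “tie”) still surfaces the likeliest word first, which is exactly the trade-off T9 made on numeric keypads.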

UPDATE (August 2018): GazeSpeak has been released for iOS and is now called SwipeSpeak. Download here. For more information on how it works or to participate in further development have a look at their GitHub page here.

Call 3 now open for ASSISTID Fellows

Fellowships in Intellectual Disability and Autism

ASSISTID will train experienced researchers in the skills necessary to make a difference in the lives of people with autism and intellectual disability through improved communication, social inclusion and education.

This is the first time a network of multidisciplinary researchers has come together on this scale to develop assistive technologies for those with autism and ID, and it is an opportunity for researchers to advance their careers through expert training and international mobility.

Deadline June 30th 2016
Interim Deadline February 29th 2016

For more information: Dr. Sheeona Gorman

Email: sgorman@respect.ie
Website: www.assistid.eu