October is AAC (Augmentative and Alternative Communication)
Awareness Month. The goal is to raise awareness of AAC and to promote the many
different ways in which people communicate using communication systems, both
low and high tech. In order to celebrate this, some AAC companies offer
discounts and special promotions. More should be announced in the coming weeks,
and we will update this post to let you know!
AssistiveWare will be offering a 50% discount on some of
their most popular apps between the 14th and 16th.
Proloquo2Go – a symbol-based AAC app, compatible with iPad,
iPod, iPhone and the Apple Watch.
Proloquo4Text – a text-based AAC app, again available on the
platforms mentioned above.
Keeble – a highly customisable keyboard for iPad, iPod and
iPhone, with word prediction, accommodations for physical and visual
difficulties, and a speak-as-you-type feature.
Pictello – an app for creating visual stories and schedules
for iPad, iPhone and iPod.
Voice banking involves recording a list of sentences into a computer. When enough recordings have been captured, software chops them up into individual sounds called phonetic units. A synthetic voice can then be built out of these phonetic units; this approach is called concatenative speech synthesis. The number of sentences or statements needed to build a good quality English language synthetic voice using this process varies, but it is somewhere between 600 and 3,500, which will take at least 8 hours of constant recording. Most people break it up over a few weeks, which is recommended, as voice quality will deteriorate over the course of a long session. So 20 minutes to half an hour in the morning (when most people's voices are clearer) would be a good approach. The more recordings made, the better the quality of the resulting voice.
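To make the idea of concatenative synthesis concrete, here is a minimal sketch. The unit names and "waveforms" (short lists of samples) are purely illustrative, not real audio data; in a real system each entry would be audio sliced out of the banked sentence recordings.

```python
# A minimal sketch of concatenative speech synthesis: a bank of recorded
# phonetic units is stitched together end-to-end to "speak" a word.

def synthesize(phonemes, unit_bank):
    """Concatenate the recorded waveform for each phonetic unit in order."""
    waveform = []
    for p in phonemes:
        if p not in unit_bank:
            raise KeyError(f"no recording banked for unit: {p}")
        waveform.extend(unit_bank[p])
    return waveform

# Toy unit bank: fake sample values standing in for recorded audio.
unit_bank = {
    "HH": [0.1, 0.2],
    "EH": [0.3, 0.4, 0.5],
    "L":  [0.2],
    "OW": [0.6, 0.7],
}

speech = synthesize(["HH", "EH", "L", "OW"], unit_bank)  # "hello"
print(len(speech))  # → 8 (total samples in the stitched waveform)
```

The quality constraint mentioned above falls out of this structure: the more units banked (and the more consistent the recordings), the smoother the joins between concatenated pieces, which is why services ask for hundreds or thousands of sentences.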
There are a number of services offering voice banking and we have listed some that we are aware of below. The technology used varies from service to service and this post isn’t intended to be a guide to which service may be appropriate to a particular user. Our advice would be to investigate all options before making a decision as this process will be a considerable investment of time and in some cases money.
A person might choose to bank their voice for a number of reasons. The most common would be a diagnosis of a progressive illness like Motor Neuron Disease (MND/ALS) or similar that will result in the loss of speech. A voice is a very personal thing, and being able to keep this aspect of individuality and identity can be important. The MND Association have detailed information on Voice Banking on their website here. People unable to speak from birth can also take advantage of this technology. The VocalID service (although expensive) seems to offer good options in this regard. A family member could donate their voice by going through the voice banking process (or they could choose an appropriate donated voice). This synthetic voice could then be modified with filters modelled on the user's own vocalisations. The result is a unique and personal voice with some of the regional qualities (accent, pronunciation) that reflect their background and heritage. Irish AAC users have historically had little choice when it came to selecting a voice, most grudgingly accepting the upper-class BBC newsreader English voice that was ubiquitous in communication devices. In Ireland, where accents can vary significantly over small geographical areas, how you speak is perhaps even more tied to your identity than in other countries. Hopefully in the near future we will be hearing AAC users communicating in Cork, Limerick and Dublin accents!
For research purposes I used the ModelTalker service to create a synthetic voice. I wanted to see how well it dealt with the Irish accent. The ModelTalker service is run out of the Nemours Speech Research Laboratory (SRL) in the Nemours Center for Pediatric Auditory and Speech Sciences (CPASS) at the Alfred I. duPont Hospital for Children in Wilmington, Delaware. It is not a commercial service, costing only a nominal $100 to download your voice once banked. They offer an Online Recorder that works directly in the Chrome browser, or you can download and install their MTVR app if you are using Windows. The only investment you need to make to begin banking your voice is a decent quality USB headset. I used the Andrea NC-181 (about €35). For the best quality they recommend you record about 1600 sentences, but they can build a voice from 800. As this was just an experiment I recorded the minimum 800. At the beginning of each session you go through a sound check. Consistency is an important factor contributing to the overall quality of the finished voice; this is why you need to keep using the same computer and microphone throughout the whole process, ideally in the same location. When you begin you will hear the first statement read out; you then record the statement yourself. A colour code gives you feedback on whether the recording was acceptable: red means it wasn't good enough to use, so you should try again; yellow means okay but could be better; and green means perfect, move on. I found the Irish accent resulted in a lot of yellow. Don't let this worry you too much. A nice feature for Irish people who want to engage in this process is the ability to record custom sentences. They recommend that you at least record your own name. So many names and places in Ireland are anglicised versions of Irish that it would be worthwhile spending a bit of time on these custom sentences.
“Siobhán is from Drogheda”, for example, would be incomprehensible to most Text to Speech engines. At the end of each session you upload your completed sentences, which are added to your inventory (if using the browser-based recorder they are added as you go). When you feel you have enough completed you can request your voice. When the voice is ready you need to audition it; this process allows you to fine-tune how it sounds. I made a screen recording of this process and I will add it to this post when I have edited it down to a manageable length.
Click play below to hear a sample of my synthesized voice. Yes, unfortunately I do kind of sound like that!
Big news (in the AT world anyway) may have arrived in your mailbox early last week. It was announced that leading AAC and Computer Access manufacturer Tobii has purchased SmartBox AT (Sensory Software), developers of The Grid 3 and Look2Learn. As well as producing these very popular software titles, SmartBox were also a leading supplier of a range of AAC and Computer Access hardware, including their own GridPad and PowerPad ranges. Basically (in this part of the world at least) they were the two big guns in this area of AT, between them accounting for maybe 90% of the market. An analogy using soft drink companies would be that this is like Coca-Cola buying Pepsi.
Before examining what this takeover (or amalgamation?) means to their customers going forward it is worth looking back at what each company has historically done well. This way we can hopefully provide a more optimistic future for AT users rather than the future offered by what might be considered a potential monopoly.
Sensory Software began life in 2000 in the spare bedroom of founder Paul Hawes. Paul had previously worked for AbilityNet and had 13 years' experience working in the area of AT. Early software like GridKeys and The Grid had been very well received and the company continued to grow. In 2006 they set up Smartbox to concentrate on complete AAC systems, while sister company Sensory Software concentrated on developing software. In 2015 both arms of the company joined back together under the SmartBox label. By this time their main product, the Grid 3, had established itself as a firm favourite: with Speech and Language Therapists (SLTs) for the wide range of communication systems it supported, and with Occupational Therapists and AT professionals for its versatility in providing alternative input options to Windows and other software. Many companies would have been satisfied with providing the best product on the market; however, there were a couple of other areas where SmartBox also excelled. They may not have been the first AT software developers to harness the potential resources of their end users (they also may have been, I would need to research that further) but they were certainly the most successful. They succeeded in creating a strong community around the Grid 2 & 3, with a significant proportion of the online grids available to download being user generated. Their training and support were also second to none. Regular high quality training events were offered throughout Ireland and the UK. Whether by email, phone or the chat feature on their website, their support was always top quality too. Their staff clearly knew their product inside out, responses were timely and they were always a pleasure to deal with.
Tobii have been around since 2001. The Swedish firm actually started with eye gaze: three entrepreneurs – John Elvesjö, Mårten Skogö and Henrik Eskilsson – recognised the potential of eye tracking as an input method for people with disabilities. In 2005 they released the MyTobii P10, the world's first computer with built-in eye tracking (and I've no doubt there are still a few P10 devices in use). What stood out about the P10 was the build quality of the hardware; it was built like a tank. While Tobii could be fairly criticised for under-specifying their all-in-one devices in terms of processor and memory, the build quality of their hardware is always top class. Over the years Tobii have grown considerably, acquiring Viking Software AS (2007), Assistive Technology Inc. (2008) and DynaVox Systems LLC (2014). They have grown into a global brand with offices around the world. As mentioned above, Tobii's main strength is that they make good hardware. In my opinion they make the best eye trackers and have consistently done so for the last 10 years. Their AAC software has also come on considerably since the DynaVox acquisition. While Communicator always seemed to be a pale imitation of the Grid (apologies if I'm being unfair, but certainly true in terms of its versatility and ease of use for computer access) it has steadily been improving. Their newer Snap + Core First AAC software has been a huge success and, for users just looking for a communication solution, would be an attractive option over the more expensive (although much fuller featured) Grid 3. Alongside Snap + Core they have also brought out a “Pathways” companion app. This app is designed to guide parents, caregivers and communication partners in best practices for engaging Snap + Core First users. It supports the achievement of communication goals through video examples, lesson plans, an interactive goals grid for tracking progress, and a suite of supporting digital and printable materials.
A really useful resource which will help to empower parents and prove invaluable to those not lucky enough to have regular input from an SLT.
To sum things up. We had two great companies, both with outstanding products. I have recommended the combination of the Grid software and a Tobii eye tracker more times than I remember. The hope is that Tobii can keep the Grid on track and incorporate the outstanding support and communication that was always an integral part of SmartBox’s operation. With the addition of their hardware expertise and recent research driven progress in the area of AAC, there should be a lot to look forward to in the future.
A few weeks ago, Lee Ridley (a.k.a. Lost Voice Guy) became the first comedian to win Britain’s Got Talent, now in its 12th year. As well as outshining his competitors along the way, and winning with a clear margin, Lee was a favourite with both the judges and the public.
What makes Lee's win even more incredible is the fact that he is the first person with a disability to win the show. For a stand-up comedian, being able to connect with your audience is essential, and he did this with self-deprecating humour, fantastic delivery and some killer one-liners, all done through the use of Augmentative and Alternative Communication (AAC).
AAC provides a means of communication for those whose speech is not sufficient to communicate functionally in all environments and with all partners. Lee uses a combination of two devices to support his communication – an iPad with apps, and a dedicated device called a Lightwriter.
Lee has been on the comedy circuit since 2012, and has won prestigious prizes, including the BBC Radio New Comedy Awards in 2014. Below is an interview that Lee participated in, via email, with Karl O’Keeffe back in 2013, which gives some insights into his process and the unique challenges that using a synthesised voice can present.
Karl: You are the first person ever to do stand up comedy who uses a communication device, so you had nobody to learn from. What are the most important techniques and tricks you have learned so far that you wish someone had told you when you were starting?
Lee: I think one of the most important techniques that I have learnt is how to deal with timing. Obviously it’s pretty hard to know when to leave pauses for laughter and stuff, especially as I have to pre plan this. I can pause whenever I want but you have to be ready to pause when people laugh otherwise the start of the next bit gets lost or they don’t laugh as long. You sort of have to know when it’s coming so you’re ready for it. Obviously every audience is different so I’m never going to get it right every time. I think I’m getting better at anticipating when to pause though.
Karl: I see from your videos that you use both a Lightwriter and an iPad. Can you tell me which is better for stand up comedy?
Lee: I use my iPad for my stand up and I use my Lightwriter for day to day conversations. I just find that my iPad is slightly easier to understand. It is also easier to find my material on the iPad and, because it backs up to the cloud, it's a bit more secure and means I can use any Apple device. It's also a bit sexier than my Lightwriter.
Karl: Do you always use the same voice? Why is the voice important in your performance?
Lee: I use the same voice mostly yes. However I do use other voices in my act as well for comedy purposes. For example, I use a woman’s voice to do an impression of my mother. I think that my main voice is important to me because it has become ‘my’ voice. It’d be weird if I changed it now.
Karl: What app do you use on the iPad for communication?
Lee: I use Proloquo2go, which is a brilliant app. It is very complex but easy to use at the same time. It does everything that I need it to do really.
Karl: What is your favourite app on the iPad?
Lee: I tweet quite a lot so I tend to use Tweetbot all the time. I couldn't get through long train journeys without the Spotify app either!
Karl: Do you use any other Assistive Technology (computer access etc.)?
Lee: No. I only use Proloquo2go on my iPad and iPhone and then my Lightwriter.
You may have heard about or seen photos of Enable Ireland's fantastic “No Limits” Garden at this year's Bloom festival. Some of you were probably even lucky enough to have actually visited it in the Phoenix Park over the course of the Bank Holiday weekend. To support visitors, but also to allow those who didn't get the chance to go to share in some of the experience, we put together a “No Limits” Bloom 2017 Grid. If you use the Grid (2 or 3) from Sensory Software, or you know someone who does, and you would like to learn more about the range of plants used in Enable Ireland's garden, you can download and install it by following the instructions below.
How do I install this Grid?
If you are using the Grid 3 you can download and install the Bloom 2017 Grid without leaving the application. From Grid explorer:
Click on the Menu Bar at the top of the screen
In the top left click the + sign (Add Grid Set)
A window will open (pictured below). In the bottom corner click on the Online Grids button (you will need to be connected to the Internet).
If you do not see the Bloom2017 Grid in the newest section you can either search for it (enter Bloom2017 in the search box at the top right) or look in the Interactive learning or Education Categories.
If you are using the Grid 2, or you want to install this Grid on a computer or device that is not connected to the Internet, you can download the Grid set at the link below. You can then add it to the Grid as above, except select the Grid Set File tab and browse to where you have the Grid Set saved.
Tobii Dynavox have recently launched their new Boardmaker Online product in Ireland through SafeCare Technologies. It has all the functionalities of previous versions of Boardmaker, except now that it’s web-based you don’t need any disks and multiple users can access it from any PC.
You can purchase a Personal, Professional or District account and the amount you pay depends on the type of account, the amount of “instructors” and how many years you want to sign up for. You can also get a discount for any old Boardmaker disks that you want to trade in.
You get all the symbols that have been available in past versions, as well as some new symbol sets and any new ones that are created in the future will also be given to you. Because it’s web-based, you have access to previously created activities via the online community and you can upload activities you create yourself to that community and share them with other people in your district or all over the world.
Because it’s no longer tied to one device, you can create activities on your PC and assign them to your “students” who can use them either in school and/or at home. You no longer need to have a user’s device in your possession to update their activities and they don’t need to have a period without their device while you do this.
You (and the other instructors in your district if you have a district licence) can also assign the same activity to many students and by having different accessibility options set up for different students, the activity is automatically accessible for their individual needs. For example, you could create an activity and assign it to a student who uses eye gaze and to a student who uses switches and that activity will show up on their device in the format that’s accessible for them.
The results of students’ work can be tracked against IEP or educational goals which then helps you decide what activities would be suitable to assign next. You can also track staff and student usage.
One limitation is that you can only create activities on a Windows PC or Mac. You can play activities on an iPad using the free app but not create them on it, and you can’t use Boardmaker Online to either create or play activities on an Android or Windows-based tablet.
The other point to mention is that because it’s a subscription-based product, the payment you have to make is recurring every year rather than being a one-off payment, which may not suit everyone.
However, with the new features it’s definitely worth getting the free 30-day trial and deciding for yourself if you’d like to trade in your old Boardmaker disks for the new online version!
Last Friday (February 17th) New Scientist published an article about a new app in development at Microsoft called GazeSpeak. Due to be released over the coming months on iOS, GazeSpeak aims to facilitate communication between a person with MND (known as ALS in the US; I will use both terms interchangeably) and another individual, perhaps their partner, carer or friend. Developed by Microsoft intern Xiaoyi Zhang, GazeSpeak differs from traditional approaches in a number of ways. Before getting into the details, however, it's worth looking at the background. GazeSpeak didn't come from nowhere; it's actually one of the products of some heavyweight research into Augmentative and Alternative Communication (AAC) that has been taking place at Microsoft over the last few years. Since 2013, inspired by football legend and ALS sufferer Steve Gleason (read more here), Microsoft researchers and developers have brought the weight of their considerable collective intellect to bear on the subject of increasing the ease and efficiency of communication for people with MND.
This is an entirely new approach to increasing the efficiency of AAC, and one that, I suggest, could only have come from a large mainstream tech organisation with over thirty years' experience facilitating communication and collaboration.
Another Microsoft research paper published last year (with some of the same authors as the previous paper), called “Exploring the Design Space of AAC Awareness Displays”, looks at the importance of a communication partner's “awareness of the subtle, social, and contextual cues that are necessary for people to naturally communicate in person”. Their research focused on creating a display that would allow the person with ALS to express things like humour, frustration, affection etc., emotions difficult to express with text alone. Yes, they proposed the use of Emoji, which are a proven and effective way a similar difficulty is overcome in remote or non face-to-face interactions; however, they went much further and also looked at solutions like Avatars, Skins and even coloured LED arrays. This, like the one above, is an academic paper and as such not an easy read, but the ideas and solutions being proposed by these researchers are practical and will hopefully be filtering through to end users of future AAC solutions.
That brings us back to GazeSpeak, the first fruits of the Microsoft/Steve Gleason partnership to reach the general public. Like the AACrobat solution outlined above, GazeSpeak gives the communication partner a tool rather than focusing on tech for the person with MND. As the image below illustrates, the communication partner would have GazeSpeak installed on their phone and, with the app running, would hold their device up to the person with MND as if they were photographing them. A sticker with four grids of letters is placed on the back of the smartphone facing the speaker. The app then tracks the person's eyes: up, down, left or right, each direction meaning that the letter they are selecting is contained in the grid in that direction (see photo below).
Similar to how the old T9 predictive text worked, GazeSpeak selects the appropriate letter from each group and predicts the word based on the most common English words. So the app is using AI in the form of machine vision to track the eyes and also to make the word prediction. In the New Scientist article they mention that the user will be able to add their own commonly used words and people/place names, which one assumes would prioritise them within the prediction list. In the future perhaps some capacity for learning could be added to further increase efficiency. After using this system for a while the speaker may not even need to see the sticker with letters; they could write words from muscle memory. At this stage a simple QR code leading to the app download would allow them to communicate with complete strangers using just their eyes and no personal technology.
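The T9-style disambiguation described above can be sketched in a few lines. Note the letter groupings and the tiny word list here are assumptions for illustration, not GazeSpeak's actual layout or dictionary: each eye direction picks one of four letter groups, and a sequence of directions is matched against a frequency-ranked vocabulary.

```python
# Illustrative sketch of GazeSpeak-style word prediction. Each eye
# direction (up/down/left/right) selects one of four letter groups;
# a direction sequence is disambiguated against a ranked word list,
# much like old T9 predictive text on phone keypads.

GROUPS = {
    "up":    set("abcdef"),
    "right": set("ghijkl"),
    "down":  set("mnopqr"),
    "left":  set("stuvwxyz"),
}

def matches(word, directions):
    """True if each letter of the word lies in the group chosen by that direction."""
    return len(word) == len(directions) and all(
        ch in GROUPS[d] for ch, d in zip(word, directions)
    )

def predict(directions, vocabulary):
    """Return all candidate words; vocabulary is assumed pre-ranked by frequency."""
    return [w for w in vocabulary if matches(w, directions)]

# A real system would use a large frequency-ranked dictionary plus the
# user's own custom words and names.
vocab = ["the", "and", "tea", "she", "ann"]
print(predict(["left", "right", "up"], vocab))  # → ['the', 'she']
```

With only four coarse inputs the same direction sequence matches many words, which is why the ranked dictionary (and the ability to add personal names) matters so much to the app's efficiency.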
UPDATE (August 2018): GazeSpeak has been released for iOS and is now called SwipeSpeak. Download here. For more information on how it works or to participate in further development have a look at their GitHub page here.
As we approach the end of 2016 it's an appropriate time to look back and take stock of the year from an AT perspective. A lot happened in 2016, not all good. Socially, humanity seems to have regressed over the past year. Maybe this short-term, inward-looking protectionist sentiment has been brewing longer, but 2016 brought the opportunity to express it politically; you know the rest. While society steps and looks back, technology continues to leap and bound forward, and 2016 has seen massive progress in many areas, particularly those associated with Artificial Intelligence (AI) and Smart Homes. This is the first in a series of posts examining some technology trends of 2016 and a look at how they affect the field of Assistive Technology. The links will become active as the posts are added. If I'm missing something please add it to the comments section.
So although 2016 is unlikely to be looked on kindly by future historians… you know why; it has been a great year for Assistive Technology, perhaps one of promise rather than realisation however. One major technology trend of 2016 missing from this series of posts is Virtual (or Augmented) Reality. While VR was everywhere this year, with products coming from Sony, Samsung, Oculus and Microsoft, its usefulness beyond gaming is only beginning to be explored (particularly within Education).
So what are the goals for next year? Well harnessing some of these innovations in a way where they can be made accessible and usable by people with disabilities at an affordable price. If in 2017 we can start putting some of this tech into the hands of those who stand to benefit most from its use, then next year will be even better.
Just to make you aware that there’s a FREE AAC Awareness Day being held at the Central Remedial Clinic, Clontarf on 24th August 2016.
Liberator's AAC Awareness Days are designed for anyone who works with or cares for a non-verbal individual – therapists, teachers, parents, carers etc. This is an excellent opportunity to learn more about:
Natural Language Development in AAC
Language Acquisition through Motor Planning (LAMP) – The Centre for AAC and Autism