Until now, people with significant physical disabilities could only operate an iPad or iPhone using switch control. With the AMAneo BTi it is possible for the first time to operate an iPad or iPhone directly with any mouse or assistive mouse, including a trackball, joystick, head mouse or thumb mouse, and even a wheelchair joystick. The AMAneo BTi also has some very useful built-in features such as a tremor filter, dwell click and two jack sockets for external switches.
Simply connect the AMAneo BTi to your iPad or iPhone via Bluetooth and the pointer will automatically appear on your device's screen, with no additional app required. This allows the user to navigate around the screen and interact with a mouse to connect with friends, browse the internet, and much more.
Tamas and Peter from route4u.org called in last week to tell us about their accessible route-finding service. Based on OpenStreetMap, Route4u allows users to plan routes that are appropriate to their level and method of mobility. Available on iOS, Android and as a web app at route4u.org/maps, Route4u is the best accessible route-planning solution I have seen. Where a service like Mobility Mojo gives detailed accessibility information on destinations (businesses, public buildings), Route4u concentrates more on the journey, making them complementary services. When first setting up the app you will be given the option to select either pram, active wheelchair, electronic wheelchair, handbike or walking (left screenshot below). You can further configure your settings later in the accessibility menu, selecting kerb heights, maximum slopes etc. (right screenshot below).
You are first asked to select your mobility method
Further configure your settings in Accessibility
This is great but so far nothing really groundbreaking; we have seen services like this before. Forward-thinking cities with deep pockets like London and Ontario have had similar accessibility features built into their public transport route planners for the last decade. That is a lot easier to achieve, however, because you are dealing with a finite number of route options. Where Route4u is breaking new ground is that it facilitates this level of planning throughout an entire city. It does this by using the technology built into smartphones to provide crowdsourced data that constantly updates the maps. If you are using a wheelchair or scooter, the sensors on your smartphone can measure the level of vibration experienced on a journey. This data is sent back to Route4u, who use it to estimate the comfort experienced on that journey, giving other users access to even more information on which to base their route choice. The user doesn't have to do anything; they are helping to improve the service by simply using it. Users can also more proactively improve the service by marking obstacles they encounter on their journey. The obstacle can be marked as temporary or permanent. Temporary obstacles like roadworks or those ubiquitous sandwich boards that litter our pavements will remain on the map, helping to inform the accessibility of the route, until another user confirms they have been removed and enters that information.
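Route4u haven't published how they turn raw sensor data into a comfort estimate, but the general idea is simple. As a rough illustration only (the function name, scale and thresholds below are my own assumptions, not Route4u's), accelerometer samples from a journey might be reduced to a single comfort score like this:

```python
import math

def comfort_score(accel_samples, gravity=9.81):
    """Estimate ride comfort from vertical accelerometer samples (m/s^2).

    Returns a score from 0 (very rough) to 100 (perfectly smooth), based
    on the RMS deviation from gravity -- a common proxy for vibration
    intensity. The 3 m/s^2 "very rough" cap is an illustrative guess.
    """
    if not accel_samples:
        raise ValueError("need at least one sample")
    # RMS of the deviation from 1 g captures the vibration energy
    rms = math.sqrt(
        sum((a - gravity) ** 2 for a in accel_samples) / len(accel_samples)
    )
    # Map RMS vibration onto a 0-100 comfort scale
    return max(0.0, 100.0 * (1.0 - min(rms, 3.0) / 3.0))

smooth_asphalt = [9.80, 9.82, 9.79, 9.81]
cobblestones = [7.0, 12.5, 8.0, 11.9, 9.0]
```

A smooth stretch of asphalt scores close to 100, while the cobblestone readings score far lower, which is exactly the kind of signal other users could base a route choice on.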
Example of obstacle added by user
If you connect Route4u to your Facebook account you get access to a points-based reward system. This allows you to compete with friends and have your own league table. In Budapest, where they are already well established, they have linked with sponsors who allow you to cash in points for more tangible rewards like a free breakfast or refreshment. These gamification features should help encourage users less inclined towards altruism to participate, and that is key. Once established, Route4u relies on its users to keep information up to date. This type of service based on crowdsourced data is a proven model, particularly in the route-planning sphere. It's a bit of a catch-22, however, as a service needs to be useful first to attract users. It is early days for Route4u in Dublin and Tamas and Peter acknowledge that a lot of work needs to be done before promoting the service here. Over the next few months their team will begin mapping Dublin city centre; this way, when they launch, there will be the foundation of an accessible route-finding service which people can use, update and build upon. While Route4u has obvious benefits for end users with mobility difficulties, there is another beneficiary of the kind of data this service will generate. Tamas and Peter were also keen to point out how this information could be used by local authorities to identify where infrastructure improvements are most needed and where investment will yield the most return. In the long run this will help Dublin and her residents tackle the accessibility problem from both sides, making it a truly smart solution.
We all know what it's like being in school when the sun is shining outside and all you can think about is being out there! Or when you're trying to get your homework done and all you can think about is who's posting what on Snapchat or Instagram? Or have you ever found yourself managing to get a study block done and then taking a well-deserved 5-minute break to take a peek at social media, only to emerge from your phone half an hour later and way behind on your study schedule? Well, the following free apps are for you! In fact, they're for anyone who wants to use their time on their computer or smartphone more productively, whether you're a student or not.
Stay Focused is a free Google Chrome extension that helps you to stay focused on your work by stopping you from looking at time-wasting websites (e.g. Instagram, Snapchat, Facebook, Twitter). You set a certain amount of time in the day that you're allowed to look at those distracting websites, and once your allotted time for the day has been used up, it blocks you out of them. End of distractions! You can also choose to have a complete block on the websites that are your major culprits for time-wasting.
This one works in a similar way to Stay Focused but it’s for the Mozilla Firefox browser instead of Chrome. You can specify up to six sets of sites to block, with different times and days for each set (e.g. you could have Twitter blocked from 9am to 5pm and Facebook blocked for all but 10 minutes in every hour).
This is one of many apps that use the timing principle behind the Pomodoro Technique (i.e. you work for 25 minutes, then take a 5-minute break, and after four of these sessions you can take a longer break of 15-30 minutes). This Google Chrome extension helps you to concentrate on your work by blocking a list of websites for the amount of time you've set, and once your working period is over, it'll unblock those sites to give you a break from work before you hit those books again!
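The Pomodoro timing rule above is simple enough to sketch as a few lines of code. This is a toy illustration of the schedule it produces, not how the extension itself is implemented:

```python
def pomodoro_schedule(sessions, work=25, short_break=5, long_break=30):
    """Return the sequence of (phase, minutes) for a number of work sessions.

    Implements the rule described above: 25 minutes of work, a 5-minute
    break, and a longer break after every fourth session.
    """
    schedule = []
    for i in range(1, sessions + 1):
        schedule.append(("work", work))
        # Every fourth session earns the longer break
        schedule.append(("break", long_break if i % 4 == 0 else short_break))
    return schedule
```

For four sessions this yields four 25-minute work blocks, three short breaks and one long break at the end, i.e. the classic Pomodoro cycle.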
Offtime is an app for iOS and Android smartphones that not only lets you block calls, texts and notifications when you're trying to work, but can also track your phone and app usage so you can identify what distracts you most. You can set different profiles, like School, Family and Me Time, and when you've finished your work, it gives you an activity log with a list of everything that happened while you were working, so you don't have to worry about missing out on anything.
So, with these apps you’ll be able to maximise your study time and even better, you’ll be able to look at all your favourite websites and apps guilt-free on your breaks!
When we use technology we need to be able to position it so that it is easy to use. We need to be able to operate the controls and have it positioned so that we can see it without eyestrain. Sometimes it's useful to mount a device, as our hands may be tied up doing something else; the device may be too heavy, or we may have a limited ability to reach, grasp or hold the device.
Some of the most common items we use are mobile phones and tablets. There are various mounting options available. The suitability of a mount depends on factors such as the fixing clamp, where you intend to mount the device, the weight of the device to be mounted, and the reach and adjustability of the mount.
Mounting systems are generally composed of (i) a fixing clamp to mount either to a flat tabletop surface or to circular tubing, (ii) an adjustable arm, usually no longer than 500 mm, and (iii) some kind of attachment or cradle to hold the device.
Below are two mounting systems which may offer you some good solutions. RAM Mounts are a mainstream supplier of mounts for electronic devices within cars, bikes and trucks. Rehadapt, on the other hand, have a range of mounting products to serve clients with "special needs". Both systems consist of a fixing clamp, an adjustable arm and a cradle to hold the device.
In choosing a fixing clamp you need to consider the surface you are fixing it to: do you need to remove the clamp often? Will the clamp be secure? Below are two clamps; their site offers more options.
Light 3D wheelchair mount with one tube and one joint with screw. The L3D-WC 2AK is a two-pole version of this. You can specify the required wheelchair clamp when ordering. Combine with any cradle using REHAdapt's Spigot Link System (SLS).
Onto the end of REHAdapt's Spigot Link System, there are various options. Some universal cradles are below.
Adjustable tablet holder for mounting any tablet from 7" to 13" to a wheelchair with Rehadapt's Universal Device Socket (UDS). The tablet mount works with Apple iPad and Samsung Galaxy tablets, with and without protective cases.
The good: It looks good. Lots of component options and excellent for mounting onto a wheelchair.
Today, May 18th, is Global Accessibility Awareness Day, and to mark the occasion Apple have produced a series of seven videos (also available with audio description) highlighting how their products are being used in innovative ways by people with disabilities. All the videos are available in a playlist here and I guarantee you, if you haven't seen them and you are interested in accessibility and AT, it'll be the best 15 minutes you have spent today! Okay, the cynical among you will point out that this is self-promotion by Apple, a marketing exercise. Certainly, on one level, of course it is: they are a company and, like any company, their very existence depends on generating profit for their shareholders. These videos promote more than Apple, however; they promote independence, creativity and inclusion through technology. Viewed in this light, these videos will illustrate to people with disabilities how far technology has moved on in recent years and make them aware of the potential benefits to their own lives. Hopefully the knock-on effect of this increased awareness will be increased demand. Demand these technologies, people; it's your right!
As far as a favorite video from this series goes, everyone will have their own. In terms of the technology on show, to me Todd “The Quadfather” below was possibly the most interesting.
This video showcases Apple’s HomeKit range of associated products and how they can be integrated with Siri.
My overall favorite video, however, is Patrick: musician, DJ and cooking enthusiast. Patrick's video is an ode to independence and creativity. The technologies he illustrates are Logic Pro (digital audio workstation software) with VoiceOver (Apple's built-in screen reader) and the object-recognition app TapTapSee, which, although it has been around for several years now, is still an amazing use of technology. It's Patrick's personality that makes the video though; this guy is going places. I wouldn't be surprised if he had his own prime-time TV show this time next year.
Last Friday (February 17th) New Scientist published an article about a new app in development at Microsoft called GazeSpeak. Due to be released over the coming months on iOS, GazeSpeak aims to facilitate communication between a person with MND (known as ALS in the US; I will use both terms interchangeably) and another individual, perhaps their partner, carer or friend. Developed by Microsoft intern Xiaoyi Zhang, GazeSpeak differs from traditional approaches in a number of ways. Before getting into the details, however, it's worth looking at the background. GazeSpeak didn't come from nowhere; it's actually one of the products of some heavyweight research into Augmentative and Alternative Communication (AAC) that has been taking place at Microsoft over the last few years. Since 2013, inspired by football legend and ALS sufferer Steve Gleason (read more here), Microsoft researchers and developers have brought the weight of their considerable collective intellect to bear on the subject of increasing the ease and efficiency of communication for people with MND.
This is an entirely new approach to increasing the efficiency of AAC, and one that, I suggest, could only have come from a large mainstream tech organisation with over thirty years' experience of facilitating communication and collaboration.
Another Microsoft research paper published last year (with some of the same authors as the previous paper), called "Exploring the Design Space of AAC Awareness Displays", looks at the importance of a communication partner's "awareness of the subtle, social, and contextual cues that are necessary for people to naturally communicate in person". Their research focused on creating a display that would allow the person with ALS to express things like humor, frustration, affection etc., emotions difficult to express with text alone. Yes, they proposed the use of emoji, a proven and effective way of overcoming a similar difficulty in remote or non-face-to-face interactions, but they went much further and also looked at solutions like avatars, skins and even coloured LED arrays. This, like the paper above, is academic and as such not an easy read, but the ideas and solutions being proposed by these researchers are practical and will hopefully filter through to end users of future AAC solutions.
That brings us back to GazeSpeak, the first fruits of the Microsoft/Steve Gleason partnership to reach the general public. Like the AACrobat solution outlined above, GazeSpeak gives the communication partner a tool rather than focusing on tech for the person with MND. As the image below illustrates, the communication partner would have GazeSpeak installed on their phone and, with the app running, would hold their device up to the person with MND as if photographing them. They suggest a sticker with four grids of letters is placed on the back of the smartphone facing the speaker. The app then tracks the person's eyes: up, down, left or right; each direction means the letter they are selecting is contained in the grid in that direction (see photo below).
Similar to how the old T9 predictive text worked, GazeSpeak selects the appropriate letter from each group and predicts the word based on the most common English words. So the app is using AI in the form of machine vision to track the eyes and also to make the word prediction. In the New Scientist article they mention that the user would be able to add their own commonly used words and people/place names which one assumes would prioritize them within the prediction list. In the future perhaps some capacity for learning could be added to further increase efficiency. After using this system for a while the speaker may not even need to see the sticker with letters, they could write words from muscle memory. At this stage a simple QR code leading to the app download would allow them to communicate with complete strangers using just their eyes and no personal technology.
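The T9-style disambiguation described above is easy to sketch. GazeSpeak's actual letter groupings and prediction engine aren't public, so the four grids and the tiny frequency-ranked word list below are my own toy assumptions; the logic, though, is the same idea: each gaze direction narrows the letter to one group, and the word is disambiguated against a ranked vocabulary.

```python
# Hypothetical letter grids, one per gaze direction (GazeSpeak's real
# groupings may differ).
GRIDS = {
    "up": "abcdef",
    "down": "ghijkl",
    "left": "mnopqr",
    "right": "stuvwxyz",
}

def matches(gaze_sequence, vocabulary):
    """Return vocabulary words consistent with a sequence of gaze directions.

    Like T9, each input is ambiguous (a whole letter group), so several
    words may fit; keeping the vocabulary frequency-ranked means the
    first match is the most likely word.
    """
    def fits(word):
        return len(word) == len(gaze_sequence) and all(
            ch in GRIDS[d] for ch, d in zip(word, gaze_sequence)
        )
    return [w for w in vocabulary if fits(w)]

# Frequency-ranked toy vocabulary (most common first)
vocab = ["the", "and", "cat", "tie"]
print(matches(["right", "down", "up"], vocab))  # → ['the', 'tie']
```

Here the sequence right-down-up is ambiguous between "the" and "tie", and the ranking resolves it to "the"; adding a user's own names and common words to the front of the list would prioritise them in exactly the way the article suggests.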
UPDATE (August 2018): GazeSpeak has been released for iOS and is now called SwipeSpeak. Download here. For more information on how it works or to participate in further development have a look at their GitHub page here.
There is of course some crossover between the different AT highlights of 2016 I have included here. An overall theme running through all the highlights this year is the mainstreaming of AT. Apple, Google and Microsoft have all made significant progress in the areas previously mentioned: natural language understanding and smart homes. This has led to easier access to computing devices and, through them, the ability to automate and remotely control devices and services that assist us with daily living tasks around the house. However, these developments are aimed at the mainstream market, with advantages to AT users being a welcome additional benefit. What I want to look at here are the features they are including in their mainstream products specifically aimed at people with disabilities, with the goal of making their products more inclusive. Apple have always been strong in this area and have led the way now for the last five years. 2016 saw them continue this fine work with new features such as Dwell within macOS and Touch Accommodations in iOS 10, as well as many other refinements of already existing features. Along with Siri, Apple have also brought Switch Control to Apple TV, either using a dedicated Bluetooth switch or through a connected iOS device in a method they are calling Platform Switching. Platform Switching, which also came out this year with iOS 10, "allows you to use a single device to operate any other devices you have synced with your iCloud account. So you can control your Mac directly from your iPhone or iPad, without having to set up your switches on each new device" (the devices need to be on the same Wi-Fi network). The video below from Apple really encapsulates how far they have come in this area and how important this approach is.
Not to be outdone, Microsoft bookended 2016 with some great features in the area of literacy support, an area they had perhaps neglected for a while. They more than made up for this last January with the announcement of Learning Tools for OneNote. I'm not going to go into details of what Learning Tools offers, as I have covered it in a previous post. All I'll say is that it is free, it works with OneNote (also free and a great note-taking and organisation support in its own right) and is potentially all many students would need by way of literacy support (obviously some students may need additional supports). Then in the fourth quarter of the year they updated their OCR app Office Lens for iOS to provide the Immersive Reader (text-to-speech) directly within the app.
Finally, Google, who would probably have the weakest record of the big three in terms of providing built-in accessibility features (to be fair, they always followed a different approach which proved to be equally effective), really hit a home run with their Voice Access solution, which was made available for beta testing this year. Again, I have discussed this in a previous post here, where you can read about it in more detail. Having tested it I can confirm that it gives complete voice access to all of an Android device's features, as well as any third-party apps I tested. Using a combination of direct voice commands (Open Gmail, Swipe left, Go Home etc.) and a system of numbering buttons and links, even obscure apps can be operated. The idea of using numbers for navigation, while not new, is extremely appropriate in this case: numbers are easily recognised regardless of voice quality or regional accent. Providing alternative access and supports to mainstream operating systems is the cornerstone of recent advances in AT. As the previous video from Apple showed, access to smartphones or computers gives access to a vast range of services and activities. For example, built-in accessibility features like Apple's Switch Control or Google's Voice Access open up a range of mainstream smart home and security devices and services to people with alternative access needs, where before they would have had to spend a lot more for a specialist solution that would probably have been inferior.
As we approach the end of 2016 it's an appropriate time to look back and take stock of the year from an AT perspective. A lot happened in 2016, not all of it good. Socially, humanity seems to have regressed over the past year. Maybe this short-term, inward-looking protectionist sentiment has been brewing for longer, but 2016 brought the opportunity to express it politically; you know the rest. While society steps and looks back, technology continues to leap and bound forward, and 2016 has seen massive progress in many areas, particularly those associated with Artificial Intelligence (AI) and smart homes. This is the first in a series of posts examining some technology trends of 2016 and a look at how they affect the field of Assistive Technology. The links will become active as the posts are added. If I'm missing something, please add it to the comments section.
So although 2016 is unlikely to be looked on kindly by future historians (you know why), it has been a great year for Assistive Technology, perhaps one of promise rather than realisation however. One major technology trend of 2016 missing from this series of posts is Virtual (or Augmented) Reality. While VR was everywhere this year, with products coming from Sony, Samsung, Oculus and Microsoft, its usefulness beyond gaming is only beginning to be explored (particularly within education).
So what are the goals for next year? Well, harnessing some of these innovations in a way that makes them accessible and usable by people with disabilities at an affordable price. If in 2017 we can start putting some of this tech into the hands of those who stand to benefit most from its use, then next year will be even better.
Microsoft announced earlier this week that they are building on the success of their much acclaimed literacy support suite for OneNote, "Learning Tools", by making some of its features available within other products. First though, if you haven't come across Learning Tools for OneNote, take a look at the video below for an outline of what it offers. Take it away, Jeff…
As you can see from the video, offering text-to-speech (TTS) with highlighting, easy-to-read fonts on distraction-free, high-visibility backgrounds, as well as the comprehension supports, Learning Tools could be very useful to those who need a little assistance with text-based content. Learning Tools was originally only available for the version of OneNote which comes bundled with Office 2013 and 2016. However, earlier this week Microsoft announced that they are bringing some features to other apps, the most interesting and potentially useful of these being Office Lens and Word. Office Lens is already a very useful multi-platform app with powerful optical character recognition (OCR) capabilities which allow you to photograph a document and have it converted to editable text. Now, with the addition of the Immersive Reader functionality of Learning Tools, you can photograph a document, export it to Immersive Reader and immediately use the tools mentioned above to support your understanding of the text. For the moment this feature is only available on Office Lens for iOS, but my understanding is that they intend to gradually roll it out to other platforms.
Within Word, even more functionality is offered through the new Editor feature. This includes dictionary supports such as synonyms of suggested corrections for misspelled words that can be read aloud with TTS, and additional support for commonly confused words. I'll leave it to Jeff again for a full review of the new features (video below).
The very cleverly named TapTapSee is an app (you may have noticed, I like apps a lot!) which allows you to identify objects by simply taking a picture. Once you've taken the picture, the app searches through a huge database of objects and brand names to find a match for your picture. The app then tells you what it sees.
I tend to use it when I need quick information, such as the flavour of a tin of soup or the colour of a piece of clothing, so it’s not an app which can give a lot of detail – but the detail it can give can be remarkably accurate.
It does also take a little time to get used to where exactly to point the camera, especially if you're blind from birth (as I am), but the app is free to use, so you don't need to worry about the number of pictures you take.
The app also has a handy feature which allows you to use it to identify photos in your library, which I really like if I want to put a photo on Facebook but can't remember which one I want to use.
So, all in all, I'd really recommend having a play with this app. Have fun!