Hands-free Minecraft from Special Effect

Love it or hate it, the game of Minecraft has captured the imagination of over 100 million young, and not so young, people. It is available on multiple platforms: mobile devices (Pocket Edition), Raspberry Pi, computer, Xbox and PlayStation, and it looks and feels pretty much the same on all of them. For those of us old enough to remember, the blocky graphics will hold some level of nostalgia for the bygone 8-bit days, when mere blobs of colour and our imagination were enough to render Ghosts and Goblins vividly. This is almost certainly lost on the main cohort of Minecraft players, however, who would most probably be bored silly by the two-dimensional, repetitive and predictable video games of the ’80s and early ’90s.

The reason Minecraft is such a success is that it has blended its retro styling with modern gameplay and a mind-bogglingly massive open world where no two visits are the same and there is room for self-expression and creativity. This latter quality has led it to become the first video game to be embraced by mainstream education, where it is used as a tool for teaching everything from history to health and empathy to economics. It is, however, the former quality, the modern gameplay, that we are here to talk about.

Unlike the aforementioned Ghosts and Goblins, Minecraft is played in a three-dimensional world using either a first-person perspective (you see through the character’s eyes) or a third-person perspective (as if a camera were hovering above and slightly behind the character). While this undoubtedly offers a more immersive and realistic experience, it also means controlling the character and playing the game is much more complex, requiring a high level of dexterity in both hands to be successful. For people without the required level of dexterity, this means not only a risk of social exclusion, being unable to participate in an activity so popular among their peers, but also the possibility of being excluded within an educational context.

Fortunately, UK-based charity Special Effect have recognised this need and are in the process of doing something about it. Special Effect are a charity dedicated to enabling those with access difficulties to play video games through custom access solutions. Since 2007 their interdisciplinary team of clinical and technical professionals (and, of course, gamers) has been responsible for a wide range of bespoke solutions based on individuals’ unique abilities and requirements. Take a look at this page for more information on the work they do and to see what a life-enhancing service they provide. The problem with this approach, of course, is reach, which is why their upcoming work on Minecraft is so exciting. Based on Optikey, the open-source eyegaze AAC/computer access solution by developer Julius Sweetland, Special Effect are in the final stages of developing an on-screen Minecraft keyboard that will work with low-cost eye trackers like the Tobii EyeX and the Tracker 4C (€109 and €159 respectively).

The inventory on-screen keyboard

The main Minecraft on-screen keyboard

Currently called ‘Minekey’, this solution will allow Minecraft to be played using a pointing device like a mouse or joystick, or even totally hands-free using an eyegaze device or headmouse. The availability of this application will ensure that Minecraft is now accessible to many of those who have previously been excluded. Special Effect were kind enough to let us trial a beta version of the software, and although I’m no Minecraft expert it seemed to work great. The finished software will offer a choice of on-screen controls: one with smaller buttons and more functionality for expert eyegaze users (pictured above), and a more simplified version with larger targets. Bill Donegan, Projects Manager with Special Effect, told us they hope to have it completed and available to download for free by the end of the year. I’m sure this is news that will excite many people out there who had written off Minecraft as something just not possible for them. Keep an eye on Special Effect or ATandMe for updates on its release.

Tobii Tracker 4C

Tobii Gaming, the division of Swedish technology firm Tobii responsible for mainstream (and therefore low-cost) eye trackers, have released a new peripheral called the Tracker 4C (pictured below).

Contents of the Tracker 4C box: eye tracker, documentation, and two magnetic mounts

Before getting into the details of this new device, I first want to highlight that although this eye tracker can be used as a computer access solution for someone with a disability (it already works with Optikey and Project IRIS), it is not being marketed as such. What this means in practice is that it may not provide the reliability of Tobii’s much costlier Assistive Technology (AT) eye trackers, such as the Tobii PC Eye Mini. So if eye tracking is your only means of communication or computer access and you have the funds, I would recommend spending the extra money. That said, many people don’t have the funds, or perhaps they have other more robust means of computer access and just want to use eye tracking for specific tasks like creating music or gaming. For those people the Tracker 4C is really good news, as it packs a lot into its €159 price tag and overcomes many of the weaknesses of its predecessor, the Tobii EyeX. The big improvement over the EyeX is the inclusion of the EyeChip. The EyeChip, which was previously only included in Tobii’s much more expensive range of AT eye trackers, takes care of most of the data processing before sending it on to the computer. The result is that much less data is passed from the eye tracker to the computer (100 KB/s compared to 20 MB/s) and the CPU (Central Processing Unit) load is much lower (1% compared to 10%). This allows it to work over an older USB 2 connection and means most computers, even budget ones, should have no problem running this device (unlike the EyeX, which required a high-end PC).
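To put those transfer rates in perspective, here is a rough back-of-the-envelope calculation (my own illustration, not Tobii’s published methodology) of what the quoted rates mean per sample at the Tracker 4C’s 90 Hz sampling frequency:

```python
# Back-of-the-envelope comparison of per-sample USB payloads, using the
# quoted aggregate rates (20 MB/s without EyeChip, 100 KB/s with it).
# Assumption: data is spread evenly across samples at 90 Hz.

SAMPLE_RATE_HZ = 90

# Without an EyeChip the host does the image processing, so something close
# to raw sensor data has to cross the USB bus for every sample.
raw_bytes_per_sample = 20e6 / SAMPLE_RATE_HZ

# With the EyeChip only the processed result (gaze coordinates, timestamps,
# eye positions) is sent on - a few hundred bytes per sample.
processed_bytes_per_sample = 100e3 / SAMPLE_RATE_HZ

print(f"Without EyeChip: ~{raw_bytes_per_sample / 1024:.0f} KB per sample")
print(f"With EyeChip:    ~{processed_bytes_per_sample:.0f} bytes per sample")
print(f"Reduction:       ~{20e6 / 100e3:.0f}x less USB traffic")
```

That roughly 200-fold reduction is what lets the 4C live comfortably within USB 2.0 bandwidth and a 1% CPU load.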

All this must have come at some compromise in performance, right? Wrong. The Tracker 4C actually beats the EyeX in almost every category. Frequency has risen from 70 Hz to 90 Hz, a slightly longer operating distance of 0.95 m is possible, and the maximum screen size has increased by 3 inches to 30 inches. This last stat could be the deciding factor that convinces Tobii PC Eye Mini users to buy the Tracker 4C as a secondary device, as the Mini only works with a maximum screen size of 19 inches. The Tracker 4C also offers head tracking, but as I haven’t tested the device I’m unsure how this works or whether it is something that could be utilised as AT. Watch this space: the Tracker 4C is on our shopping list and I’ll post an update as soon as we get to test whether it’s as impressive in real life as it seems on paper.

The table below compares specs for all of Tobii’s current range of consumer eye trackers. In areas where information was not available I have added a question mark and, where appropriate, a speculation. I am open to correction.

| Eye Tracker Model | Tobii Eye Tracker 4C (Gaming) | Tobii EyeX* (Gaming) | Tobii PC Eye Explore (Assistive Technology) | Tobii PC Eye Mini (Assistive Technology) |
|---|---|---|---|---|
| Cost | €159 | €109 | €680 | €2000 |
| Size | 17 × 15 × 335 mm (0.66 × 0.6 × 13.1 in) | 20 × 15 × 318 mm (0.8 × 0.6 × 12.5 in) | 20 × 15 × 318 mm (0.8 × 0.6 × 12.5 in) | 170 × 18 × 13 mm (6.69 × 0.71 × 0.51 in) |
| Weight | 91 g | 91 g | 69 g | 59 g |
| Max Screen Size | 27 in at 16:9, 30 in at 21:9 | 27 in | 27 in | 19 in |
| Operating Distance | 50–95 cm (20–37 in) | 50–90 cm (20–35 in) | 45–80 cm (18–32 in) | 45–80 cm (18–32 in) |
| Track Box Dimensions | 40 × 30 cm at 75 cm (16 × 12 in at 29.5 in) | 40 × 30 cm at 75 cm (16 × 12 in at 29.5 in) | 48 × 39 cm (19 × 15 in) | >35 × 30 cm ellipse (>13.4 × 11.8 in) |
| Tobii EyeChip | Yes | No | No | Yes |
| Connectivity | USB 2.0 (integrated cord, USB 2.0 BC 1.2) | USB 3.0 (separate cord) | USB 3.0 | USB 2.0 |
| USB Cable Length | 80 cm | 180 cm | 180 cm | ? (short; extension needed in some situations) |
| Head Tracking | Yes (not powered by EyeChip) | No | No | No |
| OS Compatibility | Windows 7, 8.1 and 10 (64-bit only) | Windows 7, 8.1 and 10 (64-bit only) | Windows 7, 8.1 and 10 (64-bit only) | Windows 7, 8.1 and 10 |
| CPU Load | 1%* | 10% | 10% | ? (unconfirmed but likely similar to the Tracker 4C) |
| Power Consumption | 1.5 W | 4.5 W | ? (unconfirmed but suspect same as the EyeX) | 1.5 W |
| USB Data Transfer Rate | 100 KB/s | 20 MB/s | ? (unconfirmed but suspect same as the EyeX) | ? (unconfirmed but likely similar to the Tracker 4C) |
| Frequency | 90 Hz | 70 Hz | 55 Hz | 60 Hz |
| Illuminators | Near-infrared (NIR 850 nm) only | Backlight-assisted near-infrared (NIR 850 nm + red light, 650 nm) | ? (unconfirmed but suspect same as the EyeX) | ? |
| Tracking Population | 97% | 95% | ? (unconfirmed but suspect same as the EyeX) | ? |
| Additional Software | Tobii Eye Tracking Core Software | Tobii Eye Tracking Core Software | Gaze Point (mouse emulation software) | Windows Control |

* The specs given here are taken from those listed at https://help.tobii.com/hc/en-us/articles/212814329-What-s-the-difference-between-Tobii-Eye-Tracker-4C-and-Tobii-EyeX- (accessed 08/03/2017). Because the weight listed is 91 grams, I suspect these specs are for the first-generation Tobii EyeX (the more recent EyeX weighs 69 grams). The current EyeX specs are probably similar to the PC Eye Explore, but I cannot confirm this.

GazeSpeak & Microsoft’s ongoing efforts to support people with Motor Neuron Disease (ALS)

Last Friday (February 17th) New Scientist published an article about a new app in development at Microsoft called GazeSpeak. Due for release on iOS over the coming months, GazeSpeak aims to facilitate communication between a person with MND (known as ALS in the US; I will use both terms interchangeably) and another individual, perhaps their partner, carer or friend. Developed by Microsoft intern Xiaoyi Zhang, GazeSpeak differs from traditional approaches in a number of ways. Before getting into the details, however, it’s worth looking at the background. GazeSpeak didn’t come from nowhere; it’s actually one of the products of some heavyweight research into Augmentative and Alternative Communication (AAC) that has been taking place at Microsoft over the last few years. Since 2013, inspired by American football legend and ALS sufferer Steve Gleason (read more here), Microsoft researchers and developers have brought the weight of their considerable collective intellect to bear on increasing the ease and efficiency of communication for people with MND.

Last year Microsoft Research published a paper called “AACrobat: Using Mobile Devices to Lower Communication Barriers and Provide Autonomy with Gaze-Based AAC” (abstract and PDF download at the previous link), which proposed a companion app to allow an AAC user’s communication partner to assist (in a non-intrusive way) in the communication process. Take a look at the video below for a more detailed explanation.

This is an entirely new approach to increasing the efficiency of AAC, and one that, I suggest, could only have come from a large mainstream tech organisation with over thirty years’ experience facilitating communication and collaboration.

Another Microsoft Research paper published last year (with some of the same authors as the previous paper), called “Exploring the Design Space of AAC Awareness Displays”, looks at the importance of a communication partner’s “awareness of the subtle, social, and contextual cues that are necessary for people to naturally communicate in person”. Their research focused on creating a display that would allow the person with ALS to express things like humour, frustration and affection: emotions that are difficult to express with text alone. Yes, they proposed the use of Emoji, a proven and effective way of overcoming a similar difficulty in remote or non-face-to-face interactions, but they went much further and also looked at solutions like avatars, skins and even coloured LED arrays. This, like the paper above, is an academic paper and as such not an easy read, but the ideas and solutions being proposed by these researchers are practical and will hopefully filter through to end users of future AAC solutions.

That brings us back to GazeSpeak, the first fruit of the Microsoft/Steve Gleason partnership to reach the general public. Like the AACrobat solution outlined above, GazeSpeak gives the communication partner a tool rather than focusing on tech for the person with MND. As the image below illustrates, the communication partner has GazeSpeak installed on their phone, and with the app running they hold their device up to the person with MND as if they were photographing them. A sticker with four grids of letters is placed on the back of the smartphone, facing the speaker. The app then tracks the person’s eyes: up, down, left or right, with each direction indicating that the letter they are selecting is contained in the grid in that direction (see photo below).

A man looks to the right while another person holds up a smartphone running GazeSpeak

Similar to how the old T9 predictive text worked, GazeSpeak selects the appropriate letter group for each gaze direction and predicts the word based on the most common English words. So the app is using AI in the form of machine vision to track the eyes, and also to make the word prediction. In the New Scientist article they mention that the user will be able to add their own commonly used words and people/place names, which one assumes would prioritise them within the prediction list. In the future perhaps some capacity for learning could be added to further increase efficiency. After using this system for a while, the speaker may not even need to see the sticker with the letters; they could write words from muscle memory. At that stage a simple QR code leading to the app download would allow them to communicate with complete strangers using just their eyes and no personal technology.
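To make the T9 comparison concrete, here is a minimal sketch of direction-based word disambiguation. The four-way letter grouping and the tiny word list are assumptions of mine for illustration only; GazeSpeak’s actual letter layout and prediction model haven’t been published in detail.

```python
# Toy sketch of T9-style prediction over four letter groups, one per gaze
# direction. The grouping below is hypothetical, not GazeSpeak's real layout.
from collections import defaultdict

GROUPS = {
    "up":    set("abcdef"),
    "right": set("ghijkl"),
    "down":  set("mnopqr"),
    "left":  set("stuvwxyz"),
}

def directions(word):
    """Encode a word as the gaze-direction sequence that selects it."""
    return tuple(
        next(d for d, letters in GROUPS.items() if ch in letters)
        for ch in word
    )

# A tiny stand-in lexicon, ordered most-frequent first. The real app would
# use a large frequency-ranked dictionary plus the user's own words.
LEXICON = ["the", "them", "then", "water", "hello"]

index = defaultdict(list)
for word in LEXICON:
    index[directions(word)].append(word)

def predict(gaze_sequence):
    """Return candidate words for a direction sequence, best guess first."""
    return index.get(tuple(gaze_sequence), [])

print(predict(directions("the")))   # ['the']
print(predict(directions("then")))  # ['them', 'then'] - ambiguous, so both
```

Because several letters share each direction, different words can produce the same gaze sequence; ranking the candidates by frequency is what resolves the ambiguity, exactly as T9 did on numeric keypads.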

New website to support people to use keyboard shortcuts

Girl typing on computer keyboard

Sharon’s Shortcuts is a new educational resource for people who primarily use keyboard shortcuts to access a computer. The site contains different sections covering common tasks carried out on a PC. All the keyboard shortcuts mentioned on the site are standard Windows shortcuts that anyone can use.

While it’s easy to find plenty of mouse-based tutorials and step-by-step instructions for using a PC, this unique website gives step-by-step instructions for using a PC without the mouse, making it a particularly useful resource for screen reader users.

Sharon has over 10 years’ experience supporting people with a vision impairment and also provides one-to-one tutoring sessions for specific IT skills, getting to grips with work-based systems, or a programme of study towards a qualification like the ECDL.

In this blogpost Sharon discusses her website http://sharons-shortcuts.ie/ and her tutoring services.

Listen to Sharon’s blogpost

AbilityNet’s Robin Christopherson honoured for services to digital inclusion with MBE

How refreshing to see that digital inclusion gets the nod in this year’s UK honours, with AbilityNet’s Robin Christopherson receiving a well-deserved MBE for his longstanding contribution to the world of accessibility, and digital accessibility in particular.

Here he is speaking at the Tech4Good Awards in July 2016, living and breathing digital inclusion in his daily life:

Dawn of the Personal Digital Assistants

Speech Recognition has been around a long time by technology standards; however, up until about 2010 most of that time was spent languishing in Gartner’s wonderfully named “Trough of Disillusionment”. This was partly because the technology hadn’t matured enough, and people were frustrated and disappointed when it didn’t live up to expectations, a common phenomenon captured by the Hype Cycle alluded to above. There are a couple of reasons why Speech Recognition took so long to mature. It’s a notoriously difficult technical feat that requires sophisticated AI and significant processing power to achieve consistently accurate results. The advances in processing power were easy enough to predict thanks to Moore’s Law. Progress in the area of AI was a different story entirely. Speech Recognition relies first on pattern recognition, but that only takes it so far. To improve its accuracy, improvements in the broader area of natural language processing were needed. Thanks to the availability of massive amounts of data via the World Wide Web, much of it coming from services like YouTube, we have seen significant advances in recent years. However, there is also a human aspect to the slow uptake of speech-driven user interfaces: people just weren’t ready to talk to computers. 2016 is the year that started to change.

Siri (Apple), who was first on the scene and is now five years old and getting smarter all the time, came to macOS and Apple TV this year. Cortana (Microsoft) started on Windows Phone, moved to the desktop with Windows 10, made her way onto Xbox One, Android and iOS, and is soon to be embodied in all manner of devices according to reports. Unlike Siri, Cortana is a much more sociable personal digital assistant, willing to work and play with anyone. By this I mean Microsoft have made it much easier for Cortana to interact with other apps and services, and will be launching the Cortana Skills Kit early next year. As we’ve seen in the past, it’s this kind of openness and interoperability that takes technologies in directions not envisaged and often leads to adaptation and adoption as personal AT.

If there were a personal digital assistant of the year award, however, Amazon Echo and Alexa would get it for 2016. Like Microsoft, Amazon have made their Alexa service easy for developers to interact with, and many manufacturers of Smart Home products have jumped at the opportunity. It is the glowing reviews from all quarters, however, that make the Amazon Echo stand out (from a self-proclaimed Luddite at the New Yorker to the geeks at CNET).

Last but not least we have Google. What Google’s personal digital assistant lacks in personality (no name?) it makes up for with stunning natural language capabilities and an eerie knack of knowing what you want before you do. Called Google Now on smartphones (or just the Google App? I’m confused!), similar functionality, without some of the context relevance, is available through Voice Search in Chrome. They also offer voice to text in Google Docs, which this year has been much improved with the addition of a range of editing commands. There is also the new Voice Access feature for Android, currently in beta testing, but more on that later. In the hotly contested area of the Smart Home, Google also have a direct competitor to Amazon’s Echo in their Google Home smart speaker. Google are a strong player in this area; my only difficulty (and it is an actual difficulty) is saying “OK Google”. Rather than rolling off the tip of my tongue, it kind of catches at the back, requiring me to use muscles normally reserved for sucking Polo mints. Even though more often than not I mangle this trigger phrase, it always works, and that’s impressive.

So who is missing? There is one organisation, conspicuous by its absence, with the resources in terms of money, user data and technology, and already positioned in that “personal” space. Facebook would rival Google in the amount of data they have at their disposal from a decade of video, audio and text: the raw materials for natural language processing. If we add to this what Facebook knows about each of its users (what they like, their family, friends and relationships, calendar, history, interests) you get more than a Personal Digital Assistant; maybe Omnipersonal Digital Assistant would be more accurate. The video below, which was only released today (21/12/16), is of course meant as a joke (there are any number of things I could add here but I’ll leave it to the Guardian). All things considered, however, it’s only a matter of time before we see something coming out of Facebook in this area, and it will probably take things to the next level (just don’t expect it to be funny).

What does this all mean for AT? At the most basic level, Speech Recognition provides an alternative to the keyboard/mouse/touchscreen method of accessing a computer or mobile device, and the more robust and reliable it is, the more efficiently it can be used. It is now a viable alternative, and this will make a massive difference to the section of our community who can use their voice but, for any number of reasons, cannot use other access methods. Language translation can be accurately automated, even in real time, like the translation feature Skype launched this year. At the very least this kind of technology could provide real-time subtitling, but the potential is even greater. It’s not just voice access that benefits from these advances, however: Personal Digital Assistants can also be interacted with using text. Speech Recognition is only one part of the broader area of Natural Language Processing, and advances there lead directly to fewer clicks and less menu navigation. Microsoft have used this to great effect in the new “Tell me what you want to do” feature in their Office range. Rather than looking through help files or searching through menus, you just type what tool you are looking for, in your own words, and it serves it right up!
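As a toy illustration of the idea (and only the idea: the real feature uses far richer natural language understanding than the naive keyword overlap below, and the command names here are hypothetical), matching a free-text request to a command might look like this:

```python
# Naive keyword-overlap matcher - a deliberately simple stand-in for the
# natural language understanding behind features like "Tell me what you
# want to do". Command names and keyword sets are made up for illustration.

COMMANDS = {
    "insert_table":  {"insert", "add", "table", "grid"},
    "track_changes": {"track", "review", "changes", "revisions"},
    "word_count":    {"count", "how", "many", "words", "length"},
}

def match(query):
    """Return the command whose keywords best overlap the query words."""
    words = set(query.lower().split())
    scores = {cmd: len(words & kws) for cmd, kws in COMMANDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

print(match("add a table"))     # insert_table
print(match("how many words"))  # word_count
```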

Natural Language Processing will also provide faster and more accurate results for web searches, because there is a better understanding of the actual content rather than a reliance on keywords. In a similar way, this technology is working to provide increased literacy supports, as the computer is better able to understand what you mean from what you type. Large blocks of text can be summarised, and alternative phrasing can be suggested to increase text clarity. Again, the new Editor feature in Microsoft Word is made possible by this level of natural language understanding.
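For a feel of how the simplest kind of summarisation works, here is a minimal frequency-based extractive sketch. It is a toy of my own, far cruder than whatever Word’s Editor actually does, but it shows the basic idea of scoring and selecting existing sentences:

```python
# Minimal extractive summariser: score each sentence by the document-wide
# frequency of its words, then keep the top scorers in original order.
import re
from collections import Counter

def summarise(text, n_sentences=1):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence):
        # Longer sentences naturally score higher; real systems normalise.
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    top = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
    return " ".join(s for s in sentences if s in top)

text = ("Speech recognition has matured quickly. Natural language processing "
        "underpins speech recognition. Natural language processing also "
        "enables summarisation and clearer phrasing.")
print(summarise(text))
```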

2016 – Technology Trends and Assistive Technology (AT) Highlights

As we approach the end of 2016, it’s an appropriate time to look back and take stock of the year from an AT perspective. A lot happened in 2016, not all of it good. Socially, humanity seems to have regressed over the past year. Maybe this short-term, inward-looking protectionist sentiment has been brewing for longer, but 2016 brought the opportunity to express it politically; you know the rest. While society steps and looks back, technology continues to leap and bound forward, and 2016 has seen massive progress in many areas, particularly those associated with Artificial Intelligence (AI) and Smart Homes. This is the first in a series of posts examining some technology trends of 2016 and how they affect the field of Assistive Technology. The links will become active as the posts are added. If I’m missing something, please add it to the comments section.

Dawn of the Personal Digital Assistants

Game Accessibility

Inbuilt Accessibility – AT in mainstream technology 

Software of the Year – The Grid 3

Open Source AT Hardware and Software

The Big Life Fix

So although 2016 is unlikely to be looked on kindly by future historians (you know why), it has been a great year for Assistive Technology, perhaps one of promise rather than realisation. One major technology trend of 2016 missing from this series of posts is Virtual (and Augmented) Reality. While VR was everywhere this year, with products coming from Sony, Samsung, Oculus and Microsoft, its usefulness beyond gaming is only beginning to be explored (particularly within education).

So what are the goals for next year? Harnessing some of these innovations in ways that make them accessible and usable by people with disabilities at an affordable price. If in 2017 we can start putting some of this tech into the hands of those who stand to benefit most from its use, then next year will be even better.

New podcast: Assistive Technology need not be expensive

National Learning Network provides a range of flexible training programmes and support services for people who need specialist support (job seekers, the unemployed, and people with an illness or disability) in 50 centres around the country.

This is an interview with Kieran Hanrahan at the Community Hub for Assistive Technology (CHAT) meeting. Kieran shows how exploring what is already available and collaborating with different organisations can help create assistive technology solutions. His talk argued that assistive technology need not be expensive: we can use technology that is already available to people, and design assistive technologies with universal design in mind so that technology works for all user needs.

Listen to Kieran’s interview on the podcast page.

New podcast: Stuart Lawler talks about the Community of Practice

NCBI: working for people with sight loss

This podcast is an audio recording of an interview with Stuart Lawler. Stuart is a rehabilitation service manager at NCBI, and he talks about the Community of Practice and what it means to the NCBI. The Community of Practice group is organised by the Disability Federation of Ireland (DFI), the national support organisation for the voluntary disability organisations in Ireland that provide services to people with disabilities and disabling conditions. Stuart tells us why the NCBI were happy to get involved in the Community of Practice CHAT meetings and how he sees it working for the rehabilitation service.

Listen to this interview on the Podcast page

Stuart also has his own regular podcast, which you can hear at https://www.ncbi.ie/category/technologypodcast/