Alongside the announcement about the upcoming Assistive Access feature (see post here), Apple announced two more accessibility features that at any other time would both have attracted well-deserved attention. Live Speech is built-in, text-based AAC. AAC stands for Augmentative and Alternative Communication, the area of AT that supports people with communication difficulties. If someone is unable to speak, or their speech is impaired and difficult for others to understand, they can use AAC to communicate. Stephen Hawking was probably the most famous AAC user.
AAC is a well-established area of AT, so what are Apple adding with Live Speech? Ten years ago most people used dedicated AAC devices, but in recent years software/app-based AAC has become far more common. Most of these apps are made for Apple's iOS/iPadOS platform and sold through the App Store. Live Speech is a text-based AAC system built right into Apple devices (both iOS and macOS). It can be used in person like a standard AAC device or during FaceTime calls. This second part is the most exciting. Using AAC on audio and video calls has always been challenging: in practice, many people used two devices, generating speech on one while using the other for the call. This is far from an ideal solution; the audio quality can be poor, and requiring two devices is a deal breaker for many people. Some platforms (Zoom) allow you to share your computer's sound without sharing your screen. This is a better solution, but still a workaround. Up until now the best option (and the only option for Microsoft Teams) has been to use a virtual audio cable such as VB-Audio or some other additional software. This, however, is a bit too technical for some people.
When Live Speech becomes available, Apple device users will be able to use it to communicate through FaceTime on their device. The press release doesn't mention whether this feature will be extended to those who use other AAC apps on their Apple device, although it would be expected that users of apps like Proloquo2go or Grid for iPad will be able to access this functionality from their chosen AAC app through an API in a future update. It also doesn't mention compatibility with other video-calling services like Teams and Zoom.
Will Live Speech replace the need for dedicated AAC apps? For many it won't, and it's probably not intended to. For those looking for a text-based AAC solution, however, it's a great place to start. One feature that might convince people to switch to Live Speech is that it seems it can be used right from the lock screen. This alone will make it extremely useful, allowing people to participate in spontaneous communication without having to unlock the device and find the app. This feature is not mentioned anywhere in the press release, but it can be seen in one of the screenshots Apple released (below).
Alongside Live Speech, Apple also announced Personal Voice. Personal Voice enables you to create, on your Apple device, a synthetic voice that sounds like you from just 15 minutes of recordings.
Again, this is not new. Voice banking has been around for a while now, and there are many services that allow you to build your own synthetic voice based on recordings. Long-time readers of AT and Me might remember that we reviewed ModelTalker back in 2019 and Voice Keeper a year later. What Apple have done with Personal Voice is make the process easier and add it to the OS as an accessibility feature, thereby providing it for free. Previously, apart from ModelTalker, voice banking services were quite expensive. It is also more secure, because the Personal Voice is recorded and created locally on the device. Personal Voice will work with Live Speech, but Apple do not mention whether users of other AAC apps will be able to avail of the service; hopefully they will be.