With the news that Stephen Hawking's revolutionary ACAT communications system is being made open-source and available freely to the public and developers alike, it's a great time to revisit this Giz feature from yesteryear, where we had the privilege of meeting the incredible professor, and the team keeping him talking.
Were it not for technology, one of the greatest minds of our time (that belonging to Professor Stephen Hawking) would lack a voice, a means to express ideas that greatly expand our understanding of the universe. And while Hawking's synthesised voice may be one of the most instantly recognisable in the world, his debilitating struggle with amyotrophic lateral sclerosis meant that, until recently, his ability to communicate had been deteriorating to the point where he could manage no more than a word per minute using his computer.
Now, thanks to ACAT (Assistive Context Aware Toolkit), an Intel-built system that Hawking himself describes as "life-changing", Hawking is once again as prolific as he has ever been during his illness.
"My old system is more than 20 years old, and I was finding it very difficult to continue to communicate effectively and do the things I love to do every day," said Professor Hawking at the unveiling of the new ACAT system in London.
"I am now able to give lectures, write papers and books and speak much faster. This new system is life-changing for me, and I hope that it will serve me well for the next 20 years."
Built in collaboration between Intel and predictive text masters SwiftKey (and under the close scrutiny of Hawking himself), the new ACAT system allows Hawking to write and speak twice as fast as he once could, and to navigate his computer and applications at ten times the speed he has ever managed before.
For Intel, the job has been to streamline (rather than completely overhaul) Hawking's bespoke user interface, giving the professor the advantages of familiarity with a system he has lived with for 20 years rather than introducing completely new solutions. As such, Hawking's sole input tool remains the same: a cheek movement, detected by an infrared switch mounted on his glasses, hooked up to a Lenovo laptop running Windows.
Much of Intel’s work has been on making the interface contextual. Take browsing the web, for example: once, Hawking would have had to exit his text input window, open a mouse pointer that painstakingly scans the screen (first vertically, then horizontally, waiting for Hawking to stop it) to reach the web browser, let the pointer crawl to the search bar in the same way, then close the pointer and re-open the text entry tool.
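The two-stage scanning selection described above can be sketched in a few lines. This is a hypothetical simulation, not Intel's code: `scan_select` and the example grid are illustrative names, and the assumption is simply that one switch press locks the highlighted row and a second press picks the highlighted column.

```python
# A minimal sketch (not Intel's implementation) of the row-then-column
# scanning interface described above: the system highlights one row at a
# time, a single switch press locks that row, then the columns within it
# are scanned until a second press selects the target.

def scan_select(grid, presses):
    """Simulate two-stage switch scanning.

    grid:    2D list of on-screen targets.
    presses: (row_steps, col_steps) -- how many highlight advances the
             user waits through before triggering the switch each time.
    """
    row_steps, col_steps = presses
    row = row_steps % len(grid)        # row highlighted when switch fires
    col = col_steps % len(grid[row])   # column highlighted on second press
    return grid[row][col]

targets = [["Browser", "Email", "Documents"],
           ["Lecture", "Search", "Speak"]]
# Wait through one row advance, then one column advance:
print(scan_select(targets, (1, 1)))  # -> Search
```

The cost of this scheme is time: every selection waits on the scan cycle, which is why reducing the number of selections (as ACAT's contextual shortcuts do) matters so much.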
“There’s a heavy reliance on the use of a mouse in a standard UI, and for someone like Stephen that becomes a very cumbersome process”, explained Lama Nachman, Principal Engineer and Project Lead.
“You can imagine a task as simple as opening a document could take three minutes.”
Using ACAT, the web browsing scenario above and many like it can be automated and swiftly carried out via a new commands menu, based on the context of Hawking’s work at that moment.
Intel’s work also uncovered small but frustratingly meaningful oversights in Hawking’s existing set-up; though the professor could always write emails, he needed another person’s help to add an attachment. This is just one of many small tweaks that sit beneath the headline improvements, making significant changes to Hawking’s working life.
While Intel worked on the UI, SwiftKey went about creating a bespoke language model for the professor, using some technologies already at work in its mobile applications, and all-new techniques tailored to Hawking's needs. Homing in on Hawking as a specific user allowed SwiftKey to develop a model that would recognise the professor's tone from document to document, making intelligent suggestions that match the informal nature of an email or the complex lexicon of a scientific paper.
"Text input shouldn’t be about hardware — text input isn’t about keyboards, it’s about language”, said SwiftKey's Joe Osborne. As a result, SwiftKey's work with ACAT means that Hawking need only input 15 to 20 per cent of the characters he speaks or writes; SwiftKey's software accurately predicts the rest, massively increasing the amount he can produce.
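The keystroke savings SwiftKey describes can be illustrated with a toy word predictor. This is a hedged sketch, not SwiftKey's model: the frequency-ranked vocabulary, the `complete` helper and the example corpus are all assumptions, used only to show how typing the shortest unambiguous prefix of each word lets prediction supply most of the characters.

```python
# A toy illustration (not SwiftKey's model) of keystroke savings from
# word prediction: rank words by frequency in the user's own writing,
# then assume the user types only the shortest prefix whose top
# prediction is the intended word.

from collections import Counter

def build_model(corpus):
    """Rank words by how often they appear in the user's own text."""
    return Counter(corpus.lower().split())

def complete(model, prefix):
    """Return the most frequent known word starting with the prefix."""
    candidates = [w for w in model if w.startswith(prefix)]
    return max(candidates, key=model.__getitem__) if candidates else prefix

def keystroke_savings(model, sentence):
    """Fraction of characters supplied by the predictor rather than typed."""
    typed = total = 0
    for word in sentence.lower().split():
        total += len(word)
        for i in range(1, len(word) + 1):
            if complete(model, word[:i]) == word:
                typed += i          # shortest prefix that predicts the word
                break
        else:
            typed += len(word)      # prediction never matched; type it all
    return 1 - typed / total

model = build_model("the universe began with the big bang the universe expands")
print(f"{keystroke_savings(model, 'the universe expands'):.0%}")  # -> 83%
```

On this tiny corpus each word resolves from a single character, so the predictor supplies five of every six characters; a real personalised model over a large corpus is what makes the 80-85 per cent figure quoted above plausible in practice.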
"Stephen gave us unparalleled access into his life," said Pete Denham, User Experience Director at Intel Labs.
"An important thing to note is that it's reducing the amount of strength he has to use, by reducing the number of steps he has to take to get to a point. If you're a runner, your muscles fatigue over time. And that's the same thing that happens to Stephen: even if it's a small muscle on his face, after a day's work with it he is completely worn out."
While SwiftKey's technology remains its core, proprietary business, the rest of the ACAT research and development will be made open source from January 2015. The teams involved will seed it initially to universities in the hope of expanding development of the platform, and believe that its modular nature could eventually help the three million people worldwide who suffer from motor neuron diseases and quadriplegia.
“We are pushing the boundaries of what we can do with technology, and without it I would not be able to talk with you today," said Hawking.
"Intel's research and development is bringing profound changes in the world, and in the way that disabled people can communicate. By making this technology freely available, it has the potential to greatly improve the lives of disabled people all over the world."
Though Intel have not calculated the precise cost of building the system, spokespeople at the event suggested that, considering it has taken three years and many developers to reach this stage, the cost would be high. But the result, enabling the articulation of a great mind, is perhaps invaluable.
"It's Stephen Hawking," Denham reminded those in attendance at the conference. "The price doesn't matter."
This post was originally published on December 2nd, 2014