On the first day of Apple Watch pre-orders, I put on a £12k wristputer and tried to figure out what was so special about it.
There was the solid gold casing, which was impressive in a blunt way, sure. I was curious about the digital crown, a new UI idea, too. But the only moment I really felt surprised—in spite of the jaded monster I know I am—was when the Apple Store employee told me to put two fingers on the screen. I felt a solid, gentle tapping on my wrist, as though this giant chunk of rare and precious metals was breathing. It was my heartbeat, processed through its guts thanks to the Force Touch-sensitive screen, and spat back out through the watch’s Taptic Engine. Huh.
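Mechanically, the heartbeat trick is just timing: measure beats per minute, then fire the actuator once per beat. Here is a toy sketch of that conversion—this is not Apple's implementation, and the function name is made up for illustration; on the watch, each scheduled time would trigger a haptic pulse.

```python
# Toy sketch (not Apple's code): turn a measured heart rate into a
# schedule of haptic taps, one tap per beat.

def beat_intervals(bpm, seconds=5):
    """Return tap times (in seconds) for `seconds` of playback at `bpm`."""
    interval = 60.0 / bpm          # one beat every 60/bpm seconds
    times = []
    t = 0.0
    while t < seconds:
        times.append(round(t, 3))
        t += interval
    return times

print(beat_intervals(72, seconds=2))  # [0.0, 0.833, 1.667]
```

At a resting rate of 72 bpm, the watch on the other end would tap roughly every 0.83 seconds.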
The Apple Watch might be the next iPod. It might be a flop. But the tactile framework it uses to communicate with you without text or sound, what Apple calls “Digital Touch” using its “Taptic Engine,” is here to stay in one form or another. It’s a genuinely important new language that is certainly not unique to Apple, but whose adoption by Apple signals its entrance into consumer technology for real.
The Golden Actuator
So, first of all, what is the Taptic Engine? Taptic is Apple’s branded neologism, combining “haptic”—a long-established term for touch-based interaction—with “tap,” the word Apple uses to describe how you touch the watch. If you shimmy loose the Watch’s display, you’ll find the Taptic Engine nestled above the battery and next to the digital crown, as iFixit’s teardown revealed yesterday:
Taptic Engine via iFixit Apple Watch teardown.
It’s a rectangular casing that contains a linear actuator—a broad term for a device that turns energy into linear motion, as opposed to the rotary motion of most motors. This tiny mechanism takes electricity from the battery and turns it into motion, which you feel as a tapping on your wrist. “You can get someone’s attention with a gentle tap,” says Apple. “Or even send something as personal as your heartbeat.”
For the majority of computing history, technology has communicated with us in two ways: Sight and sound. Think of haptic—er, Taptic—feedback as the third way. Sure, you touch a screen and dial a phone. But that interaction only went one way—from the human to the device. A machine sending touch back through your fingers? That’s newer, though it goes back far further than your phone vibrating in your pocket.
In fact, haptic feedback has its roots in the nuclear age.
The forefather of remotely controlled robotics, Raymond Goertz, was working for the Atomic Energy Commission in the 1950s developing ways to handle radioactive materials without exposing the worker. He knew that to manipulate nuclear material carefully, the operator needed a lot of information about the situation they were controlling—not just visuals. So he developed an arm that would situate the worker as the “master” of a robot “slave,” which would react and transmit information about texture, weight, and balance back to the user, as Anne Cranny-Francis explains in her book Technology and Touch: The Biopolitics of Emerging Technologies.
A Goertz prototype. Image via Cybernetic Zoo.
Goertz’s research on haptics ended up influencing everything from the robotics being used to clean up the Fukushima nuclear accident site today, to the development of remotely operated microsurgery machines. Consumer tech didn’t get in on this new idea until quite late in the game. It wasn’t until the early 1990s that researchers at MIT started developing haptic feedback systems that would let users “feel” virtual objects.
In 1994, a group at MIT published a paper introducing their prototype—a system called PHANToM, or the Personal Haptic Interface Mechanism (alright, the name was a stretch). The system was, in some ways, the first truly haptic interface.
It looked a bit like a very elaborate thimble into which you’d put your index finger. Three tiny motors would give haptic feedback on the user’s fingertip, giving the impression of weight and solidity that allowed the user to “touch” an object in virtual space.
The work led to haptic interfaces finding their way into some of the emerging technology of the day—including joysticks and mice. One device, called the FEELit Mouse, would allow users to “feel” software. It debuted in Las Vegas in 1997 at COMDEX, the early predecessor to today’s CES.
Pointing device with forced feedback button, Patent No. 6243078.
An incredible archived press release about the mouse trumpeted how it was bringing sci-fi interfaces into reality, comparing it to the “transition from black and white to color television:”
The FEElit exhibit was one of the most crowded at the Sands Hotel where newer Comdex exhibitors showed innovative products. Bill Gates, CEO of Microsoft, came by to feel his Windows 95 software. The FEELit mouse allows users to feel Windows commands like “dragging and dropping.”
Yep. Nearly 20 years ago, Bill Gates stopped by a consumer electronics exhibition to try out the haptic interface of the future. And while FEELit might not have had a very long shelf life, the idea flourished in its own quiet way, showing up in video game controls and, eventually, cellphones.
Haptic’s Time Has Come
So why haven’t haptics caught on in a bigger way? Because anyone who wasn’t handling radioactive waste or operating on a microscopic blood vessel didn’t really need them. Why add more moving parts and costly hardware to devices that were already communicating everything by text or audio pings?
Which brings us back to the watch. Apple is trying to develop an entirely new use case that doesn’t require direct interaction with a screen. It’s trying to fix the tyrannical relationship with our phones for which it is responsible, and it’s doing that using that handy third form of interaction that hasn’t quite found its niche yet. Apple’s developer guidelines say that the Taptic Engine, in conjunction with Force Touch, “blurs the boundaries between physical object and software” by adding “a new dimension of contextual software controls.”
In an odd way, the success of the Apple Watch and these features are interrelated. The watch is sort of a less obtrusive, less useful phone. But where it is unique is in its Digital Touch features—which seem pretty damn superficial and silly—that let you use taps to communicate with other users. You can, as I did, send your own heartbeat to the wrist of another watch user. You can send a friend an annoying series of taps if they’re late. You could troll them with it.
In early April, Daring Fireball’s John Gruber perfectly explained this idea in his excellent write-up of the watch. As whimsical as it sounds, these haptic moments are the most interesting thing about the watch, as he writes:
[D]igital touch opens the door to forms of remote communication that most of us haven’t ever considered. Non-verbal, non-visual, physical communication across any distance. This could be something big.
Crucially, you can’t do a damned thing with it unless the other person has a watch, too. But even if the Apple Watch fails to catch on, this is a language that we’re going to see leaking into all sorts of Apple products. “Without the Taptic Engine, Apple Watch is not a compelling device,” Gruber concludes.
You might tap from your laptop, or your iPad. You might develop your own Morse code-style of tapping to subtly and silently communicate during meetings. You will feel a lot more—even if it doesn’t involve a watch at all.
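That Morse-style tapping isn’t hypothetical engineering, just timing. Here’s a minimal sketch of what such an encoding could look like—the function, the durations, and the tiny letter table are all my own assumptions, not anything Apple ships; on a real device, each duration would drive the haptic actuator.

```python
# A hypothetical sketch (not Apple API): encode short messages as timed
# tap patterns, Morse-style. Dot = short pulse, dash = long pulse.

MORSE = {"s": "...", "o": "---", "e": ".", "t": "-"}  # tiny demo table

def tap_pattern(text, dot=0.1, dash=0.3, gap=0.1):
    """Return alternating pulse/pause durations (seconds), starting with a pulse."""
    durations = []
    for char in text.lower():
        code = MORSE.get(char)
        if code is None:
            continue                 # skip characters we can't encode
        for symbol in code:
            durations.append(dot if symbol == "." else dash)
            durations.append(gap)    # pause between pulses
        durations.append(gap * 2)    # longer pause between letters
    return durations

print(tap_pattern("sos")[:4])  # [0.1, 0.1, 0.1, 0.1]
```

A watch on the receiving end would simply replay the schedule: pulse, pause, pulse, pause—three quick taps, three slow ones, three quick.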