Displays that can read your mood. Chips implanted in our brains. More than just wearable technology; technology that's in everything we wear. Almost quaint in a sci-fi novella. But in Intel's real-life vision? Thrilling. And RealSense is what's going to get us there.
RealSense isn't a specific piece of technology, but rather Intel's name for the group of hardware and software advances in perceptual computing that are going to get us to that Minority Report future. And first out of the gate? A 3D camera that lets you interact with your devices in a way that you only thought was possible in movies.
The possibilities, in Intel's vision, are as close to limitless as you can imagine. It'll be able to capture and share images, sure, but also tell what mood you're in based on facial expression and posture. It'll be able to understand your natural-language commands, unlike the stilted Siri-lish we use today. But the real trick is that it can already do more of this than you might think possible.
The camera's tricks include taking full 3D images (as seen above) but also, almost more importantly, understanding depth well enough to differentiate objects. That means, in the simplest use, applying different effects to the foreground and background. But it'll also majorly impact the way we interact with our computers, tablets, and phones.
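Intel hasn't shown its actual SDK here, but the core idea of depth-based segmentation is simple enough to sketch in a few lines of NumPy. This is an illustrative toy, not RealSense code: the function name and threshold are made up, and a real depth camera's output would be noisier.

```python
import numpy as np

def split_by_depth(image, depth, threshold):
    """Split an RGB image into foreground and background layers.

    Pixels whose depth value is smaller than `threshold` (i.e. nearer
    the camera) count as foreground; everything else is background.
    """
    mask = depth < threshold                        # True where the pixel is "near"
    foreground = np.where(mask[..., None], image, 0)  # keep near pixels, zero the rest
    background = np.where(mask[..., None], 0, image)  # keep far pixels, zero the rest
    return foreground, background

# Toy 2x2 frame: left column is near (depth 0.5), right column is far (depth 3.0)
image = np.full((2, 2, 3), 200, dtype=np.uint8)
depth = np.array([[0.5, 3.0],
                  [0.5, 3.0]])
fg, bg = split_by_depth(image, depth, threshold=1.0)
```

Once the layers are separated, an effect (a blur, a color shift, a swapped-in backdrop) can be applied to one layer and not the other, which is exactly the "different effects in foreground and background" trick described above.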
To help get to the future it envisions, Intel is partnering with 3D Systems, a company well-steeped in 3D-printing technology. It's a natural partnership—one Intel will talk about in more detail later during CES—since the obvious next step for 3D interfaces is bringing them into the physical world.
Skype is also along for the ride, although less impressively; you can isolate a face and change the background to whatever you want. It's a green screen effect, but far from seamless. But hey, points for potential!
Gesture controls made a better impression, though: simple pinch-and-grab movements, waves, and pushes. It's smooth, especially for a demo, and putting that kind of control in a top-mounted camera is a much more practical solution than the side-mounted Leap Motion.
It's also not limited to desktop—or to hands. The camera can track your face to navigate Google Street View, following your eyes and head to figure out where you want to look.
Also making an appearance: Dragon NaturallySpeaking, which has apparently been refined but still had a few hiccups during a demonstration.
A game that turns your hand into a literal hand of god also didn't seem quite ready for prime time. Pinball and a sci-fi flight simulator, each played without a mouse or joystick, fared a little better. But then you remember: that even a janky version of this exists at all would have been hard to imagine just a few years ago.
And what about for kids? There's lots for kids! Gesture-based animated games, depth-perceiving environments, all accessible for tots and tykes. Which makes sense; by the time they're grown-ups, this will be the default mode of interaction.
The future's also goofy: a developer showed off a gesture-based music app that doesn't seem to have much purpose other than silly percussive playtime. Which, by the way, actually looks like a whole lot of fun.
The larger point, regardless of how well any one demo works in practice, is clear: the pieces of next-generation sight, sound, and touch interaction are in place. Bringing them together is going to be the hard part.