Facebook wants one billion people to experience virtual reality—about one-seventh of the entire human population. That’s an audacious goal even without a time frame, which Mark Zuckerberg did not provide. But to lure that many people into the void, Facebook needs to do more than blow their minds. Users also need to feel safe.
During the Oculus Connect event on Wednesday, Oculus Home product manager Christina Womack announced a Safety API for social VR. The new developer tools will include a blocking feature that works across apps, so if you block a user in one app, they will be simultaneously blocked across other apps that utilise the same API. The topic was only addressed once during the event, but an Oculus blog post says the company will announce more details “early next year.”
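Oculus hasn’t published technical details of the Safety API yet, but the core idea—a block list held at the platform level rather than inside each app—can be sketched in a few lines. Everything below (the `PlatformSafety` class and its method names) is hypothetical, invented purely to illustrate how a single shared block list would make a block carry from one app to another:

```typescript
// Hypothetical sketch of a platform-level, cross-app block list.
// None of these names come from Oculus; the actual Safety API is unannounced.

class PlatformSafety {
  // One block set per user, stored by the platform rather than by any one app.
  private blocks = new Map<string, Set<string>>();

  block(userId: string, targetId: string): void {
    if (!this.blocks.has(userId)) this.blocks.set(userId, new Set());
    this.blocks.get(userId)!.add(targetId);
  }

  isBlocked(userId: string, targetId: string): boolean {
    return this.blocks.get(userId)?.has(targetId) ?? false;
  }
}

// Two independent apps consult the same platform service, so a block made
// in App A is immediately visible in App B without either app doing extra work.
const platform = new PlatformSafety();

platform.block("alice", "troll42");          // "alice" blocks "troll42" in App A

console.log(platform.isBlocked("alice", "troll42")); // true — App B sees it too
console.log(platform.isBlocked("alice", "bob"));     // false
```

The point of the design is that individual developers never maintain their own block lists—the platform does, which is what lets the protection follow the user from app to app.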
“Let’s be honest, being in VR with other people, especially strangers, can be intimidating,” Womack said on stage on Wednesday. “For communities to thrive, people need to feel safe—and it costs developers a lot of money and time and effort to build and maintain safe places. We care deeply about protecting the future of social VR. So we want to help. We decided to build an API that does a lot of that for you. Early next year, you’ll be able to get platform-level safety tools, like blocking and reporting, for free. It’s like, built-in best practices that carry app-to-app.”
Oculus is clearly trying to get ahead of harassment in virtual reality before the company attempts to wrangle hundreds of millions of people into the space. But there have already been a number of reported instances of harassment in VR, and as the medium becomes more lifelike, so will the abuse. Equipping developers with the tools to both prevent and deal with that harassment in social VR is crucial if Facebook wants to own the next frontier of social networking.
Before addressing safety, Womack announced Oculus’ plans to expand its avatar customisation tools to better represent a diverse range of users. These redesigned avatars, which will be available early next year, let users customise their avatars’ skin and hair, among other features. Later next year, avatars will feature more responsive mouth and eye movements. But this customisation and expressiveness may also make users—specifically women, people of colour, and other groups that are disproportionately attacked online—more vulnerable to harassment. Of course, you can always customise an avatar so it’s less likely to be targeted (a white dude), but you shouldn’t have to.
As it stands, two-dimensional social networking is failing pretty miserably at handling harassment. It remains to be seen whether Oculus’ Safety API has the necessary tools for developers to protect all of their users. If Facebook and other major social media platforms can’t get a grip on harassment in lowly 2D, why should we believe they are prepared to handle a billion users in VR?