Mark Zuckerberg’s Metaverse of Madness
28 October, 2021 · Opinion

Today, Mark Zuckerberg unveiled his company’s plan for the Metaverse: a virtual world where real people can connect socially with other real people through virtual experiences.
I’ll admit: it seems kind of cool! I’m not usually a fan of technology or tech utopias, but I do like Star Trek holodecks, and a Star Trek holodeck made social seems like a plausible direction for technology to go in. Would I personally want to spend time in the Metaverse? Probably not. But do I think it looks neat? Absolutely.
But there’s a massive problem that Zuckerberg hasn’t given thought to: Moderation.
It’s already bad
Facebook is horrible at moderation. Revelations from Facebook whistle-blowers Sophie Zhang and Frances Haugen, and reporting work by Buzzfeed News, VICE News, The Washington Post, CNN, the Tech Transparency Project, Media Matters, and others, have revealed that Facebook allowed militia groups to spread vaccine and COVID misinformation, promoted military gear to those who follow extremist content, and failed to remove groups promoting election misinformation. This is despite Facebook’s algorithms being quite capable of finding such content: they recommend it to users all the time.
Given Facebook’s scale, the site often relies on AI moderation to find and remove content. But when it came to implementing more effective algorithms, Zuckerberg decided to prioritise growth instead. Facebook’s moderation in countries outside of the US and languages other than English is often even worse. And while Zuckerberg and Facebook’s communications department continue to boast about its number of moderators, they refuse to provide details about how many are contractors, and what the country-by-country breakdown is.
That’s just moderating 2D content.
It’s going to get worse
The Metaverse promises to add body language and facial expressions to the ways users communicate. Given the company can’t seem to hire enough moderators to deal with what it has now, and refuses to deploy algorithms that might help, it’s doubtful it would do much better with the communication forms it wants to add.
More than that, the Metaverse as Zuckerberg painted it today looks to be a fundamental shift from lasting content (like posts and comments), to fleeting content (like what you’re expressing right this instant). And that makes preventive moderation nearly impossible.
A lot of social networks, Facebook included, have already begun shifting to fleeting content. Live-streams, for instance, are a form of fleeting content, where what the host does is immediately broadcast to those watching, without time for a moderator, algorithmic or human, to step in. Video or voice chats are even more fleeting: even if the content lasts in some form, the majority of those watching or listening are doing so minutes, hours, or even days before a moderator could ever close the barn door.
This means most moderation choices move from preventive actions (taking content down before it can do harm) to retributive actions (taking action against an offending account). The only preventive action a moderator can take on a live-stream that’s already ended is to take down the recording so that no one will see it in the future. If they choose to take action against the user who hosted the live-stream, they’re not doing anything about all the people who watched it.
The past is bad
We’ve already seen this shift in moderation. In 2019, the Christchurch shooter used Facebook Live to live-stream his attack at the first mosque on his rampage. The live-stream was removed by Facebook 12 minutes after it had ended, leaving time for it to be recorded and re-uploaded to sites outside of Facebook’s control.
I don’t fault Facebook’s moderation here: Moderating live video is an incredibly difficult task, and removing the stream only 12 minutes after its completion seems like a small miracle. I’m sure Facebook doesn’t want massacres live-streamed on its platform, and it seems they did everything they could to contain the stream once they were alerted about it.
But I do fault Facebook’s product. If moderating live video is such a difficult task, then perhaps we shouldn’t have live video. Facebook isn’t obligated to provide a service that can be so easily used to cause harm. In fact, perhaps they should be obligated to discontinue such a service.
Actually, maybe they should be obligated to not make another one.
The future imitates the past
Facebook has failed to adequately moderate their platform. Even if such moderation is impossible, and I suspect in some cases it might be, Zuckerberg and Facebook have continually failed to follow through on the moderation that is possible.
There’s no indication Facebook’s Metaverse would be different.
Zuckerberg is pitching the Metaverse as being much like the Internet of today: Decentralised, where no one company owns the Metaverse, and different worlds, hosted by different companies, can all be explored regardless of who makes your portal.
But Zuckerberg most definitely intends to own as much of those worlds as he can. Sure, Google, Apple, and your favourite games makers will be able to host their own slices of the Metaverse, but you can be sure that Zuckerberg will try his hardest to make his slice the biggest.
It might be based on open standards like the Web is today, but just like the Web, most of the sites you’ll be visiting will probably be owned by the big players. Big players like Facebook.
So how Zuckerberg intends to moderate his slice is a legitimate question. And it’s a question I don’t think he’s thought about.
One last horrible thing
It’s quite possible Zuckerberg is betting on the Metaverse because he thinks it’d be cool and wants to make it happen.
It’s also a certainty that he thinks it’ll be a growth opportunity.
My suspicion is Zuckerberg hopes the Metaverse will become as integral to our lives as the Web and smartphones are today. Just as it’s harder and harder to get a job or schedule a date or keep up with friends without using some tech CEO’s frightening brainchild today, Zuckerberg likely wants the Metaverse to further encroach on our still blessedly offline lives, so that his slice of the pie can grow even larger. To borrow a Bushism, he wants to make the pie higher.
Whatever you think of that, it means that Meta will have a lot more content to moderate. If they aren’t willing to make the decisions necessary to moderate what they’ve already got… what makes you think they’ll do something different with a bigger slice?