The horror from Christchurch, New Zealand, that exploded across newspaper pages and television, computer and smartphone screens this weekend seized attention in ways that no one could have imagined beforehand.
The fact that one individual armed with a semi-automatic rifle could visit such an outrage upon people worshipping in a place of religion isn’t the worst of it, awful and distressing though this event is, with at least 50 people shot dead and scores wounded, some in critical condition.
Nor is the worst of it that this happened in a country like New Zealand, a place many call a paradise on Earth. A place of rich beauty and a largely unspoiled natural environment. A place many of us in the UK see as made up of kinfolk, people with historic links to us, today part of an evolving diverse society just as we are.
No, the worst part of it is the way the awfulness played out live in a 17-minute video, planned with malicious foresight by an individual who exploited unfettered access to a truly global broadcast medium. He broadcast his evil act to Facebook Live via a headcam he wore, in real time, as it happened, potentially reaching hundreds of millions of people.
Many of those people amplified what they saw, some unwittingly, by sharing comments on social networks that linked to it. But many deliberately downloaded copies and shared them with their networks on Facebook, YouTube, Twitter, Reddit and elsewhere, proliferating the morbid content as fast as they could upload it.
On Sunday, Facebook said on Twitter that it had removed 1.5 million copies of the killer’s video being shared on Facebook.
But, as TechCrunch points out, 300,000 copies of the video made it to Facebook timelines before the social network could take them down.
And therein lies the huge dilemma facing social networks, social media generally and society at large.
Since the advent of social media in the early years of this century, anyone with an opinion and an Internet connection has had a place online to share it. We have grown accustomed to this ability, this gift of freedom of expression at scale, that enables ideas to be shared online.
A great thing about this is that anyone can do it. But one consequence is also that anyone can do it.
The online world of social networks and media – ranging from the huge, like Facebook and YouTube, to the relatively niche, like Reddit, to friend-to-friend peer-to-peer networks on the dark web – reflects the total spectrum of human behaviour that makes up our daily lives and experiences in the real world. The good, the bad, and the ugly. All of it.
To some, the online landscape we see today is not a pretty sight. According to Tim Berners-Lee, who invented the World Wide Web thirty years ago this past March 12:
…while the web has created opportunity, given marginalised groups a voice, and made our daily lives easier, it has also created opportunity for scammers, given a voice to those who spread hatred, and made all kinds of crime easier to commit.
What happened in New Zealand is a stark indicator of the magnitude of the tasks ahead. These issues are unquestionably big, and addressing them should involve the widest possible range of people across society, working together to find answers that are realistic and have a chance of making a genuine difference.
A significant part of this process is figuring out how to prevent people with evil intent from using the web, and the Internet upon which it is layered, to further that intent, corrupt and pollute the medium for all, and destroy our trust in one another.
What form might this take? How would we stop acts like live-streaming unspeakable violence for the world to see, or prevent others from sharing such content and spreading hatred online?
Could it take the form of something designed to give more weight to the good of society than to an individual’s right to free speech?
For instance: the moment Facebook, YouTube or any other online platform becomes aware of a live-streaming event like the one in New Zealand, it simply shuts down all live streaming – no one can stream. That would prevent the specific evil-doer spreading his hatred, even though it would also stop every other person with a legitimate, positive purpose.
At the same time, Facebook could freeze all uploads to the service, thus preventing the widespread sharing of recordings. Sure, it would affect everyone uploading video, but it would stop the bad actors.
A bit of a blunt-object approach to the problem, perhaps. Yet with such terrible events, speed is surely of the essence, and the technology isn’t yet advanced enough to moderate everything without human assistance. We can already see how such content proliferates at incredible speed, and stopping it after the event is an almost unending game of whack-a-mole.
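Part of why the whack-a-mole game is so hard to win is worth spelling out. A simple blocklist of exact file fingerprints fails the moment a copy is re-encoded, cropped or altered by even a single byte, because a cryptographic hash of the new file no longer matches. The minimal Python sketch below illustrates the problem with made-up stand-in data (the byte strings are hypothetical, not real video); platforms instead rely on perceptual hashing, which tolerates such small changes.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Exact cryptographic fingerprint of a file's bytes."""
    return hashlib.sha256(data).hexdigest()

# Stand-in for the bytes of an offending video (hypothetical data).
original = b"frame-data-of-the-offending-video" * 1000

# A trivially altered re-upload: a single byte differs.
altered = bytearray(original)
altered[0] ^= 0x01

# Exact matching misses the altered copy entirely.
print(fingerprint(original) == fingerprint(bytes(altered)))  # False
```

One changed byte is enough to defeat exact matching, which is why every slightly modified re-upload must be found and removed afresh.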
Maybe there ought to be greater scrutiny of what people do on a social network. Behavioural analysis, content analysis, network-effect analysis… the technology surely exists to apply these at huge scale.
And let’s not forget that the flip side of the right to freedom of expression is the responsibility that comes with it.
I’m not saying there are any easy answers here. All of this involves social networks run by influential big tech companies, and there are huge trust issues in that area over how such companies use or misuse people’s personal data. Governments would necessarily be involved, some of which are pretty zealous in their approach to regulation.
But it seems to me that we are at a fork in the road where the time has come to consider whether free and open access to means of communication that enable evil-doers to get away with murder is worth the human cost.