War for Online Truth: What Companies and Countries are Doing to Battle Fake News

March 15, 2020

The tech world has had quite a rough time as of late. Everywhere you look, there are burning issues looming on the horizon.

These range from the very real consequences the coronavirus outbreak has had on the industry to the governmental and public outcry for better regulation of the virtual space we all inhabit.

With all of these problems spiraling out of control and compounding on one another, it can be hard to get your bearings on what’s really happening.

In this article, we’ll shed some light on the issues at hand to help you better understand what is going on and what it all means.

The problem of defining the problem

“Fake news” and hoaxes are terms that have become more and more common in our day-to-day lives. Their rise to prominence can be partly attributed to a certain president’s fondness for using them.

And though they communicate the general idea well enough in a casual setting, these terms carry an underlying issue.

The fact of the matter is that they’re blanket terms which fail to capture the nuances necessary for effective regulation in the social media space.

In actuality, there’s a trifecta of separate sub-types, collectively known as “information disorder.”

In no particular order, these are: 

  1. disinformation – false information propagated with the intent to cause harm
  2. misinformation – false information propagated without the knowledge it is false, or without the intent to cause harm
  3. malinformation – genuine content shared within a false context and with the intent to cause harm

A similar problem arises when we dive into the topic of “deepfakes.” A deepfake, in its most basic form, is any piece of media (image or video) in which one person is replaced with another person’s likeness.

As such, this definition doesn’t only relate to malicious content which seeks to deceive its viewers. Memes and Instagram filters, which modify a person’s appearance via an AI algorithm, also fall under this definition.

With this knowledge in hand, a company’s announcement that it intends to combat either of these phenomena begs for further clarification.

Otherwise, it remains very much ambiguous which posted content should be subject to regulation.

Through co-operation to effective regulation

Big Tech giants like Facebook, Twitter, Instagram and even Reddit recently expressed their belief that regulation needs to become a bigger part of their modus operandi. And wouldn’t you know it? Governments and the public whole-heartedly agreed with them. 

The demands for better transparency and regulation have been steadily growing over the years, and now it finally seems the industry might be taking a step in the right direction.

Yet even CES 2020, this year’s edition of the technology convention showcasing all the latest gadgets and gizmos coming soon™, was rich with everything connected to user recognition while sorely lacking in anything privacy-related.

Before we properly dive into the deep end of this topic, it’s important to note that content moderation is a complex issue. Companies constantly find themselves on thin ice, having to weigh what falls under “free speech” — so as not to infringe on anyone’s rights — against their duty to protect users from potential harm.

And apart from content related to unlawful conduct (underage pornography, threats, etc.), this is done largely on a subjective case-by-case basis.

Company Regulation

As stated before, Twitter, Facebook and Instagram were among the first platforms to jump into the fray, alongside some other non-social-media companies (e.g. Google, Microsoft and IBM). And though their aspirations to properly face the issues discussed above are laudable, their efforts have left many parties cold.

Facebook declared that it would be using an algorithm to seek out and ban deepfakes. However, this solution doesn’t cover lightly modified content or the other forms of information disorder discussed above.

After the Cambridge Analytica fiasco surrounding the 2016 election, Facebook decided to create a “Supreme Court” of content moderation: an independent board for evaluating specific cases within the Facebook ecosystem (Instagram included).

However, there are still plenty of issues with the newly implemented system. First of all, the board is likely to be buried under the immense number of claims and appeals it’s sure to receive.

Some are also concerned that an account on one of the mentioned platforms is required, which excludes potentially affected parties without such an account from accessing the service.

It’s also worth mentioning that YouTube and Facebook moderators have recently been revealed to have to sign waivers acknowledging the risk of suffering PTSD as a direct result of fulfilling their duties. (Note that this is in violation of OSHA rules.)

Another issue troubling the industry is WhatsApp, which has become the main vehicle for disinformation (especially in India) thanks to its end-to-end encryption, as moderators cannot efficiently browse and screen content for potentially harmful information.

Governmental Regulation

Different countries take different approaches to the question of online content regulation. As it is nearly impossible to group them all under a single heading, what follows is a succinct overview of their policies.

The European Union has introduced its fair share of initiatives over the years, deciding specifically to clamp down on terror videos. The GDPR also touches on some of these issues, though it mainly prescribes proper conduct when handling users’ information.

The United Kingdom is considering putting online content under the jurisdiction of Ofcom, which until now has overseen only TV and radio broadcasters. The kicker is that online users would remain free to post offensive and dangerous content, while companies would be liable for anything unlawful they allow to stay on their platforms.

This is the opposite of the approach we see in some other UK industries. In the gambling industry, for example, players can sign up for self-exclusion, effectively managing their conduct on their own.

Under Ofcom’s oversight, companies would need to state plainly what content and behavior goes against their principles. Should they fail to deal with offensive material, and with the offenders themselves, in a timely and effective way, they could face fines and even prison sentences. Who would actually go to jail for such misconduct, however, remains to be seen.

The UK is also considering banning facial-recognition AI for the next five years.

Germany put its NetzDG law into effect in 2018, and any company with over two million users is subject to it. Under this law, companies must set up new content review measures, remove clearly illegal content within twenty-four hours and regularly report on their progress. If they fail to comply, individuals can be fined up to €5m and companies up to €50m.

Russia has a setup where regulators are able to switch off the national network’s connection to the world wide web, though it remains to be seen how this would work in practice. Additionally, companies are required to store their data on Russian citizens on servers within the country.

China blocks sites such as Twitter, Google and WhatsApp, offering its own proprietary substitutes instead. The country also employs thousands of cyber-police officers, who screen the web for content that may be seen as politically sensitive. It has also banned some keywords outright, such as any reference to the 1989 Tiananmen Square incident.

Australia passed the Sharing of Abhorrent Violent Material Act in 2019, introducing criminal penalties for companies, possible jail sentences for executives and fines of up to 10% of a company’s global turnover. In 2015, the Enhancing Online Safety Act gave an eSafety Commissioner the power to demand that companies delete harassing or abusive posts. In 2018, revenge porn was added to the list.

Companies can be sent a 48-hour “take-down notice” and fined up to AU$525,000, while individuals can be fined up to AU$105,000.

An open-ended conclusion

As you can plainly see, this virtual landscape is currently brimming with unanswered questions and loose threads. For now, we can only wait patiently until online regulators, both governmental and private, manage to find common ground and deploy effective solutions to protect the online community at large.

But until that day comes, dear reader, you will have to remain vigilant. Stay wary of dubious content and untrustworthy sources. Because as Abraham Lincoln once famously said: “Not everything you read on the internet is true”. 
