Social media platforms (primarily Facebook and Twitter) have faced mounting pressure to step up their efforts in combating misinformation, particularly after the 2020 US presidential election. At the end of January this year, Twitter introduced its Birdwatch program to users in the US. The community-backed program is meant to tackle misinformation on the platform by letting users fact-check tweets from other users. But with such a responsibility, just how far can we as the audience get in pointing out what’s “real” and what’s “not”?
Users in the pilot (which for now includes around 1,000 people) have the ability to add information or warnings to certain tweets and provide further context. Participants are able to write notes on tweets, but those notes aren’t yet made publicly visible on Twitter.
These notes can only be seen on the public Birdwatch website (which is currently only available in the US). Users can also rate notes submitted by other participants in the program, the intention being that the more accurate a person’s notes are, the better “rating” they earn as a moderator.
Prior to the release of Birdwatch, Twitter announced that it had interviewed more than 100 people across the US political spectrum, who gave relatively positive feedback about the idea. Many felt that Birdwatch could provide useful context for better understanding tweets – especially those that carry a political undertone.
“The truth is rarely pure and never simple”
– Oscar Wilde
… And following the 2020 US presidential election, Wilde’s words have never been more true. During the election, Twitter labelled certain tweets that were considered misinformation. Last month, the platform introduced the “manipulated media” label, to be added to questionable tweets. In the final two weeks before the election, Twitter reported that it had labelled around 300,000 tweets for “disputed and potentially misleading content”. During the US Senate hearing, several lawmakers (mostly on the Republican side) questioned a possible “anti-conservative bias”.
As for Birdwatch, some have criticised the platform for delegating the important task of moderating content to volunteer users, while others say the decision could be a step in the right direction – a chance for the audience to decide what is and what isn’t “the truth”. The biggest challenge for Birdwatch will probably be ensuring that the concept doesn’t recreate the very problem it’s trying to tackle: becoming a venue for misinformation.
Philosophers have long argued that “the truth is subjective” – a question that continues to be studied by researchers and psychologists alike.
With that being said, can Twitter users really trust one another to verify information? I mean, it’s safe to say that not everybody will easily trust a single institution or body to make decisions about “what is and what isn’t” the truth, let alone a big Silicon Valley tech company. But the issue with handing that power right back to the people is the simple fact that we often see different things when we look at the same event (or, in this case, the same tweet).
The “Section 230” Card
Both Mark Zuckerberg and Jack Dorsey will soon have to answer to US lawmakers (again) on the 25th of March. This time, the hearing will be about the rise of misinformation – and, more importantly, about how the two platforms plan to tackle the issue.
In the previous hearing, back in October, the tech giants testified before lawmakers about Section 230, which shields tech companies from legal liability for content that their users post.
More regulations? Or more platform guidelines?
Many argue that social media platforms shouldn’t have that much power in controlling the flow of information. This has easily become a topic of debate in this day and age – especially during a pandemic, when access to information is at its most important.
Earlier this month, members of the European Parliament discussed the sensitive relationship between freedom of speech and the state of media freedom. The EU is currently working on two digital acts (the Digital Services Act and the Digital Markets Act) that will include guidelines for social media platforms and protocols for handling harmful content such as misinformation.
There might never be a single solution that “solves misinformation on social media”, nor a direct answer as to who should hold the responsibility of looking out for dodgy content. One thing, however, is certain: fact-checking will need to be part of social media’s future. Whether that future means heavier regulations for social platforms to follow, or simply more platform guidelines for users to adhere to, is still unclear.
For now, we’ll have to wait and see. With Twitter’s Birdwatch program, it wouldn’t be surprising to see other social media platforms come up with their own techniques to tackle the growing issue of misinformation.
That being said, if (or when) Twitter introduces Birdwatch for users in your country, would you consider applying to be a “Birdwatcher” yourself? Why or why not?