After several cycles of contentious political elections and a growing presence of disinformation, social-media platforms are gearing up for another round. Here’s what the major platforms are doing to combat misinformation.
Facebook
Facebook’s policies aren’t changing much from last year, but the company says it is increasing the number of staff dedicated to flagging potentially false information.
Facebook’s parent company Meta says that it has spent $5 billion on safety and security in the last year. Forty teams made up of hundreds of its employees monitor misinformation on the platform.
One change is a more careful use of the “false information” warning label the company places on certain posts. After complaints that the label was overused, Facebook says it aims to apply it in a “targeted and strategic” way.
Facebook will also focus on preventing harassment and targeting of poll workers.
TikTok
The video platform plans to continue its existing fact-checking protocols, blocking certain videos from being recommended until their content can be verified.
TikTok’s election information portal will also provide relevant voter information six weeks earlier than it did during the previous presidential election.
Even with steps in place to prevent misinformation, TikTok may be a difficult platform to monitor, The New York Times reports.
Paid political posts or advertising are not permitted on the platform, but some content creators find ways around the regulations. TikTok is taking some steps to better enforce these rules.
Unlike other social media sites, TikTok offers little transparency about where videos originate or how well its monitoring practices work, so there’s no real way to tell how effective its actions have been.
TikTok will begin sharing data with certain researchers this year, The New York Times reported.
Twitter
Twitter’s rules will be similar to Facebook’s, including flagging potentially false posts and labeling certain content. Labeled tweets will not be recommended by the algorithm, limiting their reach.
The company will continue to remove false or misleading tweets.
Twitter in particular has been the source of debate over freedom of speech. Elon Musk’s campaign to buy Twitter started in part due to free speech concerns.
YouTube
Google’s YouTube has yet to announce any plans to combat misinformation, which is not surprising given the company’s fairly quiet public-relations strategy.
Those concerned about misinformation distribution have their sights set on YouTube.
In January, over 80 fact checkers around the world signed a letter to YouTube, concerned that the video platform was being used to spread misinformation.
Currently, YouTube has a “strike” policy that gives content creators warnings before demonetization or removal. Videos can receive strikes for certain types of misinformation.
YouTube spokeswoman Ivy Choi said in a statement that YouTube’s recommendation engine is “continuously and prominently surfacing midterms-related content from authoritative news sources and limiting the spread of harmful midterms-related misinformation.”
Why it’s news
The last several election cycles have been tainted by “fake news.” Not only have social media sites been flooded with false information, but opposing candidates have weaponized the term misinformation to attack one another.
Misinformation, and what to do about it, has become a growing concern among voters. Social media sites are responding by implementing these fact-checking measures.
Not everyone is a fan of social media sites deciding what’s false and what isn’t. When groups or individuals are banned from social media, as Robert F. Kennedy Jr.’s Children’s Health Defense recently was, some applaud while others cite First Amendment concerns.
There’s no easy solution to the misinformation debate. This election cycle will be a telling test of how effective these fact-checking measures are.