Remember the 2016 US presidential election? Remember scrolling past extreme headlines and wildly inaccurate memes, only to see them shared later by a high-school classmate/Facebook friend known for their questionable judgment? Remember the results of the election? Of course you do; how could any of us forget?
When discussing information disorder, or “the various ways our electoral environment was and is polluted,” it is important to note that not all incorrect election-related information is shared by people who know it is flat-out wrong, though it remains a big problem nonetheless.
While disinformation and malinformation share the intent to harm, misinformation is differentiated as “misleading information created or disseminated without manipulative or malicious intent.” Unwittingly sharing misleading headlines, cropped images, Photoshopped content, out-of-context (or just plain wrong) statistics, opinion pieces presented as news and much more falls into the category of misinformation. Chances are, though, that someone created that content precisely to be shared with reckless abandon, using disinformation to drive the viral spread of misinformation.
Accidents Happen… But They Still Have Consequences
While misinformation occurs without malicious intent, it can still cause irreversible damage to organizations, people, and — memorably — elections (Pizzagate, anyone?).
Speaking of Pizzagate, a recent example of misinformation gone viral is the co-opting of legitimate disgust and concern about child sex trafficking to push an anti-mask agenda in the midst of the global coronavirus pandemic. Well-meaning people by the thousands never stopped to fact-check the artfully designed Instagram posts and heartfelt Facebook posts claiming, wrongly, that wearing masks leaves children vulnerable to kidnapping and exploitation.
Social media plays an outsized role in the accidental spread of false information. According to a recent study by Pew Research Center, “people who prefer social media for news are more likely to share made-up news and information than those who prefer other pathways.”
There is hope, though. Earlier this year, Forbes gave an overview of social media sites making strong efforts to combat misinformation ahead of the presidential election.
- Following the infiltration of Russian bots on Facebook during the 2016 election, Facebook now requires campaigns to provide a US mailing address and state how much they spent on each ad.
- YouTube recently shared several changes meant to make the platform a reliable source for news, including removing election-related content that violates its policies, increasing the visibility of authoritative election news and reducing misinformation overall.
- To make it easier for users to identify political candidates, Twitter has reintroduced election labels with pertinent candidate information and provided badges for candidates who qualify for US primary ballots. The popular social media platform has also begun applying labels to certain tweets from political leaders in an effort to reduce the effects of misleading information.
Caught on Video
When discussing fake news, information disorder or other terms for incorrect information, the focus usually falls only on the diction, or text, of a fabricated news site. With this mindset,
“the implications of misleading, manipulated or fabricated visual content, whether that’s an image, a visualization, a graphic, or a video are rarely considered.”
For example, while we can’t say for certain whether or not President Trump knowingly shared this fabricated video of Democratic House Speaker Nancy Pelosi, the Twitter users who shared it to virality actively participated in spreading misinformation.
Video deepfakes and other misleading visuals are hard to stop, but researchers at the University of Waterloo recently developed an AI tool capable of detecting misinformation in the form of text. Ultimately, the researchers’ goal is for their new technology to be used by social media and news organizations as an automated fact-checker.
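For readers curious about what an automated fact-checker can look like under the hood, here is a minimal, purely illustrative sketch of a claim classifier. It is not the Waterloo researchers' system; the tiny training set, the labels, and the `[SEP]` separator between a claim and its supporting context are hypothetical placeholders, and a real system would train a much larger model on thousands of labeled claims.

```python
# Illustrative sketch only: a toy "is this claim supported?" classifier.
# The data and [SEP] convention are hypothetical; a production system would
# use far more data and a stronger model (e.g., deep learning).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: each string pairs a claim with reporting
# context; the label is 1 if other coverage supports the claim, 0 if not.
train_texts = [
    "CDC guidance urges mask use in public [SEP] multiple health agencies recommend masks to slow the virus",
    "Official results certified by state election boards [SEP] county tallies match the certified totals",
    "Wearing masks leaves children vulnerable to kidnapping [SEP] no health or law-enforcement agency reports this",
    "WikiLeaks confirmed Clinton sold weapons to ISIS [SEP] fact-checkers found no such documents",
]
train_labels = [1, 1, 0, 0]

# TF-IDF features plus logistic regression: a deliberately simple stand-in
# for the kind of model a real automated fact-checker would use.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

new_claim = "Masks endanger children [SEP] no agency or outlet corroborates this claim"
print(model.predict_proba([new_claim]))  # probabilities for [unsupported, supported]
```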
Clickbait and Switch
Online clickbait can be defined simply as “something (such as a headline) designed to make readers want to click on a hyperlink especially when the link leads to content of dubious value or interest.” When speaking specifically of headlines, we can use the phrase partisan emotional clickbait: a headline that appeals directly and explicitly to the emotions of the partisan reader.
A prime example of partisan emotional clickbait originated during the 2016 presidential campaign. An article from The Political Insider claiming WikiLeaks confirmed Hillary Clinton sold weapons to ISIS was later debunked, but not before its striking headline racked up over 700,000 engagements on Facebook. This work of fiction may well have affected the voting cycle.
Clickbait has a bad connotation (and gives us flashbacks to 2016), but it can also be used for good. Twitter users are now following an effective trend that uses pop culture to lure people to voter registration sites.
Check, please!
With the high volume of content spread, retained and discussed every day, it is impossible to stop all bad information from showing up on your timeline or interfering with an election. The best way to combat information disorder is to stay informed and to consume and share news with positive intention.
Source-checking, fact-checking and logical processing will all help us navigate this year’s upcoming presidential election.
While we’re on the subject, have you registered to vote?