Category: Fake News

False, Misleading, Clickbait-y, & Satirical “News” Sources

CBS News recently highlighted a list of fake news sites on its website.  Many of these sites had been compiled by communications professor Melissa Zimdars to help her students “navigate an increasingly complex and questionable media landscape.”  She has since updated and expanded this growing list of questionable sites, which she has indexed into the four categories below:

CATEGORY 1: Below is a list of fake, false, or regularly misleading websites that are shared on Facebook and social media. Some of these websites may rely on “outrage” by using distorted headlines and decontextualized or dubious information in order to generate likes, shares, and profits.

CATEGORY 2: Some websites on this list may circulate misleading and/or potentially unreliable information, or present opinion pieces as news.

CATEGORY 3: Other websites on this list sometimes use hyperbolic or clickbait-y headlines and/or social media descriptions, but may otherwise circulate reliable and/or verifiable information.

CATEGORY 4: Other sources on this list are purposefully fake with the intent of satire/comedy, which can offer important critical commentary on politics and society, but have the potential to be shared as actual/literal news. I’m including them here, for now, because 1.) they have the potential to perpetuate misinformation based on different audience (mis)interpretations and 2.) to make sure anyone who reads a story by The Onion, for example, understands its purpose.

Professor Zimdars notes that not all of the sites on this list should be considered fake; at the very least, the term “fake” covers a spectrum of nuanced meanings.  That spectrum, running from outright false to merely misleading to openly satirical, is why Zimdars has sorted her list into categories.  She urges her students to critically evaluate any site.  Nevertheless, this list represents a good first compilation of questionable news sites.

Link to the list here: False, Misleading, Clickbait-y, & Satirical “News” Sources.

Information overload makes social media a swamp of fake news

Re-posted from Ars Technica by Cathleen O’Grady on 6/29/17

Low attention and a flood of data are serious problems for social networks.

Once upon a time, it wasn’t crazy to think that social media would allow great ideas and high-quality information to float to the top while the dross would be drowned in the noise. After all, when you share something, you presumably do so because you think it’s good. Everybody else probably thinks what they’re sharing is good, too, even if their idea of “good” is different. But it’s obvious that poor-quality information ends up being extremely popular. Why?

That popularity might be a product of people’s natural limitations: in the face of a flood of information and finite attention, poor discrimination between high- and low-quality content ends up being a virtual certainty. That’s what a simulation of social media suggests, at least.

A group of researchers from the Shanghai Institute of Technology, Indiana University, and Yahoo wanted to investigate the tradeoffs that happen on social media. Their simulated social network allowed them to tweak different parameters to see what would happen.

You want a level of information you can deal with…

In the simulation, agents sit in social networks, connected to other agents that are close to them. Agents can pass messages to each other through these networks. These messages have a rating representing their quality. Because quality is a slippery, subjective thing that’s difficult to get a computer to understand, the simulation is pretty loose about what “quality” means; the ratings might represent truth, originality, or some other value, and all the agents in the model agree on the rating. That’s obviously a drastic simplification of the real world, but having a simple value makes it easy to observe how quality affects sharing.

Agents can make up their own messages or share messages sent to them by their neighbors. If a message is high quality, agents are more likely to pass it along. The model let the researchers tweak the rate of message creation, up to where it would simulate information overload: if agents are creating a high volume of new messages, the other agents could get overwhelmed with information. If not much new is being created, existing information gets bounced around much more.

The amount of information an agent can manage can also be tweaked. Each agent has a memory that can hold only a certain number of the most recent messages produced by neighbors. If that attention span is large, an agent can look through a large number of messages and only share the highest-quality ones. If the memory is small, the menu of messages to share is much smaller.
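
To make this setup concrete, here is a minimal sketch of an agent-based simulation in this spirit. The network shape, parameter values, and quality-weighted resharing rule are all illustrative assumptions, not the paper’s actual implementation:

    import random
    from collections import deque

    # Illustrative parameters -- not values from the paper.
    N_AGENTS = 100      # agents in the network
    N_NEIGHBORS = 5     # links per agent (simple ring lattice)
    MEMORY_SIZE = 10    # "attention": recent messages an agent can hold
    P_NEW = 0.2         # chance of creating a new message vs. resharing
    N_STEPS = 10_000

    # Each agent's memory is a bounded queue; old messages fall out.
    memory = [deque(maxlen=MEMORY_SIZE) for _ in range(N_AGENTS)]
    neighbors = [[(i + k) % N_AGENTS for k in range(1, N_NEIGHBORS + 1)]
                 for i in range(N_AGENTS)]
    share_counts = {}   # message id -> times shared
    quality_of = {}     # message id -> agreed-upon quality in [0, 1]
    next_id = 0

    def deliver(msg, sender):
        """Count a share and push the message into neighbors' memories."""
        share_counts[msg] = share_counts.get(msg, 0) + 1
        for j in neighbors[sender]:
            memory[j].append(msg)

    for _ in range(N_STEPS):
        agent = random.randrange(N_AGENTS)
        if random.random() < P_NEW or not memory[agent]:
            # Create a brand-new message with a random quality score.
            msg = next_id
            next_id += 1
            quality_of[msg] = random.random()
            deliver(msg, agent)
        else:
            # Reshare from memory, favoring higher-quality messages.
            msgs = list(memory[agent])
            weights = [quality_of[m] for m in msgs]
            if sum(weights) > 0:
                deliver(random.choices(msgs, weights=weights)[0], agent)

Raising P_NEW floods the network with new messages (information overload), while shrinking MEMORY_SIZE models scarcer attention.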

Playing around with these numbers allowed the researchers to observe how many times a message was shared between its introduction and eventual fade. They found that, when the system had low information overload, higher-quality messages had a much greater chance of popularity. But when information overload was high, quality didn’t make that much of a difference anymore. So, in the face of information overload, the network as a whole was worse at discriminating quality.

Attention also played a role: with higher attention, messages didn’t suddenly go viral; their popularity grew more slowly over time. Only the highest-quality messages had bouts of sudden popularity. But with lower attention, poor-quality messages had a greater chance of attaining viral fame.
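
One way to quantify how well the network discriminates quality, continuing the sketch above, is the rank correlation between each message’s quality and its final share count. This proxy is our assumption, not necessarily the paper’s exact measure:

    def kendall_tau(xs, ys):
        """Naive O(n^2) Kendall rank correlation between two lists."""
        concordant = discordant = 0
        n = len(xs)
        for i in range(n):
            for j in range(i + 1, n):
                s = (xs[i] - xs[j]) * (ys[i] - ys[j])
                if s > 0:
                    concordant += 1
                elif s < 0:
                    discordant += 1
        pairs = n * (n - 1) / 2
        return (concordant - discordant) / pairs

    msgs = list(share_counts)
    tau = kendall_tau([quality_of[m] for m in msgs],
                      [share_counts[m] for m in msgs])
    print(f"quality-popularity rank correlation: {tau:.2f}")

A value near 1 means popularity tracks quality closely; a value near 0 means the network has lost its ability to tell good messages from bad ones.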

…but also a healthy diversity of ideas

In an ideal world, you’d want not just high-quality information but also a diversity of thinking. Having a lot of messages competing for attention and popularity is probably a healthy thing for a thriving marketplace of ideas. The model found that a low information load leads to better quality discrimination, but it also leads to low idea diversity.

Is there a place where the tradeoff reaches a good balance? According to the model, yes: you can have high diversity and good quality discrimination, but only if attention is also high. “When [attention] is large, there is a region where the network can sustain very high diversity with relatively little loss of discriminative power,” the researchers write.
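
Diversity can likewise be made concrete. A plausible stand-in (our assumption, not necessarily the paper’s definition), again continuing the sketch, is the Shannon entropy of how shares are spread across messages:

    import math

    def diversity(counts):
        """Shannon entropy of the share distribution: high when attention
        is spread over many messages, low when a few dominate."""
        total = sum(counts)
        probs = [c / total for c in counts if c > 0]
        return -sum(p * math.log(p) for p in probs)

    print(f"idea diversity: {diversity(list(share_counts.values())):.2f}")

In these terms, lowering the information load raises the quality-popularity correlation but collapses this entropy; per the researchers, only high attention lets both stay high at once.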

Real-world data makes it look even worse

Models always rely on simplifying some assumptions about the real world. One of the most important assumptions with this research is that everyone produces new ideas at the same rate and has the same attention span. Obviously, that isn’t true, so the researchers went looking for some way to make these parameters more realistic.

The researchers used data from Twitter to estimate information overload, looking at a million users’ rate of tweeting vs. retweeting. Different people had different ratios, which the researchers plugged into their model. For attention, they looked at 10 million scrolling sessions from Tumblr, counting the number of times a user stopped while scrolling through their feed. This was a proxy for how many items the user paid close attention to. These numbers, also plugged into the model, gave the agents varying attention spans that closely mimicked the real world.
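
In terms of the earlier sketch, this amounts to replacing the single P_NEW and MEMORY_SIZE constants with per-agent values. The long-tailed Pareto draws below are purely a stand-in, since the real per-user distributions came from the Twitter and Tumblr datasets:

    # Per-agent heterogeneity: stand-in long-tailed distributions, not
    # the actual Twitter/Tumblr data used in the paper.
    p_new = [min(1.0, 0.05 * random.paretovariate(2.0))
             for _ in range(N_AGENTS)]
    mem_size = [max(1, min(50, int(random.paretovariate(1.5))))
                for _ in range(N_AGENTS)]
    memory = [deque(maxlen=mem_size[i]) for i in range(N_AGENTS)]
    # ...then use p_new[agent] in place of P_NEW in the main loop.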

The results of adding greater realism to the simulation were appalling: the network got really, really bad at picking out the highest-quality messages to go viral. “This finding suggests that the heterogeneous information load and attention of the real world lead to a market that is incapable of discriminating information on the basis of quality,” the researchers write.

The difficult thing in models like this is checking that they definitely apply to the real world. It’s one thing to plug in numbers taken from the real world; it’s another to assume that Twitter or Facebook really do behave the same way as these simulated networks.

A real-world check is difficult in a model like this, because a real-world measure of quality is hard to find. Still, the researchers did a rough-and-ready check by using data from an article-quality rating scheme. They compared how often highly rated articles were shared on social media with how often poor-quality articles were, and there was no difference: both were just as likely to go viral. That suggests that the real world is just as bad at discriminating quality as the simulated network.

There’s still more work to be done on models like these. For instance, this simulation doesn’t capture the echo chambers that exist on social media, so the role those chambers might play is not clear.

So, what do we do to improve the situation? The researchers suggest that limiting the amount of content in social media feeds might be a start; they recommend controlling bot abuse, although it’s not obvious that this would drastically reduce the information firehose we all face on a daily basis. Trying to maintain a high level of skepticism about the information that drifts into your path might be the only defense for now. “Our main finding,” write the researchers, “is that survival of the fittest is far from a foregone conclusion where information is concerned.”

Nature Human Behaviour, 2017. DOI: 10.1038/s41562-017-0132 (About DOIs).


Report Released: Media Manipulation & Disinformation Online

The Data & Society Research Institute released a report on May 15 entitled Media Manipulation and Disinformation Online.  The report details the media’s vulnerability to manipulation by radicalized online groups.  Findings from the executive summary include:

  • Internet subcultures take advantage of the current media ecosystem to manipulate news frames, set agendas, and propagate ideas.
  • Far-right groups have developed techniques of “attention hacking” to increase the visibility of their ideas through the strategic use of social media, memes, and bots, as well as by targeting journalists, bloggers, and influencers to help spread content.
  • The media’s dependence on social media, analytics and metrics, sensationalism, novelty over newsworthiness, and clickbait makes them vulnerable to such media manipulation.
  • While trolls, white nationalists, men’s rights activists, gamergaters, the “alt-right,” and conspiracy theorists may diverge deeply in their beliefs, they share tactics and converge on common issues.
  • The far-right exploits young men’s rebellion and dislike of “political correctness” to spread white supremacist thought, Islamophobia, and misogyny through irony and knowledge of internet culture.
  • Media manipulation may contribute to decreased trust of mainstream media, increased misinformation, and further radicalization.

The full report may be accessed HERE.

Data & Society is a research institute in New York City that is focused on the social and cultural issues arising from data-centric technological development. (from the site)