Overclaiming

Research reveals that the more people think they know about a topic in general, the more likely they are to claim knowledge of completely made-up terms and false facts, a phenomenon known as "overclaiming." The findings are published in Psychological Science, a journal of the Association for Psychological Science.

In one set of experiments, the researchers tested whether individuals who perceived themselves to be experts in personal finance would be more likely to claim knowledge of fake financial terms.

As expected, people who saw themselves as financial wizards were most likely to claim knowledge of the bogus finance terms.

"The more people believed they knew about finances in general, the more likely they were to overclaim knowledge of the fictitious financial terms," psychological scientist Stav Atir of Cornell University, first author on the study, says. "The same pattern emerged for other domains, including biology, literature, philosophy, and geography."

"For instance," Atir explains, "people's assessment of how much they know about a particular biological term will depend in part on how much they think they know about biology in general."

In another experiment, the researchers warned one set of 49 participants that some of the terms in a list would be made up. Even after receiving the warning, the self-proclaimed experts were more likely to confidently claim familiarity with fake terms. 

from Science Daily

13 things journalists need to know about AI

A good rule of thumb is to start from the assumption that any story you hear about using AI in real-world settings is, beneath everything else, a story about labor automation.  Max Read’s blog 

This new era requires that newsrooms develop new, clear standards for how journalists will — and won’t — use AI for reporting, writing and disseminating the news. Newsrooms need to act quickly but deliberatively to create these standards and to make them easily accessible to their audiences. Poynter

Any assistance provided to these (AI) companies (by news organizations) could ultimately help put journalists out of business, and the risk remains that, once the media’s utility to the world of AI has been exhausted, the funding tap will quickly be turned off. Media executives can argue that having a seat at the table is better than not having one, but it might just make it easier for big tech to eat their lunch. Columbia Journalism Review 

Google is testing a product that uses artificial intelligence technology to produce news stories, pitching it to news organizations including The New York Times, The Washington Post and The Wall Street Journal’s owner, News Corp, according to three people familiar with the matter. New York Times

“Reporters tend to just pick whatever the (AI) author or the model producer has said,” Abeba Birhane, an AI researcher and senior fellow at the Mozilla Foundation, said. “They just end up becoming a PR machine themselves for those tools.” Jonathan Stray, a senior scientist at the Berkeley Center for Human-Compatible AI and former AP editor, said, “Find the people who are actually using it or trying to use it to do their work and cover that story, because there are real people trying to get real things done.” Columbia Journalism Review

Journalists’ greatest value will be in asking good questions and judging the quality of the answers, not writing up the results. Wall Street Journal 

NewsGuard, an organization tracking misinformation, has identified 49 supposed news sites as "almost entirely written by artificial intelligence software." The Guardian

Recently, AI developers have claimed their models perform well not only on a single task but in a variety of situations … In the absence of any real-world validation, journalists should not believe the company’s claims. Columbia Journalism Review

If media outlets truly wanted to learn about the power of AI in newsrooms, they could test tools internally with journalists before publishing. Instead, they’re skipping to the potential for profit. The Verge

One of the main ways to combat misinformation is to make it clearer where a piece of content was generated and what happened to it along the way. The Adobe-led Content Authenticity Initiative aims to help image creators do this. Microsoft announced earlier this year that it will add metadata to all content created with its generative AI tools. Google, meanwhile, plans to share more details on the images catalogued in its search engine. Axios 

In the newsroom, some media companies have already tried to implement generative AI to create content that is easily automated, such as newsletters and real estate reports. The tech news site CNET started quietly publishing articles explaining financial topics using "automated technology" – a stylistic euphemism for AI. CNET had to issue corrections on 41 of the 77 stories after uncovering errors, despite the articles being reviewed by humans prior to publication. Some of the errors came down to basic math. It's mistakes such as these that make many journalists wary of using AI tools beyond simple transcription or programming a script. Columbia Journalism Review

OpenAI and the Associated Press are announcing a landmark deal for the ChatGPT maker to license the news organization's archive of news stories. Axios

AI in The Newsroom (video), International News Media Association

Fake scientific papers are alarmingly common

When neuropsychologist Bernhard Sabel put his new fake-paper detector to work, he was “shocked” by what it found. After screening some 5000 papers, he estimates up to 34% of neuroscience papers published in 2020 were likely made up or plagiarized; in medicine, the figure was 24%.

Jeffrey Brainard writing in Science Magazine

The Motivation behind Fake News

Fake news may be a fight not over truth, but power, according to Mike Ananny, a media scholar at the University of Southern California. Fake news “is evidence of a social phenomenon at play — a struggle between [how] different people envision what kind of world that they want.”

Ideological fake news lands in the social media feeds of audiences who are already primed to believe whatever story confirms their worldview.

Brooke Borel writing for FiveThirtyEight

What a TikTok Ban Won't Do

While Congress has been up in arms about TikTok, it has failed to pass even the most basic comprehensive privacy legislation to protect our data from being misused by all the tech companies that collect and mine it.

The even deeper problem is that putting TikTok under state control, banning it or selling it to a U.S. company wouldn’t solve the threats that the app is said to pose. If China wants to obtain data about U.S. residents, it can still buy it from one of the many unregulated data brokers that sell granular information about all of us. If China wants to influence the American population with disinformation, it can spread lies across the Big Tech platforms just as easily as other nations can.

It would be much more effective for China to just hack every home's Wi-Fi router — most of which are manufactured in China and are notoriously insecure — and obtain far more sensitive data than it can get from knowing which videos we swipe on TikTok.

Investigative journalist Julia Angwin writing in the New York Times

Your Inner Voice Can Mislead You

It’s very disturbing when you realize that our brains are a fiction-making machine. We make up all kinds of crazy things to help us feel better and to justify the decisions that we’ve made. The inner voice is the one who arbitrates a lot of that maneuvering around the truth, so we have to be very careful. It’s a master storyteller and far more important than you may realize.

Jim Loehr, performance psychologist and cofounder of the Human Performance Institute, quoted in Fast Company

How Generative AI could spawn a new generation of Disinformation  

There is reason to believe that AI could really be the new variant of disinformation that makes lies about future elections, protests, or mass shootings both more contagious and immune-resistant. Consider, for example, the raging bird-flu outbreak, which has not yet begun spreading from human to human. A political operative—or a simple conspiracist—could use programs similar to ChatGPT and DALL-E 2 to easily generate and publish a huge number of stories about Chinese, World Health Organization, or Pentagon labs tinkering with the virus, backdated to various points in the past and complete with fake "leaked" documents, audio and video recordings, and expert commentary. The result would be a synthetic history of a government-weaponized bird flu, ready to go if avian flu ever began circulating among humans. A propagandist could simply connect the news to their entirely fabricated—but fully formed and seemingly well-documented—backstory seeded across the internet, spreading a fiction that could consume the nation's politics and public-health response. The power of AI-generated histories, Horvitz told me, lies in "deepfakes on a timeline intermixed with real events to build a story."

Matteo Wong writing in The Atlantic

Deepfakes Flourish

Deepfake technology — software that allows people to swap faces, voices and other characteristics to create digital forgeries — has been used in recent years to make a synthetic substitute of Elon Musk that shilled a cryptocurrency scam, to digitally "undress" more than 100,000 women on Telegram and to steal millions of dollars from companies by mimicking their executives' voices on the phone.

In most of the world, the authorities can’t do much about it. Even as the software grows more sophisticated and accessible, few laws exist to manage its spread.

Read more about deepfakes in the New York Times

The 3 Things Far-Right & Far-Left Political News Sources have in Common

When researchers analyzed almost 6,000 political news stories produced by partisan and nonpartisan media outlets in 2021, three things became clear:

  • Media outlets with extreme biases — regardless of whether it was a conservative or liberal bias — tended to use shorter sentences and less formal language than nonpartisan outlets.

  • Mainstream news organizations, as a whole, wrote at a higher reading level.

  • Far-right and far-left outlets took a more negative tone than nonpartisan outlets. They generally had a lower ratio of positive to negative words.

The researchers describe their findings in a paper forthcoming in Journalism Studies, “At the Extremes: Assessing Readability, Grade Level, Sentiment, and Tone in US Media Outlets.”
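
The paper's exact instruments aren't reproduced here, but as a rough, hypothetical illustration of the two kinds of measures involved, the sketch below computes a Flesch-Kincaid grade level (a standard readability formula) and a crude positive-to-negative word ratio for a short passage. The sample text and the tiny word lists are placeholders, not the researchers' lexicons.

```python
import re

# Rough sketch: a standard readability formula plus a crude tone ratio.
# Flesch-Kincaid grade level = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59

def count_syllables(word):
    # Heuristic: count groups of vowels; real tools use pronunciation dictionaries.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade_level(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59

# Tiny placeholder word lists; real sentiment lexicons are far larger.
POSITIVE = {"good", "gain", "hope", "improve", "success"}
NEGATIVE = {"bad", "crisis", "fail", "fear", "threat"}

def tone_ratio(text):
    words = [w.lower() for w in re.findall(r"[A-Za-z']+", text)]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return pos / neg if neg else float("inf")

sample = "The crisis deepened. Officials fear the plan will fail, though some hope remains."
print(round(fk_grade_level(sample), 1))   # approximate reading grade level
print(round(tone_ratio(sample), 2))       # ratio of positive to negative words
```

On measures of this general kind, the study found that the most partisan outlets scored lower on both reading level and the positive-to-negative ratio.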

Read the full article from The Journalist's Resource here.

A ChatGPT-assisted academic paper

A ChatGPT-assisted paper is posted to the arXiv. The topic is AI use in drug discovery and the authors conclude, “AI has the potential to revolutionize the drug discovery process.”

The paper is an example of how the ChatGPT bot might be used in academic papers and offers a potential model for AI-assistance transparency. Their conclusion:

As a result of this experiment, we can state that ChatGPT is not a useful tool for writing reliable scientific texts without strong human intervention. One of the main reasons why this AI is not yet ready to be used in the production of scientific articles is its lack of ability to evaluate the veracity and reliability of the information it processes. A real risk is that predatory journals may exploit the quick production of scientific articles to generate large amounts of low-quality content. Overall, addressing the risks associated with the use of AI in the production of scientific articles will require a combination of technical solutions, regulatory frameworks, and public education.

Critical Ignoring complements Critical Thinking

On the web, where a witches’ brew of advertisers, lobbyists, conspiracy theorists and foreign governments conspire to hijack attention, the same strategy spells doom. Online, critical ignoring is just as important as critical thinking. 

That’s because, like a pinball bouncing from bumper to bumper, our attention careens from notification to text message to the next vibrating thing we must check. A flood of information depletes attention and fractures the ability to concentrate.

Sam Wineburg writing in The Conversation

Our Favorite Conclusions

It would be unfair for teachers to give the students they like easier exams than those they dislike, for federal regulators to require that foreign products pass stricter safety tests than domestic products, or for judges to insist that the defense attorney make better arguments than the prosecutor.

And yet, this is just the sort of uneven treatment most of us give to facts that confirm and disconfirm our favored conclusions.

For example, volunteers in one study were told that they had performed very well or very poorly on a social-sensitivity test and were then asked to assess two scientific reports—one that suggested the test was valid and one that suggested it was not. Volunteers who had performed well on the test believed that the studies in the validating report used sounder scientific methods than did the studies in the invalidating report, but volunteers who performed poorly on the test believed precisely the opposite.

To ensure that our views are credible, our brain accepts what our eye sees. To ensure that our views are positive, our eye looks for what our brain wants.

Daniel Gilbert, Stumbling on Happiness

30 Articles about Spotting Fake News

6 Tips for Identifying Fake News, Sabrina Stierwalt, Quick & Dirty Tips

6 tips to help you detect fake science news, Marc Zimmer, The Conversation

As Fake News Spreads Lies, More Readers Shrug at the Truth, Sabrina Tavernise, New York Times

Beware partisan ‘pink slime’ sites that pose as local news, Margaret Sullivan 

The Breaking News Consumer's Handbook, WNYC Studios

‘Cheap fakes’: Viral videos keep clipping Biden’s words out of context, Bill McCarthy, Politifact  

The Conspiracy Theory Handbook, Stephan Lewandowsky and John Cook

Critics of Dan Rather’s tips about fake news brought up his past. But the points are still solid, Alex Horton, Washington Post 

The Fact Checker’s guide to manipulated video, Washington Post

Fake news and the ugly rise of sponsored content, John Pelle, PR Daily

False, Misleading, Clickbait-y, and/or Satirical “News” Sources, Melissa Zimdars  

A Finder's Guide To Facts, Steve Inskeep, NPR

How I Detect Fake News, Tim O'Reilly, O’Reilly Media

How Science Fuels a Culture of Misinformation, Joelle Renstrom, Open Mind

How to fight lies, tricks, and chaos online, Adi Robertson, The Verge

How to Outsmart Election Disinformation, Karim Doumar & Cynthia Gordy Giwa, ProPublica

How to Spot Fake News, Eugene Kiely and Lori Robertson, FactCheck.org

How to Spot Visualization Lies, Nathan Yau, Flowing Data

How to Stay Informed Without Getting Paralyzed by Bad News, Jacqueline Lekachman, Wired  

Hundreds of 'Pink Slime' News Outlets are distributing algorithmic stories and conservative talking points, Priyanjana Bengani, Columbia Journalism Review

Infographics Lie. Here's How To Spot The B.S., Randy Olson, Fast Company

In disasters, people are abandoning official info for social media. Here's how to know what to trust, Stan Karanasios, Peter Hayes, The Conversation

Media Manipulation and Disinformation Online, Alice Marwick, Rebecca Lewis   

A philosopher explains America’s “post-truth” problem, Sean Illing, Vox

Photographs cause false memories for the news, Deryn Strange, Maryanne Garry, Daniel M Bernstein, & D. Stephen Lindsay, Semantic Scholar

Searching for Alternative Facts: Analyzing Scriptural Inference in Conservative News Practices, Francesca Tripodi, Data & Society

Simple tips to help you spot online fraud, Washington Post

Snopes' Field Guide to Fake News Sites and Hoax Purveyors, Kim LaCapria, Snopes  

Ten Questions for Fake News Detection, Checkology.org  

Want to resist the post-truth age? Learn to analyze photos like an expert would, Nicole Dahmen & Don Heider, Quartz

More about fake news

15 Tools for Spotting Fake News


Ad Fontes Media

Producer of The Media Bias Chart® which rates media sources in terms of political bias and reliability. 

Bellingcat

Investigative search network for citizen journalists using open-source information such as videos, maps and pictures.

Botcheck

Suggests whether a Twitter account is likely to be a bot. 

Botometer

Checks the activity of a Twitter account and gives it a score based on how likely the account is to be a bot. (A minimal scripting sketch appears after this list.)

Facterbot

This Facebook Messenger chatbot aims to deliver fact checks.

Google Reverse Image Search

Check the history of a photo: When it was first used and where.

Hoaxy

Visualizes the spread of articles across social media.

Make Adverbs Great Again

Helps Twitter users determine if an account is a bot.  

NewsBot

This Facebook Messenger app identifies the political leaning of an article.

NewsGuard

Steven Brill’s site that uses trained journalists to rate news items and information sites. Produces an email newsletter that tracks misinformation.

RevEye  

A Chrome reverse image search engine add-on. 

Sensity

This tool is designed to spot fake human faces in pictures and videos. Engineers say they trained its detectors using hundreds of thousands of deepfake videos and GAN-generated images. Free.

TinEye

A reverse image search engine to help determine when an image first appeared on the internet. A free extension for Chrome and Firefox browsers.

TrustedNews

A Google Chrome plugin that attempts to identify whether a website is generally trustworthy.

WafflesAtNoon

This website focuses on hoaxes, rumors and odd news.
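
For readers who would rather script a bot check than use a web interface, here is a minimal sketch of querying Botometer through its Python client. It assumes the botometer package is installed and that you have RapidAPI and Twitter credentials; the placeholder keys and the example handle are hypothetical, and API access and response fields have changed over time, so treat this as a sketch rather than a guaranteed recipe.

```python
import botometer  # pip install botometer

# Hypothetical placeholder credentials -- substitute your own RapidAPI and Twitter keys.
rapidapi_key = "YOUR_RAPIDAPI_KEY"
twitter_app_auth = {
    "consumer_key": "YOUR_CONSUMER_KEY",
    "consumer_secret": "YOUR_CONSUMER_SECRET",
}

bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key=rapidapi_key,
    **twitter_app_auth,
)

# Score a single account; the result is a dictionary of bot-likelihood scores.
result = bom.check_account("@example_handle")
print(result)
```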

More about fake news

12 Fact-checking sites

Know Your Fact-Checking Sites

Most fact-checking sites give outsized space to political issues. This misses a great deal of quality journalism published in other areas (health, environment, religion, etc.). Also, a complaint leveled at fact-checkers is that they will sometimes fall into "selection bias"—the tendency to pick apart stories promoting views with which they disagree.

Fact-Checker
FactCheck.org
Hoaxy
Irumor Mill
Media Bias Fact Check
MetaBunk
News Literacy Project
Politifact
Snopes
SourceWatch
Truth or Fiction
Washington Post Fact Checker

More about spotting fake news  

9 Natural Biases that Make us Susceptible to Fake News

KNOW YOUR WEAKNESSES

These biases are broad tendencies rather than fixed traits or universal behavioral laws; not everyone shares them to the same degree, and any given behavior is the product of multiple influences. Agents of fake news try to take advantage of these natural tendencies.

1. FALSE MEMORIES. Studies have shown we are susceptible to false memories. We misremember even our own experiences, let alone historical and cultural events. Planting fake memories has become easier these days with AI-enhanced photo and video forgeries on the internet.

2. CONFIRMATION BIAS. We tend to seek information that confirms what we already believe to be true. Ask yourself: Do I want to believe this report, not because it is well-sourced and reported, but because it fits with what I already believe? One study found about one in ten US adults are willing to accept anything that sounds plausible and fits their preconceptions about the heroes and villains in politics.  

3. CORRELATION VS CAUSATION. Just because events or statistics have a connection doesn’t mean you can assume one causes the other.  

4. WE OVERVALUE NARRATIVE. Adding a story to a fact increases the likelihood that people will believe it—even though each added detail makes the overall claim less likely to be true. We like tidy stories, not ambiguity.

5. FOOLED BY RANDOMNESS. Humans tend to read meaning into the unexpected and the improbable, even where there is none.    

6. OVERSIMPLIFICATION. To avoid conflict and uncomfortable thinking, we oversimplify to reduce tension. Soon, one side looks good, and the other is dismissed as evil.  

7. SUNK COST FALLACY. We hang on to a course of action or idea when we have invested in it, even when circumstances and reasoning show we should abandon it.  

8. GOOGLE SEARCH RELIANCE. Google is not neutral. When you Google something, the algorithm isn't weighing facts but other factors, such as your search history. Google tailors your results to what you want—or what the search engine "thinks" you want. Because of this personalization, you are probably getting different results than the person sitting next to you. Be as critical of search engines as you are of the media. Don't assume the first link or the first page that comes up when you Google something is the best answer to your question.

9. AVAILABILITY BIAS. This shortcut for making quick decisions gives your memories and experiences more credence than they deserve, making it hard to accept new ideas and theories.    

More about Fake News

4 Fake News Signals from outside the website

CLUES FROM OUTSIDE THE SITE

67. YOUR COMMUNITY. There's no substitute for knowing people who are well-informed and will let you know when you've posted something questionable. These are people you can ask when you have doubts. Don't know any experts, researchers, or information junkies from various fields who are critical and helpful? Make some new friends! Developing such a support system is critical for navigating effectively through life. Read some books written by experts.

68. FACT-CHECKING SITES. Does a fact-checking site identify the assertion of the article as a hoax? Check one of the sites listed at the end of this article or type the article’s topic into a search engine and add “hoax” or “fake.”    

69. THE OTHER SIDE. Take time to check sites that do not agree with your politics. If you discover they are wrong or are not addressing the best arguments of your side, that is some confirmation you are on the right side of an issue. Or maybe you will discover a weakness in your own reasoning you haven't considered. Either way, you'll know what other people are consuming, which sharpens your thinking.

70. GOOGLING THE TOPIC. If you do a Google search for a topic, remember that reliable researchers do not write material answering questions like “Did the Holocaust exist?” Instead of decent sources, this type of search will bring up conspiracy theorists. Don’t be misled by a search that frames issues as secret plots and nefarious schemes. 

More fake news signals

12 Fake News Signals from the Publisher

CLUES FROM THE PUBLISHER

55. REPUTATION. Is the writer's reputation at stake if they are wrong? Does the organization risk damage to its reputation or its finances if it becomes known for having promoted false news?

56. RELIABILITY. Has the organization been reliable in the past? Have you read other information from the organization confirmed to be accurate?

57. AMATEURISH. Data collected by an amateur is more error-prone than data collected by a professional scientist. Does a quick web search confirm whether the people who collected and organized the data have a good track record of collecting and distributing data?  

58. RESPONSE TO CRITICS. Does the publisher respond publicly to its critics when there are reasonable questions? Does it acknowledge when the critics have a point?

59. DATA SOURCES. Look closely at the sources of data the publisher uses: is this material provided by for-profit companies, partisan organizations, or advocacy groups? While the material may be accurate, data from groups with agendas require greater scrutiny than data from nonpartisan organizations.

60. PAYING THE WRITERS. Content Farms (or Content Mills, if you like) pay very little in return for lots of writing. When news writers are focused on cranking out material to feed the beast, the quality of the work suffers. If you discover a site is considered a Content Farm by professionals or pays writers very little for their work, that’s a big red flag. 

61. DIVERSE VOICES. Does the news organization offer diverse perspectives in its articles? A professional outlet will make a concerted effort to give voice to various ethnicities and political persuasions. The more a newsroom focuses on a single viewpoint, the greater the likelihood it will leave out significant perspectives from its news coverage.

62. FEEDBACK. Reputable news publishers want readers’ feedback on stories for accuracy and look for help in determining coverage priorities.  

63. AGREEMENT. Do you find yourself agreeing with everything your preferred news outlet says? If so, something is wrong. Find a commentator whose politics don’t match with your own—vary your media consumption to get a balance of perspectives. 

64. EASY STORIES. If a news outlet overlooks stories worth telling in favor of the stories that can be easily told, it may not have the resources to dive into investigative reporting, or it may not aim to get beyond the low-hanging fruit.

65. ANONYMOUS SOURCES. Legitimate news outlets reference unnamed sources only when naming them would endanger them physically or put them in legal jeopardy. Overreliance on anonymous sources should be a red flag to be skeptical of the information, even if it comes from an otherwise trustworthy site.

66. FRAMEWORK. Some sites have a framework for all their stories (like the College Fix, which is focused on college campus outrage). Articles on these sites may leave out moderating information, so stories lean toward the framework.

More fake news signals

12 Fake News Signals from the Website

CLUES FROM THE SITE  

43. ABOUT US. Check the site’s About page for information about who is behind the operation. If you aren’t familiar with the name, look for information about who owns it. For instance, the Russian government owns the RT network. There is a big difference between state media (RT) and public media in a democracy (like the BBC). If a website does not provide information on its mission, staff members, or physical location, it is most likely unreliable. The language used here should be straightforward. If it seems overblown, be skeptical. 

44. ADDRESSES. There should be a mailing address (better yet, a physical address) and an email address. Any site concerned about making factual corrections (and avoiding defamation) needs a way for readers to contact them.

45. LEGAL NOTICES. Look for a legal section on the website. It may be called a “disclaimer.” Satirical websites sometimes disclose this information in those sections. A site without obvious legal notices (such as EEOC or FCC public file information for TV stations) is a red flag.  

46. GOOGLE “FAKE.” Put the website name in quotes and then add “fake.” Something indicating the site is known for publishing fake news might come up.

47. DATES. Look for a date to make sure the event is recent. Sometimes real stories from several years ago are posted as if they were new. This happens with photos as well. Reliable news outlets want readers to know when the information was posted and will usually display the date near the headline. If you are looking at an article on social media, go to the article and look first for a timestamp. Even an old article with good information at the time of publication can be problematic because a study (for instance) may have since been retracted.

48. WEB DESIGN. Poor web design is a red flag. Is the design out of date when compared to other reputable sites? Is the display navigable and professional? 

49. DOWNLOADS. If the website contains advertisements, particularly pop-up ads asking you to download software, it is likely to be unreliable.

50. CORRECTIONS. Does the site make corrections as it receives new information, and does it make those corrections obvious? Typically, a note is added to the top or bottom of a news article when a factual change is made to a story. In a print or broadcast story, the original error should be clearly stated along with the correct information. The editorial process of a legitimate news organization catches and corrects many errors. If you don't see corrections from time to time on a website, that's a red flag. Corrections and updates are a part of journalism.

51. OTHER ARTICLES. Search for the information you know to be false in other articles on the site. Does the site offer quality information on different topics besides the one you are investigating?

52. COMMUNITY POSTS. Some sites allow individuals to post pieces under the banner of the news brand (ex: BuzzFeed Community Posts, Kinja blogs, Forbes blogs). The site editors typically do not vet these posts, making the material suspect.  

53. PREVIOUS FAKE NEWS. Do Wikipedia, Snopes, or other such sites show the website in question as having a connection to spreading false information in the past? Wikipedia is generally pointed in the right direction but can contain some questionable information; the links to other sites it provides can be invaluable in the hunt for the truth.

54. EMBARGOS. Does the publisher respect embargos? This is common practice in media, where information suppliers ask publishers to hold back new information until a certain time. It is considered common courtesy and accepted practice to honor embargos except in unusual circumstances. Ignoring these expectations could be a sign the publisher is more interested in rushing out material than operating by industry standards.  

More Fake News Signals