18 articles about AI Fakes    

These ISIS news anchors are AI fakes. Their propaganda is real. – Washington Post

Generative AI poses threat to election security, intelligence agencies warn – CBS News

Bank of Italy warns against AI-powered fake videos – Reuters

Google's AI Watermarks Will Identify Deepfakes – Dark Reading

In novel case, U.S. charges man with making child sex abuse images with AI – Washington Post

Voice-cloning technology bringing a key Supreme Court moment to 'life' – Associated Press

Flood of Fake Science Forces Multiple Journal Closures – Wall Street Journal

New UK law targets “despicable individuals” who create AI sex deepfakes – Ars Technica

She was accused of faking an incriminating video but nothing was fake after all – The Guardian

TikTok’s AI watermarks could help curb deepfakes, but it’s no panacea – Semafor

OpenAI Releases ‘Deepfake’ Detector to Disinformation Researchers – New York Times 

Microsoft and OpenAI launch $2M fund to counter election deepfakes – TechCrunch

OpenAI Says It Can Now Detect Images Spawned by Its Software—Most of the Time – Wall Street Journal

How AI-generated disinformation might impact this year’s elections and how journalists should report on it – Reuters Institute  

How Generative AI Is Helping Fact-Checkers Flag Election Disinformation, But Is Less Useful in the Global South – Global Investigative Journalism Network  

In Arizona, election workers trained with deepfakes to prepare for 2024 – Washington Post

Excessive use of words like ‘commendable’ and ‘meticulous’ suggests ChatGPT has been used in thousands of scientific studies – EL PAÍS English

Fooled by AI? These firms sell deepfake detection – Washington Post

Tech created a global village — and puts us at each other’s throats

As we get additional information about others, we place greater stress on the ways those people differ from us than on the ways they resemble us, and this inclination to emphasize dissimilarities over similarities strengthens as the amount of information accumulates. On average, we like strangers best when we know the least about them.

The effect intensifies in the virtual world, where everyone is in everyone else’s business. Social networks like Facebook and messaging apps like Snapchat encourage constant self-disclosure. Because status is measured quantitatively online, in numbers of followers, friends, and likes, people are rewarded for broadcasting endless details about their lives and thoughts through messages and photographs. To shut up, even briefly, is to disappear. One study found that people share four times as much information about themselves when they converse through computers as when they talk in person.

Progress toward a more amicable world will require not technological magic but concrete, painstaking, and altogether human measures: negotiation and compromise, a renewed emphasis on civics and reasoned debate, a citizenry able to appreciate contrary perspectives. At a personal level, we may need less self-expression and more self-examination.

Technology is an amplifier. It magnifies our best traits, and it magnifies our worst.

Nicholas Carr writing in the Boston Globe

Technology that makes us less human

Like an episode out of Black Mirror, the machines have arrived to teach us how to be human even as they strip us of our humanity. Artificial intelligence could significantly diminish humanity, even if machines never ascend to superintelligence, by sapping the ability of human beings to do human things. “We’re seeing a general trend of selling AI as ‘empowering,’ a way to extend your ability to do something, whether that’s writing, making investments, or dating,” AI expert Leif Weatherby explained. “But what really happens is that we become so reliant on algorithmic decisions that we lose oversight over our own thought processes and even social relationships.” What makes many applications of artificial intelligence so disturbing is that they don’t expand our mind’s capacity to think, but outsource it. - Tyler Austin Harper writing in The Atlantic

Performance Ratings Don’t Tell Us What You Think They Do

A significant body of research has demonstrated that each of us is a disturbingly unreliable rater of other people’s performance. The effect that ruins our ability to rate others has a name: the Idiosyncratic Rater Effect, which tells us that my rating of you on a quality such as “potential” is driven not by who you are, but instead by my own idiosyncrasies—how I define “potential,” how much of it I think I have, how tough a rater I usually am. This effect is resilient — no amount of training seems able to lessen it. And it is large — on average, 61% of my rating of you is a reflection of me. In other words, when I rate you, on anything, my rating reveals to the world far more about me than it does about you.  

Revealing ourselves without realizing it

When we talk about ourselves, telling others who we are, researchers say the same part of our brain lights up as when we brainstorm ideas, discuss our dreams, or speak extemporaneously. Scientists at Johns Hopkins University in Baltimore found this to be the case even when musicians improvise: the same area of the brain is at work in these off-handed dispatches, displaying a musical autobiography of sorts.

When we are engaged in these intensely personal pursuits, we not only reveal intimate parts of ourselves; researchers say a part of the brain involved in self-control and planning also shuts down.

Stephen Goforth

Eros as God

We must not give unconditional obedience to the voice of Eros when he speaks most like a god. The real danger seems to me not that the lovers will idolize each other but that they will idolize Eros himself. The couple whose marriage will certainly be endangered by (lapses), and possibly ruined, are those who have idolized Eros. They expected that mere feeling would do for them, and permanently, all that was necessary. When this expectation is disappointed, they throw the blame on Eros or, more usually, on their partners.

CS Lewis
The Four Loves

23 Articles about Journalism & AI: Uses, Ethics, & Dangers

66% of leaders wouldn't hire someone without AI skills, report finds – ZDNET

Meet AdVon, the AI-Powered Content Monster Infecting the Media Industry – Futurism

New AI and Large Language Model Tools for Journalists: What to Know – Global Investigative Journalism Network

AI is disrupting the local news industry. Will it unlock growth or be an existential threat? – Poynter

How Generative AI Is Helping Fact-Checkers Flag Election Disinformation, But Is Less Useful in the Global South – Global Investigative Journalism Network

AI-generated news is here from SF-based Hoodline. What will that mean? – San Francisco Chronicle

News industry divides over AI content rights – Axios

8 major newspapers join legal backlash against OpenAI, Microsoft – Washington Post

The business of news in the AI economy – Wiley Online Journal

Nearly 70% of newsroom staffers are using A.I. in some capacity, leveraging the technology to generate headlines, edit stories, and perform other tasks – Poynter  

How AI-generated disinformation might impact this year’s elections and how journalists should report on it – Reuters Institute  

AI is already reshaping newsrooms, AP study finds – Poynter

AI news that’s fit to print: The New York Times’ editorial AI director on the current state of AI-powered journalism – Harvard’s Nieman Lab

Watermarks are Just One of Many Tools Needed for Effective Use of AI in News – Innovating  

We’re not ready for a major shift in visual journalism – Poynter

Axios Sees A.I. Coming, and Shifts Its Strategy – New York Times 

Newsweek is making generative AI a fixture in its newsroom – Harvard’s Nieman Lab

Your newsroom needs an AI ethics policy. Start here. – Poynter

Is AI about to kill what’s left of journalism? – Financial Times

Pulitzer’s AI Spotlight Series will train 1,000 journalists on AI accountability reporting – Harvard’s Nieman Lab

AI newsroom guidelines look very similar, says a researcher who studied them. He thinks this is bad news – Reuters Institute

AI’s Most Pressing Ethics Problem – Columbia Journalism Review

Impact of AI on Local News Models – Local News Initiative

Love as Dependency

The second most common misconception about love is the idea that dependency is love. Its effect is seen most dramatically in an individual who makes an attempt or gesture or threat to commit suicide, or who becomes incapacitatingly depressed, in response to rejection or separation from a spouse or lover.

Such a person says, “I do not want to live, I cannot live without my husband (wife, girlfriend, boyfriend), I love him (or her) so much.” And when I respond, as I frequently do, “You are mistaken; you do not love your husband (wife, girlfriend, boyfriend).” “What do you mean?” is the angry question. “I just told you I can’t live without him (or her).” I try to explain. “What you describe is parasitism, not love. When you require another individual for your survival, you are a parasite on that individual. There is no choice, no freedom involved in your relationship. It is a matter of necessity rather than love. Love is the free exercise of choice. Two people love each other only when they are quite capable of living without each other but choose to live with each other.”

M. Scott Peck, The Road Less Traveled

Mental shortcuts work — until problems get complex

Franck Schuurmans, a guest lecturer at the Wharton School of the University of Pennsylvania, has captivated audiences with explanations of why people make irrational business decisions. A simple exercise he uses in his lectures is to provide a list of 10 questions such as, “In what year was Mozart born?” The task is to select a range of possible answers so that you have 90 percent confidence that the correct answer falls within your chosen range. Mozart was born in 1756, so, for example, you could narrowly select 1730 to 1770, or you could more broadly select 1600 to 1900. The range is your choice. Surprisingly, the vast majority answer correctly on no more than five of the 10 questions. Why do they score so poorly? Most choose bounds that are too narrow. The lesson is that people have an innate desire to be correct even when there is no penalty for being wrong.
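The scoring behind this exercise can be sketched in a few lines of Python (the function name and example data below are illustrative, not from Schuurmans' lecture): a well-calibrated quiz-taker should see about 9 of 10 true answers land inside their 90 percent confidence ranges.

```python
def calibration_score(answers, ranges):
    """Count how many true answers fall inside the guessed (low, high) ranges."""
    return sum(low <= ans <= high for ans, (low, high) in zip(answers, ranges))

# Mozart was born in 1756; a narrow range captures it, a mistakenly
# overconfident range misses it entirely.
print(calibration_score([1756], [(1730, 1770)]))  # -> 1
print(calibration_score([1756], [(1600, 1700)]))  # -> 0
```

Scoring 5 or fewer out of 10 on such a quiz, as most people do, signals overconfidence: the chosen ranges were far narrower than a true 90 percent confidence interval.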

Gary Cokins