18 Articles about AI & the Bigger Questions

4 Family Types

There are basically four family types that we all come from. 

1 - The Traditional Family System

The old-fashioned family has a myth that “father knows best.”  This family is under the control of only one member. 

2 - Enmeshed Family System

The frightened family has a myth that it's “us against the world.”  It is emotionally bound together and protective of itself. 

3 - The Fighting Family System

The fighting family has a myth of “every man for himself.”  Each member of this family is strongly individualistic, recognizing no other authority than his (or her) own.

4 - The Open Family System

The healthy family system theme is “all for one and one for all.” The open family system emphasizes the worth, dignity, and uniqueness of each individual, the importance of unconditional positive regard, and the value of positive reinforcement.

7 Quotes about the Limitations of AI

While AI can enhance individual creativity, it might do so at the expense of collective diversity and novelty in creative works. - PsyPost

The AI programs aren’t necessarily doing something no human can; they’re doing something no human can in such a short period of time. Sometimes that’s great, as when an AI model quickly solves a scientific challenge that would have taken a researcher years. Sometimes that’s terrifying, as when they appear capable of replacing entire production studios. - The Atlantic

“On average 30% of the time the AI models spread misinformation when asked about claims in the news. On average 29% of the time, the AI models simply refused to respond to prompts about false claims in the news over the past month. Instead, the models delivered only non-responsive responses.” - NewsGuard

While AI models are starting to replicate musical patterns, it is the breaking of rules that tends to produce era-defining songs. Algorithms ‘are great at fulfilling expectations but not good at subverting them, but that’s what often makes the best music,’ says Eric Drott, a music-theory professor at the University of Texas at Austin. How can we be more human than an AI? Produce creative work that goes beyond the expected, the predictable, the established and popular. - The Atlantic

Recent brain scans suggest we don’t need language to think. A group of neuroscientists now argue that our words are primarily for communicating, not for reasoning. "Separating thought and language could help explain why AI systems like ChatGPT are so good at some tasks and so bad at others. These programs mimic the language network in the human brain — but fall short on reasoning." - New York Times

If an LLM can be trained on 17th-century texts, it can just as easily be trained on QAnon forums, or a dataset that presupposes the superiority of one religion or political system. Use a deeply skewed bubble machine like that to try to understand a book, a movie, or someone's medical records and the results will be inherently biased against whatever — or whoever — got left out of the training material. -Business Insider

At times, A.I. chatbots have stumbled with simple arithmetic and math word problems that require multiple steps to reach a solution, something recently documented by some technology reviewers. The A.I.’s proficiency is getting better, but it remains a shortcoming. -New York Times

How to Pick a Leader

Try to ignore everything that is style and not substance. We should de-emphasize things like credentials, expertise, and experience, especially when they apply to something people have done before but are not so relevant for the future. Most of us are less likely to lose our jobs to AI than to reimagine our current roles while working out how to use AI to add value in different ways. Put less focus on hard skills and more focus on the right soft skills.

Tomas Chamorro-Premuzic, Columbia University

Those who must be in control

Imperative people can have too strong a sense of responsibility. In pushing themselves to do right, they often pay the price of burnout. When others encourage them to slow down, they won’t for fear that a bad habit of laziness might develop. Or perhaps someone will be displeased. The saying, “When you want something done, ask the busiest person in town to do it” may contain a lot of truth. Especially if the busiest person in town doesn’t have the ability to say no.

Les Carter, Imperative People: Those Who Must Be in Control

21 Articles about Data Science & AI from Sept 2024

Vector Embeddings Explained: A Beginner’s Guide to Powerful AI

Why vector databases are more than databases

“Neural network pruning is a key technique for deploying AI models based on deep neural networks on resource-constrained platforms” 

How Perplexity AI is Transforming Data Science and Analytics 

An Intuitive Guide to Integrate SQL and Python for Data Science

AI Definitions: Supervised training 

Hyperspectral processing and geospatial intelligence

How to Import Data in R

AI Definitions: Reinforcement Learning

Seven Common Causes of Data Leakage in Machine Learning

Understanding the Basics of Reinforcement Learning

5 Common Data Science Mistakes and How to Avoid Them

The “latest sign that quantum computing has emerged from its infancy”

Storage technology explained: Vector databases at the core of AI

Researchers looking at the quantity and quality of AI research papers show China is leading the way

“A planned constellation of spacecraft that will allow the U.S. military to rapidly track, target and destroy an enemy’s ground forces”

The risk of war as China & Russia build arsenals of weapons that could target American satellites

A pilot program for accrediting geospatial models for the National System for Geospatial Intelligence

8 Important Quotes About Ethical Issues Raised by AI

A joint mission management center has been set up as intelligence agencies look to streamline satellite imagery delivery

A new way to build neural networks that could make it easier to see how they produce their outputs

17 Free Webinars this week about AI, Journalism & More

Mon, Sept 30 - 756 Violations in Six Months: The State of Press Freedom in 2024

What: A discussion on the findings of the latest MFRR Monitoring Report, which recorded 756 media freedom violations in the first half of 2024. This webinar will explore key trends, including the rise of intimidation and online threats, while diving into the state of media freedom across Europe and candidate countries. The monitoring experts of the Media Freedom Rapid Response consortium will also address anti-media laws, election-related violations, and the role of governments in perpetrating these violations.

Who: Gürkan Özturan, Media Freedom Monitoring Officer, European Centre for Press and Media Freedom; Teona Sekhniashvili, Europe Network and Press Freedom Coordinator, International Press Institute; Antje Schlaf, Mapping Media Freedom Data and Development Manager, European Centre for Press and Media Freedom; Karol Łuczka, Eastern Europe Monitoring and Advocacy Officer, International Press Institute; Camille Magnissalis, Press Freedom Monitoring and Communications Officer, European Federation of Journalists; Ronja Koskinen, Press Freedom Officer, International Press Institute.

When: 8 am, Eastern

Where: Zoom

Cost: Free

Sponsor: Media Freedom Rapid Response

More Info

 

Tue, Oct 1 - Investigating the US Election by Digging into Anti-Democratic Efforts to Sideline Voters

What: Leading experts will explore how journalists can investigate and report on efforts to undermine election certification and restrict voter access. They will provide tools for understanding the legal and political forces at play, and provide insights into the complexities of election law, the role of disinformation, and how to effectively track election integrity in 2024.

Who: Justin Glawe, an independent journalist and the author of the forthcoming book “If I Am Coming to Your Town, Something Terrible Has Happened”; Carrie Levine, Votebeat’s managing editor; Nikhel Sus, deputy chief counsel at Citizens for Responsibility and Ethics in Washington (CREW). The moderator is Gowri Ramachandran, director of elections and security in the Brennan Center’s Elections and Government program.

When: 8 am, Eastern

Where: Zoom

Cost: Free

Sponsor: Global Investigative Journalism Network

More Info

 

Tue, Oct 1 - Election Fact-Checking Tools and Best Practices  

What: We’ll explore ways to fight back against misinformation and disinformation during election coverage. We’ll use tools such as Google Fact-Check Explorer to track fact-checked images and stories. We’ll use reverse image search and other Google tools to check election claims. We’ll break down doctored video and audio with WatchFramebyFrame and Deepfake-o-meter. We’ll also look at the innovative Rolliapp.com to track disinformation spreaders on social channels. Participants get a handout with links to tools and exercise materials you can take to your newsroom.

Who: Mike Reilley, UIC senior lecturer and founder of JournalistsToolbox.ai.    

When: 2 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: RTDNA/Google News

More Info

 

Tue, Oct 1 - Social Media Boot Camp (Day 1)

What: We’ll teach you practical tips and tools for extending your cause and mission via social media.

Who: Kiersten Hill, the driving force behind Firespring’s nonprofit solutions.

When: 2:30 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Firespring

More Info

 

Tue, Oct 1 - Introduction to Solutions Journalism

What: This one-hour webinar will explore the basic principles and pillars of solutions journalism, talk about why it’s important, explain key steps in reporting a solutions story, and share tips and resources for journalists interested in investigating how people are responding to social problems. We will also explore additional resources we have on hand for your reporting, including the Solutions Story Tracker, a database of more than 15,000 stories tagged by beat, publication, author, location, and more: a virtual heat map of what’s working around the world.

When: 6 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Solutions Journalism Network

More Info

 

Wed, Oct 2 - Navigating Artificial Intelligence: Google Gemini Deep Dive

What: Discover the unique capabilities that set Google Gemini apart from other AI models. Explore its integration with Google Search, Workspace, and other products, and see how Gemini's unique features enhance user experiences across the Google ecosystem. 

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Pennsylvania Small Business Development Center

More Info

 

Wed, Oct 2 – What’s Next with AI

What: Experts dive into the impact of AI on America’s businesses, workforce and economy.

Who: MIT economics professor David Autor; Brenda Bown, Chief Marketing Officer, Business AI, SAP; Garry Tan, President & CEO, Y Combinator.

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: The Washington Post   

More Info

 

Wed, Oct 2 - Social Media Boot Camp (Day 2)

What: Learn to use social media to stand out from the crowd. You’ll learn a few advanced social media tips and tricks, elevate your social media presence through micro strategies and activate your advocates.

Who: Kiersten Hill, the driving force behind Firespring’s nonprofit solutions.

When: 2:30 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Firespring

More Info

 

Thu, Oct 3 - Election Countdown: Combating the Most Dangerous Disinformation Trends

What: Top journalists and researchers who battle disinformation will let you know what they’re seeing, what concerns them most, and how voters can identify and counter disinformation during the final countdown.

Who: Nina Jankowicz, co-founder of the American Sunlight Project; Roberta Braga, founder of the Digital Democracy Institute of the Americas; Tiffany Hsu, disinformation reporter for The New York Times; Brett Neely, supervising editor of NPR’s disinformation reporting team; and Samuel Woolley, University of Pittsburgh professor, disinformation researcher and author.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: PEN America

More Info

 

Thu, Oct 3 - COVID conspiracies, flu facts and respiratory realness: The journalists’ guide to debunking health misinformation 

What: This panel of experts will help journalists debunk false narratives about vaccines and respiratory illnesses, find out about the common falsehoods that experts are tracking, and access reliable data and legitimate information about vaccination rates and trends in the communities journalists cover.

Who: CNN Chief Medical Correspondent Dr. Sanjay Gupta; Dr. Céline Gounder, Senior Fellow and Editor-at-Large for Public Health, KFF, creator and host of “Epidemic,” and medical contributor, CBS News; Alex Mahadevan, MediaWise Director and Poynter faculty; Dan Wilson, molecular biologist and science communicator, "Debunk the Funk"; Nirav D. Shah, Principal Deputy Director of the U.S. Centers for Disease Control and Prevention.

Where: Zoom

Cost: Free

Sponsor: Poynter, the U.S. Department of Health and Human Services, and the Risk Less campaign

More Info

 

Thu, Oct 3 - Now that AI Can Talk: Making Sense of the New AI Voice Capabilities

What: This webinar will equip you with the knowledge and strategies needed to confidently incorporate AI voice technologies into your instructional design practice. We'll explore best practices for maintaining authenticity and engagement when using AI-generated voices, discuss how to select the right AI voice tool for your specific needs, and address concerns about the impact on human voice actors in the industry. By the end of this session, you'll be prepared to make informed decisions about integrating AI voice capabilities into your learning solutions, balancing innovation with ethical considerations.

Who: Margie Meacham, Founder and Chief Freedom Officer, Learningtogo.ai

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Training Magazine Network

More Info

 

Thu, Oct 3 - Legal Issues in News-Academic Partnerships

What: A discussion of legal issues, liability, and more! This event is perfect for folks starting and expanding student reporting programs in partnership with local outlets.

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: The University of Vermont Center for Community News and Student Press Law Center

More Info

 

Thu, Oct 3 - AI to Streamline Journalism Workflows

What: New platforms are summarizing important proceedings and digging through data to help journalists more efficiently sift through data and transcripts to pinpoint policies or patterns that could affect a community. Our panelists show you the tools to streamline your workflow and optimize resource allocation.

Who: Sáša Woodruff, Boise State Public Radio; Joe Amditis, Associate director of operations, Center for Cooperative Media; Dustin Dwyer, Reporter/Producer, Michigan Public;  Brian Mackey, Host, "The 21st Show", Illinois Public Media.

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Public Media Journalists Association

More Info

 

Thu, Oct 3 - Avoiding polarization when reporting on hot-button issues

What: In this training, you’ll learn strategies for how to cover hot-button issues without alienating or overgeneralizing segments of your community. We’ll talk about how to signal fairness and explain your work in a way that makes the coverage more accessible by people with different views on the issue.

Who: John Diedrich of the Milwaukee Journal Sentinel, who will share his fresh approach to his award-winning series on guns and how he was able to find common ground across the political spectrum.  

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Trusting News

More Info

 

Thu, Oct 3 - Case Study: How the Hearst DevHub built AI tools to improve their newsrooms’ workflow

What: The learnings, pitfalls, highlights and surprises from their nearly two years of AI development as a central editorial innovation and strategy team that collaborates with the San Francisco Chronicle, Houston Chronicle, Albany Times Union and more than a dozen other local newsrooms.

Who: Tim O’Rourke, vice president for content strategy at Hearst Newspapers; Ryan Serpico, the deputy director of newsroom AI and automation on the Hearst DevHub.

When: 2 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Online News Association

More Info

 

Thu, Oct 3 - The State & Future of AI and XR Learning

What: Transforming hands-on training with XR: Discover how immersive practice environments with personalized feedback are redefining skill development; Raising collective IQ with Generative AI: Learn how AI assistants provide real-time support in the moment of need; Escaping "pilot purgatory": Understand how to scale innovative technologies with a compelling business case that drives widespread adoption; Innovating for the future: Avoid the trap of simply automating outdated classroom models instead of reimagining L&D.

Who: Karl Kapp, Ed.D., CFPIM, CIRM, Director, Institute for Interactive Technologies, Bloomsburg University; Tony O’Driscoll, Research Fellow and Academic Director, Duke University; David Metcalf, Ph.D., Director, Mixed Emerging Technology Integration Lab, University of Central Florida; Anders Gronstedt, Ph.D., President, The Gronstedt Group.

When: 3 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: OpenSesame

More Info

 

Thu, Oct 3 - Making Your College Media Podcast a Reality

What: Building a successful college media podcast requires research, organization, a specific kind of skills training, vision and a sense of adventure. We can’t cover ALL of that in a single confab, but we have ideas, and we’re going to get the conversation going.

Who: Chris Evans, the director of student media at Rice University and creator of the audio-first Illinois Student Newsroom, a nationally known model for training students to produce NPR-quality news.

When: 4 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: College Media Association

More Info

11 Interesting Quotes about AI & Academic Research

Our findings suggest that AI tools are not yet ready to take on the task of editing academic papers without extensive human intervention to generate useful prompts, evaluate the output, and manage the practicalities. - Science Editor

If AI-generated papers flood the scientific literature, future AI systems may be trained on AI output and undergo model collapse. This means they may become increasingly ineffectual at innovating. - The Conversation

In a set of 300 fake and real scientific papers, the AI-based tool, named 'xFakeSci', detected up to 94 per cent of the fake ones. - Deccan Herald

People will say, I have 100 ideas that I don’t have time for. Get the AI Scientist to do those. - Nature

There are signs that AI evaluations of academic papers could be corrupting the integrity of knowledge production. Up to 17 percent of reviews submitted to prestigious AI conferences in the last year were substantially written by large language models (LLMs), a recent study estimated. - Chronicle of Higher Ed

Google just created a version of its search engine free of all the extra junk it has added over the past decade-plus. All you have to do is add udm=14 to the search URL. - Tedium
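The `udm=14` tip above is just a URL parameter, so it can be scripted. A minimal Python sketch (the parameter comes from the quote; the helper function name is hypothetical):

```python
from urllib.parse import urlencode

def web_only_search_url(query: str) -> str:
    """Build a Google search URL with udm=14, which requests the
    plain 'Web' results view without the AI-generated extras."""
    return "https://www.google.com/search?" + urlencode({"q": query, "udm": "14"})

# Example: a web-only search for "vector databases"
print(web_only_search_url("vector databases"))
# https://www.google.com/search?q=vector+databases&udm=14
```

The same parameter can be added by hand to any Google search URL in the address bar.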

It’s possible to switch back to an AI-free search experience. Google has added a new Web tab to its search engine page at the same time as introducing these new AI features. You can configure this kind of web search as the default. - PopSci

In a 2023 Nature survey of more than 1,600 scientists, almost 30% said that they had used generative AI tools to help write (academic) manuscripts. - Nature

The highest-profile research is heavily influenced by cultural forces and career incentives that are not necessarily aligned with the dispassionate pursuit of truth. To get your research published in high-impact journals it helps enormously not to challenge the predominant narrative. Scientific narratives can become entrenched and self-reinforcing. And that’s where we are in climate science. - Chronicle of Higher Ed

How big is science’s fake-paper problem? An unpublished analysis shared with Nature suggests that over the past two decades, more than 400,000 research articles have been published that show strong textual similarities to known studies produced by paper mills. - Nature

The Chinese Academy of Sciences (CAS), the country's top science institute, on Tuesday published new guidelines on the use of artificial intelligence (AI) in scientific research, as part of its efforts to improve scientific integrity and reduce research misconduct, such as data fabrication and plagiarism. - Global Times

26 Articles about Politics & AI

Half of U.S. states seek to crack down on AI in elections – Axios

No people, no problem: AI chatbots predict elections better than humans – Semafor

Sophistication of AI-backed operation targeting senator points to future of deepfake schemes – Associated Press  

US intel says AI is boosting, but not revolutionizing, foreign efforts to influence the 2024 elections - CNN

Rethinking ‘Checks and Balances’ for the A.I. Age – New York Times

AI Could Still Wreck the Presidential Election – The Atlantic 

How A.I., QAnon and Falsehoods Are Reshaping the Presidential Race - New York Times

Uncle Sam wants to know: What can your country do for AI? – Semafor

California lawmakers approve legislation to ban deepfakes, protect workers and regulate AI – ABC News

AI Regulation Is Coming. Fortune 500 Companies Are Bracing for Impact. – Wall Street Journal  

Harris will use human Donald Trump stand-ins, not AI, for debate prep – Semafor

Police officers are starting to use AI chatbots to write crime reports. Will they hold up in court? – Associated Press

Breaking Down Global Government Spending on AI – Enterprise AI 

How Innovative Is China in AI? – Information Technology & Innovation Foundation

AI researchers call for ‘personhood credentials’ as bots get smarter – Washington Post 

How AI-generated memes are changing the 2024 election – NPR  

States are writing their own rules for AI in health care – Axios

Political consultant fined $6M for using AI to fake Biden’s voice in robocalls to voters – New York Post 

AI enters politics: 3 Pa. House candidates used ChatGPT to shape voters guide responses – Lancaster Online

Israel establishes national expert forum to guide AI policy and regulation – Jerusalem Post  

France appoints first AI minister amid political unrest as it aims to become global AI leader – Euro News  

Can politicians benefit from claiming real scandals are deep fakes? (video) – CNN

Self-Deception

The psychologist Ray Hyman has spent most of his life studying the art of deception. Before he entered the halls of science, he worked as a magician and then moved on to mentalism after discovering he could make more money reading palms than performing card tricks. The crazy thing about Hyman’s career as a palm reader is that, like many psychics, over time he began to believe he actually did have psychic powers. The people who came to him were so satisfied, so bowled over, he thought he must have a real gift. Subjective validation cuts both ways.

Hyman was using a technique called cold reading, in which you start with the wide-angle lens of generalities and watch the other person for cues so you can constrict the iris down to what seems like a powerful insight into the other person’s soul. It works because people tend to ignore the little misses and focus on the hits. As he worked his way through college, another mentalist, Stanley Jaks, took Hyman aside and saved him from delusion by asking him to try something new – tell people the opposite of what he believed their palms revealed. The result? They were just as flabbergasted by his abilities, if not more so. Cold reading was powerful, but even after tossing it aside he was still able to amaze. Hyman realized what he said didn’t matter as long as his presentation was good. The other person was doing all the work, tricking themselves, seeing the general as the specific.

Mediums and palm readers, those who speak for the dead or see into the beyond for cash, depend on subjective validation. Remember, your capacity to fool yourself is greater than the abilities of any conjurer, and conjurers come in many guises. You are a creature impelled to hope. As you attempt to make sense of the world you focus on what falls into place and neglect that which doesn’t fit, and there is so much in life that does not fit.

David McRaney, You are Not so Smart

Wasting Our Love

We may have a feeling of love for mankind, and this feeling may also be useful in providing us with enough energy to manifest genuine love for a few specific individuals. But genuine love for a relatively few individuals is all that is within our power. To attempt to exceed the limits of our energy is to offer more than we can deliver, and there is a point of no return beyond which an attempt to love all comers becomes fraudulent and harmful to the very ones we desire to assist.

Consequently, if we are fortunate enough to be in a position in which many people ask for our attention, we must choose those among them whom we are actually to love. This choice is not easy; it may be excruciatingly painful, as the assumption of godlike power so often is. But it must be made.

Many factors need to be considered, primarily the capacity of a prospective recipient of our love to respond to that love with spiritual growth. It is unquestionable that there are many whose spirits are so locked in behind impenetrable armor that even the greatest efforts to nurture the growth of those spirits are doomed to almost certain failure.

To attempt to love someone who cannot benefit from your love with spiritual growth is to waste your energy, to cast your seed upon arid ground. Genuine love is precious, and those who are capable of genuine love know that their loving must be focused as productively as possible through self-discipline.

M Scott Peck, The Road Less Traveled

26 Articles about the Dangers of AI

Justice Department Pushes Companies to Consider AI Risks - Wall Street Journal

Could AI Lead to the Escalation of Conflict? PRC Scholars Think So – Lawfare Media 

Police officers are starting to use AI chatbots to write crime reports. Will they hold up in court? – Associated Press  

Will A.I. Ruin the Planet or Save the Planet? – New York Times 

How Experts in China and the United Kingdom View AI Risks and Collaboration – Data Innovation  

A booming industry of AI age scanners, aimed at children’s faces - Washington Post

Why AI Risks Are Keeping Board Members Up at Night – Wall Street Journal  

Many safety evaluations for AI models have significant limitations – Tech Crunch 

There’s no way for humanity to win an AI arms race – Washington Post  

Using AI to write a fan letter – NPR

Can machine-learning algorithms distinguish truth from falsehood? – The Atlantic

A.I.’s Insatiable Appetite for Energy – New York Times  

Nicolas Cage Says He’s Terrified AI Will "Steal" His Body – Futurism 

Researcher Studying Married Men With AI Girlfriends – Futurism

A Hacker Stole OpenAI Secrets, Raising Fears That China Could, Too – New York Times

AI is not a magic wand – it has built-in problems that are difficult to fix and can be dangerous – The Conversation

First Came ‘Spam.’ Now, With A.I., We’ve Got ‘Slop’ - New York Times

AI start-up sees thousands of vulnerabilities in popular tools – Washington Post 

AI Is Helping Scammers Outsmart You—and Your Bank - Wall Street Journal

AI is exhausting the power grid. Tech firms are seeking a miracle solution. - Washington Post 

AI boyfriends from Replika and Nomi are attracting more women – Axios

Opinion: A.I.’s Benefits Outweigh the Risks - New York Times

Google’s AI Search Gives Sites Dire Choice: Share Data or Die – Bloomberg  

AI's Trust Problem – Harvard Business Review  

U.S. Army soldier charged with using AI to create child sexual abuse images – Washington Post

A student built a fusion reactor at home in just 4 weeks using $2,000 and AI - BGR

AI Definitions: Perplexity AI

Perplexity AI - A good research option among the generative AI tools, it acts like a search engine, drawing its answers from live web results (unlike ChatGPT). It automatically shows where its information came from, which makes it easier to verify than ChatGPT. Users can specify where they want the information drawn from among a few categories, such as academic sources or YouTube. Users can also upload documents as sources and ask it to rewrite prompts. It suggests follow-up questions you might not have considered. It is less useful for creative writing. In tests, it was better than other chatbots at summarizing passages, providing information on current events, and writing code. Its speed and accuracy in processing large volumes of data make it very useful to data scientists building advanced predictive models. Free. Video tutorial here.

More AI definitions here.

18 Articles about AI & Academic Scholarship

Do AI models produce more original ideas than researchers? - Nature

Strengths, Weaknesses, Opportunities, and Threats: A Comprehensive SWOT Analysis of AI and Human Expertise in Peer Review – Scholarly Kitchen

How Are AI Chatbots Changing Scientific Publishing? – Science Friday

New academic AI guidelines aim to curb research misconduct – Global Times

Generative AI-assisted Peer Review in Medical Publications: Opportunities Or Trap – JRIM Publications

GPT-fabricated scientific papers on Google Scholar: preempting evidence manipulation – Harvard

AI Editing: Are We There Yet? – Science Editor

AI tool claims 94% accuracy in telling apart fake from real research papers – Deccan Herald  

AI firms must play fair when they use academic data in training – Nature

AI Scientists Have a Problem: AI Bots Are Reviewing Their Work – Chronicle of Higher Ed

A list of more than 500 papers with clear evidence of generative AI use - Academ-AI

Is AI my co-author? The ethics of using artificial intelligence in scientific publishing – Taylor & Francis Online 

Is ChatGPT a Reliable Ghostwriter? – The Journal of Nuclear Medicine

A new ‘AI scientist’ can write science papers without any human input. Here’s why that’s a problem – The Conversation

Could science be fully automated? A team of machine-learning researchers has now tried. - Nature

How AI tools help students—and their professors—in academic research – Fast Company  

AI-Generated Junk Science Research a Growing Problem, Experts Say – PYMNTS  

Did a criminal Russian academic paper mill use AI to plagiarize a BYU professor and his student? – Deseret News