The greatest mistake
The greatest mistake you can make in life is to be continually fearing you will make one. -Elbert Hubbard
AI Washing - A company's misleading claims about its use of AI. It's a marketing tactic that exaggerates how much AI technology a product actually uses so the company appears more advanced than it is. AI washing takes its name from greenwashing, in which companies make false or misleading claims about their positive impact on the environment.
More AI definitions here.
What to know about the rise of AI deepfakes – CBS News
High School Is Becoming a Cesspool of Sexually Explicit Deepfakes – The Atlantic
Sophistication of AI-backed operation targeting senator points to future of deepfake schemes – Associated Press
Due to AI fakes, the “deep doubt” era is here - ArsTechnica
Taylor Swift and the Power of the AI Backlash – New York Magazine
How AI Is Helping ‘Fake Candidates’ Land Jobs – Wall Street Journal
A.I. Can Now Create Lifelike Videos. Can You Tell What’s Real? - The New York Times
FBI busts musician’s elaborate AI-powered $10M streaming-royalty heist - ArsTechnica
Educational resource page with information and tips about deepfakes - Microsoft
5 Best Deepfake Detector Tools & Techniques – Unite
U.S. Army soldier charged with using AI to create child sexual abuse images – Washington Post
New McAfee tool can detect AI-generated audio - Axios
See why AI detection tools can fail to catch election deepfakes – Washington Post
Google's Nonconsensual Explicit Image Problem is Getting Worse – Wired
Something fascinating is wrong with the eyes in deepfakes – Futurism
Bill to Outlaw AI Deepfakes Backed by SAG-AFTRA – Variety
As AI entrenches itself in the political world, discerning real from fake is critical – NBC Boston
The FCC wants the AI voice calling you to say it's a deepfake – Tech Radar
California lawmakers approve legislation to ban deepfakes, protect workers and regulate AI - ABC News
YouTube is developing AI detection tools for music and faces, plus creator controls for AI training – Tech Crunch
Scammers now using deepfakes to commit title fraud – NBC 6 South Florida
Many political AI deepfakes are totally cartoonish, but the technology is still shaping the election – Fortune
AI-generated deepfakes are a growing threat to consumer identity – CBS 8
What the US can learn from the role of AI in other elections – MIT Tech Review
A small but detailed 2015 study of young adults found that participants were using their phones five hours a day, at 85 separate times. Most of these interactions were for less than 30 seconds, but they add up. Just as revealing: The users weren’t fully aware of how addicted they were. They thought they picked up their phones half as much as they actually did. Whether they were aware of it or not, a new technology had seized control of around one-third of these young adults’ waking hours.
Just look around you—at the people crouched over their phones as they walk the streets, or drive their cars, or walk their dogs, or play with their children. Observe yourself in line for coffee, or in a quick work break, or driving, or even just going to the bathroom. Visit an airport and see the sea of craned necks and dead eyes. We have gone from looking up and around to constantly looking down.
Andrew Sullivan, I Used to Be a Human Being
Who: ABC News anchor Linsey Davis and Melba Tolliver, author of Accidental Anchorwoman: A Memoir of Chance, Choice, Change, and Connection (2024). In 1967, by accident, Tolliver became the first Black American to anchor network news.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Easton Book Festival
What: Have you played with Google’s NotebookLM AI model, and specifically explored how it can be used to create very realistic, human-sounding audio podcast conversations in minutes from collections of articles, books, or other media? Join us for an engaging virtual hour of exploration with Google’s NotebookLM platform and the ideas of Steven Johnson.
Who: Steven Johnson, author of 13 books as well as numerous television programs, videos, and podcasts about innovation; Wesley Fryer, author of several books on technology integration and multimedia production.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Media Education Lab
What: Learn how professional fact-checkers avoid falling for misinformation, whether it’s generated by humans or AI.
Who: Presented by Rachel Roberson, Senior Program Manager, Education Content, KQED; Rik Panganiban, Program Manager, Online Learning, KQED.
When: 5 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: edWebinars
What: How journalists confront misinformation, conspiracy theories, and misleading ways of communicating scientific ideas. How bias manifests in scientific research, from ideation and methodology to observation, conclusions, and discussion.
Who: OpenMind Magazine Editors-in-Chief Corey Powell and Pamela Weintraub
When: 6 pm, Eastern
Where: Zoom (and in-person)
Cost: Free
Sponsor: Pulitzer Center
What: International freedom of expression standards that provide particular protection to journalists, with a focus on whistleblower protections, protection of sources, and anti-SLAPP measures.
When: 8:30 am, Eastern
Where: Zoom
Cost: Free
Sponsor: Centre for Law and Democracy
What: This webinar aims to demystify artificial intelligence (AI) by demonstrating that emerging technological tools can be strategically leveraged to enhance the evaluation process. The session will delve into the ethical application of AI within library evaluation practices, focusing on practical strategies for integrating AI responsibly as a tool, assistant, and resource.
Who: Jennifer Pacheco Villalobos, Ph.D. Assistant Professor, Claremont Graduate University
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Research Institute for Public Libraries
What: Led by journalists from Votebeat, this webinar will help local reporters explain how their state plans to certify election results.
Who: Jen Fifield, senior reporter at Votebeat; Hayley Harding, reporter at Votebeat; Carter Walker, Votebeat’s reporter in Pennsylvania; Carrie Levine, Votebeat's managing editor.
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Center for Cooperative Media
What: Be prepared to take on any legal challenges this academic year may bring.
Who: SPLC lawyers
When: 4 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Student Press Law Center
What: Topics covered will include: the key differences between social networks, target markets, social media goals, content strategy, ad strategy, measuring results, and must-have social media tools.
Who: Ray Sidney-Smith, Digital Marketing Strategist, Hootsuite Global Brand Ambassador, Google Small Business Advisor for Productivity, and Managing Director of W3C Web Services.
When: 10 am, Eastern
Where: Zoom
Cost: Free
Sponsor: Small Business Development Center, Widener University
What: This webinar will introduce you to mental health specialists explaining modern approaches and understanding of mental health, the causes and effects of stigma and discrimination, and your role as a journalist in overcoming both. By the end of this session, you should feel better equipped to talk about and report on mental health issues.
Who: Alexandra Latham, Communications Manager, Mental Health Europe; Mar Cabra, The Self Investigation Foundation; Guadalupe Morales, Vice president, ENUSP (European Network of (Ex-)Users & Survivors of Psychiatry); Sue Baker OBE, Director, Changing Minds Globally.
When: 4 am, Eastern
Where: Zoom
Cost: Free
Sponsor: European Commission
What: Zamaneh Media, a small Dutch-based newsroom focused on Persian-language content, embraced AI to overcome challenges in news production and translation. The newsroom developed two AI-driven tools that significantly streamlined its workflows. Despite a team of just two people with limited technical backgrounds, it reduced the time spent on routine tasks like newsletter creation and translating long Persian articles into English. Learn how they built these tools during this session.
Who: Zamaneh Media representatives
When: 11 am, Eastern
Where: Zoom
Cost: Free
Sponsor: Online News Association
What: The launch of a new set of guidelines outlining three practical strategies investigative journalists can use to anticipate and respond to legal threats, even when operating in challenging environments. We’ll also hear from three experienced international reporters who have faced these threats and used these strategies to keep reporting safely.
Who: An expert panel featuring three displaced journalists from Latin America and the Middle East, moderated by Vance Center Staff Attorney Carla Pierini Borenstein.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Cyrus Vance Center for International Justice
What: This workshop is designed for anyone who wants to harness the power of AI to optimize their workflows. We’ll delve into the world of AI integration, teaching you how to connect ChatGPT with other applications and automate tasks using Zapier and IFTTT. No prior coding experience is required.
When: 12 noon, Eastern
Where: Zoom
Cost: Free
Sponsor: Small Business Development Center, Widener University
What: How the U.K. is investing in AI’s economic potential, navigating the balance between innovation and risk and shaping the future of regulation.
Who: Peter Kyle MP, U.K. secretary of state for science; Wayve CEO Alex Kendall
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Washington Post, AWS
What: Discover how AI can transform the entire course development process, making it faster and more efficient overall. In this webinar we explore the latest AI tools and techniques that streamline the instructional design process, from content analysis to writing and storyboarding. You will learn how to leverage AI to produce high-quality, engaging courses with reduced development time. Through practical examples and hands-on activities, you will gain the skills to integrate AI into your instructional design workflow, enhancing both speed and quality. By the end of the session, you'll be equipped with the knowledge to harness AI for creating impactful eLearning experiences efficiently.
Who: Garima Gupta, Founder & CEO, Artha Learning Inc.
When: 3 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Training Magazine Network
What: This webinar will dive into the world of shell companies, exploring how investigative journalists can unravel these complex networks. Experts from the International Consortium of Investigative Journalists (ICIJ) and the Organized Crime and Corruption Reporting Project (OCCRP) will share useful resources to help journalists navigate this challenging field, focusing both on the strategy of the reporting and the most relevant tips and tools.
Who: Karrie Kehoe is ICIJ’s deputy head of data and research; Jan Strozyk is OCCRP’s chief data editor and co-leads OCCRP’s research and data team; The moderator is Simon Bowers, investigations editor at Finance Uncovered.
When: 10 am, Eastern
Where: Zoom
Cost: Free
Sponsor: Global Investigative Journalism Network
What: Thinking through your newsroom’s needs for guidance, parameters, pitfalls – and maybe the beginnings of an AI ethics policy.
Who: Monica Sandreczki, North Country Public Radio; Darla Cameron, Interim Chief Product Officer, Texas Tribune; Alex Mahadevan, Director of MediaWise, Poynter.
When: 12:30 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Public Media Journalists Association
What: An overview of journalists’ rights in public places; advice on navigating police restrictions during demonstrations; key information on journalists’ protections against handing over their materials or equipment, including during arrest; case studies of legal threats against U.S. journalists; and practical resources available to journalists seeking legal support.
Who: Elise Perry, Senior Legal Manager, Legal Service for Independent Media, Thomson Reuters Foundation; Claire Rajan, Partner, A&O Shearman; Alexander Bussey, Associate, A&O Shearman; Lucy Westcott, Emergencies Director, Committee to Protect Journalists.
When: 11 am, Eastern
Where: Zoom
Cost: Free
Sponsor: TrustLaw, Thomson Reuters Foundation
What: Find out about the key strategies that enable legacy print local news publishers to make a successful transition to a profitable digital-first operation. What does a successful revenue mix look like? Find out how to run a successful local news paywall, both from a content and a technical perspective. What does the future hold for local news in the US, and how can publishers make sure they are a part of it?
Who: Press Gazette editor in chief Dominic Ponsford; Chad Hussain, vice president of international partnerships for Quintype.
When: 2 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Quintype, PressGazette
What: This webinar will help newsrooms integrate AI tools into their daily operations through efficient workflows, content creation, SEO optimization, and social media engagement. You'll see specific examples in a variety of areas with prompts and results using real-world experiences from editorial teams both big and small. This webinar is perfect for journalists, editors, and newsroom managers who want to understand how AI can be a game changer for their teams, making processes more efficient while upholding editorial standards.
Who: David Arkin is the owner of David Arkin Consulting.
When: 2 pm, Eastern
Where: Zoom
Cost: $35
Sponsor: Online Campus Media
What: An overview of what journalists should know about the legal issues surrounding AI. We’ll get into some of the current court cases and their potential impact on the field, copyright issues, a look at how other fields such as the entertainment industry navigate AI issues, considerations when entering into contracts, submitting content for publication, and using technology to create content, and more.
Who: Farrah Vazquez and Chris Weathers of the media firm Davis Wright Tremaine
When: 3 pm, Eastern
Where: Zoom
Cost: Free for members ($25 for nonmembers)
Sponsor: Online News Association
What: Many teachers report feeling unequipped to engage students in conversations about the uses of predictive and surveillance technologies. In this virtual panel, The Information & Artificial Intelligence Teacher Advisory Council members will detail their experience exploring news stories about AI accountability and creating curricular tools to support educators and students eager to utilize reporting on AI as a tool for better understanding the impact of artificial intelligence in their schools and communities.
Who: The Information & Artificial Intelligence Teacher Advisory Council, a cohort of 12 teachers who developed and tested resources to introduce and engage with reporting created through the Pulitzer Center’s AI Accountability Network.
When: 6 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Pulitzer Center
How Generative AI Works – Financial Times (scroll storytelling)
Demystifying AI – Axios
5 questions about artificial intelligence, answered – Washington Post
W&M professor publishes children’s book to teach AI fundamentals – William & Mary
Shedding light on AI's black box – Axios
What exactly is an AI agent? – Tech Crunch
‘Visual’ AI models might not see anything at all - Tech Crunch
Is this AI? See if you can spot the technology in your everyday life. – Washington Post
ChatGPT and other language AIs are nothing without humans – a sociologist explains how countless hidden people make the magic – The Conversation
What Are Large Language Models (LLMs) and How Do They Work? – MakeUseOf
What Is Deep Learning? - MathWorks
Readers Have a Lot of Questions About AI. We Answer Them. – Wall Street Journal
What is AI? Everything to know about artificial intelligence – Zdnet
How AI models are getting smarter: Deep neural networks are learning diffusion and other tricks – The Economist
A good marriage is one which allows for change and growth in the individuals and in the way they express their love. –Pearl S. Buck
AI model collapse - The idea that AI can eat itself by running out of fresh data, so that it begins to train on its own output or the output of another AI. This would magnify errors and bias and make rare data more likely to be lost.
More AI definitions here.
Good questions outflank easy answers. -Nobel prize-winning economist Paul Samuelson
Can Artificial Intelligence Be Conscious? – Psychology Today
What Does It Really Mean to Learn? – The New Yorker
There’s no way for humanity to win an AI arms race – Washington Post
Three key misconceptions in the debate about AI and existential risk – The Bulletin
Is AI Really an Existential Threat to Humanity? – Mother Jones
AI Chatbot Credited With Preventing Suicide. Should It Be? – 404 Media
Who will control the future of AI? – Washington Post
The big AI risk not enough people are seeing – The Atlantic
ChatGPT and the Future of the Human Mind - Every
Here’s why AI like ChatGPT probably won’t reach humanlike understanding – Science News Explores
AI's flawed human yardstick - Axios
“AI” as shorthand for turning off our brains. (This is not an anti-AI post; it’s a discussion of how we think about AI.) – StatModeling
If we ignore AI explainability, we will be throwing ourselves to the mercy of algorithms we don’t understand. – Fast Company
Scientists Gave AI an "Inner Monologue" and Something Fascinating Happened – Futurism
Opinion: A.I.’s Benefits Outweigh the Risks – New York Times
End-of-life decisions are difficult and distressing. Could AI help? – MIT Tech Review
Generative AI is a hammer and no one knows what is and isn’t a nail – Medium
There are basically four family types that we all come from.
1 - The Traditional Family System
The old-fashioned family has a myth that “father knows best.” This family is under the control of only one member.
2 - Enmeshed Family System
The frightened family has a myth that it's “us against the world.” It is emotionally bound together and protective of itself.
3 - The Fighting Family System
The fighting family has a myth of “every man for himself.” Each member of this family is strongly individualistic, recognizing no other authority than his (or her) own.
4 - The Open Family System
The healthy family system theme is “all for one and one for all.” The open family system emphasizes the worth, dignity, and uniqueness of each individual, the importance of unconditional positive regard, and the value of positive reinforcement.
While AI can enhance individual creativity, it might do so at the expense of collective diversity and novelty in creative works. -PsyPost
The AI programs aren’t necessarily doing something no human can; they’re doing something no human can in such a short period of time. Sometimes that’s great, as when an AI model quickly solves a scientific challenge that would have taken a researcher years. Sometimes that’s terrifying, as when (they appear) capable of replacing entire production studios. -The Atlantic
“On average 30% of the time the AI models spread misinformation when asked about claims in the news. On average 29% of the time, the AI models simply refused to respond to prompts about false claims in the news over the past month. Instead, the models delivered only non-responsive responses.” -NewsGuard
While AI models are starting to replicate musical patterns, it is the breaking of rules that tends to produce era-defining songs. Algorithms ‘are great at fulfilling expectations but not good at subverting them, but that’s what often makes the best music,’ says Eric Drott, a music-theory professor at the University of Texas at Austin. How can we be more human than an AI? Produce creative work that goes beyond the expected, the predictable, the established and popular. -The Atlantic
Recent brain scans suggest we don’t need language to think. A group of neuroscientists now argue that our words are primarily for communicating, not for reasoning. "Separating thought and language could help explain why AI systems like ChatGPT are so good at some tasks and so bad at others. These programs mimic the language network in the human brain — but fall short on reasoning." - New York Times
If an LLM can be trained on 17th-century texts, it can just as easily be trained on QAnon forums, or a dataset that presupposes the superiority of one religion or political system. Use a deeply skewed bubble machine like that to try to understand a book, a movie, or someone's medical records and the results will be inherently biased against whatever — or whoever — got left out of the training material. -Business Insider
At times, A.I. chatbots have stumbled with simple arithmetic and math word problems that require multiple steps to reach a solution, something recently documented by some technology reviewers. The A.I.’s proficiency is getting better, but it remains a shortcoming. -New York Times
Try to ignore everything that is style and not substance. We should de-emphasize things like credentials, expertise, and experience, especially when they apply to something people have done before but is not so relevant for the future. Most of us are less likely to lose our jobs to AI than to reimagine our current roles while working out how to use AI to add value in different ways. Less focus on hard skills and more focus on the right soft skills.
Imperative people can have too strong a sense of responsibility. In pushing themselves to do right, they often pay the price of burnout. When others encourage them to slow down, they won’t for fear that a bad habit of laziness might develop. Or perhaps someone will be displeased. The saying, “When you want something done, ask the busiest person in town to do it” may contain a lot of truth. Especially if the busiest person in town doesn’t have the ability to say no.
Les Carter, Imperative People: Those Who Must Be in Control
Vector Embeddings Explained: A Beginner’s Guide to Powerful AI
Why vector databases are more than databases
How Perplexity AI is Transforming Data Science and Analytics
An Intuitive Guide to Integrate SQL and Python for Data Science
AI Definitions: Supervised training
Hyperspectral processing and geospatial intelligence
AI Definitions: Reinforcement Learning
Seven Common Causes of Data Leakage in Machine Learning
Understanding the Basics of Reinforcement Learning
5 Common Data Science Mistakes and How to Avoid Them
The “latest sign that quantum computing has emerged from its infancy”
Storage technology explained: Vector databases at the core of AI
Researchers looking at the quantity and quality of AI research papers show China is leading the way
The risk of war as China & Russia build arsenals of weapons that could target American satellites
8 Important Quotes About Ethical Issues Raised by AI
A new way to build neural networks that could make it easier to see how they produce their outputs
What: A discussion on the findings of the latest MFRR Monitoring Report, which recorded 756 media freedom violations in the first half of 2024. This webinar will explore key trends, including the rise of intimidation and online threats, while diving into the state of media freedom across Europe and candidate countries. The monitoring experts of the Media Freedom Rapid Response consortium will also address anti-media laws, election-related violations, and the role of governments in perpetrating these violations.
Who: Gürkan Özturan, Media Freedom Monitoring Officer, European Centre for Press and Media Freedom; Teona Sekhniashvili, Europe Network and Press Freedom Coordinator, International Press Institute; Antje Schlaf, Mapping Media Freedom Data and Development Manager, European Centre for Press and Media Freedom; Karol Łuczka, Eastern Europe Monitoring and Advocacy Officer, International Press Institute; Camille Magnissalis, Press Freedom Monitoring and Communications Officer, European Federation of Journalists; Ronja Koskinen, Press Freedom Officer, International Press Institute.
When: 8 am, Eastern
Where: Zoom
Cost: Free
Sponsor: Media Freedom Rapid Response
What: Leading experts will explore how journalists can investigate and report on efforts to undermine election certification and restrict voter access. They will provide tools for understanding the legal and political forces at play and offer insights into the complexities of election law, the role of disinformation, and how to effectively track election integrity in 2024.
Who: Justin Glawe, an independent journalist and the author of the forthcoming book “If I Am Coming to Your Town, Something Terrible Has Happened”; Carrie Levine, Votebeat’s managing editor; Nikhel Sus is deputy chief counsel at Citizens for Responsibility and Ethics in Washington (CREW); The moderator is Gowri Ramachandran, director of elections and security in the Brennan Center’s Elections and Government program.
When: 8 am, Eastern
Where: Zoom
Cost: Free
Sponsor: Global Investigative Journalism Network
What: We’ll explore ways to fight back against misinformation and disinformation during election coverage. We’ll use tools such as Google Fact-Check Explorer to track fact-checked images and stories. We’ll use reverse image search and other Google tools to check election claims. We’ll break down doctored video and audio with WatchFramebyFrame and Deepfake-o-meter. We’ll also look at the innovative Rolliapp.com to track disinformation spreaders on social channels. Participants get a handout with links to tools and exercise materials you can take to your newsroom.
Who: Mike Reilley, UIC senior lecturer and founder of JournalistsToolbox.ai.
When: 2 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: RTDNA/Google News
What: We’ll teach you practical tips and tools for extending your cause and mission via social media.
Who: Kiersten Hill, the driving force behind Firespring’s nonprofit solutions.
When: 2:30 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Firespring
What: This one-hour webinar will explore the basic principles and pillars of solutions journalism, talk about why it’s important, explain key steps in reporting a solutions story, and share tips and resources for journalists interested in investigating how people are responding to social problems. We will also explore additional resources we have on hand for your reporting, including the Solutions Story Tracker: a database of more than 15,000 stories tagged by beat, publication, author, location, and more, and a virtual heat map of what’s working around the world.
When: 6 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Solutions Journalism Network
What: Discover the unique capabilities that set Google Gemini apart from other AI models. Explore its integration with Google Search, Workspace, and other products, and see how Gemini's unique features enhance user experiences across the Google ecosystem.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Pennsylvania Small Business Development Center
What: Experts dive into the impact of AI on America’s businesses, workforce and economy.
Who: MIT economics professor David Autor; Brenda Bown, Chief Marketing Officer, Business AI, SAP; Garry Tan, President & CEO, Y Combinator.
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: The Washington Post
What: Learn to use social media to stand out from the crowd. You’ll learn a few advanced social media tips and tricks, elevate your social media presence through micro strategies and activate your advocates.
Who: Kiersten Hill, the driving force behind Firespring’s nonprofit solutions.
When: 2:30 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Firespring
What: Top journalists and researchers who battle disinformation will let you know what they’re seeing, what concerns them most, and how voters can identify and counter disinformation during the final countdown.
Who: Nina Jankowicz, co-founder of the American Sunlight Project; Roberta Braga, founder of the Digital Democracy Institute of the Americas; Tiffany Hsu, disinformation reporter for The New York Times; Brett Neely, supervising editor of NPR’s disinformation reporting team; and Samuel Woolley, University of Pittsburgh professor, disinformation researcher and author.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: PEN America
What: This panel of experts will help journalists debunk false narratives about vaccines and respiratory illnesses, find out about the common falsehoods that experts are tracking, and access reliable data and legitimate information about vaccination rates and trends in the communities journalists cover.
Who: CNN Chief Medical Correspondent Dr. Sanjay Gupta; Dr. Céline Gounder, Senior Fellow and Editor-at-Large for Public Health at KFF, creator and host of “Epidemic,” and medical contributor for CBS News; Alex Mahadevan, MediaWise Director and Poynter faculty; Dan Wilson, molecular biologist and science communicator, “Debunk the Funk”; Nirav D. Shah, Principal Deputy Director of the U.S. Centers for Disease Control and Prevention.
Where: Zoom
Cost: Free
Sponsor: Poynter, the U.S. Department of Health and Human Services, and the Risk Less. Do More. campaign
What: This webinar will equip you with the knowledge and strategies needed to confidently incorporate AI voice technologies into your instructional design practice. We'll explore best practices for maintaining authenticity and engagement when using AI-generated voices, discuss how to select the right AI voice tool for your specific needs, and address concerns about the impact on human voice actors in the industry. By the end of this session, you'll be prepared to make informed decisions about integrating AI voice capabilities into your learning solutions, balancing innovation with ethical considerations.
Who: Margie Meacham, Founder and Chief Freedom Officer, Learningtogo.ai
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Training Magazine Network
What: A discussion of legal issues, liability, and more! This event is perfect for folks starting and expanding student reporting programs in partnership with local outlets.
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: The University of Vermont Center for Community News and Student Press Law Center
What: New platforms are summarizing important proceedings and helping journalists more efficiently sift through data and transcripts to pinpoint policies or patterns that could affect a community. Our panelists will show you tools to streamline your workflow and optimize resource allocation.
Who: Sáša Woodruff, Boise State Public Radio; Joe Amditis, Associate director of operations, Center for Cooperative Media; Dustin Dwyer, Reporter/Producer, Michigan Public; Brian Mackey, Host, "The 21st Show", Illinois Public Media.
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Public Media Journalists Association
What: In this training, you’ll learn strategies for covering hot-button issues without alienating or overgeneralizing segments of your community. We’ll talk about how to signal fairness and explain your work in a way that makes the coverage more accessible to people with different views on the issue.
Who: John Diedrich of the Milwaukee Journal Sentinel, who will share his fresh approach to his award-winning series on guns and how he was able to find common ground across the political spectrum.
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Trusting News
What: The learnings, pitfalls, highlights, and surprises from the Hearst team’s nearly two years of AI development as a central editorial innovation and strategy team that collaborates with the San Francisco Chronicle, Houston Chronicle, Albany Times Union, and more than a dozen other local newsrooms.
Who: Tim O’Rourke, vice president for content strategy at Hearst Newspapers; Ryan Serpico, deputy director of newsroom AI and automation on the Hearst DevHub.
When: 2 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Online News Association
What: Transforming hands-on training with XR: Discover how immersive practice environments with personalized feedback are redefining skill development; Raising collective IQ with Generative AI: Learn how AI assistants provide real-time support in the moment of need; Escaping "pilot purgatory": Understand how to scale innovative technologies with a compelling business case that drives widespread adoption; Innovating for the future: Avoid the trap of simply automating outdated classroom models instead of reimagining L&D.
Who: Karl Kapp, Ed.D., CFPIM, CIRM, Director, Institute for Interactive Technologies, Bloomsburg University; Tony O’Driscoll, Research Fellow and Academic Director, Duke University; David Metcalf, Ph.D., Director, Mixed Emerging Technology Integration Lab, University of Central Florida; Anders Gronstedt, Ph.D., President, The Gronstedt Group.
When: 3 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: OpenSesame
What: Building a successful college media podcast requires research, organization, a specific kind of skills training, vision and a sense of adventure. We can’t cover ALL of that in a single confab, but we have ideas, and we’re going to get the conversation going.
Who: Chris Evans, the director of student media at Rice University and creator of the audio-first Illinois Student Newsroom, a nationally known model for training students to produce NPR-quality news.
When: 4 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: College Media Association
Escape the concentration camp of your own mind and become the person you were meant to be. -Auschwitz survivor Edith Eva Eger (born Sept. 29, 1927)
Talk less, say more.
Our findings suggest that AI tools are not yet ready to take on the task of editing academic papers without extensive human intervention to generate useful prompts, evaluate the output, and manage the practicalities. - Science Editor
If AI-generated papers flood the scientific literature, future AI systems may be trained on AI output and undergo model collapse. This means they may become increasingly ineffectual at innovating. - The Conversation
In a set of 300 fake and real scientific papers, the AI-based tool, named 'xFakeSci', detected up to 94 per cent of the fake ones. - Deccan Herald
People will say, I have 100 ideas that I don’t have time for. Get the AI Scientist to do those. - Nature
There are signs that AI evaluations of academic papers could be corrupting the integrity of knowledge production. Up to 17 percent of reviews submitted to prestigious AI conferences in the last year were substantially written by large language models (LLMs), a recent study estimated. - Chronicle of Higher Ed
Google just created a version of its search engine free of all the extra junk it has added over the past decade-plus. All you have to do is add udm=14 to the search URL. - Tedium
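For example, adding the parameter to an ordinary search URL looks like this (the search terms here are just an illustration): https://www.google.com/search?q=media+literacy&udm=14. The udm=14 version returns the plain list of web links without the AI-generated extras layered on top.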
It’s possible to switch back to an AI-free search experience. Google has added a new Web tab to its search engine page at the same time as introducing these new AI features. You can configure this kind of web search as the default. - PopSci
In a 2023 Nature survey of more than 1,600 scientists, almost 30% said that they had used generative AI tools to help write (academic) manuscripts. - Nature
The highest-profile research is heavily influenced by cultural forces and career incentives that are not necessarily aligned with the dispassionate pursuit of truth. To get your research published in high-impact journals it helps enormously not to challenge the predominant narrative. Scientific narratives can become entrenched and self-reinforcing. And that’s where we are in climate science. - Chronicle of Higher Ed
How big is science’s fake-paper problem? An unpublished analysis shared with Nature suggests that over the past two decades, more than 400,000 research articles have been published that show strong textual similarities to known studies produced by paper mills. - Nature
The Chinese Academy of Sciences (CAS), the country's top science institute, on Tuesday published new guidelines on the use of artificial intelligence (AI) in scientific research, as part of its efforts to improve scientific integrity and reduce research misconduct, such as data fabrication and plagiarism. - Global Times
Half of U.S. states seek to crack down on AI in elections – Axios
No people, no problem: AI chatbots predict elections better than humans – Semafor
Sophistication of AI-backed operation targeting senator points to future of deepfake schemes – Associated Press
Rethinking ‘Checks and Balances’ for the A.I. Age – New York Times
AI Could Still Wreck the Presidential Election – The Atlantic
How A.I., QAnon and Falsehoods Are Reshaping the Presidential Race - New York Times
Uncle Sam wants to know: What can your country do for AI? – Semafor
California lawmakers approve legislation to ban deepfakes, protect workers and regulate AI – ABC News
AI Regulation Is Coming. Fortune 500 Companies Are Bracing for Impact. – Wall Street Journal
Harris will use human Donald Trump stand-ins, not AI, for debate prep – Semafor
Police officers are starting to use AI chatbots to write crime reports. Will they hold up in court? – Associated Press
Breaking Down Global Government Spending on AI – Enterprise AI
How Innovative Is China in AI? – Information Technology & Innovation Foundation
AI researchers call for ‘personhood credentials’ as bots get smarter – Washington Post
How AI-generated memes are changing the 2024 election – NPR
States are writing their own rules for AI in health care - Axios
Political consultant fined $6M for using AI to fake Biden’s voice in robocalls to voters – New York Post
AI enters politics: 3 Pa. House candidates used ChatGPT to shape voters guide responses – Lancaster Online
Israel establishes national expert forum to guide AI policy and regulation – Jerusalem Post
France appoints first AI minister amid political unrest as it aims to become global AI leader – Euro News
Can politicians benefit from claiming real scandals are deep fakes? (video) – CNN
Becoming is a service of Goforth Solutions, LLC / Copyright ©2024 All Rights Reserved