AI Definitions: Transhumanism

Transhumanism - A philosophical movement that advocates using artificial intelligence and science to unlock human potential, with the goal of overcoming biological limitations and combating aging and illness to achieve immortality. This might be achieved through humans merging with machines or uploading human consciousness into digital realms. In effect, transhumanism seeks to redefine what it means to be human. In 1957, Julian Huxley summarized the term as “man remaining man, but transcending himself, by realizing new possibilities of and for his human nature.” Critics warn that this effort could erode the very qualities that define humanity, such as empathy, vulnerability, and shared experience, while exacerbating social inequalities.

What are you willing to give up sleep for?

Our sleep habits both reveal and shape our loves. A decent indicator of what we love is that for which we willingly give up sleep.

My willingness to sacrifice sleep reveals less noble loves. I stay up later than I should, drowsy, collapsed on the couch, vaguely surfing the internet, watching cute puppy videos. Or I stay up trying to squeeze more activity into the day to pack it with as much productivity as possible. My disordered sleep reveals a disordered love, idols of entertainment or productivity.

My willingness to sacrifice much-needed rest and my prioritizing amusement or work over the basic needs of my body and the people around me reveal that these good things—entertainment and work—have taken a place of ascendancy in my life.

Tish Warren, Liturgy of the Ordinary

25 Webinars this week about AI, Journalism & Media

Mon, Mar 23 - Wikipedia Edit-a-thon: Amplifying Women’s Voices on Financial Independence

What: Participants will edit existing Wikipedia entries and create new articles using a curated worklist of women who helped change laws, contributed new research, created new networks, and ultimately, bolstered economic independence for women. New editors are welcome and will receive an introduction to Wikipedia editing.

Who: Smithsonian curator Rachel Seidman; Ariel Cetrone of Wikimedia DC.

When: 11 am, Eastern

Where: Zoom

Cost: Free

Sponsor: Smithsonian

More Info

 

Mon, Mar 23 - Social Media Marketing Strategy for Small Business

What: You’ll learn how to build a clear, sales-focused social media marketing strategy that actually converts. This is not a theory session. By the end, you will have created a practical, written plan you can immediately use in your business.

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Small Business Development Center, Kutztown University

More Info

 

Tue, Mar 24 - Branding 101

What: Join us for a collaborative virtual workshop where we'll explore the key elements of effective branding: what you want to be known for, how you want customers to feel when they interact with your business, and how to create consistency across all touchpoints. We'll connect these pieces back to your business goals, so your brand becomes a tool for growth, not just decoration.  

Who: Jordan Hanna Gray, SBDC Advisor.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Virginia Small Business Development Center

More Info

 

Tue, Mar 24 - How journalism collaboratives can stay safe

What: Learn from experts about how to safely practice journalism and prepare for and respond to evolving safety challenges.

Who: Jeff Belzil, security director at the International Women’s Media Foundation.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Collaborative Journalism Resource Hub, which is housed at the Center for Cooperative Media

More Info

 

Tue, Mar 24 - Why Real Journalists Are Better Than AI

What: We’ll discuss the growing presence of AI in news copy and why so many publications are turning to machines to do the work that was once done by people. We’ll look at what this has done for the quality of story production.  And we’ll discuss how journalists can stand out in a sea of AI slop, why human journalists are more important than ever, and how to educate your audience and leadership about journalists’ value over AI.

Who: Jonathan Maze, editor-in-chief of Restaurant Business at Informa Connect, and Greg Friese, MS, NRP, digital content strategy leader.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: American Society of Business Publication Editors

More Info

 

Tue, Mar 24 - How To Automate With AI

What: Walk through the basics of AI-powered automation using Make, with practical examples from my real ministry work. You’ll see how to use AI to handle tasks that take up far too much time. By the end of the session, you will have a clear, practical understanding of how automation works and the confidence to start building simple automations for your own ministry context.

Who: Rob Laughter, who helps lead the creative team at The Summit Church in Raleigh, NC.

When: 2 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: AI for Church Leaders

More Info

 

Wed, Mar 25 - Intellectual Property 101

What: We break down the IP framework (trademarks, patents, trade secrets and copyrights) that every founder needs to know.

Who: Sima S. Kulkarni, Duane Morris.

When: 10 am, Eastern

Where: Zoom

Cost: Free

Sponsor: Small Business Development Center at Temple University

More Info

 

Wed, Mar 25 - Der Spiegel Crossmedia: Wins, Misses, and Lessons Learned 

What: How Der Spiegel in Germany is reaching younger audiences. We'll have an honest conversation about what worked, what didn't, and what those experiments reveal about serving young audiences.

Who: Aleksandra Janevska, Deputy Lead of Crossmedia Unit, Der Spiegel.

When: 10 am, Eastern

Where: Zoom

Cost: Free

Sponsor: International News Media Association

More Info

 

Wed, Mar 25 - Creating value for a sustainable future

What: We will explore: Why community connection is a structural advantage driving trust, engagement and long-term viability; How uniquely local utility outperforms commoditized news, particularly in underserved communities; Why reader revenue is a signal as much as a funding source; What sustainable U.S. outlets consistently get right, regardless of model or market

Who: George Adelman, Director and Head of Partnerships, FT Strategies; Angilee Shah, CEO and Editor in Chief, Charlottesville Tomorrow; Cheryl Phillips, Founder, Big Local News at Stanford.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: FT Strategies

More Info

 

Wed, Mar 25 - Fun and Games with Copyright 

What: This workshop will introduce Copyright: the Card Game, a fun and interactive method of covering the basics of copyright and how they apply to faculty, students and the classroom. Participants will learn how the game was developed, and have the opportunity to play.

Who: Paul Bond of SUNY Broome Community College, one of the developers of the game and a librarian in the Southern Tier of New York.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Media Education Lab

More Info

 

Wed, Mar 25 - AI for Good: Secure, Smart, High-Impact AI for Nonprofits

What: This session will cut through the noise and provide a practical, responsible roadmap for using AI to expand impact while protecting data, reputation, and community relationships.

Who: Robert Friend, Fundraising Specialist at Eventgroove.

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Nonprofit Tech for Good

More Info

 

Wed, Mar 25 - Scaling AI Agents: Breaking the Inference Memory Wall Across Compute, Storage and Networking

What: We examine how Supermicro's accelerated computing and all‑flash storage servers, combined with WEKA’s Augmented Memory Grid software, transform inference memory into a scalable, distributed resource.

Who: Allen Liu, Project Manager, Supermicro; Val Bercovici, Chief AI Officer, WEKA; Awanish Verma, Director, Product Management, AMD; Wendell Wenjen, Sr. Director of Marketing, Storage Solutions, Supermicro.

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: TechTarget

More Info

 

Wed, Mar 25 - Crisis Communications: Who Is Telling Your Story?

What: This session explores the fundamentals of effective crisis communications for public safety and government agencies. Participants will learn how to prepare for high-stakes situations, manage messaging during rapidly evolving incidents, and communicate with transparency and professionalism when public attention is at its highest.

When: 1 pm, Eastern

Where: Zoom

Cost: $49

Sponsor: TOC Public Relations

More Info

 

Wed, Mar 25 - AI Impact Hour for Nonprofits

What: In this session, you’ll learn how to: Streamline communication and content creation; Organize information and reduce repetitive tasks; Support fundraising and outreach with beginner-friendly tools.

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: TechSoup

More Info

 

Wed, Mar 25 - Teaching the Ethics of Advertising

What: We’ll explore an approach to advertising literacy education that takes an ethics- and systems-approach to analyzing digital ads.

Who: Michelle Ciccone, a PhD Candidate in the Department of Communication at the University of Massachusetts Amherst, and a former K-12 technology integration specialist; Cecilia Yuxi Zhou is an assistant professor in the Academy for Educational Development and Innovation at the Education University of Hong Kong.

When: 7 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Media Education Lab

More Info

 

Thu, Mar 26 - Detecting AI-Generated Content – Updated Tools and Techniques

What: An updated version of a guide published by Global Investigative Journalism Network in 2025. We will introduce new resources, tools, and investigative methods that journalists can use to identify AI-generated images.

Who: Henk van Ess, a leading expert in open source intelligence and digital verification.

When: 10 am, Eastern

Where: Zoom

Cost: Free

Sponsor: Global Investigative Journalism Network

More Info

 

Thu, Mar 26 - Restoring Trust in Science: Storytelling, AI, and Integrity in Scholarly Publishing

What: This webinar brings together leading voices to examine how trust can be rebuilt across scientific communication and the publication ecosystem. Our expert panelists will explore three critical challenges: storytelling and public engagement; AI in peer review; malfeasance and integrity.

Who: Michele Springer, Deputy Director of Medical Editing at Omnicom Health Medical Communications; Holden Thorp, Editor-in-Chief of Science; Ivan Oransky, MD, Co-founder of Retraction Watch and Executive Director, The Center For Scientific Integrity; Megan Ranney, Dean, Yale School of Public Health; Steve Smith, DPhil, Independent Consultant, STEM Knowledge Partners.

When: 10 am, Eastern

Where: Zoom

Cost: Free

Sponsor: International Society for Medical Publication Professionals

More Info

 

Thu/Fri, Mar 26/27 - SkillsFest26

What: Topics include: FOIAs, The First Amendment, Algorithms, Pitches, Reporting, Investigation, Ethics, Solutions Journalism, Rural communities, Headlines, Newsroom rights, AP Style, Immigration coverage, Conflicts of Interest, Backgrounding, Copyright, Misinformation, Resilient News teams, Covering Suicide, Design, Criminal justice, Grant Writing, Using AI.

Who: Professional journalists and experts.

When: Thursday, 1 pm, Eastern through Friday, 8:30 pm, Eastern.

Where: Zoom

Cost: Free

Sponsor: Society of Professional Journalists

More Info

 

Thu, Mar 26 - Trump and Higher Ed: The Latest

What: Audience Q&A

Who: Sarah Brown, The Chronicle’s news editor; Rick Seltzer, author of the Daily Briefing newsletter.

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Chronicle of Higher Education

More Info

 

Thu, Mar 26 - An Intro to the Retraction Watch Research Accountability Reporting Fellowship

What: The application process, and a brief primer on how to cover issues of scientific integrity at your nearby institutions.

Who: Retraction Watch co-founder Ivan Oransky; Stephanie M. Lee, senior writer at The Chronicle of Higher Education.

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsors: Retraction Watch & The Open Notebook

More Info

 

Thu, Mar 26 - The Future of Security-Focused AI

What: A practical session for IT leaders, chief data officers, and anyone responsible for safeguarding public‑sector data.  We’ll break down what modern cloud backup and recovery look like and how security‑focused AI is helping agencies stay ahead of threats and recover faster.

Who: Vishal Chaudhry, Chief Data Officer, Washington State Health Care Authority; Jennifer Franks,  Director, Center for Enhanced Cybersecurity, Government Accountability Office; Jeff Reichard, Vice President, Solution Strategy, Veeam.

When: 2 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: GovLoop

More Info

 

Thu, Mar 26 - Inside Nonprofit Local News: Careers, Pathways, and Possibilities

What: An inside look at how the field works, where it’s growing and the opportunities ahead.

When: 3 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: American Journalism Project

More Info

 

Thu, Mar 26 -  Start an AI-Native Business: Informational Session

What: The start of an AI series that walks entrepreneurs step by step through creating an AI-native business. In this session, we will run through the program information, talk about what makes an AI-native business, and discuss how to construct and integrate AI into each area of your business.

When: 6 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Small Business Development Center, Widener University

More Info

 

Fri, Mar 27 - The Economics of News in 2026

What: This webinar aims to teach news leaders worldwide how to reinvent themselves to best serve the public. The panelists offer their unique perspectives on how the news industry must evolve to thrive in the age of AI.

Who: Experts from the University of Maryland’s Philip Merrill College of Journalism and Robert H. Smith School of Business team up with industry leaders

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Robert H. Smith School of Business at the University of Maryland

More Info

 

Fri, Mar 27 - Copyright Law and Preservation, Conservation and Digitization of Film and Video

What: Our experts will unpack copyright issues affecting conservation, preservation and digitization. Specifically, the panel will review the status of the law and the status of best practices in libraries, archives and museums.

Who: Jillian Borders, Head of Preservation at UCLA Film and Television Archive; Eric Harbeson, Scholarly Communications and Copyright Strategist for Authors Alliance.

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Open Copyright Education Advisory Network (OCEAN)

More Info

Intrinsically lovable

No sooner do we believe that God loves us than there is an impulse to believe that he does so, not because he is love, but because we are intrinsically lovable. But then, how magnificently we have repented, so we next offer our own humility to God’s admiration. Surely, he’ll like that. If not that, our clear-sighted and humble recognition that we still lack humility. Thus, depth beneath depth and subtlety within subtlety, there remains some lingering idea of our own, our very own, attractiveness.

It is easy to acknowledge, but almost impossible to realize for long, that we are mirrors whose brightness, if we are bright, is wholly derived from the sun that shines upon us. Surely we must have a little – however little – native luminosity?

We want to be loved for our cleverness, beauty, generosity, fairness, usefulness. The first hint that anyone is offering us the highest love of all is a terrible shock.

CS Lewis, The Four Loves

AI Literacy

AI literacy does not require waiting for a formal training program. A useful starting point is developing what researchers describe as output skepticism — the habit of asking, for any AI-generated result, whether the system could plausibly have reached that conclusion incorrectly and, if so, what the downstream consequences would be. Effective AI literacy is not about mastering the tool — it is about knowing where the tool ends and your own judgment begins. -JD Supra

Just Saying No isn't Easy

“The capacity of AI is so endless that it can be really hard to just say no and stop whatever the next improvement is that you want. As a perfectionist, that often can result in not knowing when to stop. The next best thing is possible, so, often, you end up spending more time writing the perfect workflow and telling AI what to do." - Jack Downey, Head of Strategy, Operations and Product at Webster Pass Consulting, quoted by CBS News

AI Definitions: Model Context Protocol (MCP)

Model Context Protocol (MCP) - This server-based open standard operates across platforms to facilitate communication between LLMs and tools like AI agents and apps. Developed by Anthropic and embraced by OpenAI, Google and Microsoft, MCP can make a developer's life easier by simplifying integration and maintenance of compliant data sources and tools, allowing them to focus on higher-level applications. In effect, MCP is an evolution of RAG. For example, MCP allows an AI model to talk to Excel or PowerPoint, executing tasks autonomously.
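Under the hood, MCP messages are JSON-RPC 2.0. As a rough sketch of what "an AI model talking to a tool" looks like on the wire (the `tools/call` method name follows the MCP specification; the tool name and arguments below are hypothetical examples, not part of any real server):

```python
import json

def make_tool_call(request_id, tool_name, arguments):
    """Build a JSON-RPC 2.0 request in the shape MCP uses for tool calls.

    MCP clients send messages like this to an MCP server, which runs the
    named tool and returns a JSON-RPC response with the result.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical tool name and arguments, for illustration only.
msg = make_tool_call(1, "spreadsheet.read_range",
                     {"sheet": "Q3", "range": "A1:C10"})
parsed = json.loads(msg)
```

Because every compliant server accepts the same message shape, a developer integrates the protocol once instead of writing a custom connector per tool, which is the maintenance saving the definition describes.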

More AI definitions

Humans — not AI — are to blame for deadly Iran school strike

According to former military officials and people familiar with aspects of the bombing campaign in Iran, the thousands of people who gather intelligence and analyze satellite photos to build massive target lists ahead of potential conflicts with foreign adversaries are to blame for the deadly Iran school strike. The error was one that AI would be unlikely to make: US officials failed to recognize subtle changes in satellite imagery, while human intelligence analysts missed publicly available information about a school located inside the Revolutionary Guard compound. -Semafor

20 Recent Articles about AI & Journalism

Notes on RISJ’s AI and the Future of News symposium - Harvard’s Nieman Lab

How Journalists Can Make AI Work for Them - Columbia Journalism Review

A lot of journalism folks are offering editing advice as Grammarly’s AI “experts” – Harvard’s Nieman Lab

Ars Technica Fires Reporter After AI Controversy Involving Fabricated Quotes – Futurism

Can AI Save Local News? – Wall Street Journal

As AI data centers scale, investigating their impact becomes its own beat – Harvard’s Nieman Lab

In This Cleveland Newsroom, AI Is Writing (But Not Reporting) the News – Columbia Journalism Review

Retraction of article containing fabricated quotations by an AI Tool - Ars Technica

Eight in ten of world’s biggest news websites now block AI training bots – Press  Gazette

The Fight over AI at McClatchy - Columbia Journalism Review

New York Times publisher: AI is using our facts without paying for them – Mediaite

Generative Engine Optimization FAQs from the ‘What Is AI Reading?’ report  - Muck Rack  

College paper fights to stop AI slop website from stealing its identity – Washington Post

How AI is reshaping the news industry - Harvard’s Nieman Lab

How will AI reshape the news in 2026? Forecasts by 17 experts from around the world – Reuters Institute

How AI is affecting me as a human (and journalist) – Axios  

Here are the news outlets that got AI right in 2025 — and the ones that got it very, very wrong – Poynter

AI Used to Promote Non-Existent Evacuation Flights From the Middle East – Bellingcat

What the ‘AI inflection point’ means for journalism – Fast Company

Privacy Concerns with AI-powered Meta Ray-Ban glasses

The things you record with your AI-powered Meta Ray-Ban glasses — yes, even those intimate moments where you think you're alone — are probably being seen by strangers. An investigation by two Swedish newspapers found that offshore Meta workers in Kenya were asked to analyze intimate and even "disturbing" videos taken by glasses wearers, including videos taken in bathrooms, footage featuring nudity and sexual content, and images showing personal information like bank accounts. It's part of a process known as data labeling, used to train AI models with footage first reviewed and annotated by humans so that the AI can understand what it's "looking" at. -Mashable

Arguments worth having

Parents who browbeat their kids into being obedient and agreeable may not be giving them the best preparation for the real world. A new study shows that encouraging teens to argue calmly and effectively against parental orders makes them much more likely to resist peer pressure.

University of Virginia researchers observed more than 150 13-year-olds as they disputed issues like grades, chores, and friends with their mothers. When researchers checked back in with the teens two and three years later, they found that those who had argued the longest and most convincingly—without yelling, whining, or throwing insults—were also 40 percent less likely to have accepted offers of drugs and alcohol than the teens who had caved quickly.

“We found that what a teen learned in handling these kinds of disagreements with their parents was exactly what they took into their peer world,” study author Joseph P. Allen tells NPR.org. The key to having a constructive debate with your kids, experts say, is listening to them attentively and rewarding them when they make a good point—even if you don’t end up reaching a mutual agreement. “Think of those arguments not as a nuisance,” Allen says, “but as a critical training ground” for wise, independent decision-making.

The Week Magazine

25 Recent Articles about AI & Legal Issues

AI Legal Platform now Valued at $5.5 Billion – AI Business  

Encyclopedia Britannica sues OpenAI over AI training – Reuters

The AI Literacy Gap is Now a Security and Compliance Liability – JD Supra

Who’s liable when AI is used for harm? – KARE-11

Grammarly is using our identities without permission – The Verge  

Thaler Is Dead. Now for the AI Copyright Questions That Actually Matter. - Copyright Lately

AI legal advice is driving lawyers bananas - Axios 

AI Deepfakes in the Workplace: A New Frontier of Employer Liability – JD Supra

A judge in New Zealand questioned the remorse of a defendant who had used A.I. to write apologies to victims and the court. - New York Times

Employers Turn to AI to Screen Candidates’ Social Media: Best Practices to Minimize Legal Threats – JD Supra

Arkansas attorney resigns after using AI to assist in case work – THV11

Interest in Law School Is Surging. A.I. Makes the Payoff Less Certain. – New York Times

AI research should always be verified, especially in court – Post Crescent

League City police to review policies after giving theft suspect an AI mug makeover – ABC13

How AI and social media sites are still collecting kids’ data despite privacy laws – Technical.ly  

ABA Highlights AI’s Challenges for Legal Education and Liability – Bloomberg

Proposed New York law would bar AI chatbots from posing as lawyers, allow duped users to sue – Reuters

What Was Grammarly Thinking? – The Atlantic

Legal advocates object to bill to allow AI interpretation in court – Wisconsin Public Radio

Federal Court Rules Some AI Chats Are Not Protected by Legal Privilege – Crowell Legal

White House puts red state AI laws under scrutiny – Axios

AI Legal Compliance for Law Firms: What Lawyers Need to Know in 2026 – JD Supra

A Long-Running AI Copyright Question Gets an Answer as Supreme Court Stays Mum – CNET

DOJ attorney in Raleigh accused of fake legal arguments, prompting warning about AI from prosecutor - WRAL

AI pilot program in L.A. County courts will help judges craft rulings in some cases – LA Times

AI Definitions: Machine Learning

Machine Learning (ML) - This type of AI can spot patterns in data sets and then improve what it can do on its own, making predictions or decisions. This process evolves and the ML adapts as it is exposed to new data, improving the output without explicit human programming. An example would be algorithms recommending ads for users, which become more tailored the longer the system observes a user’s habits (clicks, likes, time spent, etc.). A developer of a ML system creates a model and then “trains” it by providing it with many examples. Data scientists then combine ML with other disciplines (like big data analytics and cloud computing) to solve real-world problems. However, the results are limited to probabilities, not absolutes. It doesn’t reveal causation. A subset of “narrow AI,” ML is an alternative approach to symbolic artificial intelligence, and it is better at tasks like spotting faces and recognizing voices. Machine learning can be divided into four types: supervised, unsupervised, semi-supervised, and reinforcement learning. A clever computer program can be considered AI if it can mimic human-like behavior. However, the computer system is not machine learning unless its parameters are automatically informed by data without human intervention. Video: Introduction to Machine Learning
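The supervised case in the definition above can be sketched in a few lines. This toy example uses a nearest-neighbor rule (the data points and labels are invented for illustration): the model "learns" purely by memorizing labeled examples, and predictions come from the data rather than hand-written rules.

```python
def train_1nn(examples):
    # "Training" for 1-nearest-neighbor is just memorizing labeled examples.
    return list(examples)

def predict(model, point):
    # Predict the label of the closest memorized example (squared distance).
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda ex: dist(ex[0], point))[1]

# Invented toy data: (hours watched, prior clicks) -> ad behavior.
data = [((1, 0), "skip"), ((2, 1), "skip"),
        ((8, 6), "click"), ((9, 5), "click")]
model = train_1nn(data)
print(predict(model, (7, 6)))  # nearest example is (8, 6) -> "click"
```

Adding more labeled examples to `data` changes future predictions with no code changes, which is the sense in which the system's behavior is "informed by data without human intervention."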

AI Bioweapons

Microsoft researchers selected 72 different proteins that are subject to legal controls, such as ricin, a plant-derived toxin already used in several terrorist attacks. Using specialized AI protein design tools, they came up with more than 70,000 DNA sequences that would generate variant forms of these proteins. Computer models suggested that at least some of these alternatives would also be toxic. The researchers asked four suppliers of biosecurity screening systems used by DNA synthesis labs to run these sequences through their software. The tools failed to flag many of these sequences as problematic. Their performance varied widely. One tool flagged just 23% of the sequences. Some DNA vendors, accounting for perhaps 20% of the market, don’t screen their orders at all. -Science.org