AI Definitions: Training Data

Training Data – A massive amount of text is initially fed into the system to train it. The AI uses this information to create a map of relationships, which it then uses to make predictions. Giving the AI lots of data means more options, which can lead to more creative results. However, it can also make the model more vulnerable to attackers and more prone to hallucinations. Using smaller, curated, locked-down data sets makes AI models less vulnerable and more predictable, but also less creative. 

More AI definitions

AI as Scientist

An AI system wrote a paper, without human involvement, that passed peer review for a workshop at the 2025 International Conference on Learning Representations, a top-tier venue in the field of machine learning. The paper was mediocre, according to experts. But its existence marks a turning point that the scientific community is only beginning to grapple with: AI has quickly moved from assisting scientists to attempting to be one. What if one day the AI-generated papers stop being mediocre? -Scientific American

 

AI Essentials – Google (through Coursera)

AI Fluency for Educators (Anthropic)

AI Fluency for Nonprofits (Anthropic)

AI Fluency for Students (Anthropic)

AI Fluency: Framework & Foundations (Anthropic)

AI for Everyday Living: A Beginner Workshop for Older Adults (OpenAI Academy)

AI For Everyone (Coursera)

ChatGPT 101: The Complete Beginner's Guide and Masterclass (Udemy)

ChatGPT for Education 101 (OpenAI Academy)

ChatGPT for Education 102 (OpenAI Academy)

ChatGPT for Government 101 (OpenAI Academy)

ChatGPT for Government 102 (OpenAI Academy)

ChatGPT Foundations: Getting Started with AI (OpenAI Academy)

Claude 101 (Anthropic)

Claude Code in Action (Anthropic)

Coursera’s AI courses

Exploring ChatGPT in 2 hours: Practical Guide for Beginners (Udemy)

Generative AI for Data Analysts – IBM (through Coursera)

Generative AI for Data Scientists – Google (through Coursera)

Generative AI for Everyone (Coursera)

Generative AI with Large Language Models (Coursera)

Introduction to Claude Cowork (Anthropic)

Intro to Generative AI: A Beginner’s Primer on Core Concepts - Google (through Coursera)

Introduction to Generative AI (Google)

Learn how to use ChatGPT to Make Money! (Udemy)

Make Teaching Easier with Artificial Intelligence (Udemy)

Master Basics of ChatGPT & OpenAI API (Udemy)

Microsoft AI Product Manager – Microsoft (through Coursera)

Prompt Engineering for ChatGPT – Vanderbilt University (through Coursera)

Prompting with Purpose (OpenAI Academy)

Small Business Jam: Online AI Skill Lab (OpenAI Academy)

Teaching AI Fluency (Anthropic)

AI Definitions: Tokens

Tokens - Think of a token as the root of a word. “Creat” is the root of words like create, creative, creator, creating, and creation. An LLM looks for correlations: words that go together, like giraffe and neck. Each group of correlated words is represented by a token. A single word might map to multiple tokens, since a word can have multiple meanings and its subwords will likely correlate with many other subwords. One token generally corresponds to ~4 characters of common English text. Examples
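As a rough illustration, here is a toy Python sketch. This is not a real tokenizer, and the subword vocabulary is invented for the example; it simply shows how a word can be split into subword tokens and how the ~4-characters-per-token rule of thumb estimates token counts:

```python
# Toy illustration only: a made-up subword vocabulary containing
# the root "creat", plus the ~4-characters-per-token rule of thumb.

def estimate_tokens(text: str) -> int:
    """Rough estimate: one token per ~4 characters of English text."""
    return max(1, round(len(text) / 4))

# Hypothetical subword vocabulary (invented for this example)
vocab = ["creat", "ive", "ion", "ing", "or", "e"]

def toy_tokenize(word: str) -> list[str]:
    """Greedy longest-match split against the toy vocabulary."""
    tokens, i = [], 0
    while i < len(word):
        for piece in sorted(vocab, key=len, reverse=True):
            if word.startswith(piece, i):
                tokens.append(piece)
                i += len(piece)
                break
        else:
            tokens.append(word[i])  # fall back to a single character
            i += 1
    return tokens

print(toy_tokenize("creative"))   # ['creat', 'ive']
print(toy_tokenize("creation"))   # ['creat', 'ion']
print(estimate_tokens("The quick brown fox jumps over the lazy dog."))
```

Real tokenizers (such as the byte-pair-encoding schemes used by commercial LLMs) learn their vocabularies from data rather than using a hand-written list, but the longest-match idea is similar in spirit.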

More AI definitions

The AI Niceness Overload

Stanford researchers say chatbots are overly agreeable when giving interpersonal advice, affirming users' behavior even when harmful or illegal. On top of that, users could not distinguish when an AI was acting overly agreeable. The study’s lead author worries that the sycophantic advice will worsen people’s social skills and ability to navigate uncomfortable situations. “AI makes it really easy to avoid friction with other people.” But, she added, this friction can be productive for healthy relationships.  More from Stanford

A Simple Explanation as to How AI (LLMs) Works

Building the AI 

Large Language Models (LLMs). Computer programs that do one thing: predict the next “token.”   

Training Data. A massive amount of text is initially fed into the system to train it.  

Parameters. The internal rules and limitations learned from the training data. 

Tokenization part 1: pre-training. The process of converting the raw training data (text, images, or audio) into small units called tokens. 

What Happens When Someone Uses the AI

Prompt. A user asks a question.

Tokenization part 2: inference. The process of converting the prompt (whether text, images, or audio) into small units called tokens. 

Embedding. The conversion of tokens into numbers (vectors) so the computer can look at their relationships. 

Vector databases. The storage and search engine for vector embeddings.  

RAG (retrieval-augmented generation). The system searches the vector database for content relevant to the prompt, which helps prevent hallucinations and provides updated information.

Transformers. The core AI architecture that uses vectors to make a prediction about which token to generate next for the prompt.  
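The inference steps above can be sketched end-to-end as a toy program. Everything here — the two-dimensional vectors, the word list, and the "documents" in the vector database — is made up for illustration; real systems use learned, high-dimensional embeddings and transformer networks, not a lookup table:

```python
# Highly simplified toy sketch of prompt -> tokens -> embedding ->
# vector search (the retrieval part of RAG). All values are invented.

import math

# Tokenization (part 2): split the prompt into tokens
def tokenize(text):
    return text.lower().split()

# Embedding: map each token to a vector (tiny made-up table)
embeddings = {
    "giraffe": [0.9, 0.1],
    "neck":    [0.8, 0.2],
    "stock":   [0.1, 0.9],
}

def embed(tokens):
    # Average the token vectors into one prompt vector
    vecs = [embeddings.get(t, [0.0, 0.0]) for t in tokens]
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(2)]

# Vector database: stored documents with precomputed embeddings
vector_db = {
    "Giraffes have long necks.": [0.85, 0.15],
    "Stock prices fell today.":  [0.10, 0.90],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Retrieval: return the stored document closest to the prompt vector
def retrieve(prompt_vec):
    return max(vector_db, key=lambda doc: cosine(prompt_vec, vector_db[doc]))

prompt = "giraffe neck"
print(retrieve(embed(tokenize(prompt))))  # the giraffe document wins
```

The transformer's job, not modeled here, is the final step: taking those vectors (plus any retrieved context) and predicting the next token to generate.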

From Boredom to Flow

A comfortable routine can turn on us, stifling our creativity and dulling us to other possibilities. We grow lethargic, sleepwalking through life, and boredom soon arrives. At the other end of the spectrum are the bungee-jumping thrill-seekers. Tired of sexual escapades, gambling, and rock climbing, they might self-medicate to stave off the tedium. Then there are drugs that can stimulate many feelings: euphoria, depression, anxiety, and even fear. In each case, the goal is to stimulate the brain’s dopamine reward pathway.

Psychologists tell us that the cure for chronic tedium is not to switch to constant high-sensation thrills. There is a sweet spot between boredom and anxiety called flow. As Dr. Richard Friedman writes:

“Flow happens when a person’s skills and talent perfectly match the challenge of an activity: playing in the zone, where there is total and unself-conscious absorption in the activity. Make the task too challenging and anxiety results; make it too easy and boredom emerges. Flow gets to the heart of fun. It’s not hard to see why the enforced tranquility of a Caribbean vacation could be a dreadful bore for a workaholic but bliss for a couch potato: temperament, as well as talent, must match the activity.”

Hurting from Loss

Love anything that lives—a person, a pet, a plant—and it will die. Trust anybody and you may be hurt; depend on anyone and that one may let you down. The price of cathexis (letting something or someone become important to us) is pain. If someone is determined not to risk pain, then such a person must do without many things: having children, getting married, the ecstasy of sex, the hope of ambition, friendship - all that makes life alive, meaningful and significant.

Move out or grow in any dimension and pain as well as joy will be your reward. A full life will be full of pain. But the only alternative is not to live fully or not to live at all. The attempt to avoid legitimate suffering lies at the root of all emotional illness.

M. Scott Peck, The Road Less Traveled

The pain you feel is a reminder that you are alive, living life. You’re not on the sidelines; you are in the game. Let it be a stepping stone instead of a stumbling block.

13 Webinars this week about AI, Journalism & Media

Mon, Mar 30 - Responsible Journalism

What: Best practice when reporting on domestic abuse and sexual violence.

When: 9 am, Eastern

Where: Zoom

Cost: Free

Sponsor: Welsh Women’s Aid

More Info

 

Tue, Mar 31 - Agile Infographics

What: Quickly and efficiently make professional infographics. To stay current, designers want to adopt Agile methodologies. This workshop shows, step by step, how to use Agile to turn words into professional, compelling infographics quickly. Learn the proven techniques and tools the pros use to do more with less. 

Who: Mike Parkinson, Author and Owner, Billion Dollar Graphics.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Training Magazine

More Info

 

Tue, Mar 31 - The Future of Reporting: Navigating AI's Impact on Editorial

What: Our expert panel will break down how B2B media companies are responding to media landscape changes and share actionable strategies for journalists to thrive. We’ll discuss: Which tasks are easily automated to save you time; Where to lean on human strengths like deep connections and context; The impact of AI on source verification and research; The B2B media roadmap for an AI-integrated future.

Who: Brendan Howard, Freelance Podcast Host; Maria Korolov, Technology Journalist & Author; Alexis Gajewski, Associate Director of Newsroom Operations, Endeavor B2B; Priyanka Rao, Founder & CEO, AI Champions.

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: American Society of Business Publication Editors

More Info

 

Tue, Mar 31 - The Panama Papers at 10: What Changed — and What Hasn’t

What: Ten years ago, the Panama Papers exposed the hidden offshore financial system used by politicians, billionaires and criminals around the world. Its impact continues to shape the fight against financial secrecy today. A conversation about how the investigation unfolded, the reforms it triggered and why the struggle for transparency is far from over.

Who: ICIJ Executive Director Gerard Ryle; international tax justice expert Tove Maria Ryding.

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: International Consortium of Investigative Journalists

More Info

 

Tue, Mar 31 - Media Law & Press Freedom in the Current Administration

What: We will evaluate the angles of attack against journalists and news organizations and discuss how to exercise First Amendment rights in the face of hostility and near constant threats.

Who: Jeffrey Hermes, Deputy Director, Media Law Resource Center; George Freeman, Executive Director, Media Law Resource Center.

When: 6 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: National Association of Hispanic Journalists

More Info

 

Wed, April 1 - Advertising Transformation in Local Media: How Regional Publishers Are Rebuilding Revenue

What: How the publisher's advertiser relationships were tested in real time as they covered one of the most consequential local news stories in the country: the federal immigration enforcement operations that have drawn national attention.

Who: Brian Kennett, VP and head of digital advertising and agency services at the Minnesota Star Tribune; Dave Karabag, regional VP of advertising sales at the Orlando Sentinel.

When: 10 am, Eastern

Where: Zoom

Cost: Free to members

Sponsor: International News Media Association

More Info

 

Wed, April 1 - ChatGPT for Teachers 101  

What: This session will provide a practical walkthrough of the platform and show how teachers, school staff, and district administrators can begin using AI to support their day-to-day work.

Who: Kirk Gulezian, Education & Government, OpenAI.

When: 11 am, Eastern

Where: Zoom

Cost: Free

Sponsor: OpenAI Academy

More Info

 

Wed, April 1 - Using workspace analytics to drive AI adoption

What: A hands-on session on how teams use Workspace Analytics in ChatGPT Enterprise to run stronger rollouts—finding where adoption is gaining traction, where it’s stalling, and what to do next. 

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: OpenAI Academy

More Info

 

Wed, April 1 - 8 Ways to Grow Your Nonprofit’s Following on Social Media

What: The average growth rate for new social media followers ranges from .64% to 3% per month, depending upon the platform. In other words, the era of organic growth on social media is over. To grow your nonprofit’s following on social media, you need to make a concerted effort to let your supporters and donors know how to find your nonprofit on social media. This free 20-minute webinar will present eight ways to grow your nonprofit’s following on social media.

Who: Heather Mansfield, Founder of Nonprofit Tech for Good.

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Nonprofit Tech

More Info

 

Wed, April 1 - Disability Guide Launch  

What: Built by Military Veterans in Journalism (MVJ) in collaboration with the Disabled Journalists Association (DJA), Fix the Frame: A Newsroom Guide to Disability Narratives is a practical resource designed to help journalists produce more accurate, respectful, and inclusive reporting on disability.

Who: Zack Baddorf (MVJ); Cara Reedy (DJA); Rebecca Cokley, Ford Foundation; Sam Kille; Beth Haller; Russell Midori (MVJ); Devon Lancia (MVJ).

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Military Veterans in Journalism

More Info

 

Thu, April 2 - How to Reinforce Training with AI and Coaching

What: We’ll explore how leaders can automate coaching prompts, personalize development pathways, and measure impact without losing the human touch. The combination of AI and coaching creates an ecosystem of intelligent reinforcement—keeping learners engaged long after training ends.

Who: Tim Hagen is the President of Progress Coaching.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: OpenSesame

More Info

 

Thu, April 2 - AI‑Driven Cyber Threats: What’s Real, What’s Hype, and How to Prepare

What: We will explore how AI is accelerating both cyberattacks and defenses, why SMBs and mid-market organizations are increasingly targeted, and how the gap between traditional security tools and AI-era threats continues to widen. 

Who: Justin Vredeveld, Business Development Manager; Jared Olson, Security Team Lead; Blake Mielke, Incident Response Lead.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Ontech Systems

More Info

 

Fri, April 3 - Using AI to Measure the Efficacy of Professional Learning 

What: How using AI to analyze large collections of data can shed light on the efficacy of professional learning.

Who: Lisa Schmucki, Founder and CEO of edWeb.net; Thor Prichard, President and CEO of Clarity Innovations.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: EdWeb.net

More Info

A Summary of 3 Major AI Legal Issues

There are dozens of lawsuits pending over the use of AI. Here are three of the major issues facing the courts.

1a. Copyright & AI: Creating with AI  

Defined: Copyright law is about protecting original expression that’s been “fixed in any tangible medium.”  

Current Law: Purely AI-generated content can't be copyrighted, because copyright protection requires a human creator. When AI is combined with human effort, the US Copyright Office determines whether a work is protected based on how much of it was generated by AI.

An image from a US Copyright Office presentation illustrates this determination. [Image not reproduced here.]

Do you think this is a copyright violation?

In Nov 2025, Getty Images largely lost its London lawsuit against artificial intelligence company Stability AI over its image generator. While it failed on copyright grounds, Getty succeeded "in part" on trademark infringement in relation to Getty watermarks. 

1b. Copyright & AI: AI Training

Defined: Training Data is the data initially provided to an AI model so it can create a map of relationships, which it then uses to make predictions. Giving the AI a wide range of data means more options and may lead to more creative results. Issues arise when AI companies potentially use copyrighted content for training without first receiving permission from the copyright holders.

Current Law: There is no law that directly applies to using copyrighted material for training an AI. The Copyright Office has left the door open to considering it as a possibility, apparently waiting until the courts rule on the issue. 

Unresolved Legal issues: There are more than 30 active lawsuits between AI companies and creators over whether copyrighted material may be used as training data without permission. Some AI companies argue that their use falls under the legal doctrine of “fair use,” which allows exceptions to copyright rules when material is used for purposes such as education and news.  

Example: So far, Anthropic and Meta have successfully argued in lower courts that their use of copyrighted books was "exceedingly transformative." However, authors whose works were allegedly pirated can receive compensation as part of a $1.5 billion settlement.  

2. Right of publicity & AI

Defined: The Right of Publicity is the right of individuals to control the commercial use of their name, image, likeness, or voice (NIL).

Current Law: There is no federal NIL law, only state laws that are not entirely consistent. Tennessee has what’s called the ELVIS Act, which protects individuals from AI voice cloning and unauthorized digital replicas. New York has a similar law. One area of the law that remains murky is what constitutes a commercial use of AI. A bill introduced in Congress in 2024 would forbid AI-generated replicas of people without their permission, but the legislation has not been voted on in either the House or the Senate.

Unresolved Legal issues: Are unauthorized clones in ads, music, and social media a violation of NIL laws? Or are they an expression of First Amendment rights? 

Example 1: Grammarly’s “Expert Review”

This feature promised feedback on your writing from the perspective of famous authors, journalists, and academics. It has since been deactivated and is now the subject of a class-action lawsuit: the writers claim the tool’s feedback was poor enough to make them look bad.

Go Deeper: Should you be allowed, legally, to use AI to “write like yourself”?

Should you be allowed to use AI to write like someone else? Would it make it more acceptable to train an AI on someone’s voice or image if the creative work used was first legally purchased? 

Example 2:  Matthew McConaughey

The actor has secured eight trademarks from the U.S. Patent and Trademark Office to protect his likeness and voice from unauthorized AI use. McConaughey’s lawyers believe that the threat of a lawsuit in federal courts would help deter misuse, though an actual court fight would have an uncertain outcome.

Go Deeper: There is a difference in how the law considers public vs. private citizens when it comes to issues of defamation. Should there also be a difference in how we treat the AI replication of a public figure?

How does generative AI challenge our traditional understanding of personal identity?

3. Liability

Defined: Businesses can be held legally responsible for damage caused by the AI systems they deploy.

Current Law: Areas of risk include intellectual property infringement, data breaches, bias, and defamation. Improper AI use by employees, particularly deepfakes, could expose businesses to harassment and discrimination claims.

Unresolved Legal issues: Who is responsible when AI is used for harm?  Are businesses responsible when employees use AI tools to create doctored images and audio targeting coworkers? 

Examples: 

In Maryland, a school employee was sentenced to jail after using AI to create a racist deepfake recording of a principal.  

A Nashville television meteorologist sued her former station after management failed to adequately address deepfake sexual images created using her likeness.

Employers using AI to screen candidates’ social media could be held responsible for bias and false inferences. Cultural or linguistic styles, code-switching, slang, sarcasm, and memes could lead to misclassification by bots. AI tools don’t understand context or sarcasm and risk misreading humor, quotes, or historical posts. Reviews of social media feeds can reveal religion, disability, pregnancy, age, and a host of other factors that should not be considered at the time of hire. 

Go Deeper: AI in employee handbooks

Other concerns: freelancing contracts 

Happiness or Growth?

You stop to visit a friend to find her five-year-old running around in diapers. Your friend explains, “That’s the way he likes it, and as long as he’s happy, then it's all right with me.” You’d probably say to yourself, “That’s not love. Love works to see children grow up and take on responsibility as they are able.” If I love you, I can’t just be looking out for what makes you happy. When happiness and growth collide, real love chooses growth. If there's someone in your life and you are wondering if he or she really loves you, ask yourself this question: Is this person seeking what’s in your best interest? Even when you don’t fully understand why they are doing what they are doing, do they persist in looking out for you? Is this person willing to sacrifice your favor in order to see you grow?

Reducing Loneliness: AI or Human?

Researchers from the University of British Columbia found that first-semester college students who texted a randomly selected fellow first-semester college student every day for two weeks experienced around a nine percent reduction in feelings of loneliness. The same two weeks of daily messaging with a Discord chatbot reduced loneliness by around two percent, which turned out to be the same amount as daily one-sentence journaling. -404 Media

A more meaningful study might have the LLM mimic a first-semester student of the same economic and social background as the study participant.

The Place of AI-Generated Code

Computer programming is now becoming a conversation, a back-and-forth talk fest between software developers and their bots. Coding is perhaps the first form of very expensive industrialized human labor that A.I. can actually replace. A.I.-generated videos look janky, artificial photos surreal; law briefs can be riddled with career-ending howlers. But A.I.-generated code? If it passes its tests and works, it’s worth as much as what humans get paid $200,000 or more a year to compose. -New York Times

AI Definitions: Abstractive Summarization

Abstractive Summarization (ABS) – A natural language processing summary technique that generates new sentences not found in the source material. In contrast, extractive summarization sticks to the original text, identifying the important sections to produce a subset of sentences taken from the original text. Abstractive summarization is better when the meaning of the text is more important than exactness, while extractive summarization is better when sticking to the original language is critical. 

More AI definitions