Hurting from Loss

Love anything that lives—a person, a pet, a plant—and it will die. Trust anybody and you may be hurt; depend on anyone and that one may let you down. The price of cathexis (letting something or someone become important to us) is pain. If someone is determined not to risk pain, then such a person must do without many things: having children, getting married, the ecstasy of sex, the hope of ambition, friendship - all that makes life alive, meaningful and significant.

Move out or grow in any dimension and pain as well as joy will be your reward. A full life will be full of pain. But the only alternative is not to live fully or not to live at all. The attempt to avoid legitimate suffering lies at the root of all emotional illness.

M. Scott Peck, The Road Less Traveled

The pain you feel is a reminder that you are alive, living life. You’re not on the sidelines; you are in the game. Let it be a stepping stone instead of a stumbling block.

13 Webinars this week about AI, Journalism & Media

Mon, Mar 30 - Responsible Journalism

What: Best practice when reporting on domestic abuse and sexual violence.

When: 9 am, Eastern

Where: Zoom

Cost: Free

Sponsor: Welsh Women’s Aid

More Info

 

Tue, Mar 31 - Agile Infographics

What: Quickly and efficiently make professional infographics. To stay current, designers want to adopt Agile methodologies. This workshop shows, step by step, how to use Agile to turn words into compelling, professional infographics quickly. Learn the proven techniques and tools the pros use to do more with less.

Who: Mike Parkinson, Author and Owner, Billion Dollar Graphics.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Training Magazine

More Info

 

Tue, Mar 31 - The Future of Reporting: Navigating AI's Impact on Editorial

What: Our expert panel will break down how B2B media companies are responding to media landscape changes and share actionable strategies for journalists to thrive. We’ll discuss: Which tasks are easily automated to save you time; Where to lean on human strengths like deep connections and context; The impact of AI on source verification and research; The B2B media roadmap for an AI-integrated future.

Who: Brendan Howard, Freelance Podcast Host; Maria Korolov, Technology Journalist & Author; Alexis Gajewski, Associate Director of Newsroom Operations, Endeavor B2B; Priyanka Rao, Founder & CEO, AI Champions.

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: American Society of Business Publication Editors

More Info

 

Tue, Mar 31 - The Panama Papers at 10: What Changed — and What Hasn’t

What: Ten years ago, the Panama Papers exposed the hidden offshore financial system used by politicians, billionaires and criminals around the world. Its impact continues to shape the fight against financial secrecy today. A conversation about how the investigation unfolded, the reforms it triggered and why the struggle for transparency is far from over.

Who: ICIJ Executive Director Gerard Ryle; international tax justice expert Tove Maria Ryding.

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: International Consortium of Investigative Journalists

More Info

 

Tue, Mar 31 - Media Law & Press Freedom in the Current Administration

What: We will evaluate the angles of attack against journalists and news organizations and discuss how to exercise First Amendment rights in the face of hostility and near constant threats.

Who: Jeffrey Hermes, Deputy Director, Media Law Resource Center; George Freeman, Executive Director, Media Law Resource Center.

When: 6 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: National Association of Hispanic Journalists

More Info

 

Wed, April 1 - Advertising Transformation in Local Media: How Regional Publishers Are Rebuilding Revenue

What: How publishers’ advertiser relationships were tested in real time as they covered one of the most consequential local news stories in the country: the federal immigration enforcement operations that have drawn national attention.

Who: Brian Kennett, VP and head of digital advertising and agency services at the Minnesota Star Tribune; Dave Karabag, regional VP of advertising sales at the Orlando Sentinel.

When: 10 am, Eastern

Where: Zoom

Cost: Free to members

Sponsor: International News Media Association

More Info

 

Wed, April 1 - ChatGPT for Teachers 101  

What: This session will provide a practical walkthrough of the platform and show how teachers, school staff, and district administrators can begin using AI to support their day-to-day work.

Who: Kirk Gulezian, Education & Government, OpenAI.

When: 11 am, Eastern

Where: Zoom

Cost: Free

Sponsor: OpenAI Academy

More Info

 

Wed, April 1 - Using workspace analytics to drive AI adoption

What: A hands-on session on how teams use Workspace Analytics in ChatGPT Enterprise to run stronger rollouts—finding where adoption is gaining traction, where it’s stalling, and what to do next. 

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: OpenAI Academy

More Info

 

Wed, April 1 - 8 Ways to Grow Your Nonprofit’s Following on Social Media

What: The average growth rate for new social media followers ranges from 0.64% to 3% per month, depending upon the platform. In other words, the era of organic growth on social media is over. To grow your nonprofit’s following on social media, you need to make a concerted effort to let your supporters and donors know how to find your nonprofit on social media. This free 20-minute webinar will present eight ways to grow your nonprofit’s following on social media.

Who: Heather Mansfield, Founder of Nonprofit Tech for Good.

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Nonprofit Tech

More Info

 

Wed, April 1 - Disability Guide Launch  

What: Built by Military Veterans in Journalism (MVJ) in collaboration with the Disabled Journalists Association (DJA), Fix the Frame: A Newsroom Guide to Disability Narratives is a practical resource designed to help journalists produce more accurate, respectful, and inclusive reporting on disability.

Who: Zack Baddorf (MVJ); Cara Reedy (DJA); Rebecca Cokley, Ford Foundation; Sam Kille; Beth Haller; Russell Midori (MVJ); Devon Lancia (MVJ).

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Military Veterans in Journalism

More Info

 

Thu, April 2 - How to Reinforce Training with AI and Coaching

What: We’ll explore how leaders can automate coaching prompts, personalize development pathways, and measure impact without losing the human touch. The combination of AI and coaching creates an ecosystem of intelligent reinforcement—keeping learners engaged long after training ends.

Who: Tim Hagen, President, Progress Coaching.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: OpenSesame

More Info

 

Thu, April 2 - AI‑Driven Cyber Threats: What’s Real, What’s Hype, and How to Prepare

What: We will explore how AI is accelerating both cyberattacks and defenses, why SMBs and mid-market organizations are increasingly targeted, and how the gap between traditional security tools and AI-era threats continues to widen. 

Who: Justin Vredeveld, Business Development Manager; Jared Olson, Security Team Lead; Blake Mielke, Incident Response Lead.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Ontech Systems

More Info

 

Fri, April 3 - Using AI to Measure the Efficacy of Professional Learning 

What: How using AI to analyze large collections of data can shed light on the efficacy of professional learning.

Who: Lisa Schmucki, Founder and CEO of edWeb.net; Thor Prichard, President and CEO of Clarity Innovations.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: EdWeb.net

More Info

A Summary of 3 Major AI Legal Issues

There are dozens of lawsuits pending over the use of AI. Here are three of the major issues facing the courts.

1a. Copyright & AI: Creating with AI  

Defined: Copyright law is about protecting original expression that’s been “fixed in any tangible medium.”  

Current Law: Purely AI-generated content can't be copyrighted because an AI cannot hold a copyright and such works are not considered the work of a human creator. When AI is combined with human effort, the US Copyright Office determines whether a work is protected based on the amount of AI used.

This image from a US Copyright Office presentation illustrates the point:

Do you think this is a copyright violation?

In Nov 2025, Getty Images largely lost its London lawsuit against artificial intelligence company Stability AI over its image generator. While it failed on copyright grounds, Getty succeeded "in part" on trademark infringement in relation to Getty watermarks. 

1b. Copyright & AI: AI Training

Defined: Training data is the data initially provided to an AI model so it can create a map of relationships, which it then uses to make predictions. Giving the AI a wide range of data means more options and may lead to more creative results. Issues arise when AI companies use copyrighted content for training without first receiving permission from the copyright holders.

Current Law: There is no law that directly applies to using copyrighted material for training an AI. The Copyright Office has left the door open, apparently waiting until the courts rule on the issue.

Unresolved Legal issues: There are more than 30 active lawsuits between AI companies and creators over whether copyrighted material can be used as training data without permission. Some AI companies argue that their use falls under the legal concept of “fair use,” which holds that there are exceptions to copyright rules when material is used for things like education and news.

Example: So far, Anthropic and Meta have successfully argued in lower courts that their use of the copyrighted books was "exceedingly transformative." However, authors whose works were allegedly pirated can receive compensation as part of a $1.5 billion settlement.

2. Right of publicity & AI

Defined: The Right of Publicity is the right of individuals to control the commercial use of their name, image, likeness, or voice (NIL).

Current Law: There is no federal NIL law, only state laws that are not entirely consistent. Tennessee has what’s called the ELVIS Act, which protects individuals from AI voice cloning and unauthorized digital replicas. New York has a similar law. One area of the law that remains murky is what constitutes a commercial use of AI. A bill was introduced in Congress in 2024 that would forbid AI-generated replicas of people without their permission. However, the legislation hasn’t been voted on in the House or Senate.

Unresolved Legal issues: Are unauthorized clones in ads, music, and social media a violation of NIL laws? Or are they an expression of First Amendment rights? 

Example 1: Grammarly’s “Expert Review”

This feature promised feedback on your writing from the perspective of famous authors, journalists and academics. It has since been deactivated, and a class-action lawsuit followed: the writers claim the tool wasn’t very good and made them look bad.

Go Deeper: Should you be allowed, legally, to use AI to “write like yourself”?

Should you be allowed to use AI to write like someone else? Would it make it more acceptable to train an AI on someone’s voice or image if the creative work used was first legally purchased? 

Example 2:  Matthew McConaughey

The actor has secured eight trademarks from the U.S. Patent and Trademark Office to protect his likeness and voice from unauthorized AI use. McConaughey’s lawyers believe that the threat of a lawsuit in federal courts would help deter misuse, though an actual court fight would have an uncertain outcome.

Go Deeper: There is a difference in how the law considers public vs. private citizens when it comes to issues of defamation. Should there also be a difference in how we treat the AI replication of a public figure?

How does generative AI challenge our traditional understanding of personal identity?

3. Liability

Defined: Liability is the question of whether businesses are responsible for damage caused by AI.

Current Law: Areas of risk include intellectual property infringement, data breaches, bias, and defamation. Improper AI use by employees, particularly deepfakes, could expose businesses to harassment and discrimination claims.

Unresolved Legal issues: Who is responsible when AI is used for harm?  Are businesses responsible when employees use AI tools to create doctored images and audio targeting coworkers? 

Examples: 

In Maryland, a school employee was sentenced to jail after using AI to create a racist deepfake recording of a principal.  

A Nashville television meteorologist sued her former station after management failed to adequately address deepfake sexual images created using her likeness.

Employers using AI to screen candidates’ social media could be held responsible for bias and false inferences. Cultural or linguistic styles, code-switching, slang, and memes could lead to misclassification by bots. AI tools don’t understand context and are at risk of misreading humor, sarcasm, quotes, or historical posts. Reviews of social media feeds can reveal religion, disability, pregnancy, age, and a host of other factors that should not be considered at the time of hire.

Go Deeper: AI in employee handbooks

Other concerns: freelancing contracts 

Happiness or Growth?

You stop to visit a friend to find her five-year-old running around in diapers. Your friend explains, “That’s the way he likes it, and as long as he’s happy, then it's all right with me.” You’d probably say to yourself, “That’s not love. Love works to see children grow up and take on responsibility as they are able.” If I love you, I can’t just be looking out for what makes you happy. When happiness and growth collide, real love chooses growth. If there's someone in your life and you are wondering if he or she really loves you, ask yourself this question: Is this person seeking what’s in your best interest? Even when you don’t fully understand why they are doing what they are doing, do they persist in looking out for you? Is this person willing to sacrifice your favor in order to see you grow?

Reducing Loneliness: AI or Human?

Researchers from the University of British Columbia found that first-semester college students who texted a randomly selected fellow first-semester college student every day for two weeks experienced around a nine percent reduction in feelings of loneliness. The same two weeks of daily messaging with a Discord chatbot reduced loneliness by around two percent, which turned out to be the same amount as daily one-sentence journaling. -404 Media

A more meaningful study might be to teach the LLM to mimic a first-semester student of the same economic and social background as the study participant.

The Place of AI-Generated Code

Computer programming is now becoming a conversation, a back-and-forth talk fest between software developers and their bots. Coding is perhaps the first form of very expensive industrialized human labor that A.I. can actually replace. A.I.-generated videos look janky, artificial photos surreal; law briefs can be riddled with career-ending howlers. But A.I.-generated code? If it passes its tests and works, it’s worth as much as what humans get paid $200,000 or more a year to compose. -New York Times

AI Definitions: Abstractive Summarization

Abstractive Summarization (ABS) – A natural language processing summary technique that generates new sentences not found in the source material. In contrast, extractive summarization sticks to the original text, identifying the important sections to produce a subset of sentences taken from the original text. Abstractive summarization is better when the meaning of the text is more important than exactness, while extractive summarization is better when sticking to the original language is critical. 
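To make the contrast concrete, here is a minimal Python sketch of the extractive side of that distinction. The function name and the word-frequency scoring heuristic are illustrative assumptions, not any particular library’s API; an abstractive summarizer would instead hand the text to a generative model that writes new sentences.

```python
import re
from collections import Counter

def extractive_summary(text: str, num_sentences: int = 2) -> str:
    """Return a summary built only from sentences already present in the source text."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    words = re.findall(r'[a-z]+', text.lower())
    freq = Counter(words)  # crude importance signal: how often each word appears overall

    def score(sentence: str) -> float:
        tokens = re.findall(r'[a-z]+', sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    top = set(sorted(sentences, key=score, reverse=True)[:num_sentences])
    # Keep the original order so the extract reads naturally.
    return " ".join(s for s in sentences if s in top)

# An abstractive summarizer, by contrast, would generate sentences that never
# appear in the source: closer to the meaning, looser with the wording.
```

Everything this sketch returns is a verbatim subset of the input, which is exactly the constraint abstractive summarization relaxes.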

More AI definitions

The E-Nose

Scientists have been developing and refining a technology called the e-nose—which is exactly what it sounds like. These systems detect and distinguish aromas, sometimes with about 1,000 times as much precision as humans can. Researchers are exploring—or even commercializing—e-nose systems that can scan a person’s breath to detect deadly infections, sniff the air in a building to seek out signs of potential contaminants, or even develop perfumes more quickly and cheaply than before. - Wall Street Journal

Orange Buttons are the Best

An appeal to authority is the fallacy of claiming that something must be true because an authority on the subject believes it to be true. It is possible for an expert to be wrong; we need to understand their reasoning or research before we appeal to their findings. In a design meeting you might hear something like this:

“Amazon is a successful website. Amazon has orange buttons. So orange buttons are the best.”

Feel free to switch out ‘Amazon’ and ‘orange buttons’ for anything you want; you get an equally weak argument.

When we counter any logical fallacy, we want to do it as cleanly as possible. In the above example, we only need to point out that many successful websites don’t have orange buttons and many unsuccessful sites do have orange buttons. Then we can move away from the matter entirely unless there is some research or reason available to explain the authority’s decision.

Rob Sutcliffe writing in Prototypr

The intersection of Science & AI in 15 Articles

Unresolved Copyright & AI Questions


In March 2026, the Supreme Court declined to hear Thaler v. Perlmutter.

This leaves in place the D.C. Circuit’s March 2025 ruling.

There was no ruling on the merits. This doesn’t set precedent.

Stephen Thaler listed his AI system as the sole author, disclaimed any human creative contribution, and asked for copyright protection anyway. The DC court said no.

The D.C. Circuit held that copyright law requires a human author, a requirement that does not rule out works made with AI assistance.

Questions not resolved:

1. How much human involvement is enough?

The US Copyright Office said in its “Zarya of the Dawn” comic book registration decision that the AI-generated images weren’t protectable, but the human-authored text and the selection and arrangement of text and images were. So far, the Office has said that prompts are not copyrightable: prompting is more like giving instructions to a commissioned artist than actually determining the expressive content of the final image. But what if dozens, even hundreds of prompts are entered? Wouldn’t that involve substantial human effort, iterative refinement, and a creative vision? The Copyright Office says getting different results from the same prompt is proof the user isn’t controlling the expression. The underlying question is this: Is prompting closer to authorship or closer to curation?

2. Can you prove what you contributed?

If your work incorporates more than a de minimis amount of AI-generated material, the Copyright Office requires a disclosure statement about the AI involvement and a description of your human contribution. This means the creator must keep files, prompts, drafts, layered edits, and notes on what was intended, in case there is a need to prove exactly what the human contribution was. A copyright applicant can avoid this simply by not disclosing the AI use. The system, in effect, rewards silence.

3. What happens when uncopyrightable AI output gets licensed anyway?

AI-generated materials are already being licensed, bundled, and sold. An example: Someone took a Python library and used an AI coding agent to rewrite it, then changed the project’s license to a more permissive one. The original creator objected, saying the original license still applied.

4. AI output can absolutely infringe. So now what?

The SCOTUS denial also prompted a wave of commentary suggesting that AI-generated works now exist in some kind of copyright-free zone. They don’t. Issues still on the table: whether AI-generated summaries of news articles are substitutive enough to infringe, and whether AI-generated narrative retellings of novels cross the line from ideas to expression. One judge dismissed claims that AI bullet-point summaries of investigative journalism were substantially similar to the originals. The same judge allowed a lawsuit to proceed because ChatGPT’s summary of a novel might have captured the “overall tone and feel” of the original work.

Bottom line: Millions of people are using AI tools every day without knowing whether what they’re making is protectable, infringing, both, or neither. 

Thaler Is Dead. Now for the AI Copyright Questions That Actually Matter 

Coding in the time of AI

You won't see the code yourself anymore, the robots will write it for you. Half the time, the code they write will be garbage, or nonsense. Slop. But it's so cheap to write that the computer can just throw it away and write some more, over and over, until it finally happens to work. Is it elegant? Who cares? It's cheap. Ten thousand times cheaper than paying you to write it, so we can afford to waste a lot of code along the way. If you were one of those crafters—the people who wrote idiomatic code that made that programming language sing—there's a real grief here. It's not as serious as when we know a human language is dying out, but it's not entirely dissimilar, either. -Anil Dash
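The workflow Dash describes, generate code, run the tests, throw away the failures, and try again, can be sketched in a few lines. This is only an illustration under assumptions: ask_model_for_code is a hypothetical stand-in for whatever code-generation API a team actually uses, and test_cmd is whatever test suite they already run.

```python
import subprocess
import tempfile

def ask_model_for_code(prompt: str) -> str:
    """Hypothetical stand-in for a call to a code-generating model."""
    raise NotImplementedError("wire this up to the model of your choice")

def generate_until_it_works(prompt: str, test_cmd: list[str], max_attempts: int = 20) -> str | None:
    """Regenerate code and discard failures until the test suite passes."""
    for _ in range(max_attempts):
        candidate = ask_model_for_code(prompt)
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(candidate)
            path = f.name
        # Run the existing tests against the candidate; exit code 0 means "it works."
        result = subprocess.run(test_cmd + [path], capture_output=True)
        if result.returncode == 0:
            return candidate  # keep the first version that passes
    return None  # every attempt was slop; a human has to step in
```

The economics in the quote live in max_attempts: discarding nineteen failed candidates costs almost nothing compared with paying someone to write one by hand.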

20 Recent Articles about AI & Academic Scholarship

Research integrity is locked into an arms race with agentic AI slop – LSE  

AI can help with research, but humans must remain accountable say university executives – Times Higher Ed 

Hallucinated citations produced by generative artificial intelligence may constitute research misconduct when citations function as data in scholarly papers – Taylor & Francis

AI tool flags plagiarism in 95% of Ph.D. theses submitted this year at Indian university – Times of India 

How AI use in scholarly publishing threatens research integrity, lessens trust, and invites misinformation – Bulletin of the Atomic Scientists

Hallucinated References: Five Excuses for Academic Misconduct – Dorothea Baur

Ministers urged not to allow data mining of academic literature – Research Professional News

Librarian finds ‘preposterous number’ of fake references in paper from Springer Nature journal – Retraction Watch 

AI is inventing academic articles – and scholars are citing them – the Observer  

DataSeer develops AI system to track dataset reuse – Research Information  

Journal Submissions Riddled With AI-Created Fake Citations – Inside Higher Ed

Account for AI in the environmental footprint of scientific publishing – Nature  

Will AI Help or Hinder Scientific Publishing? – Undark

Hey ChatGPT, write me a fictional paper: these LLMs are willing to commit academic fraud. – Nature

Scientists are failing to disclose their use of AI despite journal mandates, finds study – Physics World

AI in the editorial workflow: Journals set the rules, institutions set the habits – Scholarly Futures  

AI is turning research into a scientific monoculture - Nature

What happens when reviewers receive AI feedback in their reviews? – ArXiv

Human versus artificial intelligence: investigating ability of young academics from research and non-research institutions to identify ChatGPT-generated dental research abstracts - Nature 

Fear of stigma blamed as 0.1 per cent of papers declare AI use - Times Higher Ed