May your trails
May your trails be crooked, winding, lonesome, dangerous, leading to the most amazing view. -Edward Abbey
There are dozens of lawsuits pending over the use of AI. Here are three of the major issues facing the courts.
Defined: Copyright law is about protecting original expression that’s been “fixed in any tangible medium.”
Current Law: AI-generated content can't be copyrighted because an AI cannot hold a copyright, and these images are not considered the work of a human creator. When AI is combined with human effort, the US Copyright Office determines whether a work is protected based on the amount of AI used.
This image from a US Copyright Office presentation illustrates the issue:
Do you think this is a copyright violation?
In Nov 2025, Getty Images largely lost its London lawsuit against artificial intelligence company Stability AI over its image generator. While it failed on copyright grounds, Getty succeeded "in part" on trademark infringement in relation to Getty watermarks.
Defined: Training Data is the data initially provided to an AI model so it can create a map of relationships, which it then uses to make predictions. Giving the AI a wide range of data means more options and may lead to more creative results. Issues arise when AI companies potentially use copyrighted content for training without first receiving permission from the copyright holders.
Current Law: There is no law that directly applies to using copyrighted material to train an AI. The Copyright Office has left the door open, apparently waiting until the courts rule on the issue.
Unresolved Legal issues: There are more than 30 active lawsuits between AI companies and creators over whether copyrighted material can be used as training data without permission. Some AI companies argue that their use falls under the legal concept of “fair use,” which holds that there are exceptions to the copyright rules when the material is used for things like education and news.
Example: So far, Anthropic and Meta have successfully argued in lower courts that their use of copyrighted books was "exceedingly transformative." However, authors whose works were allegedly pirated can receive compensation as part of a $1.5 billion settlement.
Defined: The Right of Publicity is the right of individuals to control the commercial use of their name, image, likeness, or voice (NIL).
Current Law: There is no federal NIL law, only state laws that are not entirely consistent. Tennessee has what’s called the ELVIS Act, which protects individuals from AI voice cloning and unauthorized digital replicas. New York has a similar law. One area of the law that remains murky is what constitutes a commercial use of AI. A bill introduced in Congress in 2024 would forbid AI-generated replicas of people without their permission, but the legislation hasn’t been voted on in the House or Senate.
Unresolved Legal issues: Are unauthorized clones in ads, music, and social media a violation of NIL laws? Or are they an expression of First Amendment rights?
Example 1: Grammarly’s “Expert Review”
This feature promised feedback on your writing from the perspective of famous authors, journalists and academics. It has since been deactivated, and a class-action lawsuit followed. The plaintiffs claim the tool’s feedback was poor enough to make those writers look bad.
Go Deeper: Should you be allowed, legally, to use AI to “write like yourself”?
Should you be allowed to use AI to write like someone else? Would it make it more acceptable to train an AI on someone’s voice or image if the creative work used was first legally purchased?
Example 2: Matthew McConaughey
The actor has secured eight trademarks from the U.S. Patent and Trademark Office to protect his likeness and voice from unauthorized AI use. McConaughey’s lawyers believe that the threat of a lawsuit in federal courts would help deter misuse, though an actual court fight would have an uncertain outcome.
Go Deeper: There is a difference in how the law considers public vs. private citizens when it comes to issues of defamation. Should there also be a difference in how we treat the AI replication of a public figure?
How does generative AI challenge our traditional understanding of personal identity?
Defined: Liability is the legal responsibility of businesses for damage caused by AI.
Current Law: Areas of risk include intellectual property infringement, data breaches, bias, and defamation. Improper AI use by employees, particularly deepfakes, could expose businesses to harassment and discrimination claims.
Unresolved Legal issues: Who is responsible when AI is used for harm? Are businesses responsible when employees use AI tools to create doctored images and audio targeting coworkers?
Examples:
In Maryland, a school employee was sentenced to jail after using AI to create a racist deepfake recording of a principal.
A Nashville television meteorologist sued her former station after management failed to adequately address deepfake sexual images created using her likeness.
Employers using AI to screen candidates’ social media could be held responsible for bias and false inferences. Cultural or linguistic styles, code-switching, slang, sarcasm, and memes could lead to misclassification by bots. AI tools don’t understand context or sarcasm and are at risk of misreading humor, quotes, or historical posts. Reviews of social media feeds can reveal religion, disability, pregnancy, age, and a host of other factors that should not be considered at the time of hire.
Go Deeper: AI in employee handbooks
Other concerns: freelancing contracts
You stop to visit a friend to find her five-year-old running around in diapers. Your friend explains, “That’s the way he likes it, and as long as he’s happy, then it's all right with me.” You’d probably say to yourself, “That’s not love. Love works to see children grow up and take on responsibility as they are able.” If I love you, I can’t just be looking out for what makes you happy. When happiness and growth collide, real love chooses growth. If there's someone in your life and you are wondering if he or she really loves you, ask yourself this question: Is this person seeking what’s in your best interest? Even when you don’t fully understand why they are doing what they are doing, do they persist in looking out for you? Is this person willing to sacrifice your favor in order to see you grow?
Researchers from the University of British Columbia found that first-semester college students who texted a randomly selected fellow first-semester college student every day for two weeks experienced around a nine percent reduction in feelings of loneliness. The same two weeks of daily messaging with a Discord chatbot reduced loneliness by around two percent, which turned out to be the same amount as daily one-sentence journaling. -404 Media
A more meaningful study might be to teach the LLM to mimic a first-semester student of the same economic and social background as the person who is part of the study.
Horror Novel ‘Shy Girl’ Canceled Over Suspected A.I. Use – New York Times
Human Strategy In An AI-Accelerated Workflow – Smashing Magazine
Why AI-Generated UX Still Feels Off – Vandelay Design
Netflix to Pay as Much as $600 Million for Ben Affleck’s AI Firm – Bloomberg
Designing AI Experiences People Actually Use – Buzz
Tilly Norwood, the fully AI 'actor,' to be part of rapidly expanding 'Tillyverse' – NBC News
How I’m dealing with the pressure to adopt AI as a designer — mynameismartin
German voice actors boycott Netflix over AI training concerns – Reuters
Deepfaking Orson Welles’s Mangled Masterpiece – New Yorker
Why an A.I. Video of Tom Cruise Battling Brad Pitt Spooked Hollywood – New York Times
Against Generative AI: Is Art the Last Refuge of Our Humanity? – Lit Hub
Some thoughts about tool design and AI – Buckenham
Hundreds of creatives warn against an AI slop future – The Verge
How design elements are used in Google’s Gemini – Google Design
‘Stranger Things’ Creators Accused by Fans of Using AI To Write Series Finale – Vice
AI Will Bring Val Kilmer Back To Life For a New Adventure Film – Geeky Rant
AI and the Rosetta Stone – Yann-Edern Gillet
Is There an Ethical Path for AI Art? – Hyperallergic
Computer programming is now becoming a conversation, a back-and-forth talk fest between software developers and their bots. Coding is perhaps the first form of very expensive industrialized human labor that A.I. can actually replace. A.I.-generated videos look janky, artificial photos surreal; law briefs can be riddled with career-ending howlers. But A.I.-generated code? If it passes its tests and works, it’s worth as much as what humans get paid $200,000 or more a year to compose. -New York Times
Abstractive Summarization (ABS) – A natural language processing summary technique that generates new sentences not found in the source material. In contrast, extractive summarization sticks to the original text, identifying the important sections to produce a subset of sentences taken from the original text. Abstractive summarization is better when the meaning of the text is more important than exactness, while extractive summarization is better when sticking to the original language is critical.
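The contrast above can be sketched in code. This is a minimal, illustrative extractive summarizer (not from any particular library): it scores each sentence by the frequency of its words across the whole text and returns the top sentences verbatim. An abstractive system would instead generate new sentences, which in practice requires a trained language model.

```python
# Minimal extractive-summarization sketch: score sentences by word
# frequency and keep the top-scoring ones verbatim from the source.
# Function and variable names here are illustrative assumptions.
import re
from collections import Counter

def extractive_summary(text: str, n_sentences: int = 1) -> str:
    # Split on sentence-ending punctuation followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    # Count how often each word appears anywhere in the text.
    freq = Counter(re.findall(r"\w+", text.lower()))
    # Score each sentence by the summed frequency of its words.
    scored = [(sum(freq[w] for w in re.findall(r"\w+", s.lower())), i, s)
              for i, s in enumerate(sentences)]
    top = sorted(scored, reverse=True)[:n_sentences]
    # Restore original order so the summary reads naturally.
    return " ".join(s for _, _, s in sorted(top, key=lambda t: t[1]))

text = ("The cat sat on the mat. The cat purred. "
        "Dogs bark loudly at night.")
print(extractive_summary(text, 1))  # → The cat sat on the mat.
```

Because every output sentence is copied from the source, this approach preserves the original language exactly, which is the trade-off the definition describes.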
The last of the human freedoms is to choose one's attitude in any given set of circumstances. - Viktor Frankl, born March 26, 1905
What to do if your employer is requiring you to use AI – Fast Company
How to Make Claude Code Improve from its Own Mistakes – Toward Data Science
I’ve taught thousands of people how to use AI – here’s what I’ve learned – The Guardian
AI and the Rosetta Stone – Yann-Edern Gillet
Labor Department launches AI literacy course – Axios
AI Is Rewriting the Old Rules of Google Search and SEO – Wall Street Journal
Human Strategy In An AI-Accelerated Workflow – Smashing Magazine
Why Prompt Engineering Hits a Wall and where we go next – KDnuggets
Five Ways People Are Using Claude Code – New York Times
Don't get used to cheap AI – Axios
How Teens Use and View AI – Pew Research
I didn’t know NotebookLM could do this — 10 features hiding in plain sight – Tom’s Guide
AI Agents Are Taking America by Storm – The Atlantic
No Time to Read a Long Google Doc? Try Gemini's Quick AI Audio Summaries – PC Mag
Google Docs now has AI-generated audio summaries – Neowin
Scientists build a “periodic table” for AI – Science Daily
I use the 'hype check' prompt to help with big decisions — here's how it works – Tom’s Guide
Scientists have been developing and refining a technology called the e-nose—which is exactly what it sounds like. These systems detect and distinguish aromas, sometimes with about 1,000 times as much precision as humans can. Researchers are exploring—or even commercializing—e-nose systems that can scan a person’s breath to detect deadly infections, sniff the air in a building to seek out signs of potential contaminants, or even develop perfumes more quickly and cheaply than before. - Wall Street Journal
An appeal to authority is a false claim that something must be true because an authority on the subject believes it to be true. It is possible for an expert to be wrong; we need to understand their reasoning or research before we appeal to their findings. In a design meeting you might hear something like this:
“Amazon is a successful website. Amazon has orange buttons. So orange buttons are the best.”
Feel free to switch out ‘Amazon’ and ‘orange buttons’ for anything you want; you get an equally weak argument.
When we counter any logical fallacy, we want to do it as cleanly as possible. In the above example, we only need to point out that many successful websites don’t have orange buttons and many unsuccessful sites do have orange buttons. Then we can move away from the matter entirely unless there is some research or reason available to explain the authority’s decision.
Rob Sutcliffe writing in Prototypr
Why You Should Stop Worrying About AI Taking Data Science Jobs – Toward Data Science
AI Learns to Smell – Wall Street Journal
Scientists are failing to disclose their use of AI despite journal mandates, finds study – Physics World
AI Predicts Experiment Outcomes Using Game Theory – Quantum Zeitgeist
Will AI Help or Hinder Scientific Publishing? – Undark
Account for AI in the environmental footprint of scientific publishing – Nature
Decoding the brain, inspiring AI: How Rahul Biswas is bridging neuroscience and artificial intelligence - TechTalks
The H-Index of Suspicion: How Culture, Incentives, and AI Challenge Scientific Integrity – NEJM
Artificial Intelligence and the Fraud Industry in Scientific Publishing (video) – Ministry of Science, Innovation and Universities, Spain
What the Scientific Literature is Filling Up With (thanks to AI) – Science.org
Today’s fraudsters can exploit the online scientific world to quickly create realistic looking papers on an industrial scale - Taylor and Francis
AI is turning research into a scientific monoculture – Nature
Scientists build a “periodic table” for AI – Science Daily
In March 2026, the Supreme Court declined to hear Thaler v. Perlmutter.
This leaves in place the D.C. Circuit’s March 2025 ruling.
There was no ruling on the merits. This doesn’t set precedent.
Stephen Thaler listed his AI system as the sole author, disclaimed any human creative contribution, and asked for copyright protection anyway. The DC court said no.
The D.C. Circuit held that copyright law requires a human author, a requirement that does not disqualify works made with AI assistance.
Questions not resolved:
1. How much human involvement is enough?
The US Copyright Office said in its “Zarya of the Dawn” comic book registration decision that the AI-generated images weren’t protectable, but the human-authored text and the selection and arrangement of text and images were. So far, the Office has said that prompts are not copyrightable—prompting is more like giving instructions to a commissioned artist than actually determining the expressive content of the final image. But what if dozens, even hundreds of prompts are entered? Wouldn’t that involve substantial human effort, iterative refinement, and a creative vision? The Copyright Office says getting different results from the same prompt is proof the user isn’t controlling the expression. The underlying question is this: Is prompting closer to authorship or closer to curation?
2. Can you prove what you contributed?
If your work incorporates more than a de minimis amount of AI-generated material, the Copyright Office requires a disclosure statement about the AI involvement and a description of your human contribution. This means the creator must keep files, prompts, drafts, notes on what was intended, and layered edits—in case there is a need to prove exactly what the human contribution was. A copyright applicant can avoid this simply by not disclosing the AI use. The system, in effect, rewards silence.
3. What Happens When Uncopyrightable AI Output Gets Licensed Anyway?
AI-generated materials are already being licensed, bundled, and sold. An example: Someone took a Python library and used an AI coding agent to rewrite it, then changed the project’s license to a more permissive one. The original creator objected, saying the original license still applied.
4. AI Output Can Absolutely Infringe. So Now What?
The SCOTUS denial also prompted a wave of commentary suggesting that AI-generated works now exist in some kind of copyright-free zone. They don’t. Issues still on the table: Whether AI-generated summaries of news articles are substitutive enough to infringe, and whether AI-generated narrative retellings of novels cross the line from ideas to expression. One judge dismissed claims that AI bullet-point summaries of investigative journalism were substantially similar to the originals. The same judge allowed a lawsuit to proceed because ChatGPT’s summary of a novel might have captured the “overall tone and feel” of the original work.
Bottom line: Millions of people are using AI tools every day without knowing whether what they’re making is protectable, infringing, both, or neither.
Thaler Is Dead. Now for the AI Copyright Questions That Actually Matter
One loves that for which one labors, and one labors for that which one loves. – Erich Fromm
You won't see the code yourself anymore, the robots will write it for you. Half the time, the code they write will be garbage, or nonsense. Slop. But it's so cheap to write that the computer can just throw it away and write some more, over and over, until it finally happens to work. Is it elegant? Who cares? It's cheap. Ten thousand times cheaper than paying you to write it, so we can afford to waste a lot of code along the way. If you were one of those crafters—the people who wrote idiomatic code that made that programming language sing—there's a real grief here. It's not as serious as when we know a human language is dying out, but it's not entirely dissimilar, either. -Anil Dash
Research integrity is locked into an arms race with agentic AI slop – LSE
AI can help with research, but humans must remain accountable say university executives – Times Higher Ed
Hallucinated citations produced by generative artificial intelligence may constitute research misconduct when citations function as data in scholarly papers – Taylor & Francis
AI tool flags plagiarism in 95% of Ph.D. theses submitted this year at Indian university – Times of India
How AI use in scholarly publishing threatens research integrity, lessens trust, and invites misinformation – Bulletin of the Atomic Scientists
Hallucinated References: Five Excuses for Academic Misconduct – Dorothea Baur
Ministers urged not to allow data mining of academic literature – Research Professional News
Librarian finds ‘preposterous number’ of fake references in paper from Springer Nature journal – Retraction Watch
AI is inventing academic articles – and scholars are citing them – the Observer
DataSeer develops AI system to track dataset reuse – Research Information
Journal Submissions Riddled With AI-Created Fake Citations – Inside Higher Ed
Account for AI in the environmental footprint of scientific publishing – Nature
Will AI Help or Hinder Scientific Publishing? – Undark
Hey ChatGPT, write me a fictional paper: these LLMs are willing to commit academic fraud. – Nature
Scientists are failing to disclose their use of AI despite journal mandates, finds study – Physics World
AI in the editorial workflow: Journals set the rules, institutions set the habits – Scholarly Futures
AI is turning research into a scientific monoculture – Nature
What happens when reviewers receive AI feedback in their reviews? – ArXiv
Fear of stigma blamed as 0.1 per cent of papers declare AI use - Times Higher Ed
Transhumanism - A philosophical movement that advocates attempting to unlock human potential through artificial intelligence and science, with the goal of overcoming biological limitations and combating aging and illness to achieve immortality. This might be achieved through humans merging with machines or uploading human consciousness into digital realms. In effect, transhumanism seeks to redefine what it means to be human. In 1957, Julian Huxley summarized the term as “man remaining man, but transcending himself, by realizing new possibilities of and for his human nature.” Critics warn that this effort could erode the very qualities that define humanity, such as empathy, vulnerability and shared experience, while exacerbating social inequalities.
Our sleep habits both reveal and shape our loves. A decent indicator of what we love is that for which we willingly give up sleep.
My willingness to sacrifice sleep reveals less noble loves. I stay up later than I should, drowsy, collapsed on the couch, vaguely surfing the internet, watching cute puppy videos. Or I stay up trying to squeeze more activity into the day to pack it with as much productivity as possible. My disordered sleep reveals a disordered love, idols of entertainment or productivity.
My willingness to sacrifice much-needed rest and my prioritizing amusement or work over the basic needs of my body and the people around me reveal that these good things—entertainment and work—have taken a place of ascendancy in my life.
Tish Warren, Liturgy of the Ordinary
What: Participants will edit existing Wikipedia entries and create new articles using a curated worklist of women who helped change laws, contributed new research, created new networks, and ultimately, bolstered economic independence for women. New editors are welcome and will receive an introduction to Wikipedia editing.
Who: Smithsonian curator Rachel Seidman; Ariel Cetrone of Wikimedia DC.
When: 11 am, Eastern
Where: Zoom
Cost: Free
Sponsor: Smithsonian
What: You’ll learn how to build a clear, sales-focused social media marketing strategy that actually converts. This is not a theory session. By the end, you will have created a practical, written plan you can immediately use in your business.
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Small Business Development Center, Kutztown University
What: Join us for a collaborative virtual workshop where we'll explore the key elements of effective branding: what you want to be known for, how you want customers to feel when they interact with your business, and how to create consistency across all touchpoints. We'll connect these pieces back to your business goals, so your brand becomes a tool for growth, not just decoration.
Who: Jordan Hanna Gray, SBDC Advisor.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Virginia Small Business Development Center
What: Learn from experts about how to safely practice journalism and prepare for and respond to evolving safety challenges.
Who: Jeff Belzil is the International Women Media Foundation’s security director.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Collaborative Journalism Resource Hub, which is housed at the Center for Cooperative Media
What: We’ll discuss the growing presence of AI in news copy and why so many publications are turning to machines to do the work that was once done by people. We’ll look at what this has done for the quality of story production. And we’ll discuss how journalists can stand out in a sea of AI slop, why human journalists are more important than ever, and how to educate your audience and leadership about journalists’ value over AI.
Who: Jonathan Maze, editor-in-chief of Restaurant Business at Informa Connect, and Greg Friese, MS, NRP, digital content strategy leader.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: American Society of Business Publication Editors
What: Walk through the basics of AI-powered automation using Make, with practical examples from my real ministry work. You’ll see how to use AI to handle tasks that take up far too much time. By the end of the session, you will have a clear, practical understanding of how automation works and the confidence to start building simple automations for your own ministry context.
Who: Rob Laughter, who helps lead the creative team at The Summit Church in Raleigh, NC.
When: 2 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: AI for Church Leaders
What: We break down the IP framework (trademarks, patents, trade secrets and copyrights) that every founder needs to know.
Who: Sima S. Kulkarni, Duane Morris.
When: 10 am, Eastern
Where: Zoom
Cost: Free
Sponsor: Small Business Development Center at Temple University
What: How Der Spiegel in Germany is reaching younger audiences. We'll have an honest conversation about what worked, what didn't, and what those experiments reveal about serving young audiences.
Who: Aleksandra Janevska, Deputy Lead of Crossmedia Unit, Der Spiegel.
When: 10 am, Eastern
Where: Zoom
Cost: Free
Sponsor: International News Media Association
What: We will explore: Why community connection is a structural advantage driving trust, engagement and long-term viability; How uniquely local utility outperforms commoditized news, particularly in underserved communities; Why reader revenue is a signal as much as a funding source; What sustainable U.S. outlets consistently get right, regardless of model or market
Who: George Adelman, Director and Head of Partnerships, FT Strategies; Angilee Shah, CEO and Editor in Chief, Charlottesville Tomorrow; Cheryl Phillips, Founder, Big Local News at Stanford.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: FT Strategies
What: This workshop will introduce Copyright: the Card Game, a fun and interactive method of covering the basics of copyright and how they apply to faculty, students and the classroom. Participants will learn how the game was developed and have the opportunity to play.
Who: Paul Bond of SUNY Broome Community College, one of the developers of the game and a librarian in the Southern Tier of New York.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Media Education Lab
What: This session will cut through the noise and provide a practical, responsible roadmap for using AI to expand impact while protecting data, reputation, and community relationships.
Who: Robert Friend, Fundraising Specialist at Eventgroove.
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Nonprofit Tech for Good
What: We examine how Supermicro's accelerated computing and all‑flash storage servers, combined with WEKA’s Augmented Memory Grid software, transform inference memory into a scalable, distributed resource.
Who: Allen Liu, Project Manager, Supermicro; Val Bercovici, Chief AI Officer, WEKA; Awanish Verma, Director, Product Management, AMD; Wendell Wenjen, Sr., Director of Marketing, Storage Solutions, Supermicro.
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: TechTarget
What: This session explores the fundamentals of effective crisis communications for public safety and government agencies. Participants will learn how to prepare for high-stakes situations, manage messaging during rapidly evolving incidents, and communicate with transparency and professionalism when public attention is at its highest.
When: 1 pm, Eastern
Where: Zoom
Cost: $49
Sponsor: TOC Public Relations
What: In this session, you’ll learn how to: Streamline communication and content creation; Organize information and reduce repetitive tasks; Support fundraising and outreach with beginner-friendly tools.
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: TechSoup
What: We’ll explore an approach to advertising literacy education that takes an ethics- and systems-approach to analyzing digital ads.
Who: Michelle Ciccone, a PhD Candidate in the Department of Communication at the University of Massachusetts Amherst, and a former K-12 technology integration specialist; Cecilia Yuxi Zhou is an assistant professor in the Academy for Educational Development and Innovation at the Education University of Hong Kong.
When: 7 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Media Education Lab
What: An updated version of a guide published by Global Investigative Journalism Network in 2025. We will introduce new resources, tools, and investigative methods that journalists can use to identify AI-generated images.
Who: Henk van Ess, a leading expert in open source intelligence and digital verification.
When: 10 am, Eastern
Where: Zoom
Cost: Free
Sponsor: Global Investigative Journalism Network
What: This webinar brings together leading voices to examine how trust can be rebuilt across scientific communication and the publication ecosystem. Our expert panelists will explore three critical challenges: storytelling and public engagement; AI in peer review; malfeasance and integrity.
Who: Michele Springer, Deputy Director of Medical Editing at Omnicom Health Medical Communications; Holden Thorp, Editor-in-Chief of Science; Ivan Oransky, MD, Co-founder of Retraction Watch and Executive Director, The Center For Scientific Integrity; Megan Ranney, Dean, Yale School of Public Health; Steve Smith, DPhil, Independent Consultant, STEM Knowledge Partners.
When: 10 am, Eastern
Where: Zoom
Cost: Free
Sponsor: International Society for Medical Publication Professionals
What: Topics include: FOIAs, The First Amendment, Algorithms, Pitches, Reporting, Investigation, Ethics, Solutions Journalism, Rural communities, Headlines, Newsroom rights, AP Style, Immigration coverage, Conflicts of Interest, Backgrounding, Copyright, Misinformation, Resilient News teams, Covering Suicide, Design, Criminal justice, Grant Writing, Using AI.
Who: Professional journalists and experts.
When: Thursday, 1 pm, Eastern through Friday, 8:30 pm, Eastern.
Where: Zoom
Cost: Free
Sponsor: Society of Professional Journalists
What: Audience Q&A
Who: Sarah Brown, The Chronicle’s news editor; Rick Seltzer, author of the Daily Briefing newsletter.
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Chronicle of Higher Education
What: The application process, and a brief primer on how to cover issues of scientific integrity at your nearby institutions.
Who: Retraction Watch co-founder Ivan Oransky; Stephanie M. Lee, senior writer at The Chronicle of Higher Education.
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsors: Retraction Watch & The Open Notebook
What: A practical session for IT leaders, chief data officers, and anyone responsible for safeguarding public‑sector data. We’ll break down what modern cloud backup and recovery look like and how security‑focused AI is helping agencies stay ahead of threats and recover faster.
Who: Vishal Chaudhry, Chief Data Officer, Washington State Health Care Authority; Jennifer Franks, Director, Center for Enhanced Cybersecurity, Government Accountability Office; Jeff Reichard, Vice President, Solution Strategy, Veeam.
When: 2 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: GovLoop
What: An inside look at how the field works, where it’s growing and the opportunities ahead.
When: 3 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: American Journalism Project
What: The start of an AI series where we take entrepreneurs through step by step on how to create an AI Native Business. In this session, we will run through the program information, talk about what makes an AI native business, how to construct and integrate AI into each area of your business.
When: 6 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Small Business Development Center, Widener University
What: This webinar aims to teach news leaders worldwide how to reinvent themselves to best serve the public. The panelists offer their unique perspectives on how the news industry must evolve to thrive in the age of AI.
Who: Experts from the University of Maryland’s Philip Merrill College of Journalism and Robert H. Smith School of Business team up with industry leaders.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Robert H. Smith School of Business at the University of Maryland
What: Our experts will unpack copyright issues affecting conservation, preservation and digitization. Specifically, the panel will review the status of the law and the status of best practices in libraries, archives and museums.
Who: Jillian Borders, Head of Preservation at UCLA Film and Television Archive; Eric Harbeson, Scholarly Communications and Copyright Strategist for Authors Alliance.
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Open Copyright Education Advisory Network (OCEAN)
No sooner do we believe that God loves us than there is an impulse to believe that he does so, not because he is love, but because we are intrinsically lovable. But then, how magnificently we have repented, so we next offer our own humility to God’s admiration. Surely, he’ll like that. If not that, then our clear-sighted and humble recognition that we still lack humility. Thus, depth beneath depth and subtlety within subtlety, there remains some lingering idea of our own, our very own, attractiveness.
It is easy to acknowledge, but almost impossible to realize for long, that we are mirrors whose brightness, if we are bright, is wholly derived from the sun that shines upon us. Surely we must have a little – however little – native luminosity?
We want to be loved for our cleverness, beauty, generosity, fairness, usefulness. The first hint that anyone is offering us the highest love of all is a terrible shock.
CS Lewis, The Four Loves
Becoming is a service of Goforth Solutions, LLC / Copyright ©2026 All Rights Reserved