Love is Realizing
Love is the extremely difficult realization that someone other than oneself is real. –Iris Murdoch
Generative AI's refusal to produce ‘controversial’ content can create echo chambers – Fast Company
How I Built an AI-Powered, Self-Running Propaganda Machine for $105 – Wall Street Journal
AI can pretend to be stupider than it really is, Scientists find – Futurism
Lab reveals how AI safety features can be easily bypassed - The Guardian
New York's AI chatbot tells people to break laws and do crimes - Quartz
Why can’t anyone agree on how dangerous AI will be? – Vox
US says leading AI companies join safety consortium to address risks – Reuters
Despite the AI safety hype, a new study finds little research on the topic – Semafor
Jon Stewart On The False Promises of AI (video) – The Daily Show
Ukraine's attacks on Russian oil refineries show the growing threat AI drones pose to energy markets – NBC Connecticut
AI deepfakes threaten to upend global elections. No one can stop them. – Washington Post
How Will Artificial Intelligence (AI) Affect Children? – Healthy Children
A National Security Insider Does the Math on the Dangers of AI – Wired
Could AI-generated content be dangerous for our health? – The Guardian
To understand the risks posed by AI, follow the money – The Conversation
Banks told to anticipate risks from using AI, machine learning – Reuters
The second most common misconception about love is the idea that dependency is love. Its effect is seen most dramatically in an individual who makes an attempt or gesture or threat to commit suicide or who becomes incapacitatingly depressed in response to a rejection or separation from a spouse or lover.
Such a person says, “I do not want to live, I cannot live without my husband (wife, girlfriend, boyfriend), I love him (or her) so much.” And when I respond, as I frequently do, “You are mistaken; you do not love your husband (wife, girlfriend, boyfriend).” “What do you mean?” is the angry question. “I just told you I can’t live without him (or her).” I try to explain. “What you describe is parasitism, not love. When you require another individual for your survival, you are a parasite on that individual. There is no choice, no freedom involved in your relationship. It is a matter of necessity rather than love. Love is the free exercise of choice. Two people love each other only when they are quite capable of living without each other but choose to live with each other.”
M. Scott Peck, The Road Less Traveled
How Meta, YouTube, TikTok & Others Label AI – Axios
AI Is Flooding Social Media. Here's How to Make Sure You Don't Get Lost in the Robotic Noise. – Entrepreneur
Does Generative AI Content Have a Place in Social Media? – SocialMediaToday
Meta's AI-everywhere push raises hackles - Axios
Facebook Says Sorry Its AI Flagged Auschwitz Museum Posts as Offensive – Futurism
More Generative AI Tools Are Coming to Social Apps — Is That a Good Thing? - SocialMediaToday
Meta debuts new AI assistant and chatbots - Axios
LinkedIn taps AI to make it easier for firms to find job candidates – Reuters
Slack’s New CEO Brings Generative AI to the Workplace Conversation - Wall Street Journal
LinkedIn expands its generative AI assistant to recruitment ads and writing profiles – Tech Crunch
Instagram Experiments With Range of Generative AI Elements - SocialMediaToday
LinkedIn Says ChatGPT-Related Job Postings Have Ballooned 21-Fold Since November – Forbes
How to Use AI Tools to Easily Make Short-Form TikTok and Reels Videos – Tech.no
TECH What happens when we train our AI on social media? – Fast Company
AI-generated images have become the latest form of social media spam – The Conversation
Meta’s AI chatbot is coming to social media. Misinformation may come with it. – Washington Post
Franck Schuurmans, a guest lecturer at the Wharton Business School at the University of Pennsylvania, has captivated audiences with explanations of why people make irrational business decisions. A simple exercise he uses in his lectures is to provide a list of 10 questions such as, “In what year was Mozart born?” The task is to select a range of possible answers so that you have 90 percent confidence that the correct answer falls in your chosen range. Mozart was born in 1756, so, for example, you could narrowly select 1730 to 1770, or you could more broadly select 1600 to 1900. The range is your choice. Surprisingly, the vast majority answer correctly on no more than five of the 10 questions. Why score so poorly? Most choose bounds that are too narrow. The lesson is that people have an innate desire to appear precise and confident even when there is no penalty for being wrong.
Gary Cokins
AI chatbots have thoroughly infiltrated scientific publishing. One percent of scientific articles published in 2023 showed signs of generative AI’s potential involvement, according to a recent analysis - Scientific American
The journey from research data generation to manuscript publication presents many opportunities where AI could, hypothetically, be used – for better or for worse. - Technology Network
Is ChatGPT corrupting peer review? There are telltale words that hint at AI use. A study of review reports identifies dozens of adjectives that could indicate text written with the help of chatbots. - Nature
Should researchers use AI to write papers? This group aims to release a set of guidelines by August, which will be updated every year - Science.org
Generative AI firms should stop ripping off publishers and instead work with them to enrich scholarship, says Oxford University Press’ David Clark. - Times Higher Ed
Here are three ways ChatGPT helps me in my academic writing. Generative AI can be a valuable aid in writing, editing and peer review – if you use it responsibly - Nature
New detection tools powered by AI have lifted the lid on what some are calling an epidemic of fraud in medical research and publishing. Last year, the number of papers retracted by research journals topped 10,000 for the first time. - DW News (video)
Estimating the prevalence of ChatGPT “contamination” in the scholarly literature: It is estimated that at least 60,000 papers (slightly over 1% of all articles) were LLM-assisted - arXiv
Has AI-generated text from LLMs infiltrated the realm of scientific writing? We confirmed and quantified the widespread influence of AI-generated text in scientific publications across many scientific domains - BioRxiv
Georgetown found that American scholarly institutions and companies are the biggest contributors to AI safety research, but it pales in comparison to the amount of overall studies into AI, raising questions about public and private sector priorities. - Semafor
Google Books is indexing low quality, AI-generated books that will turn up in search results, and could possibly impact Google Ngram viewer, an important tool used by researchers to track language use throughout history. - 404Media
The Association of Research Libraries announced a set of seven guiding principles for university librarians to follow in light of rising generative AI use. - Inside Higher Ed
The archetypal extrovert prefers action to contemplation, risk-taking to heed-taking, certainty to doubt. He favors quick decisions, even at the risk of being wrong. She works well in teams and socializes in groups. We like to think that we value individuality, but all too often we admire one type of individual—the kind who’s comfortable “putting himself out there.” Sure, we allow technologically gifted loners who launch companies in garages to have any personality they please, but they are the exceptions, not the rule, and our tolerance extends mainly to those who get fabulously wealthy or hold the promise of doing so. Extroversion is an enormously appealing personality style, but we’ve turned it into an oppressive standard to which most of us feel we must conform.
Susan Cain, Quiet: The Power of Introverts in a World that Can't Stop Talking
Student journalists are covering their own campuses in convulsion. Here’s what they have to say - Associated Press
Campus Protests Over Gaza Spotlight the Work of Student Journalists - New York Times
As protests surge across college campuses, student journalists report from the front lines - EdSurge
“Everything Felt Really Dystopian”: Columbia Student Journalists on the Front Lines of Gaza Protests - Vanity Fair
High praise for the student journalists at Columbia University - Poynter
Student journalists discuss covering the campus protests - PBS
Pulitzer Prize Board recognizes ‘tireless efforts’ of student journalists covering college protests - The Hill
Student journalists praised for coverage on campus Gaza war protests - Axios
You’ve got briers below you and limbs above you. There's a log to step across. Then a hole to avoid. They all slow you down. Will getting past those obstacles really be worth the effort? The path of adventure and self-definition can be punctuated with periods of intense loneliness and nagging doubt. There’s no guarantee about how it all ends.
Stephen Goforth
There are limited guardrails to deter politicians and their allies from using AI to dupe voters, and enforcers are rarely a match for fakes that can spread quickly across social media or in group chats. The democratization of AI means it’s up to individuals — not regulators — to make ethical choices to stave off AI-induced election chaos. – Washington Post
Adobe surveyed more than 2,000 people in the U.S., and 63% said they would be less likely to vote for someone who uses GenAI in their promotional content during an election. – Fast Company
Even a false-positive rate in the single digits will, at the scale of a modern social network, make tens of thousands of false accusations each day, eroding faith in the detector itself. - IEEE Spectrum
It took me two days, $105 and no expertise whatsoever to launch a fully automated, AI-generated local news site capable of publishing thousands of articles a day—with the partisan news coverage framing of my choice, nearly all rewritten without credit from legitimate news sources. I created a website specifically designed to support one political candidate against another in a real race for the U.S. Senate. And I made it all happen in a matter of hours.- Wall Street Journal
"Tools to detect AI-written content are notoriously unreliable and have resulted in what students say are false accusations of cheating and failing grades. OpenAI unveiled an AI-detection tool in January but quietly scrapped it due to its “low rate of accuracy.” One of the most prominent tools to detect AI-written text, created by plagiarism detection company Turnitin.com, frequently flagged human writing as AI-generated, according to a Washington Post examination." – Washington Post
It’s important to remember that generative models shouldn’t be treated as a source of truth or factual knowledge. They surely can answer some questions correctly, but this is not what they are designed and trained for. It would be like using a racehorse to haul cargo: it’s possible, but not its intended purpose … Generative AI models are designed and trained to hallucinate, so hallucinations are a common product of any generative model … The job of a generative model is to generate data that is realistic or distributionally equivalent to the training data, yet different from actual data used for training. - InsideBigData
“No single tool is considered fully reliable yet for the general public to detect deepfake audio. A combined approach using multiple detection methods is what I will advise at this stage." – PolitiFact
Too many educators think AI detectors are ‘a silver bullet and can help them do the difficult work of identifying possible academic misconduct.’ My favorite example of just how imperfect they can be: A detector called GPTZero claimed the US Constitution was written by AI. – Washington Post
Most deepfake audio detection providers “claim their tools are over 90% accurate at differentiating between real audio and AI-generated audio.” An NPR test of 84 clips revealed that the detection software often failed to identify AI-generated clips, or misidentified real voices as AI-generated, or both. - NPR
In a year when billions of people worldwide are set to vote in elections, AI researcher Oren Etzioni continues to paint a bleak picture of what lies ahead. “I’m terrified. There is a very good chance we are going to see a tsunami of misinformation.” – New York Times
Google appears to have quietly struck a deal with one of the most controversial companies using AI to produce content online: AdVon Commerce, the contractor linked to Sports Illustrated's explosive AI scandal. Google is trying to have it both ways: modifying its algorithms to suppress AI sludge while actively supporting attempts to create vastly more of it. – Futurism
Most online detection tools do not provide sufficient information about their development, making it difficult to evaluate and trust the detector results and their significance. - Global Investigative Journalism Network
Run some of your other writing dated before the arrival of ChatGPT in the fall of 2022 through an AI detector, to see whether any of it gets flagged. If it does, the problem is clearly the detector, not the writing. (It’s a little aggressive, but one student told me he did the same with his instructor’s own writing to make the point.) – Washington Post
Men are more confident (72%) in their ability to tell real news from fake news than women (59%), according to new polling from the Ipsos Consumer Tracker. We see a similar gender gap when it comes to our perceived ability to spot content that was created by AI. - Ipsos
A former high school athletic director was arrested after allegedly using AI to impersonate the school principal in a recording that included racist and antisemitic comments. The principal was temporarily removed from the school, and waves of hate-filled messages circulated on social media, while the school received numerous phone calls. – CBS News
Dubbed “model disgorgement,” AWS researchers have been experimenting with different computational methods to try and remove data that might lead to bias, toxicity, data privacy, or copyright infringement. – Semafor
There are two premises that lead Moran Cerf, a neuroscientist at Northwestern University, to believe the company we keep is the most important factor for long-term satisfaction.
The first is that decision-making is tiring. A great deal of research has found that humans have a limited amount of mental energy to devote to making choices. Picking our clothes, where to eat, what to eat when we get there, what music to listen to, whether it should actually be a podcast, and what to do in our free time all demand our brains to exert that energy on a daily basis.
The second premise is that humans falsely believe they are in full control of their happiness by making those choices. So long as we make the right choices, the thinking goes, we'll put ourselves on a path toward life satisfaction.
Cerf rejects that idea. The truth is, decision-making is fraught with biases that cloud our judgment. People misremember bad experiences as good, and vice versa; they let their emotions turn a rational choice into an irrational one; and they use social cues, even subconsciously, to make choices they'd otherwise avoid.
But as Cerf tells his students, that last factor can be harnessed for good.
His neuroscience research has found that when two people are in each other's company, their brain waves will begin to look nearly identical.
"This means the people you hang out with actually have an impact on your engagement with reality beyond what you can explain. And one of the effects is you become alike."
From those two premises, Cerf's conclusion is that if people want to maximize happiness and minimize stress, they should build a life that requires fewer decisions by surrounding themselves with people who embody the traits they prefer. Over time, they'll naturally pick up those desirable attitudes and behaviors. At the same time, they can avoid the mentally taxing low-level decisions that sap the energy needed for higher-stakes decisions.
Chris Weller writing in Business Insider
Imagine that you are preparing to go on a vacation to one of two islands: Moderacia (which has average weather, average beaches, average hotels, and average nightlife) or Extremia (which has beautiful weather and fantastic beaches but crummy hotels and no nightlife). The time has come to make your reservations, so which one would you choose? Most people pick Extremia.
But now imagine that you are already holding tentative reservations for both destinations and the time has come to cancel one of them before they charge your credit card. Which would you cancel? Most people choose to cancel their reservation on Extremia.
Why would people both select and reject Extremia? Because when we are selecting, we consider the positive attributes of our alternatives, and when we are rejecting, we consider the negative attributes.
Extremia has the most positive attributes and the most negative attributes, hence people tend to select it when they are looking for something to select and they reject it when they are looking for something to reject.
Of course, the logical way to select a vacation is to consider both the presence and the absence of positive and negative attributes, but that's not what most of us do.
Daniel Gilbert, Stumbling on Happiness
Meet the AI Expert Advising the White House, JPMorgan, Google and the Rest of Corporate America - Wall Street Journal
Meta Says It Plans to Spend Billions More on A.I. - New York Times
DeepMind CEO Says Google Will Spend More Than $100 Billion on AI – Bloomberg
Generative AI Is Changing the Hiring Calculus at These Companies – Wall Street Journal
Microsoft Makes a New Push Into Smaller A.I. Systems - New York Times
OpenAI prepares to fight for its life as legal troubles mount – Washington Post
Four Takeaways on the Race to Amass Data for A.I. – New York Times
AI is powering Google to a $2 trillion market cap – Quartz
Mistral, a French start-up considered a promising challenger to OpenAI and Google – New York Times
Humane releases its widely anticipated Ai Pin, a wearable badge that doubles as an AI-powered smart device – Tech Crunch
Tech Leaders Once Cried for AI Regulation. Now the Message Is ‘Slow Down’ - Wired
How Tech Giants Cut Corners to Harvest Data for A.I. – New York Times
Love seeks not only to fight for the good, but constantly to be reconciled with the ones we have had to oppose as we struggle for the good. -C. Stephen Evans
People are inclined to make decisions based on how readily available information is to them. If you can easily recall something, you are likely to rely more on this information than on other facts or observations. This means judgments tend to be heavily weighted toward the most recent piece of information received or the simplest thing to recall.
In practice, research has shown that shoppers who can recall a few low-price products—perhaps because of prominent ads or promotions—tend to think that a store offers low prices across the board, regardless of other evidence. And in a particularly devious experiment, a psychology professor (naturally) got his students to evaluate his teaching, with one group asked to list two things he could improve and another asked to list 10. Since it’s harder to think of 10 bad things than just two, the students asked to make a longer list gave the professor better ratings—seemingly concluding that if they couldn’t come up with enough critical things to fill out the form, then the course must be good.
Eshe Nelson writing in Quartz
Every great cause begins as a moment, becomes a business, and eventually degenerates into a racket. -Charles Sykes
Every decision I make is also a decision about what kind of person I want to be. -C. Stephen Evans
Large numbers of American soldiers had idyllic marriages to German, Italian or Japanese “war brides” (after World War II) with whom they could not verbally communicate. But when their brides learned English, the marriages began to fall apart. The servicemen could then no longer project upon their wives their own thoughts, feelings, desires and goals and feel the same sense of closeness one feels with a pet. Instead, as their wives learned English, the men began to realize that these women had ideas, opinions and aims different from their own. As this happened, love began to grow for some; for most, perhaps, it ceased.
The liberated woman is right to beware of the man who affectionately calls her his “pet.” He may indeed be an individual whose affection is dependent upon her being a pet, who lacks the capacity to respect her strength, independence and individuality.
Probably the most saddening example of this phenomenon is the very large number of women who are capable of “loving” their children only as infants.
As soon as a child begins to assert its own will – to disobey, to whine, to refuse to play, to occasionally reject being cuddled, to attach itself to other people, to move out into the world a little bit on its own – the mother’s love ceases… At the same time, she will often feel an almost overpowering need to be pregnant again, to have another infant, another pet. Usually she will succeed, and the cycle is repeated.
The point is that nurturing can be and usually should be much more than simple feeding, and that nurturing spiritual growth is an infinitely more complicated process than can be directed by any instinct.
M. Scott Peck, The Road Less Traveled
Henry Cavill James Bond Trailer Gets 2.3M Views Despite Being an AI Fake – Hollywood Reporter
23 of the best deepfake examples that terrified and amused the internet – CreativeBloq
How to spot AI-generated deepfake images – Associated Press
AI-generated audio deepfakes are increasing. We tested four tools designed to detect them. - PolitiFact
Spotting LLMs With Binoculars: Zero-Shot Detection of Machine-Generated Text - arXiv
Wait, Can Turnitin Actually Detect If You Use ChatGPT For A Paper? – Her Campus
How to Spot AI-Generated Images – Every Pixel
A machine-learning tool can easily spot when chemistry papers are written using the chatbot ChatGPT – Nature
AI bots are everywhere now. These telltale words give them away. - Washington Post
Disinformation poses an unprecedented threat in 2024 — and the U.S. is less ready than ever – NBC News
AI washing explained: Everything you need to know – Tech Target
How to Spot AI Fakes (For Now) – McGill University
The telltale signs of AI-generated images, video and audio, according to experts – News Nation
Spot the deepfake: The AI tools undermining our own eyes and ears – Politico
Hijacked Facebook Pages are pushing fake AI services to steal your data – ZDNet
Teen Girls Confront an Epidemic of Deepfake Nudes in Schools – New York Times
Many case studies read to me like school homework: they knew what the answer and the process were “supposed to be” according to the textbook, so made up the story to fit. In reality, it’s never smooth and linear. It’s messy and loopish. If you’re doing a good job, you rarely end up with anything remotely like you anticipated when you started out.
-Matej Latin
Becoming is a service of Goforth Solutions, LLC / Copyright ©2026 All Rights Reserved