Bias in AI

ChatGPT Replicates Gender Bias in Recommendation Letters – Scientific American

Gender Bias 'alive and well' across gen-AI platforms – Computing.co 

AI 'red teams' race to find bias and harms in chatbots like ChatGPT - The Washington Post

AI should be assumed prejudiced until proven otherwise – The Atlantic  

Because of their opacity, it's very hard to tell whether they're judging humans in a discriminatory manner - Axios

AI is biased. The White House is working with hackers to try to fix that – NPR

Racially biased AI can lead to false arrests, warns expert – Interesting Engineering

AI was asked to create images of Black African docs treating white kids. How'd it go? – NPR

Health providers say AI chatbots could improve care. But research says some are perpetuating racism – Washington Post

‘Wholly ineffective and pretty obviously racist’: Inside New Orleans’ struggle with facial-recognition policing - Politico

How to mitigate bias from AI tools in the hiring process – Fast Company

In an analysis of thousands of images created by Stable Diffusion, we found that image sets generated for every high-paying job were dominated by subjects with lighter skin tones, while subjects with darker skin tones were more commonly generated by prompts like “fast-food worker” and “social worker.” Most occupations in the dataset were dominated by men, except for low-paying jobs like housekeeper and cashier. Bloomberg 

Eight years ago, Google disabled its A.I. program’s ability to let people search for gorillas and monkeys through its Photos app because the algorithm was incorrectly sorting Black people into those categories. As recently as May of this year, the issue still had not been fixed. Two former employees who worked on the technology told The New York Times that Google had not trained the A.I. system with enough images of Black people. New York Times

MIT student Rona Wang asked an AI image creator app called Playground AI to make a photo of her look "professional." It gave her paler skin and blue eyes, and "made me look Caucasian." Boston Globe 

We have things like recidivism algorithms that are racially biased. Even soap dispensers that don’t read darker skin. Smartwatches and other health sensors don’t work as well for darker skin. Things like selfie sticks that are supposed to track your image don’t work that well for people with darker skin because image recognition in general is biased. The Markup

AI text may be biased toward established scientific ideas and hypotheses contained in the content on which the algorithms were trained. Science.org 

No doubt AI-powered writing tools have shortcomings. But their presence offers educators an on-ramp to discussions about linguistic diversity and bias. Such discussions may be especially critical on U.S. campuses. Inside Higher Ed

Major companies behind A.I. image generators — including OpenAI, Stability AI and Midjourney — have pledged to improve their tools. “Bias is an important, industrywide problem,” Alex Beck, a spokeswoman for OpenAI, said in an email interview. She declined to say how many employees were working on racial bias, or how much money the company had allocated toward the problem. New York Times

As AI models become more advanced, the images they create are increasingly difficult to distinguish from actual photos, making it hard to know what’s real. If these images depicting amplified stereotypes of race and gender find their way back into future models as training data, next generation text-to-image AI models could become even more biased, creating a snowball effect of compounding bias with potentially wide implications for society. Bloomberg

AI generated images are biased, showing the world through stereotypes - Washington Post

Team develops a new deepfake detector designed to be less biased - Techxplore  

The Bigger Questions & AI

The Big Questions About AI in 2024 – The Atlantic 

AI and Trust - Bruce Schneier Blog 

‘Where does the bot end and human begin?’: what the legendary @Horse_ebooks can teach us about AI – The Guardian

AI’s Present Matters More Than Its Imagined Future – The Atlantic

Are we entering a new age of AI-powered narcissism? – Dazed Digital

The one job AI should actually replace: CEOs – Business Insider

A strong placebo effect works to shape what people think of a particular AI tool – Axios

Why ChatGPT isn’t conscious – but future AI systems might be – The Conversation 

AI girlfriends are ruining an entire generation of men – The Hill

People are behind everything that ChatGPT or AI "does" - Axios

AI is closer than ever to passing the Turing test for ‘intelligence’. What happens when it does? – The Conversation

Getting Beyond the AI Existential Crisis  - Medium

No, AI Machines Can’t Think – Wall Street Journal

If AI becomes conscious: here’s how researchers will know - Nature  

AI & Internet’s Existential Crisis – OM

Large language models aren’t people. Let’s stop testing them as if they were. - MIT Tech Review

Author Talks: In the ‘age of AI,’ what does it mean to be smart? - McKinsey

Why humans will never understand AI - BBC

Does an AI poet actually have a soul? - Washington Post 

Is AI Eroding Our Ability To Think? – Forbes

The future of accelerating intelligence - The Kurzweil Library

M.F.A. vs. GPT: How to push the art of writing out of a computer’s reach - The Atlantic

What Stephen King — and nearly everyone else — gets wrong about AI and the Luddites - LA Times  

How the AI Revolution Will Reshape the World – TIME   

The ‘Manhattan Project’ Theory of Generative AI - Wired

56 percent of respondents think ‘people will develop emotional relationships with AI’ and 35 percent of people said they’d be open to doing so if they were lonely - The Verge 

What Kind of Mind Does ChatGPT Have? – The New Yorker

Business & AI

11 Articles about AI & Work Productivity  

How to Use A.I. to Automate the Dreaded Office Meeting – New York Times

First study to look at AI in the workplace finds it boosts productivity – Axios

How ChatGPT in Microsoft Office could change the workplace – Venture Beat  

Machines of mind: The case for an AI-powered productivity boom – Brookings

Companies want to use AI tracking to make you better at your job – Washington Post

Where AI's productivity revolution will strike first – Axios

To Work Fewer Hours, They Put AI on the Job: New tools like ChatGPT, Midjourney and Tome help professionals save time and boost their income – Wall Street Journal

Generative AI can change real estate, but the industry must change to reap the benefits - McKinsey

Is GenAI’s Impact on Productivity Overblown? - Harvard Business Review

Generative AI adoption at work hasn’t yet led to productivity gains, report says – HR Drive

The Impact of Technology on the Workplace: 2024 Report - Tech.co

How Companies Are Using AI

25 percent of CEOs plan to replace human workers with AI this year – Futurism

Duolingo cuts workers as it relies more on AI – Washington Post

Tropicana is one company that’s ditching AI - CNN

The Best-Managed Companies Have the Most AI Jobs Postings. What Explains That? – WSJ 

Companies using AI want human workers to ‘disappear’ – Semafor

How Walmart Is Leveraging Automation and AI to Deliver Faster – Wall Street Journal

A Consortium of Big Companies has Developed a Way to Identify A.I. – New York Times

Multinationals turn to generative AI to manage supply chains - Financial Times  

ChatGPT Helps, and Worries, Business Consultants, Study Finds – New York Times

How artificial intelligence is revamping customer call centers – CBS News 

AI Has a Trust Problem. Can Blockchain Help? – Wall Street Journal

AI ads are sweeping across Africa – Semafor

An Anticipated Wave of AI Specialist Jobs Has Yet to Arrive – Wall Street Journal

Amazon’s AI-written product reviews aren’t as bad as you think - Washington Post

The Creepy AI-Driven Surveillance That May Be Infiltrating Your Workplace – Digg 

Inside the consulting industry's race to become AI rainmakers – Business Insider

ChatGPT provided better customer service than his staff. He fired them. – Washington Post

AI investments are a top priority for U.S. CEOs, KPMG survey finds – Axios

Your employer is (probably) unprepared for artificial intelligence - Economist

Amazon’s New AI Will Make Its Junk Problem Even Worse – Washington Post

Meta’s Free AI Isn’t Cheap to Use, Companies Say – The Information

One of the most significant impacts of generative AI on enterprises is likely to be in the area of customer experience – Towards AI

Meet Your New AI Chatbot Co-Worker - Bloomberg

AI and the automation of work – Ben Evans

Why trying to "shape" AI innovation to protect workers is a bad idea – Noah Smith

Companies Put AI to Work Outside the Cloud, Trimming Costs - Wall Street Journal

How Do Companies Use Artificial Intelligence? – Data Science Central

As Businesses Clamor for Workplace A.I., Tech Companies Rush to Provide It – New York Times

U.S. employers are using AI to essentially reduce workers to numbers in the workplace – NPR  

Business Technology Chiefs Question ChatGPT’s Readiness for the Enterprise - Wall Street Journal

AI-native business models and experiences will allow small businesses to appear big and large businesses to move faster – Tech Target

Generative AI Tools Use Custom Data to Power More Business Functions - Wall Street Journal 

Businesses Aim to Harness Generative AI to Shake Up Accounting, Finance - Wall Street Journal

How businesses can break through the ChatGPT hype with ‘workable AI’ – Venture Beat

Companies Tap Tech Behind ChatGPT to Make Customer-Service Chatbots Smarter - Wall Street Journal

Employees Using AI at Work   

Employees want ChatGPT at work. Bosses worry they’ll spill secrets. – Washington Post

Panic and possibility: What workers learned about AI in 2023 – BBC

AI In The Workplace: Helpful Or Harmful? – JD Supra

Amazon employees are already using ChatGPT for software coding, to answer customer questions and write cloud training materials – Insider

How to use ChatGPT to make charts and tables – ZDnet 

5 ChatGPT Prompts To Feel Invincible At Work – Forbes

Despite Office Bans, Some Workers Still Want to Use ChatGPT – Wall Street Journal

New Gen Z graduates are fluent in AI and ready to join the workforce – Washington Post

A Guide to Collaborating With ChatGPT for Work - Wall Street Journal 

AI bots lack one critical skill for customer service jobs – Tech Target

10 most in-demand generative AI skills – CIO

The Do’s and Don’ts of Using Generative AI in the Workplace -  Wall Street Journal

The AI Economic Impact

How AI is Tipping the Scale of Job Vulnerability - Medium

Generative AI And The Future Of Jobs - Forbes

These are the jobs most likely to be taken over by AI - ZDNET

GenAI Will Change How We Design Jobs. Here’s How. – Harvard Business Review 

Generative A.I. Can Add $4.4 Trillion in Value to Global Economy, Study Says – New York Times

The economic potential of generative AI: The next productivity frontier - McKinsey

The Impact of AI-enabled Data Analytics Services Across Major Industries – Data Science Central

What AI means for travel—now and in the future - McKinsey 

The world is splitting between those who use ChatGPT to get better, smarter, richer — and everyone else – Business Insider

For Those Leading Companies

The organization of the future: Enabled by gen AI, driven by people - McKinsey  

Gen AI: A guide for CFOs – McKinsey  

As Generative AI Reshapes the Workforce, These Companies May Be Most Affected - Wall Street Journal

Data leaders should consider seven actions to enable companies to scale their generative AI ambitions - McKinsey    

Harness the power of an AI-powered forecasting model to revitalize your business – Data Science Central

How AI May Change Entrepreneurship – Wall Street Journal

Generative AI and the future of HR – McKinsey  

AI Can Do as Bad a Job as Your PR Department - Wall Street Journal

How machine learning can work for business – Tech Central

In digital and AI transformations, companies should start with the problem, not the technology – McKinsey

Technology’s generational moment with generative AI: A CIO and CTO guide - McKinsey

What every CEO should know about generative AI - McKinsey

Companies with innovative cultures have a big edge with generative AI - McKinsey

Does your company need its own LLM? - TechTalks  

Four essential questions for boards to ask about generative AI – McKinsey 

The Big Question for Managers on AI: Who Gets the Job of Figuring It Out? – Wall Street Journal

How AI requires a new approach to work and management – Charter Work  

When AI Meets HR: Prepare Your Policies Now – Inc

Generative AI Can Make Business Intelligence Even Smarter – Here’s How – Inside Big Data

Companies Are Drowning in Too Much AI - Wall Street Journal

ChatGPT: Implications for Business – Medium

How ChatGPT Will Destabilize White-Collar Work - The Atlantic

AI & Jobs

5 types of new jobs that AI could create - Business Insider

The industry talking the most about AI jobs is not tech, according to LinkedIn – Fast Company

Why Walmart thinks AI won’t cut jobs – Semafor

The biggest winners — and losers — in the coming AI job apocalypse – Business Insider

AI threatens wages, not jobs - so far, Researchers find - Reuters

The New Jobs for Humans in the AI Era: Artificial intelligence threatens some careers, but these opportunities are on the rise – Wall Street Journal

A writer says he was laid off after a media company began using AI to translate articles: 'An AI took my job, literally' – Business Insider 

AI-related jobs surge rapidly - The Financial Express

Mid-career professionals, watch out. You're the most exposed to AI - ZDnet

Statement to the US Senate AI Insight Forum on “AI and the Workforce” - ITIF

LinkedIn Shares New Insights into the Impacts of Generative AI on the Workforce – Social Media Today

Study Reveals Professions Most Likely to Be Replaced by AI – Men’s Journal

LinkedIn allows users to use its A.I. to enhance their profiles — but it leaves something to be desired. – Washington Post 

AI-powered digital colleagues are here. Some 'safe' jobs could be vulnerable. - BBC

Employers willing to pay ‘premium’ for AI-skilled workers, survey finds - Higher Ed Dive

Will AI Cause Unemployment? - CATO Institute

AI Is Coming for These Jobs

The Jobs Most Exposed to ChatGPT – WSJ

ChatGPT Might Not Threaten Your Job as Much as the Hype Suggests It Will – The Street

Type in your job to see how much AI will affect it – Washington Post  

Is AI Coming for Our Jobs? (with David Autor) – Café

Here are the 10 roles that AI is most likely to replace – Insider  

Here’s How AI Will Come for Your Job – The Atlantic

Could ChatGPT do my job? – MIT Tech Review

Don't Believe Robots Are Taking Over Jobs: AI Will Open New Career Paths – Insider     

Fear of becoming obsolete hits a new generation of workers – Axios

AI and automation will take more jobs from women than men, report says – Washington Post

Why I'm not worried about AI causing mass unemployment – Understanding AI

The U.S. needs policies now to support workers made redundant by artificial intelligence – The Atlantic

Company Policies on the Use of AI

How WIRED Will Use Generative AI Tools - Wired 

Apple Restricts Employee Use of ChatGPT, Joining Other Companies Wary of Leaks – Wall Street Journal

Associated Press cements the AI era with newsroom guidance – Poynter

The Business of Running an AI Company

TikTok owner ByteDance launches its answer to OpenAI’s GPTs, accelerating a generative AI push - South China Morning Post

OpenAI is working on AI education, safety initiative with Common Sense – CNBC

Data centers in the middle of nowhere - Semafor

How AI development fostered a digital ‘sweatshop’, and why it matters for the technology’s future – South China Morning Post

America Already Has an AI Underclass – The Atlantic

AI is entering an era of corporate control  - The Verge

OpenAI delays launch of custom GPT store until early 2024 – Axios

An Artist in Residence on A.I.’s Territory – New York Times

There Was Never Such a Thing as ‘Open’ AI: Transparency isn’t enough to democratize the technology - The Atlantic

Why hot AI startup Anthropic wanted a lower valuation - Semafor

Fox Corp. launches blockchain platform to negotiate with AI firms – Axios

Microsoft briefly overtakes Apple as world's most valuable company - Reuters

Google may lay off 30,000 employees as AI improves operational efficiency: Report – Business Today

‘Microsoft is back.’ How AI put the five-decade-old tech giant on top again. – Washington Post 

Meta is bucking just about every AI trend, including the ‘boys club’ - Semafor

OpenAI Is in Early Talks to Raise New Funding at $100 Billion Valuation - Bloomberg

Ego, Fear and Money: How the A.I. Fuse Was Lit – The New York Times

OpenAI’s Custom Chatbots Are Leaking Their Secrets - Wired 

Expert survey: Don't trust tech CEOs on AI – Axios 

GitHub’s AI coding assistant, Copilot, is a moneymaker – Semafor

Meta disbanded its Responsible AI team - The Verge

OpenAI’s New Weapon in Talent War With Google: $10 Million Pay Packages for Researchers – The Information

Top Google executive: ‘We don’t believe in outsourcing’ AI development – Semafor  

Seeking a Big Edge in A.I., South Korean Firms Think Smaller - The New York Times

OpenAI unveils ambitions to compete more directly with Big Tech – Washington Post

3 ways to test your AI’s effectiveness – Legal Dive

AI Revolution: Top Lessons from OpenAI, Anthropic, CharacterAI, & More – a16z (podcast)

The TIME100 Most Influential People in AI - TIME

Silicon Valley startups lean into AI boom – Axios

These Prisoners Are Training AI – Wired

Artificial intelligence technology behind ChatGPT was built in Iowa — with a lot of water – KBUR  

Meta is Developing its Own LLM to Compete with OpenAI – Social Media Today

Microsoft, Google rebuild around AI with Windows and Bard updates – Axios

The New ChatGPT Can ‘See’ and ‘Talk.’ Here’s What It’s Like. – New York Times

The State of Large Language Models – Scientific American

OpenAI has quietly changed its ‘core values’ - Semafor

Google Brain cofounder says Big Tech companies are inflating fears about the risks of AI wiping out humanity because they want to dominate the market – Business Insider

New synthetic data techniques could change the way AI models are trained - Semafor 

AI Startup Buzz Is Facing a Reality Check – Wall Street Journal 

Nearly 20% of the world's top 1,000 websites are blocking crawler bots that gather data for AI services – Originality.AI

Prediction: AI will add $4.4 trillion to the global economy annually – New York Times

Salesforce unveiled an acceptable use policy that governs what companies can and can't do with its generative AI technologies - Axios

Behind the AI boom, an army of overseas workers in ‘digital sweatshops’ – Washington Post

How ChatGPT Kicked Off an A.I. Arms Race – New York Times

How ChatGPT became the next big thing - Axios

OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic - TIME 

The state of AI in 2023: Generative AI’s breakout year – McKinsey

Nvidia, which makes chips essential to the development of AI systems, said sales for the current quarter will nearly triple from a year ago – New York Times

Microsoft confirms it’s investing billions in the creator of ChatGPT - CNN

What to know about OpenAI, the company behind ChatGPT - Washington Post

Inside Meta's scramble to catch up on AI - Reuters

Immigrants play outsize role in the AI game - Axios

Apple Is an AI Company Now - The Atlantic

Websites That Have Blocked OpenAI’s GPTBot, CCBot, and Anthropic: A 1,000-Website Study - Originality.AI

What OpenAI Really Wants - Wired

Creativity & AI  

Using AI for Accessibility - Moritz Giessmann

AI Art is the New Stock Image - iA 

The best AI image generators to create AI art – Fast Company

The creative future of generative AI - MIT

Meta launches AI-based video editing tools - Reuters

Contract for WGA, the Hollywood writers' union, includes historic AI rules - Axios

Amazon restricts authors from self-publishing more than three books a day after AI concerns – The Guardian

As AI Battle Lines Are Drawn, Studios Align With Big Tech in a Risky Bet - Hollywood Reporter

Art direction vs artificial intelligence: A helpful tool or an added hassle? - It's Nice That

DeepMind and YouTube release Lyria, a gen-AI model for music, and Dream Track to build AI tunes - Tech Crunch  

Generative AI in film & TV: A Special Report - Variety

YouTube Shorts Challenges TikTok With Music-Making AI for Creators - Wired  

How Frank Sinatra and Yo Gotti Are Influencing the Future of Music on YouTube - Wall Street Journal  

Will AI ruin audiobooks — for narrators and listeners? - The Washington Post

AI-Generated Art: Boom or Bust for Human Creativity? – Center for Data Innovation

Staying Human While Using Generative AI Tools for Content Marketing - CMS Wire 

Director Christopher Nolan reckons with AI’s ‘Oppenheimer moment’ - The Washington Post

AI study suggests famous Raphael painting was not entirely his own work – Euro News

How AI is transforming the creative economy and music industry - Athens Messenger

ChatGPT, DALL-E 2 and the collapse of the creative process – The Conversation

The Truth About AI Getting "Creative" - Marques Brownlee (video)

What Dreams & AI have in common – Kevin Kelly’s blog

Is Adobe's AI art feature as creepy as it sounds? – Creative Bloq

Generative AI may shorten the time it takes to create richer and more thoughtful content - Semafor

Your Creativity Won’t Save Your Job From AI – The Atlantic

8 Big Questions about AI – New York Times

Is AI better at picking and pairing fonts than you? – Better Web Type

Using AI to do the work you don’t want to do – The Dropbox Blog

How to defend against the rise of ChatGPT? Think like a poet – Washington Post

Spotify will not ban AI-made music - BBC

Dangers of AI

Once an AI model exhibits 'deceptive behavior' it can be hard to correct, researchers at OpenAI competitor Anthropic found – Business Insider

AI fears creep into finance, business and law - The Washington Post 

Making an image with generative AI uses as much energy as charging your phone - MIT Tech Review 

Survey identifies media literacy skills gap amidst rise in AI-generated content - Poynter 

Don't Fear ChatGPT's Brain. Worry About Its Very, Very Scary Body - Digg

How AI fake news is creating a ‘misinformation superspreader’ - The Washington Post

‘A certain danger lurks there’: how the inventor of the first chatbot turned against AI - The Guardian

AI's real risk is that people will make things worse - The Washington Post

Dark Corners of the Web Offer a Glimpse at A.I.’s Nefarious Future – New York Times 

Zuckerberg’s AGI remarks follow trend of downplaying AI dangers – Ars Technica

New Psychological and Ethical Dangers of 'AI Identity Theft' – Psychology Today  

A New Study Says AI Is Eating Its Own Tail – Popular Mechanics

Why the Godfather of A.I. Fears What He’s Built – The New Yorker

How AI fake nudes ruin teenagers’ lives - The Washington Post

IAC warns regulators generative AI could wreck the web – Axios

AI can imitate people and make them do things on screen when in reality it wasn’t even them – The Guardian

Cutting-edge AI raises fears about risks to humanity. Are tech and political leaders doing enough? - The Washington Post

A.I. Muddies Israel-Hamas War in Unexpected Way - The New York Times

AI-generated child sexual abuse images could flood the internet. A watchdog is calling for action - The Washington Post 

AI Search Is Turning Into the Problem Everyone Worried About – The Atlantic

Sam Altman's firing fuels myth of AI restraint – Axios

Global Leaders Warn A.I. Could Cause ‘Catastrophic’ Harm – The New York Times

A.I. Could Soon Need as Much Electricity as an Entire Country – New York Times

AI becoming sentient is risky, but that’s not the big threat. Here’s what is… - Science Focus 

Why humans can't trust AI: You don't know how it works, what it's going to do or whether it'll serve your interests – Japan Today

‘A.I. Obama’ and Fake Newscasters: How A.I. Audio Is Swarming TikTok – New York Times

Google and Microsoft Are Supercharging AI Deepfake Porn – Bloomberg

How the inventor of the first chatbot turned against AI – The Guardian

Tools such as ChatGPT threaten transparent science; here are our ground rules for their use – Nature 

China AI & Semiconductors Rise: US Sanctions Have Failed – Semi Analysis

A viral TikTok account is doxing ordinary people on the internet using off-the-shelf facial recognition technology  - 404 Media

Craigslist founder Craig Newmark is pouring millions of dollars into combating AI’s dark side – CNBC

The risk is that AI models will inevitably converge on a point at which they all share the same enormous training set, collectivizing whatever inherent weaknesses that set might have. AIs don't know what they don't know. And that can be very dangerous. Axios

The perennial problem is that technology and computing are portrayed in popular media as magic. Even in this Mission Impossible movie, the idea is once the good guys get a key to access the Entity’s source code, the AI can be controlled. That’s a misunderstanding. Even if you had the actual source code of an AI, it wouldn’t tell you what you need to know. – Alex Hanna, director of research at the Distributed AI Research Institute, in the Washington Post

Experts are raising alarms about the mental health risks and the emotional burden of navigating an information ecosystem driven by AI that's likely to feature even more misinformation, identity theft and fraud. Axios

“If you look at phishing filters, they have to learn first, and by the time they learn, they already have a new set of phishing emails coming,” Srinivas Mukkamala, chief product officer at cybersecurity software company Ivanti, told reporters. “So the chances of a phishing email slipping your controls is very, very high.” Route Fifty

AI technologies are bad for the planet too. Training a single AI model – according to research published in 2019 – might emit the equivalent of more than 284 tonnes of carbon dioxide, which is nearly five times as much as the entire lifetime of the average American car, including its manufacture. These emissions are expected to grow by nearly 50% over the next five years. The Guardian 

Tools like Amazon’s CodeWhisperer and Microsoft-owned GitHub Copilot suggest new code snippets and provide technical recommendations to developers.  By using such tools, it is possible that engineers could produce inaccurate code documentation, code that doesn’t follow secure development practices, or reveal system information beyond what companies would typically share. Wall Street Journal

Attackers are using artificial intelligence to write software that can break into corporate networks in novel ways, change appearance and functionality to beat detection, and smuggle data back out through processes that appear normal. Washington Post 

Doctored photos are "a nifty way to plant false memories," and "things are going to get even worse with deep fake technology," psychologist Elizabeth Loftus said at last month's Nobel Prize Summit, which focused on misinformation. Axios

In a world where talent is as scarce and coveted as it is in AI right now, it’s hard for the government and government-funded entities to compete. And it makes starting a venture capital-funded company to do advanced safety research seem reasonable, compared to trying to set up a government agency to do the same. There’s more money and there’s better pay; you’ll likely get more high-quality staff. Vox

“It’s possible that super-intelligent A.I. is a looming threat, or that we might one day soon accidentally trap a self-aware entity inside a computer—but if such a system does emerge, it won’t be in the form of a large language model.” New Yorker

AI will be at the center of future financial crises — and regulators are not going to be able to stay ahead of it. That's the message being sent by SEC chair Gary Gensler, arguably the most important and powerful regulator in the U.S. at the moment. Axios 

The challenge with generative AI is that the technology is developing so quickly that companies are rushing to figure out if it introduces new cybersecurity challenges or magnifies existing security weaknesses. Meanwhile, technology vendors have inundated businesses with new generative AI-based features and offerings—not all of which they need or have even paid for. Wall Street Journal 

An estimated 3,200 hackers will try their hand at tricking chatbots and image generators, in the hopes of exposing vulnerabilities. “We’re trying something very wild and audacious, and we’re hopeful it works out.” – Semafor

Researchers have found an AI-driven attack that can steal passwords with up to 95% accuracy by listening to what you type on your keyboard. Metro

Facial Recognition Software leads Detroit Police to Wrongly Arrest Pregnant Woman – Click on Detroit

Supermarket AI meal planner app suggests recipe that would create chlorine gas – The Guardian  

AI is being used to give dead, missing kids a voice they didn’t ask for – Washington Post 

AI is sleepwalking us into surveillance – UX Design

The dangers of open source AI - Axios

Single mother taking night classes evicted after facial recognition software flagged the movements of her babysitter – Futurism

FBI issues warning about AI malware assaults – Analytics Insights

The $1 billion gamble to ensure AI doesn’t destroy humanity – Vox

ChatGPT falsely accused me of sexual harassment. Can we trust AI? – USA Today

A New Frontier for Travel Scammers: A.I.-Generated Guidebooks – New York Times 

Don't get scammed by fake ChatGPT apps: Here's what to look out for – ZDNET

Seven AI companies commit to safeguards at the White House's request – Engadget

The 'AI Apocalypse' Is Just PR – The Atlantic

This Is Why We Can't Have Nice Things Like AI (the problems with facial recognition)  - Above the Law  

'ChatGPT is the new crypto': Meta warns hackers are exploiting interest in the AI chatbot – CNN  

A.I. Needs an International Watchdog, ChatGPT Creators Say – New York Times

IBM researchers show ways ChatGPT, Bard can be tricked into helping with hacks – Axios

Assessing the existential risk of AI – MIT Tech Review  

Intelligence analysts confront the reality of deepfakes: AI-generated image of fake Pentagon explosion just an inkling of what’s to come - Space News

The potential dangers of using artificial intelligence as a weapon of war - NPR

India’s religious AI chatbots are speaking in the voice of god - Rest of World 

AI-generated child sex images spawn new nightmare for the web – Washington Post 

Princeton computer science professor says don't panic over 'bullshit generator' ChatGPT - Business Insider  

Developers Created AI to Generate Police Sketches. Experts Are Horrified - Vice

Calm Down. There is No Conscious A.I. – Gizmodo

AI can be racist, sexist and creepy. What should we do about it? - CNN

The case for slowing down AI – Vox

More than 1,000 tech leaders & researchers call for a six-month moratorium on AI development over “risks to society and humanity.” - New York Times

Claudia offers nude photos for pay but is a fake AI creation - Washington Post

If Wikipedia content is AI-generated, (it could create) a feedback loop of potentially biased information, if left unchecked - Vice  

2024 promises to be the first AI election cycle with artificial intelligence potentially playing a pivotal role at the ballot box - USA TODAY 

Why Hollywood Really Fears Generative AI - Wired 

How AI is already changing the 2024 election - Axios

Chatbots have faced criticism for messing up key historical facts, fabricating sources, and citing misinformation about each other - Columbia Journalism Review 

OpenAI CEO Sam Altman Says Government Intervention Is 'Crucial' – Entrepreneur 

The internet is filled with videos promising AI can make you rich. But there is little evidence to prove it can. – Washington Post

What happens when AI becomes so integrated into our daily decision-making that we become dependent on it? - Inside Higher Ed 

ChatGPT Is Cutting Non-English Languages Out of the AI Revolution—threatening to amplify existing bias in global commerce and innovation - Wired

Will AI replace coders? - The Guardian 

U.S. Grapples With Potential Threats From Chinese AI - Wall Street Journal 

Researchers failed to identify one-third of medical journal abstracts as written by AI - bioRxiv 

Why AI Will Make Our Children More Lonely - Wall Street Journal

Hackers are already abusing ChatGPT to write malware - Axios

Armed With ChatGPT, Cybercriminals Build Malware And Plot Fake Girl Bots - Forbes

Mutating malware can be built using the ChatGPT - CSO 

Machines could prompt other machines to put out text ad infinitum, flooding the internet with synthetic text devoid of human agency or intent - The Atlantic

Experts have already seen and documented more than 60 smaller-scale examples of AI systems trying to do something other than what their designer wants - Google Spreadsheet list 

My No. 1 concern about AI right now is AI systems can do more things than their creators know that they can do - NPR

Tech security firm Zscaler (cites) AI as a factor in the 47 percent surge in phishing attacks it saw last year - Washington Post

Many of these applications are potentially vulnerable to prompt injection and it’s not clear to me that this risk is being taken as seriously as it should - Simon Willison’s Blog 

The Security Hole at the Heart of ChatGPT and Bing - Wired

back to top

Definitions (basic terms are starred)

Agents - Unlike AI prompts requiring user conversations, AI agents work in the background. Users provide a goal (from researching competitors to buying a car) and the agent acts independently, generating a task list and starting to work. 

Artificial General Intelligence (AGI) - AI that possesses human-level intelligence that can evaluate complex situations, apply common sense, and learn and adapt.  Beyond the goal of AGI lies the more speculative notion of "sentient AI," the idea that these programs might cross some boundary to become aware of their own existence and even develop their own wishes and feelings.

AI Evolution

  1. Generative AI sounds like a person.

  2. AGI (artificial general intelligence) reasons like a person.

  3. Sentient AI thinks it's a person.

AI model collapse - The idea that AI can eat itself by training on internet data until it runs out of fresh data and begins training on its own output or the output of another AI. Errors and bias are then magnified, and rare data is more likely to be lost.

AI winter - A period when funding and interest in the field subsided considerably. 

*Algorithms - Direct, specific instructions for computers, created by a human through coding, that tell the computer how to perform a task. This set of rules has a finite number of steps and follows the logic of “if”, “then”, and “else.”

An example of an algorithm would be:        

IF the customer orders size 13 shoes, THEN display the message ‘Sold out, Sasquatch!’;  ELSE ask for a color preference.     
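In an actual programming language the same rule is only a few lines. Here is a minimal Python sketch; the function name and messages are illustrative, not from any real system:

```python
def handle_shoe_order(size: int) -> str:
    """Rule-based algorithm: a fixed if/then/else decision."""
    if size == 13:                       # IF the customer orders size 13
        return "Sold out, Sasquatch!"    # THEN display the message
    else:                                # ELSE ask for a color preference
        return "What color would you like?"
```

Every input follows one of the two branches; nothing is learned from data, which is exactly what separates this from the machine-learning approach below.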

Two approaches to algorithms:

1. Rule-based algorithms – direct, specific instructions are created by a human

2. Machine-learning algorithms – under the larger umbrella of AI, the data and the goal are given to the algorithm, which works out for itself how to reach the goal. There is a popular perception that algorithms provide a more objective, more complete view of reality, but they often simply reinforce existing inequities, reflecting the bias of their creators and the materials used to train them.

Apache Spark - This data processing tool can be used on very large data sets. Its “cluster computing” uses resources from many computer processors linked together for rapid data processing and real-time analytics. Thus, it supports "predictive analytics." For instance, it can analyze video or social media data automatically. It's a scalable solution meaning that if more oomph is needed, you can simply introduce more processors into the system. It has basically replaced MapReduce as the batch processing engine in Hadoop. 

API - (application programming interface) This software acts as a go-between for applications, programs, or systems to allow them to talk to each other. APIs are essentially acting as translators for AI platforms. 

*Artificial Intelligence (AI) - Basically, it means “making machines intelligent” so they can make some decisions on their own according to the situation, without human intervention. The phrase was coined, says The Economist, in a research proposal written in 1956. The defining feature of artificial intelligence is that behavior is learned from data rather than explicitly programmed. The current excitement about the field was kick-started in 2012 by an online contest called the ImageNet Challenge, in which the goal was getting computers to recognize and label images automatically.

Bard AI - Now Gemini.

*Big Data - Data that’s too big to fit on a single server. Typically, it is unstructured and fast moving. In contrast, small data fits on a single server, is already in structured form (rows and columns), and changes relatively infrequently. If you are working in Excel, you are doing small data. Two NASA researchers (Michael Cox and David Ellsworth) first wrote in a 1997 paper that when there’s too much information to fit into memory or local hard disks, “We call this the problem of big data.” Many companies wind up with big data not because they need it but because they haven’t bothered to delete it. Thus, big data is sometimes defined as “when the cost of keeping data around is less than the cost of figuring out what to throw away.”    

Big Data looks to collect and manage large amounts of varied data to serve large-scale web applications and vast sensor networks. Meanwhile, data science looks to create models that capture the underlying patterns of complex systems, and codify those models into working applications. Although big data and data science both offer the potential to produce value from data, the fundamental difference between them can be summarized in one statement: collecting does not mean discovering. Big data collects. Data science discovers.  

C and C++ - These programming languages are a good choice for data scientists working on projects that require high performance or massive scalability. They process data quickly and efficiently.    

Causal AI - While large language models and traditional machine learning need a lot of data, causal AI focuses on cause-and-effect relationships and needs less data. Beyond connecting data points, it looks for the direction of influence between them.

*ChatGPT - This OpenAI chatbot remembers what you've written or said, so the interaction has a dynamic conversational feel. Give the software a prompt and it creates articles. GPT-4 can use both images and text as inputs and can process up to 25,000 words. It can write and explain code. It doesn’t cite sources and its training data runs through April 2023. Can browse the internet with Bing. There is a limited free version, or pay $20 a month for ChatGPT Plus.  

*Claude - This AI is from Anthropic, a startup co-founded by ex-OpenAI execs with funding from Google. Like ChatGPT, it can act on text or uploaded files. Indexed through 2023. Useful for summarizing long transcripts, clarifying complex writings, and generating lists of ideas and questions. Can analyze up to 75K words at a time. Free.

Constitutional AI - This type of AI is similar to reinforcement learning with human feedback (RLHF for short). Rather than use human feedback, the researchers present a set of principles (or “constitution”) and ask the model to revise its answers to prompts to comply with these principles.

*Dall-E - OpenAI’s tool that turns written text into images using AI. Named after painter Salvador Dalí and Disney Pixar’s WALL-E.  A limited number of images are free. 

Data Lake - Giant, messy swamps of data where no one really knows what’s in the data or whether it is safe to clean them up.   

Data Poisoning – An attack on a machine-learning algorithm where malicious actors insert incorrect or misleading information into the data used to train an AI model to pollute the results. It also can be used as a defensive tool to help creators reassert some control over the use of their work.

Data Science - Using machine learning to make predictions, combining ML with other disciplines (like big data analytics and cloud computing) to solve real-world problems.

Data Scientist - A data scientist is a person who has the responsibility to glean insight from the massive pool of data. Data scientists typically have advanced degrees in a quantitative field, like computer science, physics, statistics, or applied mathematics. They have a strong understanding of math and statistics, and possess the knowledge to invent new algorithms to solve data problems. They will typically use programming languages like Python, R, and SQL. They will be familiar with using big data tools like Hadoop and Apache Spark and have experience working with unstructured data. If you don't see these types of skills on a resume, then that person probably isn't a data scientist. 

Deepfake – Images, photos, or videos produced by AI tools and designed to fool people into thinking they are real.

Deep Learning – Training computers to use neural networks and solve problems. It involves a particular kind of mathematical model. The word “deep” means that the composition has many “blocks” of neural networks stacked on top of each other, and the trick is adjusting the blocks that are far from the output, since a small change there can have very indirect effects on the output. It is the dominant way to help machines sense and perceive the world around them. It powers the image-processing operations of firms like Facebook and Google, self-driving cars, and Google’s on-the-fly language translations. 

The ELIZA effect - When humans mistake unthinking chat from machines for that of a human.

Extractive summarization - Identifying the important sections of a text and then producing a subset of sentences from that original text. On the other hand, abstractive summarization uses natural language techniques to interpret and understand the important aspects of a text in order to generate a more “human”-friendly summary. While abstractive summarization generates entirely new sentences that are sometimes not in the source material, extractive summarization sticks to the original text. This is particularly helpful when accuracy and maintaining the author's original intent are the priority.

*Existential risk – The danger that an AI system might threaten humanity's future as the result of a malfunction. 

Facial recognition -  This AI technology uses statistical measurements of a person’s face to identify them against a digital database of other faces. For instance, Clearview AI was trained on billions of images. These AI-powered systems are used to unlock phones, verify passports, and scan crowds at events for malicious actors. It’s used by many US agencies including the FBI and Department of Homeland Security. It has a serious problem with false positives and a history of unintended harms and intentional misuse based on racial and gender bias.

Foundation models - FMs are large deep learning neural networks trained on massive datasets. Data scientists use a foundation model as a starting point to develop machine learning models. FMs are adaptable, able to perform a wide range of tasks with accuracy. This is in contrast to traditional machine learning models, which typically perform specific tasks. Foundation models are also called Large X Models, or LXMs.

*Gemini AI - Google’s conversational AI (formerly Bard). It lacks attribution and links to background articles. Free to use. 

*Generative AI - Artificial intelligence that can produce content (text, images, audio, video, etc.). It operates similarly to the “type ahead” feature on smartphones that makes next-word suggestions. Gen AI is more sophisticated and based on the particular content it was trained on (exposed to).

GPT - An LLM-based AI that goes through an unsupervised training period followed by a supervised "fine-tuning" phase. The “GPT” in ChatGPT stands for Generative Pre-Trained Transformer.

*Hallucinations - When an AI provides responses that are inaccurate or not based in fact. It’s important to remember that generative models shouldn’t be treated as a source of truth or factual knowledge. They surely can answer some questions correctly, but this is not what they are designed and trained for. Generative AI models are designed and trained to hallucinate, so hallucinations are a common product of any generative model. The job of a generative model is to generate data that is realistic or distributionally equivalent to the training data, yet different from the actual data used for training. Sometimes the results reflect the real world and sometimes they do not.

*Jasper AI - AI story-writing tool for fiction and nonfiction. Pick a tone of voice for style. Pre-built templates available. A more business-focused AI that is particularly helpful for advertising and marketing. Remembers past queries. However, no sources are provided and it is limited to pre-2022 information. Short free trial; $29 a month. 

Java - Data scientists may choose this programming language to perform tasks related to machine learning, data analysis, and data mining.

Large Language Models (LLMs) - AI trained on billions of language uses, images and other data. It can predict the next word or pixel in a pattern based on the user’s request. ChatGPT and Google Bard are LLMs.

The kinds of text LLMs can parse out include grammar and language structure, word meaning and context (ex: The word green may mean a color when it is closely related to a word like “paint,” “art,” or “grass”), proper names (Microsoft, Bill Clinton, Shakira, Cincinnati), and emotions (indications of frustration, infatuation, positive or negative feelings, or types of humor).

Large X Models (LXM) – Another name for foundation models.  

*Machine learning (ML) - This is a subset of AI that spots patterns and improves on its own without explicit programming. The AI is then able to evolve and adapt when exposed to new data. An example would be algorithms recommending ads for users, which become more tailored the longer they observe the users’ habits (clicks, likes, time spent, etc.). Data scientists use ML to make predictions by combining it with other disciplines (like big data analytics and cloud computing) to solve real-world problems. However, while this process can uncover correlations between data, it doesn’t reveal causation. It is also important to note that the results provide probabilities, not absolutes. There are four types of machine learning: supervised, unsupervised, semi-supervised, and reinforcement learning. Not all AI is machine learning. A clever computer program that mimics human-like behavior can be AI. However, it is not machine learning unless its parameters are automatically learned from data. Video: Introduction to Machine Learning
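To make "parameters learned from data" concrete, here is a toy sketch of a nearest-class-mean classifier in Python. Nothing about the classes is hand-coded; the numbers and labels below are invented for illustration:

```python
def train_nearest_mean(examples):
    """Learn the average value of each class from labeled
    (value, label) pairs -- the parameters come from the data."""
    sums, counts = {}, {}
    for value, label in examples:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(means, value):
    """Assign the class whose learned mean is closest to the input."""
    return min(means, key=lambda label: abs(means[label] - value))

# Invented data: seconds spent on an ad, labeled by outcome
data = [(1.0, "interested"), (1.5, "interested"),
        (8.0, "bored"), (9.0, "bored")]
means = train_nearest_mean(data)
```

Feed it different training data and the same code learns different behavior, which is the defining contrast with the rule-based shoe-size example above.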

Machine Vision - The ability of software to identify the contents of an image. 

*MidJourney - Probably the best AI image generator, it uses machine learning to create pictures based on text. However, it is hard for a beginner because of its poor user interface. 

Narrow AI - The use of artificial intelligence for a very specific task. For instance, general AI would mean an algorithm capable of playing all kinds of board games, while narrow AI limits the range of machine capabilities to a specific game like chess or Scrabble. 

Natural-language processing - This is a type of ML that makes human language intelligible to machines.

*Neural Network - In this type of machine learning computers learn a task by analyzing training examples. It is modeled loosely on the human brain—the interwoven tangle of neurons that process data in humans and find complex associations. Neural networks were first proposed in 1944 by two University of Chicago researchers (Warren McCullough and Walter Pitts) who moved to MIT in 1952 as founding members of what’s sometimes referred to as the first cognitive science department. Neural nets were a major area of research in both neuroscience and computer science until 1969. The technique then enjoyed a resurgence in the 1980s, fell into disfavor in the first decade of the new century, and has returned like gangbusters in the second, fueled largely by the increased processing power of graphics chips. Also see “transformers.” 

NoSQL - Real-time transactional databases for fast data storage and update.

Opaque AI - When an AI algorithm operates as a black box that we can’t understand. This can lead to AI systems that inadvertently perpetuate and amplify biases. On the other hand, AI transparency allows for the examination and understanding of how these biases occur, leading to more ethical and fair AI systems. The level of AI opacity varies depending on the industry. For example, in highly regulated industries, transparency is paramount for legal and regulatory compliance.

*Open Source AI - When the source code of an AI is available to the public, it can be used, modified, and improved by anyone. Closed AI means access to the code is tightly controlled by the company that produced it. The closed model gives users greater certainty as to what they are getting, but open source allows for more innovation. Open-source AI would include Stable Diffusion, Hugging Face, and Llama (created by Meta). Closed Source AI would include ChatGPT and Google’s Bard. 

Perishable insights - Insights contained in live flowing data. 

*Perplexity AI - Acts like a search engine but includes results from the web (unlike ChatGPT). Automatically generates citations of sources and suggests follow-up prompts. Free. 

Predictive analytics - Using statistical models based on past data, this is a method of speculating about future events and then making recommendations. Researchers create complex mathematical algorithms to discover patterns in data about online behavior, human conduct, and nature. One doesn't know in advance which data is important; predictive analytics is designed to discover which data will predict the outcome the users wish to forecast. Predictive analytics uses the past to predict the future. Although correlation is not causation, a cause-and-effect relationship is not necessarily needed to make predictions.   
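A minimal sketch of the idea in Python, fitting past data with ordinary least squares (one simple statistical model among many) and extrapolating a future value; the numbers are invented:

```python
def fit_line(xs, ys):
    """Ordinary least squares: learn slope and intercept from past data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Invented past data: month number vs. units sold
xs, ys = [1, 2, 3, 4], [10, 20, 30, 40]
slope, intercept = fit_line(xs, ys)
forecast = slope * 6 + intercept   # speculate about month 6
```

Real predictive-analytics pipelines use far richer models, but they share this shape: fit parameters to history, then apply them to inputs the model has not seen.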

*Prompts - Instructions for an AI. It is the main way to steer the AI in a particular direction, indicate your intent, and give it a context to work in. It can be time-consuming if the task is complex.  

*Prompt Engineer  - An advanced user of AI models, a prompt engineer doesn’t possess special technical skills but is able to give clear instructions, so the AI returns results that most closely match expectations. This skill can be compared to a psychologist who is working with a client who needs help expressing what they know.   

Prompt Injection - Like prompt engineering, but with the goal of working around AI to produce harmful content. Hackers use carefully crafted prompts or text-based instructions to manipulate generative AI systems into sharing sensitive information or perform unintended actions by making the model ignore previous instructions. 

Python - A popular programming language choice for data scientists, used to build machine learning, data analytics, and data visualization applications. The Python language is often used to automate tasks.

Quantum Computers – The computers we use today operate on a traditional binary code, which represents information with 0s and 1s. Quantum machines, on the other hand, use quantum bits, or qubits. The unusual properties of qubits make quantum computers far more powerful for some kinds of calculations, including the mathematical problems that underpin much of modern encryption.

R - This scripting language is open-source and widely supported. It is used by data scientists managing large, complex data sets. Considered the best language to combine statistical computing with mathematics and graphics.    

Red Teaming - Testing an AI by trying to force it to act in unintended or undesirable ways, thus uncovering potential harms. The term comes from a military practice of taking on the role of an attacker to devise strategies. 

Reinforcement learning - This type of AI training sits somewhere between supervised and unsupervised learning. It involves training an AI to interact with an environment after it is deployed, with only occasional feedback in the form of a reward: the system takes an action and is sometimes rewarded for it. In essence, the training involves adjusting the network’s weights to search for a strategy that consistently generates higher rewards. Rather than being given specific goals, this machine learning navigates by trial and error, similar to a person learning to work through levels of a video game. And reinforcement learning is indeed used in video game development.   

RLHF - Reinforcement learning with human feedback. 

Semi-supervised learning - In this type of AI training, the model works with both labeled and unlabeled data. 

Shadow AI - Generative AI use inside organizations without the approval or supervision of IT.

Small Language Models (SLMs) – Requiring less data and training time than large language models, SLMs have fewer parameters, making them more useful on the spot or on smaller devices. Perhaps the best advantage of SLMs is their ability to be fine-tuned for specific tasks or domains. They are also more useful for enhanced privacy and security and are less prone to undetected hallucinations. Google’s Gemma is an example.

Spark – See Apache Spark. 

SQL - This programming language is second in importance for data scientists after Python. The industry uses it for interfacing with relational database systems and data scientists are often required to use this language when dealing with structured data.  

Stable Diffusion - Generates visual creations through AI. Since it is open-sourced, anyone can view the code. Fewer restrictions on how it can be used than DALL-E.

Supervised training - In this type of AI training, the data is labeled by humans before giving it to the AI. For example, a database of messages with each labelled either “spam” or “not spam.”  It is the most common type of machine learning, and it is expensive and time consuming. This area includes voice recognition, language translation, and self-driving cars. Anything that takes only a second for a person to do is something that might be performed by this type of AI. Jobs that are a series of one-second tasks are at risk from it (such as security guard). Most of the present economic value of AI comes from this type.

Temperature - A setting within some generative AI models that determines the randomness of the output. The higher the temperature set by the user, the more variability there is in the result.
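Under the hood, temperature typically rescales a model's scores before they are turned into probabilities. A minimal sketch of that softmax-with-temperature step in Python, with invented scores:

```python
import math

def softmax(logits, temperature=1.0):
    """Turn raw model scores into probabilities; temperature
    divides the scores before normalizing."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                       # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                  # invented next-token scores
cold = softmax(logits, temperature=0.1)   # near-deterministic pick
hot = softmax(logits, temperature=2.0)    # flatter, more variable
```

At low temperature almost all the probability piles onto the top-scoring option; at high temperature the options even out, which is why the output feels more random.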

Token – The words and sentences used by people are broken down by LLMs into tokens, mostly for computing efficiency. Think of a token as the root of a word. “Creat” is the root of many words including Create, Creative, Creator, Creating, and Creation, so “creat” would be an example of a token. Examples: https://platform.openai.com/tokenizer
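A toy illustration of subword splitting in Python. Real LLM tokenizers use large learned byte-pair-style vocabularies; this greedy longest-match version with a hypothetical five-piece vocabulary is only a sketch of the idea:

```python
def tokenize(text, vocab):
    """Greedy longest-match subword split: at each position, take
    the longest vocabulary piece that fits."""
    tokens = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):   # try longest piece first
            piece = text[i:j]
            if piece in vocab:
                tokens.append(piece)
                i = j
                break
        else:                               # unknown character: keep as-is
            tokens.append(text[i])
            i += 1
    return tokens

vocab = {"creat", "ion", "ive", "or", "e"}  # hypothetical tiny vocabulary
```

With this vocabulary, "creative" and "creation" share the token "creat", which is how a model can relate words it has never seen spelled out in full.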

*Training data - The data initially provided to an AI model for it to create its map of relationships. Relying on a wide variety of data sources from the web, rather than curated, locked-down data sets, can make the training more vulnerable to the insertion of poisoned data by hackers and the model more susceptible to hallucinations.

Transfer learning - This allows a reinforcement-learning system to build on previously acquired knowledge, rather than having to be trained from scratch every time.  

Transformer – A deep learning architecture first discussed at length in Google’s 2017 research paper “Attention Is All You Need.” Every major AI model today (ChatGPT, GPT-4, Midjourney) is built using neural networks called transformers. Previously, recurrent neural networks (RNNs) processed data sequentially, one word at a time, in the order in which the words appear. An “attention mechanism” was added to enable a model to consider the relationships between words. Transformers advanced this process by analyzing all the words in a given body of text at the same time rather than in sequence. With transformers, it became possible to create higher-quality language models that could be trained more efficiently and with more customizable features. 
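The attention step can be sketched in a few lines of Python: every query position scores all key positions at once, then takes a weighted average of the values. This is a bare-bones scaled dot-product attention, not a full transformer:

```python
import math

def attention(queries, keys, values):
    """Scaled dot-product attention: each position weighs every
    position simultaneously instead of reading word by word."""
    d = len(queries[0])
    out = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        m = max(scores)                   # softmax the scores
        weights = [math.exp(s - m) for s in scores]
        total = sum(weights)
        weights = [w / total for w in weights]
        # Weighted average of the value vectors
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out
```

Because the loop over keys and values is the same for every query, all positions can be computed in parallel, which is the efficiency win over sequential RNNs.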

The Turing test - Proposed by computing pioneer Alan Turing in 1950, the Turing test measures whether a computer program could fool a human into believing it was human too.  

Unsupervised training - In this type of AI training, the AI is turned loose on raw data without a human labeling the data first. The AI isn’t told what to look for. Instead, the network learns to recognize features and cluster similar examples. This reveals hidden groups, links, and patterns within the data. This is helpful when the user cannot describe the thing they are looking for, such as a new type of cyberattack. Less expensive than supervised learning, it can work in real time but is less accurate.

Vector databases - Data stored as mathematical representations (vectors) to make it easier for machine learning models to remember previous inputs, draw comparisons, identify relationships, and understand context. It’s similar to being able to provide a purchase suggestion under the heading "Customers also bought..."  Vector databases enable machine learning models to identify objects that can be grouped together, enabling the creation of advanced AI programs like large language models.
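The core lookup a vector database performs can be sketched as a cosine-similarity nearest-neighbor search in Python. The two-dimensional "embeddings" below are invented for illustration; real ones come from an embedding model and have hundreds of dimensions:

```python
import math

def cosine(a, b):
    """Similarity between two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

def nearest(query, store):
    """Return the stored item whose vector is most similar to the
    query -- the lookup a vector database performs at scale."""
    return max(store, key=lambda name: cosine(query, store[name]))

# Invented embeddings for two products
store = {"running shoes": [0.9, 0.1], "espresso maker": [0.1, 0.9]}
```

A production system adds indexing so this comparison doesn't have to touch every stored vector, but the "find the closest meaning" operation is the same.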

More sources of definitions

 A jargon-free explanation of how AI large language models work - Arstechnica 

No, chatbots aren’t sentient. Here’s how their underlying technology works. – New York Times

 Everything you wanted to know about AI – but were afraid to ask – The Guardian

Demystifying ChatGPT! – Toward AI

ChatGPT explained: what is it and why is it important? – Tom’s Guide

AI's scariest mystery – Axios

AutoGPT basics – KD Nuggets

What is ChatGPT? Everything you need to know – Tom’s Guide 

back to top

Ethics & AI  

Anthropic wants to create a better constitution for AI – Axios

AI researchers uncover ethical, legal risks to using popular data sets – Washington Post

Should A.I. Accelerate? Decelerate? A professor of both A.I. and A.I. ethics says the answer Is both. – New York Times  

Adopting and expanding ethical principles for generative artificial intelligence from military to healthcare – Nature

AI has social consequences, but who pays the price? Tech companies’ problem with ‘ethical debt – The Conversation  

A.V. Club's AI Reporter Plagiarized IMDb – Plagiarism Today 

New Psychological and Ethical Dangers of 'AI Identity Theft' – Psychology Today

AI's next fight is over whose values it should hold – Axios

Generative AI Is a (ethical) Disaster, and Companies Don’t Seem to Really Care – Vice  

Artificial Intelligence comes with risks. How can companies develop AI responsibly? - NPR

USC Invests $1 Billion in New Computing School to Teach Ethical AI Use - dot.LA

The Green Glass Approach to Responsible AI – Expert AI

AI is acting ‘pro-anorexia’ and tech companies aren’t stopping it – Washington Post

Pope Francis: AI should be used in a responsible and ethical way – Market Watch

For artificial intelligence to thrive, it must explain itself - Economist 

Colonizing Art – Openmind Mag

AI operations create a huge carbon footprint and often rely on low-paid workers in developing countries. Some professors and students may decide it’s ethically questionable to use these tools. – Chronicle of Higher Ed

Leading companies including Anthropic and Google DeepMind are creating “AI constitutions”—a set of values and principles that their models can adhere to, in an effort to prevent abuses. – Ars Technica

The ethics of AI-powered marketing technology – Mark Tech

Ethical considerations in the use of AI – Reuters

Answering AI’s biggest questions requires an interdisciplinary approach – Tech Crunch

OpenAI's 'unreasonable claims' exhaust AI-ethics researchers – Insider  

Generative AI Is Making Companies Even More Thirsty for Your Data – Wired  

Amazon created an AI resume-reading software and worked on this project for two years, trying various kinds of bias-mitigation techniques. And at the end of the day, they couldn’t sufficiently de-bias it, and so they threw it out.  - CNN

Yes, you need data scientists and data engineers. You need those tech people. You also need people like sociologists, attorneys, especially civil rights attorneys, and people from risk. You need that cross-functional expertise because solving or mitigating bias in AI is not something that can just be left in the technologists’ hands. - CNN

Can AI chatbots like ChatGPT help us make ethical decisions rationally? - Vox 

Teaching AI Ethics - Leon Furze Blog 

People Using Generative AI ChatGPT Are Instinctively Making This AI Rookie Mistake, A Vexing Recipe For AI Ethics And AI Law - Forbes

Online mental health company uses ChatGPT to help respond to users in experiment — raising ethical concerns around healthcare and AI technology - Business Insider

When AI Overrules the Nurses Caring for You - Wall Street Journal

A.I. Is Becoming More Conversant. But Will It Get More Honest? - New York Times

back to top

Fakes & Detecting AI

AI deepfakes of Taylor Swift spread on X. Here’s what to know. – Washington Post

A Photographer Who Found Instagram Fame for His Striking Portraits Has Confessed His Images Were Actually A.I.-Generated - ArtNet

A New Kind of AI Copy Can Fully Replicate Famous People. The Law Is Powerless. - Politico

Wait, Can Turnitin Actually Detect If You Use ChatGPT For A Paper? – Her Campus

How to Spot AI-Generated Images – Every Pixel

A machine-learning tool can easily spot when chemistry papers are written using the chatbot ChatGPT – Nature

Google, Bing put deepfake porn at the top of some search results – NBC News

AI bots are everywhere now. These telltale words give them away. - Washington Post

Disinformation poses an unprecedented threat in 2024 — and the U.S. is less ready than ever – NBC News

"Tools to detect AI-written content are notoriously unreliable and have resulted in what students say are false accusations of cheating and failing grades. OpenAI unveiled an AI-detection tool in January, but quietly scrapped it due to its “low rate of accuracy.” One of the most prominent tools to detect AI-written text, created by plagiarism detection company Turnitin.com, frequently flagged human writing as AI-generated, according to a Washington Post examination." – Washington Post

Too many educators think AI detectors are ‘a silver bullet and can help them do the difficult work of identifying possible academic misconduct.’ My favorite example of just how imperfect they can be: A detector called GPTZero claimed the US Constitution was written by AI. – Washington Post

Run some of your other writing dated before the arrival of ChatGPT in the fall of 2022 through an AI detector, to see whether any of it gets flagged. If it does, the problem is clearly the detector, not the writing. (It’s a little aggressive, but one student told me he did the same with his instructor’s own writing to make the point.) – Washington Post

It’s important to remember that generative models shouldn’t be treated as a source of truth or factual knowledge. They surely can answer some questions correctly, but this is not what they are designed and trained for. It would be like using a racehorse to haul cargo: it’s possible, but not its intended purpose … Generative AI models are designed and trained to hallucinate, so hallucinations are a common product of any generative model … The job of a generative model is to generate data that is realistic or distributionally equivalent to the training data, yet different from actual data used for training. - InsideBigData

How Easy Is It to Fool A.I.-Detection Tools? – New York Times

Can you tell which poem was written by ChatGPT? – Al Jazeera

20 Questions (with Answers) to Detect Fake Data Scientists: ChatGPT Edition, Part 1 – KD Nuggets

ChatGPT sparks surge of AI detection tools - Axios

AI-Created Images Are So Good Even AI Has Trouble Spotting Some – Wall Street Journal

Can We No Longer Believe Anything We See? – New York Times

Did a Fourth Grader Write This? Or the New Chatbot? - New York Times  

Only Half of Americans Can Differentiate Between AI and Human Writing – PC Mag

back to top

Students Using AI

For students who do not self-identify as writers, for those who struggle with writer’s block or for underrepresented students seeking to find their voices, it can provide a meaningful assist during initial stages of the writing process. Inside Higher Ed

Let’s be honest. Ideas are more important than how they are written. So, I use ChatGPT to help me organize my ideas better and make them sound more professional. The Tech Insider 

Students could (use AI to) look for where the writing took a predictable turn or identify places where the prose is inconsistent. Students could then work to make the prose more intellectually stimulating for humans. Inside Higher Ed 

If you’re a college student preparing for life in an A.I. world, you need to ask yourself: Which classes will give me the skills that machines will not replicate, making me more distinctly human? A.I. often churns out the kind of impersonal bureaucratic prose that is found in corporate communications or academic journals. You’ll want to develop a voice as distinct as those of George Orwell, Joan Didion, Tom Wolfe and James Baldwin, so take classes in which you are reading distinctive and flamboyant voices so you can craft your own. New York Times

Imagine if the platform extracted campus-specific information about gen ed and major requirements. It could then provide quality academic advice to students that current chat bots can’t. Inside Higher Ed 

ChatGPT may be able to help with more basic functions, such as assisting with writing in English for those who do not speak it natively. Tech Radar

What if the platform had access to real-time local or regional job market data and trends and data about the efficacy of various skills certificates? It could then serve as initial-tier career counseling. Inside Higher Ed

On TikTok, the hashtag #chatgpt has more than 578 million views, with people sharing videos of the tool writing papers and solving coding problems. New York Times 

The student who is using it because they lack the expertise is exactly the student who is not ready to assess what it’s doing critically. Some argue that it’s not worth the time spent ferreting out a few cheaters and would rather focus their energy on students who are there to learn. Others say they can’t afford to look the other way. Chronicle of Higher Ed

It used to be about mastery of content. Now, students need to understand content, but it’s much more about mastery of the interpretation and utilization of the content. Inside Higher Ed

Don’t fixate on how much evidence you have but on how much evidence will persuade your intended audience. ChatGPT distills everything on the internet through its filter and dumps it on the reader; your flawed and beautiful mind, by contrast, makes its mark on your subject by choosing the right evidence, not all the evidence. Find the six feet that your reader needs, and put the rest of your estate up for auction. Chronicle of Higher Ed 

A.I. is good at predicting what word should come next, so you want to be really good at being unpredictable, departing from the conventional. New York Times

We surpass the AI by standing on its shoulders. Boris Steipe, associate professor of molecular genetics at the University of Toronto, for example, encourages students to engage in a Socratic debate with ChatGPT as a way of thinking through a question and articulating an argument. “You will get the plain vanilla answer—what everybody thinks—from ChatGPT,” Steipe said. “That’s where you need to start to think. That’s where you need to ask, ‘How is it possibly incomplete?’” Inside Higher Ed

Students can leverage ChatGPT as a tutor or homework supplement, especially if they need to catch up. ChatGPT’s ability to make curated responses is unparalleled, so if a student needs a scientific explanation for a sixth-grade reading level, ChatGPT can adapt. New York Magazine

The common fear among teachers is that AI is actually writing our essays for us, but that isn’t what happens. The more effective, and increasingly popular, strategy is to tell the algorithm what your topic is and ask for a central claim, then have it give you an outline to argue this claim. Depending on the topic, you might even be able to have it write each paragraph the outline calls for, one by one, then rewrite them yourself to make them flow better. Chronicle of Higher Ed 

Marc Watkins, lecturer in composition and rhetoric at the University of Mississippi: “Our students are not John Henry, and AI is not a steam-powered drilling machine that will replace them. We don’t need to exhaust ourselves trying to surpass technology.” Inside Higher Ed

These tools can function like personal assistants: Ask ChatGPT to create a study schedule, simplify a complex idea, or suggest topics for a research paper, and it can do that. That could be a boon for students who have trouble managing their time, processing information, or ordering their thoughts. Chronicle of Higher Ed

Students who lack confidence in their ability to learn might allow the products of these AI tools to replace their own voices or ideas.  Chronicle of Higher Ed

Students describe using OpenAI’s tool as well as others for much more than generating essays. They are asking the bots to create workout plans, give relationship advice, suggest characters for a short story, make a joke and provide recipes for the random things left in their refrigerators. Washington Post 

Bots like ChatGPT show great promise as a “writing consultant” for students. “It’s not often that students have a chance to sit down with a professor and have long discussions about how to go about this paper, that paper, how to approach research on this topic and that topic. But ChatGPT can do that for them, provided…they know how to use the right ethics, to use it as a tool and not a replacement for their work.” CalMatters

Don’t rely on AI to know things instead of knowing them yourself. AI can lend a helping hand, but it’s an artificial intelligence that isn’t the same as yours. One scientist described to me how younger colleagues often “cobble together a solution” to a problem by using AI. But if the solution doesn’t work, “they don’t have anywhere to turn because they don’t understand the crux of the problem” that they’re trying to solve. Chronicle of Higher Ed

Janine Holc thinks that students are much too reliant on generative AI, defaulting to it, she wrote, “for even the smallest writing, such as a one sentence response uploaded to a shared document.” As a result, wrote Holc, a professor of political science at Loyola University Maryland, “they have lost confidence in their own writing process. I think the issue of confidence in one’s own voice is something to be addressed as we grapple with this topic.” Chronicle of Higher Ed

It’s a conversation that can be evoked at will. But it’s not different in the content. You still have to evaluate what someone says and whether or not it’s sensible. CalMatters

Helena Kashleva, an adjunct instructor at Florida SouthWestern State College, spots a sea-change in STEM education, noting that many assignments in introductory courses serve mainly to check students’ understanding. “With the advent of AI, grading such assignments becomes pointless.” Chronicle of Higher Ed

Given how widely faculty members vary on what kinds of AI are OK for students to use, though, that may be an impossible goal. And of course, even if they find common ground, the technology is evolving so quickly that policies may soon become obsolete. Students are also getting more savvy in their use of these tools. It’s going to be hard for their instructors to keep up. Chronicle of Higher Ed

In situations when you or your group feel stuck, generative AI can definitely help. The trick is to learn how to prompt it in a way that can help you get unstuck. Sometimes you’ll need to try a few prompts up until you’ll get something you like.  UXdesign.cc 

Proponents contend that classroom chatbots could democratize the idea of tutoring by automatically customizing responses to students, allowing them to work on lessons at their own pace. Critics warn that the bots, which are trained on vast databases of texts, can fabricate plausible-sounding misinformation — making them a risky bet for schools. New York Times

Parents are eager to have their children use the generative AI technology in the classroom. Sixty-four percent said they think teachers and schools should allow students to use ChatGPT to do schoolwork, with 28 percent saying that schools should encourage the technology’s use. Ed Week

Student newspaper editors at Middlebury College have called for a reconsideration of the school’s honor code after a survey found two-thirds of students admitted to breaking it—nearly twice as many as before the pandemic. Wall Street Journal

If you are accused of cheating with AI, Google Docs or Microsoft Word could help. Both offer a version history function that can keep track of changes to the file, so you can demonstrate how long you worked on it and that whole chunks didn’t magically appear. Some students simply screen record themselves writing. Washington Post

There is no bright line between “my intelligence” and “other intelligence,” artificial or otherwise. It’s an academic truism that no idea exists in an intellectual vacuum. We use other people’s ideas whenever we quote or paraphrase. The important thing is how. Chronicle of Higher Ed 

Quizlet has announced four new AI features that will help with student learning and managing their classwork, including Magic Notes, Memory Score, Quick Summary, and AI-Enhanced Expert Solutions.  ZDnet

James Neave, Adzuna’s head of data science, recommends interested job applicants build up their AI skills and stand out from the competition in three key ways: Stay on top of developments, use AI in your own work, and show how you’ve used AI successfully to achieve a specific goal. CNBC 

Basak-Odisio will use it only, he said, if he has procrastinated too much and is facing an impossible deadline. “If it is the day or night before, and I want to finish something as quickly as possible — ” he said, trailing off. “But,” he added, “I want to be better than that.” Washington Post 

back to top

Teaching with AI

Microsoft unveils first professional certificate for generative AI skills – ZDnet  

Confused About Which AI Tools to Use? These Teachers Have Advice – Education Week  

The Sentient Syllabus Project – a collaborative effort launched by Professor Boris Steipe 

4 Steps to Help You Plan for ChatGPT in Your Classroom - Chronicle of Higher Ed

Is ChatGPT being embraced in classrooms this semester? – Semafor

AI Guidance for Faculty from Harvard’s Office of Undergraduate Education – Harvard

Schools Need to Help Students Use AI Tools Effectively, Expert Says – EdWeek

What I Learned From an Experiment to Apply Generative AI to My Data Course - EdSurge News 

Why You Should Rethink Your Resistance to ChatGPT – Chronicle of Higher Ed

1 in 10 teens already use ChatGPT for school. Here’s how to guide them. – Washington Post 

Research shows that when students feel confident that they can successfully do the work assigned to them, they are less likely to cheat. And an important way to boost students’ confidence is to provide them with opportunities to experience success. ChatGPT can facilitate such experiences by offering students individualized support and breaking down complex problems into smaller challenges or tasks. The Conversation

Rather than trying to stop the tools and, for instance, telling students not to use them, in my class I’m telling students to embrace them – but I expect their quality of work to be that much better now they have the help of these tools. Ultimately, by the end of the semester, I'm expecting the students to turn in assignments that are substantially more creative and interesting than the ones last year’s students or previous generations of students could have created. World Economic Forum

ChatGPT can be directed to deliver feedback using positive, empathetic and encouraging language. For example, if a student completes a math problem incorrectly, instead of merely telling the student “You are wrong and the correct answer is …,” ChatGPT may initiate a conversation with the student. The Conversation 

“AI can help with lesson planning,” Kerry O’Grady, an associate professor of public relations at Columbia University, wrote, “including selecting examples, reviewing key concepts before class, and helping with teaching/activity ideas.” This, she says, can help professors save both time and energy. Chronicle of Higher Ed

I don’t think that AI is going to necessarily destroy education. I don’t think it’s going to revolutionize education, either. I think it’s just going to sort of expand the toolbox of what’s possible in our classrooms. CalMatters

AI could analyze an individual learner's strengths, weaknesses and learning styles during online training and then recommend the most effective teaching methods and most relevant resources. Eventually, AI-powered virtual assistants could become standard features in learning platforms by providing real-time support and feedback to learners as they progress through their courses. TechTarget

Use these tools to help you understand challenging passages in assigned readings, or to build preliminary foundational knowledge to help you understand more difficult concepts. Don’t use AI to cheat — use it as a tool to help you learn. Chronicle of Higher Ed

As AI-enabled cheating roils colleges, professors turn to an ancient testing method: oral examinations, which date at least to ancient Greece, are getting new attention. Wall Street Journal

Even as some educators raise concerns, others see potential for new AI technology to reduce teacher workloads or help bring teaching materials to life in new ways. EdSurge

Professors can use the new technology to encourage students to engage in a range of productive ChatGPT activities, including thinking, questioning, debating, identifying shortcomings and experimenting. Inside Higher Ed 

Ethan Mollick, a professor at the University of Pennsylvania’s Wharton School of Business, said ChatGPT has already changed his expectations of his students. “I expect them to write more and expect them to write better,” he said. “This is a force multiplier for writing. I expect them to use it.” Forbes

ChatGPT can create David, said David Chrisinger, who directs the writing program at the Harris School of Public Policy at the University of Chicago, referring to the famous Michelangelo statue. “But his head is too big and his legs are too short. Now it’s our job to interrogate the evidence and improve on what it gives us,” he said. Wall Street Journal 

For some educators, the chatbot helps to make their job easier by creating lesson plans and material for their students. Mashable 

We can teach students that there is a time, place and a way to use GPT3 and other AI writing tools. It depends on the learning objectives. Inside Higher Ed 

Judging from the reaction on TikTok, teachers on the app see ChatGPT as a tool to be treated the same way calculators and cell phones are used in class — as resources to help students succeed but not do the work for them. Mashable 

Faculty members need time to play with new tools and explore their implications. Administrators can carve out time for faculty training support. How does bias play out in your area within the model? Inside Higher Ed 

Here’s what I plan to do about chatbots in my classes: pretty much nothing. Washington Post

If a program can do a job as well as a person, then humans shouldn’t duplicate those abilities; they must surpass them. The next task for higher education, then, is to prepare graduates to make the most effective use of the new tools and to rise above and go beyond their limitations. That means pedagogies that emphasize active and experiential learning, that show students how to take advantage of these new technologies and that produce graduates who can do those things that the tools can’t. Inside Higher Ed 

Are new rubrics and assignment descriptions needed? Will you add an AI writing code of conduct to your syllabus? Divisions or departments might agree on expectations across courses. That way, students need not scramble to interpret academic misconduct across multiple courses. Inside Higher Ed

We should be telling our undergraduates that good writing isn’t just about subject-verb agreement or avoiding grammatical errors—not even good academic writing. Good writing reminds us of our humanity, the humanity of others and all the ugly, beautiful ways in which we exist in the world. Inside Higher Ed

(Some) professors are enthusiastic, or at least intrigued, by the possibility of incorporating generative AI into academic life. Those same tools can help students — and professors — brainstorm, kick-start an essay, explain a confusing idea, and smooth out awkward first drafts. Equally important, these faculty members argue, is their responsibility to prepare students for a world in which these technologies will be incorporated into everyday life, helping to produce everything from a professional email to a legal contract. Chronicle of Higher Ed

After discovering my first ChatGPT essay, I decided that going forward, students can use generative A.I. on assignments, so long as they disclose how and why. I’m hoping this will lead to less banging my head against the kitchen table–and, at its best, be its own kind of lesson. Slate

There’s plenty to agree on, such as motivating students to do their own work, adapting teaching to this new reality, and fostering AI literacy. Chronicle of Higher Ed

As academe adjusts to a world with ChatGPT, faculty will need to find fresh ways to assess students’ writing. The same was true when calculators first began to appear in math classrooms, and professors adapted the exams. “Academic integrity is about being honest about the way you did your work.” Spell checkers, David Rettinger, president emeritus at the International Center for Academic Integrity, pointed out, are a prime example of artificial intelligence that may have been controversial at first, but are now used routinely without a second thought to produce papers. Chronicle of Higher Ed

For those tasked with performing tedious and formulaic writing, we don’t doubt that some version of this tool could be a boon. Perhaps ChatGPT’s most grateful academic users will not be students, but deans and department heads racking their brains for buzzwords on “excellence” while talking up the latest strategic plan. Public Books

These technologies introduce opportunities for educators to rethink assessment practices and engage students in deeper and more meaningful learning that can promote critical thinking skills. World Economic Forum 

Khan Academy founder Sal Khan says the latest version of the generative AI engine makes a pretty good tutor. Axios

Information that was once dispensed in the classroom is now everywhere: first online, then in chatbots. What educators must now do is show students not only how to find it, but what information to trust and what not to, and how to tell the difference. MIT Tech Review

Don’t wait until you feel like an expert to discuss AI in your courses. Learn about it in class alongside your students. Chronicle of Higher Ed

The old education model in which teachers deliver information to later be condensed and repeated will not prepare our students for success in the classroom—or the jobs of tomorrow. Brookings

What if we could train it on our own rules and regulations, so if it hits an ethical issue or a problem, it could say to students: ‘you need to stop here and take that problem to the ethical lead.’ Columbia Journalism Review

I look at it as the future of: What if we could program it to be our substitute teacher at school? EdSurge 

Once you start to think of a chatbot as a tool, rather than a replacement, its possibilities become very exciting. Vice

Training ourselves and our students to work with AI doesn’t require inviting AI to every conversation we have. In fact, I believe it’s essential that we don’t.  Inside Higher Ed

A US survey of 1,002 K–12 teachers and 1,000 students between 12 and 17, commissioned by the Walton Family Foundation in February, found that more than half the teachers had used ChatGPT—10% of them reported using it every day—but only a third of the students. Nearly all those who had used it (88% of teachers and 79% of students) said it had a positive impact. MIT Tech Review

For my students and for the public, the quickest way to feel hopeless in the face of seemingly unstoppable technological change is to decide that it is all-powerful and too complicated for an ordinary person to understand. Slate 

Consider the tools relative to your course. What are the cognitive tasks students need to perform without AI assistance? When should students rely on AI assistance? Where can an AI aid facilitate a better outcome? Are there efficiencies in grading that can be gained? Are new rubrics and assignment descriptions needed? Will you add an AI writing code of conduct to your syllabus? Do these changes require structural shifts in timetabling, class size or number of teaching assistants? Inside Higher Ed

Last night, I received an essay draft from a student. I passed it along to OpenAI’s bots. “Can you fix this essay up and make it better?” Turns out, it could. It kept the student’s words intact but employed them more gracefully; it removed the clutter so the ideas were able to shine through. It was like magic. The Atlantic

Its ability to do so well in that niche might be a reminder to us that we’ve allowed academic writing to become a little bit too tightly bound up in a predictable pattern. Maybe forcing us to stretch the kind of assignments we’re giving students is not a bad thing. Inside Higher Ed 

The teaching of writing has too often involved teaching students to follow an algorithm. Your essay will have five paragraphs; start the first one with a sentence about your main idea, then fill in three paragraphs with supporting ideas, then wrap it up with a conclusion. Call it a format or a template or an algorithm. Schools have taught students to assemble essays to satisfy algorithms for judging their writing—algorithms that may be used by either humans or software, with little real difference. If this kind of writing can be done by a machine that doesn’t have a single thought in its head, what does that tell us about what we’ve been asking of students? The unfortunate side effect is that teachers end up grading students not on the quality of their end product, but on how well they followed the teacher-required algorithm. Forbes

AI writing tools bring urgency to a pedagogical question: If a machine can produce prose that accomplishes the learning outcomes of a college writing assignment, what does that say about the assignment? Inside Higher Ed

ChatGPT is a dynamic demonstration that if you approach an essay by thinking “I’ll just write something about Huckleberry Finn,” you get mediocre junk. Better thinking about what you want the essay to be about, what you want it to say, and how you want to say it gets you a better result, even if you’re having an app do the grunt work of stringing words together. Forbes 

AI is trained on large data sets; if the data set of writing on which the writing tool is trained reflects societal prejudices, then the essays it produces will likely reproduce those views. Similarly, if the training sets underrepresent the views of marginalized populations, then the essays they produce may omit those views as well. Inside Higher Ed 

Artificial intelligence is likely to have some impact on how students write, according to John Gallagher, a professor in the English department at the University of Illinois. When word processors replaced typewriters, written sentences got longer and more complicated, he said. Wall Street Journal

In-class exams — the ChatGPT-induced alternative to writing assignments — are worthless when it comes to learning how to write, because no professor expects to see polished prose in such time-limited contexts. Washington Post

Students will only gravitate to chat bots if the message they are getting from their writing instructors is that the most important qualities of writing are technical proficiency and correctness. Inside Higher Ed 

Hold individual conferences on student writing or ask students to submit audio/video reflections on their writing. As we talk with students about their writing, or listen to them talk about it, we get a better sense of their thinking. By encouraging student engagement and building relationships, these activities could discourage reliance on automated tools. Critical AI

It’s not easy to write like a human, especially now, when AI or the worn-in grooves of scholarly habits are right there at hand. Resist the temptation to produce robotic prose, though, and you’ll find that you’re reaching new human readers, in the way that only human writers can. Chronicle of Higher Ed

Here’s an idea for extracting something positive from the inevitable prominence that chatbots will achieve in coming years. My students and I can spend some class time critically appraising a chatbot-generated essay, revealing its shortcomings and deconstructing its strengths. Washington Post

David Chrisinger, who directs the writing program at the Harris School of Public Policy at the University of Chicago, is asking his students to generate a 600-word essay using ChatGPT. Then their assignment is to think of more incisive questions to elicit a stronger response. Finally, they are required to edit the essay for tone and voice and to tailor it to the intended audience. Wall Street Journal

Instead of just presenting conclusions, give the reader a glimpse of your origin story as a researcher, a sense of the stumbling blocks you encountered along the way, and a description of the elation or illumination you felt when you experienced your eureka moment. If you tell stories, tell them well. Chronicle of Higher Ed 

Students may be more likely to complete an assignment without automated assistance if they’ve gotten started through in-class writing. (Note: In-class writing, whether digital or handwritten, may have downsides for students with anxiety and disabilities). Critical AI 

In a world where students are taught to write like robots, a robot can write for them. Students who care more about their GPA than muddling through ideas and learning how to think will run to The Bot to produce the cleanest written English. The goal is to work through thoughts and further research and revision to land on something potentially messy but deeply thought out. Inside Higher Ed 

ChatGPT is good at grammar and syntax but suffers from formulaic, derivative, or inaccurate content. The tool seems more beneficial for those who already have a lot of experience writing–not those learning how to develop ideas, organize thinking, support propositions with evidence, conduct independent research, and so on. Critical AI 

What many of us notice about art or prose generated by A.I. is that it’s often bland and vague. It’s missing a humanistic core. It’s missing an individual person’s passion, pain, longings and a life of deeply felt personal experiences. It does not spring from a person’s imagination, bursts of insight, anxiety and joy that underlie any profound work of human creativity. New York Times

The most obvious response, and one that I suspect many professors will pursue, involves replacing the standard five-page paper assignment with an in-class exam. Others expect to continue with the papers but have suggested that the assigned topics should be revised to focus on lesser-known works or ideas about which a chatbot might not “know” too much. Washington Post 

Assigning personal writing may still help motivate students to write and, in that way, deter misuse of AI. Chronicle of Higher Ed

We’re expecting students to use ChatGPT to write a first draft of their paper but then not use it to revise the paper.  I don’t consider myself a pessimist about human nature, but in what world do we humans take a perfectly good tool that helped us get from point A to point B and then decline its offer to take us from point B to point C? Inside Higher Ed 

Writing teacher John Warner wrote, “If AI can replace what students do, why have students keep doing that?” He recommended changing “the way we grade so that the fluent but dull prose that ChatGPT can churn out does not actually pass muster.” Chronicle of Higher Ed

Assign writing that is as interesting and meaningful to students as possible. Connecting prompts to real-world situations and allowing for student choice and creativity within the bounds of the assignment can help. Chronicle of Higher Ed

No one creates writing assignments because the artifact of one more student essay will be useful in the world; we assign them because the process itself is valuable. Through writing, students can learn how to clarify their thoughts and find a voice. If they understand the benefits of struggling to put words together, they are more likely not to resort to a text generator. Chronicle of Higher Ed 

Really soon, we’re not going to be able to tell where the human ends and where the robot begins, at least in terms of writing. Chronicle of Higher Ed

Many teachers have reacted to ChatGPT by imagining how to give writing assignments now—maybe they should be written out by hand, or given only in class—but that seems to me shortsighted. The question isn’t “How will we get around this?” but rather “Is this still worth doing?” The Atlantic

Rather than fully embracing AI as a writing assistant, the reasonable conclusion is that there needs to be a split between assignments on which using AI is encouraged and assignments on which using AI can’t possibly help. Chronicle of Higher Ed

As the co-editors of a book series on teaching in higher education, we receive many queries and proposals from academic writers. A significant percentage of those proposals — which often include sample chapters — are written in prose that reads like it was generated by ChatGPT. The author’s ideas are laid out like bullet points on a whiteboard, the citations are dense and numerous, and the examples and stories (if there are any) are pale and lifeless. The most successful books in our series are the ones that don’t read like that. Their authors have demolished — or at least weakened — the wall that separates their subject matter from their lives. Chronicle of Higher Ed

(A professor) plans to weave ChatGPT into lessons by asking students to evaluate the chatbot’s responses. “What’s happening in class is no longer going to be, ‘Here are some questions — let’s talk about it between us human beings,’” he said, but instead “it’s like, ‘What also does this alien robot think?’” New York Times

Prof Jim is a software company that can turn existing written materials—like textbooks, Wikipedia pages or a teacher’s notes—into these animated videos at the push of a button. A teacher could use the software to turn a Wikipedia page about, say, the Grand Canyon into a video. EdSurge 

Some professors are redesigning their courses entirely, making changes that include more oral exams, group work and handwritten assessments in lieu of typed ones. New York Times

There is no understanding or intent behind AI outputs. But warning students about the mistakes that result from this lack of understanding is not enough. It’s easy to pay lip service to the notion that AI has limitations and still end up treating AI text as more reliable than it is. There’s a well-documented tendency to project onto AI; we need to work against that by helping students practice recognizing its failings. One way to do this is to model generating and critiquing outputs and then have students try on their own. Can they detect fabrications, misrepresentations, fallacies and perpetuation of harmful stereotypes? If students aren’t ready to critique ChatGPT’s output, then we shouldn’t choose it as a learning aid. Inside Higher Ed 

ChatGPT could help teachers shift away from an excessive focus on final results. Getting a class to engage with AI and think critically about what it generates could make teaching feel more human “rather than asking students to write and perform like robots.” MIT Tech Review

Reverting to analog forms of assessment, like oral exams, can put students with disabilities at a disadvantage. And outright bans on AI tools could cement a culture of distrust. “It’s going to be harder for students to learn in an environment where a teacher is trying to catch them cheating,” says Trust. “It shifts the focus from learning to just trying to get a good grade.” Wired

I’ve given students assignments to “cheat” on their final papers with text-generating software. In doing so, most students learn—often to their surprise—as much about the limits of these technologies as their seemingly revolutionary potential. Some come away quite critical of AI, believing more firmly in their own voices. Others grow curious about how to adapt these tools for different goals or about professional or educational domains they could impact. Inside Higher Ed 

ChatGPT can play the role of a debate opponent and generate counterarguments to a student’s positions. By exposing students to an endless supply of opposing viewpoints, chatbots could help them look for weak points in their own thinking. MIT Tech Review 

Assign reflection to help students understand their own thought processes and motivations for using these tools, as well as the impact AI has on their learning and writing. Inside Higher Ed

In March, Quizlet updated its app with a feature called Q-Chat, built using ChatGPT, that tailors material to each user’s needs. The app adjusts the difficulty of the questions according to how well students know the material they’re studying and how they prefer to learn. Some educators think future textbooks could be bundled with chatbots trained on their contents. Students would have a conversation with the bot about the book’s contents as well as (or instead of) reading it. The chatbot could generate personalized quizzes to coach students on topics they understand less well. MIT Tech Review

Encourage students to use peer-reviewed journals as sources. These types of journals are not available to ChatGPT, so by teaching our students about them and requiring their use in essays, we can ensure that the content being presented is truly original. The Tech Insider 

Students must then take apart and improve upon the ChatGPT-generated essay—an exercise designed to teach critical analysis, the craft of precise thesis statements, and a feel for what “good writing” looks like. Wired 

Show students examples of inaccuracy, bias, logical, and stylistic problems in automated outputs. We can build students’ cognitive abilities by modeling and encouraging this kind of critique. Critical AI

Far from being just a dream machine for cheaters, many teachers now believe, ChatGPT could actually help make education better. Advanced chatbots could be used as powerful classroom aids that make lessons more interactive, teach students media literacy, generate personalized lesson plans, save teachers time on admin, and more. MIT Tech Review

When possible, scaffold your assignments to promote revision and growth over time, with opportunities for feedback from peers, TAs, and/or the instructor. Build assignment pre-writing or brainstorming into class time and invite students to share and discuss these ideas in small groups or with the class as a whole. Barnard College 

Nontraditional learners could get more out of tools like ChatGPT than mainstream methods. It could be an audio-visual assistant where students can freely ask as many clarifying questions as necessary without judgment. Teachers juggling countless individualized education plans could also take advantage of ChatGPT by asking how to curate lesson plans for students with disabilities or other learning requirements. New York Magazine 

Discuss students’ potentially diverse motivations for using ChatGPT or other generative AI software. Do they arise from stress about the writing and research process? Time management on big projects? Competition with other students? Experimentation and curiosity about using AI? Grade and/or other pressures and/or burnout? Invite your students to have an honest discussion about these and related questions. Cultivate an environment in your course in which students will feel comfortable approaching you if they need more direct support from you, their peers, or a campus resource to successfully complete an assignment. Barnard College 

We will need to teach students to contest it. Students in every major will need to know how to challenge or defend the appropriateness of a given model for a given question. To teach them how to do that, we don’t need to hastily construct a new field called “critical AI studies.” The intellectual resources students need are already present in the history and philosophy of science courses, along with the disciplines of statistics and machine learning themselves, which are deeply self-conscious about their own epistemic procedures. Chronicle of Higher Ed 

Spend some time discussing the definition (or definitions) of academic honesty and discuss your own expectations for academic honesty with your students. Be open, specific, and direct about what those expectations are. Barnard College

Experiential learning will become the norm. Everyone will need an internship. Employers will want assurances that a new graduate can follow directions, complete tasks, demonstrate judgment. Chronicle of Higher Ed

Khan Academy released the Khanmigo project which is able to help students as a virtual tutor or debating partner and helps teachers with administrative tasks such as generating lesson plans. Columbia Journalism Review

One situation in which I have found ChatGPT extremely useful is writing multiple-choice questions. It’s quite easy to write a question and the right answer, but coming up with three plausible wrong answers is tricky. I found that if I prompted ChatGPT with the following: “Write a multi-choice question about <topic of interest> with four answers, and not using ‘all of the above’ as an answer,” it came up with good wrong answers. This was incredibly helpful. Nature

ChatGPT outperformed most of his (journalism) students who were in the early part of the course. But students would have to seek out sources, do on-the-ground reporting, and find the important trends in the data. “And all of that, you’re not gonna get from ChatGPT.” Columbia Journalism Review

There is a reason why educational video games are not as engaging as regular video games. There is a reason why AI-generated educational videos will never be as engaging as regular videos. Brenda Laurel pointed to the ‘chocolate-covered broccoli’ problem over 20 years ago … her point still stands. EdSurge

“While the tool may be able to provide quick and easy answers to questions, it does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success,” said Jenna Lyle, a spokesperson for the New York City Department of Education. Mashable 

This tech is being primarily pitched as a money-saving device—so it will be taken up by school authorities that are looking to save money. As soon as a cash-strapped administrator has decided that they’re happy to let technology drive a whole lesson, then they no longer need a highly-paid professional teacher in the room—they just need someone to trouble-shoot any glitches and keep an eye on the students. EdSurge

Some commentators are urging teachers to introduce ChatGPT into the curriculum as early as possible (a valuable revenue stream and data source). Students, they argue, must begin to develop new skills such as prompt engineering. What these (often well-intentioned) techno-enthusiasts forget is that they have decades of writing solo under their belts. Just as drivers who turn the wheel over to flawed autopilot systems surrender their judgment to an over-hyped technology, so a future generation raised on language models could end up, in effect, never learning to drive. Public Books

Some professors have leapt out front, producing newsletters, creating explainer videos, and crowdsourcing resources and classroom policies. The one thing that academics can’t afford to do, teaching and tech experts say, is ignore what’s happening. Sooner or later, the technology will catch up with them, whether they encounter a student at the end of the semester who may have used it inappropriately, or realize that it’s shaping their discipline and their students’ futures in unstoppable ways. Chronicle of Higher Ed 

(The) notion that college students (can) learn to write by using chatbots to generate a synthetic first draft, which they afterwards revise, overlooks the fundamentals of a complex process. Since text generators do a good job with syntax but suffer from simplistic, derivative, or inaccurate content, requiring students to work from this shallow foundation is hardly the best way to empower their thinking, hone their technique, or even help them develop a solid grasp of an LLM’s limitations. The purpose of a college research essay is not to teach students how to fact-check and gussy up pre-digested pablum. It is to enable them to develop and substantiate their own robust propositions and truth claims. Public Books 

If a professor runs students’ work through a detector without informing them in advance, that could be an academic-integrity violation in itself.  The student could then appeal the decision on grounds of deceptive assessment, “and they would probably win.” Chronicle of Higher Ed

We are dangerously close to creating two strata of students: those whom we deem smart and insightful and deeply thoughtful, if sometimes guilty of a typo, and those who seem less engaged with the material, or less able to have serious thoughts about it. Inside Higher Ed

The challenge here is in communicating to students that AI isn’t a replacement for real thinking or critical analysis, and that heavy reliance on such platforms can lead away from genuine learning. Also, because AI platforms like ChatGPT retrieve information from multiple unknown sources, and the accuracy of the information cannot be guaranteed, students need to be wary about using the chatbot’s content. The Straits Times 

It seems futile for faculty members to spend their energies figuring out what a current version can’t do. Chronicle of Higher Ed

It is important to be aware that ChatGPT’s potential sharing of personal information with third parties may raise serious privacy concerns for your students and perhaps in particular for students from marginalized backgrounds. Barnard College 

How might chatting with AI systems affect vulnerable students, including those with depression, anxiety, and other mental-health challenges? Chronicle of Higher Ed 

Students need considerable support to make sure ChatGPT promotes learning rather than getting in the way of it. Some students find it harder to move beyond the tool’s output and make it their own. “It needs to be a jumping-off point rather than a crutch.” MIT Tech Review

Using AI

How to use ChatGPT to brainstorm anything – Geeky-Gadgets

How to Use AI Tools to Easily Make Short-Form TikTok and Reels Videos – Tech.co

How to Use ChatGPT in Non-Evil Ways – Vice

Want More Clarity on Generative AI? Experiment Widely – MIT Tech Review

How to Use A.I. to Edit and Generate Stunning Photos – New York Times  

Specific steps in how to use ChatGPT - Wharton School 

ChatGPT Vision lets you submit images in your prompts: 7 wild ways people are using it -Mashable  

Generative AI is now a part of everyday life, for good and bad. Here’s how to make the tech work for you – Technical.ly

The 4 Best AI Generator Tools For Writing Essays, Blogs & More – Hive.com

This is the best AI technology you’re probably not using – Washington Post  

YouTube has AI creator tools, but creators are too busy battling AI to care - Polygon

New AI Dev Platform Allows You to Customize Open Source LLMs – The New Stack

How to write fiction and non-fiction books using ChatGPT – Geeky-Gadgets

Google's AI note-taking service 'NotebookLM' is now available – TechSpot  

From bench to bot: How to use AI tools to convert notes into a draft – The Transmitter

Bard can now watch YouTube videos for you – The Verge

You can now create AI images right from Google Search — here’s how – Tom’s Guide

7 things I’ll tell my best friend, who is just getting started with Midjourney – Medium

Amazon Launches Free AI Classes in Bid to Win Talent Arms Race – Wall Street Journal

I tried Microsoft's AI-powered assistant, Copilot. The tool helpfully attends meetings and summarizes emails, but it's best to treat it as a rookie intern – Business Insider  

7 AI Tools That Help You Write Emails - MakeUseOf 

Create Stunning Data Viz in Seconds with ChatGPT – KD Nuggets

How To Use Google's New AI Image Generator in Search – Tech.co

How to Use AI to Get Your Next Job, According to Career Experts – Reader’s Digest

Prompt Structure in Conversations with Generative AI – Nielsen Norman Group 

Health Care & AI

AI’s big test: Making sense of $4 trillion in medical expenses - Politico 

How to Use ChatGPT for Health: Doctors, Professionals Give Tips - Bloomberg 

AI is accelerating drug discovery but if clinical development fails to keep pace, the benefits to patients will be delayed - McKinsey

Medical AI Tools Can Make Dangerous Mistakes. Can the Government Help Prevent Them? - WSJ

UnitedHealth uses AI model with 90% error rate to deny care, lawsuit alleges – Ars Technica  

AI that reads brain scans shows promise for finding Alzheimer’s genes – Nature

New A.I. Tool Diagnoses Brain Tumors on the Operating Table – New York Times

Health data in the UK is about to flow more freely, like it or not (podcast) – The Guardian

Doctors Wrestle With A.I. in Patient Care, Citing Lax Oversight – New York Times

Researchers at Northwestern Medicine have created a generative AI system that can produce text reports interpreting chest radiographs as accurately as radiologists. – Health IT Analytics

Where healthcare needs to focus for AI – Fast Company 

ChatGPT was 72% accurate in clinical decision-making on medical cases drawn from textbooks, from diagnoses to care decisions - Axios

How to Use ChatGPT for Cognitive Behavioral Therapy - MakeUseOf

Balancing The Pros And Cons Of AI In Healthcare – Forbes  

Google reveals new generative AI models for healthcare – Health Care Dive  

Eliminating Racial Bias in Health Care AI – Yale School of Medicine  

Why AI Is Medicine’s Biggest Moment Since Antibiotics - Wall Street Journal

I’m an ER doctor: Here’s what AI startups get wrong about “ChatGPT for telehealth” – Fast Company 

Hospital bosses love AI. Doctors and nurses are worried. – Washington Post

Deep Learning Model Detects Diabetes Using Routine Chest Radiographs – Health IT Analytics 

AI Helps a Stroke Patient Speak Again, a Milestone for Tech and Neuroscience - New York Times 

Google DeepMind’s AI Model Scours Our Genes to Guess Who Might Get Sick - Wall Street Journal

A boy saw 17 doctors over 3 years for chronic pain. ChatGPT found the right diagnosis - NBC Today Show

AI might be listening during your next health appointment - Axios

A step towards AI-based precision medicine - Science Daily

Is the Eye the Window to Alzheimer’s? New AI tools could diagnose the disease with visual scans - Wall Street Journal

Predicting epileptic seizures with AI - RIU Research  

AI’s potential to accelerate drug discovery needs a reality check - Nature  

An AI Tool That Can Help Forecast Viral Outbreaks – Harvard Medical School

Microsoft announces new AI tools to help doctors deliver better care – CNBC

Cigna Accused of Using AI, Not Doctors, to Deny Claims: Lawsuit – Medscape

Here's what AI-powered doctor's visits are like – CNBC

AI-supported mammogram screening increases breast cancer detection by 20%, study finds – CNN

New AI tool can help treat brain tumors more quickly and accurately, study finds – The Guardian

Google’s medical AI chatbot is already being tested in hospitals – The Verge  

The AI Opportunity for Life Sciences and Pharma in the Age of ChatGPT – Expert.ai

A mental health tech company ran an AI experiment on real users. Nothing’s stopping apps from conducting more – NBC News

IBM is using generative systems to develop new semiconductors and molecules that can help fight cancer or bacterial infection – IBM

AI-Generated Data Could Be a Boon for Healthcare—If Only It Seemed More Real – Wall Street Journal

AI-brain implant helped patient gain feeling in his hand again – Mobile Syrup 

The AI Will See You Now - Wall Street Journal

AI tool could help spot lung cancer years in advance – Washington Post

ChatGPT Will See You Now: Doctors Using AI to Answer Patient Questions - Wall Street Journal

ChatGPT improves doctors’ ability to communicate empathetically with patients – New York Times

A Doctor Published Several Research Papers With Breakneck Speed. ChatGPT Wrote Them All - Digg

Patients were told their voices could disappear. They turned to AI to save them - Washington Post  

The algorithm has been trained to make medical predictions based on reading genomes - Washington Post  

Scientists have used AI to discover a new antibiotic that can kill a deadly species of superbug - BBC

AI Tool Assists in Predicting the Likelihood of Pancreatic Cancer - Health IT Analytics

For now, the new AI in health care is going to be less a genius partner than a tireless scribe - New York Times

Predictions about AI

An OpenAI employee says prompt engineering is not the skill of the future — but knowing how to talk to humans will be – Business Insider 

Generative AI will move from hype to actually being helpful – Semafor

How ‘A.I. Agents’ That Roam the Internet Could One Day Replace Workers – New York Times

Why AI struggles to predict the future – NPR 

How AI will upend the customer service industry - Semafor 

OpenAI’s chief scientist, on his hopes and fears for the future of AI - MIT Technology Review

Forrester’s 2024 Predictions Report warns of AI ‘shadow pandemic’ as employees adopt unauthorized tools – VentureBeat  

2024: The year AI gets real - Axios

The biggest winners — and losers — in the coming AI job apocalypse – Business Insider

Now That Generative AI Is Here, Where Will All The Data Come From? – Forbes

Researchers think there’s a 5% chance AI could wipe out humanity – Semafor

Generative AI a la ChatGPT is pushing investors to new extremes of hype – Axios  

The Generative AI Bubble Will Burst Soon – KD Nuggets

Wall Street Watchdog Says AI Will Cause 'Unavoidable' Economic Collapse – Gizmodo

Experts Predict the Future of Technology, AI & Humanity – Wired 

An English professor long interested in the statistical analysis of literature thinks AI is a game-changer in our understanding of texts – Business Insider

How AI Is Impacting Society And Shaping The Future – Forbes

In its own words: The future of AI in sports – Sports Business Journal

iPhone 16 is poised to be an AI superphone — 5 rumors you need to know – Tom’s Guide

Everyone gets an AI agent – The Nieman Lab

Klarna CEO on how AI will make online shopping more 'emotional' – Semafor

Where is AI Heading in 2024? Looking Ahead To AI In 2024 – Forbes

Within five years, everyone will have access to an AI personal assistant, a function described as a personal chief-of-staff. In this vision, everybody will have access to an AI that knows you, is super smart, and understands your personal history. -Venture Beat

Some experts in generative AI predict that as much as 90% of content on the internet could be artificially generated within a few years. -Bloomberg 

Currently, most AI falls under narrow or specialized intelligence — good at one thing but pretty useless otherwise. However, we’re inching closer to Artificial General Intelligence (AGI), where machines can understand, learn, and apply knowledge across different domains. -Christophe Atten writing in Medium

It is certainly the case that many new technologies have led to bad outcomes – often the same technologies that have been otherwise enormously beneficial to our welfare. So it’s not that the mere existence of a moral panic means there is nothing to be concerned about. But a moral panic is by its very nature irrational – it takes what may be a legitimate concern and inflates it into a level of hysteria that ironically makes it harder to confront actually serious concerns. And wow do we have a full-blown moral panic about AI right now. -Marc Andreessen writing in a16z

All of the software we’ve ever used was engineered to work backward from an outcome. Its creators wanted to help you find a webpage or play a game or operate a laptop. Perhaps you’ve noticed that the major AI chatbots arrived with almost no user documentation or instructions. A lump of clay doesn’t come with instructions either. That’s what makes this moment unique — and so worthy of species-level #1 foam-finger pride. We humans have created a tool for potentially infinite tasks. Its imperfections are ours to solve — and its powers still ours to shape. – Washington Post 

“AI may cause a new Renaissance, perhaps a new phase of the Enlightenment,” Yann LeCun, one of the godfathers of modern artificial intelligence, suggested earlier this year. AI can already make some existing scientific processes faster and more efficient, but can it do more, by transforming the way science itself is done? Such transformations have happened before. – The Economist

DeepMind’s cofounder says generative AI is just a phase. What’s next is interactive AI: bots that can carry out tasks you set for them by calling on other software and other people to get stuff done. “Technology is going to be animated. It’s going to have the potential freedom, if you give it, to take actions. It’s truly a step change in the history of our species that we’re creating tools that have this kind of, you know, agency.” -MIT Tech Review 

What If the Robots Were Very Nice While They Took Over the World? First it was chess and Go. Now AI can beat us at Diplomacy, the most human of board games. The way it wins offers hope that maybe AI will be a delight. -Wired 

People need to develop “rugged flexibility,” to manage change most effectively. In other words, people need to learn how to be strong and hold on to what is most useful but also to bend and adapt to change by embracing what is new. -Venture Beat

Imagine if your brain got 10 times smarter every year over the past decade, and you were on pace for more 10x compounding increases in intelligence over at least the next five. Throw in precise recall of everything you’ve ever learned and the ability to synthesize all those materials instantly in any language. You wouldn’t be just the smartest person to have ever lived — you’d be all the smartest people to have ever lived. (Though not the wisest.) That’s a plausible trajectory of the largest AI models. -Washington Post
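The compounding in the passage above is easy to make concrete: ten consecutive 10x improvements multiply together, yielding a factor of 10^10. A one-line arithmetic sketch:

```python
# Compounded capability growth as described in the quote:
# a 10x improvement per year, sustained over a decade.
factor_per_year = 10
years = 10
total = factor_per_year ** years  # growth multiplies, not adds
print(total)  # 10000000000, i.e. ten billion times the starting capability
```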

We seem to be in what I can only call an “AI lull.” The initial excitement about ChatGPT, which started in January, has receded. Do not be deceived. While the hype and marketing may have died down, at least on the retail side, the AI revolution will continue. -Bloomberg

What Happens When AI Has Read Everything? – The Atlantic  

AI Is About to Make Social Media Much More Toxic - The Atlantic 

Hallucinations Could Blunt ChatGPT’s Success - Spectrum    

The AI-powered, totally autonomous future of war Is here – Wired

The AI emotions dreamed up by ChatGPT – BBC

When the Movies Pictured A.I., They Imagined the Wrong Disaster – New York Times

The ChatGPT buzz and why it will be over sooner than you think – Venture Beat

What happens when we can no longer differentiate a human from a machine? – The Hill 

The Hype Cycle of AI – Expert.ai

Attackers (will) use artificial intelligence to write software that can break into corporate networks in novel ways, change appearance and functionality to beat detection, and smuggle data back out through processes that appear normal. Washington Post  

Actor Tom Hanks believes he will be starring in new film roles long after his death, as he speculated on the possibility that his likeness could be captured by AI. Forbes

Any site that depends on contributions from the public — text messages, product reviews, photo or video uploads — is preparing to be swamped with AI-generated input that will make finding signal in the noise even harder for human users. Axios

Robots presented at an AI forum said they expected to increase in number and help solve global problems, and would not steal humans' jobs or rebel against us. Reuters

While much of the media attention has been on large language models, the field of causal AI has gotten comparatively little. If causal reasoning is combined with large language models, it could have a major impact on humanity. Semafor 

In a way, I’m agnostic to that question of “do we need more breakthroughs or will existing systems just scale all the way?” My view is it’s an empirical question, and one should push both as hard as possible. And then the results will speak for themselves. (DeepMind CEO Demis Hassabis) -The Verge

Artificial intelligences that are trained using text and images from other AIs, which have themselves been trained on AI outputs, could eventually become functionally useless. New Scientist
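The degradation described above is often called "model collapse," and the mechanism can be illustrated with a toy simulation (this is an illustrative sketch, not the study's actual method): fit a simple Gaussian "model" to data, then repeatedly refit it on its own finite samples. With no fresh real data, the fitted distribution drifts away from the original one over generations.

```python
import random
import statistics

def collapse_demo(generations=30, sample_size=50, seed=42):
    """Toy model-collapse simulation: each generation fits a Gaussian
    to samples drawn only from the previous generation's fitted Gaussian."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0          # generation 0: the "real" data distribution
    history = [sigma]
    for _ in range(generations):
        data = [rng.gauss(mu, sigma) for _ in range(sample_size)]
        mu = statistics.fmean(data)    # refit on purely synthetic data
        sigma = statistics.stdev(data)
        history.append(sigma)
    return history

spreads = collapse_demo()
print(f"initial spread: {spreads[0]:.2f}, final spread: {spreads[-1]:.2f}")
```

Each refit introduces sampling error that is never corrected by real data, so the estimated spread performs a random walk and the model's notion of the original distribution degrades.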

One need not even know how to program to construct attack software. “You will be able to say, ‘just tell me how to break into a system,’ and it will say, ‘here’s 10 paths in’,” said Robert Hansen, who has explored AI as deputy chief technology officer at security firm Tenable. “They are just going to get in. It’ll be a very different world.” Washington Post

Fifty-six percent of respondents (in a recent survey) think ‘people will develop emotional relationships with AI’ and 35 percent of people said they’d be open to doing so if they were lonely. The Verge 

In 2019, Christian Szegedy, a computer scientist formerly at Google and now at a start-up in the Bay Area, predicted that a computer system would match or exceed the problem-solving ability of the best human mathematicians within a decade. Last year he revised the target date to 2026. New York Times

What to Expect from AI in 2023 – Towards AI

The Prospect of an AI Winter – Erich Grunewald Blog

A.I. May Change Everything, but Probably Not Too Quickly - New York Times

ChatGPT could make life easier — here’s when it’s worth it – Washington Post

A.I. Technology: 8 Questions About the Future - New York Times

As AI Spreads, Experts Predict the Best and Worst Changes in Digital Life by 2035 – Pew Research  

Think AI was impressive last year? Wait until you see what’s coming - Vox  

The 2024 election cycle 'is poised to be the first election where A.I.-generated content is prevalent  - New York Times

Humans will specialize in whatever AI does worst. - Chronicle of Higher Ed

AI will certainly force us to concentrate on those talents and skills that will remain uniquely human. - Chronicle of Higher Ed

Will writers start proclaiming they are “natural” writers, with no AI use in their work, akin to bodybuilders who choose not to use performance-enhancing drugs? - Washington Post 

It’s going to creep into our lives in ways we least expect it  - Wall Street Journal

While I think that A.I. tools help express our creativity, creativity will still be the driving force behind the future of art. - New York Magazine

The new web is struggling to be born, and the decisions we make now will shape how it grows  - The Verge 

The role of software engineers will evolve into one of guiding and overseeing the AI's work, providing input and feedback, and ensuring that the generated code meets the project's requirements.  Prompt engineering will be critical in using automated code generators as prompts must be carefully crafted to accurately capture the intent of the desired code.  Forbes

Many types of work will be taken over by machines, and jobs will vanish. This change is typically seen as a cause for gloom. I suggest we see it as an opportunity to revitalize education by replacing unsatisfying work with meaningful labor. Chronicle of Higher Ed

back to top 

Possibilities: Things People are Trying to Get AI To Do

A new tool to counter California’s housing crisis: AI - Semafor 

Can AI Replace Your Financial Adviser? Not Yet. But Wait. - Wall Street Journal

AI models can analyze thousands of words at a time. A Google researcher has found a way to increase that by millions.– Business Insider 

New deep learning AI tool helps ecologists monitor rare birds through their songs – Phys.org

When AI Denies Your Loan Application, Should You Be Able to Appeal to a Human? – Wall Street Journal  

Edith Piaf AI-Generated Biopic in the Works at Warner Music – Variety  

ChatGPT and Midjourney bring back the dead with generative AI – Axios

How advances in AI can make content moderation harder — and easier - Semafor

Can AI Rescue Recycling? - Wall Street Journal  

The US has a new plan for wielding AI to fight climate change - Semafor

AI Doom Calculator is predicting people's death – USA Today  

Jeff Bezos Bets on a Google Challenger Using AI to Try to Upend Internet Search  - Wall Street Journal

Can A.I. solve rape cases? To find out, a Cleveland professor programmed a computer to analyze thousands of police reports -Cleveland.com 

Some in the (book) publishing world are already experimenting with AI programs in areas such as marketing, advertising, audiobook production and even writing, weighing their promise of supporting work done by humans against the threat that the machines may replace them. -NY Times

AI-powered technology may also help revitalize endangered languages, including by processing and storing languages and identifying language patterns. Additionally, AI may help accomplish these tasks at unprecedented speeds or just in time, before an endangered language goes extinct. -Inside Higher Ed

Many in publishing are taking action to protect their work. The Authors Guild recently organized a petition signed by thousands of writers demanding that companies seek their approval before using their work to train A.I. programs. Agencies representing illustrators have also revised their contracts to keep their work from being used to feed A.I. programs. Penguin Random House, the country’s largest book publisher, said it considers the “unauthorized ingestion” of content to train A.I. models to be a copyright infringement. New York Times

Text With Jesus replicates an instant messaging platform, with biblical figures impersonated by the artificial intelligence program ChatGPT. The launching of the app stirred reactions ranging from amusement to accusations of blasphemy and heresy. -Religious News Service 

Can ChatGPT become a content moderator? The technique is still not as effective as experienced human moderators, OpenAI found. But it outperforms moderators that have had light training. -Semafor

Can A.I. Detect Wildfires Faster Than Humans? California Is Trying to Find Out. -New York Times

AI providers begin to explore new terrain: chatbots in salary negotiations – Axios 

Coca-Cola launches beverage created with the help of artificial intelligence -Food Dive

Get Ready for AI Chatbots That Do Your Boring Chores - Wired 

Alexa, will generative AI make you more useful? -Semafor

Can AI predict, and try to prevent, homelessness? -NPR

Can AI Flirt?

Can You Flirt Better Than Artificial Intelligence? – Wall Street Journal

Could AI read my thoughts?

A Brain Scanner Combined with an AI Language Model Can Provide a Glimpse into Your Thoughts – Scientific American

A.I. Is Getting Better at Mind-Reading: In a recent experiment, researchers used large language models to translate brain activity into words. – New York Times

Can AI help me find a date?

AI apps are being used to help people connect on dating apps – NPR

Will AI change the self-help industry?

The Goopification of AI: A new generation of chatbots is poised to become the next frontier of self-help – The Atlantic

Can AI be a therapist?

Virtual therapists can help veterans reluctant to open up to a person – Wired

Startups are using ChatGPT to meet soaring demand for chatbot therapy - Semafor

Can AI provide therapy in someone’s native language?

Virtual therapists can help people struggling to access in-person therapy in their native languages. – Wiley

Can AI help with text & Tinder?

How to use ChatGPT for texting and Tinder without being a jerk - The Washington Post

Can AI read my mind?

New AI system could help people who lost their ability to speak - CBS News

Can AI rap?

A Swedish newspaper is having AI rap its articles in an attempt to get young people interested in the news - Business Insider 

Are AI pets available?

AI pets are booming: They can include realistic programmed personalities — plus tails that wag - AI Time Journal   

Can AI pass an MBA test?

ChatGPT passes Wharton Business School's MBA exam, gets a B - Interesting Engineering

Can AI simulate large-scale economic or political events?

Generative AI landscape: Potential future trends - Tech Target  

Can AI contest parking tickets?

I asked ChatGPT to contest my parking ticket - Fast Company

Can AI set insurance rates?

Generative AI is helping figure out who is riskier for insurers – Semafor

Can AI create fashion?

Generative AI: Unlocking the future of fashion - McKinsey  

Can AI explain history?

How AI is helping historians better understand our past - MIT Tech Review 

Can an AI be my lawyer’s assistant?

Why it’s imperative lawyers adopt a ‘legal copilot’ model with AI – Legal Dive

LexisNexis has launched a generative AI tool that can draft documents, conduct research & summarize legal issues - LawNext  

Can AI make decent movies?

Another Reason Hollywood Will Love AI - Wall Street Journal

Welcome to the new surreal. How AI-generated video is changing film - MIT Tech Review 

Can AI spot materials inside of images?

Researchers use AI to identify similar materials in images - MIT Tech Review

Can AI pick hit songs?

Accurately predicting hit songs using neurophysiology and machine learning - Frontiers

Neuro-forecasting the next No. 1 song - Axios

Can AI replace data scientists?

Are data scientists still needed in the age of generative AI? - KD Nuggets

Can AI plan your trip better than you can?

In Milan, Putting an A.I. Travel Adviser to the Test - New York Times 

Can AI build a website?

How to use AI Art and ChatGPT to Create Insane Web Designs - Codex Community (video) 

Can AI play Minecraft?

They Plugged GPT-4 Into Minecraft—and Unearthed New Potential for AI - Wired

Can AI provide commentary at tennis matches?

Wimbledon to introduce AI-powered commentary to coverage this year – The Guardian

Can AI translate the Bible?

USC researchers use AI to help translate Bible into very rare languages – Religious News Service

Can AI make Astrological Readings?

Is A.I. the Future of Astrology? – New York Times

Can AI Do your Taxes?

Ready for AI to help you do your taxes? Taxfyle’s got you covered – Refresh Miami

How about answering questions from a ‘biblical’ perspective?

Christian creators build chatbots with ‘biblical’ worldview – Religious News Service

Can AI change the way wars are fought?

Our Oppenheimer Moment: The Creation of AI Weapons – New York Times

Can AI Replace Humans?

We Went to the Fast-Food Drive-Through to Find Out – Wall Street Journal

Can AI Build Websites?

Mobile website builder Universe launches AI-powered designer – Tech Crunch

Can AI write sermons?

Start-up AI Platform Aims to Help Pastors Make the Most of Their Sunday Sermons – Christian Standard

Can AI write a song?

We asked Google’s new AI music bot to write us a song. We instantly regretted it – Science Focus

Can AI pilot airplanes & drones?

AI pilots, the future of aerial warfare – Air Force Tech

Can AI bring historical figures to life?

AI Chatbots Now Let You Talk to Historical Figures Like Shakespeare and Andy Warhol – My Modern Met

Can AI create decent headshots?

I Used AI To Create My Professional Headshots And The Results Were Either Great Or Hilarious – Digg

back to top 

Possibilities: Things AI Can Do Now

Build Walls

An autonomous excavator can build a wall out of nearby boulders, guided by an AI system that determines the best placement for each boulder – HackaDay

Predict Weather

NASA and IBM are building an AI for weather and climate applications – Engadget

Map Icebergs

Researchers at the University of Leeds have created an AI system that can map icebergs in satellite images 10,000 times faster than humans - ESA

Mute Chip Crunching

Frito-Lay has created an AI-powered mic filter that can remove the crunching sound created by eating chips during online gaming sessions – Marketing Drive

Predict New Stable Compounds

Google DeepMind has created an AI system that can predict the structure of crystalline materials much faster than humans – The Next Web

Track Tomatoes

A data visualization tracking tomato production in Europe – Data Innovation

Make a Movie

I asked ChatGPT to create a Hallmark Christmas movie — and it went better than expected – Tom’s Guide

Be a Personal Assistant

Personalized A.I. Agents Are Here. Is the World Ready for Them? – New York Times 

Win Awards

The Grammys will consider that viral song with Drake and The Weeknd AI vocals for awards after all – Engadget

Let you Speak in Other Languages

This new AI video tool clones your voice in 7 languages — and it's blowing up – Tom’s Guide

Edit Major Movies

How Will Editors Use AI? The Tech’s Role in Production and Post Scrutinized at IBC – Hollywood Reporter   

Translate Podcasts

Spotify develops AI-powered voice cloning tool that can translate podcasts into multiple languages – Music Business Worldwide

Fashion Modeling

Spanish influencer agency designed this AI model after deciding real-life influencers are a pain – BGR 

Create Anime  

Tezuka Fans Unimpressed by Black Jack's First Official AI-Generated Manga – CBR

Read Books for you

Why Read Books When You Can Use Chatbots to Talk to Them Instead? - WIRED

Haggle with Sellers

See how well you can haggle with AI to purchase real products – AI Garage Sale  

Create a Dead Actor’s Voice

AI-Generated Jimmy Stewart Narrates Bedtime Story for Calm App – Variety

And do other chores

6 ChatGPT mind-blowing extensions to use it anywhere – Medium  

AI chatbots were tasked to run a tech company. They built software in under seven minutes — for less than $1. – Business Insider

AI Has Already Created As Many Images As Photographers Have Taken in 150 Years. Statistics for 2023 – Every Pixel

A.I. Can’t Build a High-Rise, but It Can Speed Up the Job – New York Times

AI-generated books are infiltrating online bookstores - Axios

GenAI Is Making Data Science More Accessible - Datanami

Can AI summaries save you from endless virtual meetings? – Washington Post

Amazon is bringing a whole lot of AI to Thursday Night Football this season – The Verge

7 Projects Built with Generative AI by Data Scientists – KD Nuggets

The Novel Written about—and with—Artificial Intelligence – The Walrus

The IRS will use AI to crack down on wealthy potential tax violators - Axios 

Stability AI has now released a code generator called StableCode – VentureBeat

AI is already helping 911 operators. Here’s what the future of emergencies looks like – Fast Company

Generative AI: Here are the use cases across industries – Economic Times

How AI is bringing film stars back from the dead – BBC

How AI is Revolutionizing the Insurance Industry - Stack Diary

How to Create QR Code Art using Stable Diffusion – Urvashi on Medium

8 questions CISOs should be asking about AI – CSO  

How to Use A.I. for Family Time – New York Times

Sweetspot is an AI search engine for the U.S. government contract maze - Semafor 

ChatGPT Code Interpreter: What is It and What Can You Do With It – Stack Diary

AI can write code

Making basic coding obsolete - Semafor

AI changes the software-making game - Axios

Now, folks who don’t know how to code can easily write automation scripts to run in browsers - KHOI    

It is becoming more feasible for AI systems to take over the role of coding – Forbes

AI can write email

Microsoft Will Use OpenAI Tech to Write Emails for Busy Salespeople – Bloomberg

AI can help with SEO

24 Experts share how they are using ChatGPT to help with SEO efforts - Matt Tutt

AI can spot phishing sites

ChatGPT shows promise in detecting phishing sites – Help Net Security 

AI can fight El Niño

AI is helping scientists and startups fight El Niño - Semafor

AI can help build video games

Game makers put generative AI to imaginative work - Axios

AI can write political speeches

ChatGPT writes lawmaker’s speech – Boston Globe

back to top 
