Machine thinking is great for understanding behavioral patterns across populations. It is not great for understanding the unique individual right in front of you. If you can understand another person’s perspective, you have a more valuable skill than any possessed by some machine vacuuming up vast masses of data about no one in particular. New York Times
Ian Bogost suggests that ChatGPT produces “an icon of the answer … rather than the answer itself.” The Atlantic
A large language model is not capable of conducting independent research or gathering new information. It is only capable of generating text based on the input it is given, so it would not be able to provide original insights or perspectives on the topic at hand. Inside Higher Ed
The ability to create and give a good speech, connect with an audience, and organize fun and productive gatherings seems like a suite of skills that A.I. will not replicate. New York Times
The idea that “AI” can navigate contested terrain by flagging “disagreement” and synthesizing links to “both sides” is hardly sufficient. Such illusions of balance obscure the need to situate information and differentiate among sources: precisely the critical skills that college writing was designed to cultivate and empower. Public Books
Something I noticed when I asked ChatGPT to write a short story: It makes everything sound like an unfunny parody. New York Magazine
I’ve learned that it is being used for such daily tasks as: translating code from one programming language to another, potentially saving hours spent searching web forums for a solution; generating plain-language summaries of published research, or identifying key arguments on a particular topic; and creating bullet points to pull into a presentation or lecture. Chronicle of Higher Ed
If AI-generated forensic sketches are ever released to the public, they can reinforce stereotypes and racial biases and can hamper an investigation by directing attention to people who look like the sketch instead of the actual perpetrator. Vice
AI feels mundane. It just feels like using any other technology. So we really need to reckon with our own expectations, turn down the hype, and close the gap between what we imagine and what the reality is. The Markup
The information produced by AI language models and chatbots is often incorrect. The tricky thing is that when it’s wrong, it’s wrong in ways that are difficult to spot. The Verge
Our tests found that it sometimes offers responses that potentially include plagiarism, contradict itself, are factually incorrect or have grammatical errors, to name a few — all of which could be problematic at work. Washington Post
When we discuss hallucinations and out-of-date databases, we should be careful about reaching summative judgments. These products are still very much in development; there will be new innovations, and there will be bigger and better pools of data that will stir the pot among ranking brands and products. Inside Higher Ed
CNET quietly published articles explaining financial topics using “automated technology” – a stylistic euphemism for AI – and then had to issue corrections on 41 of the 77 stories after uncovering errors, despite the articles having been reviewed by humans prior to publication. Some of the errors came down to basic math. Columbia Journalism Review
I think the questionable accuracy of responses provided by ChatGPT is its biggest downside. It means the user is responsible for verifying the information, which takes away the ease people are attributing to ChatGPT. Demand Sage
ChatGPT has proven inept at reproducing even the simplest ideas in rocketry. In addition to messing up the rocket equation, it bungled concepts such as the thrust-to-weight ratio, a basic measure of a rocket's ability to fly. NPR
ChatGPT can write poemlike streams of regurgitated text, but . . . they don’t satisfy the minimal criterion of a poem, which is a pattern of language that compresses the messy data of experience, emotion, truth, or knowledge and turns those, as W. H. Auden wrote in 1935, into “memorable speech.” The Atlantic
Even if researchers trained these systems solely on peer-reviewed scientific literature, they might still produce statements that were scientifically ridiculous. Even if they learned solely from text that was true, they might still produce untruths. Even if they learned only from text that was wholesome, they might still generate something creepy. New York Times