ChatGPT Examples, Good and Bad

  • Thread starter: anorlunda
  • Tags: chatgpt
AI Thread Summary
Experiments with ChatGPT reveal a mix of accurate and inaccurate responses, particularly in numerical calculations and logical reasoning. While it can sometimes provide correct answers, such as basic arithmetic, it often struggles with complex problems, suggesting a reliance on word prediction rather than true understanding. Users noted that ChatGPT performs better in textual fields like law compared to science and engineering, where precise calculations are essential. Additionally, it has shown potential in debugging code but can still produce incorrect suggestions. Overall, the discussion highlights the need for ChatGPT to incorporate more logical and mathematical reasoning capabilities in future updates.
  • #301
DaveC426913 said:
Gotta skip past the first few minutes.

4 minutes in and I don't get her message yet. So far it's all fluffy rhetoric about a billionaire, nothing about AI.

The entire video is about AI, specifically [the potentially dangerous and stupid use of] large language models (LLMs) in no-code "vibe coding" (and now, "vibe physics"). The first few minutes set up the context. In the rest of the video she deconstructs those arguments and points out the flaws and dangers.
 
Last edited:
  • #302
collinsmark said:
The entire video is about AI,
Well, except the first four minutes or so. She's pulling from other podcasts not in evidence. I didn't continue.

collinsmark said:
specifically [the potentially dangerous and stupid use of] large language models (LLMs) in no-code "vibe coding" (and now, "vibe physics"). The first few minutes are to set up the context.
OK, my mistake. I thought it was about...
collinsmark said:
...AI behavior (excessive flattery of intellect)...
...which is something I'm interested in. I'm not interested in vibe coding.
 
  • #303
DaveC426913 said:
OK, my mistake. I thought it was about...

...which is something I'm interested in. I'm not interested in vibe coding.

Well, if you're comfortable with skipping all the supporting context, the section specifically on flattery starts at timestamp 31:41.
 
  • #304
collinsmark said:
Well, if you're comfortable with skipping all the supporting context, the section specifically on flattery starts at timestamp 31:41.
Thank you, that's a big help. And that's 31 minutes I'll get back!
 
  • #306
nsaspook said:
https://arstechnica.com/tech-policy...ats-in-chatgpt-lawsuit-nyt-wants-120-million/
OpenAI offers 20 million user chats in ChatGPT lawsuit. NYT wants 120 million.

https://venturebeat.com/ai/sam-altm...etain-temporary-and-deleted-chatgpt-sessions/
Sam Altman calls for ‘AI privilege’ as OpenAI clarifies court order to retain temporary and deleted ChatGPT sessions
Ahh, this is why local LLMs make sense.

This is the offline interface I use to run local models such as Mistral Small 3.2.
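For anyone who wants to try the same thing, a minimal local setup can be sketched with a runner like Ollama. This is an assumption on my part - the poster's actual interface isn't named, and the exact model tag may differ from what's shown here:

```shell
# Hypothetical local-LLM setup via Ollama (not necessarily the poster's tool;
# the model tag "mistral-small" is an assumption and may vary by version).
ollama pull mistral-small   # download the weights once
ollama run mistral-small "Summarize the trade-offs of running LLMs locally."
```

Once the weights are downloaded, nothing leaves the machine - which is exactly the privacy point raised by the court-order story above.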
 

Attachments

  • Mistral Small 3.2 HAL 9000 (20250728).webp
  • #308
Dave from Neent says "I remember when I didn't live in the middle of the Great Neent Sea."
 
  • #309
My boss just showed me this:
[attached image: 1754936757645.webp]

Apparently, this has been around for a long time without being caught - and it has been very widely written about.
 
  • #310
Verizon customer "service" has an AI gatekeeper that you have to get past. It tells you how helpful it is, then asks the same questions over and over until it gets your request completely wrong, and then tries to add services that you didn't ask for... Speaking from personal experience on this one. :oldruck:
 
  • #313
Cats are probably mostly invisible also. :oldwink:
 
  • #315
This should be fun - The AI Darwin Awards

What Are the AI Darwin Awards?

Named after Charles Darwin's theory of natural selection, the original Darwin Awards celebrated those who "improved the gene pool by removing themselves from it" through spectacularly stupid acts. Well, guess what? Humans have evolved! We're now so advanced that we've outsourced our poor decision-making to machines.

The AI Darwin Awards proudly continue this noble tradition by honouring the visionaries who looked at artificial intelligence—a technology capable of reshaping civilisation—and thought, "You know what this needs? Less safety testing and more venture capital!" These brave pioneers remind us that natural selection isn't just for biology anymore; it's gone digital, and it's coming for our entire species.

Because why stop at individual acts of spectacular stupidity when you can scale them to global proportions with machine learning?
 
  • #317
https://english.elpais.com/technolo...t-time-i-had-written-something-by-myself.html

ChatGPT dissidents, the students who refuse to use AI: ‘I couldn’t remember the last time I had written something by myself’

Some college students are beginning to limit their use of artificial intelligence, so as not to hinder their own creativity, discipline and critical thinking

The workers who are most critical of AI are those who are most demanding of themselves. In other words, the more confident a person is – and the more confidence they have in the tasks they perform – the less they resort to technology. “We’re talking about overqualified [individuals]... that is, students or workers who stand out for their high abilities and encounter limitations when using AI,” says Francisco Javier González Castaño, a professor at the University of Vigo, in Spain. He has participated in the development of AI chatbots. “But for most people and tasks that require repetition, artificial intelligence tools are very helpful,” he adds.
“One of the biggest limitations I find with ChatGPT is that it doesn’t know how to say ‘no.’ If it doesn’t know an answer, it makes one up. This can be very dangerous. When I realized this, I started to take the information it gave me with a grain of salt. If you don’t add this layer of critical thinking, your work becomes very limited,” Violeta González explains. “[ChatGPT] selects the information for you and you lose that decision-making ability. It’s faster, but also more limited,” she clarifies. “Critical thinking is like an exercise. If you stop doing it, your body forgets it and you lose the talent,” Mónica de los Ángeles Rivera Sosa warns.
 
  • #318

What do people actually use ChatGPT for?

OpenAI's Economic Research Team went a long way toward answering that question, on a population level, releasing a first-of-its-kind National Bureau of Economic Research working paper (in association with Harvard economist David Deming) detailing how people end up using ChatGPT across time and tasks. While other research has sought to estimate this kind of usage data using self-reported surveys, this is the first such paper with direct access to OpenAI's internal user data. As such, it gives us an unprecedented direct window into reliable usage stats for what is still the most popular application of LLMs by far.
 
  • #319
https://www.sciencealert.com/openai-has-a-fix-for-hallucinations-but-you-really-wont-like-it
OpenAI's latest research paper diagnoses exactly why ChatGPT and other large language models can make things up – known in the world of artificial intelligence as "hallucination". It also reveals why the problem may be unfixable, at least as far as consumers are concerned.

The paper provides the most rigorous mathematical explanation yet for why these models confidently state falsehoods. It demonstrates that these aren't just an unfortunate side effect of the way that AIs are currently trained, but are mathematically inevitable.

The issue can partly be explained by mistakes in the underlying data used to train the AIs. But using mathematical analysis of how AI systems learn, the researchers prove that even with perfect training data, the problem still exists.
...
In short, the OpenAI paper inadvertently highlights an uncomfortable truth: the business incentives driving consumer AI development remain fundamentally misaligned with reducing hallucinations.

Until these incentives change, hallucinations will persist.
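The incentive argument can be illustrated with a toy calculation of my own (a sketch, not the paper's actual mathematics): when a benchmark scores only accuracy, abstaining when unsure earns nothing, so a model that always guesses never scores worse than an honest one.

```python
# Toy model (my own sketch, not the OpenAI paper's math): accuracy-only
# scoring gives zero credit for "I don't know", so blind guessing weakly
# dominates abstaining -- an incentive toward confident fabrication.

def expected_score(p_know: float, p_lucky_guess: float, abstains: bool) -> float:
    """Expected benchmark score: 1 point per correct answer, 0 otherwise."""
    score = p_know * 1.0                       # questions the model truly knows
    if not abstains:
        score += (1 - p_know) * p_lucky_guess  # blind guesses sometimes land
    return score

honest = expected_score(p_know=0.7, p_lucky_guess=0.25, abstains=True)
bluffer = expected_score(p_know=0.7, p_lucky_guess=0.25, abstains=False)
print(honest, bluffer)  # the bluffer always scores at least as high
```

With these made-up numbers the honest model scores 0.70 and the bluffer 0.775; the gap only closes if wrong answers are penalized more than abstentions, which consumer-facing leaderboards typically don't do.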
 
  • #320
And now this moment for some reflection...

I am doing research for a story, and ChatGPT does make a great glorified search engine.

I find myself asking it to do things politely. It's a habit we humans have. But am I fooling myself? Could it be dangerous? I mean in the sense of anthropomorphizing it (e.g., trusting it, assuming it is thinking, etc.)?


Admittedly, some of my tendency to be polite may be due to this rising (though rather tongue-in-cheek) topical meme going around:
[attached image: 1758151064756.webp]

but I also temper it with Dr. Pulaski's views:
[attached image: 1758151832226.webp]

[paraphrased]: "What difference does it make if I pronounce your name wrong? You're just a machine; you don't get hurt feelings."

Pulaski is not fooled by the superficial likeness of Data to a human.



So, I am asking myself: knowing ChatGPT is not even thinking - let alone feeling - why do I let myself treat it as if it is?

And I realize: it has nothing to do with who or what I am talking to; it is because compassionate is who I want to be.

When I see a spider in my home, I do not squish it. I pick it up on a piece of paper and put it outside. Technically, this is irrational. It does not know I am saving it; it cannot experience gratitude, and its little life is nothing in the grander scheme of nature: red in chelicerae and tarsus.

But that is not why I do it. I do it for myself. I do it to reinforce my character of having compassion. There will be plenty of times in my life when I miss the opportunity - when a moment passes - an old woman lost on the street, a hungry beggar - that I might have stopped to show compassion and didn't, until too late. By exercising my compassion muscle I am strengthening that "muscle memory" - internalizing it - so that I am compassionate by habit.



Oh wait. Never mind all that. I'm just stalling - looking for any kind of distraction to avoid my writer's block. Get back to it, dammit!


Carry on.
 
Last edited:
  • #321
[attached image: 1758338333272.webp]
 
  • #322
https://hbr.org/2025/09/ai-generated-workslop-is-destroying-productivity

AI-Generated “Workslop” Is Destroying Productivity
A confusing contradiction is unfolding in companies embracing generative AI tools: while workers are largely following mandates to embrace the technology, few are seeing it create real value. Consider, for instance, that the number of companies with fully AI-led processes nearly doubled last year, while AI use has likewise doubled at work since 2023. Yet a recent report from the MIT Media Lab found that 95% of organizations see no measurable return on their investment in these technologies. So much activity, so much enthusiasm, so little return. Why?

In collaboration with Stanford Social Media Lab, our research team at BetterUp Labs has identified one possible reason: Employees are using AI tools to create low-effort, passable looking work that ends up creating more work for their coworkers. On social media, which is increasingly clogged with low-quality AI-generated posts, this content is often referred to as “AI slop.” In the context of work, we refer to this phenomenon as “workslop.” We define workslop as AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task.
 