ChatGPT has started talking to me!

  • Thread starter: julian
  • Tags: Ai Technology
Summary
ChatGPT's real-time verbal interaction capabilities create a surreal experience for users, offering nine distinct voices, each with unique characteristics. Users have experimented with the voices, including attempts to mimic accents, such as Australian and Spanish, with varying success. The conversation highlights the limitations of ChatGPT's understanding, emphasizing that it operates more like a sophisticated predictive algorithm rather than possessing true comprehension or memory. Users noted that while ChatGPT can engage in tasks like playing chess, it often makes errors in visual representation and relies heavily on user corrections to improve its performance. Discussions also touch on the nature of AI, with some arguing that its behavior mimics understanding without genuine cognitive processes. The conversation reflects a blend of fascination and skepticism regarding AI's capabilities, particularly in terms of strategic thinking and learning from interactions.
  • #61
DaveC426913 said:
This is not intelligence. This is fakery.
Even if we assume that it is capable of solving all existing problems, that doesn't make it intelligent (intelligent describes someone who creates a machine capable of solving all existing problems automatically).

ChatGPT responds to your input (it doesn't work without input). The difference from a calculator is that ChatGPT is able to work with many languages of different orders, both as input and output, which a calculator cannot. It doesn't really care what language or concepts you use; it's a machine; it understands and operates in machine language. It simply has to translate your input.
 
Last edited:
  • #62
No one is claiming ChatGPT is perfect. We all know that you can confuse it or get it to tie itself in knots. Those are imperfections.

You cannot confuse a calculator like that. Or a conventional computer program. Because those things have specific inputs and outputs and you play by their rules or not at all. ChatGPT tries to play by your rules. Sometimes it succeeds and sometimes it fails.
 
  • #63
PeroK said:
No one is claiming ChatGPT is perfect. We all know that you can confuse it or get it to tie itself in knots. Those are imperfections.

You cannot confuse a calculator like that. Or a conventional computer program. Because those things have specific inputs and outputs and you play by their rules or not at all. ChatGPT tries to play by your rules. Sometimes it succeeds and sometimes it fails.
It's an impressive machine, that's for sure.
 
  • #64
javisot said:
Even if we assume that it is capable of solving all existing problems, that doesn't make it intelligent (intelligent describes someone who creates a machine capable of solving all existing problems automatically).

ChatGPT responds to your input (it doesn't work without input). The difference from a calculator is that ChatGPT is able to work with many languages of different orders, both as input and output, which a calculator cannot. It doesn't really care what language or concepts you use; it's a machine; it understands and operates in machine language. It simply has to translate your input.
If you cannot see the fundamental difference between a calculator and an LLM, then there is no possibility of a serious debate.

Even if you claim that an LLM has no intelligence, you must see the huge evolutionary step between it and a pocket calculator.
 
  • #65
PeroK said:
If you cannot see the fundamental difference between a calculator and an LLM, then there is no possibility of a serious debate.

Even if you claim that an LLM has no intelligence, you must see the huge evolutionary step between it and a pocket calculator.
One discussion you can have with ChatGPT is about the quantity and quality of the different inputs it can receive (unlike a calculator). This makes the quantity and quality of the output it can generate greater than any calculator's, which in turn makes it a problem to predict all the answers it can offer. It's a more versatile calculator, that is true.

There is no doubt that LLMs are an incredible advance. Building calculators capable of operating on numbers was a major breakthrough, and achieving the same with natural language is also a major breakthrough.
 
  • #66
PeroK said:
For example, it's absolutely pointless to claim that modern chess engines don't understand chess
That seems like suggesting that physics engineers don't understand physics; they have just been trained (= programmed) to follow a procedure.

When a chess computer responds with a syntax error, the engineer's response is usually "that's philosophy"; next question.

The question is not totally unreasonable, but it may apply to both machines and humans. I have yet to find a perfect human. They also make mistakes, and hallucinate!

/Fredrik
 
  • Haha: russ_watters
  • #67
By the way, no offense to @erobz . We're just having a lively academic discussion with differing opinions, right? I'm not criticizing you or finger-wagging or anything. Your view is a perfectly valid and widely-held one, and I am no authority; I just hold a different opinion on the matter. Yeah?
 
  • #68
DaveC426913 said:
By the way, no offense to @erobz . We're just having a lively academic discussion with differing opinions, right? I'm not criticizing you or finger-wagging or anything. Your view is a perfectly valid and widely-held one, and I am no authority; I just hold a different opinion on the matter. Yeah?
I don't think your Round Robin is trivial. How do you know that a solution meeting your requirements even exists? You might have been asking it to do something impossible. Let's say there were 6 teams, 3 boards, and 2 nights: is there even a solution where each team plays on a unique board each night? Maybe I don't understand the problem, or how it can be solved algorithmically with an arbitrary number of teams, boards, and nights. It feels like a derangement problem, and those aren't easy to solve.

There are 45 pairings to make with 10 teams. With 10 boards and 10 nights, there are 100 slots in which to place them such that no team plays on the same board twice. This seems like a challenging combinatorial problem to me. Now, I'm just a schmuck in this realm of combinatorial mathematics, but how do you guarantee a derangement exists that meets all your criteria?
 
Last edited:
  • #69
erobz said:
I don't think your Round Robin is trivial.
It may be hard for a human to do on paper, but only because it's onerous (which is why I was asking it in the first place).

It's a trivial combinatorial problem: the kind of thing computers do with one eye tied behind their back.

erobz said:
How do you know that there is even a solution that you require?
I did not ask for some perfect solution. I simply asked it to rotate teams across the boards, which it demonstrably did not even try to do.

So why wouldn't it try, rather than making up a complete crock of an answer AND then lying about it?
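As an aside, the loose requirement in the original request ("each team cycles through the five boards regularly") really is mechanical to satisfy. Here is a minimal sketch using the standard circle-method round robin, with a simple board-rotation rule bolted on; the rotation rule is my own illustrative choice, not anything ChatGPT produced:

```python
from itertools import chain

def round_robin(n=10, n_boards=5):
    """Circle-method round robin: n-1 rounds of n/2 games each.
    Each game is (team_a, team_b, board); the board index is shifted
    every round, so no team is parked on one board all tournament."""
    teams = list(range(n))
    schedule = []
    for r in range(n - 1):
        games = []
        for i in range(n // 2):
            a, b = teams[i], teams[n - 1 - i]  # pair positions i and n-1-i
            games.append((a, b, (i + r) % n_boards))  # rotate boards per round
        schedule.append(games)
        # rotate every team except the first one position clockwise
        teams = [teams[0], teams[-1]] + teams[1:-1]
    return schedule

sched = round_robin()
# sanity check: every pair of teams meets exactly once (45 pairings)
pairs = [tuple(sorted((a, b))) for a, b, _ in chain.from_iterable(sched)]
assert len(pairs) == len(set(pairs)) == 45
```

Note that with 9 rounds and only 5 boards, pigeonhole forces every team to repeat some board, so "never repeat a board" is impossible here; but cycling "regularly" in the loose sense of the request is a few lines of bookkeeping.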
 
  • #70
DaveC426913 said:
So why wouldn't it try, rather than making up a complete crock of an answer AND then lying about it?
It's not trivial, and it thinks like a human. It may not even be possible, what you are asking of it!

What is the derangement for 4 teams and 4 boards on 2 nights? Have it write a program for you and see what it says for that. Or just ask it directly about this smaller case.
 
  • #71
https://chatgpt.com/c/681427fe-8c48-8001-8c03-30faef9302de

I asked it two similar questions:

"I have 4 teams, that need to play each other exactly once on 4 boards. no team can play on the same board twice. You must have at least 1 game a night. Are there any solutions?"

Conclusion: yes, there is at least one solution. Notice I didn't specify the number of days in the tournament; it finds one for a six-day tournament.

Next I asked it:

"I have 4 teams, that need to play each other exactly once on 4 boards. no team can play on the same board twice. You must have at least 1 game a night. The tournament ends after 3 nights. Are there any solutions?"

Its conclusion: no solution exists.
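For what it's worth, that "no solution exists" verdict for the three-night version can be checked by exhaustive search. A minimal brute-force sketch, assuming each team plays exactly once per night and a board hosts at most one game per night (the natural chess reading; the prompt itself leaves both open):

```python
from itertools import product

teams = range(4)
boards = range(4)

# The three nights of a 4-team round robin are forced: the six pairings
# of K4 split uniquely into three perfect matchings of two games each.
nights = [((0, 1), (2, 3)), ((0, 2), (1, 3)), ((0, 3), (1, 2))]
games = [g for night in nights for g in night]  # games 2k and 2k+1 share a night

def exists_schedule():
    """Try every assignment of a board to each of the six games (4^6 = 4096)."""
    for assignment in product(boards, repeat=len(games)):
        seen = {t: set() for t in teams}  # boards each team has already used
        ok = True
        for i, (game, b) in enumerate(zip(games, assignment)):
            # the two games of a night must be on different boards
            if i % 2 == 1 and assignment[i - 1] == b:
                ok = False
                break
            # no team may sit at the same board twice
            if any(b in seen[t] for t in game):
                ok = False
                break
            for t in game:
                seen[t].add(b)
        if ok:
            return True
    return False

print(exists_schedule())  # → False
```

Under those assumptions the brute force agrees: no valid three-night schedule exists.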
 
  • #72
I asked it whether derangements are difficult to handle. It explained that it should be able to find a solution, and that it's confident one exists, but it only found a partial solution and asked me whether I wanted it to keep trying to optimize. Have a look at what it's explaining to me: it claims the problem you somewhat cockily asked it is NP-hard. Can any mathematicians skilled in combinatorics confirm?

https://chatgpt.com/c/681427fe-8c48-8001-8c03-30faef9302de
 
  • #73
erobz said:
It's not trivial,
Mathematically, it is.

erobz said:
and it thinks like a human.
It does not.

erobz said:
It may not even be possible, what you are asking of it!
It is.

You are making a category error. This is not a closed, one (or zero) solution puzzle.

1. I have placed no constraints that might force only certain combinations, nor do I ever say anything like "every team must play on each board the same number of times" or any such thing.

Look at my request: "each team cycles through the five boards regularly". That's a trivial thing to do because it's so vague. Not putting Team 1 on Board 1 every single time would be a good start! Nowhere did I say "you can never put the same team on the same board twice in a row."


2. At the very least, if it were trying, it would not start off by utterly ignoring the request, as it does by putting Team 1 on Board 1 for all ten games!

That's not "I tried and I can't do it"; that's straight-up "I am ignoring the very thing I explicitly said I would not ignore".


3. And, finally, if you happened to be right and it "couldn't" meet that requirement, why does it keep saying, explicitly, that it did meet the requirement? That's a straight-up lie.
 
  • #74
DaveC426913 said:
3. And, finally, if you happened to be right and it "couldn't" meet that requirement, why does it keep saying, explicitly, that it did meet the requirement? That's a straight-up lie.
Lying? Sounds a bit like you are anthropomorphizing.

Anyhow, look at the link. It isn't lying to me for the problem I asked it. Maybe it has interpreted your question just as I have? If that wasn't your intention, then the same mistake was made by both of us!
 
  • #75
erobz said:
"I have 4 teams, that need to play each other exactly once on 4 boards. no team can play on the same board twice. You must have at least 1 game a night. Are there any solutions?"
Indeed. Your puzzle is qualitatively different from mine.

You have included a "must" that might contradict other combinations. Therefore it is possible to fail: zero solutions.

I have made no such "must" request.

And it didn't even bother trying.
And it did produce a solution. A wrong one.
And it claimed the solution was correct.
And it acknowledged that it's not correct, and produced another solution - the same solution.
And it explicitly claimed it had fixed the problem.
And it still hadn't.
 
  • #76
DaveC426913 said:
Indeed. Your puzzle is qualitatively different from mine.

You have included a "must" that might contradict other combinations. Therefore it is possible to fail: zero solutions.

I have made no such "must" request.

And it didn't even bother trying.
And it did produce a solution. A wrong one.
And it claimed the solution was correct.
And it acknowledged that it's not correct, and produced another solution - the same solution.
And it explicitly claimed it had fixed the problem.
And it still hadn't.
I asked it 3 different questions. The last two differ numerically by only a few times, and it is already a difficult problem. It's telling me what it has to do to find a particular solution, and that seemingly innocent jump makes the problem NP-hard.
 
  • #77
erobz said:
Lying, sounds a bit like you are anthropomorphizing?
No. I mean by definition.
It stated something to be true while in the same breath stating that same thing to be false.

erobz said:
Anyhow, look at the link. It isn't lying to me for this problem I asked it.
Your puzzle is a faulty analogy. It does not apply.

And again, even if it interpreted it however it chose to, it still "lied" about it. And kept lying about it.
 
  • #78
erobz said:
I asked it 3 different questions. The last two differ numerically by only a few times, and it is already a difficult problem. It's telling me what it has to do to find a particular solution.
Matters not. Faulty analogy.
 
  • #79
How about this:

"10 teams playing each other exactly once on 5 boards. They cannot play on the same board on consecutive nights. Th tournament last 10 days. Is there a solution?"

Conclusion, Yes.

Follow up, is it NP_Hard? Yes.
 
  • #80
I asked it your question verbatim. And it did exactly what it did to you. But then I told it that it was having real trouble meeting the constraints, and asked it to stop trying.

I instead asked it what class your problem is in; it says NP-hard.



https://chatgpt.com/c/681427fe-8c48-8001-8c03-30faef9302de
 
  • #81
Again: faulty analogy. You are forcing conditions that may or may not be fulfillable. I did not do that. And it still didn't try, and it still claimed it met all conditions.

Look, this is pointless.

I tell you my bicycle is broken and doesn't work. You pull out your own tricycle and show me it works, as if that matters. I tell you that your trike is not my bike. So you pull out a motorcycle and tell me it works. None of what you are showing me addresses my non-working bicycle.
 
  • #82
erobz said:
I asked it your question verbatim. And it did exactly what it did to you. But then I told it that it was having real trouble meeting the constraints, and asked it to stop trying.

I instead asked it what class your problem is in; it says NP-hard.

View attachment 360585

https://chatgpt.com/c/681427fe-8c48-8001-8c03-30faef9302de
Yep. It's trying to make you happy by talking to you about what you want to talk about.

You want to talk about NP-hard, and now so does it.
 
  • Like: russ_watters
  • #83
DaveC426913 said:
Again: faulty analogy. You are forcing conditions that may or may not be fulfillable. I did not do that. And it still didn't try, and it still claimed it met all conditions.

Look, this is pointless.

I tell you my bicycle is broken and doesn't work. You pull out your own tricycle and show me it works, as if that matters. I tell you that your trike is not my bike. So you pull out a motorcycle and tell me it works. None of what you are showing me addresses my non-working bicycle.
Dave. I just asked it your question verbatim.
 
  • #84
DaveC426913 said:
Yep. It's trying to make you happy by talking to you about what you want to talk about.

You want to talk about NP-hard, and now so does it.
Are you saying it isn't NP hard? Do you have proof?
 
  • #85
erobz said:
Dave. I just asked it your question verbatim.
Yes. We are posting over top of each other.
 
  • #86
erobz said:
Are you saying it isn't NP hard?
No I am not.
 
  • #87
Then there is nothing further to discuss.
 
  • #88
PeroK said:
If you cannot see the fundamental difference between a calculator and an LLM, then there is no possibility of a serious debate.
I just asked it 1 + 1 = ? and it answered 2. I can do the same with other operations and it still gives the right result; you can check it yourself.

Being able to do the same with natural language is impressive, but there's no magic.
 
  • #89
DaveC426913 said:
Nothing "occurs to it". You are anthropomorphizing.

I tried explaining this to a ton of people but I'm getting nowhere. Sorry if I'm over-agreeing with you but I just found the thread and I really feel your "pain".

On the other hand, a lot of the people I talk to and about are really helped by ChatGPT due to e.g. dyslexia or other limitations. But yeah, they tend to anthropomorphize.
 
  • Like: DaveC426913
  • #90
sbrothy said:
I tried explaining this to a ton of people but I'm getting nowhere. Sorry if I'm over-agreeing with you but I just found the thread and I really feel your "pain".
Yes, I am discovering that I have ... things to say ... about it. Not only how it's used and what people think of it, but also its impact on the planet.

I am working on an infographic of just how large a "country" AI would be if it were measured by its energy consumption and waste production.

Early estimate: imagine an entirely new island coming into being out of the middle of the Pacific - having never existed before the early 21st century - with the geographical area and population density of the state of California. That's how much energy worldwide AI uses and waste it produces.

sbrothy said:
On the other hand, a lot of the people I talk to and about are really helped by ChatGPT due to e.g. dyslexia or other limitations. But yeah, they tend to anthropomorphize.
It is very useful for very specific things.

I just had it write me a JavaScript-driven webpage from scratch** to track my kitten's current age.

** burning down only a small stand of forest and releasing a mere few dozen cubic metres of CO2 in the process


Of course, like all things to do with AI, it needs oversight and verification. AI cannot be trusted to relay facts reliably, but as long as you have a way of verifying what it has produced, it's OK.