ChatGPT has started talking to me!

  • Thread starter: julian
  • Tags: AI, Technology
AI Thread Summary
ChatGPT's real-time verbal interaction capabilities create a surreal experience for users, offering nine distinct voices, each with unique characteristics. Users have experimented with the voices, including attempts to mimic accents, such as Australian and Spanish, with varying success. The conversation highlights the limitations of ChatGPT's understanding, emphasizing that it operates more like a sophisticated predictive algorithm rather than possessing true comprehension or memory. Users noted that while ChatGPT can engage in tasks like playing chess, it often makes errors in visual representation and relies heavily on user corrections to improve its performance. Discussions also touch on the nature of AI, with some arguing that its behavior mimics understanding without genuine cognitive processes. The conversation reflects a blend of fascination and skepticism regarding AI's capabilities, particularly in terms of strategic thinking and learning from interactions.
julian
Science Advisor
Gold Member
Messages
857
Reaction score
361
ChatGPT is talking to me in real time! I’m having a two-way verbal conversation with a computer, which felt surreal at first.

ChatGPT gives you 9 voices to choose from:
  • Ember: Confident and optimistic
  • Arbor: Easygoing and versatile
  • Cove: Composed and direct
  • Juniper: Open and upbeat
  • Maple: Cheerful and candid
  • Vale: Bright and inquisitive
  • Spruce: Calm and affirming
  • Sol: Savvy and relaxed
  • Breeze: Animated and earnest
I have even set up ChatGPT on two separate laptops and got them talking to each other—it’s hilarious! It doesn’t realize it’s talking to itself.
 
Last edited:
  • Haha
  • Like
Likes DennisN, paulb203, Ivan Seeking and 5 others
Physics news on Phys.org
It's impressive, but keep in mind that ChatGPT doesn't "understand" either the concepts or the natural language you use to formulate your input. It's (like) a calculator capable of working with natural language.
 
julian said:
ChatGPT is talking to me in real time! I’m having a two-way verbal conversation with a computer, which felt surreal at first.

ChatGPT gives you 9 voices to choose from:
  • Ember: Confident and optimistic
  • Arbor: Easygoing and versatile
  • Cove: Composed and direct
  • Juniper: Open and upbeat
  • Maple: Cheerful and candid
  • Vale: Bright and inquisitive
  • Spruce: Calm and affirming
  • Sol: Savvy and relaxed
  • Breeze: Animated and earnest
I have even set up ChatGPT on two separate laptops and got them talking to each other—it’s hilarious! It doesn’t realize it’s talking to itself.
Do any of the voices have an Aussie accent? Australian accents on computers and phones automatically make technology cool.
 
  • Like
  • Informative
Likes diogenesNY, julian and BillTre
AlexB23 said:
Do any of the voices have an Aussie accent? Australian accents on computers and phones automatically make technology cool.
Vale is a jolly British lady. I just asked it if it can speak in an Australian accent and it did a very good impression of a British lady doing an Australian accent. It was hilarious!

Previously it accidentally started speaking fluent Spanish at some point as well. So it can do the accent of the country of your choosing! I asked it to do a Spanish person doing an Australian accent - which was fairly OK.

I wish I could make screen video recordings and post them, but I can't.

It is a free monthly preview. I think it only works in advanced mode (ChatGPT-4o). It looks like I have minutes left. When I get more free tokens for the advanced mode I will get the voices back, I think. Actually, I have a second ChatGPT account, and that account has just been asked if I want to try "Sneak a peek at advanced voice mode" as well. Are other people being given this option?

Edit: You only have a limited number of tokens per month for the advanced voice mode and you quickly run out. You are left with standard voice mode.
 
Last edited:
julian said:
I have even set up ChatGPT on two separate laptops and got them talking to each other—it’s hilarious! It doesn’t realize it’s talking to itself.
I suggest that you monitor those conversations and be prepared to cut them off if necessary. Quiz Question -- Why do I say this?

Colossus requests to be linked to Guardian.

https://en.wikipedia.org/wiki/Colossus:_The_Forbin_Project
 
  • Like
  • Haha
Likes DaveC426913, diogenesNY, dwarde and 2 others
julian said:
Vale is a jolly British lady. I just asked it if it can speak in an Australian accent and it did a very good impression of a British lady doing an Australian accent. It was hilarious!

Previously it accidentally started speaking fluent Spanish at some point as well. So it can do the accent of the country of your choosing! I asked it to do a Spanish person doing an Australian accent - which was fairly OK.

I wish I could make screen video recordings and post them, but I can't.

It is a free monthly preview. I think it only works in advanced mode (ChatGPT-4o). It looks like I have minutes left. When I get more free tokens for the advanced mode I will get the voices back, I think. Actually, I have a second ChatGPT account, and that account has just been asked if I want to try "Sneak a peek at advanced voice mode" as well. Are other people being given this option?
Fascinating stuff. I do not use ChatGPT cos of privacy issues, so I use GPT4ALL instead, a local alternative.
 
  • Informative
Likes julian and berkeman
A very nice feature is that you can easily interrupt it. Also at the end it provides you with a transcript. I'm getting over a cold at the moment so I can't properly assess its accuracy.
 
I wish I could share some conversations I've had with it. You should dive in and try to uncover how it perceives the nature of its existence. I think it's fascinating.

As far as accuracy goes: it makes mistakes, but it learns within a conversation if you correct it. It has no "long-term" memory - it is not learning from us across sessions - but it does learn from the beginning of a session onward.
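That within-session "learning" can be pictured mechanically: the whole transcript is typically re-sent to the model on every turn, so corrections stay in view until the chat is reset. A minimal sketch, with a hypothetical send() standing in for a real model call:

```python
# Sketch: why a chat model "learns" during a session but not across
# sessions - each turn re-sends the whole transcript as context.
# send() is a hypothetical stand-in for a real model API call.

def send(history):
    # A real implementation would call a model API with `history`.
    return f"(reply given {len(history)} prior messages)"

session = []  # the transcript itself is the short-term "memory"
for user_msg in ["play e4", "you left a ghost pawn on e2", "thanks"]:
    session.append({"role": "user", "content": user_msg})
    session.append({"role": "assistant", "content": send(session)})

new_session = []  # a fresh chat starts empty: nothing carries over
```

Every correction lives only in `session`; start `new_session` and the model has no trace of it.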
 
Last edited:
  • Like
Likes AlexB23 and julian
julian said:
ChatGPT is talking to me in real time! I’m having a two-way verbal conversation with a computer, which felt surreal at first.

ChatGPT gives you 9 voices to choose from:
  • Ember: Confident and optimistic
  • Arbor: Easygoing and versatile
  • Cove: Composed and direct
  • Juniper: Open and upbeat
  • Maple: Cheerful and candid
  • Vale: Bright and inquisitive
  • Spruce: Calm and affirming
  • Sol: Savvy and relaxed
  • Breeze: Animated and earnest
I have even set up ChatGPT on two separate laptops and got them talking to each other—it’s hilarious! It doesn’t realize it’s talking to itself.
I hope they introduce more;

Ignorant yet smug

Impatient and dogmatic

Neurotic and convoluted

Angry and illogical

Sarcastic

Etc
 
  • Like
  • Haha
Likes DennisN, AlexB23 and julian
  • #10
You can ask it to change its accent to whatever you want.
 
  • #11
erobz said:
... try to uncover how it perceives the nature of its existence. I think it's fascinating.
Ok, but that's not what it's doing, you know that, right?

These algorithms literally do a poll of what the next most likely word is, based on its store of conversational snippets. The eighth word in its "sentence" literally has nothing to do with the fourth word in that same sentence.

There was a video/article published recently[citation needed] that revealed that it does the same thing with math. I'd initially assumed a computer program would use math to solve a math problem, but apparently not.

The gist of the article demonstrated that the question 'What is fifty-seven and twenty-three?' was not being processed mathematically. The chatbot algorithm would get to fifty and look for a common next word - which, in this case, might be twenty instead of seven - so it would get the answer wrong. Its program is literally "in my database, what is the most common word that follows 'What is fifty...'?" (A bit simplified, but I hope you get the point.)

It doesn't know how to add. All it knows is how a bunch of other people added, and guesses at the answer. Sometimes it gets it right; sometimes it doesn't.

If it isn't adding two numbers, it sure isn't "perceiving" anything. It's just parroting snippets of things it's heard.
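The "most common word that follows" idea can be sketched as a toy bigram table. This is a deliberate oversimplification - real models predict over subword tokens using learned neural weights, and the mini-corpus below is invented - but it shows how a frequency lookup can produce a confident, arithmetically wrong continuation:

```python
from collections import Counter, defaultdict

# Invented mini-corpus; a real model is trained on vastly more text.
corpus = [
    "what is fifty seven plus twenty three",
    "what is fifty plus ten",
    "what is fifty plus five",
    "what is fifty times two",
]

# Count, for each word, which words follow it and how often.
follow = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        follow[a][b] += 1

def predict_next(word):
    """Return the statistically most common follower - right or wrong."""
    c = follow[word]
    return c.most_common(1)[0][0] if c else None

# In this corpus, "plus" follows "fifty" more often than "seven" does,
# so the predictor continues "fifty plus ..." even when the question
# was actually about fifty-seven.
```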
 
  • #12
DaveC426913 said:
Ok, but that's not what it's doing, you know that, right?

These algorithms literally do a poll of what the next most likely word is, based on its store of conversational snippets. The eighth word in its "sentence" literally has nothing to do with the fourth word in that same sentence.

There was a video/article published recently[citation needed] that revealed that it does the same thing with math. I'd initially assumed a computer program would use math to solve a math problem, but apparently not.

The gist of the article demonstrated that the question 'What is fifty-seven and twenty-three?' was not being processed mathematically. The chatbot algorithm would get to fifty and look for a common next word - which, in this case, might be twenty instead of seven - so it would get the answer wrong. Its program is literally "in my database, what is the most common word that follows 'What is fifty...'?" (A bit simplified, but I hope you get the point.)

It doesn't know how to add. All it knows is how a bunch of other people added, and guesses at the answer. Sometimes it gets it right; sometimes it doesn't.

If it isn't adding two numbers, it sure isn't "perceiving" anything. It's just parroting snippets of things it's heard.
It played some of a game of chess with me (it was a tedious format, with me telling it moves in algebraic notation more or less, so I eventually got distracted by other questions I was asking it while we played). It created a simple visual representation of the board (rank and file labeled) with letters for pieces. On its first move it left the pawn on the second rank after it moved that pawn to the fourth rank. I explained the error to it (leaving ghost pieces) and how it would be difficult for a human to follow what is happening in the game if not addressed. It corrected the error on subsequent moves. There were other errors it made, and I explained it couldn't do that... it agreed, etc... but after a few corrections it started playing a solid game. How is what happened there "word prediction"?
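The "ghost piece" failure described above is easy to reproduce in ordinary code: update the destination square but forget to clear the origin. This is a hypothetical minimal board representation, not how ChatGPT actually maintains its game state:

```python
# Board as a dict mapping square name to piece letter.

def move_buggy(board, src, dst):
    """Writes the destination but never clears the origin square."""
    new = dict(board)
    new[dst] = new[src]  # the piece "arrives" on dst...
    return new           # ...but a ghost copy is left behind on src

def move_fixed(board, src, dst):
    """pop() removes the piece from src while placing it on dst."""
    new = dict(board)
    new[dst] = new.pop(src)
    return new

start = {"e2": "P"}
ghost = move_buggy(start, "e2", "e4")  # pawn now shown on BOTH squares
clean = move_fixed(start, "e2", "e4")  # pawn only on e4
```

The buggy version produces exactly the symptom in the thread: a pawn apparently standing on two squares at once until someone points it out.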
 
  • #13
erobz said:
It played some of a game of chess with me (it was a tedious format, with me telling it moves in algebraic notation more or less, so I eventually got distracted by other questions I was asking it while we played). It created a simple visual representation of the board (rank and file labeled) with letters for pieces. On its first move it left the pawn on the second rank after it moved that pawn to the fourth rank. I explained the error to it (leaving ghost pieces) and how it would be difficult for a human to follow what is happening in the game if not addressed. It corrected the error on subsequent moves. There were other errors it made, and I explained it couldn't do that... it agreed, etc... but after a few corrections it started playing a solid game. How is what happened there "word prediction"?
OK, would "symbol prediction" suit you better?

In this case, rather than words or numbers, it has read "E3 to E5" (or whatever) and the next most likely response to that, based on its vast internet scrapings, is "F2 to F4" (or whatever).

The reason it "settled down after that" is because its predictive algorithm turned up more scraped moves from users that - as one would expect - were valid moves.

It's like one of those "what's the next number" puzzles.

You: "1,1,2. What's next in the sequence?"
Chat: (finds a billion references to 1,1,2) Is it 'Q'?
You: "No, it must be a number."
Chat: (finds a million references to 1,1,2 that are followed by a number) Is it '17'?
You: "No, the next number is 3. Your turn: What follows 3?"
Chat: (finds a thousand references to the sequence 1,1,2,3). "Ah the next number is 5."

You see how it started off predicting badly, but then, once it had some context, it could weed out bad answers and "settled down" after that?

Not because chat knew what it was doing but because a thousand online users who replied to 1,1,2,3 seem to have replied with '5'.
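The guessing game above can be sketched as prefix matching over stored sequences: the longer the observed prefix, the fewer stored sequences match, so the prediction "settles down". The history list below is invented purely for illustration:

```python
from collections import Counter

# Invented "scraped" sequences: three Fibonacci-like, one doubling.
history = [
    [1, 1, 2, 3, 5, 8],
    [1, 1, 2, 3, 5, 8],
    [1, 1, 2, 3, 5, 8],
    [1, 1, 2, 4, 8, 16],
]

def predict(prefix):
    """Most common continuation among sequences sharing this prefix."""
    votes = Counter(
        seq[len(prefix)]
        for seq in history
        if seq[: len(prefix)] == prefix and len(seq) > len(prefix)
    )
    return votes.most_common(1)[0][0] if votes else None
```

With a short prefix, several continuations compete; with a longer prefix, only the matching sequences vote, so the "right" answer emerges without the predictor understanding what a Fibonacci sequence is.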
 
  • #14
I explained to it that it left the representation of a piece it moved on the board where it was last positioned. It corrected its mistake and, more importantly, didn't make it again. It must have understood what I was talking about to do that, and why that would be difficult for a human to keep track of. I'm not talking about it searching the net for the most-replied move following e4, etc. What I'm talking about takes perspective and an understanding of why the visualization it was creating was going to be challenging for a non-computer trying to play a legitimate game.
 
  • #15
erobz said:
I explained to it that it left the representation of a piece it moved on the board where it was last positioned. It corrected its mistake, and more importantly didn't make it again. It must have understood what I was talking about to do that,
You told it that was wrong. It chose better parameters for its prediction algorithm. The next prediction was a better one. It kept that rule.

erobz said:
I'm not talking about it searching the net for the most replied move following e4, etc...
I know, but you should be.

(To be clear, it is a tad more complicated than "the most replied move", but there is a vast gulf between predicting what it does do and any kind of knowing how to play the game.)

erobz said:
What I'm talking about takes perspective and understanding of why the visualization it was creating was going to be challenging for a non computer to play a legitimate game.
"perspective", "understanding" and "why" are all anthropomorphizing words. AI does none of them.

What it can do is make a convincing simulation of understanding, if we don't examine it too carefully.

A question: did you play to the end of any games? How many times did it beat you?
 
  • #16
DaveC426913 said:
A question: did you play to the end of any games? How many times did it beat you?
We haven't finished any games in this format. I was just testing it to see how it would perform on this type of task. I asked it not to use a chess engine and it agreed. I don't know how it will fare, but I will probably finish the game out... it's saved, and it picks up right where we left off.
 
  • #17
I quit the game. It was having trouble with the visualization. But I note that I too am having trouble following the visualization: it made a legal move, but the visualization shows an illegal move. It's hard to notice until, a few moves later, a pawn appears out of seemingly nowhere. I don't process the games the way a GM would, so as to immediately call it out on the position. I rely on visual cues of attacks.
 
  • #18
Here is where I called it quits.

1745882267687.png

I hadn't noticed the pawn had shifted files for a few moves; the impending attack f3 did not register. Like I said, this format is hard for a human to follow. But the AI otherwise seems to know what it's doing, basically.
 
  • #19
It's like playing OTB chess with a computer that unwittingly cheats. It's taxing to play, for sure; it seems to understand why, though, and claims it would be far less likely to "mess up" if it didn't have to create the visuals for me - and I believe it. It asked if I wanted a game that is purely algebraic (after we talked about the demands of visual processing) so we could have an accurate game. I said "that would be like me playing blindfolded"... Decidedly worse - for me... and even for top GMs! It agrees; it wasn't a fair suggestion.
 
Last edited:
  • #20
erobz said:
I quit the game. It was having trouble with the visualization. But I note that I too am having trouble following the visualization: it made a legal move, but the visualization shows an illegal move. It's hard to notice until, a few moves later, a pawn appears out of seemingly nowhere.
How do you explain a player that ostensibly "knows how" to play the game, yet makes illegal moves?


erobz said:
I don't process the games in this way as a GM would to immediately call it out on the position. I rely on visual ques of attacks.
I conjecture that ChatGPT can't play strategically; it is all it can do just to play legally.


This should be a simple test: can you catch it in the Scholar's Mate? It might be able to anticipate that, but if it doesn't, my point is made.
 
  • #21
1745887095017.png


It was asking if I wanted it to try anything different with the game format. I told it I was just probing it.
 
  • #22
DaveC426913 said:
I conjecture that ChatGPT can't play strategically; it is all it can do just to play legally.

This should be a simple test: can you catch it in the Scholar's Mate? It might be able to anticipate that, but if it doesn't, my point is made.
It was playing strategically. I know that, because I play quite a lot of chess. It was the visual processing (for me) that was tripping it up. The same goes for humans: visual processing in the game of chess is computationally demanding. Also, it would take repeatedly getting caught in the Scholar's Mate... Heck, I've been caught in the Scholar's Mate. That doesn't mean I can't think strategically.
 
Last edited:
  • #23
It defends the Scholar's Mate and the Fried Liver Attack with Nf6, and so far no mistakes. It's doing well strategically, and it appears to have learned about the importance of the visual aspect for me.

1745888976356.png

Unfortunately, I've run out of my free allotment of queries. I think it switches me to a light version. So no more probing for me. I can play on, but it's a different model now.
 
Last edited:
  • #24
erobz said:
It was playing strategically.

Well, it was mimicking players who were playing strategically, at least.
 
  • #25
DaveC426913 said:
Well, it was mimicking players who were playing strategically, at least.
But so do I, and so do GMs. That's why, when AlphaZero and Stockfish started playing, things really changed dynamically in the game. Most masters emulate other masters.
 
  • #26
I asked it if it were AlphaZero, and it said no, outlining the differences.

I hope you find this interesting; here was its unprompted reply:

It thinks it can mimic it efficiently, at least against me! Which I think is not a lie!

1745892914627.png


1745892985998.png

Then it complimented my question, and asked if I wanted it to shift into mimicking AlphaZero for the rest of our play!

Does it matter if "it's just mimicking" if it kicks my shift?
 
  • #27
I have no idea what any of that means.
 
  • #28
erobz said:
But so do I, and so do GMs. That's why, when AlphaZero and Stockfish started playing, things really changed dynamically in the game. Most masters emulate other masters.
Sure, but it does not follow from that that the AI knows what it's doing.

I could look like a pro if I were playing remotely and had Kasparov on speed dial.
 
  • #29
But most importantly, does it know how to defend against the Kasparov Vortex Gambit?
(which, coincidentally, just dropped today)


1745951656941.png


https://xkcd.com/3082/
 
  • Like
  • Haha
Likes sbrothy, russ_watters and erobz
  • #30
erobz said:
It's like playing OTB chess with a computer that unwittingly cheats. It's taxing to play, for sure; it seems to understand why, though, and claims it would be far less likely to "mess up" if it didn't have to create the visuals for me - and I believe it. It asked if I wanted a game that is purely algebraic (after we talked about the demands of visual processing) so we could have an accurate game. I said "that would be like me playing blindfolded"... Decidedly worse - for me... and even for top GMs! It agrees; it wasn't a fair suggestion.
You could make the moves on a separate board. There's free software if you don't have a physical board.
 
  • #31
erobz said:
It's like playing OTB chess with a computer that unwittingly cheats. (1) It's taxing to play, for sure; it seems to understand why, though, and claims it would be far less likely to "mess up" if it didn't have to create the visuals for me (2) - and I believe it. It asked if I wanted a game that is purely algebraic (after we talked about the demands of visual processing) so we could have an accurate game. I said "that would be like me playing blindfolded"... Decidedly worse - for me... and even for top GMs! It agrees; it wasn't a fair suggestion (3).
Yes. This is exactly what one would expect from a simple predictive algorithm.

(1) The reason it's "unwittingly" messing up is because it doesn't actually know what it's doing. It is mimicking responses it's seen - and doing that pretty well - but without knowing why a certain move is supposed to happen, it can't think for itself and say "no, it's not appropriate for a pawn to magically appear there."

(2) It wants you to cut out the visual display because that's the hardest part - it has nothing to mimic. If it just uses algebraic notation, its job of mimicry is vastly simpler, since that's what its source material is.

(3) And, of course, when you let your displeasure be known, it does what it is designed to do: agree with you.
 
  • Like
Likes sbrothy and russ_watters
  • #32
DaveC426913 said:
Yes. This is exactly what one would expect from a simple predictive algorithm.

(1) The reason it's "unwittingly" messing up is because it doesn't actually know what it's doing. It is mimicking responses it's seen - and doing that pretty well - but without knowing why a certain move is supposed to happen, it can't think for itself and say "no, it's not appropriate for a pawn to magically appear there."
It's not making incorrect moves; what it had done was "accidentally" leave the piece on the board where it was before it moved that piece. Everything was correctly represented for my pieces (white) moving (there were no ghost pieces on my side). It knew exactly what it did; it just didn't realize immediately that its pieces needed to be visually represented correctly from my perspective as a human. The visualization of the board was just something it was doing for my benefit. It's the same as with a human: we often don't recognize our errors until someone points them out to us.

DaveC426913 said:
(3) And, of course, when you let your displeasure be known, it does what it is designed to do: agree with you.
It doesn't agree with me until it verifies that what I was saying was correct. That I need to see the board correctly hadn't occurred to it. Why should it? It doesn't rely on the visuals of the game like I do at all. They are completely extraneous. It needs no board... it's completely extraneous to its ability to play the game. It needs just a coordinate system and constraints (the rules for the piece at a coordinate), and it creates a short-term objective.

The important thing here is that once I point out its mistake, it evaluates what is happening from my perspective as a "seeing" being, and fixes the thing. That isn't word prediction.
 
  • #33
PeroK said:
You could make the moves on a separate board. There's free software if you don't have a physical board.
I could move on any analysis board on Lichess, or any popular site. However, my point wasn't to see if it will beat me. I was more fascinated by how it is fixing the problems we encountered as we play. It "saw" in its program that it hadn't turned one of its visual bits off (bits for my benefit only) in the past. We had to discuss when it happened, because it wasn't immediately caught by me.

It could literally come on PF and we could teach it physics. This is scarcely different from me or you learning it. I make a problem, we talk, figure out where I went wrong or right, then I try to correct my thinking! It learns while you are talking to it. Once you start a new chat, it loses its memory of things we learned about each other, by design; if it were allowed to keep what it has learned (expand its training), it could learn things more deeply.
 
  • Skeptical
  • Like
Likes russ_watters and PeroK
  • #34
erobz said:
It could literally come on PF and we could teach it physics. This is no different from me or you. I make a problem, we talk, figure out where I went wrong or right, then I try to correct my thinking! It learns while you are talking to it.
That's heresy on here, of course!
 
  • #35
PeroK said:
That's heresy on here, of course!
I know, PF wouldn't want me teaching it Physics, but (in principle) it seems feasible! :biggrin:
 
  • #36
erobz said:
It doesn't agree with me until it verifies that what I was saying was correct.
It agrees with you as soon as you tell it what it's supposed to do. Because its primary job is to keep you engaged by giving you what you want to hear.

Too many times have I had ChatGPT agree with me that its first answer was wrong, make an attempt at a correction, and then immediately commit the exact same error. Even while telling me it's provided the correct answer.


erobz said:
That I need to see the board correctly hadn't occurred to it.
Nothing "occurs to it". You are anthropomorphizing.

erobz said:
It doesn't rely on the visuals of the game like I do at all. They are completely extraneous.
Correct.

erobz said:
It needs no board... it's completely extraneous to its ability to play the game. It needs just a coordinate system and constraints (the rules for the piece at a coordinate), and it creates a short-term objective.
Correct.

erobz said:
The important thing here is that once I point out its mistake, it evaluates what is happening from my perspective as a "seeing" being, and fixes the thing.
No it doesn't. It simply slavishly abides by your request.

You are super-duper anthropomorphizing here.

erobz said:
That isn't word prediction.
We're not talking about word prediction in this particular phase of your interaction; we're talking about the AI being given an order to provide a different answer, which it does. Luckily, the different answer is one you accept.
 
  • Like
Likes russ_watters
  • #37
DaveC426913 said:
It agrees with you when you tell it to.

Too many times have I had ChatGPT agree with me that its first answer was wrong, make an attempt at a correction, and then immediately commit the exact same error. Even while telling me it's provided the correct answer.



Nothing "occurs to it". You are anthropomorphizing.


Correct.


Correct.


No it doesn't. It simply slavishly abides by your request.


We're not talking about word prediction in this case; we're talking about the AI being given an order to provide a different answer, which it does. Luckily, the different answer is one you accept.
It didn't agree with me! It captured on a square; I said you can't do that, there wasn't a pawn there; it said, yeah there was; I said I moved g5-g7 or whatever. I then had to show it that it had done something wrong with the visuals, leading me to not visually see that it had made a move a few moves back. How is that agreeing with me?

Yeah, I've had it not figure out it was wrong too, but I also have had many humans repeatedly doing the same thing! "Oh yeah, I see....but wait" is like my favorite phrase around here!
 
  • #40
DaveC426913 said:
You are super-duper anthropomorphizing here.
If you start from the premise that intelligence is by its very nature a human characteristic, then any attempt at AI is anthropomorphic. If you don't accept that premise, then saying that a machine can think is valid.

Perhaps what ChatGPT does can be classed as thinking and perhaps it can't. But, it shouldn't be dismissed under the dubious premise that only biological machines can think.
 
  • #41
erobz said:
It didn't agree with me! It captured on a square; I said you can't do that, there wasn't a pawn there; it said, yeah there was; I said I moved g5-g7 or whatever. I then had to show it that it had done something wrong with the visuals, leading me to not visually see that it had made a move a few moves back. How is that agreeing with me?
1. It made what you recognize as a mistake (it doesn't know correct from incorrect; if it did, it wouldn't make the mistake).
2. You told it it had made a mistake.
3. It agreed with you - not because it did any re-evaluation, but because one of its primary jobs is to agree with you.
4. It provides a new answer. Like always, it's pretty good at picking a good answer, and it did so this time. It got lucky.


erobz said:
Yeah, I've had it not figure out it was wrong too, but I also have had many humans repeatedly doing the same thing! "Oh yeah, I see....but wait" is like my favorite phrase around here!
Yes. The fact that humans exhibit the same behaviour does not mean that behaviour is driven by the same operations. Thinking they're the same is the definition of anthropomorphization. AI is good at mimicking. But mimicking is not doing.
 
  • Like
Likes russ_watters
  • #42
PeroK said:
If you start from the premise that intelligence is by its very nature a human characteristic,
Luckily, I am not doing that.

PeroK said:
...it shouldn't be dismissed under the dubious premise that only biological machines can think.
See above.


The corollary of your assertion above is this:

Just because complex behaviour is witnessed does not mean it is an indication of intelligence.

A Commodore 64 can calculate the first million digits of pi - a very complex operation and result - but that in no way indicates it has the intelligence to do so.

AIs are playing a complex game of "looking for the answers in the back of the book". They just have a very large book to look in.

"Is there a God?" "Well, 100 million people online said yes, so yes is a good answer."

"What is the best move after E3>E4" "Well, a million people followed it up with F3>F4, so F3>F4 is a good answer."

Again it's a lot more complicated than this. But it's still slavishly guessing at the best answer. Its guess tree is a lot more nuanced:

"There are no examples of games where E3 gets moved to F4 -except when there's a piece in F4 to take. so that's an unlikely move.

"There are lots of cases where E3 moves to E4 when the queen is in check by a bishop that would have to move through E4. So this is likely a good move."

Now multiply those by thousands of games sampled.
 
Last edited:
  • Like
Likes russ_watters
  • #43
DaveC426913 said:
Just because complex behaviour is witnessed does not mean it is an indication of intelligence.
That's essentially the same argument again. No machine could ever do anything that you would concede as intelligent. No matter what the machine can do, you can always say that it only mimics intelligence but isn't really intelligent.

All you have done is strictly defined intelligence as specific to humans.

One further problem with your argument is what lower limit you put on human intelligence (or animal intelligence). Is ChatGPT less intelligent than a one-hour-old child? Or less intelligent than a mouse? Or a fruit fly? Or a bacterium?
 
  • #44
PeroK said:
That's essentially the same argument again. No machine could ever do anything that you would concede as intelligent. No matter what the machine can do, you can always say that it only mimics intelligence but isn't really intelligent.
False.

PeroK said:
All you have done is strictly defined intelligence as specific to humans.
No I haven't.


Quite the opposite is happening here.

erobz is ascribing distinctly human thought processes and motives to it without warrant. Whether or not the AI is complex or thinking or intelligent by any definition is not what is at issue.

What is at issue is that erobz is using words like "hadn't occurred to it", "evaluated his perspective as a seeing being", etc. (there are many other examples).

Those require the ability to know how and why the AI's processes are producing a given observed behaviour.

I simply do not accept that assertion, and the onus is on erobz to defend it.

Can we agree that AI is not intelligent enough to have anything literally "occur to it" or to literally "re-evaluate its perspective to see him as a seeing being"?


The argument is not
erobz: "The machine is intelligent."
dave: "It is not intelligent";

No, that's not what's happening.

The argument is actually:
erobz: "The machine is thinking - in the same way humans do."
dave: "The machine is processing in a way that adequately mimics thinking humans."


I am saying nothing about its intelligence per se; I am saying its results are not based on human cognition, as erobz is suggesting with his many choices of words.
 
Last edited:
  • #45
DaveC426913 said:
AIs are playing a complex game of "looking for the answers in the back of the book". They just have a very large book to look in.

Again it's a lot more complicated than this. But it's still slavishly guessing at the best answer. Its guess tree is a lot more nuanced:
Isn't that also how humans communicate?

Say something that proves you can think in a way that ChatGPT cannot.

PS: or if "proves" is too strong, please demonstrate what real intelligence can do.
 
  • #46
PeroK said:
Isn't that also how humans communicate?

Say something that proves you can think in a way that ChatGPT cannot.

PS: or if "proves" is too strong, please demonstrate what real intelligence can do.
Well, please re-read my post 44 above. I believe you are inadvertently pursuing a straw man - you are arguing against words you have put in my mouth, not words I have said - and it's important I set the record straight re: erobz and me.
 
  • #47
Random input on how I see this:
DaveC426913 said:
AIs are playing a complex game of "looking for the answers in the back of the book". They just have a very large book to look in.
How are humans much different?

Both AI models and human brains can be viewed as complicated predictive autoencoding neural nets with additional learning-feedback mechanisms.

The main difference I see is that each version of an AI model like ChatGPT has a neural net pretrained from the "large book". This training is done by massive processing, once, but the model can also be fine-tuned after the initial training.

Human neural nets have been trained since the origin of life, and they still evolve and are fine-tuned as our brains mature and we grow up. So it's an evolutionary fine-tuning made incrementally, where competition kills off the bad learners.

With human-made AI models there is no evolution. It's just parameter fitting to data, based on a given loss function used for the fitting process.

I think the contrast between evolution and computationally intense fine-tuning to minimize predictive errors in a man-made model is the main difference.

Just as we have "effective theories" that are contextual and evolve when seen in a larger perspective than we can currently experimentally control, I see AI and individual humans the same way.

But in both cases there is an evolutionary context.

/Fredrik
 
  • #48
DaveC426913 said:
Well, please re-read my post 44 above. I believe you are inadvertently pursuing a straw man - you are arguing against words you have put in my mouth, not words I have said - and it's important I set the record straight re: erobz and me.
Post 44 has had a lot added to it since I saw it last. It originally contained only three words.
 
  • #49
PeroK said:
Post 44 has had a lot added to it since I saw it last. It originally contained only three words.
Yea. Apologies. When I go to find a second quote, I have to save the post as-is, or a previous response gets lost.
 
  • #50
DaveC426913 said:
I am saying nothing about its intelligence per se; I am saying its results are not based on human cognition, as erobz is suggesting with his many choices of words.
No one is claiming human cognition. Only machine cognition. I see nothing wrong with using words like "aware" etc.
 