Is A.I. more than the sum of its parts?

  • Thread starter: webplodder
  • #181
Dale said:
She would have failed the rouge test.
I'm not sure of that. Non-human animals that cannot speak have passed the test. So I don't think Helen Keller not being able to "describe adequately" in words what it was like before she learned language is sufficient to establish that she would not have passed such a test, if it could have been given to her during that time.
 
  • Likes: russ_watters and BillTre
  • #182
Dale said:
Don't you want the support? You are clearly right that she had no self-awareness. The test clearly supports your position.

I asked you for clarification because, as I read @gleem's post, I infer that he believes she would fail the rouge test in the deepest sense - if it could be applied to her. I didn't know if you also felt that way or if you were making an HK joke. In good faith, I just wanted to understand your post better. I'm not above an HK joke myself; I'm not posturing as holier-than-thou with my question to you.

I guess, based on your response to me, you did mean it as a joke - ok, understood.

I haven't stated my own view on whether HK had self-awareness prior to her a-ha moment. My position (not very well thought out and maybe not defensible) is that she lacked sufficient context to have self-awareness as any of us might try to define it. Is an earthworm self-aware when it struggles to return to my lawn after being stranded on the sidewalk? IMO, no, not really. I'm sure she had whatever awareness her sense of touch afforded her, but no way to abstract that into a sense of self versus others, because she didn't know that any others with experiences similar to hers existed.
 
  • #183
Grinkle said:
I asked you for clarification because, as I read @gleem's post, I infer that he believes she would fail the rouge test in the deepest sense - if it could be applied to her.
Oops, sorry, I thought you were @gleem

I was just being a troublemaker. I don’t think Helen Keller’s mental state is at all relevant to the important questions about AI today, like the actual risks that they currently pose and their typical malfunctions.
 
  • Likes: javisot and russ_watters
  • #184
Dale said:
the actual risks that they currently pose and their typical malfunctions.

Fair enough - you've been asking for discussion around this, and no one has so far engaged directly on that topic. Maybe it's off-topic for this thread; if the OP complains, I'll open a separate thread.

I don't think you are talking about the dangers of humans using AI that is working as intended to do bad things - deepfakes, for instance? That is dangerous and bad, but it's still people deciding to do bad things and then doing them - IMO that scenario is more like a discussion around gun control.

IMO, the greater risk is not malfunction but that it works so well at pretending to do critical thinking (or actually doing it; either way I have the same concern) that it trains us humans to depend on it for our critical thinking. I am thinking of LLMs when I say this, but that line of concern probably applies to AI other than LLMs as well - your example of equity-trading software, for instance.

I am concerned that in 40 or 60 years we will be a society that depends on AI and, by and large, no longer really understands how the AI ecosystem even works. I am envisioning a global system that has no top-down design but has evolved bit by bit, system by system, until understanding how it works at a high level requires reverse engineering that very few people 2-3 generations hence will have the energy to bother with. We will become a slothful, stagnant society - not because AI wants us that way, if indeed it evolves into something with wants, but because that is where we will allow ourselves to sink. AI will write our books (if anyone still reads books), create our movies and our games, and manage our social lives and our careers. We needn't speculate about whether AI can do physics research, because no one will care about fundamental physics - a few people may still care about technology, but I suspect not many.

I'm not sure I've given you anything worth responding to. I would comment on current risks and malfunctions, but I don't know enough about AI applications to say anything meaningful there. I'm aware that ChatGPT will occasionally tell me things that are clearly wrong - but since I'm typically doing things like asking for orderable part numbers for a mounting bracket for my car, the errors have been fairly benign to date.
 
